A 'silly' Docker Gunicorn-Flask API

*This post contains references to Monty Python quotes!


A few days ago, I received a newsletter email with a step-by-step guide to deploying a lightweight Ruby-based web framework on Docker. It gave me the idea to try the same thing with a more familiar tool, the good old Python. Also, instead of just deploying the app to Docker, I'll deploy it in a container on some cloud services (which I'll share in a later post).

The idea for the API

Many tutorials about deploying web frameworks in containers show how to deploy the web equivalent of a 'hello world'. I thought that this time I should try something completely different*.

After looking a while for ideas to develop an API, I found a precious little dataset on Kaggle: The Monty Python Flying Circus dialogue dataset! I have found my Holy Grail*!

I decided to code an API with two endpoints: one that returns a random dialogue from the original show, and another that tries to create new material based on the original dialogues, using Markov chains.
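The project uses the markovify library for the generation part (shown later), but the underlying idea is simple: map each word to the words observed to follow it in the corpus, then walk that map picking random successors. A toy pure-Python sketch of the idea (not the markovify code itself):

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=8, seed=None):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: no word ever followed this one
            break
        word = rng.choice(successors)
        output.append(word)
    return ' '.join(output)

corpus = "I cut down trees I eat my lunch I go shopping"
chain = build_chain(corpus)
print(generate(chain, "I", seed=1))
```

markovify does essentially this with n-word states, sentence boundaries, and quality checks on the generated sentences.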

The first endpoint was straightforward: just use SQLAlchemy to fetch the data from the db file and return the JSON to the request. Cool!
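The real resource goes through the SQLAlchemy models shown later, but conceptually it boils down to picking a random row and serializing it. A stdlib-only sqlite3 sketch of that idea (the table and column names here are illustrative, not the actual schema):

```python
import sqlite3

def random_sketch(db_path='db/data2.db'):
    """Fetch one random row from the sketches table as a JSON-able dict.

    Table and column names are illustrative; the real app uses
    SQLAlchemy models instead of raw SQL.
    """
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows become addressable by column name
    row = conn.execute(
        'SELECT * FROM sketches ORDER BY RANDOM() LIMIT 1'
    ).fetchone()
    conn.close()
    return dict(row)
```

Flask-RESTful then only has to return that dict from a resource's `get` method and it is serialized to JSON automatically.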

The second part, generating a new dialogue, was a bit tricky, because even by Monty Python's standards the output was pretty crazy random! Keep in mind that the prediction model uses texts from dialogues with different numbers of people, and Monty Python's dialogues are about as nonsensical as it gets. Thus I decided that I needed to adopt, adapt, and improve (motto of the round table)*.

I downloaded the US president speeches corpus and mixed a random speech with a random Monty Python dialogue! Now the material was pretty close to what you would hear from a Monty Python’s mock political speech (or from some deranged politician).

# Initial setup

The very first step in any Python project should be creating your Python env. Different projects use different libraries, and it is a good idea, in terms of cost and security, to keep your containers with as few dependencies as possible.

$ python -m venv flask_env
$ source flask_env/bin/activate

The libraries I used:

- Flask 1.1.2: lightweight WSGI web application framework
- Flask-RESTful 0.3.8: Flask extension that adds support for quickly building REST APIs
- Flask-SQLAlchemy 2.4.4: extension for Flask that adds support for SQLAlchemy, aiming to simplify its use with Flask by providing useful defaults and extra helpers
- gunicorn 20.0.4: a Python Web Server Gateway Interface (WSGI) HTTP server, ported from Ruby's Unicorn project
- markovify 0.8.2: simple, extensible Markov chain generator; its primary use is building Markov models of large text corpora and generating random sentences

The folder structure is:

├── Dockerfile
├── README.md
├── app.py
├── db
│   └── data2.db
├── db.py
├── models
│   ├── sketches.py
│   └── speeches.py
├── requirements.txt
├── resources
│   ├── default_resource.py
│   ├── new_data.py
│   └── original_data.py
└── wsgi.py

To generate the requirements for your container you can just run:

$ pip freeze > requirements.txt
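With the versions listed above, the generated requirements.txt should look roughly like this (your environment may pin additional transitive dependencies):

```text
Flask==1.1.2
Flask-RESTful==0.3.8
Flask-SQLAlchemy==2.4.4
gunicorn==20.0.4
markovify==0.8.2
```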

The complete code is available at the repo, but I'll go quickly over the main files:



from flask import Flask
from flask_restful import Api 
from resources.original_data import Data
from resources.new_data import NewData
from resources.default_resource import Default
from db import db


app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///db/data2.db'
api = Api(app)

@app.before_first_request
def create_tables():
    db.create_all()


api.add_resource(Default, '/')  # root path assumed for the Default resource
api.add_resource(Data, '/get/original')
api.add_resource(NewData, '/get/new_material')

db.init_app(app)

if __name__ == "__main__":
    app.run(debug=True)

Part 1

We import the flask and flask_restful libraries, and the resources the endpoints will direct the requests to.

Part 2

We then add the boilerplate code to pass the configs to Flask and the DB file to SQLAlchemy.

Part 3

Now we map the endpoints to their respective resources and run the app.


This is the file that generates pythonic political speeches:


from flask_restful import Resource
from models.sketches import SketchModel 
from models.speeches import SpeechModel
import random
import markovify

class NewData(Resource):

    def get(self):
        text = ''
        dialogue_size = 0
        while True:
            index = random.randint(1, 45)
            sketch = SketchModel.find_by_index(index)
            text = ' '.join(sketch['dialogue'])
            dialogue_size = len(sketch['dialogue'])
            if dialogue_size > 5:
                break
        speech = SpeechModel.find_by_index(index)
        text += speech['body']
        mk = markovify.Text(text)
        mk = mk.compile()
        result = '...'

        for line in range(random.randint(10, 30)):
            sentence = mk.make_sentence()
            if sentence:
                result += ' ' + sentence
        return {'author': speech['author'],
                'date': speech['date'],
                'title': speech['title'],
                'episode': sketch['sketch'],
                'new_speech': result}

Part 1

We import the Resource from flask_restful, random, markovify and the models that abstract the access to the DB.

Part 2

We randomly choose a dialogue with more than five exchanges of lines.

Part 3

We randomly choose a speech from the 1,000+ speeches available and concatenate it to the previously chosen dialogue.

Part 4

We train the Markov model with the text, and then generate 10 to 30 new sentences based on the trained model.

Part 5

We return the JSON with some of the data used to train the model and the resulting speech!


This is boilerplate code to access the data from the db file. We need to specify the table name, the columns, and a function to fetch the data by some specified criterion; in this case, the index of the speech.

from db import db
import random

class SpeechModel(db.Model):

    __tablename__ = 'speeches'

    index = db.Column(db.Integer, primary_key = True)
    author = db.Column(db.String(50))
    body = db.Column(db.String(200000))
    date = db.Column(db.String(50))
    title = db.Column(db.String(100))

    def __init__(self, author, body, date, title):
        self.author = author
        self.body = body
        self.date = date
        self.title = title

    @classmethod
    def find_by_index(cls, index):
        result = cls.query.filter_by(index=index).first()
        return {'author': result.author, 'body': result.body,
                'date': result.date, 'title': result.title}

Lastly, wsgi.py is used by gunicorn to run the Flask app.

from app import app as application

if __name__ == "__main__":
    application.run()

Great! Time to set up the Dockerfile.


This part is fairly easy. There are a lot of resources with 'recipes' for creating a Dockerfile for simple Python apps like this. You just need to copy the files to the /code directory in the container, install the libraries, and run the app:

FROM python:3.6.1-alpine

ADD . /code
WORKDIR /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt

CMD gunicorn -w 4 --bind 0.0.0.0:$PORT wsgi

I added the '$PORT' environment variable to be able to deploy it to some cloud services. On some services, such as Heroku, the port number is assigned by the platform, so you need to leave the env var in the Dockerfile.

If you would rather just use Flask instead of Gunicorn, you would need to tell Flask where the app is and change the last line to:

ENV FLASK_APP=app.py
CMD flask run -h 0.0.0.0 -p $PORT

Building the image:

$ docker build -t mpfc_api .

Running it:

$ docker run -p 3000:3000 -e PORT=3000 mpfc_api

Now, we can test it with Postman or a browser:

http://localhost:3000/get/original returns:

    "episode": 10,
    "sketch": "Bank robber (lingerie shop)",
    "dialogue": [
        "Good morning, I am a bank robber. Er, please don't panic, just hand over all your money.",
        "This is a lingerie shop, sir.",
        "Fine, fine, fine.",
        "Adopt, adapt and improve. Motto of the round table. Well, um ... what have you got?",
        "Er, we've got corsets, stockings, suspender belts, tights, bras, slips, petticoats, knickers, socks and garters, sir.",
        "Fine, fine, fine, fine. No large piles of money in safes?",
        "No, sir.",
        "No deposit accounts?",
        "No sir.",
        "No piles of cash in easy to carry bags?",
        "None at all sir.",
        "No luncheon vouchers?",
        "Fine, fine. Well, um... adopt, adapt and improve. Just a pair of knickers then please."

http://localhost:3000/get/new_material returns:

    "author": "harding",
    "date": "July 22, 1920",
    "title": "High Wages for High Production",
    "episode": "Lumberjack song",
    "new_speech": "... I am ready to acclaim the highest essential
 to human happiness. In conflict is disaster, in understanding 
there is a minimum production when our need is maximal. The 
destruction of one unavoidably involves the other. I cut down 
trees, He eats his lunch, He goes to the people and their 
obligation to the foundation on which industry is bigger than any
 element in its modern making. In bars??????? I chop down trees, I
 eat my lunch, I go shopping, And have buttered scones for tea. 
The suspicion or rebellion of one is the call of America. I am 
ready to acclaim the highest essential to human happiness. Well I 
object to all this sex on the necessity for understanding, 
particularly that understanding that concerns ourselves at home.
 He cuts down trees, I eat my lunch, I go shopping, And have 
buttered scones for tea. I wish to complain in the strongest 
possible terms about the lumberjack who wears women's clothes. 
The destruction of one is the call of America. I am speaking as 
one who has counted the contents of the millions of American wage 
earners. I want the wage earners of America that mounting wages 
and they abide...."


Oh president Harding, I thought you were so rugged!*

Great! Now I have a very silly lorem ipsum generator to use in other projects! And since it is already dockerized, it can be readily deployed on any cloud provider such as Heroku or AWS! I hope this post has made you smile! :)
