Building an Async Python 3 Serverless Application with Docker

In a prior article I wrote about how to create a Docker Swarm and deploy a replicated Python 3 application. I also mentioned that Ben Firshman had done a demo of serverless apps at DockerCon. In this article I will show you a simple way to build a “serverless” application and test it via Docker.

When I refer to “serverless” I’m referring to the idea that the application is a short-lived app that does its job and then stops – just like AWS Lambda.

I will create two applications, each in its own project folder: serverless-app and serverless-web.

The serverless-app project is the actual “serverless” piece of this; the web app will run as long as we want. I gave the projects similar names so they stay closely associated while still being different enough to tell which one does what.

I’m using Python 3.4.4; prior Python 3.x versions may also work, but I have not tested that. I’m also using Docker for Mac. I created the two projects as mentioned and the appropriate venv for each.

$ python -m venv serverless-web
$ cd serverless-web
$ source bin/activate
$ mkdir src && cd src
$ touch app.py
$ touch Dockerfile

In a second terminal window/tab create the second project:

$ python -m venv serverless-app
$ cd serverless-app
$ source bin/activate
$ mkdir src && cd src
$ touch app.py
$ touch Dockerfile

I will start with the serverless-app project. The Dockerfile is very simple:

FROM python:3.4.4
COPY . /serverless-app
WORKDIR /serverless-app
ENTRYPOINT ["python", "app.py"]

The Python code is even simpler; it just takes an argument and prints it out:

import sys

if __name__ == "__main__":
    print(sys.argv[1])
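
If you want, you can try the script directly before containerizing it:

$ python app.py JohnDoe
JohnDoe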

From the serverless-app/src folder, run the following to build the Docker image, substituting your own name:

$ docker build . -t chadlung/serverless-app
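
As a quick sanity check, you can run the image directly; anything after the image name is passed straight through to app.py via the ENTRYPOINT:

$ docker run --rm chadlung/serverless-app JohnDoe
JohnDoe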

Let’s move on to the second project: serverless-web.

Note: This next project exposes the socket /var/run/docker.sock to the container as a volume. This is a potential security risk. Depending on how you plan to use something like this, you need to seriously consider whether this causes a problem. If you are deploying internally and are in complete control of (or are the creator of) all the containers, then the risk is mitigated substantially. Do your own research.

The Dockerfile for this project is:

FROM python:3.4.4
RUN pip install -U pip
RUN pip install -U falcon dockerrun
EXPOSE 8080
COPY . /hello-app
WORKDIR /hello-app
VOLUME ["/var/run/docker.sock"]
CMD ["python", "app.py"]

The Python code for app.py:

import json
from wsgiref import simple_server
import dockerrun
import falcon


client = dockerrun.from_env()


class HelloResource(object):
    def on_get(self, req, resp):
        try:
            value = req.get_param('value')

            print(client.run(
                "chadlung/serverless-app",
                command=value,
                detach=True
            ))

            resp.content_type = 'application/json'
            resp.status = falcon.HTTP_200
            resp.body = json.dumps({'message': str(value)})
        except Exception as ex:
            resp.status = falcon.HTTP_500
            resp.body = str(ex)


if __name__ == '__main__':
    app = falcon.API()
    hello_resource = HelloResource()
    app.add_route('/hello', hello_resource)
    httpd = simple_server.make_server('0.0.0.0', 8080, app)
    httpd.serve_forever()

This is a simple Falcon web app with a /hello endpoint that takes a querystring param like this: /hello?value=JohnDoe

You can build the Docker image for this, again substituting your own name:

$ docker build . -t chadlung/serverless-web

Run it like so:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm -p 8080:8080 chadlung/serverless-web

Now, hit the service: http://127.0.0.1:8080/hello?value=JohnDoe
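
For example, with curl:

$ curl "http://127.0.0.1:8080/hello?value=JohnDoe"
{"message": "JohnDoe"}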

You won’t see much interesting output in the serverless-web terminal, since the container was run asynchronously:

{'Id': 'a143cecd7e2d455d488ab5d70e989b7738e4a449664d256527105d9f2e1e1a73', 'Warnings': None}
172.17.0.1 - - [09/Jul/2016 01:30:51] "GET /hello?value=JohnDoe HTTP/1.1" 200 22
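
One quick way to confirm the detached container actually ran is to look it up by image and read its logs (the container ID will differ on your machine):

$ docker ps -a --filter ancestor=chadlung/serverless-app
$ docker logs <container-id>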

However, assuming everything was set up correctly, your serverless-app did indeed run; it just ran asynchronously. Let’s verify that in the code as well. In the HelloResource code of app.py, change this:

print(client.run(
    "chadlung/serverless-app",
    command=value,
    detach=True
))

To this (we are just changing the detach param value so the call is no longer asynchronous):

print(client.run(
    "chadlung/serverless-app",
    command=value,
    detach=False
))

Now it will wait for the results to come back. Stop the service, rebuild the serverless-web image, and run it again.
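
The same build and run commands from earlier apply:

$ docker build . -t chadlung/serverless-web
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm -p 8080:8080 chadlung/serverless-web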

In the output you should see the JohnDoe value come back:

b'JohnDoe\n'
172.17.0.1 - - [09/Jul/2016 01:36:18] "GET /hello?value=JohnDoe HTTP/1.1" 200 22

For most projects you will probably want detach=True so the container runs asynchronously, but that depends on your project. Typically, in a web service scenario, you wouldn’t have the caller wait around very long for a response. That said, if the image is already on the machine running this, the response should be pretty fast, assuming it’s not doing a bunch of heavy calculations.

This of course has the potential to give you a Python REST service that can fire off async tasks to Docker containers for processing. That could mean you don’t need something like Celery or RQ. Of course, a lot depends on your requirements.
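
As a rough sketch of that idea (the /task route, the 202 status, and the container_id field are my own additions, not something from the demo above), you could add a resource to app.py that kicks off a detached container and hands the container ID back to the caller to check on later:

class TaskResource(object):
    def on_get(self, req, resp):
        # Kick off a detached "worker" container and return its ID so the
        # caller can track it later (for example with docker logs).
        value = req.get_param('value')
        result = client.run(
            "chadlung/serverless-app",
            command=value,
            detach=True
        )
        resp.content_type = 'application/json'
        resp.status = falcon.HTTP_202
        resp.body = json.dumps({'container_id': result['Id']})

You would register it alongside the existing route with app.add_route('/task', TaskResource()); it reuses the same imports and dockerrun client already defined in the app.py shown above.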

With all that being said, keep in mind this is not limited to just Python. A Docker image of any sort can be used, and it doesn’t matter what the app inside was built with: Ruby, Go, Python, Java, etc.

As I mentioned at the start of this article in regard to my prior article on Docker Swarm, if you take that knowledge and add in this article’s ideas, you end up with a simple yet potentially powerful tool for solving problems. Just keep in mind the potential security issues around docker.sock.
