Lecture: Lessons from dockerizing a Python service
While "dockerizing" an existing Python service, I faced some interesting challenges and learned many useful "best practices". In this talk I would like to share my findings with you.
Docker offers a revolutionary way of packaging and running applications and services. It offers the same benefits as a virtual machine (VM), but at a much lower "cost".
In order to have a (Python) service running in Docker, one has to first build an "image" of that service, which contains all the run-time dependencies it needs. Once an image is built, it can be started: Docker runs it by creating a "container" based on that image.
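As a sketch, a minimal Dockerfile for such a service might look like the following (the file name `app.py`, the port, and the Python version are assumptions for illustration, not a prescription):

```dockerfile
# Pin a specific base image for reproducibility
FROM python:3.12-slim

WORKDIR /app

# Copy the dependency list first, so this (slow) layer is cached
# and reused as long as requirements.txt does not change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code last; code changes only invalidate this layer
COPY . .

# The port the service listens on (hypothetical)
EXPOSE 8000

CMD ["python", "app.py"]
```

With this in place, `docker build -t my-service .` produces the image, and `docker run -p 8000:8000 my-service` starts a container from it.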
Docker containers provide isolation, reproducibility and scalability. A container consists of a main process and a virtual filesystem. That process runs alongside the other processes of the host machine, but is isolated from them; the same goes for its virtual filesystem. That is how Docker provides benefits similar to a full-blown VM, while remaining "lightweight".
It is very easy to convert an existing Python service (e.g. a REST web-service) to a Docker image. But the devil is in the details!
For example, there are optimal and sub-optimal ways of building a Docker image; depending on this, one could be wasting time, disk space and/or network bandwidth. Other things that can go wrong are false assumptions about the running environment, which can result in the service not being reachable at all, or its standard output not being visible. Finally, there are better and worse ways of making sure that the service survives reboots of the host machine.
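To make these pitfalls concrete, here is one plausible set of `docker run` flags that addresses them (the image name and port are hypothetical; this is a sketch, not the only way):

```shell
# -p 8000:8000            publish the service port on the host; note that the
#                         app must listen on 0.0.0.0 inside the container, not
#                         127.0.0.1, or it will be unreachable from outside
# -e PYTHONUNBUFFERED=1   disable Python's output buffering so print() output
#                         appears promptly in `docker logs`
# --restart unless-stopped  restart the container after crashes and host
#                         reboots (provided the Docker daemon starts on boot)
docker run -d -p 8000:8000 -e PYTHONUNBUFFERED=1 --restart unless-stopped my-service
```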
In this talk I want to give a brief introduction to Docker, and share the lessons I learnt while making a Python web-service run under Docker.