Category: Docker

Containerising your Django Application into Docker and eventually Kubernetes

The shift to containers is happening, in some places faster than others...

People underestimate the complexity and all the parts involved in making your application work.

The Django Example

In the case of Django, we would traditionally have deployed it on a server running:

  • a webserver (nginx)
  • a Python WSGI server - web server gateway interface (gunicorn or uWSGI)
  • a database (SQLite, MySQL or Postgres)
  • Sendmail
  • Maybe some other stuff: redis for caching and user sessions

So the server quickly becomes a snowflake: it does multiple jobs and must be configured to communicate with multiple other systems.

It violates the single responsibility principle.

But we understood it that way. There is a bit of a mind shift required when Docker is brought in.

The key principle is:

Be stateless, kill your servers almost every day

Taken from Node Best Practices

So what does that mean for our Django application?

Well, we have to think differently. Now for each process we are running we need to decide if it is stateless or stateful.

If it is stateful (not ephemeral) then it should be set aside and run in a traditional manner (or run by a cloud provider). In our case the stateful part is luckily only the database. When I say stateful I mean the state needs to persist...forever. User sessions, cache and email do need to work and persist for shorter periods - it won't be a total disaster if they fail. Users will just need to re-authenticate.

So the other parts, which can all run in containers, are:

  • Nginx
  • Gunicorn
  • Sendmail

For simplicity's sake I'm going to gloss over redis for caching and user sessions. I'm also not keen to include Sendmail, because it introduces more complexity and another component - namely message queues.

Let's Start Containerising our Django Application

Alright, so I'm assuming that you know Python and Django pretty well and have at least deployed a Django app to production (the traditional way).

So we have all the code, we just need to get it running in a container - locally.

A good resource to use is ruddra's docker-django repo. You can use some of his Dockerfile examples.

First, install Docker Engine.
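If you don't have Docker yet, the convenience script from Docker's own docs works on most Linux distributions (inspect the script before running it):

# Install Docker Engine via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh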

Let's get it running in Docker using just a Dockerfile. Create a file called Dockerfile in the root of the project.


# pull official base image - set the exact version of python
FROM python:3.8.0

LABEL maintainer="Your Name <your@email.com>"

# set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Install dependencies
RUN pip install --no-cache-dir -U pip

# Set the user to run the project as, do not run as root
RUN useradd --create-home code
WORKDIR /home/code
USER code

# Make user-installed scripts (pip install --user) available on the PATH
ENV PATH=/home/code/.local/bin:$PATH

COPY path/to/requirements.txt /tmp/
RUN pip install --user --no-cache-dir -r /tmp/requirements.txt

# Copy project - chown so the non-root user owns the files
COPY --chown=code:code . /home/code/

# Documentation from the person who built the image to the person running the container
EXPOSE 8000

# Exec form so the python process receives signals (e.g. docker stop)
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
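Since COPY . pulls everything in the project root into the image, it is worth adding a .dockerignore next to the Dockerfile. A minimal sketch - these entries are just the common suspects, adjust for your project:

# .dockerignore - keep the build context lean and secrets out of the image
.git
__pycache__/
*.pyc
.env
db.sqlite3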

A reference on the Dockerfile commands

Remember to update the settings of the project so that:

ALLOWED_HOSTS = ['127.0.0.1', '0.0.0.0']

Now let us build the image and run it:


docker build -t company/project .
docker run -it -p 8000:8000 --name project company/project

Now everything should just work!...go to: http://0.0.0.0:8000
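A quick smoke test from another terminal (assuming the dev server came up on port 8000):

# Expect an HTTP 200 from the Django dev server
curl -I http://127.0.0.1:8000/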


Pros and Cons of OKD and Portainer, with Kubernetes and Docker Swarm Mode Underlying

If you are about to make a choice about which container orchestrator, scheduler or platform to choose, you have a tough decision ahead. It is difficult to understand the nuances, features and limitations of each platform without having tried them out for a while. I am focusing on running your own cluster - i.e. not using a public cloud provider like Amazon EKS or Azure Container Service.

So in this short post, I'm going to tell you the main issues and features to look out for.

Openshift OKD

Kubernetes optimized for continuous application development and multi-tenant deployment.

Issues / Cons

  • Bad developer experience, as containers don't always just work: they cannot run as root and instead run as a random user, while some containers can only run as root (see the sketch after these lists)
  • Your containers (sometimes) need to be specifically built for OpenShift
  • No support for docker-compose, as Kubernetes has its own way of doing things - you have to convert your docker-compose files
  • Frontend can be a bit tricky

Features / Pros

  • Like most Red Hat things - great docs and a well designed, stable product
  • More secure and less prone to vulnerabilities, as containers never run as the root user
  • Good defaults - sets up a docker registry for you, running as a container on Kubernetes
  • Everything is managed within OKD - routes are automatically set up - no need to open ports and point stuff all around the place
  • Frontend can be nice and simple
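To gauge whether an image will survive OKD's random-UID behaviour, you can simulate it locally with plain Docker. A minimal sketch, assuming the company/project image built earlier - the UID is arbitrary, which is exactly the point:

# OKD/OpenShift runs containers with a random UID and GID 0 (the root group).
# Simulate that locally to check the image copes:
docker run --rm --user 12345:0 company/project id

# If the app writes to disk, those paths must be group-writable by GID 0,
# e.g. in the Dockerfile: RUN chgrp -R 0 /home/code && chmod -R g=u /home/code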

Portainer

Making Docker management easy - it uses Docker Swarm, not Kubernetes.

Issues / Cons

  • No routes (via domain names) - you have to manually configure and manage ports and routes to the containers

Features / Pros

  • Docker developer friendly - containers just work
  • Containers run as root

Deploying Praeco on Portainer

In this post I'm going to attempt to set up Praeco on Portainer. Praeco is a frontend for setting up ElastAlert rules that check an Elasticsearch index and send automated messages via Telegram, Slack or webhook.

To set this up you need two things already running:

  • An Elasticsearch instance
  • A Portainer instance

You should also know a bit about Docker and docker-compose.yml.
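Before starting, it's worth confirming the Elasticsearch instance is reachable. A quick check - substitute your own host, assuming the default port 9200:

curl -s 'http://your-elasticsearch-host:9200/_cluster/health?pretty'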

[Images: praeco-alerts, elasticsearch-praeco-portainer]

Things Should Be Easy

In theory, or in development, you set an environment variable, run docker-compose up -d, and everything just works. When deploying to a production system it doesn't work as well.
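In development that happy path looks something like this (PRAECO_ELASTICSEARCH is the variable the Praeco README uses, if I read it right; the value is a placeholder):

# Clone the repo, point Praeco at your Elasticsearch, bring it up
git clone https://github.com/ServerCentral/praeco && cd praeco
export PRAECO_ELASTICSEARCH=<your-es-ip>
docker-compose up -d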

I tried to create an App Template; Portainer said it worked, but the template never showed up in the list.

So I tried to create a stack: go to Stacks -> Add stack, fill in the required fields and set the repository URL to: https://github.com/ServerCentral/praeco

[Image: portainer-praeco-stack-create]

I created the stack and boom - I thought it had just worked!

Unfortunately when I view the stack I see a very different picture:

[Image: portainer-praeco-deploy-rejected]

The error message on one of the tasks for praeco_webapp is Invalid mount config for type "bind": bind source path does not exist: /data/compose/3/public/praeco.config.json

The error on praeco_elastalert is Invalid mount config for type "bind": bind source path does not exist: /data/compose/3/config/elastalert.yaml

So it expects these files to exist on the host file system.

Bind mounts...what do I do now

This system is not your dev machine, where you pull the git repo, make config and rule changes, and then bring the Docker instances up. This system pretty much only uses your docker-compose file to build the images, and it fails because the repo doesn't actually exist on the host nodes.

Since it is doing a bind mount - that is not something you want to do in a production system.
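For completeness, one way to hack around it would be to create the expected files on every node yourself. A rough sketch - the target paths come from the error messages above, and I'm assuming the files live at the same relative paths in the Praeco repo:

# On each Swarm node: create the paths the bind mounts expect and
# fetch the default config files from the repo (paths assumed)
sudo mkdir -p /data/compose/3/public /data/compose/3/config
sudo curl -o /data/compose/3/public/praeco.config.json \
  https://raw.githubusercontent.com/ServerCentral/praeco/master/public/praeco.config.json
sudo curl -o /data/compose/3/config/elastalert.yaml \
  https://raw.githubusercontent.com/ServerCentral/praeco/master/config/elastalert.yaml

Per-node manual state like that is exactly what containers were supposed to get rid of, though.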

Damn...I'm stuck