The shift to containers is happening, in some places faster than others...
People underestimate the complexity and the number of moving parts involved in making your application work.
The Django Example
In the case of Django, we would traditionally have deployed it on a server running:
- a web server (nginx)
- a Python WSGI (web server gateway interface) server, such as Gunicorn or uWSGI
- a database (SQLite, MySQL or Postgres)
- maybe some other components: Redis for caching and user sessions
So the server quickly becomes a snowflake: it has to do multiple things and must be configured to communicate with multiple other things.
It violates the single responsibility principle.
But we understood it that way. Bringing Docker in requires a bit of a mind shift.
The key principle is:
"Be stateless, kill your servers almost every day" (taken from Node Best Practices)
So what does that mean for our Django application?
Well, we have to think differently. For each process we run, we need to decide whether it is stateless or stateful.
If it is stateful (not ephemeral), then it should be set aside and run in a traditional manner (or run by a cloud provider). In our case the stateful part is luckily only the database. When I say stateful, I mean the state needs to persist...forever. User sessions, cache and email do need to work and persist for shorter periods, but it won't be a total disaster if they fail. Users will just need to re-authenticate.
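One way to keep the Django process itself stateless is to push all state out through configuration: the container connects to an external, long-lived database instead of owning one. Here is a minimal sketch of what that could look like in `settings.py`; the environment variable names and the host are my own illustrative choices, not Django defaults.

```python
import os

# Hypothetical settings.py fragment: the database is the stateful part,
# so it lives outside the container (or with a cloud provider) and the
# container only learns where it is via environment variables.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "project"),
        "USER": os.environ.get("POSTGRES_USER", "project"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        # Illustrative host - point this at your real database server
        "HOST": os.environ.get("POSTGRES_HOST", "db.example.org"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```

Because nothing here is baked into the image, you can kill and recreate the container at will and the data survives.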
All the other parts can run in containers. For simplicity's sake I'm going to gloss over Redis as cache and session store. I'm also not keen to include sendmail, because it introduces more complexity and another component: message queues.
Let's Start Containerising our Django Application
Alright, I'm assuming that you know Python and Django pretty well and have at least deployed a Django app into production (the traditional way).
We have all the code; we just need to get it running in a container - locally.
A good resource to use is ruddra's docker-django repo. You can use some of his Dockerfile examples.
First, install Docker Engine.
Let's get it running in Docker using just a Dockerfile. Create a file called Dockerfile in the root of the project:
```dockerfile
# pull official base image - set the exact version of python
FROM python:3.8.0

LABEL maintainer="Your Name <firstname.lastname@example.org>"

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Install dependencies
RUN pip install --no-cache-dir -U pip

# Set the user to run the project as; do not run as root
RUN useradd --create-home code
WORKDIR /home/code
USER code

COPY path/to/requirements.txt /tmp/
RUN pip install --user --no-cache-dir -r /tmp/requirements.txt

# Copy project
COPY . /home/code/

# Documentation from the person who built the image to the person running the container
EXPOSE 8000

CMD python manage.py runserver 0.0.0.0:8000
```
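Since `COPY . /home/code/` copies the whole project directory into the image, it's worth adding a `.dockerignore` next to the Dockerfile so build artefacts and secrets stay out. A minimal sketch (the entries are typical examples; adjust to your project):

```
# Hypothetical .dockerignore - trim what `COPY .` sends to the image
.git
__pycache__/
*.pyc
*.sqlite3
.env
```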
I found a cool thing that can audit your Dockerfile - https://www.fromlatest.io
A reference on the Dockerfile commands
Remember to update the settings of the project so that:
ALLOWED_HOSTS = ['127.0.0.1', '0.0.0.0']
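Hard-coding hosts works for local testing, but since the same image should run unchanged everywhere, you could also feed `ALLOWED_HOSTS` in from the environment. A small sketch, assuming a `DJANGO_ALLOWED_HOSTS` variable of my own naming (comma-separated), not a Django built-in:

```python
import os

# Hypothetical settings.py fragment: read allowed hosts from an
# environment variable so dev and prod use the same image, falling
# back to the local-testing hosts used above.
ALLOWED_HOSTS = os.environ.get(
    "DJANGO_ALLOWED_HOSTS", "127.0.0.1,0.0.0.0"
).split(",")
```

You would then pass the variable at run time, e.g. `docker run -e DJANGO_ALLOWED_HOSTS=example.org ...`.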
Now let us build the image and run it:
docker build . -t company/project
docker run -p 8000:8000 -it --name project company/project
Now everything should just work! Go to: