
Pros and Cons of OKD and Portainer (and the Kubernetes and Docker Swarm Mode Underneath)

If you are about to choose a container orchestrator, scheduler or platform, you have a tough decision ahead. It is difficult to understand the nuances, features and limitations of each platform without having tried them out for a while. I am focusing on running your own cluster – i.e. not using a public cloud provider like Amazon EKS or Azure Container Service.

So in this short post, I’m going to tell you the main issues and features to look out for.

OpenShift OKD

Kubernetes optimized for continuous application development and multi-tenant deployment.

Issues / Cons

  • Rougher developer experience, as containers don’t always just work – OKD runs containers as a random non-root user, and some images can only run as root (a common workaround is sketched after this list).
  • Your containers sometimes need to be specifically built for OpenShift.
  • No support for docker-compose, as Kubernetes has its own way of doing things – you have to convert your docker-compose files (see the Kompose sketch further down).
  • The frontend can be a bit tricky.
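For the random-UID issue above, a common workaround is to make the image’s writable paths group-owned by the root group, since the arbitrary UID OpenShift assigns is always a member of GID 0. A minimal Dockerfile sketch – the app and its paths here are hypothetical:

    # Sketch: tolerating OpenShift's arbitrary non-root UID.
    # The assigned UID is always in the root group (GID 0), so group
    # permissions that mirror the owner's make the paths writable.
    FROM python:3.11-slim

    WORKDIR /app
    COPY app.py .

    # /app is where this (hypothetical) app writes at runtime.
    RUN chgrp -R 0 /app && \
        chmod -R g=u /app

    # Any non-root UID; OpenShift overrides it with a random one anyway.
    USER 1001

    CMD ["python", "app.py"]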

Features / Pros

  • Like most Red Hat products – great docs and a well-designed, stable product.
  • More secure and less prone to vulnerabilities, as containers never run as the root user.
  • Good defaults – it sets up a Docker registry for you, running in a container on Kubernetes.
  • Everything is managed within OKD – routes are set up automatically, so there is no need to open ports and point things all around the place.
  • The frontend can be nice and simple.
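On the docker-compose conversion mentioned in the cons: the Kompose tool can do a first-pass conversion of a compose file into Kubernetes (or OpenShift) manifests. A rough sketch, assuming kompose is installed:

    # Convert docker-compose.yml into Kubernetes manifests (one file per resource)
    kompose convert -f docker-compose.yml

    # Or target OpenShift objects directly
    kompose convert -f docker-compose.yml --provider openshift

The output usually still needs hand-editing, but it saves you from translating every service by hand.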

Portainer

Making Docker management easy – it uses Docker Swarm, not Kubernetes.

Issues / Cons

  • No routes (via domain names) – you have to manually configure and manage ports and routes to the containers (a minimal example follows this list).
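In practice that means each externally reachable service publishes a host port by hand in its stack file. A minimal sketch:

    # Sketch: no automatic routes, so every service maps its own host port
    version: "3"
    services:
      webapp:
        image: nginx:alpine
        ports:
          - "8080:80"   # host port 8080 -> container port 80, managed manually

You then still need something in front (DNS, a reverse proxy) to map domain names onto those ports yourself.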

Features / Pros

  • Docker-developer friendly – containers just work.
  • Containers can run as root, so images that expect root need no changes.

Deploying Praeco on Portainer

In this post I’m going to attempt to set up Praeco on Portainer. Praeco is a frontend for setting up ElastAlert rules that check an Elasticsearch index and send automated messages via Telegram, Slack or a webhook.

To set this up you need two things already in place:

  • An Elasticsearch instance
  • A Portainer instance

You should also know a bit about Docker and docker-compose.yml.

praeco-alerts

elasticsearch-praeco-portainer
Things Should Be Easy

In theory, or in development, you set an environment variable, run docker-compose up -d, and everything just works (sketched below). When deploying to a production system, it doesn’t go that smoothly.
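For reference, the dev-style flow looks roughly like this – the PRAECO_ELASTICSEARCH variable name is taken from the Praeco README, so treat the exact name as an assumption:

    # Dev-style bring-up on a single machine
    git clone https://github.com/ServerCentral/praeco && cd praeco
    export PRAECO_ELASTICSEARCH=10.0.0.5   # IP of your Elasticsearch instance
    docker-compose up -d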

I tried to create an App Template; Portainer said it worked, but the template was not shown in the list.

So I tried creating a stack instead: go to Stacks -> Add stack, fill in the required fields and set the repository URL to: https://github.com/ServerCentral/praeco

portainer-praeco-stack-create

I created the stack and booom – I thought it just worked!

Unfortunately when I view the stack I see a very different picture:

portainer-praeco-deploy-rejected

The error message on one of the tasks for praeco_webapp is: Invalid mount config for type "bind": bind source path does not exist: /data/compose/3/public/praeco.config.json

The error on praeco_elastalert is: Invalid mount config for type "bind": bind source path does not exist: /data/compose/3/config/elastalert.yaml

So it expects these files to exist on the host file system.

Bind mounts…what do I do now

This system is not your dev machine, where you pull the git repo, make config and rule changes, and then bring the Docker instances up. This system pretty much only uses your docker-compose file to build the images, and it fails because the repo’s files don’t actually exist on the host nodes.

Since it is doing a bind mount – and that is not something you want to be doing in a production system anyway – I needed another approach. One possible escape hatch is sketched below.
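Offered only as an untested sketch: Swarm’s configs feature ships file contents through the cluster itself, so nothing needs to pre-exist on the worker nodes. The image name and target path below are assumptions based on the error messages, not verified against the Praeco repo:

    # Sketch: replacing a failing bind mount with a Swarm config
    version: "3.3"
    services:
      elastalert:
        image: servercentral/elastalert   # assumed image name
        configs:
          - source: elastalert_yaml
            target: /opt/elastalert/config/elastalert.yaml   # assumed path
    configs:
      elastalert_yaml:
        file: ./config/elastalert.yaml   # read once at deploy time, then stored in Swarm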

Damn…I’m stuck

 

Help Me Understand Containerisation (Docker) – Part 2

This is part 2; I’d recommend first checking out Part 1 of Help Me Understand Containerisation (Docker).

We have gone over the development workflow – in a Django context – and it was quite a hefty chunk. Now let us look at continuous integration with GitLab and containers.

The Container CI Workflow

We are going to use GitLab for continuous integration. GitLab is closely aligned with a container-based development workflow.

jenkins-analog-player-in-containerised-world

A: I use gitlab for my ci .. so that part will depend on gitlab ..

A: but containers do make that easy .. cause you just need to build the container .. and then you can do everything inside the container

gitlab-ci-containerisation

Let’s dive into container based CI with gitlab:

continuous-integration-discussion-docker

For GitLab pipelines you should read the docs on how GitLab CI/CD works and the basic CI/CD workflow.

The New Trend in Continuous Integration

  1. Create an application image.
  2. Run tests against the created image.
  3. Push image to a remote registry.
  4. Deploy to a server from the pushed image.

 

Importantly, if you have a private GitLab instance you will need to enable the container registry; you can do that by following the admin documentation for enabling the container registry.

Step 1 is to make a .gitlab-ci.yml file.
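A minimal sketch of what that file can look like for the build / test / push flow above. The registry paths use GitLab’s predefined CI variables; the test command is a placeholder, and this assumes a runner that can talk to a Docker daemon:

    # .gitlab-ci.yml – sketch of build / test / push
    stages:
      - build
      - test
      - push

    build:
      stage: build
      script:
        - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .

    test:
      stage: test
      script:
        # Placeholder: run your test suite inside the freshly built image
        - docker run --rm $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA python -m pytest

    push:
      stage: push
      script:
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA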

 

The Container Deployment

Where should your DB and session storage live?

What is the backup strategy?

Let’s do some autoscaling.

The Docker Compose docs for production give some good tips on deploying to production.
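One of those tips is to layer a production override on top of your base compose file rather than editing it in place. A sketch, assuming you keep production-only settings (restart policies, no source bind mounts) in a hypothetical docker-compose.prod.yml:

    # Base config plus production overrides, merged in order
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d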