Pros and Cons of OKD and Portainer (with Kubernetes and Docker Swarm Mode Underneath)

If you are about to choose a container orchestrator, scheduler or platform, you have a tough decision ahead. It is difficult to understand the nuances, features and limitations of each platform without having tried them out for a while. I am focusing on running your own cluster – i.e. not using a public cloud provider like Amazon EKS or Azure Container Service.

So in this short post, I’m going to point out the main issues and features to look out for.

Openshift OKD

Kubernetes optimized for continuous application development and multi-tenant deployment.

Issues / Cons

  • Worse developer experience – containers don’t just work, because OpenShift runs them as a random non-root user and some images can only run as root.
  • Your containers sometimes need to be built specifically for OpenShift.
  • No support for docker-compose – Kubernetes has its own way of doing things, so you have to convert your docker-compose files.
  • The frontend can be a bit tricky.

Features / Pros

  • Like most Red Hat products – great docs and a well designed, stable product.
  • More secure and less prone to vulnerabilities, as containers never run as the root user.
  • Good defaults – it sets up a Docker registry for you, using Kubernetes in a container.
  • Everything is managed within OKD – routes are set up automatically, so there is no need to open ports and point things all over the place.
  • The frontend can be nice and simple.
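
On the routes point: in OKD/OpenShift, exposing a Kubernetes service to the outside world is a single command. A minimal sketch – the service name nginx is a placeholder for your own service:

```shell
# Create a Route for an existing service; the OKD router
# assigns a hostname automatically (or pass --hostname=...)
oc expose service/nginx

# Inspect the generated route and its hostname
oc get route nginx
```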

Portainer

Making Docker management easy – Portainer manages Docker Swarm, not Kubernetes.

Issues / Cons

  • No routes (via domain names) – you have to manually configure and manage ports and routes to the containers

Features / Pros

  • Docker developer friendly – containers just work
  • Containers run as root

Deploying Praeco on Portainer

In this post I’m going to attempt to set up Praeco on Portainer. Praeco is a frontend for creating ElastAlert rules that check an Elasticsearch index and send automated messages via Telegram, Slack or a webhook.

To set this up you need two things already in place:

  • Elasticsearch instance
  • Portainer Instance

You also should know a bit about docker and docker-compose.yml.

praeco-alerts

elasticsearch-praeco-portainer

Things Should Be Easy

In theory – or in development – you set an environment variable, run docker-compose up -d and everything just works. When deploying to a production system it doesn’t work as well.

I tried to create an App Template; Portainer said it worked, but it was not shown in the list.

So I tried to create a stack: go to Stacks -> Add stack. I added the required fields and set the repository URL to: https://github.com/ServerCentral/praeco

portainer-praeco-stack-create

I created the stack and boom – I thought it had just worked!

Unfortunately when I view the stack I see a very different picture:

portainer-praeco-deploy-rejected

The error message on one of the tasks for praeco_webapp is Invalid mount config for type "bind": bind source path does not exist: /data/compose/3/public/praeco.config.json

The error on praeco_elastalert is Invalid mount config for type "bind": bind source path does not exist: /data/compose/3/config/elastalert.yaml

So it expects these files to exist on the host file system.
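
For context, the relevant part of Praeco’s docker-compose.yml looks roughly like this. This is a sketch inferred from the error messages above, not the verbatim file – the image names and container-side paths are assumptions. The relative bind-mount sources resolve against wherever Portainer checked the repo out on the node (here /data/compose/3/):

```yaml
# Sketch of the bind mounts Praeco's compose file declares
# (service names from the errors; images/paths are assumptions)
version: "3"
services:
  webapp:
    image: praecoapp/praeco
    volumes:
      # fails unless ./public/praeco.config.json exists on the node
      - ./public/praeco.config.json:/var/www/html/praeco.config.json
  elastalert:
    image: praecoapp/elastalert
    volumes:
      # fails unless ./config/elastalert.yaml exists on the node
      - ./config/elastalert.yaml:/opt/elastalert/config.yaml
```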

Bind mounts…what do I do now

This system is not your dev machine, where you pull the git repo, make config and rule changes, and then bring the docker instances up. This system pretty much only uses your docker-compose file to build the images, and it fails because the repo doesn’t actually exist on the host nodes.

Besides, it is doing a bind mount – and that is not something you want to rely on in a production system.
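
One Swarm-native alternative to host bind mounts is Docker configs: the file is stored in the cluster itself and mounted into the service, so it does not need to exist on any particular node. A minimal sketch, assuming compose file format 3.3+ – the service, image and target path are placeholders:

```yaml
# Sketch: mount a file via a Swarm config instead of a bind mount
version: "3.3"
services:
  elastalert:
    image: praecoapp/elastalert
    configs:
      - source: elastalert_config
        target: /opt/elastalert/config.yaml
configs:
  elastalert_config:
    external: true   # created beforehand with `docker config create`
```

The config itself would be created once on a manager node, e.g. docker config create elastalert_config ./config/elastalert.yaml.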

Damn…I’m stuck

 

Openshift will not run your container as a root user

So you have set up OpenShift Container Platform and you try to deploy your first image – Docker Hub’s nginx image – and what do you get… an error:


2019/10/03 06:39:24 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2019/10/03 06:39:24 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)

The reality is that you are being forced to run as an arbitrary user ID, which means that some container images may not run out of the box in OpenShift.

This will be the case where images do not adopt security best practices and need to be run as the root user ID even though they have no actual requirement to run as root. Even an image which has been setup to run as a fixed user ID which isn’t root may not work – Openshift cookbook

A massive blow to developer experience coming from standard Kubernetes.

openshift-okd-banner

This is a very important consideration, and the people at Red Hat OpenShift have taken a stand against unnecessarily running containers as root. From what I have read, Kubernetes and Docker Swarm don’t care – they will happily run your container as root.

It is best to read what OpenShift says about support for arbitrary user IDs.

By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID.

For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.

So to get it working you do the following to the directory being written to:

RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory

Remember, we are talking about the root group, not the root user.

The root group does not have any special permissions (unlike the root user) so there are no security concerns with this arrangement

There is also the concern that some images require an associated entry for the running user ID in /etc/passwd.

Lastly, the final USER declaration in the Dockerfile should specify the numeric user ID, not the user name.

If the image does not specify a USER, it inherits the USER from the parent image
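
Putting those guidelines together, a Dockerfile prepared for an arbitrary user ID might look like this. This is a sketch, not an official example – the application path, start script and the UID 1001 are placeholders:

```dockerfile
FROM alpine:3.10
WORKDIR /app
COPY app /app

# Directories the app writes to must be owned by the root group (GID 0),
# with group permissions mirroring the owner's, so any UID in group 0
# can read, write and execute as needed
RUN chgrp -R 0 /app && \
    chmod -R g=u /app

# Use a numeric UID, not a name, so OpenShift can verify it is non-root
USER 1001

CMD ["/app/run.sh"]
```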

You can allow containers to run as the root user in the configuration of Openshift Container Platform.
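
If an image really does need root, a cluster admin can relax the security context constraint for the relevant service account. A sketch – the project name myproject is a placeholder, and default is the service account pods use unless configured otherwise:

```shell
# As a cluster admin: allow the default service account in "myproject"
# to run images with any UID, including root (the anyuid SCC)
oc adm policy add-scc-to-user anyuid -z default -n myproject
```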

Check this example Dockerfile to build your image. It seems as though you will be building your container specifically to fit OKD’s paradigm.

Some containers require root and can’t get around it; in those cases an admin will have to enable root for them. These seem to be mostly data stores, though.

openshift-container-service-images-require-root

OpenShift ignores the USER directive of the Dockerfile and launches the container with a random UID.

What are Non-root Containers?

By default, Docker containers run as the root user. This means that you can do whatever you want in your container: install system packages, edit configuration files, bind privileged ports, adjust permissions, create system users and groups, and access networking information.

It is also important to note that processes running in a non-root container cannot listen on privileged ports – that is, ports below 1024.

How to Run Nginx in OpenShift?

How to run nginx in openshift
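
The usual fix is to run nginx on an unprivileged port and move its writable paths somewhere the random UID can reach. A minimal nginx.conf sketch – the paths and port here are assumptions, and there is also an official nginxinc/nginx-unprivileged image that applies the same ideas for you:

```
# nginx.conf sketch for running without root
# no "user" directive – we are not root, so it would be ignored anyway
worker_processes 1;
pid /tmp/nginx.pid;            # /var/run is usually not writable for us

events { worker_connections 1024; }

http {
  # relocate temp paths to /tmp so an arbitrary UID can write them
  client_body_temp_path /tmp/client_temp;
  proxy_temp_path       /tmp/proxy_temp;

  server {
    listen 8080;               # unprivileged port (above 1024)
    root /usr/share/nginx/html;
  }
}
```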
