Category: Containerisation

Minikube: Deploy a container using a private image registry

This post is mainly for when you want to test a private image on your local k8s instance.

At this point in time I feel minikube is the lightweight standard for this.

You should have a private registry with the image you want to deploy, otherwise - use a public image.

Getting Started

Install minikube
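
For example, on macOS you can install it with Homebrew (one option among several in the official minikube install docs):

brew install minikube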

Once minikube is installed, start it:

minikube start

The setup will tell you:

  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.3 preload ...
    > preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: 526.01 MiB
🔥  Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

So you don't even need to update your kubectl.
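
You can double check which context kubectl is pointing at:

$ kubectl config current-context
minikube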

View the k8s Cluster Nodes

There should only be a single node called minikube:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m13s   v1.18.3

The next few steps are done in the default namespace

Add the Private Registry Secret

Read the k8s docs on how to pull an image from a private registry.

kubectl create secret docker-registry regcred \
--docker-server=<your-registry-server> \
--docker-username=<your-username> \
--docker-password=<your-password> \
--docker-email=<your-email>
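
For example, with hypothetical values (the server is just your registry's hostname):

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password='my-secret-password' \
  --docker-email=myuser@example.com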

Inspect the secret

kubectl get secret regcred --output=yaml

It is base64 encoded, so to view the actual contents:

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
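
Decoded, the config looks roughly like this (hypothetical values; the auth field is just the base64 of username:password):

{
  "auths": {
    "registry.example.com": {
      "username": "myuser",
      "password": "my-secret-password",
      "email": "myuser@example.com",
      "auth": "bXl1c2VyOm15LXNlY3JldC1wYXNzd29yZA=="
    }
  }
}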

Create a pod that uses the secret

In a file my-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
  labels:
    name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

Apply and view the pod:

kubectl apply -f my-pod.yml
kubectl get pod private-reg

If you have an ImagePullBackOff issue, then scroll to the bottom of this page.

The more k8s way of doing things would be to use a deployment instead of deploying the pod:

kubectl create deployment my-project --image=k8s.gcr.io/echoserver:1.4
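
Note that kubectl create deployment has no flag for imagePullSecrets, so for a private image you would write a small manifest instead. A minimal sketch (the deployment name is my own, the image is a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-reg-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-reg
  template:
    metadata:
      labels:
        app: private-reg
    spec:
      containers:
      - name: private-reg-container
        image: <your-private-image>
      imagePullSecrets:
      - name: regcred

kubectl apply -f my-deployment.yml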

The Pod is Running: How to View the Frontend

Full tutorial on k8s

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster.
To expose the pod outside of kubernetes you need to create a service.

kubectl expose deployment my-project --type=LoadBalancer --port=8080

or, if you just deployed a pod, make sure the pod spec includes a label (ours does) and expose it directly:

kubectl expose po private-reg --type=LoadBalancer --port=8080

Ensure the service is running with:

kubectl get svc

On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.

minikube service <service-name>
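
For the deployment exposed above the service is called my-project, so for example:

minikube service my-project

This prints the service URL (and opens it in a browser).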

Finding the reason for an ImagePullBackoff

In my case the pod did not deploy:

$ kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
private-reg   0/1     ImagePullBackOff   0          45s

This question on stackoverflow highlights how to debug an ImagePullBackOff:

kubectl describe pod private-reg

The error I saw was:

Failed to pull image "//:": rpc 
error: code = Unknown desc = Error response from daemon: pull access denied for //,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied

Usually this happens if your username, password, registry url or image name is incorrect.
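
A quick sanity check is to decode the secret again and try the same credentials with a manual docker login (registry hostname here is hypothetical):

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
docker login registry.example.com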

Cannot use harbor robot account ImagePullBackOff pull access denied

This post is mainly about harbor robot accounts.

Robot accounts are accounts used to run automated operations and have no access to the frontend. They are the accounts to use in your continuous integration or k8s registry secrets.

You create a robot account by going to:
Project -> Robot Accounts -> New Robot Account

The Problem: the $ sign

Harbor forces the username of the robot account to be: robot$<account_name>.

The robot$ prefix makes it easily distinguishable from a normal Harbor user account

For a rancher robot:

robot$rancher

harbor-robot-account-creation

When adding that secret in kubernetes (via rancher) I get:

Error: ImagePullBackOff

When I save the username robot$rancher as a gitlab environment variable, it does not store anything after the $ sign.

$ echo $HARBOR_USERNAME
robot

Doing a one-line login does not work (inside double quotes the shell expands $gitlab_portal to nothing, leaving just robot):

echo -n $HARBOR_PASSWORD | docker login -u "robot$gitlab_portal" --password-stdin $HARBOR_REGISTRY

However, logging in interactively does:

docker login $HARBOR_REGISTRY
username: ...
password: ...

The username seems to be the issue:

$ export HARBOR_USERNAME="robot$gitlab_portal"
$ echo $HARBOR_USERNAME
robot

The Fix

In this post one user suggested single-quoting the credentials so the shell does not expand the $:

docker login <registry> -u '<username>' -p '<password>'

That worked. So in your gitlab ci file you can do:

echo -n $HARBOR_PASSWORD | docker login -u 'robot$gitlab_portal' --password-stdin $HARBOR_REGISTRY
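
The same quoting applies when creating the kubernetes registry secret from earlier, so single-quote the robot username there too (the registry hostname is a placeholder):

kubectl create secret docker-registry regcred \
  --docker-server=harbor.example.com \
  --docker-username='robot$gitlab_portal' \
  --docker-password='<robot-account-token>'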

https://github.com/goharbor/harbor/issues/9553

Use Self-hosted Gitlab to build and deploy images to Harbor

I have a gitlab version control and CI instance running.
I also have a Harbor registry running.

Now I want to ensure that I can build and push images from gitlab to harbor using gitlab's continuous integration and continuous deployment (CI/CD).

First Steps

  • Create a git repo on gitlab with your Dockerfile
  • Create a user and project on harbor to store your image
  • Take note of the credentials needed to login to the registry

I am basing this tutorial on a similar tutorial that uses gitlab's container registry

.gitlab-ci.yml

The core of controlling and managing the tasks that gitlab ci will run is the .gitlab-ci.yml file that lives inside your repo.
This means you can version control your CI process.

You can read the full reference guide for the .gitlab-ci.yml file

Create this file.

Specify the image and stages of the CI Process

image: docker:19.0.2
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_HOST: tcp://localhost:2375
stages:
  - build
  - push
services:
  - docker:dind

Here we set the image of the container we will use to run the actual process. In this case we are running docker inside docker.

I have read that this is bad practice, but it is my first attempt so I just want it to work, then I will optimise.

We set the stages, which are labels that will be used later to link tasks to a stage.

Other variables are needed for this to work: DOCKER_DRIVER, DOCKER_TLS_CERTDIR and DOCKER_HOST.

Before and After

Much like a test suite's setUp and tearDown phases, the CI process has before_script and after_script tasks that are executed before and after each job respectively.

In our case that involves logging in and out of our registry.

For this phase the username and password of your harbor user are required (or your CLI secret if you are using OIDC to connect).

Importantly these should not be written here in plaintext but rather set up as custom environment variables in gitlab.

In gitlab:

  1. Go to Settings > CI/CD
  2. Expand the Environment variables section
  3. Enter the variable names and values
  4. Set them as protected

gitlab_ci_environment_variables

I used: HARBOR_REGISTRY, HARBOR_USERNAME, HARBOR_REGISTRY_IMAGE and HARBOR_PASSWORD

So now we can set in the yaml:

before_script:
  - echo -n $HARBOR_PASSWORD | docker login -u $HARBOR_USERNAME --password-stdin $HARBOR_REGISTRY
  - docker version
  - docker info

after_script:
  - docker logout $HARBOR_REGISTRY

Important: if you are using a robot$xxx account, you must set the username explicitly (single-quoted) in the file and not as an environment variable, as it will not save correctly because the $ gets expanded in the shell.

Setting the tasks in the build stages

The build stage:

Build:
  stage: build
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:latest || true
    - >
      docker build
      --pull
      --cache-from $HARBOR_REGISTRY_IMAGE:latest
      --tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA

This pulls the last image which will be used for the cache when building a new image. The image is then pushed to the repo with the commit SHA1 as the version.

Tag Management

According to the tutorial it is good practice to keep your git tags in sync with your docker tags.

I didn't fully understand this at first: the only: tags rule means the job runs only when a git tag is pushed, and it re-tags the image built for that commit with the git tag name ($CI_COMMIT_REF_NAME).

Push_When_tag:
  stage: push
  only:
    - tags
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
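
In practice the job only runs when you push a git tag, so for example:

git tag v1.0.0
git push origin v1.0.0

would result in the commit's image also being pushed as $HARBOR_REGISTRY_IMAGE:v1.0.0.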

The Full .gitlab-ci.yml

.gitlab-ci.yml

image: docker:18-git

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_HOST: tcp://localhost:2375

stages:
  - build
  - push
services:
  - docker:18-dind

before_script:
  - echo $HARBOR_USERNAME
  - echo -n $HARBOR_PASSWORD | docker login -u 'robot$gitlab_portal' --password-stdin $HARBOR_REGISTRY
  - docker version
  - docker info

after_script:
  - docker logout $HARBOR_REGISTRY

Build:
  stage: build
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:latest || true
    - >
      docker build
      --pull
      --cache-from $HARBOR_REGISTRY_IMAGE:latest
      --tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA

Push_When_tag:
  stage: push
  only:
    - tags
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

Let's Try it Out

After committing the .gitlab-ci.yml file and checking the CI jobs in gitlab, the job was in a pending / stuck state.

gitlab-job-stuck-no-runners

The reason was that I had not set up a runner.

Setting Up a Runner

Gitlab runners run the tasks in .gitlab-ci.yml.

There are 3 types:

  • Shared (for all projects)
  • Group (for all projects in a group)
  • Specific (for specific projects)

On gitlab.com you would use the shared runners; on your own instance you can set up a shared runner yourself.

Shared runners are available to all projects, which I like because it simplifies things.

But where should gitlab runners live? The answer is wherever you want.

The problem with this is that there are so many options; there is no way to just start and get a runner going:

  • Should I use k8s? - docs are long and horrendous
  • Should I use the vm gitlab is on?
  • Should I use a vm gitlab is not on?

I suppose if you read the full gitlab runners docs you will have a better idea, but I don't have a week.
So I am going to try the k8s way.

You will need 2 variables you can get from: <my-gitlab-instance>/admin/runners

  • gitlabUrl
  • runnerRegistrationToken

So follow the steps in the k8s runner setup, which results in you creating a values.yml file.

There are many additional settings to change, but I just want it to work now.

You must set privileged: true in the helm chart's values file if you are doing docker in docker, as we are.

values.yml:

gitlabUrl: https://<gitlab_url>/
runnerRegistrationToken: "<Token>"

imagePullPolicy: IfNotPresent

terminationGracePeriodSeconds: 3600

concurrent: 5

checkInterval: 60

rbac:
  create: false
  clusterWideAccess: false

metrics:
  enabled: true

runners:
  image: ubuntu:16.04
  privileged: true
  pollTimeout: 180
  outputLimit: 4096
  cache: {}
  builds: {}
  services: {}
  helpers: {}
securityContext:
  fsGroup: 65533
  runAsUser: 100

resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
hostAliases: []
podAnnotations: {}
podLabels: {}

Searching for the Helm Chart

Add the gitlab helm chart repo and search for the version you want:

helm repo add gitlab https://charts.gitlab.io
helm search repo -l gitlab/gitlab-runner

The gitlab-runner version must be in sync with the gitlab server version: https://docs.gitlab.com/runner/#compatibility-with-gitlab-versions

In my case (11.7.x):

gitlab/gitlab-runner    0.1.45          11.7.0          GitLab Runner

Create the k8s namespace:

kubectl create namespace gitlab-runner

Install the helm chart:

helm install --namespace gitlab-runner gitlab-runner -f values.yml gitlab/gitlab-runner --version 0.1.45

Unfortunately...I got an error:

Error: unable to build kubernetes objects from release
manifest: unable to recognize "": no matches for kind
"Deployment" in version "extensions/v1beta1"

The problem is that in k8s 1.16 some APIs changed, but the helm chart at that time still specifies the old version. So now I have to fetch the chart and fix the deployment.
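
You can check which API version your cluster serves Deployments under; on 1.16+ this should report apps/v1:

kubectl explain deployment | head -n 2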

Fixing the Helm Chart

Get the helm chart

helm fetch --untar gitlab/gitlab-runner --version 0.1.45

then find the deployment and change:

apiVersion: extensions/v1beta1

to:

apiVersion: apps/v1

Add the selector:

spec:
  selector:
    matchLabels:
      app: {{ include "gitlab-runner.fullname" . }}

Change the values.yml in the fetched chart and finally install from the local changes:

helm install --namespace gitlab-runner gitlab-runner-1 .
NAME: gitlab-runner-1
LAST DEPLOYED: Fri Jun 26 10:24:51 2020
NAMESPACE: gitlab-runner
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Your GitLab Runner should now be registered against the GitLab instance reachable at: "https://xxx"

BOOM!!!

The runner should now be showing as a shared runner at: https://<my-gitlab-instance>/admin/runners

Enable the Shared Runner on your Repo

Run the job again: go to CI/CD -> Pipelines -> Run Pipeline.

Permission error on K8s instance

Running with gitlab-runner 11.7.0 (8bb608ff)
  on gitlab-runner-1-gitlab-runner-7487b4cf77-lz9cr 11AFa4Fw
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image docker:19 ...
ERROR: Job failed (system failure): secrets is forbidden: User "system:serviceaccount:gitlab-runner:default" cannot create resource "secrets" in API group "" in the namespace "gitlab-runner"

I think this means the service account that helm created does not have the ability to create secrets.

Get all service accounts:

kubectl get sa -A

View the specific service account:

kubectl get sa default -o yaml -n gitlab-runner

It is important to look at the kubernetes docs on managing and configuring service accounts. I found this stackoverflow question that gives us a quick fix.

So let us edit the service account and give it permission (I don't want to give it cluster admin):

kubectl edit sa default -n gitlab-runner

Well, it rejected the rule key, and the documentation is either too much or too sparse.

I just took the easy / insecure option:

$ kubectl create clusterrolebinding default --clusterrole=cluster-admin --group=system:serviceaccounts --namespace=gitlab
clusterrolebinding.rbac.authorization.k8s.io/default created

That worked, but now every service account in the cluster has cluster admin rights (the binding targets the whole system:serviceaccounts group). So be wary.
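
A less drastic alternative would be a namespace-scoped Role and RoleBinding for the default service account. A sketch I have not tested (the resource and verb lists may need tweaking for your runner config):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/attach", "secrets", "configmaps"]
  verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-runner
roleRef:
  kind: Role
  name: gitlab-runner
  apiGroup: rbac.authorization.k8s.io

Apply it with kubectl apply -f and the cluster-admin binding above should not be needed.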

Results

Cloning repository...
Checking out eb9823f7 as master...
Skipping Git submodules setup
$ echo -n $HARBOR_PASSWORD | docker login -u $HARBOR_USERNAME --password-stdin $HARBOR_REGISTRY

Login Succeeded
$ docker version
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:42:53 2020
 OS/Arch:           linux/amd64
 Experimental:      false
Running after script...
$ docker logout $HARBOR_REGISTRY
Removing login credentials for xxx
ERROR: Job failed: command terminated with exit code 1

Partially successful... according to this issue it is a common problem with dind.

So we need to ensure that the runner is set as privileged: true and then set the TCP port of the docker host.
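
To recap, those are two settings that already appear above: privileged mode in the runner's values.yml and the docker host variables in .gitlab-ci.yml:

# values.yml
runners:
  privileged: true

# .gitlab-ci.yml
variables:
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""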

To redo that (not required if you started from the top), upgrade the release:

helm upgrade --namespace gitlab-runner gitlab-runner-1 .

Conclusion

Done. It is working:

Job succeeded