Category: DevOps

Deploying MkDocs with GitLab CI and GitLab Pages

GitLab does not include MkDocs in its pre-existing CI/CD templates, so we write our own pipeline config.

.gitlab-ci.yml

In your repo, create a .gitlab-ci.yml file with the following contents:

image: python:3.8-buster

before_script:
  - pip install --upgrade pip && pip install -r requirements.txt

pages:
  stage: deploy
  script:
    - mkdocs build
    - mv site public
  artifacts:
    paths:
      - public
  only:
    - master

  • All GitLab CI jobs run in containers, so we supply a base image that already has Python.
  • Before running the build, we install the mkdocs package from requirements.txt (see the example below).
  • The job is named pages - a special job name on GitLab specifically for publishing your static site.
  • The script builds the docs and moves them to the public folder, which is where GitLab Pages serves static content from.
  • The job runs only on the master branch.
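
Since the before_script installs from requirements.txt, the repo needs that file with at least the mkdocs package in it. A minimal sketch (the theme line is just an example; only include it if you actually use it):

# requirements.txt
mkdocs
mkdocs-material  # optional: only if your mkdocs.yml uses this theme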

Pending / Stuck jobs

After this file is committed, the pipeline might end up in a stuck state.

This is because you have not specified a GitLab runner to run your CI jobs.

On your repo:

  1. Go to settings -> CI/CD
  2. Expand the Runners Section
  3. Click Enable shared runners

If it is still stuck in a pending state, just cancel it and it should work on the next run.

How Do We View the Site Now?

For self-hosted Pages you need to ensure you have:

  • A shared gitlab runner
  • A wildcard DNS record pointing to your GitLab instance, e.g. *.pages.number1.co.za -> x.x.x.x (see the sketch below).
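
As a rough sketch, that wildcard record in a BIND-style zone file would look something like this (the IP is a placeholder):

*.pages.number1.co.za.   300   IN   A   x.x.x.x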

Add the domain to your config at /etc/gitlab/gitlab.rb:

pages_external_url 'http://pages.number1.co.za'

Then reconfigure GitLab:

sudo gitlab-ctl reconfigure
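
Assuming the Omnibus install, you can verify that the Pages daemon came up after the reconfigure:

sudo gitlab-ctl status gitlab-pages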

All the information for this is in the GitLab Pages administration documentation.

Once the pipeline has run, there should be a link to your site under Settings -> Pages on the repo.

Reference of Gitlab Pages options

##! Define to enable GitLab Pages
# pages_external_url "http://pages.example.com/"
# gitlab_pages['enable'] = false

##! Configure to expose GitLab Pages on external IP address, serving the HTTP
# gitlab_pages['external_http'] = []

##! Configure to expose GitLab Pages on external IP address, serving the HTTPS
# gitlab_pages['external_https'] = []

##! Configure to enable health check endpoint on GitLab Pages
# gitlab_pages['status_uri'] = "/@status"

##! Configure to use JSON structured logging in GitLab Pages
# gitlab_pages['log_format'] = "json"

# gitlab_pages['listen_proxy'] = "localhost:8090"
# gitlab_pages['redirect_http'] = true
# gitlab_pages['use_http2'] = true
# gitlab_pages['dir'] = "/var/opt/gitlab/gitlab-pages"
# gitlab_pages['log_directory'] = "/var/log/gitlab/gitlab-pages"

# gitlab_pages['artifacts_server'] = true
# gitlab_pages['artifacts_server_url'] = nil # Defaults to external_url + '/api/v4'
# gitlab_pages['artifacts_server_timeout'] = 10

##! Environments that do not support bind-mounting should set this parameter to
##! true. This is incompatible with the artifacts server
# gitlab_pages['inplace_chroot'] = false

##! Prometheus metrics for Pages docs: https://gitlab.com/gitlab-org/gitlab-pages/#enable-prometheus-metrics
# gitlab_pages['metrics_address'] = ":9235"

Minikube: Deploy a container using a private image registry

This post is mainly about testing a private image on your local k8s instance.

At this point in time I feel minikube is the lightweight standard for this.

You should have a private registry with the image you want to deploy; otherwise, use a public image.

Getting Started

Install minikube
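
How you install it depends on your OS; on macOS (the hyperkit VM in the output below suggests a Mac) a common route is Homebrew:

brew install minikube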

Once minikube is installed, start it:

minikube start

The setup will tell you:

  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.3 preload ...
    > preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: 526.01 MiB
🔥  Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

So you don't even need to update your kubectl.

View the k8s Cluster Nodes

There should only be a single node called minikube:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m13s   v1.18.3

The next few steps are done in the default namespace.

Add the Private Registry Secret

Read the k8s docs on how to pull an image from a private registry.

kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>

Inspect the secret

kubectl get secret regcred --output=yaml

The data is base64 encoded, so to view the actual contents:

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
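
The decoded value is a docker config JSON and should look roughly like this (values are illustrative):

{
  "auths": {
    "registry.example.com": {
      "username": "myuser",
      "password": "mypassword",
      "email": "me@example.com",
      "auth": "bXl1c2VyOm15cGFzc3dvcmQ="
    }
  }
}

The auth field is simply base64 of username:password.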

Create a pod that uses the secret

In a file my-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
  labels:
    name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

Apply and view the pod:

kubectl apply -f my-pod.yml
kubectl get pod private-reg

If you hit an ImagePullBackOff issue, scroll to the bottom of this page.

The more idiomatic k8s way of doing things would be to use a deployment instead of a bare pod:

kubectl create deployment my-project --image=k8s.gcr.io/echoserver:1.4
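
That echoserver image is public though; to pull your private image via a deployment you need the same imagePullSecrets in the pod template. A minimal sketch (the image name is a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-reg-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-reg
  template:
    metadata:
      labels:
        app: private-reg
    spec:
      containers:
      - name: private-reg-container
        image: <your-private-image>
      imagePullSecrets:
      - name: regcred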

The Pod is Running - How to View the Frontend

There is a full tutorial on this in the k8s docs.

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster.
To expose the pod outside of kubernetes you need to create a service.

kubectl expose deployment my-project --type=LoadBalancer --port=8080

or, if you just deployed the pod (kubectl expose needs the pod spec to include a label, which my-pod.yml above has):

kubectl expose po private-reg --type=LoadBalancer --port=8080

Ensure the service is running with:

kubectl get svc

On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.

minikube service <service-name>
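
If you just want the URL printed instead of a browser opening, pass --url (assuming the service from above is named my-project):

minikube service my-project --url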

Finding the reason for an ImagePullBackoff

In my case the pod did not deploy:

$ kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
private-reg   0/1     ImagePullBackOff   0          45s

This question on Stack Overflow highlights how to debug an ImagePullBackOff:

kubectl describe pod private-reg

The error I saw was:

Failed to pull image "<registry>/<project>/<image>:<tag>": rpc
error: code = Unknown desc = Error response from daemon: pull access denied for <registry>/<project>/<image>,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied

Usually this happens when your username, password, registry URL or image name is incorrect.
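
A quick sanity check is to try the same credentials and image with docker on your own machine before blaming the cluster (all values are placeholders):

echo -n '<password>' | docker login <registry-url> -u '<username>' --password-stdin
docker pull <registry-url>/<project>/<image>:<tag>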

Cannot Use Harbor Robot Account: ImagePullBackOff / pull access denied

This post is mainly about Harbor robot accounts.

Robot accounts are accounts used to run automated operations and have no access to the frontend. They are the accounts to use in your continuous integration or k8s registry secrets.

You create a robot account by going to:
Project -> Robot Accounts -> New Robot Account

The Problem: the $ Sign

Harbor forces the username of the robot account to be: robot$<account_name>.

The robot$ prefix makes it easily distinguishable from a normal Harbor user account.

For a rancher robot:

robot$rancher

When adding that secret in kubernetes (via rancher) I get:

Error: ImagePullBackOff

When I save the username robot$rancher as a GitLab environment variable, nothing after the $ sign gets stored:

$ echo $HARBOR_USERNAME
robot

Doing a one-line login does not work:

echo -n $HARBOR_PASSWORD | docker login -u "robot$gitlab_portal" --password-stdin $HARBOR_REGISTRY

However, logging in interactively does:

docker login $HARBOR_REGISTRY
username: ...
password: ...

The username seems to be the issue:

$ export HARBOR_USERNAME="robot$gitlab_portal"
$ echo $HARBOR_USERNAME
robot
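
The root cause is shell quoting: inside double quotes bash still expands variable references, so $gitlab_portal (which is unset) expands to nothing, while single quotes keep the text literal:

$ export HARBOR_USERNAME="robot$gitlab_portal"   # double quotes: $gitlab_portal expands to ""
$ echo "$HARBOR_USERNAME"
robot
$ export HARBOR_USERNAME='robot$gitlab_portal'   # single quotes: no expansion
$ echo "$HARBOR_USERNAME"
robot$gitlab_portal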

The Fix

In the GitHub issue linked below, one user suggested quoting the credentials in single quotes:

docker login <registry> -u '<username>' -p '<password>'

That worked. So in your GitLab CI file you can do:

echo -n $HARBOR_PASSWORD | docker login -u 'robot$gitlab_portal' --password-stdin $HARBOR_REGISTRY
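
The same quoting applies when creating the kubernetes registry secret from a shell - single-quote the robot username so the $ survives (the server and email values here are hypothetical):

kubectl create secret docker-registry regcred \
  --docker-server=harbor.example.com \
  --docker-username='robot$rancher' \
  --docker-password="$HARBOR_PASSWORD" \
  --docker-email=ci@example.com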

Source: https://github.com/goharbor/harbor/issues/9553