Category: git

Deploying MkDocs with GitLab CI and GitLab Pages

GitLab does not have MkDocs in their pre-existing CI/CD templates.

.gitlab-ci.yml

In your repo, create a .gitlab-ci.yml file with the following contents:

image: python:3.8-buster

before_script:
  - pip install --upgrade pip && pip install -r requirements.txt

pages:
  stage: deploy
  script:
    - mkdocs build
    - mv site public
  artifacts:
    paths:
    - public
  only:
  - master
  • All GitLab CI jobs run in containers, so we supply a base image that already has Python.
  • Before the build runs, we make sure the mkdocs Python package is available by installing the requirements.
  • The job uses the pages keyword, a special job name on GitLab reserved for serving your static site.
  • It builds the docs and moves them to the public folder, which is where GitLab Pages serves static content from.
  • It runs only on the master branch.
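
The before_script installs dependencies from a requirements.txt, so the repo needs one at its root. A minimal file for a plain MkDocs site might look like this (the version pin is illustrative):

```
mkdocs==1.1.2
```

Any MkDocs theme or plugin packages your mkdocs.yml references would be listed here too.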

Pending / Stuck jobs

After this file is committed, the pipeline might be in a stuck state:

This is because you have not specified a gitlab runner to run your continuous integration.

On your repo:

  1. Go to settings -> CI/CD
  2. Expand the Runners Section
  3. Click Enable shared Runners

If it is still stuck in a pending state, just cancel it and it should work on the next run.

How Do We View the Site Now?

For self-hosted Pages you need to ensure you have:

  • A shared GitLab runner
  • A wildcard DNS record pointing to your GitLab instance, e.g. *.pages.number1.co.za -> x.x.x.x

Add the domain to your config at /etc/gitlab/gitlab.rb:

pages_external_url 'http://pages.number1.co.za'

Then reconfigure gitlab:

sudo gitlab-ctl reconfigure

All the information for this is in the GitLab Pages administration article.

There should be a link under the project's Settings -> Pages on the repo.

Reference of Gitlab Pages options

##! Define to enable GitLab Pages
# pages_external_url "http://pages.example.com/"
# gitlab_pages['enable'] = false

##! Configure to expose GitLab Pages on external IP address, serving the HTTP
# gitlab_pages['external_http'] = []

##! Configure to expose GitLab Pages on external IP address, serving the HTTPS
# gitlab_pages['external_https'] = []

##! Configure to enable health check endpoint on GitLab Pages
# gitlab_pages['status_uri'] = "/@status"

##! Configure to use JSON structured logging in GitLab Pages
# gitlab_pages['log_format'] = "json"

# gitlab_pages['listen_proxy'] = "localhost:8090"
# gitlab_pages['redirect_http'] = true
# gitlab_pages['use_http2'] = true
# gitlab_pages['dir'] = "/var/opt/gitlab/gitlab-pages"
# gitlab_pages['log_directory'] = "/var/log/gitlab/gitlab-pages"

# gitlab_pages['artifacts_server'] = true
# gitlab_pages['artifacts_server_url'] = nil # Defaults to external_url + '/api/v4'
# gitlab_pages['artifacts_server_timeout'] = 10

##! Environments that do not support bind-mounting should set this parameter to
##! true. This is incompatible with the artifacts server
# gitlab_pages['inplace_chroot'] = false

##! Prometheus metrics for Pages docs: https://gitlab.com/gitlab-org/gitlab-pages/#enable-prometheus-metrics
# gitlab_pages['metrics_address'] = ":9235"


Cannot use Harbor robot account: ImagePullBackOff, pull access denied

This post is mainly about Harbor robot accounts.

Robot accounts are used to run automated operations and have no access to the Harbor frontend. They are the accounts to use in your continuous integration or in Kubernetes registry secrets.

You create a robot account by going to:
Project -> Robot Accounts -> New Robot Account

The Problem: $

Harbor forces the username of the robot account to be: robot$<account_name>.

The robot$ prefix makes it easily distinguishable from a normal Harbor user account.

For a rancher robot:

robot$rancher

harbor-robot-account-creation

When adding that secret in kubernetes (via rancher) I get:

Error: ImagePullBackOff

When I save the username robot$rancher as a GitLab environment variable, everything after the $ sign is lost when the shell expands it:

$ echo $HARBOR_USERNAME
robot

Doing a one-line login does not work:

echo -n $HARBOR_PASSWORD | docker login -u "robot$gitlab_portal" --password-stdin $HARBOR_REGISTRY

However, logging in interactively does work:

docker login $HARBOR_REGISTRY
username: ...
password: ...

The username seems to be the issue; the shell expands everything after the $:

$ export HARBOR_USERNAME="robot$gitlab_portal"
$ echo $HARBOR_USERNAME
robot
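
The behaviour is plain POSIX shell quoting: inside double quotes (or unquoted) the shell treats $gitlab_portal as a variable reference, and since no such variable is set it expands to nothing; single quotes keep the text literal. A minimal sketch:

```shell
# Double quotes: the shell expands $gitlab_portal (unset, so empty)
expanded="robot$gitlab_portal"
echo "$expanded"       # prints: robot

# Single quotes: the $ is preserved literally
literal='robot$gitlab_portal'
echo "$literal"        # prints: robot$gitlab_portal
```

The same rule explains why exporting the username in double quotes truncates it.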

The Fix

In this issue one user suggested wrapping the credentials in single quotes:

docker login  -u '' -p ''

That worked, since single quotes stop the shell from expanding the $. So in your GitLab CI file you can do:

echo -n $HARBOR_PASSWORD | docker login -u 'robot$gitlab_portal' --password-stdin $HARBOR_REGISTRY

https://github.com/goharbor/harbor/issues/9553

Use Self-hosted Gitlab to build and deploy images to Harbor

I have a gitlab version control and CI instance running.
I also have a Harbor registry running.

Now I want to ensure that I can build and push images from GitLab to Harbor using GitLab's continuous integration and continuous deployment (CI/CD).

First Steps

  • Create a git repo on gitlab with your Dockerfile
  • Create a user and project on harbor to store your image
  • Take note of the credentials needed to login to the registry

I am basing this tutorial on a similar tutorial that uses GitLab's container registry.

.gitlab-ci.yml

The core of controlling and managing the tasks that GitLab CI runs is the .gitlab-ci.yml file that lives inside your repo, so your CI process is version controlled along with your code.

You can read the full reference guide for the .gitlab-ci.yml file

Create this file.

Specify the image and stages of the CI process

image: docker:19.0.2
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_HOST: tcp://localhost:2375
stages:
  - build
  - push
services:
  - docker:dind

Here we set the image of the container that will run the actual process. In this case we are running Docker inside Docker (dind).

I have read that this is bad practice, but it is my first attempt, so I just want it to work; then I will optimise.

We set the stages, which are labels that will be used later to link jobs to them.

The other variables, DOCKER_DRIVER, DOCKER_TLS_CERTDIR and DOCKER_HOST, are needed for this to work.

Before and After

Much like a test suite's setUp and tearDown phases, the CI process has before_script and after_script tasks that are executed before and after each job respectively.

In our case that involves logging in and out of our registry.

For this phase the username and password of your Harbor user are required (or your cli_secret if you are using OIDC to connect).

Importantly, these should not be written here in plaintext but rather set up as custom environment variables in GitLab.

In gitlab:

  1. Go to Settings > CI/CD
  2. Expand the Environment variables section
  3. Enter the variable names and values
  4. Set them as protected

gitlab_ci_environment_variables

I used: HARBOR_REGISTRY, HARBOR_USERNAME, HARBOR_REGISTRY_IMAGE and HARBOR_PASSWORD

So now we can set this in the YAML:

before_script:
  - echo -n $HARBOR_PASSWORD | docker login -u $HARBOR_USERNAME --password-stdin $HARBOR_REGISTRY
  - docker version
  - docker info

after_script:
  - docker logout $HARBOR_REGISTRY

Important: if you are using a robot$xxx account, you must set the username explicitly (single-quoted) and not as an environment variable, as the shell expands everything after the $ and the value will not survive correctly.

Setting the tasks in the build stages

The build stage:

Build:
  stage: build
  script:
  - docker pull $HARBOR_REGISTRY_IMAGE:latest || true
  - >
    docker build
    --pull
    --cache-from $HARBOR_REGISTRY_IMAGE:latest
    --tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA .
  - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA

This pulls the latest image, which is used as the build cache when building the new image. The new image is then pushed to the registry tagged with the commit SHA.

Tag Management

According to the tutorial it is good practice to keep your git tags in sync with your docker tags.

I wasn't sure what this does at first: the job runs only when a git tag is pushed, pulls the image built for that commit, re-tags it with the git tag name, and pushes it.

Push_When_tag:
  stage: push
  only:
    - tags
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
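
Concretely, if commit abc123d was built and then tagged v1.2.0 in git, the job produces a docker tag command like the one below. A sketch of the variable expansion (all the values here are hypothetical stand-ins for GitLab's predefined CI variables):

```shell
# Hypothetical values standing in for GitLab's predefined CI variables
HARBOR_REGISTRY_IMAGE=harbor.example.com/myproject/app
CI_COMMIT_SHA=abc123d
CI_COMMIT_REF_NAME=v1.2.0

# The retag step effectively runs:
echo "docker tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME"
# prints: docker tag harbor.example.com/myproject/app:abc123d harbor.example.com/myproject/app:v1.2.0
```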

The Full .gitlab-ci.yml

.gitlab-ci.yml

image: docker:18-git

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_HOST: tcp://localhost:2375

stages:
  - build
  - push
services:
  - docker:18-dind

before_script:
  - echo $HARBOR_USERNAME
  - echo -n $HARBOR_PASSWORD | docker login -u 'robot$gitlab_portal' --password-stdin $HARBOR_REGISTRY
  - docker version
  - docker info

after_script:
  - docker logout $HARBOR_REGISTRY

Build:
  stage: build
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:latest || true
    - >
      docker build
      --pull
      --cache-from $HARBOR_REGISTRY_IMAGE:latest
      --tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA

Push_When_tag:
  stage: push
  only:
    - tags
  script:
    - docker pull $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_SHA $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $HARBOR_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

Let's Try It Out

After committing the .gitlab-ci.yml file and going to GitLab to check the CI jobs, the job was in a pending / stuck state.

gitlab-job-stuck-no-runners

The reason was that I had not set up a runner.

Setting Up a Runner

Gitlab runners run the tasks in .gitlab-ci.yml.

There are 3 types:

  • Shared (for all projects)
  • Group (for all projects in a group)
  • Specific (for specific projects)

On gitlab.com you would use the shared runners; on your own instance you can set up a shared runner yourself.

Shared runners are available to all projects, and I like that because it simplifies things.

But where should GitLab runners live? The answer is wherever you want.

The problem with this is that there are so many options; there is no obvious way to just get a runner going:

  • Should I use k8s? - docs are long and horrendous
  • Should I use the vm gitlab is on?
  • Should I use a vm gitlab is not on?

I suppose if you read the full gitlab runners docs you will have a better idea, but I don't have a week.
So I am going to try the k8s way.

You will need two values, which you can get from: <my-gitlab-instance>/admin/runners

  • gitlabUrl
  • runnerRegistrationToken

So follow the steps in the k8s runner setup, which ends with you creating a values.yaml file.

There are many additional settings you could change, but I just want it to work for now.

You must set privileged: true in the values file of the helm chart if you are doing docker-in-docker, as we are.

values.yml:

gitlabUrl: https://<gitlab_url>/
runnerRegistrationToken: "<Token>"

imagePullPolicy: IfNotPresent

terminationGracePeriodSeconds: 3600

concurrent: 5

checkInterval: 60

rbac:
  create: false
  clusterWideAccess: false

metrics:
  enabled: true

runners:
  image: ubuntu:16.04
  privileged: true
  pollTimeout: 180
  outputLimit: 4096
  cache: {}
  builds: {}
  services: {}
  helpers: {}
securityContext:
  fsGroup: 65533
  runAsUser: 100

resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
hostAliases: []
podAnnotations: {}
podLabels: {}

Searching for the Helm Chart

Add the gitlab helm chart repo and search for the version you want:

helm repo add gitlab https://charts.gitlab.io
helm search repo -l gitlab/gitlab-runner

The gitlab-runner version must be in sync with the gitlab server version: https://docs.gitlab.com/runner/#compatibility-with-gitlab-versions

In my case (11.7.x):

gitlab/gitlab-runner    0.1.45          11.7.0          GitLab Runner

Create the k8s namespace:

kubectl create namespace gitlab-runner

Install the helm chart:

helm install --namespace gitlab-runner gitlab-runner -f values.yml gitlab/gitlab-runner --version 0.1.45

Unfortunately... I got an error:

Error: unable to build kubernetes objects from release
manifest: unable to recognize "": no matches for kind
"Deployment" in version "extensions/v1beta1"

The problem is that in k8s 1.16 some APIs changed, but the helm chart at that time still specified the old version. So now I have to fetch the chart and fix the deployment.

Fixing the Helm Chart

Get the helm chart

helm fetch --untar gitlab/gitlab-runner --version 0.1.45

then find the deployment and change:

apiVersion: extensions/v1beta1

to:

apiVersion: apps/v1

Add the selector:

spec:
  selector:
    matchLabels:
      app: {{ include "gitlab-runner.fullname" . }}

Change the values.yaml in the repo and finally install from the local chart:

helm install --namespace gitlab-runner gitlab-runner-1 .
NAME: gitlab-runner-1
LAST DEPLOYED: Fri Jun 26 10:24:51 2020
NAMESPACE: gitlab-runner
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Your GitLab Runner should now be registered against the GitLab instance reachable at: "https://xxx"

BOOM!

The runner should now be showing as a shared runner at: https://<my-gitlab-instance>/admin/runners

Enable the Shared Runner on your Repo

Run the job again: go to CI/CD -> Pipelines -> Run Pipeline

Permission error on K8s instance

Running with gitlab-runner 11.7.0 (8bb608ff)
  on gitlab-runner-1-gitlab-runner-7487b4cf77-lz9cr 11AFa4Fw
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image docker:19 ...
ERROR: Job failed (system failure): secrets is forbidden: User "system:serviceaccount:gitlab-runner:default" cannot create resource "secrets" in API group "" in the namespace "gitlab-runner"

I think this means the service account that helm created does not have the ability to create secrets.

Get all service accounts:

kubectl get sa -A

View the specific service account:

kubectl get sa default -o yaml -n gitlab-runner

It is worth looking at the Kubernetes docs on managing and configuring service accounts. I found a Stack Overflow question that gives us a quick fix.

So let us edit the service account and give it permission (I don't want to give it cluster admin):

kubectl edit sa default -n gitlab-runner

Well, it rejected the rules key, and the documentation is either overwhelming or too sparse.

So I just took the easy / insecure option:

$ kubectl create clusterrolebinding default --clusterrole=cluster-admin --group=system:serviceaccounts --namespace=gitlab
clusterrolebinding.rbac.authorization.k8s.io/default created

That worked but now that user has cluster admin rights. So be wary.
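
A tighter alternative (an untested sketch; the exact resources and verbs may need tuning for your runner and GitLab version) would be a namespaced Role plus RoleBinding instead of cluster-admin:

```yaml
# Hypothetical scoped alternative to the cluster-admin binding above
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
subjects:
  - kind: ServiceAccount
    name: default
    namespace: gitlab-runner
roleRef:
  kind: Role
  name: gitlab-runner
  apiGroup: rbac.authorization.k8s.io
```

Applying this with kubectl apply and removing the clusterrolebinding would keep the runner's permissions confined to its own namespace.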

Results

Cloning repository...
Checking out eb9823f7 as master...
Skipping Git submodules setup
$ echo -n $HARBOR_PASSWORD | docker login -u $HARBOR_USERNAME --password-stdin $HARBOR_REGISTRY

Login Succeeded
$ docker version
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:42:53 2020
 OS/Arch:           linux/amd64
 Experimental:      false
Running after script...
$ docker logout $HARBOR_REGISTRY
Removing login credentials for xxx
ERROR: Job failed: command terminated with exit code 1

Partially successful... according to this issue it is a common problem with dind.

So we need to ensure that the runner is set as privileged: true and then set the TCP port of the docker host.

To redo that (not required if you started from the top):

helm upgrade --namespace gitlab-runner gitlab-runner-1 .

Conclusion

Done. It is working:

Job succeeded