Category: Kubernetes

Minikube: Deploy a container using a private image registry

This post is mainly for when you want to test a private image on your local k8s instance.

At this point in time I feel minikube is the lightweight standard for this.

You should have a private registry with the image you want to deploy, otherwise - use a public image.

Getting Started

Install minikube

Once minikube is installed, start it:

minikube start

The setup will tell you:

  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.3 preload ...
    > preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: 526.01 MiB
🔥  Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

So you don't even need to reconfigure kubectl - minikube sets the context for you.

View the k8s Cluster Nodes

There should only be a single node called minikube:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m13s   v1.18.3

The next few steps are done in the default namespace.

Add the Private Registry Secret

The k8s docs explain how to pull an image from a private registry. Create the secret, filling in your registry details:

kubectl create secret docker-registry regcred \
--docker-server=<your-registry-server> \
--docker-username=<your-username> \
--docker-password=<your-password> \
--docker-email=<your-email>

Inspect the secret

kubectl get secret regcred --output=yaml

It is base64 encoded, so to view the actual contents:

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode

Create a pod that uses the secret

In a file my-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
  labels:
    name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

Apply and view the pod:

kubectl apply -f my-pod.yml
kubectl get pod private-reg

If you hit an ImagePullBackOff issue, scroll down to the troubleshooting section at the bottom of this page.

The more k8s way of doing things would be to use a deployment instead of deploying the pod:

kubectl create deployment my-project --image=k8s.gcr.io/echoserver:1.4
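That example uses a public image though. To run the private image as a deployment, the pod template needs the same imagePullSecrets - a minimal sketch, with the image name as a placeholder:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project
  template:
    metadata:
      labels:
        app: my-project
    spec:
      containers:
      - name: my-project
        # placeholder - point this at your private image
        image: <your-registry-server>/<your-image>:<tag>
      imagePullSecrets:
      - name: regcred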

The Pod is Running - How to View the Frontend

There is a full tutorial on this in the k8s docs.

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster.
To expose the pod outside of kubernetes you need to create a service.

kubectl expose deployment my-project --type=LoadBalancer --port=8080

or if you just deployed a pod - the pod spec needs a label for the service selector (ours already has name: private-reg), so you can expose it directly:

kubectl expose po private-reg --type=LoadBalancer --port=8080

Ensure the service is running with:

kubectl get svc
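On minikube the EXTERNAL-IP will typically show as pending - the output looks something like this (values illustrative):

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
my-project   LoadBalancer   10.98.123.45   <pending>     8080:31234/TCP   12s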

On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.

minikube service my-project
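If you just want the URL printed (so you can curl it) rather than a browser opened, there is a --url flag:

$ minikube service my-project --url
http://192.168.64.2:31234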

Finding the Reason for an ImagePullBackOff

In my case the pod did not deploy:

$ kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
private-reg   0/1     ImagePullBackOff   0          45s

This question on stackoverflow highlights how to debug an ImagePullBackOff:

kubectl describe pod private-reg

The error I saw was:

Failed to pull image "//:": rpc 
error: code = Unknown desc = Error response from daemon: pull access denied for //,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied

Usually this happens when your username, password, registry URL or image name is incorrect.
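A quick way to rule out bad credentials is to do the same login docker itself would do - a sketch, with placeholder values:

# should print "Login Succeeded" if the credentials are valid
docker login <your-registry-server> -u <your-username> -p '<your-password>'

# and confirm the image reference actually exists
docker pull <your-registry-server>/<your-image>:<tag>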

Setting up Keycloak on Kubernetes

The first thing to do is get familiar with keycloak. Once you are happy, it might be useful to take a look at the keycloak quickstarts.
They seem to have all the examples and samples on getting going with keycloak.

In particular you want to look at the keycloak examples

For posterity I will show the contents of keycloak.yaml:

apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:10.0.1
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080

and keycloak-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
    - hosts:
      - KEYCLOAK_HOST
  rules:
  - host: KEYCLOAK_HOST
    http:
      paths:
      - backend:
          serviceName: keycloak
          servicePort: 8080
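KEYCLOAK_HOST here is a placeholder you need to substitute with a real hostname before applying. A sketch using GNU sed - the nip.io hostname derived from the minikube node IP is my assumption for a convenient local setup, not something the quickstart prescribes:

# substitute the placeholder host in place
KEYCLOAK_HOST=keycloak.$(minikube ip).nip.io
sed -i "s/KEYCLOAK_HOST/$KEYCLOAK_HOST/g" keycloak-ingress.yaml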

Environment Variables

We want to customise a few things about how keycloak runs and we do this by updating the environment variables.
So let us find what environment variables are available and which we need to change.

We know the image being used is:

quay.io/keycloak/keycloak:10.0.1

So let's see what the readme of that container image says.

It is rather disappointing that when we check on quay for keycloak, there is an empty readme. So our princess is in another castle.

The best readme I could find was on keycloak-containers.

So the list of available environment variables I could find were:

  • KEYCLOAK_USER
  • KEYCLOAK_PASSWORD
  • DB_VENDOR - h2, postgres, mysql, mariadb, oracle, mssql
  • DB_ADDR - database hostname
  • DB_PORT - optional, defaults to the vendor port
  • DB_DATABASE - database name
  • DB_SCHEMA - only postgres uses this
  • DB_USER - user to auth with db
  • DB_PASSWORD - user password to auth with db
  • KEYCLOAK_FRONTEND_URL - A set fixed url for frontend requests
  • KEYCLOAK_LOGLEVEL
  • ROOT_LOGLEVEL - ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE and WARN
  • KEYCLOAK_STATISTICS - db,http or all

Oh, I found an even more exhaustive list of environment variables in the docker entrypoint.
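If you want to extract that list yourself, you can grep the entrypoint script straight out of the image - a sketch, assuming the entrypoint lives at /opt/jboss/tools/docker-entrypoint.sh in this image:

# print every environment variable referenced by the entrypoint script
docker run --rm --entrypoint cat quay.io/keycloak/keycloak:10.0.1 \
  /opt/jboss/tools/docker-entrypoint.sh | grep -oE '\$\{?[A-Z_]{2,}' | sort -u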

Creating a K8s service as a reference to an external service

As per Kubernetes: Up and Running, it is worthwhile to represent an external service in kubernetes. That way you get built-in naming and service discovery, and it looks like the database is a k8s service.

It also helps when replacing a service or switching between prod and test.

my-db.yaml:

kind: Service
apiVersion: v1
metadata:
  name: external-database
  namespace: prod
spec:
  type: ExternalName
  externalName: database.company.com
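You can check the alias resolves from inside the cluster with a throwaway pod - a quick sketch:

kubectl run -n prod dns-test --rm -it --image=busybox --restart=Never \
  -- nslookup external-database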

If you just have an IP, you need to create both the service and the endpoint:

kind: Service
apiVersion: v1
metadata:
  name: keycloak-external-db-ip
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: keycloak-external-db-ip
subsets:
  - addresses:
    - ip: 192.0.2.10  # must be a real IP address, not a hostname
    ports:
    - port: 3306
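Check that the service picked up the endpoint - the service has no selector, so the Endpoints object you created is what backs it:

kubectl get svc,endpoints keycloak-external-db-ip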

Now the actual service DNS name will be:

    my-svc.my-namespace.svc.cluster.local

so in this case (assuming it is created in the keycloak namespace):

    keycloak-external-db-ip.keycloak.svc.cluster.local

Set that as DB_ADDR with the other credentials and we should be good to go.
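In the keycloak deployment's env section that could look something like the following - mysql is assumed from the 3306 port, and the credential values are placeholders (in a real setup pull them from a Secret):

        env:
        - name: DB_VENDOR
          value: "mysql"
        - name: DB_ADDR
          value: "keycloak-external-db-ip.keycloak.svc.cluster.local"
        - name: DB_PORT
          value: "3306"
        - name: DB_DATABASE
          value: "keycloak"
        - name: DB_USER
          value: "keycloak"
        - name: DB_PASSWORD
          value: "changeme"  # placeholder - use a Secret instead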

So update that and the other environment variables and deploy.

Create the deployment:

kubectl create -f keycloak-deployment.yml -n keycloak

create the service and the ingress:

kubectl apply -f keycloak-service.yml -n keycloak
kubectl apply -f keycloak-ingress.yml -n keycloak

Boom, and you should be up and running.
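To check everything came up, something like:

kubectl get pods -n keycloak
minikube service keycloak -n keycloak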


Walkthrough of Creating and Running Plays on AWX

AWX Ad Hoc Test

The first step before you do anything on AWX is to get your toes wet and run a simple ad hoc command locally.

To do this go to Inventories -> +

Call it localhost. Next you have to actually add hosts or groups to this inventory.

To do this edit the inventory and go to hosts -> + and then put the hostname as localhost. It is very important that you add in the host variables:

ansible_connection: local

If you do not add that local connection, AWX will use ssh instead and won't be able to connect.

awx-inventory-for-localhost

Now go back to the hosts page and select the host you want to run an ad hoc command on. Then select Run Commands.

awx-ad-hoc-run-commands-on-a-host

Then use the ping module, which connects to a host, checks there is a usable python, and returns pong.

awx-localhost-ping

The output of the command should be:

awx-successful-local-ping
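For reference, the plain ansible CLI equivalent of that ad hoc run would be something like:

$ ansible localhost -c local -m ping
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}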

But Can You ICMP Ping 1.1.1.1?

Depending on the way you deployed, this might not work. So try it out, using the command module and doing a ping -c 4 1.1.1.1.

awx-ping-cloudflare

If you are running on kubernetes and the container running the task does not have the ping utility you will get:

localhost | FAILED | rc=2 >>
ping: socket: Operation not permitted
non-zero return code

Then if you run it with privilege escalation you get:

{
    "module_stdout": "",
    "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1,
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "_ansible_no_log": false,
    "changed": false
}

Running the same command without privilege escalation on an older version of AWX deployed with docker-compose, you get a success:

awx-successful-ping

However, running on k8s is actually preferred. You might not have access to some of the standard tools the docker deploy has, but you will hardly need them - I think.

Walkthrough of Setting up your playbook to Run

There is a bit of terminology that is AWX (Ansible Tower) specific and a bit different from pure ansible. We will cross that bridge when we get there though.

The first thing to do is ensure your playbooks are in a git repo.

So what a repo is called in AWX is a project.
A project is a logical collection of ansible playbooks - although sane people keep these in git repos.

But wait, to access that repo you need to set up a Source Control credential first.

So the flow is:

  1. Create a Credential for Source Control
  2. Create a Project
    ...

1. Setup Credentials (for gitlab source control)

First create an ssh key pair for awx using ssh-keygen -t rsa -b 4096 -C "your_email@example.com" and store it as awx_key for example.
Then copy the private key - as sketched below.
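A sketch of the key generation, writing to a dedicated file so it does not overwrite your default key:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f ~/.ssh/awx_key
cat ~/.ssh/awx_key        # private key - paste into the AWX credential
cat ~/.ssh/awx_key.pub    # public key - paste into the gitlab deploy key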

Click Credentials on the side -> + and set the credential type to Source Control. Then add your private key.

awx-gitlab-scm-privatekey

In gitlab you need to go to: Repo -> Settings -> Repository -> Deploy Keys (you can use Deploy Tokens if you do not want to use ssh - only https).
Ensure the key is enabled.

2. Create Project

Go to Projects -> +

Set the SCM details and select the gitlab SCM credentials.

Save, and the repo should eventually be pulled - shown by a green light.

awx-create-a-project

3. Create a Job Template

You can only create a job template if you have a project. A job template basically links up the inventory (variables), credentials and playbook you are going to run.

Go to Templates -> + -> Job Templates

awx-job-template

4. Run your Job

Run the job template by pressing the Launch button.

Extra: Using a Survey

Surveys set extra variables in a user-friendly question-and-answer way.

  1. Click Create Survey on the Job Template

awx-add-survey

Now you can pose questions to the user, and the answers will be filled in as extra vars.
