Category: Kubernetes

Separate your K8s cluster, Identity Provider and Private Registry.

My proposal is that your k8s cluster (or its management) should be completely separate from your private registry, and your private registry should be completely separate from your identity provider.

The main reason is that we want to decrease the chance of circular dependencies - the kind of failure that can never fix itself.

These should all live in different places:

private registry
identity provider (keycloak)
rancher

Circular Dependency Example

Your registry uses keycloak as the authentication provider.
But the image keycloak runs from is custom and resides in that same private registry.
If something happens and Rancher needs to pull the image again - keycloak will go down.
But now it won't be able to pull the image, because the identity provider securing the registry is down.

Circular Dependencies

Keycloak Image in your private registry secured by Keycloak

An accident waiting to happen.

If your k8s cluster cannot pull the image, the identity provider stays down and nothing can bring it back up.

Minikube: Deploy a container using a private image registry

This post is mainly for when you want to test a private image on your local k8s instance.

At this point in time I feel minikube is the lightweight standard for this.

You should have a private registry with the image you want to deploy, otherwise - use a public image.

Getting Started

Install minikube

Once minikube is installed, start it:

minikube start

The setup will tell you:

  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.3 preload ...
    > preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: 526.01 MiB
🔥  Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

So you don't even need to update your kubectl context.

View the k8s Cluster Nodes

There should only be a single node called minikube:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m13s   v1.18.3

The next few steps are done in the default namespace.

Add the Private Registry Secret

Read the k8s docs on how to pull an image from a private registry.

kubectl create secret docker-registry regcred \
--docker-server=<your-registry-server> \
--docker-username=<your-username> \
--docker-password=<your-password> \
--docker-email=<your-email>

Inspect the secret

kubectl get secret regcred --output=yaml

It is base64 encoded, so to view the actual contents:

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
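To see what that decode step gives you without touching a cluster, here is a small local sketch. The registry URL and credentials below are hypothetical placeholders; it builds a sample `.dockerconfigjson`, base64-encodes it the way the Secret stores it, then decodes it back:

```shell
# A .dockerconfigjson is JSON with an "auths" map keyed by registry URL.
# The "auth" field is base64 of "username:password".
AUTH=$(printf 'myuser:mypassword' | base64)
CONFIG=$(printf '{"auths":{"registry.example.com":{"username":"myuser","password":"mypassword","auth":"%s"}}}' "$AUTH")

# What the Secret stores:
ENCODED=$(printf '%s' "$CONFIG" | base64)
echo "$ENCODED"

# What the jsonpath + base64 --decode command above shows you:
printf '%s' "$ENCODED" | base64 --decode
echo
```

If the decoded JSON doesn't look like this shape, the secret was created wrong.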

Create a pod that uses the secret

In a file my-pod.yml:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
  labels:
    name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

Apply and view the pod:

kubectl apply -f my-pod.yml
kubectl get pod private-reg

If you have an ImagePullBackOff issue, scroll to the bottom of this page.

The more k8s way of doing things would be to use a deployment instead of deploying the pod:

kubectl create deployment my-project --image=k8s.gcr.io/echoserver:1.4

The Pod is Running - How to View the Frontend

Full tutorial on k8s

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster.
To expose the pod outside of kubernetes you need to create a service.

kubectl expose deployment my-project --type=LoadBalancer --port=8080

or if you just deployed the pod - the pod spec needs a label (ours above already has one) and you expose it with:

kubectl expose po private-reg --type=LoadBalancer --port=8080

Ensure the service is running with:

kubectl get svc

On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.

minikube service <service-name>

Finding the reason for an ImagePullBackOff

In my case the pod did not deploy:

$ kubectl get po
NAME          READY   STATUS             RESTARTS   AGE
private-reg   0/1     ImagePullBackOff   0          45s

This question on Stack Overflow highlights how to debug an ImagePullBackOff:

kubectl describe pod private-reg

The error I saw was:

Failed to pull image "//:": rpc 
error: code = Unknown desc = Error response from daemon: pull access denied for //,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied

Usually this happens when your username, password, registry URL or image name is incorrect.
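One quick local check before blaming the registry: the `auth` field inside `.dockerconfigjson` is just base64 of `username:password`. You can round-trip your credentials (hypothetical values below) to rule out an encoding mistake:

```shell
# The "auth" field in .dockerconfigjson is base64("username:password").
USERNAME=myuser
PASSWORD=mypassword
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)
printf '%s\n' "$AUTH"                  # the value stored inside the secret
printf '%s' "$AUTH" | base64 --decode  # prints: myuser:mypassword
echo
```

If the decoded value isn't exactly what you expect (watch for trailing newlines from `echo`), recreate the secret.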

Setting up Keycloak on Kubernetes

First thing to do is get familiar with keycloak. Once you are happy, it might be useful to take a look at the keycloak quickstarts.
They seem to have all the examples and samples on getting going with keycloak.

In particular, you want to look at the keycloak examples.

For posterity I will show the contents of keycloak.yaml:

apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:10.0.1
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080

and keycloak-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
    - hosts:
      - KEYCLOAK_HOST
  rules:
  - host: KEYCLOAK_HOST
    http:
      paths:
      - backend:
          serviceName: keycloak
          servicePort: 8080
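A caveat: the ingress above uses the `extensions/v1beta1` API, which newer clusters (Kubernetes 1.22+) no longer serve. A rough equivalent on the `networking.k8s.io/v1` API, keeping the same KEYCLOAK_HOST placeholder, would look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
  - hosts:
    - KEYCLOAK_HOST
  rules:
  - host: KEYCLOAK_HOST
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak
            port:
              number: 8080
```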

Environment Variables

We want to customise a few things about how keycloak runs, and we do this by updating the environment variables.
So let us find which environment variables are available and which we need to change.

We know the image being used is:

quay.io/keycloak/keycloak:10.0.1

So let's see what the readme of that container image says.

It is rather disappointing that when we check on quay for keycloak, there is an empty readme. So our princess is in another castle.

The best readme I could find was on keycloak-containers.

So the list of available environment variables I could find were:

  • KEYCLOAK_USER
  • KEYCLOAK_PASSWORD
  • DB_VENDOR - h2, postgres, mysql, mariadb, oracle, mssql
  • DB_ADDR - database hostname
  • DB_PORT - optional, defaults to the vendor's default port
  • DB_DATABASE - database name
  • DB_SCHEMA - only postgres uses this
  • DB_USER - user to auth with db
  • DB_PASSWORD - user password to auth with db
  • KEYCLOAK_FRONTEND_URL - A set fixed url for frontend requests
  • KEYCLOAK_LOGLEVEL
  • ROOT_LOGLEVEL - ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE and WARN
  • KEYCLOAK_STATISTICS - db, http or all
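As an example, pointing keycloak at an external MySQL database could look like this in the deployment's container spec. Every value below is a hypothetical placeholder, not something from the keycloak docs:

```yaml
# Fragment of the keycloak container spec - example values only.
env:
- name: DB_VENDOR
  value: "mysql"
- name: DB_ADDR
  value: "my-db-host"        # database hostname or in-cluster service name
- name: DB_PORT
  value: "3306"
- name: DB_DATABASE
  value: "keycloak"
- name: DB_USER
  value: "keycloak"
- name: DB_PASSWORD
  value: "changeme"          # use a Secret in real deployments
```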

Oh, I found an even more exhaustive list of environment variables in the docker entrypoint.

Creating a K8s Service as a Reference to an External Service

As per Kubernetes Up and Running, it is worthwhile to represent an external service in kubernetes. That way you get built-in naming and service discovery, and it looks like the database is a k8s service.

It also helps when replacing a service or switching between prod and test.

my-db.yaml:

kind: Service
apiVersion: v1
metadata:
  name: external-database
  namespace: prod
spec:
  type: ExternalName
  externalName: database.company.com

If you just have an IP, you need to create both the service and the endpoint:

kind: Service
apiVersion: v1
metadata:
  name: keycloak-external-db-ip
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: keycloak-external-db-ip
subsets:
  - addresses:
    - ip: 10.0.0.50 # must be an IP address - Endpoints do not accept DNS names
    ports:
    - port: 3306

Now the actual service DNS name will be:

    my-svc.my-namespace.svc.cluster.local

so in this case:

    keycloak-external-db-ip.keycloak.svc.cluster.local

Set that as DB_ADDR along with the other database credentials and we should be good to go.
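The DNS name can be composed mechanically from the pattern above. A tiny sketch, assuming the service is deployed in a "keycloak" namespace:

```shell
# Cluster-internal DNS names follow <service>.<namespace>.svc.cluster.local
SERVICE=keycloak-external-db-ip
NAMESPACE=keycloak
DB_ADDR="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$DB_ADDR"  # prints: keycloak-external-db-ip.keycloak.svc.cluster.local
```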

So update that and the other environment variables and deploy.

Create the deployment:

kubectl create -f keycloak-deployment.yml -n keycloak

Create the service and the ingress:

kubectl apply -f keycloak-service.yml -n keycloak
kubectl apply -f keycloak-ingress.yml -n keycloak

Boom - and you should be up and running.

Sources