
Setting up Keycloak on Kubernetes

The first thing to do is get familiar with Keycloak. Once you are happy, it is worth taking a look at the keycloak quickstarts.
They seem to have all the examples and samples for getting going with Keycloak.

In particular, you want to look at the keycloak examples.

For posterity I will show the contents of keycloak.yaml:

apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:10.0.1
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080

and keycloak-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
    - hosts:
      - KEYCLOAK_HOST
  rules:
  - host: KEYCLOAK_HOST
    http:
      paths:
      - backend:
          serviceName: keycloak
          servicePort: 8080

Environment Variables

We want to customise a few things about how Keycloak runs, and we do this by updating the environment variables.
So let us find out what environment variables are available and which we need to change.

We know the image being used is:

quay.io/keycloak/keycloak:10.0.1

So let's see what the README of that container image says.

It is rather disappointing that when we check on Quay for Keycloak, there is an empty README. So our princess is in another castle.

The best readme I could find was on keycloak-containers.

The available environment variables I could find were:

  • KEYCLOAK_USER
  • KEYCLOAK_PASSWORD
  • DB_VENDOR - h2, postgres, mysql, mariadb, oracle, mssql
  • DB_ADDR - database hostname
  • DB_PORT - optional, defaults to the vendor's default port
  • DB_DATABASE - database name
  • DB_SCHEMA - only postgres uses this
  • DB_USER - user to auth with db
  • DB_PASSWORD - user password to auth with db
  • KEYCLOAK_FRONTEND_URL - a fixed URL for frontend requests
  • KEYCLOAK_LOGLEVEL
  • ROOT_LOGLEVEL - ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE and WARN
  • KEYCLOAK_STATISTICS - db,http or all

Oh, I found an even more exhaustive list of environment variables in the docker entrypoint.

Creating a K8s service as a reference to an external service

As per Kubernetes: Up and Running, it is worthwhile to represent an external service in Kubernetes. That way you get built-in naming and service discovery, and it looks like the database is a K8s service.

It also helps when replacing a service or switching between prod and test.

my-db.yaml:

kind: Service
apiVersion: v1
metadata:
  name: external-database
  namespace: prod
spec:
  type: ExternalName
  externalName: database.company.com

If you just have an IP, you need to create both the service and the endpoint:

kind: Service
apiVersion: v1
metadata:
  name: keycloak-external-db-ip
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
kind: Endpoints
apiVersion: v1
metadata:
  name: keycloak-external-db-ip
subsets:
  - addresses:
    - ip: 192.0.2.10 # must be an IP address, not a hostname
    ports:
    - port: 3306

Now the actual service DNS name will be:

    my-svc.my-namespace.svc.cluster.local

so in this case:

    keycloak-external-db-ip.keycloak.svc.cluster.local

Set that as DB_ADDR with the other credentials and we should be good to go.

So update that and the other environment variables and deploy.

Create the service and deployment:

kubectl apply -f keycloak.yaml -n keycloak

then create the ingress:

kubectl apply -f keycloak-ingress.yaml -n keycloak

Boom, and you should be up and running.


Is there a speed gain when moving from Apache Mod PHP to Nginx PHP-FPM?

I had a chance to deploy one of my running websites on another virtual machine.
I wanted to improve performance, as customers are paying for the product, and give them a faster experience.

On the old site I used Apache with mod_php to run the site. On the new site I went with Nginx and PHP-FPM.

The Server Setups

Both websites use the Yii Framework on PHP with a MySQL database. There have been some performance tweaks on the old site; on the new site I left everything standard.

Old Site:

  • 2GB RAM (free 222MB)
  • CPU(s): 2
  • Site shared - vhosts with a few other sites
  • HTTPS enabled (Let's Encrypt)
  • PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
  • Server hosted in the Netherlands (testing from South Africa)

New Site:

  • 2GB RAM (free 1161MB)
  • CPU(s): 2
  • Site dedicated - no other sites on the server
  • No HTTPS
  • PRETTY_NAME="Ubuntu 18.04.4 LTS"
  • Server hosted in South Africa (testing from South Africa)

Method

The method for the performance test is as follows.

  1. Enable response time logging in the access logs of both Apache and Nginx - I wrote a post on this for Apache and there are docs online for Nginx (see the log format sketch after this list)
  2. Browsing Test - I will browse as a non logged in and logged in user on both sites in isolation. The statistics of response times will be recorded from the user's perspective in the browser and from the log response times.
  3. WebPage Test - I will use Web Page Test to compare both sites for a few pages.
  4. Load Test - I will test concurrent load with locustio
  5. Sitespeed.io Test - Test using sitespeed.io open source sitespeed testing utility

This will not be a scientific comparison - purely anecdotal

Browsing Test

Page                     | Nginx + PHP-FPM (ms) | Apache + ModPHP (ms) | Difference
Home Page                | 1380                 | 1660                 | 20%
Contact Us               | 1060                 | 1310                 | 24%
About Us                 | 997                  | 1280                 | 28%
Login (POST)             | 1410                 | 7550                 | 435%
Portfolio (DB intensive) | 1920                 | 6960                 | 263%
Calculator               | 946                  | 1310                 | 38%
Chart (TTFB)             | 105                  | 348                  | 231%

From the table above it is safe to say that, without a shadow of a doubt, the new site is faster.

Naturally, the server being much closer helps. Instead of 9354km away, the new server is about 50km away. The average ping latency is 187ms to the old server and about 12ms to the new one.

WebPage Test

I tested both sites from South Africa; here are the screenshots and relevant info below:

speed-test-nginx-php-fpm
Speed Test of the New Nginx PHP-FPM website
speed-test-apache-php
Speed Test of the Old Apache ModPHP website
WebPageTest Metric          | Nginx + PHP-FPM | Apache + ModPHP
First Byte (ms)             | 102             | 895
Speed Index                 | 769             | 1660
Document Complete Time (ms) | 3850            | 3353
Document Complete Requests  | 36              | 33
Fully Loaded Time (ms)      | 4746            | 4087
Fully Loaded Requests       | 48              | 46

Surprisingly, the new website performed worse in total. It was faster to first byte, but full load was slower. Furthermore, there is no caching, and WebPageTest does not like that.

WebPageTest Metric          | Nginx + PHP-FPM | Apache + ModPHP
First Byte (ms)             | 126             | 913
Speed Index                 | 800             | 1681
Document Complete Time (ms) | 6825            | 2989
Document Complete Requests  | 18              | 16
Fully Loaded Time (ms)      | 6869            | 3215
Fully Loaded Requests       | 19              | 17

The results of this were also pretty annoying. It seems that WebPageTest wants me to cache static content, gzip assets and use a CDN. Then it will be happy.

Let me add gzip and static caching to Nginx and see.
Gzip is just a matter of uncommenting the gzip section in the default nginx.conf; see the sketch below.

After updating, it is looking a bit better:

add-gzip-and-static-caching-nginx
After enabling gzip compression and browser caching on the new site

I then removed the Twitter feed and things were better:

Old Site:

New Site:

all-a-web-page-test

Load Test

I created a test to make some GET requests against the server while not logged in. The test spawns users at a rate of 1 per second.

The new site performed as follows

number-of-users-nginx-php-fpm
Number of users nginx php-fpm
response-times-(ms)-php-fpm-nginx
Response times (ms) php-fpm nginx
total-requests-per-second-nginx-php-fpm
Total requests per second nginx php-fpm

So it can run stably from 80 to 100 RPS.

The old site performed terribly. When I got up to 2 RPS, the monitoring for all the other sites on the server reported them as down. It was weird that the RPS didn't grow with the user count as fast on the old site - perhaps Locust knew it couldn't handle that spawn rate.

load-test-apapche-mod-php-total-requests

apache-mod-php-load-test-response-time

user-growth-apache-modphp-old-site-loadstest

Sitespeed.io

To do a more comprehensive test I employed sitespeed.io, the open source site speed testing utility. I then ran the test against both sites and here are the results...

The Old Mod-PHP and Apache site

sitespeed-io-for-old-apache-mod-php-site

The New PHP-FPM and Nginx site

sitespeed-io-for-new-nginx-php-fpm-site

Update: Adding HTTP/2

I was using the default Nginx config, which is HTTP/1.1, so I updated it to serve with HTTP/2.

Now, I have switched over the performance site to be the current site.

how-to-trade-http2-speed-test

  • First byte is a bit slower: 0.192s vs 0.095s on the HTTP/1.1 version
  • Start render is about 100ms faster on the HTTP/2 site: 0.400s vs 0.500s
  • Document complete and fully loaded are, however, much faster on HTTP/2 - about 300ms faster even with the 100ms first-byte handicap - probably due to HTTP/2 multiplexing of asset acquisition.

Conclusion

Some tests were conclusive - others were still in the balance.
From a load testing and initial user response view, the new site clearly wins. The biggest gain comes from handling concurrent users and load. Another significant factor was moving the server closer to the users.

The PHP-FPM with Nginx site can handle 40 or more times the load of the other site, and it has a faster response even allowing for the roughly 200ms of extra network latency to the old server.

Next Steps

The next steps would be to look at how to maximise performance with Nginx and PHP-FPM.