How to connect to your remote Kubernetes cluster with kubectl from your local machine?

You've just set up your Kubernetes cluster. Excellent. Now you want to start deploying your specs... but they are in a repo on your local machine.

All good, let's set up your kubeconfig file so you can connect to your k8s API with kubectl.

  1. Log into your server

  2. Create a service account spec (service-account.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

  3. Create the account:

    kubectl create -f service-account.yaml

  4. Create the cluster role binding (admin-role-binding.yml):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  5. Apply the role binding:

kubectl apply -f admin-role-binding.yml

  6. Find the secrets used by the service account:

kubectl describe serviceaccounts admin-user --namespace kube-system

Name:                admin-user
Namespace:           kube-system
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   admin-user-token-47p8n
Tokens:              admin-user-token-47p8n
Events:              <none>

  7. Fetch the token:

kubectl describe secrets admin-user-token-47p8n --namespace kube-system

Keep the token from the output; you'll need it in the kubeconfig file.

  8. Get the certificate info for the cluster:

kubectl config view --flatten --minify > cluster-cert.txt
cat cluster-cert.txt

Copy certificate-authority-data and server from the output.
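
If you want to sanity-check the value you copied, certificate-authority-data is just a base64-encoded PEM certificate. A minimal sketch, where ca_data is a made-up stand-in for your real output:

```python
import base64

# ca_data stands in for the real certificate-authority-data string
# you copied out of cluster-cert.txt (this one is fabricated)
ca_data = base64.b64encode(
    b"-----BEGIN CERTIFICATE-----\nMIIB...snip...\n-----END CERTIFICATE-----\n"
).decode()

# A valid value decodes cleanly to a PEM certificate block
pem = base64.b64decode(ca_data).decode()
print(pem.splitlines()[0])
```

If the decode fails or the result doesn't start with a PEM header, you copied the wrong field.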

  9. Now you can create your kubeconfig file

Create a file called my-service-account-config.yaml and substitute the values for token, certificate-authority-data and server:

apiVersion: v1
kind: Config
users:
- name: admin-user
  user:
    token: <replace this with token info>
clusters:
- cluster:
    certificate-authority-data: <replace this with certificate-authority-data info>
    server: <replace this with server info>
  name: self-hosted-cluster
contexts:
- context:
    cluster: self-hosted-cluster
    user: admin-user
  name: admin-user-context
current-context: admin-user-context
  10. Copy the file to $HOME/.kube

  11. Tell kubectl to use that context:

kubectl config --kubeconfig=$HOME/.kube/my-service-account-config.yaml use-context admin-user-context

It is better to merge it into your base config ($HOME/.kube/config) so you don't have to pass --kubeconfig on every command.
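
kubectl can do the merge for you, e.g. KUBECONFIG=$HOME/.kube/config:$HOME/.kube/my-service-account-config.yaml kubectl config view --flatten. Conceptually the merge just concatenates the clusters, users and contexts lists, with the first (base) file winning on name clashes. A rough Python sketch with made-up placeholder entries:

```python
# Rough sketch of how kubeconfig files merge: the clusters/users/contexts
# lists are concatenated, and on a name clash the base file wins.
# All entries below are made-up placeholders, not real cluster data.
base = {
    "clusters": [{"name": "old-cluster"}],
    "users": [{"name": "old-user"}],
    "contexts": [{"name": "old-context"}],
    "current-context": "old-context",
}
extra = {
    "clusters": [{"name": "self-hosted-cluster"}],
    "users": [{"name": "admin-user"}],
    "contexts": [{"name": "admin-user-context"}],
}

def merge_kubeconfig(base, extra):
    merged = dict(base)
    for key in ("clusters", "users", "contexts"):
        seen = {entry["name"] for entry in base.get(key, [])}
        merged[key] = base.get(key, []) + [
            entry for entry in extra.get(key, []) if entry["name"] not in seen
        ]
    return merged

merged = merge_kubeconfig(base, extra)
print([c["name"] for c in merged["clusters"]])  # ['old-cluster', 'self-hosted-cluster']
```

Always back up $HOME/.kube/config before overwriting it with a flattened merge.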


Junos PyEZ: how to fix the xmlSAX2Characters huge text node error

When trying to pull a huge chunk of config from Juniper routers and parse it with PyEZ, you sometimes get an error like:

pyez xmlSAX2Characters: huge text node, line 256071, column 53 (<string>, line 256071)

PyEZ uses ncclient, the Python NETCONF client, behind the scenes. The workaround isn't well documented in their docs though.

To allow huge text nodes in PyEZ, connect to your device and pass the huge_tree=True parameter:

from jnpr.junos import Device

with Device(host='x.x.x.x', user=junos_username, passwd=junos_password, huge_tree=True) as dev:
    ...

Kubernetes Questions – Please answer them

What is the Difference between a Persistent Volume and a Storage Class?

What happens when pods are killed - is the data persisted? How do you test this?

What is the difference between a Service and an Ingress?

By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine.

If you check the pod IP and there is an open containerPort, you should be able to access it from the node with curl.

What happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.

A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality

When created, each Service is assigned a unique IP address (also called clusterIP)

This address is tied to the lifespan of the Service, and will not change while the Service is alive

communication to the Service will be automatically load-balanced

  • targetPort: is the port the container accepts traffic on
  • port: is the abstracted Service port, which can be any port; it's what other pods use to access the Service
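
For example, a minimal Service manifest mapping the two (the names, labels and ports here are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: my-nginx     # matches pods carrying this label
  ports:
  - port: 80          # the Service port other pods connect to
    targetPort: 8080  # the containerPort traffic is forwarded to
```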

Note that the Service IP is completely virtual, it never hits the wire

Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. DNS requires a cluster DNS addon such as CoreDNS.

Ingress is...

An API object that manages external access to the services in a cluster, typically HTTP

How do you know the size of the PVs to create for the PVCs of a Helm chart?

Are Helm charts declarative or imperative?

What is a kubernetes operator?

How do you start a new mysql docker container with an existing data directory?

Usage against an existing database: if you start your mysql container instance with a data directory that already contains a database (specifically, a mysql subdirectory), the $MYSQL_ROOT_PASSWORD variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.

The above did not work for me - the container still tried to initialize the data directory:

trademate-db_1   | Initializing database
trademate-db_1   | 2020-01-16T05:59:38.689547Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
trademate-db_1   | 2020-01-16T05:59:38.690778Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
trademate-db_1   | 2020-01-16T05:59:38.690818Z 0 [ERROR] Aborting
trademate-db_1   | 
trademate_trademate-db_1 exited with code 1

How do you view the contents of a docker volume?

You can't do this directly for named volumes (the ones Docker manages); you have to mount the volume into a throwaway container, e.g. `docker run --rm -v <volume-name>:/data busybox ls -la /data`.

How do you run python debugger and attach inside a docker container?

It's messy. You need the container to keep stdin open and allocate a tty (stdin_open: true and tty: true in docker-compose), then `docker attach` to the container to reach the pdb prompt. Usually it's easier to just develop locally.

If you were mounting a conf file into an nginx image from docker-compose - how do you do that in production? Do you bake it into the image?

Yes, you should.

Do something like this:

FROM nginx:1.17

COPY ./config/nginx/conf.d /etc/nginx/conf.d

# Remove the default config shipped with the base image
RUN rm /etc/nginx/conf.d/default.conf

How do you deploy all the k8s spec files in a folder at once? (kubectl apply -f <folder> does this.) If you apply them individually, is there a specific order to deploy them in?

The Service should exist before the replicas, as Kubernetes sets environment variables on containers in the ReplicaSet's pods based on the Services that exist when each pod starts.
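
That ordering matters because of the docker-links-style variables Kubernetes injects: for each Service that exists when a pod starts, the container gets variables named after the Service. A sketch of the naming convention (the service name and address are made up):

```python
def service_env_vars(name, cluster_ip, port):
    # Kubernetes injects {NAME}_SERVICE_HOST / {NAME}_SERVICE_PORT style
    # variables: the Service name upper-cased, dashes replaced by underscores.
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }

print(service_env_vars("my-nginx", "10.0.0.11", 80))
# {'MY_NGINX_SERVICE_HOST': '10.0.0.11', 'MY_NGINX_SERVICE_PORT': '80'}
```

Pods created before the Service simply never get these variables; DNS-based discovery doesn't have this ordering problem.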

Should gunicorn and nginx containers be in the same pod?