Kubernetes, Kubernetes Certs and Getting Your Feet Wet
Containers, container orchestration, cloud native... At this point these might feel like buzzwords to some IT professionals, but the numbers don’t lie. The latest CNCF reports show that Kubernetes usage increased by 67% over the last year, and a staggering 5.6 million developers now use it on a daily basis as part of their toolbox. Like it or not, Kubernetes is here to stay, so why not take the chance to get started and learn some of its fundamentals?
Before we jump in and get our hands dirty, let me tell you that today the CNCF offers 5 different Kubernetes certifications you can sign up for. The exams are hands-on (sorry, multiple-choice certs, nothing personal), so taking them is a great way to cement your knowledge. I’ll link each one of them down below so you can have a look at the specific details and figure out which one would suit your career path best in case you’re planning on getting certified.
Certified Kubernetes Administrator (CKA): Certified Kubernetes Administrator (CKA) | Cloud Native Computing Foundation
Certified Kubernetes Application Developer (CKAD): Certified Kubernetes Application Developer (CKAD) | Cloud Native Computing Foundation
Kubernetes Certified Service Provider (KCSP): Kubernetes Certified Service Provider (KCSP) | Cloud Native Computing Foundation
Certified Kubernetes Security Specialist (CKS): Certified Kubernetes Security Specialist (CKS) | Cloud Native Computing Foundation
Kubernetes and Cloud Native Associate (KCNA): Kubernetes and Cloud Native Associate (KCNA) | Cloud Native Computing Foundation
For the technical part of this article you will need to set up your own K8s environment. My intention today is not to show how to set up a cluster (I’ll leave that up to you) but rather to touch base with the different terminology and resources you’ll come across when using Kubernetes and help you get started. There are multiple guides around showing how to create a cluster. Here are a few of them:
Official K8s docs: Creating a cluster with kubeadm | Kubernetes
Personal blog: CKA Preparation: Setting Up a Kubernetes Cluster with 2 Nodes – rdbreak.com
Digital Ocean: How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 20.04 | DigitalOcean
Pods, Deployments, StatefulSets, DaemonSets: what are all these shenanigans?
Learning Kubernetes is a long but rewarding process, and we have to start somewhere. A logical starting point would be pods, so let’s talk a little about them, shall we?
Pods in Kubernetes are the smallest deployable units you can create and manage. A Pod can consist of one or multiple containers sharing an execution context. To create a pod you can simply write (as your non-root user, from your control node) a manifest file called ’pod.yml’ that looks like this:
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
This will create a single pod named ’nginx’, using the ’nginx:1.14.2’ image and exposing port 80 on the container.
To create this pod you can run ’kubectl apply -f pod.yml’
You can check your running pods by executing ’kubectl get pods’. The output should look similar to this:
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 26m
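If you want to poke around a little before moving on, here is a quick sketch of a few standard kubectl commands you could run against this pod (the pod name ’nginx’ comes from the manifest above):

# Show detailed information about the pod (events, node, IP, etc.)
kubectl describe pod nginx
# Stream the container logs
kubectl logs nginx
# Open a shell inside the container
kubectl exec -it nginx -- /bin/bash
# Clean up the pod when you're done experimenting
kubectl delete -f pod.yml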
Managing only one pod at a time is quite boring though, so let’s try something a bit more interesting. A way to do that is by using workload resources. Workload resources help you manage your applications and ensure that the desired state you defined in your manifest files is met. Some of the most common workload resources are these:
Deployments and ReplicaSets: Deployments help you create and/or modify the containerised applications running on your pods in a dynamic way. They can scale the number of replicas up or down, enable rolling updates and more. When we say ‘replicas’ we refer to identical copies of a running pod, so if you specify that you want 3 replicas running a ‘busybox:latest’ image, then 3 of the same pods will be running across your K8s cluster at all times.
To create a deployment you can create a manifest called ’deployment.yml’ and paste the following code:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
So what is this doing? This manifest will create a deployment called ‘nginx-deployment’ and add a label to it for easier recognition later on. 3 replicas will be created across our nodes. Once again the chosen image is ‘nginx:1.14.2’ and the exposed container port is 80.
Create this deployment by running ’kubectl apply -f deployment.yml --record=true’. Passing the ‘--record=true’ flag serves a purpose here; we’ll be discussing that in a minute. To check that everything worked as expected you can run ’kubectl get deployments nginx-deployment’
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 5m46s
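Remember that scaling was one of the selling points of Deployments mentioned earlier. As a quick optional detour (assuming you kept the same deployment name and labels as above), scaling the replica count up or down is a one-liner:

# Scale the deployment from 3 to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5
# Watch the new pods come up
kubectl get pods -l app=nginx
# Scale back down to 3 before continuing with the next steps
kubectl scale deployment nginx-deployment --replicas=3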
If you’d like to see more information about the resources you’ve created so far you can always use the ’kubectl describe {resource} {name-of-resource}’ command and pipe it to ‘less’, or grep for something you are interested in knowing more about. Let’s try this:
[me@k8-control ~]$ kubectl describe deployments nginx-deployment | grep Image
Image: nginx:1.14.2
Grepping for the word ‘Image’ in our deployment prints the version of nginx currently running on these pods, which is good to know in case we plan on updating our nginx deployment to a newer version. Remember the extra flag we passed along when we created the deployment? Now it’s time to see what that was all about.
Run ’kubectl rollout history deployments nginx-deployment’. The output will show 1 revision. This single revision (for now) is what will let you roll back to nginx 1.14.2 if you ever decide to update nginx.
REVISION CHANGE-CAUSE
1 kubectl apply --filename=deployment.yml --record=true
You can update your deployment in at least a couple of ways (another option is sketched after the revision list below), but what I find easiest is to simply edit the deployment. Run ’kubectl edit deployment nginx-deployment’, search for ‘1.14.2’, replace it with ‘1.16.1’, save and exit. As soon as you close the file the rolling update procedure will start. You can also check how the update process is going by running ’kubectl get rs’. Assuming everything went fine, your deployment should now be running nginx 1.16.1. As you probably expect, updating nginx also created a new revision entry which we can check in our rollout history, so now we have 2 revisions:
1. Pointing to nginx 1.14.2
2. Pointing to nginx 1.16.1
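As mentioned above, editing the deployment is not the only way to trigger a rolling update. A minimal sketch of an alternative, using ‘kubectl set image’ (with the same deployment and container names as in our manifest), would be:

# Point the 'nginx' container of the deployment at a new image tag
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record=true
# Follow the rollout until it completes
kubectl rollout status deployment/nginx-deployment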
See how easy it is to revert to your previous version:
[me@k8-control ~]$ kubectl rollout undo deployment/nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back
[me@k8-control ~]$ kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
nginx-deployment-2426bbabc 3 3 3 30m nginx nginx:1.14.2 app=nginx,pod-template-hash=2426bbabc
nginx-deployment-ff2655abc 0 0 0 17m nginx nginx:1.16.1 app=nginx,pod-template-hash=ff2655abc
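If you check the rollout history again at this point, you may notice the rollback itself shows up as a new revision. A couple of commands worth knowing here (revision numbers may differ on your cluster):

# Revision history after the rollback
kubectl rollout history deployment/nginx-deployment
# Inspect a specific revision in detail
kubectl rollout history deployment/nginx-deployment --revision=1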
If you’d like to read more on Kubernetes Deployments you can click here. Now that we know the fundamentals of working with deployments it’s time to move on to a different resource.
StatefulSets: As the name implies, they are normally used for running stateful applications. Although StatefulSets work similarly to Deployments, in the sense that both manage pods defined in a container spec block, they differ in one key aspect: each StatefulSet pod gets a persistent, sticky identity.
For the next exercise we’ll need to create 3 small PersistentVolumes. The reason for this is that we’ll be mounting each StatefulSet replica (3 in total) on a different PersistentVolume. In a real scenario you’d take a different approach, such as dynamically provisioning your volumes, but this is fine for testing purposes.
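One thing to keep in mind: the PersistentVolumes below use hostPath, so the backing directories should exist on the worker nodes that will run the pods. A minimal sketch, assuming you can SSH into each worker node:

# On each worker node, create the directories the hostPath PVs will point to
sudo mkdir -p /mnt/data1 /mnt/data2 /mnt/data3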
Create a manifest file called pv.yml and paste this code:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv1
  labels:
    type: local
spec:
  storageClassName: www-pv
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv2
  labels:
    type: local
spec:
  storageClassName: www-pv
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv3
  labels:
    type: local
spec:
  storageClassName: www-pv
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data3"
Apply it with ’kubectl apply -f pv.yml’ and check that the PVs were successfully created with ’kubectl get pv’. What we did in the previous step is create 3 PVs called ‘local-pv{1,2,3}’. Each one of them is 50Mi in size and all of them have the same label assigned (‘type: local’), which means that if we were to claim and use these volumes we could “call them” by using that same label. If you need more details on the other parameters passed you can read the official docs for setting up your storage in K8s.
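To see that label-based “calling” in action, you could filter the PVs by the label we assigned in pv.yml:

# List only the PersistentVolumes carrying the 'type: local' label
kubectl get pv --selector type=local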
Now let’s create our StatefulSet with 3 replicas. Create a new manifest file called ‘state.yml’ that looks like this:
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "www-pv"
      resources:
        requests:
          storage: 50Mi
      selector:
        matchLabels:
          type: local
The first part of our manifest handles the creation of a headless service called ‘nginx’. The second part creates a StatefulSet called ‘web’, which will spin up 3 replicas using the ‘nginx-slim’ image, expose the containers on port 80 and mount a volume called ‘www’ (defined in the volumeClaimTemplates block) on each pod’s ‘/usr/share/nginx/html’ directory.
Under the ’volumeClaimTemplates’ block we define a name (www) and a selector searching for a label. That label points to the PersistentVolumes we created earlier, so each replica’s claim will bind to one of them.
Feel free to apply this state.yml manifest and let’s have a look at what happened:
[me@k8-control ~]$ kubectl get statefulsets.apps web -o wide
NAME READY AGE CONTAINERS IMAGES
web 3/3 45m nginx k8s.gcr.io/nginx-slim:0.8
[me@k8-control ~]$ kubectl get svc -o wide --selector app=nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx ClusterIP None <none> 80/TCP 25h app=nginx
We first see that the set is running our 3 replicas and the nginx service was created.
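The volume claims generated from the volumeClaimTemplates should also be there. Their names follow the pattern <template-name>-<pod-name>, so in our case www-web-0, www-web-1 and www-web-2. You can verify that they bound to our PVs with:

# Each claim should show STATUS 'Bound' and point at one of the local-pv volumes
kubectl get pvc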
[me@k8-control ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 48m
web-1 1/1 Running 0 48m
web-2 1/1 Running 0 48m
By now you’ve probably noticed that the naming convention StatefulSets use for their pods is a bit particular. StatefulSet pods are created in order: ‘web-1’ is not created until ‘web-0’ is in a running state, and so on. They also get a unique ordinal index appended to the name that was passed in the metadata, so the naming convention is <statefulset-name>-<ordinal>, which in our case gives us web-0, web-1 and web-2.
If you want to confirm that the PVs were mounted in the correct directory you can place a random ‘index.html’ file in the ‘/mnt/data{1,2,3}’ directory of the node running your pod. So if node1 is running the ‘web-0’ pod (you can easily check this in the output of ’kubectl get pods -o wide’), you can ssh into that node, cd into the ‘/mnt/dataX’ directory backing web-0’s claim (the VOLUME column of ’kubectl get pvc’ tells you which PV, and therefore which path, it bound to) and create an index.html file with some random content. Once that’s done you can run ’kubectl exec -it web-0 -- /bin/bash’ from your control node to open up a shell in your web-0 pod and then ‘cat /usr/share/nginx/html/index.html’. The content of the file should be the same.
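Here is a rough sketch of that verification, assuming web-0’s claim bound to local-pv1 (and therefore /mnt/data1), that node1 runs web-0, and that you have SSH access with passwordless sudo; adjust names and paths to match your own cluster:

# On the worker node running web-0, create the test file in the PV's hostPath
ssh node1 'echo "hello from the host" | sudo tee /mnt/data1/index.html'
# Back on the control node, read the same file from inside the pod
kubectl exec web-0 -- cat /usr/share/nginx/html/index.html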
StatefulSets can be a really complex topic, and we haven’t even scratched the surface with what I’ve shown you so far. To read more on StatefulSets you can visit these docs.
DaemonSets: DaemonSets are really straightforward. They guarantee that the pod you configured has a copy running on every node that forms your cluster, so if you have a cluster with 4 nodes, there will be 4 pods. If you were to expand your cluster to 5 nodes in the future, another pod would be added automatically, and it works the other way around if you were to remove a node. Some common use cases for DaemonSets are:
* Monitoring your nodes
* Collecting logs
* Running a cluster storage daemon
Let’s create an easy-to-follow example. Create a ’daemonset.yml’ file with this code and execute it with ’kubectl apply -f daemonset.yml’
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-test
spec:
  selector:
    matchLabels:
      name: daemonset-test
  template:
    metadata:
      labels:
        name: daemonset-test
    spec:
      containers:
      - name: mybusybox
        image: busybox:latest
        command:
        - "sleep"
        - "3600"
The previous manifest creates a DaemonSet called ’daemonset-test’ using the ’busybox:latest’ image, and also passes a command to keep the container running for 1 hour. You can check the status and confirm that you have the same number of pods running as you have nodes. In my case I have only 2 nodes, so I’m expecting 2 pods running:
[me@k8-control ~]$ kubectl get daemonsets.apps daemonset-test
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset-test 2 2 2 2 2 <none> 3m32s
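If you want to double-check that the pods really landed one per node, you can list them with their node assignment (the ‘name=daemonset-test’ label comes from our manifest):

# The NODE column should show a different node for each pod
kubectl get pods -l name=daemonset-test -o wide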
To read more on DaemonSets you can click here.
With this we reach the end of this article. Today we went over a few reasons why you should consider picking up Kubernetes if you were still a bit undecided. We also learnt about pods and different workload resources such as Deployments, StatefulSets and DaemonSets, and even though we didn’t do anything super challenging, we built some knowledge that could spark your curiosity and help you dive deeper into this technology.
I’m a passionate, communicative go-getter, highly motivated to build, maintain and improve stable and effective IT infrastructure at companies of different sizes. My heart is with open source, Linux, DevOps, Kubernetes and everything that is cloud native.