Running a Kubernetes node with containerd

Introduction

I'm currently preparing my next Kubernetes certification (Certified Kubernetes Security Specialist), and at the time of writing this blog post, it's still based on Kubernetes 1.19. And the certification is still based on Docker.

But the Kubernetes world is changing, and Docker support (via dockershim) has been deprecated with Kubernetes 1.20.

Moreover, containerd is needed in order to play with other, more secure runtimes.

In this blog post, I will explain how I've set up a dedicated node that runs with containerd. Maybe my own way is not the best way to install and configure it, so don't hesitate to comment or contact me to improve my setup :)

Install the packages

We are starting from a fresh Ubuntu 20.04 server, so the first thing to do is to install the different binaries needed, both for Kubernetes and for containerd.

So let's start by adding the needed Ubuntu repositories to the configuration:

# Add K8S Repository, needed to install kubeadm, kubelet and kubectl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Add Docker repo, needed to install containerd
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

Once it's done, we can install the needed packages:

sudo apt update
sudo apt install -y kubectl=1.19.6-00 kubeadm=1.19.6-00 kubelet=1.19.6-00 containerd.io
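As a side note, it can be useful to hold these packages, so that a routine apt upgrade does not move the node to a newer Kubernetes version behind your back:

```shell
# Pin the Kubernetes packages to the installed version
sudo apt-mark hold kubelet kubeadm kubectl
```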

Prepare containerd

We can now start the configuration of containerd itself:

# Enable the kernel modules needed by containerd
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load the Kernel Modules
sudo modprobe overlay
sudo modprobe br_netfilter
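A quick way to double-check that both modules are effectively loaded:

```shell
# Both overlay and br_netfilter should appear in the output
lsmod | grep -E 'overlay|br_netfilter'
```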

# Set up the configuration of containerd
sudo mkdir -p /etc/containerd
# This will generate the default config for containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
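One caveat worth checking (this depends on how your kubelet is configured, so treat it as an assumption): the cgroup driver used by containerd must match the one used by the kubelet. If your kubelet runs with the systemd cgroup driver, the generated config.toml needs SystemdCgroup = true under the runc options:

```shell
# Show the runc options section of the generated config; if your kubelet
# uses the systemd cgroup driver, add "SystemdCgroup = true" there, under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
grep -A 3 'containerd.runtimes.runc.options' /etc/containerd/config.toml
```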

# We need to enable IP forwarding
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload configs
sudo sysctl --system

sudo systemctl restart containerd
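While we are at it, it's a good idea to enable containerd at boot, and we can check that the daemon responds with ctr, the low-level CLI shipped with the containerd.io package:

```shell
# Start containerd automatically at boot
sudo systemctl enable containerd
# Printing both client and server versions confirms the daemon is reachable
sudo ctr version
```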

Register the node

At this stage, the node should be ready to be registered in the cluster.

So let's connect to a master node, create a bootstrap token and print the command to be executed on the worker node:

kubeadm token create --print-join-command

and then execute it on the new worker node:

sudo kubeadm join 10.10.10.121:6443 --token <output-token> --discovery-token-ca-cert-hash <output-sha256>
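A note on the CRI socket: since containerd is the only runtime installed on this node, kubeadm should detect it automatically. If Docker and containerd were both present, we would have to point kubeadm explicitly at containerd with the --cri-socket flag:

```shell
# Only needed when several runtimes are installed on the node
sudo kubeadm join 10.10.10.121:6443 --token <output-token> \
    --discovery-token-ca-cert-hash <output-sha256> \
    --cri-socket /run/containerd/containerd.sock
```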

After a few seconds, the worker is ready, and already able to run workloads with runc/containerd.

$ k get node -o=custom-columns=Name:metadata.name,STATUS:status.conditions[-1].type,Runtime:status.nodeInfo.containerRuntimeVersion
Name                  STATUS   Runtime
kubernetes-master-0   Ready    containerd://1.4.3
kubernetes-worker-0   Ready    containerd://1.4.3

OK great, but let's run a pod to be sure!

Run a workload

For that, it's simple: run this command:

$ kubectl run sample --image=nginx
pod/sample created

Seems to be OK, but where is it running?

$ k get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE                  NOMINATED NODE   READINESS GATES
sample   1/1     Running   0          25s   10.244.1.2   kubernetes-worker-0   <none>           <none>

It's on the worker node! Great, it works!

But... how to debug?

In a normal situation, we should never connect to the worker node to debug, but sometimes it can be useful. In those situations, we used to be able to connect and use Docker to see what's running on the node, but... now?

$ docker ps

Command 'docker' not found, but can be installed with:

sudo snap install docker     # version 19.03.11, or
sudo apt  install docker.io

See 'snap info docker' for additional versions.

Indeed, Docker is not used anymore, so how can we see the containers running on the host? The answer is crictl, so let's try:

$ crictl ps
FATA[0010] failed to connect: failed to connect: context deadline exceeded

It's not working: by default, crictl tries to connect to Docker, and more precisely to the socket /var/run/dockershim.sock.

To change that, we can simply create a file /etc/crictl.yaml with the containerd config:

sudo tee /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
EOF
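As a side note, if you prefer not to create a config file, the endpoint can also be passed directly on the command line:

```shell
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
```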

Now let's try again and list the running containers:

# crictl ps
CONTAINER ID        IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
76961571138a3       f6d0b4767a6c4       13 minutes ago      Running             sample              0                   8e33f7170a338
740e80d7ee8e0       4e9f801d2217e       21 minutes ago      Running             kube-flannel        0                   08af77d2191dd
fed7fce8215b1       cb12d94b194b3       21 minutes ago      Running             calico-node         0                   08af77d2191dd
7873806de7956       9d368f4517bbe       21 minutes ago      Running             kube-proxy          0                   4ccdd921dfe19

We can also list the images:

# crictl images
IMAGE                                 TAG                 IMAGE ID            SIZE
docker.io/calico/cni                  v3.16.6             3debebec24457       46.3MB
docker.io/calico/node                 v3.16.6             cb12d94b194b3       58.9MB
docker.io/calico/pod2daemon-flexvol   v3.16.6             e44bc3f1b8a9b       9.43MB
docker.io/library/nginx               latest              f6d0b4767a6c4       53.6MB
k8s.gcr.io/kube-proxy                 v1.19.7             9d368f4517bbe       49.3MB
k8s.gcr.io/pause                      3.2                 80d28bedfe5de       300kB
quay.io/coreos/flannel                v0.12.0             4e9f801d2217e       17.1MB

And even list the pods instead of the containers:

# crictl pods
POD ID              CREATED             STATE               NAME                NAMESPACE           ATTEMPT
8e33f7170a338       11 hours ago        Ready               sample              default             0
08af77d2191dd       11 hours ago        Ready               canal-rs895         kube-system         0
4ccdd921dfe19       11 hours ago        Ready               kube-proxy-x77z7    kube-system         0
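And since the whole point was debugging, good news: crictl mirrors the docker subcommands we are used to. Using the container ID of our sample pod from the crictl ps output above:

```shell
# Equivalents of docker logs, docker exec and docker inspect
sudo crictl logs 76961571138a3
sudo crictl exec -it 76961571138a3 sh
sudo crictl inspect 76961571138a3
```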

Next?

Now that we are able to run a worker node with containerd, we are ready to explore other runtimes like gVisor or Kata Containers! This will be the subject of a future post!

See you soon :)
