Configuration of the Kubernetes cluster with external ETCD for a lab environment (1)

Sunho Song
6 min read · Oct 22, 2021

This time, let’s configure the Kubernetes cluster, which is the main part of the lab environment. Since Kubernetes configuration requires a fair amount of prior knowledge, be sure to read the installation documentation on the Kubernetes homepage before you begin. If you find that you understand less than half of that documentation, I recommend installing with kubespray instead.

The architecture for continuous machine learning in a lab environment

The Kubernetes cluster is largely composed of ETCD, the control plane, and workers. ETCD is a key-value store that holds all state information of the Kubernetes cluster. The control plane is where the cluster management components run, typically the scheduler and the API server. Finally, the worker nodes are where Pods run. For more information, check the Kubernetes homepage. If you want to learn the whole thing, take the Certified Kubernetes Administrator (CKA) with Practice Tests course on udemy.com. (The course is quite long, but if you get the CKA certificate after finishing it, you will feel a special sense of accomplishment.)

Kubernetes Install

So, let’s get to the main topic and first set up the OS environment to install Kubernetes. The test environment consists of 8 servers: 3 ETCD, 3 Control Plane, and 2 Worker nodes. Since ETCD and the Control Plane use relatively few resources, they were each allocated 2 vCPUs and 2 GiB of memory, while the workers were allocated 4 vCPUs and 16 GiB of memory because various tests will run on them later.

+------------+---------+------------+-----------------+
| Hostname   | OS      | IP         | ROLE            |
+------------+---------+------------+-----------------+
| k8setcd1   | Rocky 8 | 10.0.0.151 | ETCD 1          |
| k8setcd2   | Rocky 8 | 10.0.0.152 | ETCD 2          |
| k8setcd3   | Rocky 8 | 10.0.0.153 | ETCD 3          |
| k8smaster1 | Rocky 8 | 10.0.0.154 | Control Plane 1 |
| k8smaster2 | Rocky 8 | 10.0.0.155 | Control Plane 2 |
| k8smaster3 | Rocky 8 | 10.0.0.156 | Control Plane 3 |
| k8sworker1 | Rocky 8 | 10.0.0.157 | Worker 1        |
| k8sworker2 | Rocky 8 | 10.0.0.158 | Worker 2        |
+------------+---------+------------+-----------------+
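Unless you have internal DNS, it helps if the nodes can resolve each other by hostname. A minimal sketch (my own convenience step, using the hostnames and IPs from the table above) is to append the entries to /etc/hosts on every node:

$ cat <<EOF | sudo tee -a /etc/hosts
10.0.0.151 k8setcd1
10.0.0.152 k8setcd2
10.0.0.153 k8setcd3
10.0.0.154 k8smaster1
10.0.0.155 k8smaster2
10.0.0.156 k8smaster3
10.0.0.157 k8sworker1
10.0.0.158 k8sworker2
EOF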

Basic environment configuration

Since we plan to use containerd as the container runtime for Kubernetes (via the Container Runtime Interface, CRI), we need to configure containerd first. The first things to do are to set the required kernel parameters and put SELinux into permissive mode.

$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
# persistent settings
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
# Set SELinux in permissive mode (effectively disabling it)
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
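To confirm that the modules, kernel parameters, and SELinux mode took effect, a quick check (my own addition, not required by the official docs) looks like this:

$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
$ getenforce
Permissive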

Containerd installation

Install containerd and generate its default configuration file. In addition, set up the basic CNI network configuration that containerd will use; refer to the settings below and save them as /etc/cni/net.d/10-containerd-net.conflist. (A separate CNI configuration is not required for the Control Plane and Worker nodes.)

# containerd.io is provided by the Docker CE repository
$ sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo dnf install -y containerd.io
$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
$ sudo mkdir -p /etc/cni/net.d
$ cat << EOF | sudo tee /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{
            "subnet": "10.88.0.0/16"
          }],
          [{
            "subnet": "2001:4860:4860::/64"
          }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
EOF

Next, if you want containerd to use systemd as its cgroup driver, add the following cgroup-related setting under the runc options section of config.toml, then restart containerd.

# add SystemdCgroup config to /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

$ sudo systemctl restart containerd
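One way to confirm the change (a quick check of my own) is to dump the live containerd configuration and make sure the service restarted cleanly:

$ containerd config dump | grep SystemdCgroup
  SystemdCgroup = true
$ systemctl status containerd --no-pager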

ETCD Installation

Now let’s build the ETCD cluster. Building it starts with creating the root CA certificate, which can be done simply with the kubeadm command. However, since kubeadm is not installed yet, add the Kubernetes repository and install kubeadm, kubelet, and kubectl. If you want a specific Kubernetes version, you can name it explicitly, for example “kubelet-1.21.3”.

$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

$ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
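To pin the specific version mentioned above, name it for all three packages, and enable kubelet so it is ready when the etcd static Pod manifest is created later. A sketch assuming version 1.21.3:

$ yum install -y kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3 --disableexcludes=kubernetes
$ sudo systemctl enable --now kubelet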

Once kubeadm is installed, you can create the root CA certificate. The root CA must be unique within the Kubernetes cluster, so create it on the first ETCD server (k8setcd1) and reuse it on the others.

$ kubeadm init phase certs etcd-ca
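If you want to inspect the CA that was just generated (an optional check), openssl prints its subject and validity period:

$ openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject -dates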

The generated ca.key and ca.crt files are placed in /etc/kubernetes/pki/etcd. Copy them to the second and third ETCD servers (k8setcd2 and k8setcd3); the scp command works fine for this. (It is convenient to copy your ssh key to the other servers beforehand.)
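If passwordless ssh is not set up yet, a minimal sketch (run as root on k8setcd1, assuming root login is allowed on the other nodes) looks like this:

$ ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
$ ssh-copy-id root@k8setcd2
$ ssh-copy-id root@k8setcd3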

$ ssh root@k8setcd2 "mkdir -p /etc/kubernetes/pki/etcd"
$ ssh root@k8setcd3 "mkdir -p /etc/kubernetes/pki/etcd"
$ scp -r /etc/kubernetes/pki/etcd/ca.{key,crt} root@k8setcd2:/etc/kubernetes/pki/etcd/
$ scp -r /etc/kubernetes/pki/etcd/ca.{key,crt} root@k8setcd3:/etc/kubernetes/pki/etcd/

After the CA certificate has been copied, create the kubeadm config file required to initialize the ETCD cluster. The config file must be written on each of the 3 ETCD servers; only HOST and NAME change per server (the example below is for k8setcd1, and the values for the other two servers are shown right after the config).

$ export HOST0=10.0.0.151
$ export HOST1=10.0.0.152
$ export HOST2=10.0.0.153
$ ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
$ NAMES=("k8setcd1" "k8setcd2" "k8setcd3")
$ HOST="10.0.0.151" # on k8setcd1
$ NAME="k8setcd1"   # on k8setcd1
$ cat << EOF > /tmp/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
      - "${HOST}"
    peerCertSANs:
      - "${HOST}"
    extraArgs:
      initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
      initial-cluster-state: new
      name: ${NAME}
      listen-peer-urls: https://${HOST}:2380
      listen-client-urls: https://${HOST}:2379
      advertise-client-urls: https://${HOST}:2379
      initial-advertise-peer-urls: https://${HOST}:2380
EOF
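On the other two ETCD servers only the per-host variables differ; the rest of the config stays the same:

$ HOST="10.0.0.152"; NAME="k8setcd2" # on k8setcd2
$ HOST="10.0.0.153"; NAME="k8setcd3" # on k8setcd3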

When the configuration file is ready, execute the following commands sequentially on each ETCD server.

$ kubeadm init phase certs etcd-server --config=/tmp/kubeadmcfg.yaml
$ kubeadm init phase certs etcd-peer --config=/tmp/kubeadmcfg.yaml
$ kubeadm init phase certs etcd-healthcheck-client --config=/tmp/kubeadmcfg.yaml
$ kubeadm init phase certs apiserver-etcd-client --config=/tmp/kubeadmcfg.yaml

If the commands run correctly, you can check that the files have been created in the tree structure below.

/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── ca.key
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key

Now that the certificate configuration is complete, let’s initialize the ETCD. Let’s use kubeadm again this time. Initialization must be performed on all ETCD servers.

$ kubeadm init phase etcd local --config=/tmp/kubeadmcfg.yaml
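This phase only writes the etcd static Pod manifest to /etc/kubernetes/manifests; it is kubelet that actually starts the container. Assuming kubelet is running and watching that directory, you can check that the manifest exists and the etcd container has come up:

$ ls /etc/kubernetes/manifests/
etcd.yaml
$ crictl ps --name etcd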

The normal operation of the ETCD cluster can be checked with the health check below. Because docker is not installed in this environment, one way is to run etcdctl inside the etcd container that is already running, using crictl. (Additional configuration is required before the crictl command works; for the detailed setup, refer to “Debugging Kubernetes nodes with crictl” in the references.)

$ ETCD_CONTAINER=$(crictl ps -q --name etcd)
$ crictl exec -it ${ETCD_CONTAINER} etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://10.0.0.151:2379 endpoint health --cluster
https://10.0.0.151:2379 is healthy: successfully ...
https://10.0.0.152:2379 is healthy: successfully ...
https://10.0.0.153:2379 is healthy: successfully ...
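Listing the members with the same exec approach is another quick sanity check that all three nodes joined the cluster:

$ crictl exec -it ${ETCD_CONTAINER} etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://10.0.0.151:2379 member list --write-out=table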

Conclusion

I have briefly summarized how to build an ETCD cluster. The ETCD cluster stores the state information of the Kubernetes cluster and can additionally store the state information of the CNI. Next time, let’s look at the Control Plane configuration.

I hope this is helpful to those looking for related content. And if this content has saved you a lot of time, please donate a cup of coffee. (Help me keep writing while sipping an iced americano at a local cafe.)

https://buymeacoffee.com/7ov2xm5

And I am looking for a job. If you are interested, please leave a comment.

References

Container runtimes

Set up a high availability etcd cluster with kubeadm

Debugging Kubernetes nodes with crictl
