Configuration of the Kubernetes cluster with external ETCD for a lab environment (2)

Sunho Song
8 min read · Oct 30, 2021

The control plane servers run kube-apiserver, kube-scheduler, and kube-controller-manager. Normally ETCD also runs there, but because this setup uses an external ETCD, it runs on separate servers.

The architecture for continuous machine learning in a lab environment

Prepare Certificate

First, to initialize the control plane server, you need to copy the CA certificate created on the ETCD primary. Three certificate files are required: ca.crt, apiserver-etcd-client.crt, and apiserver-etcd-client.key. Copy them into the /etc/kubernetes/pki/etcd and /etc/kubernetes/pki directories.

$ mkdir -p /etc/kubernetes/pki/etcd
$ scp root@k8setcd1:/etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.crt
$ scp -r root@k8setcd1:/etc/kubernetes/pki/apiserver-etcd-client.{crt,key} /etc/kubernetes/pki/
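
To confirm that the copied certificates can actually authenticate against the external ETCD, an optional check (a minimal sketch, assuming curl is available and the ETCD primary from this series listens on 10.0.0.151) is to query ETCD's health endpoint from the control plane node:

$ curl --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
    --key /etc/kubernetes/pki/apiserver-etcd-client.key \
    https://10.0.0.151:2379/health

A JSON health response means the CA and client certificate pair are accepted by the external ETCD.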

Initialize Control Plane Primary

When the certificate copy is complete, you can initialize the control plane server with the kubeadm command. Initialization is performed in order, from the primary server to the secondary servers. A kubeadm config file is required for the initial configuration of the control plane primary server, so modify the values in the example below as needed before using it.

$ NODE_IP=10.0.0.154
$ ETCD1_IP=10.0.0.151
$ ETCD2_IP=10.0.0.152
$ ETCD3_IP=10.0.0.153
$ LB_IP=10.0.0.150
$ LB_PORT=6443
$ cat << EOF > /root/kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "${NODE_IP}"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.21.3
controlPlaneEndpoint: "${LB_IP}:${LB_PORT}"
etcd:
  external:
    endpoints:
    - https://${ETCD1_IP}:2379
    - https://${ETCD2_IP}:2379
    - https://${ETCD3_IP}:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
    service-account-issuer: "kubernetes.default.svc"
    service-account-signing-key-file: "/etc/kubernetes/pki/sa.key"
    # oidc-issuer-url: "https://{{ oidc_url }}/auth/realms/homelab"
    # oidc-client-id: "kubernetes"
    # oidc-username-claim: "preferred_username"
    # oidc-username-prefix: "-"
    # oidc-groups-claim: "groups"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
networking:
  serviceSubnet: 172.96.64.0/18
  podSubnet: 172.96.128.0/18
clusterName: homelab
EOF
$
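
Before running the full initialization, you can optionally run only the preflight phase against the config file to catch obvious problems (swap enabled, blocked ports, missing images) early; this is just a sanity check and can be skipped.

$ kubeadm init phase preflight --config /root/kubeadm-config.yaml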

If you want to connect an OIDC provider for Kubernetes authentication, uncomment the oidc-related settings before using the file. When the kubeadm configuration file is ready, use kubeadm to initialize the control plane primary server.

$ kubeadm init --config /root/kubeadm-config.yaml --upload-certs
W1031 15:25:22.591560 4068 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.96.0.1 10.0.0.154 10.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.527173 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8smaster1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wbehps.5vzwsr2qhdkyeqej
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.0.0.1:6443 --token wbehps.5vzwsr2qhdkyeqej \
--discovery-token-ca-cert-hash sha256:56bdf55e4d27d82cdedd9b0fe1aa9372b747ebd040d8eca4c650d15878538b37 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.1:6443 --token wbehps.5vzwsr2qhdkyeqej \
--discovery-token-ca-cert-hash sha256:56bdf55e4d27d82cdedd9b0fe1aa9372b747ebd040d8eca4c650d15878538b37
$
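
The kubectl commands in the following steps are run as root on the primary server, so point kubectl at the admin kubeconfig as suggested in the output above (or copy it to $HOME/.kube/config for a regular user).

$ export KUBECONFIG=/etc/kubernetes/admin.conf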

Initialize CNI

After initializing the control plane primary server, install the CNI. The most commonly used CNIs these days are Calico and Cilium. Cilium is used here because it is built on eBPF (available in Linux 4.x and later kernels) and has a large number of GitHub stars.

Create a client certificate for the CNI on the ETCD primary server, using its root CA.

$ CNI_PEER_FQDN=("k8smaster1" "k8smaster2" "k8smaster3")
$ CNI_PEER_IP=("10.0.0.154" "10.0.0.155" "10.0.0.156")
$ cat << EOF > /root/cni-openssl.conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth,serverAuth
subjectAltName = DNS:${CNI_PEER_FQDN[0]},DNS:${CNI_PEER_FQDN[1]},DNS:${CNI_PEER_FQDN[2]},IP:${CNI_PEER_IP[0]},IP:${CNI_PEER_IP[1]},IP:${CNI_PEER_IP[2]}
EOF
$ openssl genrsa -out /root/cni-etcd-client.key 2048
$ openssl rsa -in /root/cni-etcd-client.key -out /root/cni-etcd-client-nodes.key
$ chmod 644 /root/cni-etcd-client.key /root/cni-etcd-client-nodes.key
$ openssl req -new -sha512 \
-extensions v3_req \
-key /root/cni-etcd-client-nodes.key \
-config /root/cni-openssl.conf \
-subj "/CN=cni-etcd-client" \
-out /root/cni-etcd-client.csr
$ openssl x509 -req -sha512 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-days 3650 \
-extensions v3_req \
-extfile /root/cni-openssl.conf \
-in /root/cni-etcd-client.csr \
-out /root/cni-etcd-client.crt
$
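
As an optional check before copying the files, you can verify that the new client certificate chains to the ETCD CA and carries the expected subject alternative names.

$ openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /root/cni-etcd-client.crt
$ openssl x509 -in /root/cni-etcd-client.crt -noout -text | grep -A 1 "Subject Alternative Name"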

When the certificate generation for the CNI is completed on the ETCD primary server, copy the files to the control plane primary server. Then download the cilium installation file from GitHub and extract the compressed archive.

$ scp -r root@k8setcd1:/root/cni-etcd-client.{crt,key} /etc/kubernetes/pki/
$ wget https://github.com/cilium/charts/raw/master/cilium-1.9.10.tgz -O /root/cilium-1.9.10.tgz
$ cd /root && tar zxf cilium-1.9.10.tgz
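
Alternatively, instead of downloading the chart archive by hand, you can add the official Cilium Helm repository (https://helm.cilium.io/) and install the chart from it; in that case reference the chart as cilium/cilium instead of the local directory in the helm install command below.

$ helm repo add cilium https://helm.cilium.io/
$ helm repo update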

Register the certificate information as a Secret before installing cilium.

$ kubectl create secret generic -n kube-system cilium-etcd-secrets \
--from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
--from-file=etcd-client.key=/etc/kubernetes/pki/cni-etcd-client.key \
--from-file=etcd-client.crt=/etc/kubernetes/pki/cni-etcd-client.crt
$
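
You can optionally confirm that the three files were stored under the expected keys before installing the chart.

$ kubectl -n kube-system describe secret cilium-etcd-secrets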

Now let’s install cilium using the helm command. Because an external ETCD is used, point cilium at the ETCD endpoints and set the pod CIDR to the value used in the control plane configuration.

$ ETCD1_IP=10.0.0.151
$ ETCD2_IP=10.0.0.152
$ ETCD3_IP=10.0.0.153
$ helm install cilium /root/cilium --version 1.9.10 \
--namespace kube-system \
--set "etcd.enabled=true" \
--set "etcd.ssl=true" \
--set "etcd.endpoints[0]=https://${ETCD1_IP}:2379" \
--set "etcd.endpoints[1]=https://${ETCD2_IP}:2379" \
--set "etcd.endpoints[2]=https://${ETCD3_IP}:2379" \
--set "ipam.operator.clusterPoolIPv4PodCIDR=172.96.128.0/18"
$
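
To check that cilium came up correctly against the external ETCD, watch the daemonset roll out; the daemonset name and label below are the defaults used by the cilium chart.

$ kubectl -n kube-system rollout status daemonset/cilium
$ kubectl -n kube-system get pods -l k8s-app=cilium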

Join Control Plane Secondary Nodes 1 and 2

Now it’s time to join the control plane secondary servers. Create a token and run upload-certs on the primary server to obtain the join parameters, then execute the resulting join command on the secondary 1 and 2 servers.

$ kubeadm token create
9vr73a.a8uxyaju799qwdjv
$ kubeadm init phase upload-certs --upload-certs --config /root/kubeadm-config.yaml
f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

Control plane secondary 1

$ NODE_IP=10.0.0.155
$ kubeadm join 10.0.0.150:6443 --token 9vr73a.a8uxyaju799qwdjv \
--discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 \
--certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 \
--control-plane --apiserver-advertise-address ${NODE_IP} > /root/master_joined.txt

Control plane secondary 2

$ NODE_IP=10.0.0.156
$ kubeadm join 10.0.0.150:6443 --token 9vr73a.a8uxyaju799qwdjv \
--discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 \
--certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 \
--control-plane --apiserver-advertise-address ${NODE_IP} > /root/master_joined.txt

After the initialization of control plane secondary 1 and 2 is completed, run the kubectl get nodes and kubectl get pods -n kube-system commands to check whether the Kubernetes cluster was properly initialized.

$ kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8smaster1   Ready    control-plane,master   8m7s    v1.21.3
k8smaster2   Ready    control-plane,master   2m30s   v1.21.3
k8smaster3   Ready    control-plane,master   2m30s   v1.21.3
$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
cilium-8pc4w                         1/1     Running   0          107s
cilium-b7zjz                         1/1     Running   0          107s
cilium-operator-6b86c496f7-gx9qh     1/1     Running   0          2m35s
cilium-operator-6b86c496f7-lfkx5     1/1     Running   0          2m35s
cilium-r4tgg                         1/1     Running   0          2m35s
coredns-558bd4d5db-7zpsh             1/1     Running   0          105s
coredns-558bd4d5db-t4hz4             1/1     Running   0          2m
kube-apiserver-k8smaster1            1/1     Running   0          7m23s
kube-apiserver-k8smaster2            1/1     Running   0          105s
kube-apiserver-k8smaster3            1/1     Running   0          106s
kube-controller-manager-k8smaster1   1/1     Running   0          7m9s
kube-controller-manager-k8smaster2   1/1     Running   0          105s
kube-controller-manager-k8smaster3   1/1     Running   0          105s
kube-proxy-chmts                     1/1     Running   0          7m8s
kube-proxy-r9m79                     1/1     Running   0          107s
kube-proxy-xx98n                     1/1     Running   0          107s
kube-scheduler-k8smaster1            1/1     Running   0          7m9s
kube-scheduler-k8smaster2            1/1     Running   0          105s
kube-scheduler-k8smaster3            1/1     Running   0          105s
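
Finally, now that the control plane is writing its state to the external ETCD, you can confirm from the ETCD primary server (assuming etcdctl is available there) that all three members are still healthy, reusing the same client certificate the API servers use.

$ ETCDCTL_API=3 etcdctl \
    --endpoints=https://10.0.0.151:2379,https://10.0.0.152:2379,https://10.0.0.153:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/apiserver-etcd-client.crt \
    --key=/etc/kubernetes/pki/apiserver-etcd-client.key \
    endpoint health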

Conclusion

You have seen how to initialize control plane nodes using external ETCD nodes. Placing the ETCD nodes externally is useful for future cluster expansion and lets you build a Kubernetes cluster that is more resilient to failures. Next, we will look at how to join worker nodes.

I hope this is helpful to those who are looking for related content. And if this content has saved you a lot of time, please donate a cup of coffee. (Help me keep writing while drinking an iced americano at a local cafe.)

https://buymeacoffee.com/7ov2xm5

I am also looking for a job. If you are interested, please leave a comment.
