Installing Kubernetes

OpenSUSE Leap 15.3

  1. Open these ports on the firewall: https://kubernetes.io/docs/reference/ports-and-protocols/

    1. firewall-cmd --add-port=6443/tcp --permanent

    2. Repeat for all required ports, then apply the permanent rules with firewall-cmd --reload (a loop covering the standard ports is sketched below)
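
    For example, a loop covering the standard control-plane ports from the linked Kubernetes docs might look like this (worker nodes need 10250, 10256, and 30000-32767/tcp instead; adjust for whatever your CNI needs):

      for port in 6443 2379-2380 10250 10257 10259; do
        firewall-cmd --add-port=${port}/tcp --permanent
      done
      firewall-cmd --reload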

  2. Install conntrack: zypper install conntrack-tools

  3. Install socat: zypper install socat

  4. Enable ip_forward: echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

  5. Install containerd as the container runtime: Follow instructions on containerd’s Getting Started page

    • Download runc and the CNI plugin files (see the following steps)

    • Download the latest containerd release archive

    • Untar it to /usr/local: tar Cxzvf /usr/local containerd-1.6.6-linux-amd64.tar.gz

    • Download containerd.service and copy it to /usr/lib/systemd/system/containerd.service. (The official docs say to put it under /usr/local/lib; that path doesn’t exist here and the unit won’t be found, so use /usr/lib/systemd/system/ instead.)

    • Reload systemd and start containerd:

      • systemctl daemon-reload

      • systemctl enable --now containerd

    • Download the runc binary (runc.amd64) if you don’t already have it: https://github.com/opencontainers/runc/releases

    • Install it: install -m 755 runc.amd64 /usr/local/sbin/runc

    • Download the CNI plugins: https://github.com/containernetworking/plugins/releases, and install them under /opt/cni/bin:

      • mkdir -p /opt/cni/bin

      • tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz

    • Check that it works by running ctr --help

    • Generate the default containerd daemon configuration (the /etc/containerd directory must exist first): mkdir -p /etc/containerd && containerd config default > /etc/containerd/config.toml

  6. Configure systemd as the cgroup driver with runc:

    • Edit /etc/containerd/config.toml:

         [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
             ...
             [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
             SystemdCgroup = true
    • Restart containerd: sudo systemctl restart containerd

  7. Install kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

  8. Install crictl (a note on pointing it at containerd follows the commands):

     DOWNLOAD_DIR=/usr/local/bin
     sudo mkdir -p $DOWNLOAD_DIR
     VERSION="v1.24.2"
     wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
     sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
     rm -f crictl-$VERSION-linux-amd64.tar.gz
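
     Optionally, point crictl at the containerd socket so it doesn’t probe deprecated endpoints. A minimal /etc/crictl.yaml, assuming containerd’s default socket path, can be written like this:

     echo 'runtime-endpoint: unix:///run/containerd/containerd.sock' | sudo tee /etc/crictl.yaml
     echo 'image-endpoint: unix:///run/containerd/containerd.sock' | sudo tee -a /etc/crictl.yaml
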
  9. Install kubeadm, kubectl, kubelet

     RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"`
     ARCH="amd64"
     cd $DOWNLOAD_DIR
     sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
     sudo chmod +x {kubeadm,kubelet,kubectl}
     RELEASE_VERSION="v0.4.0"
     curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
     sudo mkdir -p /etc/systemd/system/kubelet.service.d
     curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  10. Enable kubelet: systemctl enable --now kubelet
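
  11. Initialize the control plane with kubeadm init. For example (the --pod-network-cidr value here is an assumption matching the Flannel default used in the Rocky Linux section below; use whatever your CNI requires):

      kubeadm init --pod-network-cidr 10.244.0.0/16

      On success, kubeadm prints output like the following: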

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

 export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.143.70:6443 --token oos4yr.46zwgo9mzqavckkx --discovery-token-ca-cert-hash sha256:2b725a2cda814b07ee07c9d704de5a5cc2451c746eeb5b32277ebe661b9a36e4

Rocky Linux 9.6

This example install will be on the mawenzi cluster, using mawenzi-01 as the admin node.

This guide does not cover installing Rocky Linux itself. For that, see: Rocky Linux Install Guide.

Prerequisites

We’ll start by preparing things for the control-plane node.

We’ll need to decide on a private CIDR subnet to use for the pods in our cluster, and another to use for the services in our cluster.

The default pod subnet that the Flannel CNI uses is 10.244.0.0/16.

The default service subnet that Kubernetes uses is 10.96.0.0/12. We’ll be using these for our examples going forward.
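
For reference, these subnets map directly onto kubeadm init flags; passing both explicitly would look like the sketch below (later we only pass --pod-network-cidr, since 10.96.0.0/12 is already kubeadm’s default service CIDR):

kubeadm init --pod-network-cidr 10.244.0.0/16 --service-cidr 10.96.0.0/12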

Proxies:

Set up the HPE proxy for DNF so we can download packages from the internet.

cat >> /etc/dnf/dnf.conf << EOF
[main]
gpgcheck=0
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
proxy=http://proxy.houston.hpecorp.net:8080
EOF

Set up the HPE HTTP proxy so we can download things with wget, etc., from the internet. We’ll need to bypass it later when setting up Kubernetes; otherwise internal requests will go through the proxy and fail. kubeadm also downloads things from the internet, so having the correct proxy environment set is critical. Make sure we don’t proxy our pod subnet, our lab network subnet, or the Kubernetes service subnet by adding them to no_proxy.

cat >> /etc/environment << EOF
http_proxy="http://proxy.houston.hpecorp.net:8080/"
https_proxy="http://proxy.houston.hpecorp.net:8080/"
ftp_proxy="http://proxy.houston.hpecorp.net:8080/"
no_proxy="localhost,127.0.0.1,hpc.amslabs.hpecorp.net,10.214.128.0/21,10.96.0.0/12,10.244.0.0/16"
EOF

Install utilities:

Upgrade as many packages as we can, including the kernel, with DNF before we get started:

dnf -y upgrade

Install basic CLI utilities

dnf -y install wget curl vim openssl git tar conntrack-tools socat

Enable IPv4 forwarding and bridge netfilter

modprobe bridge
modprobe br_netfilter
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
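
Note that the modprobe calls and the ip_forward echo above only affect the running system. A sketch for making these settings persist across reboots, following the usual upstream pattern, is:

cat <<EOF > /etc/modules-load.d/k8s.conf
bridge
br_netfilter
EOF

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system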

Disable firewalld

systemctl stop firewalld
systemctl disable firewalld

Note: I don’t recommend doing this in a production cluster. This is only to simplify things for this dev cluster.

Disable swap

# This only disables swap for the current session.
swapoff -a

# This will find any lines in /etc/fstab with 'swap' and comment them out
sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
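
Verify that no swap remains active (the command should print nothing):

swapon --show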

Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Note: I don’t recommend doing this in a production cluster. This is only to simplify things for this dev cluster.

Install containerd, CNI plugin, and runc

The download.sh script below fetches containerd, runc, crictl, and the CNI plugins.

download.sh
#!/bin/bash

# https://github.com/opencontainers/runc/releases
RUNC_VERSION="v1.3.0"

# https://github.com/kubernetes-sigs/cri-tools/releases
CRI_TOOLS_VERSION="v1.33.0"

# https://github.com/containernetworking/plugins/releases
CNI_PLUGINS_VERSION="v1.7.1"

# https://github.com/containerd/containerd/releases
CONTAINERD_VERSION="2.1.4"

download_urls=(
	https://github.com/kubernetes-sigs/cri-tools/releases/download/$CRI_TOOLS_VERSION/crictl-$CRI_TOOLS_VERSION-linux-amd64.tar.gz
	https://github.com/opencontainers/runc/releases/download/$RUNC_VERSION/runc.amd64
	https://github.com/containernetworking/plugins/releases/download/$CNI_PLUGINS_VERSION/cni-plugins-linux-amd64-$CNI_PLUGINS_VERSION.tgz
	https://github.com/containerd/containerd/releases/download/v$CONTAINERD_VERSION/containerd-$CONTAINERD_VERSION-linux-amd64.tar.gz
)

for download_url in "${download_urls[@]}"; do
	wget -q --show-progress  --https-only --timestamping "$download_url"
done

Extract containerd tarball to /usr/local:

tar Cxzvf /usr/local containerd-*-linux-amd64.tar.gz

We’ll be running containerd with systemd. The unit file comes from the containerd project; the example shown after the CNI plugin step below modifies its environment variables so the service uses the HPE proxy, which lets the containerd runtime pull images from outside the lab.

Generate a default containerd configuration file:

mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

Edit this file and set systemd as the cgroup driver for containerd. https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  ...
  [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
    SystemdCgroup = true
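
If you’d rather apply this non-interactively, a one-liner that should work against the freshly generated default config (which contains a single SystemdCgroup = false entry) is:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml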

Install runc.amd64. This should have been downloaded by the download.sh script earlier; if not, you can get it from https://github.com/opencontainers/runc/releases.

install -m 755 runc.amd64 /usr/local/sbin/runc

Install the CNI plugins. These should have been downloaded by the download.sh script earlier; if not, you can get them from https://github.com/containernetworking/plugins/releases.

mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v*.tgz

Now put the containerd systemd unit file in place:

/usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target dbus.service

[Service]
Environment="HTTP_PROXY=http://proxy.houston.hpecorp.net:8080/"
Environment="HTTPS_PROXY=http://proxy.houston.hpecorp.net:8080/"
Environment="FTP_PROXY=http://proxy.houston.hpecorp.net:8080/"
Environment="NO_PROXY=localhost,127.0.0.1,.us.cray.com,.hpe.com,hpc.amslabs.hpecorp.net,10.214.128.0/21,10.96.0.0/12,10.244.0.0/16"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

With this in place, enable and start containerd with systemd:

systemctl daemon-reload
systemctl enable --now containerd

Check the status. It should look like this:

[root@mawenzi-01 downloads]# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: disabled)
     Active: active (running) since Tue 2025-08-26 10:31:12 MDT; 3s ago
       Docs: https://containerd.io
    Process: 117075 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 117077 (containerd)
      Tasks: 17
     Memory: 36.2M
        CPU: 194ms
     CGroup: /system.slice/containerd.service
             └─117077 /usr/local/bin/containerd

Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426293642-06:00" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426334788-06:00" level=info msg="Start cni network conf syncer for default"
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426447839-06:00" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426473006-06:00" level=info msg="Start streaming server"
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426507981-06:00" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426545321-06:00" level=info msg="runtime interface starting up..."
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426564517-06:00" level=info msg="starting plugins..."
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.426601917-06:00" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 26 10:31:12 mawenzi-01 containerd[117077]: time="2025-08-26T10:31:12.427695697-06:00" level=info msg="containerd successfully booted in 0.121902s"
Aug 26 10:31:12 mawenzi-01 systemd[1]: Started containerd container runtime.
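
As an extra sanity check, the ctr client installed from the containerd tarball should be able to reach the daemon:

ctr version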

Install Kubernetes packages

This gets us kubeadm, kubectl, and kubelet.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
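
A quick check that the binaries landed where we expect:

kubeadm version
kubectl version --client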

Enable kubelet systemd service

Enable the kubelet service.

systemctl enable --now kubelet

Note: If you check the status of the kubelet service (systemctl status kubelet), you’ll see that it’s failing and restarting every few seconds. This is normal; the kubelet is waiting for kubeadm to tell it what to do.
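
If you want to watch it while it waits, follow the kubelet logs:

journalctl -u kubelet -f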

Using kubeadm to init cluster

On the control-plane node, initialize the cluster with the following args:

  • --node-name: ensures the control-plane’s hostname gets an entry in the certificates that are generated.

  • --pod-network-cidr: this must match the Pod subnet we’ll use with the Flannel CNI. This is Flannel’s default.

kubeadm init --pod-network-cidr 10.244.0.0/16 --node-name mawenzi-01

Example
[root@mawenzi-01 ~]# https_proxy=http://proxy.houston.hpecorp.net:8080 no_proxy="localhost,127.0.0.1,hpc.amslabs.hpecorp.net,10.214.128.0/21,10.96.0.0/12,10.244.0.0/16" kubeadm init --pod-network-cidr 10.244.0.0/16 --node-name mawenzi-01
I0827 16:25:40.233312   14045 version.go:261] remote version is much newer: v1.34.0; falling back to: stable-1.33
[init] Using Kubernetes version: v1.33.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local mawenzi-01] and IPs [10.96.0.1 10.214.134.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost mawenzi-01] and IPs [10.214.134.147 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost mawenzi-01] and IPs [10.214.134.147 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501224529s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://10.214.134.147:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is healthy after 2.718165783s
[control-plane-check] kube-controller-manager is healthy after 4.187683893s
[control-plane-check] kube-apiserver is healthy after 4.501363588s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node mawenzi-01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node mawenzi-01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ano6ya.1nfsxs6sv96a2mvv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.214.134.147:6443 --token ano6ya.1nfsxs6....a2mvv \
	--discovery-token-ca-cert-hash sha256:66077c37436bc739bdd55238bda....31e57fbd42c64f482d0ca9e86a62f1ab

Save the output, particularly the kubeadm join command, to a text file. We’ll use it later.
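
If you lose the join command, or the bootstrap token expires (the default TTL is 24 hours), you can generate a fresh one on the control-plane node:

kubeadm token create --print-join-command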

Next, run:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

And add this to your ~/.bashrc or ~/.zshrc file: export KUBECONFIG=/etc/kubernetes/admin.conf

systemctl status kubelet should now report active (running), and the kubelet should be relatively healthy.

Installing Flannel CNI

We’ll now need to install a CNI so we have the host-to-pod networking layer available. The CNI is what’s responsible for configuring network interfaces for Linux containers, assigning IPs to pods, etc.

Without this, pods have no way of communicating with each other, and this is a critical step in getting a cluster running.

We’ll be using the Flannel CNI for this. Since we used Flannel’s default Pod CIDR in our kubeadm init command, we can apply the stock Flannel manifests as-is:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Example:
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

This creates a Flannel ServiceAccount, a ClusterRole and ClusterRoleBinding, a ConfigMap, and a DaemonSet. The DaemonSet runs a Flannel pod on every node in the cluster, which is responsible for managing that node’s pod networking.

Check the nodes and pods for the cluster now:

[root@mawenzi-01 ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
mawenzi-01   Ready    control-plane   5h13m   v1.33.4

[root@mawenzi-01 ~]# kubectl get pods -A
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-4482d                1/1     Running   0          5h11m
kube-system    coredns-674b8bbfcf-hlgg8             1/1     Running   0          5h11m
kube-system    coredns-674b8bbfcf-nk92m             1/1     Running   0          5h11m
kube-system    etcd-mawenzi-01                      1/1     Running   6          5h11m
kube-system    kube-apiserver-mawenzi-01            1/1     Running   1          5h11m
kube-system    kube-controller-manager-mawenzi-01   1/1     Running   1          5h11m
kube-system    kube-proxy-7mb9w                     1/1     Running   0          5h11m
kube-system    kube-scheduler-mawenzi-01            1/1     Running   1          5h11m

The control plane is now ready for worker nodes to join!

At this point, I’d recommend setting up password-less SSH to the nodes in your cluster to make your life easier, before moving on to setting up those nodes.

Set up SSH access to worker nodes

With a fresh install, you won’t have an SSH key pair generated yet:

[root@mawenzi-01 kubernetes]# ls -la ~/.ssh
total 16
drwx------. 2 root root   71 Aug 22 11:57 .
dr-xr-x---. 5 root root 4096 Aug 22 11:53 ..
-rw-------. 1 root root  195 Aug 22 10:02 authorized_keys
-rw-------. 1 root root  828 Aug 22 11:57 known_hosts
-rw-r--r--. 1 root root   92 Aug 22 11:57 known_hosts.old

Use ssh-keygen to generate a public/private key pair.

[root@mawenzi-01 kubernetes]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:QxH4vc8bmPmzyAKa+HPk0nUsnRPdXIf51fBJx98zO0c root@mawenzi-01
The key's randomart image is:
+---[RSA 3072]----+
|       .o.    .*o|
|      .  .    +.O|
|       ... . o +*|
|       .. o . ooE|
|        So +   .+|
|      o o.B+   o.|
|   . * o o++.   o|
|  . = + .. o+.   |
|   ..+   .o ++   |
+----[SHA256]-----+

Copy the public key to each node in your cluster.

ssh-copy-id root@${IP}
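
For example, looping over the other mawenzi nodes (assuming their hostnames resolve from the admin node):

for node in mawenzi-02 mawenzi-03; do
    ssh-copy-id root@${node}
done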

Joining Nodes to the Cluster

Every node that joins the cluster needs the same preparation steps:

  • Set up proxies and DNF configuration

  • Install basic dependencies

  • Disable swap

  • Disable firewalls and SELinux, open ports

  • Enable IP forwarding, bridge networking, etc.

  • Install containerd, runc, CNI plugins

    • Configure containerd to use systemd as cgroup manager

    • Run containerd with systemd, using appropriate service file

  • Install kubelet, kubectl, and kubeadm

    • Run kubelet with systemd

You’ll follow all the same steps from Prerequisites up until Using kubeadm to init cluster to prepare a worker node.

Taking the output from the original kubeadm init command, which you should have saved to a file, run the kubeadm join command with the token.

Example
[root@mawenzi-02 ~]# kubeadm join 10.214.134.147:6443 --token ano6ya.1nfs......6a2mvv \
        --discovery-token-ca-cert-hash sha256:66077c37436bc739bdd55238.....e8e31e57fbd42c64f482d0ca9e86a62f1ab
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.800726ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the control-plane node, run kubectl get nodes:

[root@mawenzi-01 ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
mawenzi-01   Ready    control-plane   5h24m   v1.33.4
mawenzi-02   Ready    <none>          29m     v1.33.4

We’ve got a new node in our cluster! You should also see an instance of kube-proxy and kube-flannel running on the new node:

[root@mawenzi-01 ~]# kubectl get pods -A -o wide | grep mawenzi-02
kube-flannel   kube-flannel-ds-cdw8t                1/1     Running   0          30m     10.214.130.159   mawenzi-02   <none>           <none>
kube-system    kube-proxy-826t2                     1/1     Running   0          30m     10.214.130.159   mawenzi-02   <none>           <none>

Now, rinse and repeat for the rest of the nodes:

…

[root@mawenzi-01 ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
mawenzi-01   Ready    control-plane   5h31m   v1.33.4
mawenzi-02   Ready    <none>          35m     v1.33.4
mawenzi-03   Ready    <none>          21s     v1.33.4
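
Optionally, give the worker nodes a role label so the ROLES column shows something more useful than <none> (the suffix after node-role.kubernetes.io/ is just a convention):

kubectl label node mawenzi-02 node-role.kubernetes.io/worker=
kubectl label node mawenzi-03 node-role.kubernetes.io/worker=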