
Setting up a Kubernetes Cluster on RHEL 9

Setting up a Kubernetes cluster on RHEL 9 requires careful preparation and execution. This guide walks you through the essential steps to get your cluster up and running smoothly.

Before you begin, ensure you have multiple machines (virtual or physical) running RHEL 9. You’ll need at least one machine to act as the control plane node (formerly master node) and one or more machines to serve as worker nodes. Each machine should have at least 2GB of RAM, 2 CPUs, and sufficient disk space. Ensure all nodes can communicate with each other over the network. A stable internet connection is also crucial for downloading necessary packages.

Step 1: Prepare All Nodes

On every node (control plane and workers), you need to perform some initial setup.

First, disable swap. By default, the kubelet refuses to run with swap enabled, and kubeadm's preflight checks will fail if they detect active swap.
You can disable swap temporarily with:
sudo swapoff -a
To make this change persistent across reboots, edit the /etc/fstab file and comment out the line referring to swap (usually starting with /dev/mapper/rhel-swap). Add a # at the beginning of the line.
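
Alternatively, you can comment out the swap entry in one step with sed. This is a convenience sketch that assumes every line mentioning "swap" in /etc/fstab really is a swap entry; review the file first if unsure:
sudo sed -i '/swap/ s/^/#/' /etc/fstab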

Next, configure the container runtime. Kubernetes interacts with container engines such as containerd or CRI-O through the Container Runtime Interface (CRI). containerd is a popular choice and is what this guide uses.
Install containerd; the containerd.io package is distributed through the Docker CE repository:
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io
Enable and start the containerd service:
sudo systemctl enable containerd --now
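
RHEL 9 uses systemd as its cgroup manager, and kubeadm defaults to the systemd cgroup driver, while containerd's generated default configuration sets SystemdCgroup = false. A commonly needed adjustment, sketched here, is to regenerate the default config and flip that flag (the sed expression assumes the flag appears exactly once, as it does in the stock default config):

containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd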

Ensure the br_netfilter kernel module is loaded. It makes bridged pod traffic visible to iptables, which Kubernetes networking relies on.
Load the module:
sudo modprobe br_netfilter
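
Loading the module with modprobe does not persist across reboots. To make it persistent, and to also load the overlay module that containerd relies on, you can drop a modules-load.d file (the file name k8s.conf is just a convention):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
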
Enable required sysctl parameters for Kubernetes networking:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the changes:
sudo sysctl --system
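
You can confirm the parameters took effect by querying them directly; each should report 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward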

Step 2: Install Kubernetes Packages

You need to install kubeadm, kubelet, and kubectl on all nodes. kubeadm bootstraps the cluster, kubelet runs on all nodes and starts pods, and kubectl is the command-line tool for interacting with the cluster.

Add the Kubernetes repository:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Note: Replace v1.29 with your desired Kubernetes version.

Install the packages. The exclude line in the repository definition keeps dnf from upgrading them unexpectedly later, so you must lift it explicitly for the installation itself:
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Enable the kubelet service, but do not start it yet. Kubelet will wait for kubeadm to tell it what to do.
sudo systemctl enable kubelet
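
As a quick sanity check, confirm that all three tools report the version you expect:
kubeadm version
kubectl version --client
kubelet --version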

Step 3: Initialize the Control Plane Node

This step is performed only on the designated control plane node.

Choose a pod network add-on (CNI). Popular choices include Calico and Flannel. The kubeadm init command needs to know the CIDR range for the pod network. Flannel expects 10.244.0.0/16 by default, and Calico can be configured to use the same range, so this guide uses 10.244.0.0/16.

Initialize the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

This command downloads necessary images, sets up the control plane components (API server, scheduler, controller manager, etcd), and generates a join command for worker nodes.
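
If the control plane node has more than one network interface, kubeadm may pick the wrong address to advertise the API server on. In that case you can pin it explicitly with the --apiserver-advertise-address flag; the placeholder below stands in for the node's actual IP:

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=<control-plane-ip>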

Once the initialization is complete, the output will provide instructions to configure kubectl access for a regular user. Run the following commands as instructed in the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
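
Alternatively, if you are operating as root, the init output usually suggests pointing kubectl at the admin kubeconfig directly:
export KUBECONFIG=/etc/kubernetes/admin.conf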

You can verify the cluster status:
kubectl get nodes
Initially, you will see only the control plane node, likely in a “NotReady” state because the network plugin is not yet installed.
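
The output at this stage will look roughly like the following (the node name and exact version will differ on your system):

NAME            STATUS     ROLES           AGE   VERSION
control-plane   NotReady   control-plane   2m    v1.29.0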

Step 4: Install a Pod Network Add-on

This step is performed only on the control plane node after kubeadm init.

Install your chosen CNI. For Flannel:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

For Calico:
kubectl apply -f https://docs.tigera.io/archive/v3.26/manifests/tigera-operator.yaml
kubectl apply -f https://docs.tigera.io/archive/v3.26/manifests/custom-resources.yaml
Note: Refer to the official Calico documentation for the latest manifest URLs.

Wait for the network pods to start. You can monitor their status:
kubectl get pods --all-namespaces
Once the network pods are running, the control plane node should transition to a “Ready” state.
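
Rather than polling, you can watch the rollout until the CNI pods reach the Running state:
kubectl get pods --all-namespaces --watch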

Step 5: Join Worker Nodes

On each worker node, run the kubeadm join command provided in the output of the kubeadm init command from the control plane node. It will look similar to this:
sudo kubeadm join <control-plane-ip>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

If you lose the join command, you can regenerate the token on the control plane node:
sudo kubeadm token create --print-join-command
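
You can also list existing tokens and their expiry; by default, bootstrap tokens expire after 24 hours:
sudo kubeadm token list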

After running the join command on a worker node, verify that the node has joined the cluster by running kubectl get nodes on the control plane node. The new worker node should appear, eventually transitioning to a “Ready” state once the network plugin components are running on it.

Step 6: Verify the Cluster

On the control plane node, confirm all nodes are in the “Ready” state:
kubectl get nodes

Deploy a simple test application to ensure everything is working:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

Find the NodePort assigned to the service:
kubectl get service nginx
Look for the port mapping like 80:3xxxx/TCP. You should now be able to access the Nginx welcome page by navigating to any node’s IP address on the assigned NodePort (e.g., http://<node-ip>:3xxxx).
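
You can also test from the command line; substitute a real node IP and the actual NodePort reported by the service:
curl http://<node-ip>:3xxxx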

Troubleshooting

  • If nodes are “NotReady”, check the kubelet logs (sudo journalctl -u kubelet) for errors.
  • Ensure firewall rules (firewalld) on all nodes allow necessary traffic. Kubernetes requires specific ports open (e.g., 6443 for the API server, 10250 for the kubelet); see the example after this list. Consider temporarily disabling firewalld (sudo systemctl disable firewalld --now) for testing, but re-enable and configure it properly for production.
  • Verify container runtime status (sudo systemctl status containerd).
  • Check pod status in the kube-system namespace (kubectl get pods -n kube-system).
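
As a starting point for the firewall rules mentioned above, the following opens the two core ports on a control plane node. Worker nodes, your CNI, and NodePort services (30000-32767/tcp) need additional ports, so consult the official Kubernetes ports-and-protocols documentation for the full list:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload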

By following these steps, you will have a basic Kubernetes cluster running on RHEL 9, ready for deploying and managing your containerized applications. Remember to explore further configurations like storage classes, ingresses, and monitoring solutions as you grow your cluster.

Source: https://kifarunix.com/install-and-setup-kubernetes-cluster-on-rhel-9/
