Kubeadm is a tool that makes setting up Kubernetes clusters easier by offering two simple commands, kubeadm init and kubeadm join, which follow best practices.
In this blog, we’ll walk through the step-by-step process of installing a Kubernetes cluster using Kubeadm, with one control plane (master) node and two worker nodes.
Kubeadm Prerequisites
Before getting started, make sure you have the following:
- A compatible Linux host (such as Debian-based or Red Hat-based distributions). In this guide, we’re using Ubuntu, which is a Debian-based OS.
- Each machine should have at least 2 GB of RAM and 2 CPUs.
- All machines in the cluster must have full network connectivity with each other.
- Every node needs a unique hostname, MAC address, and product UUID (see the quick check below).
- Make sure all necessary ports are open for both the control plane and worker nodes. You can refer to the “Ports and Protocols” section of the Kubernetes documentation for details.
Make sure to follow these requirements to ensure a smooth setup.
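To verify that the hostname, MAC address, and product UUID are unique on each node, you can run these standard checks:
# Print the node's hostname
hostname
# List network interfaces along with their MAC addresses
ip link
# Print the product UUID
sudo cat /sys/class/dmi/id/product_uuid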
Kubeadm for Kubernetes Certification Exams
Kubeadm plays a significant role in both the Certified Kubernetes Administrator (CKA) and Certified Kubernetes Security Specialist (CKS) exams.
CKA: Expect tasks involving cluster bootstrapping using kubeadm commands.
CKS: Cluster upgrades using kubeadm will be part of the exam.
Exam Tips:
- Utilize Coupon Codes: Take advantage of available discounts for CKA/CKAD/CKS certifications before potential price hikes.
Set up the container runtime (containerd)
Run these commands on all nodes. I’m using Ubuntu on each node.
Enable IPv4 packet forwarding
# Add sysctl parameters for the setup, persistent across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply sysctl parameters without rebooting
sudo sysctl --system
Run the command sudo sysctl net.ipv4.ip_forward to verify that net.ipv4.ip_forward is set to 1.
Load the necessary kernel module dependencies
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
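To confirm that both kernel modules are loaded, you can check with lsmod:
# Each command should print a matching line if the module is loaded
lsmod | grep overlay
lsmod | grep br_netfilter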
Install containerd
Add Docker’s official GPG key:
sudo apt-get update
sudo apt-get -y install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
Add the Docker repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install containerd:
sudo apt-get update
sudo apt-get -y install containerd.io
For more details, refer to the Official installation documentation.
Configure systemd cgroup driver for containerd
First, create a containerd configuration file at /etc/containerd/config.toml:
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Next, enable the systemd cgroup driver for CRI. In the file /etc/containerd/config.toml, under the section [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], set SystemdCgroup = true.
Alternatively, you can run this command to update the config:
sudo sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" "/etc/containerd/config.toml"
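Either way, you can confirm the change took effect:
# The output should show SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml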
Restart containerd
sudo systemctl restart containerd
To check if containerd is running, verify its status:
sudo systemctl status containerd
Install kubeadm, kubelet, and kubectl
Run these commands on all nodes. The steps below are for Kubernetes v1.30.
- Install required packages:
First, install the necessary packages like apt-transport-https, ca-certificates, curl, and gpg:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
- Add the Kubernetes GPG key:
Download the public signing key for the Kubernetes package repositories:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
- Add the Kubernetes 1.30 repository:
Add the Kubernetes apt repository to your system:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
- Install kubeadm, kubelet, and kubectl:
Update the package list and install the Kubernetes tools:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
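To confirm that the tools are installed and held at their current versions, you can print their versions:
kubeadm version
kubectl version --client
kubelet --version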
- (Optional) Enable the kubelet service:
Before running kubeadm, you can enable and start the kubelet service:
sudo systemctl enable --now kubelet
Initialize the Kubernetes control plane
Run the following steps only on the control plane node.
- Initialize the control plane:
Use kubeadm init to set up the control plane. You also need to choose a Pod network add-on for communication between Pods. We’ll use Calico CNI, so pass the --pod-network-cidr=192.168.0.0/16 option.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
After running the command, you’ll get a kubeadm join command at the end. It will look something like this:
sudo kubeadm join <control-plane-ip>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash <hash>
Copy this command for later use when adding worker nodes.
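If you misplace it, you can regenerate a join command on the control plane node at any time:
sudo kubeadm token create --print-join-command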
- Set up kubectl access to the cluster:
To access the cluster with kubectl, run these commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
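Alternatively, if you are the root user, you can point kubectl at the admin kubeconfig directly:
export KUBECONFIG=/etc/kubernetes/admin.conf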
You can now check the node status:
kubectl get nodes
Initially, the control plane node will show as NotReady because the network plugin hasn’t been set up yet.
- Set up a Pod network
You need to install a Container Network Interface (CNI) to allow Pods to communicate.
Install the Calico operator:
Deploy Calico using the following command:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
Apply Calico custom resources:
Configure Calico with these custom resources:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
Verify Calico installation:
Check that all Calico Pods are running:
kubectl get pods -n calico-system
Once the network is set up, your control plane node should now show as Ready.
Join the worker nodes
On all worker nodes, make sure containerd, kubeadm, kubectl, and kubelet are installed. Then, use the kubeadm join command you saved earlier to join the nodes to the cluster.
sudo kubeadm join <control-plane-ip>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash <hash>
Check the cluster state
On the control plane node, check if the worker nodes have joined the cluster:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-xxx-xx-xx-xx Ready worker 12m v1.30.2
ip-yyy-yy-yy-yy Ready worker 3m51s v1.30.2
ip-zzz-zz-z-zz Ready control-plane 31m v1.30.2
You can also label the worker nodes with the worker role:
kubectl label node <node-name> node-role.kubernetes.io/worker=worker
Verify the cluster workload
To check the status of all Pods running on the cluster:
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-6fcb65fbd5-n4wsn 1/1 Running 0 35m
calico-apiserver calico-apiserver-6fcb65fbd5-nnggl 1/1 Running 0 35m
calico-system calico-kube-controllers-6f459db86d-mg657 1/1 Running 0 35m
calico-system calico-node-ctc9q 1/1 Running 0 35m
calico-system calico-node-dmgt2 1/1 Running 0 18m
calico-system calico-node-nw4t5 1/1 Running 0 9m49s
calico-system calico-typha-774d5fbdb7-s7qsg 1/1 Running 0 35m
calico-system calico-typha-774d5fbdb7-sxb5c 1/1 Running 0 9m39s
calico-system csi-node-driver-bblm8 2/2 Running 0 35m
calico-system csi-node-driver-jk4sz 2/2 Running 0 18m
calico-system csi-node-driver-tbrrj 2/2 Running 0 9m49s
kube-system coredns-7db6d8ff4d-5f7s5 1/1 Running 0 37m
kube-system coredns-7db6d8ff4d-qj9r8 1/1 Running 0 37m
kube-system etcd-ip-zzz-zz-z-zz 1/1 Running 0 37m
kube-system kube-apiserver-ip-zzz-zz-z-zz 1/1 Running 0 37m
kube-system kube-controller-manager-ip-zzz-zz-z-zz 1/1 Running 0 37m
kube-system kube-proxy-dq8k4 1/1 Running 0 9m49s
kube-system kube-proxy-t2sw9 1/1 Running 0 18m
kube-system kube-proxy-xd6nn 1/1 Running 0 37m
kube-system kube-scheduler-ip-zzz-zz-z-zz 1/1 Running 0 37m
tigera-operator tigera-operator-76ff79f7fd-jj4kf 1/1 Running 0 35m
Deploy A Sample Nginx Application
Now that we have all the components to make the cluster and applications work, let’s deploy a sample Nginx application and see if we can access it over a NodePort.
Create an Nginx deployment. Execute the following directly on the command line. It deploys the pod in the default namespace.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
Expose the Nginx deployment on a NodePort 32000
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32000
EOF
Check the pod status using the following command.
kubectl get pods
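You can also confirm that the service exists and is exposed on the expected NodePort:
kubectl get svc nginx-service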
Once the deployment is up, you should be able to access the Nginx home page on the allocated NodePort.
For example, you can send a request to port 32000 on any node, where <node-ip> is a placeholder for a node’s reachable IP address:
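# Fetch the Nginx welcome page over the NodePort
curl http://<node-ip>:32000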
Potential Kubeadm Cluster Issues
- Insufficient Master Resources: A common issue arises when the master node lacks sufficient CPU (minimum 2vCPU) or memory (minimum 2GB) to handle cluster operations. This can manifest as pod resource exhaustion and negatively impact cluster stability.
- Network Connectivity Issues: In cases where firewalls block communication between nodes, pods might not be able to reach the API server or other services on the control plane node. Verifying open firewall rules on required Kubernetes ports across all nodes is crucial for ensuring proper communication (see the quick connectivity check after this list).
- Calico Network Plugin Conflicts: Overlapping IP address ranges between the node network and pod network (managed by Calico) can cause unexpected behavior and pod restarts. Assigning distinct IP ranges for nodes and pods is essential for maintaining a functional network configuration within the cluster.
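As a quick connectivity check for the second issue above, you can test from a worker node whether the API server port on the control plane is reachable (assuming netcat is installed; <control-plane-ip> is a placeholder):
# A successful connection confirms port 6443 is open
nc -vz <control-plane-ip> 6443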
How Does Kubeadm Work?
- Kubeadm meticulously validates the system state through pre-flight checks, ensuring the environment meets the minimum requirements for running a Kubernetes cluster.
- Subsequently, it fetches all the essential container images required for the cluster from the official registry (registry.k8s.io).
- Kubeadm generates a robust set of TLS certificates to secure communication within the cluster. These certificates are securely stored within the /etc/kubernetes/pki directory.
- Kubeadm creates kubeconfig files for each cluster component, granting them the necessary permissions to interact with the API server. These files are placed in the /etc/kubernetes directory for easy access.
- Kubeadm starts the kubelet service, a crucial component responsible for managing pods on each node.
- It then generates static pod manifests defining the configurations for all control plane components. These manifests are stored in the /etc/kubernetes/manifests directory (see the listing after this list).
- Leveraging the previously generated pod manifests, Kubeadm initiates the control plane components, establishing the core functionalities of the Kubernetes cluster.
- Kubeadm seamlessly integrates essential services like CoreDNS (cluster DNS) and Kube-proxy (load balancer) into the cluster, ensuring efficient service discovery and network communication.
- Finally, Kubeadm generates a unique node bootstrap token. This token serves as a secure credential for worker nodes to join the established control plane.
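On the control plane node, you can inspect the artifacts from the list above directly after kubeadm init completes:
# Static pod manifests for the control plane components
ls /etc/kubernetes/manifests
# TLS certificates generated by kubeadm
ls /etc/kubernetes/pki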
Kubeadm FAQs
How can I leverage custom CA certificates with Kubeadm?
By default, Kubeadm generates its own set of TLS certificates. However, you can incorporate your own certificates by placing them within the designated directory, /etc/kubernetes/pki. Kubeadm will prioritize existing certificates found in this location and refrain from overwriting them during the initialization process.
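A minimal sketch, assuming ca.crt and ca.key are your own CA certificate and key pair:
# Place your CA pair where kubeadm expects it, before running kubeadm init
sudo mkdir -p /etc/kubernetes/pki
sudo cp ca.crt ca.key /etc/kubernetes/pki/
# kubeadm init will then reuse this CA instead of generating a new one
sudo kubeadm init --pod-network-cidr=192.168.0.0/16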
Conclusion
In this article, we explored the step-by-step process of installing Kubernetes using kubeadm.
For DevOps engineers, grasping the fundamental components of a Kubernetes cluster is essential. While managed Kubernetes services are widely adopted by companies, relying on them alone can leave gaps in your foundational knowledge of Kubernetes.
Setting up Kubernetes with Kubeadm is an excellent method for hands-on learning and experimentation.
[ 30% OFF ] Kubernetes Certification Coupon (CKAD , CKA , CKS)
Save 30% on all the Linux Foundation training and certification programs. This is a limited-time offer for this month. This offer is applicable for CKA, CKAD, CKS, KCNA, LFCS, PCA FINOPS, NodeJS, CHFA, and all the other certification, training, and BootCamp programs.
Coupon: use code TECK30 at checkout
Hurry Up: Offer Ends Soon.