A Step-by-Step Guide for minikube multi-node cluster with Kubernetes 1.31

In this guide, we’ll explore how to set up and test the latest Kubernetes 1.31 using minikube. We’ll create a 3-node cluster with specific configurations to provide a robust local development environment.

What’s New in Kubernetes 1.31

Kubernetes 1.31 introduces several new features and improvements:

  • Enhanced security with stricter default settings.
  • Improved scalability and performance optimizations.
  • New APIs and extended support for existing ones.
  • Simplified network policy management.
  • Better support for edge computing and IoT scenarios.

For a detailed overview, check out our latest blog post on the release.

Prerequisites

  • minikube is installed on your machine.
  • VirtualBox installed as the driver (Docker or Podman work as alternatives).
  • Basic understanding of Kubernetes and command-line interface.
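
If you want to confirm the tools are in place before starting, a quick check like the following should do it (the exact versions reported will of course differ on your machine):

$ minikube version
$ kubectl version --client
$ VBoxManage --version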

Setting Up minikube with Kubernetes 1.31

First, ensure that you have the latest version of minikube. Update it if necessary:

$ minikube update-check
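
If update-check reports that your local version is behind, one common way to upgrade on a Linux x86-64 machine is to reinstall the latest binary (this is just one option; other platforms and package managers have their own update paths):

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube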

Then, start your minikube cluster with the following command:

$ minikube start \
    --driver=virtualbox \
    --nodes 3 \
    --cni calico \
    --cpus=2 \
    --memory=2g \
    --kubernetes-version=v1.31.0 \
    --container-runtime=containerd \
    --profile k8s-1-31

Here’s a breakdown of the command:

  • --driver=virtualbox: Specifies VirtualBox as the driver.
  • --nodes 3: Creates a 3-node cluster.
  • --cni calico: Uses Calico for the Container Network Interface.
  • --cpus=2: Allocates 2 CPUs to each node.
  • --memory=2g: Allocates 2GB of RAM to each node.
  • --kubernetes-version=v1.31.0: Specifies Kubernetes version 1.31.0.
  • --container-runtime=containerd: Uses containerd as the container runtime.
  • --profile k8s-1-31: Gives the cluster a unique profile name, so it does not disturb any other minikube clusters you may already have.
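
As a side note, if you plan to reuse the same settings for future clusters, minikube can persist most of them as defaults so you do not have to pass the flags every time. A minimal sketch, assuming the standard minikube config keys:

$ minikube config set driver virtualbox
$ minikube config set cpus 2
$ minikube config set memory 2048
$ minikube config set kubernetes-version v1.31.0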

When you run the minikube start command above, minikube prints logs showing the cluster nodes being created one by one:

  [k8s-1-31] minikube v1.33.1 on Fedora 40
  Specified Kubernetes version 1.31.0 is newer than the newest supported version: v1.30.0. Use `minikube config defaults kubernetes-version` for details.
  Specified Kubernetes version 1.31.0 not found in Kubernetes version list
  Searching the internet for Kubernetes version...
  Kubernetes version 1.31.0 found in GitHub version list
  Using the virtualbox driver based on user configuration
  Starting "k8s-1-31" primary control-plane node in "k8s-1-31" cluster
  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
  Preparing Kubernetes v1.31.0 on containerd 1.7.15 ...
     Generating certificates and keys ...
     Booting up control plane ...
     Configuring RBAC rules ...
  Configuring Calico (Container Networking Interface) ...
     Using image gcr.io/k8s-minikube/storage-provisioner:v5
  Enabled addons: storage-provisioner, default-storageclass
  Verifying Kubernetes components...

  Starting "k8s-1-31-m02" worker node in "k8s-1-31" cluster
...<removed for brevity>...
  Starting "k8s-1-31-m03" worker node in "k8s-1-31" cluster
...<removed for brevity>...
  Done! kubectl is now configured to use "k8s-1-31" cluster and "default" namespace by default

Verifying the brand-new Kubernetes 1.31 cluster

After starting minikube, verify the status of your cluster:

$ kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-1-31       Ready      control-plane   2m27s   v1.31.0
k8s-1-31-m02   Ready      <none>          86s     v1.31.0
k8s-1-31-m03   NotReady   <none>          28s     v1.31.0

You should see three nodes listed. A node that has just joined may briefly report NotReady (as k8s-1-31-m03 does above) while its components start; within a minute or so all three nodes should be Ready, confirming that your 3-node cluster is up and running with Kubernetes 1.31.
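
To dig a little deeper, you can also check the node details and confirm that the kube-system components (including the Calico pods) are healthy on every node:

$ kubectl get nodes -o wide
$ kubectl get pods -n kube-system -o wide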

Once you finish testing, you can stop or delete the minikube cluster by specifying the correct minikube profile, as follows.

$ minikube stop --profile k8s-1-31
 Stopping node "k8s-1-31-m03" …
 Stopping node "k8s-1-31-m02" …
 Stopping node "k8s-1-31" …
 3 nodes stopped.
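
If you want to remove the cluster entirely rather than just stop it, delete it by profile as well:

$ minikube delete --profile k8s-1-31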

Congratulations! You’ve successfully set up and tested Kubernetes 1.31 with minikube and VirtualBox on a 3-node cluster.

Let us set up a multi-node Kubernetes cluster with Podman and minikube

If you prefer Podman or Docker as the driver for minikube, it is just as simple:

$ minikube start \
    --driver=podman \
    --nodes 3 \
    --cni calico \
    --cpus=2 \
    --memory=2g \
    --kubernetes-version=v1.31.0 \
    --container-runtime=containerd \
    --profile k8s-1-31-podman

Please note, I have used --driver=podman and a new minikube profile (--profile k8s-1-31-podman) for this new cluster.
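
If the start command complains about the Podman driver, a quick sanity check that Podman itself is installed and responding usually helps:

$ podman version
$ podman info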

If I check my nodes now, I can see that all three nodes are running as follows.

$ kubectl get nodes
NAME                  STATUS   ROLES           AGE     VERSION
k8s-1-31-podman       Ready    control-plane   2m34s   v1.31.0
k8s-1-31-podman-m02   Ready    <none>          2m16s   v1.31.0
k8s-1-31-podman-m03   Ready    <none>          2m3s    v1.31.0

Since you used different profiles for different clusters, you can always switch back to the other minikube cluster anytime.

Let us list the cluster profiles.

$  minikube profile list
|-----------------|------------|------------|----------------|------|---------|---------|-------|----------------|--------------------|
|     Profile     | VM Driver  |  Runtime   |       IP       | Port | Version | Status  | Nodes | Active Profile | Active Kubecontext |
|-----------------|------------|------------|----------------|------|---------|---------|-------|----------------|--------------------|
| cluster2-podman | podman     | containerd |                | 8443 | v1.30.0 | Unknown |     1 |                |                    |
| k8s-1-31        | virtualbox | containerd | 192.168.59.176 | 8443 | v1.31.0 | Stopped |     3 |                |                    |
| k8s-1-31-podman | podman     | containerd | 10.88.0.4      | 8443 | v1.31.0 | Running |     3 | *              | *                  |
|-----------------|------------|------------|----------------|------|---------|---------|-------|----------------|--------------------|

If I want to switch to my minikube + VirtualBox cluster based on Kubernetes 1.31, I can use the following command.

$ minikube profile k8s-1-31
  minikube profile was successfully set to k8s-1-31
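
Under the hood, each minikube profile also gets a kubectl context with the same name, so you can inspect or switch contexts directly with kubectl if you prefer:

$ kubectl config get-contexts
$ kubectl config use-context k8s-1-31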

Also, remember to stop or delete unused minikube profiles and clusters to save your computing resources.

$ minikube stop --profile=k8s-1-31-podman

# Switch to other minikube profile
$ minikube profile k8s-1-31
  minikube profile was successfully set to k8s-1-31

# Start the cluster if it's stopped; skip otherwise.
$ minikube start --profile k8s-1-31

# Check cluster nodes
$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-1-31       Ready    control-plane   17m   v1.31.0
k8s-1-31-m02   Ready    <none>          64s   v1.31.0
k8s-1-31-m03   Ready    <none>          22s   v1.31.0
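
As a quick smoke test, you can schedule a small workload and watch it spread across the three nodes (the deployment name and image below are arbitrary examples):

$ kubectl create deployment hello-131 --image=nginx --replicas=3
$ kubectl get pods -o wide

# Clean up the test workload when done
$ kubectl delete deployment hello-131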

These environments are perfect for experimenting with the latest features and enhancements in Kubernetes 1.31.

Author

  • Mohamed BEN HASSINE

    Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, Web, API and Cloud technologies for over 12 years and is still keen on learning new things. He currently works as a Cloud / Application Architect in Paris, designing cloud native solutions and APIs (REST, gRPC) using cutting-edge technologies (GCP, Kubernetes, APIGEE, Java, Python).
