What’s New in Kubernetes 1.31

The Kubernetes v1.31 Release Theme is “Elli”.

Kubernetes v1.31’s Elli is a cute and joyful dog, with a heart of gold and a nice sailor’s cap, as a playful wink to the huge and diverse family of Kubernetes contributors.

Kubernetes v1.31 marks the first release after the project successfully celebrated its first 10 years. Kubernetes has come a very long way since its inception, and it is still moving in exciting new directions with each release. After 10 years, it is awe-inspiring to reflect on the effort, dedication, skill, wit, and tireless work of the countless Kubernetes contributors who have made this a reality.

Kubernetes 1.31 logo

Kubernetes 1.31 New Features

Enhanced Security

AppArmor Support: Protect your applications from potential vulnerabilities by enforcing AppArmor security profiles on pods and containers; the profile is now set through securityContext fields instead of annotations.

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app-container
      image: my-app-image
      securityContext:
        appArmorProfile:
          type: RuntimeDefault # Or Localhost with a custom profile name
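If you manage your own AppArmor profiles on the nodes, the Localhost type references a profile already loaded on the host; a minimal sketch (the profile name k8s-my-app is an example and must exist on each node):

apiVersion: v1
kind: Pod
metadata:
  name: my-app-custom-profile
spec:
  containers:
    - name: my-app-container
      image: my-app-image
      securityContext:
        appArmorProfile:
          type: Localhost
          localhostProfile: k8s-my-app # assumed profile name, loaded on the node beforehand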

PodDisruptionBudget (PDB) with Unhealthy Pod Eviction Policy: Control whether running-but-not-yet-healthy pods may be evicted even when the disruption budget would otherwise block it, so that node drains are not held up by pods that will never become healthy.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
  unhealthyPodEvictionPolicy: AlwaysAllow # Stable in 1.31; allowed values: IfHealthyBudget (default) or AlwaysAllow

Resource Management

Pod-Level Resource Limits: Get more precise control over resource allocation by defining an overall ceiling on resource usage for the pod as a whole, in addition to per-container limits.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  resources: # Pod-level ceiling for the pod as a whole (requires the pod-level resources feature gate)
    limits:
      cpu: "4"
      memory: "4Gi"
  containers:
    - name: my-container
      image: my-image
      resources:
        limits:
          cpu: "2"
          memory: "2Gi"

In this example, my-container has resource limits defined at the container level, while the pod declares an additional pod-level limit. The pod’s total resource consumption cannot exceed the pod-level limits, regardless of how individual containers are configured.

Traffic Management

Multiple Service CIDRs: Extend the pool of ClusterIP addresses available to Services by adding CIDR ranges through the ServiceCIDR API, which reached beta in Kubernetes 1.31. This lets the Service IP range grow without recreating the cluster.

apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
    - 10.10.0.0/16 # Additional range from which ClusterIPs can be allocated

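Each ServiceCIDR object holds at most one CIDR per IP family, so an extra range such as 10.20.0.0/16 would go in a second ServiceCIDR object. Once applied, you can list the ranges the cluster is using (the file name is a placeholder):

kubectl apply -f extra-service-cidr.yaml
kubectl get servicecidrs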
Service trafficDistribution field

In Kubernetes 1.31, the trafficDistribution field on the Service API object has advanced to beta. It lets you express a preference for how traffic should be routed to a Service’s endpoints: the value PreferClose, for example, asks the implementation to prefer endpoints that are topologically close to the client (such as those in the same zone), which can reduce cross-zone latency and traffic cost for latency-sensitive workloads.

An example of using the trafficDistribution field is as follows:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  trafficDistribution: PreferClose # Prefer topologically close endpoints
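Whether the preference is being honored can be observed on the EndpointSlices that back the Service, which may carry per-endpoint zone hints when the preference is in effect:

kubectl get endpointslices -l kubernetes.io/service-name=my-service -o yaml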

Scheduling hints for the VolumeRestrictions plugin

The kube-scheduler’s VolumeRestrictions plugin enforces restrictions on the kinds of volumes pods can use, for example preventing pods from sharing certain volumes in conflicting modes. In Kubernetes 1.31, this plugin gained support for scheduling (queueing) hints: the scheduler is told which cluster events, such as the deletion of a conflicting pod or PVC, could make a previously unschedulable pod schedulable again, so it requeues only the pods that are actually affected instead of retrying all of them.

No pod-level configuration is needed; the improvement applies automatically to pods whose scheduling involves restricted volumes. For example, a pod that references a PersistentVolumeClaim falls under the VolumeRestrictions plugin and benefits from the more targeted requeueing:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc

Randomized Pod Selection Algorithm

In earlier versions of Kubernetes, ReplicaSets used a deterministic method to choose which pods to remove during downscaling. This frequently led to an imbalanced pod distribution, with certain nodes ending up with disproportionately fewer pods than others, which could eventually affect resource usage and cluster performance.

Kubernetes 1.31 adds a randomized technique for choosing pods from the pool of eligible candidates during downscaling. This avoids the imbalance described above and improves overall cluster utilization.

The actual randomized pod selection is implemented in the Kubernetes controller manager, but it helps to visualize the idea with a more basic example. Here is a simulation of randomized pod selection in Python:

import random

def random_pod_selection(pods):
    """Selects a random pod for termination from a list of pods.

    Args:
        pods: A list of pod objects.

    Returns:
        The selected pod, or None if the list is empty.
    """
    if not pods:
        return None
    return random.choice(pods)
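For instance, running the helper above on a few placeholder pod names (plain strings here instead of real pod objects) looks like this:

pods = ["my-app-7d4f8-abc12", "my-app-7d4f8-def34", "my-app-7d4f8-ghi56"]
victim = random_pod_selection(pods)
print(f"Pod selected for termination: {victim}")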

Persistent Volume Reclaim Policy

The reclaim policy determines what happens to a PersistentVolume (PV) when its associated PersistentVolumeClaim (PVC) is deleted. Kubernetes 1.31 improves this area.
The following reclaim policies are available:

Retain: The PV is kept after the PVC is deleted and must be reclaimed manually.
Recycle (deprecated): The PV is scrubbed and made available for reuse; dynamic provisioning is the recommended alternative.
Delete: The PV is deleted after its PVC is deleted.
The enhancements in Kubernetes 1.31 include:

Finalizers: Finalizers can be added to PVs to prevent them from being deleted until a set of conditions is satisfied, so the underlying storage is not released prematurely.
Reclaim policy validation: Stricter validation of reclaim policies helps catch configuration mistakes.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /export/path
    server: 192.168.1.10
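For the finalizer enhancement mentioned above, a finalizer listed in the PV’s metadata keeps the object from being removed until that entry is cleared; a minimal sketch (the finalizer name example.com/backup-before-delete is hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-protected-pv
  finalizers:
    - example.com/backup-before-delete # hypothetical finalizer; deletion blocks until it is removed
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /export/path
    server: 192.168.1.10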
  • Persistent Volume Last Phase Transition Time: Track when a PersistentVolume last changed phase (for example, to Released) via a timestamp in its status, to help with debugging and lifecycle management.
  • Jobs with Retriable and Non-Retriable Pod Failures: Distinguish recoverable pod failures from non-retriable ones within Jobs to enable smarter job management (see the sketch after this list).
  • Elastic Indexed Jobs: Indexed Jobs can be scaled up or down by mutating completions and parallelism together, allowing dynamic workload adjustments.
  • Enhanced Ingress Connectivity with Kube-Proxy: Improve the reliability of Ingress connections through better kube-proxy connection handling.
  • Consider Terminating Pods in Deployments: Deployments gain a way to indicate how pods that are still terminating should be taken into account when scaling.
  • Declarative Node Maintenance: Manage planned node maintenance declaratively through API objects, simplifying the procedure.
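For the retriable vs. non-retriable distinction, the Job API’s podFailurePolicy lets you decide, per failure cause, whether to fail the Job immediately or ignore the failure; a minimal sketch (the image name and exit code are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  completions: 4
  backoffLimit: 6
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: my-job-image
  podFailurePolicy:
    rules:
      - action: FailJob # Non-retriable: exit code 42 fails the whole Job at once
        onExitCodes:
          containerName: worker
          operator: In
          values: [42]
      - action: Ignore # Retriable: node disruptions do not count against backoffLimit
        onPodConditions:
          - type: DisruptionTarget
            status: "True"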

Deprecations

KubeProxy Requirements

From Kubernetes 1.31 onwards, kube-proxy’s nftables mode requires kernel 5.13 or later and nft version 1.0.1 or later. Kube-proxy uses the nft command-line utility to configure Netfilter for network traffic management, and it depends on kernel features that were introduced in 5.13. If you are running older versions, upgrade the kernel and the nft tool before moving to Kubernetes 1.31.

Here, let me help you 😉

# Upgrade the nft command-line tool (provided by the nftables package on Debian/Ubuntu)
apt-get update && apt-get install -y nftables
nft --version # Should report 1.0.1 or later

# Check the current kernel version
uname -r

# Upgrade the kernel (example: building 5.13.19 from source; a packaged kernel
# from your distribution is usually the simpler route)
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.13.19.tar.xz
tar -xvf linux-5.13.19.tar.xz
cd linux-5.13.19
make olddefconfig && make -j"$(nproc)"
make modules_install && make install
# Update GRUB so the new kernel is used on the next boot
update-grub

Version field: In Kubernetes v1.31, the .status.nodeInfo.kubeProxyVersion field of Nodes is deprecated and will be removed in a later release. The field is deprecated because its value was, and remains, inaccurate: it is set by the kubelet, which has no reliable information about the kube-proxy version, or even whether kube-proxy is running at all. In v1.31, the kubelet no longer attempts to set this field on its associated Node, and the DisableNodeKubeProxyVersion feature gate is enabled by default.
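If you have automation that reads this field, check what it currently reports before migrating away from it (the node name is a placeholder):

kubectl get node my-node -o jsonpath='{.status.nodeInfo.kubeProxyVersion}'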

In-Tree Cloud Provider Code Removal

The removal of the in-tree cloud provider code is one of the biggest changes in Kubernetes 1.31. All cloud integrations now live in external cloud controller managers, which keeps the Kubernetes core vendor-neutral and lets providers evolve independently. If you previously relied on an in-tree provider, switch the relevant components to the external mode; below is a minimal kubeadm-style sketch (the exact setup depends on your provider’s external cloud-controller-manager):

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external # Cloud integration is handled by an external cloud-controller-manager

Make sure to

  • Identify the deprecated APIs you are currently using (see the commands after this list).
  • Find the recommended replacement API in the Kubernetes documentation.
  • Update your code to use the replacement API.
  • Test your changes thoroughly.
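A quick way to spot deprecated API usage is to ask the cluster itself; the manifest path below is a placeholder:

# List the API group/versions the cluster currently serves
kubectl api-versions

# Inspect a resource under its replacement group/version
kubectl explain poddisruptionbudget --api-version=policy/v1

# Server-side dry run: the API server returns warnings when a manifest uses deprecated API versions
kubectl apply --dry-run=server -f my-manifest.yaml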

For more, visit the Kubernetes Deprecation Guide

Conclusion

This blog post is only a jumping-off point for exploring Kubernetes 1.31. To learn about every change in this release, I highly recommend reading the full changelog. Go ahead and visit the links below for even more:

Kubernetes Deprecation Policy: https://kubernetes.io/docs/reference/using-api/deprecation-policy/

Kubernetes 1.31 Changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md

Author

  • Mohamed BEN HASSINE

    Mohamed BEN HASSINE is a Hands-On Cloud Solution Architect based out of France. He has been working with Java, Web, API and Cloud technologies for over 12 years and is still going strong, always learning new things. He currently plays the role of Cloud / Application Architect in Paris, where he designs cloud native solutions and APIs (REST, gRPC) using cutting-edge technologies (GCP, Kubernetes, APIGEE, Java, Python).
