An overview of new features in Kubernetes v1.29

Kubernetes v1.29 is the third and final release of 2023, containing 49 enhancements.

The first release of the year, v1.27, had nearly 60 enhancements, and the second, v1.28, had 46. Even though Kubernetes is almost 10 years old, it is still very much alive!

In this release, 19 enhancements are entering the Alpha stage, 19 are graduating to Beta, and 11 are graduating to Stable.

Clearly, plenty of new features are still being introduced.

CEL-based CRD validation rules officially reach GA

This feature matters to anyone who develops on top of Kubernetes, because in most cases we use CRDs to extend Kubernetes functionality.

When extending Kubernetes through CRDs, validation support is necessary to provide a better user experience and more reliable input checking.

CRDs natively support two kinds of validation:

  • Structural schema validation based on the CRD definition
  • OpenAPI v3 validation rules

For example:

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.13.0
  name: kongplugins.configuration.konghq.com
spec:
  group: configuration.konghq.com
  names:
    categories:
    - kong-ingress-controller
    kind: KongPlugin
    listKind: KongPluginList
    plural: kongplugins
    shortNames:
    - kp
    singular: kongplugin
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: KongPlugin is the Schema for the kongplugins API.
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          protocols:
            description: Protocols configures plugin to run on requests received on
              specific protocols.
            items:
              description: KongProtocol is a valid Kong protocol. This alias is necessary
                to deal with https://github.com/kubernetes-sigs/controller-tools/issues/342
              enum:
              - http
              - https
              - grpc
              - grpcs
              - tcp
              - tls
              - udp
              type: string
            type: array
        type: object
        ...
        x-kubernetes-validations:
        - message: Using both config and configFrom fields is not allowed.
          rule: '!(has(self.config) && has(self.configFrom))'
        - message: Using both configFrom and configPatches fields is not allowed.
          rule: '!(has(self.configFrom) && has(self.configPatches))'
        - message: The plugin field is immutable
          rule: self.plugin == oldSelf.plugin

In the example above, a custom resource named KongPlugin is defined, and openAPIV3Schema defines its OpenAPI schema validation rules.

However, what these built-in rules can express is fairly limited. To implement richer validation rules, you can use:

  • An admission webhook
  • A custom validator

However, both admission webhooks and custom validators are separate from the CRD itself, which makes CRDs harder to develop and raises their subsequent maintenance costs.

To solve these problems, the Kubernetes community introduced CEL (Common Expression Language)-based validation rules for CRDs. These rules are written directly in the CRD manifest, without any admission webhook or custom validator, greatly reducing the cost of developing and maintaining CRDs.

In Kubernetes v1.29, CEL-based CRD validation reaches GA. You only need x-kubernetes-validations to define validation rules.

CEL is lightweight and safe enough to run directly in kube-apiserver. Let's take a look at an example:

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.13.0
  name: kongplugins.configuration.konghq.com
spec:
  group: configuration.konghq.com
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        description: KongPlugin is the Schema for the kongplugins API.
        properties:
          plugin:
        ...
        x-kubernetes-validations:
        - message: Using both config and configFrom fields is not allowed.
          rule: '!(has(self.config) && has(self.configFrom))'
        - message: The plugin field is immutable
          rule: self.plugin == oldSelf.plugin

For example, in the rule self.plugin == oldSelf.plugin, self and oldSelf represent the resource object after and before the change, respectively. Once the plugin field has been set, it may not be modified.

In addition, CEL has a very rich feature set, which you can experience through the online playground: https://playcel.undistro.io/
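
For instance, here are a few hypothetical rules (the replicas, host, and protocols fields are invented for illustration) that exercise CEL's comparison, string, and list functions:

x-kubernetes-validations:
- message: replicas must be between 1 and 10.
  rule: self.replicas >= 1 && self.replicas <= 10
- message: host must be a subdomain of example.com.
  rule: self.host.endsWith('.example.com')
- message: at most 5 protocols may be listed.
  rule: size(self.protocols) <= 5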

As mentioned earlier, this feature was first introduced more than two years ago. It reached Beta and was enabled by default in Kubernetes v1.25, and it has now finally reached GA.

In addition, the Kubernetes Gateway API project is removing all of its admission webhooks in favor of CEL-based validation rules, which is probably the largest use case in the community at present.

Reserving NodePort ranges for dynamic and static allocation reaches Stable


In Kubernetes, the native Service object includes a NodePort type for exposing services within the cluster to external entities. Presently, creating a new NodePort and specifying a fixed port entails a certain risk. There is a possibility that the designated port has already been allocated, resulting in conflicts and Service creation failures.

This Kubernetes Enhancement Proposal (KEP) introduces dynamic and static reservation within the port range for NodePort-type Services, offering two methods of port configuration:

  1. Automatically and Randomly Generated: Kubernetes will generate the port automatically based on predefined rules.
  2. Manual Setting: Users can manually set the port according to their requirements.

As an example, the kube-apiserver can control the port range that NodePort can utilize using the --service-node-port-range flag, with the default range being 30000-32767. The calculation formula proposed in this KEP is as follows:

Band Offset = min(max($min, $range-size / $step), $max)

Here:

  • Service node port range: 30000-32767
  • Range size: 32767 - 30000 = 2767
  • Band Offset: min(max(16, 2767/32), 128) = min(86, 128) = 86
  • Static Band Start: 30000
  • Static Band End: 30086

Following this calculation, 30000-30086 is considered the static segment, while the remaining ports are considered the dynamic segment.

Implementing this approach mitigates the risk of conflicts when creating NodePort type Services with fixed ports, enhancing overall cluster stability.

 ┌─────────────┬─────────────────────────────────────────────┐
 │   static    │                    dynamic                  │
 └─────────────┴─────────────────────────────────────────────┘

 ◄────────────► ◄────────────────────────────────────────────►
30000        30086                                          32767


When users wish to specify a NodePort port themselves, conflicts are less likely to occur if the chosen port falls within the static range, as outlined in the previous paragraph.

Primarily, this feature is an internal aspect of Kubernetes, and for most scenarios, a simple principle suffices: “When manually selecting the NodePort, strive to choose a port within the static range mentioned earlier.” If the user-specified port lies within the dynamic range and has not been allocated, the creation process will also succeed.

This calculation method is derived from KEP-3070 and is employed to allocate ClusterIP to Services, ensuring a streamlined and effective approach for port allocation within the Kubernetes ecosystem.
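
As a sketch (names and the chosen port are illustrative), a Service that pins its NodePort inside the static band looks like this:

apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30010  # inside the 30000-30086 static band, so it is never handed out dynamically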

Sidecar Containers feature reaches Beta and is enabled by default


Sidecars serve as auxiliary containers that augment the primary container’s functionality. While extensively utilized in Service Mesh scenarios, many users also deploy them in non-Service Mesh contexts, such as for logging, monitoring, and various other purposes.

However, issues have arisen in the past regarding the use of Sidecars in these scenarios. One notable problem is the lack of life cycle management for Sidecar containers.

For instance, when a Pod is deleted, the absence of proper life cycle management for the Sidecar container can lead to a synchronization mismatch between the life cycles of these services and the main container, adversely impacting service reliability.

This Kubernetes Enhancement Proposal (KEP) addresses these challenges by defining the Sidecar container as part of the init containers in the Pod specification.

Additionally, the KEP specifies that the Sidecar container should adhere to an “always restart” policy. By incorporating Sidecars into the init containers with a consistent restart policy, this proposal aims to enhance the life cycle management of Sidecar containers, promoting better synchronization and reliability for services in both Service Mesh and non-Service Mesh scenarios.

apiVersion: v1
kind: Pod
metadata:
  name: mohamedbenhassine-pod
spec:
  initContainers:
  - name: log
    image: mohamedbenhassine/fluentbit
    restartPolicy: Always
    ...

In Kubernetes v1.29, this feature is enabled by default, and sidecar containers are stopped in the reverse order of their startup.

This ensures that the main containers are stopped first, and it makes the life cycle of every container in the Pod easier to control.

PreStop Hook introduces Sleep action (Alpha)

This KEP is also interesting, mainly to simplify one of the most common requirements.

Many applications need to drain connections when shutting down to avoid affecting user traffic, so a PreStop hook is often configured to do some processing, or simply to wait, before shutdown.

However, the current PreStop Hook only supports two types: exec and httpGet.

This KEP is intended to implement a native sleep operation, such as:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.3
        lifecycle:
          preStop:
            sleep:
              seconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80

This is very simple. Before this KEP, you had to use something like exec with sh -c "sleep 5", which requires a shell and the sleep binary to be present in the container image.
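
For comparison, the old exec-based equivalent at the container level looks like this:

lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "sleep 5"]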

However, this feature is currently Alpha; whether it eventually reaches GA depends on community feedback.

Improve service account token binding mechanism (Alpha)

In Kubernetes, service account tokens are an indispensable part of security. They are used to authenticate individual workloads in the cluster and prevent unauthorized access.

Kubernetes v1.29 further strengthens the protection of these tokens: a token can now be bound to a specific Pod instance, so a leaked token cannot be reused elsewhere. This effectively ties the token to the Pod's life cycle, greatly reducing the chance of an attacker exploiting a stolen token.

In v1.29, kube-apiserver has the following feature gates controlling this behavior. In addition to KEP-4193, this release also advances KEP-2799, which reduces the number of secret-based service account tokens; this helps shorten token validity and minimizes the attack surface.

LegacyServiceAccountTokenCleanUp=true|false (BETA - default=true)
ServiceAccountTokenJTI=true|false (ALPHA - default=false)
ServiceAccountTokenNodeBinding=true|false (ALPHA - default=false)
ServiceAccountTokenNodeBindingValidation=true|false (ALPHA - default=false)
ServiceAccountTokenPodNodeInfo=true|false (ALPHA - default=false)
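
To experiment with the alpha behavior, the corresponding gates can be enabled on kube-apiserver, for example:

kube-apiserver --feature-gates=ServiceAccountTokenJTI=true,ServiceAccountTokenNodeBinding=true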

Kubelet Resource Metrics reaches GA

This is a KEP with a long history: it took 5 years from the initial proposal to GA, and many interesting things happened along the way. The metrics involved this time are the following:

  • container_cpu_usage_seconds_total
  • container_memory_working_set_bytes
  • container_start_time_seconds
  • node_cpu_usage_seconds_total
  • node_memory_working_set_bytes
  • pod_cpu_usage_seconds_total
  • pod_memory_working_set_bytes
  • resource_scrape_error

Here is an example of the output:

# HELP container_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-55968cc89d-bhhbx"} 0.195744 1691361886865
# HELP container_memory_working_set_bytes [STABLE] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="coredns",namespace="kube-system",pod="coredns-55968cc89d-bhhbx"} 1.675264e+07 1691361886865
# HELP container_start_time_seconds [STABLE] Start time of the container since unix epoch in seconds
# TYPE container_start_time_seconds gauge
container_start_time_seconds{container="coredns",namespace="kube-system",pod="coredns-55968cc89d-bhhbx"} 1.6913618235901163e+09 1691361823590
# HELP node_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the node in core-seconds
# TYPE node_cpu_usage_seconds_total counter
node_cpu_usage_seconds_total 514578.636 1691361887931
# HELP node_memory_working_set_bytes [STABLE] Current working set of the node in bytes
# TYPE node_memory_working_set_bytes gauge
node_memory_working_set_bytes 1.9501084672e+10 1691361887931
# HELP pod_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the pod in core-seconds
# TYPE pod_cpu_usage_seconds_total counter
pod_cpu_usage_seconds_total{namespace="kube-system",pod="coredns-55968cc89d-bhhbx"} 1.30598 1691361880003
# HELP pod_memory_working_set_bytes [STABLE] Current working set of the pod in bytes
# TYPE pod_memory_working_set_bytes gauge
pod_memory_working_set_bytes{namespace="kube-system",pod="coredns-55968cc89d-bhhbx"} 1.6715776e+07 1691361880003
# HELP resource_scrape_error [STABLE] 1 if there was an error while getting container metrics, 0 otherwise
# TYPE resource_scrape_error gauge
resource_scrape_error 0
# HELP scrape_error [ALPHA] 1 if there was an error while getting container metrics, 0 otherwise
# TYPE scrape_error gauge
scrape_error 0
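
These metrics are served by the kubelet's /metrics/resource endpoint. One way to fetch them is through the API server's node proxy (the node name here is illustrative):

kubectl get --raw "/api/v1/nodes/k8s-test/proxy/metrics/resource"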

Kubernetes Component Health SLIs reach GA

The main purpose of this KEP is to allow each component to expose its own health status so that the SLO of the cluster can be calculated based on the SLI of its health status.

A long time ago, Kubernetes had a ComponentStatus API that could be used to view the status of cluster components. But as shown below, it was deprecated in v1.19.

mohamedbenhassine@k8s-test:~$ kubectl  get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok        
etcd-0               Healthy   ok        
controller-manager   Healthy   ok 

The hope is that, with this KEP, the health status of each component can serve as an SLI that is collected and aggregated to calculate the cluster's SLO, from which an SLA can then be derived. (If you are curious how SLI, SLO, and SLA relate, it is worth reading up on.)

Here is an example of the output:

test@mohamedbenhassine:/$ kubectl get --raw "/metrics/slis"
# HELP kubernetes_healthcheck [STABLE] This metric records the result of a single healthcheck.
# TYPE kubernetes_healthcheck gauge
kubernetes_healthcheck{name="etcd",type="healthz"} 1
kubernetes_healthcheck{name="etcd",type="livez"} 1
kubernetes_healthcheck{name="etcd",type="readyz"} 1
kubernetes_healthcheck{name="etcd-readiness",type="readyz"} 1
kubernetes_healthcheck{name="informer-sync",type="readyz"} 1
kubernetes_healthcheck{name="ping",type="healthz"} 1
kubernetes_healthcheck{name="ping",type="livez"} 1
kubernetes_healthcheck{name="ping",type="readyz"} 1

ReadWriteOncePod PersistentVolume Access Mode Graduated to Stable


In previous releases, the ReadWriteOnce access mode permitted multiple pods on a shared node to both read from and write to the same volume.

With the release of Kubernetes 1.29, the ReadWriteOncePod access mode graduates to stable. It ensures that only a single pod across the entire cluster can read from or write to a given Persistent Volume Claim (PVC).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi

Define Pod Affinity or Anti-affinity Using matchLabelKeys


This new alpha feature enhances PodAffinity/PodAntiAffinity by introducing matchLabelKeys. This addition ensures more precise calculations during rolling updates, providing improved control over pod scheduling based on specified label keys.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: teckbootcamps-deployment
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: teckbootcamps
            # matchLabelKeys values are looked up from the incoming pod's labels
            # and merged into the selector; pod-template-hash scopes co-scheduling
            # to pods from the same ReplicaSet during a rolling update.
            matchLabelKeys:
            - pod-template-hash

KMS v2 Encryption at Rest Graduated to Stable

In Kubernetes version 1.29, enhanced stability is introduced to Key Management Service (KMS) v2. This update brings improved performance, key rotation capabilities, health checks, and enhanced observability. These advancements contribute to the secure encryption of API data at rest within the Kubernetes environment.

Unlike volume-level encryption, KMS v2 operates at the API layer: it encrypts resources such as Secrets before they are written to etcd.
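
A minimal sketch of a kube-apiserver EncryptionConfiguration using a KMS v2 provider (the plugin name and socket path are illustrative):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin
          endpoint: unix:///var/run/kms-provider.sock
      - identity: {}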

Removal of In-tree Integrations with Cloud Providers (KEP-2395)

In Kubernetes 1.29, the default configuration operates without built-in (in-tree) cloud provider integrations. Users can run an external cloud controller manager, or revert to the legacy integrations via the associated feature gates (DisableCloudProviders and DisableKubeletCloudCredentialProviders).
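
With kubeadm, for instance, cloud-provider=external is set through the kubelet arguments in the kubeadm configuration rather than a kubeadm command-line flag; a minimal sketch (the file name is illustrative):

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external

This is then applied with kubeadm init --config init.yaml.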

Known issues on Kubernetes 1.29

Evented PLEG was promoted to Beta in v1.27, but many problems were discovered while testing the new version, so it is now disabled by default and will be re-enabled once the community fixes them.

It is recommended to keep this feature disabled in v1.29.
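
If you enabled it on an earlier version, it can be switched off again via the kubelet feature gate:

kubelet --feature-gates=EventedPLEG=false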

Other Kubernetes 1.29 features

  • KEP-2495: PV/PVC ReadWriteOncePod reaches GA (covered above)
  • NodeExpandSecret feature reaches GA
  • kube-proxy has a new nftables backend (Alpha; see the sketch below)
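
The nftables mode sits behind the NFTablesProxyMode feature gate in v1.29; a minimal sketch of enabling it:

kube-proxy --proxy-mode=nftables --feature-gates=NFTablesProxyMode=true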


Conclusion

These are the things I think are worth paying attention to in Kubernetes v1.29. See you next time!

Author

  • Mohamed BEN HASSINE

    Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, Web, API, and Cloud technologies for over 12 years and is still eager to learn new things. He currently works as a Cloud / Application Architect in Paris, designing cloud-native solutions and APIs (REST, gRPC) using cutting-edge technologies (GCP, Kubernetes, APIGEE, Java, Python).
