This article shows you how to try out the new native sidecar container feature introduced in Kubernetes version 1.28.
What is a Kubernetes Sidecar Container?
The sidecar container is a widely used design pattern in Kubernetes in which multiple containers are deployed within a single Pod. The additional container, known as the sidecar container, supports the main container in various capacities, including:
- Network Proxy: In architectures like Service Mesh, it aids in forwarding and managing diverse types of network traffic.
- Log Collection: The Sidecar Container processes logs generated by the main container.
Previously, the main container and the sidecar were simply declared side by side in the Pod’s “containers” section, with no dedicated mechanism for managing the sidecar’s lifecycle.
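For reference, a minimal sketch of that traditional layout looks something like this (the Pod and image names are placeholders, not taken from any real deployment):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar        # hypothetical Pod name
spec:
  containers:
  - name: app                   # main application container
    image: my-app:1.0           # hypothetical application image
  - name: log-collector         # sidecar, declared exactly like the main container
    image: my-log-agent:1.0     # hypothetical log-collection image
    # Kubernetes treats both entries identically; nothing marks the
    # second container as playing only a supporting role.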
In Kubernetes, these two container types essentially share the same lifecycle and management characteristics. However, this resemblance gives rise to certain challenges when dealing with sidecar containers:
- In a Job, if the main container finishes its task while the sidecar container keeps running, the Pod never terminates, and the Job cannot determine whether it has completed successfully.
- The sidecar container may start later than the main container, even though the main container depends on it from the very beginning; the resulting errors can only be resolved by waiting for a container restart.
Notable examples of the second problem include:
- In scenarios involving Istio, the Istio sidecar container initiates later than the main container, causing a brief period of network unavailability when the main container kicks off.
- Within Google Kubernetes Engine (GKE), when utilizing the Cloud SQL proxy to access Cloud SQL, if the Cloud SQL proxy starts after the main container, the main container faces difficulties connecting to the database, leading to errors.
Hence, in the past, workarounds were needed to address these issues, such as the startup-wait trick sketched below.
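A common workaround for the startup-ordering problem, for example, was to make the main container wait for its sidecar before doing any real work. The sketch below assumes an Istio-style proxy that exposes a readiness endpoint on port 15021; the application image and start command are placeholders, not part of any real example:

containers:
- name: app
  image: my-app:1.0             # hypothetical application image
  command: ["sh", "-c"]
  args:
    # Poll the proxy's readiness endpoint (Istio exposes /healthz/ready on
    # port 15021) and only start the application once the sidecar is up.
    - |
      until curl -fsS http://localhost:15021/healthz/ready > /dev/null; do
        echo "waiting for the sidecar proxy..."
        sleep 1
      done
      exec /app/start           # hypothetical application entrypoint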
In the updated architecture (Kubernetes 1.28), the sidecar configuration moves into the initContainers section and is managed natively by Kubernetes, giving the sidecar a dedicated lifecycle.
Presently, Kubernetes officially incorporates native support for the sidecar container architecture. Its autonomous lifecycle management effectively addresses the typical issues mentioned earlier, resulting in a more refined and elegant overall solution.
Test Kubernetes Native Sidecar Containers
Here, I’ll walk through a common scenario for employing sidecar containers and show how to address its issues using the new feature in Kubernetes 1.28. The scenario focuses on the challenge of running a sidecar container inside a Job.
For instance, consider deploying a Job service with a sidecar container using the following YAML. In this example, the sidecar’s functionality is not crucial; it’s merely for demonstration purposes.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      - name: sidecar
        image: hwchiu/netutils
      restartPolicy: Never
  backoffLimit: 4
When deploying this YAML, the following scenario can be observed:
controlplane $ kubectl get pods,job
NAME           READY   STATUS     RESTARTS   AGE
pod/pi-lksdl   1/2     NotReady   0          15m

NAME           COMPLETIONS   DURATION   AGE
job.batch/pi   0/1           15m        15m
The main container completes its execution, but the sidecar container persists. Consequently, the current Pod fails to transition to the “Completed” state, leading to the Job’s inability to determine its COMPLETIONS.
Now, let’s explore the new feature introduced in version 1.28.
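One practical note: in 1.28 this capability is still behind the alpha SidecarContainers feature gate, so it is disabled by default. The snippet below is only a rough sketch of how it might be enabled on a test cluster; the exact components and file paths depend on how your cluster was installed:

# Flag for the kube-apiserver (and any other control-plane components
# that need to understand the new field):
#   --feature-gates=SidecarContainers=true
#
# KubeletConfiguration (for example /var/lib/kubelet/config.yaml):
featureGates:
  SidecarContainers: true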
This feature can be thought of as a “never-ending init container.” The sidecar is declared as an init container, and setting its restartPolicy field to Always turns it into a native sidecar. Once the sidecar container is configured with restartPolicy: Always, its behavior changes in a few ways:
- It keeps running and does not need to terminate before subsequent init containers (or the main containers) can start.
- If it exits for any reason, it is automatically restarted.
- Its running state no longer prevents the Pod from reaching a terminal state once the main containers have finished.
Next, try the following YAML, which moves the sidecar container into the initContainers section to see whether the Pod can now complete successfully.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-sidecar
spec:
  template:
    spec:
      initContainers:
      - name: network-proxy
        image: hwchiu/python-example
        restartPolicy: Always
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
As the results show, the Pod still reports two containers, but it can now finish its execution and transition to the “Completed” state.
controlplane $ kubectl get pods,job
NAME                    READY   STATUS      RESTARTS   AGE
pod/pi-sidecar2-g9fl9   0/2     Completed   0          24s

NAME                    COMPLETIONS   DURATION   AGE
job.batch/pi-sidecar2   1/1           13s        24s
Furthermore, by examining the output of kubectl describe pod pi-sidecar2-g9fl9, you can observe the last event stating “Stopping container network-proxy.”
This indicates that after the main container completes its execution, Kubernetes terminates the sidecar container (network-proxy), ensuring it does not block the Pod from completing.
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m9s   default-scheduler  Successfully assigned default/pi-sidecar2-g9fl9 to node01
  Normal  Pulling    2m8s   kubelet            Pulling image "hwchiu/python-example"
  Normal  Pulled     2m8s   kubelet            Successfully pulled image "hwchiu/python-example" in 225ms (225ms including waiting)
  Normal  Created    2m8s   kubelet            Created container network-proxy
  Normal  Started    2m8s   kubelet            Started container network-proxy
  Normal  Pulled     2m8s   kubelet            Container image "perl:5.34.0" already present on machine
  Normal  Created    2m8s   kubelet            Created container pi
  Normal  Started    2m7s   kubelet            Started container pi
  Normal  Killing    2m     kubelet            Stopping container network-proxy
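As an extra check that is not part of the original walkthrough, you can also confirm that the restart policy was recorded on the Pod’s init container spec (using the pod name from this run):

kubectl get pod pi-sidecar2-g9fl9 -o jsonpath='{.spec.initContainers[0].restartPolicy}'
# should print: Always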
Conclusion
This first look highlights the clear advantages of native sidecar containers. They eliminate many of the past workarounds and make the sidecar container pattern far more intuitive. Notably, the latest Istio version can also take advantage of the Kubernetes 1.28 sidecar functionality.
It’s essential to note that this feature remains in the alpha version in Kubernetes 1.28. It is expected to advance to Beta and eventually reach General Availability (GA), a process that may span at least two versions, approximately six months, potentially aligning with version 1.30.