Practical Examples for 3 Advanced Kubernetes deployment strategies

Advanced Kubernetes deployment strategies

Kubernetes has become one of the most widely adopted platforms for deploying and managing applications at scale. This container orchestration platform greatly simplifies infrastructure configuration for microservices-based applications and enables efficient load management through its modular design.

Kubernetes supports a variety of deployment resources that help operators implement CI/CD pipelines with controlled updates and version control. While Kubernetes provides rolling updates as the default deployment strategy, some use cases require less conventional methods for deploying or updating cluster services.

This article reviews several Kubernetes deployment concepts and dives into various advanced Kubernetes deployment strategies, along with their pros, cons, and use cases.

Kubernetes Deployment Concepts

Kubernetes uses deployment resources to declaratively update applications. Through deployment, the cluster administrator defines the life cycle of the application and how the application performs relevant updates. Kubernetes deployments provide an automated way to achieve and maintain the state required by cluster objects and applications.

The Kubernetes backend manages the deployment process without manual intervention, providing a secure and repeatable way to perform application updates.

Kubernetes deployment allows cluster administrators to:

  • Deploy a pod or replica set
  • Update replica sets and pods
  • Roll back to an earlier version
  • Pause/resume deployment
  • Scale the deployment up or down

The following sections explore how Kubernetes simplifies the update process for containerized applications and how it solves the challenges of continuous delivery.

Kubernetes Objects

Kubernetes leverages many workload resource objects as persistent entities to manage cluster state. The Kubernetes API uses Deployment, ReplicaSet, StatefulSet, and DaemonSet resources to declaratively update applications.

Deployment

A Deployment is a Kubernetes resource that defines and identifies the desired state of an application. The cluster administrator describes the desired state in the deployment’s YAML file, which is used by the deployment controller to gradually change the actual state to the desired state. To ensure high availability, the deployment controller also continuously monitors the process and replaces failed cluster nodes and pods with healthy cluster nodes and pods.
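As a minimal sketch (the resource name and image here are illustrative assumptions, not from this article), a Deployment manifest declaring a desired state of three nginx replicas might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # desired state: three pods at all times
  selector:
    matchLabels:
      app: nginx
  template:                    # pod template used to create replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

The deployment controller continuously reconciles the cluster toward this declared state, replacing failed pods automatically.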

Replica set

A ReplicaSet maintains a specified number of identical pods to ensure high availability. The ReplicaSet’s manifest file includes the following fields:

  • A selector that identifies which pods belong to the set
  • A replica count indicating how many pods should be in the set
  • A pod template specifying what new pods should look like when they are created to meet the ReplicaSet’s criteria
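The three fields above can be seen together in a minimal ReplicaSet manifest (the names and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3              # how many pods should be in the set
  selector:
    matchLabels:           # identifies which pods belong to the set
      app: nginx
  template:                # template for pods created to meet the criteria
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```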

Stateful set

The StatefulSet object manages the deployment and scaling of pods in stateful applications. This resource manages pods based on the same container specification and then ensures proper ordering and uniqueness of a set of pods. StatefulSet’s persistent pod identifiers enable cluster administrators to connect their workloads to persistent storage volumes with availability guarantees.
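A minimal StatefulSet sketch (names, image, and storage size are assumptions) illustrating the stable pod identity and per-pod persistent storage described above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web               # headless Service providing stable pod identities
  replicas: 3                    # pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # each pod gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```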

Daemon set

DaemonSets help maintain application deployments by ensuring that all (or a selected set of) nodes run a copy of a pod. DaemonSet resources are mainly used to manage the deployment and life cycle of various agents, such as:

  • Cluster storage agent on each node
  • Log collection daemon
  • Node monitoring daemon
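For instance, a log collection daemon could run on every node with a DaemonSet like the following sketch (the image and paths are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16    # illustrative log-collection agent
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog                   # expose the node's log directory to the pod
        hostPath:
          path: /var/log
```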


Detailed information on the various Kubernetes workload resources can be found in the official documentation (https://kubernetes.io/docs/concepts/workloads/controllers/).

Using Deployment

Kubernetes deployments provide a predictable way to start and stop pods. These resources make it easier for administrators to iterate autonomously, roll back changes, and manage software release cycles. Kubernetes provides various deployment strategies to enable smaller, more frequent updates, which offer the following benefits:

  • Faster customer feedback for better feature optimization
  • Reduced time to market
  • Improved DevOps team productivity

By default, Kubernetes provides rolling updates as a standard deployment strategy, which replaces one pod at a time with a new version to avoid cluster downtime. In addition to this, Kubernetes supports various advanced deployment strategies — including blue-green, canary and A/B deployment — depending on the target and type of feature.
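The default rolling update behavior can be tuned in the Deployment spec. As a sketch (names and values are assumptions, not from this article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count
      maxUnavailable: 1    # at most one pod may be unavailable during the update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

With these settings, pods are replaced one at a time, so the application never drops below three available replicas during an update.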

Let’s take a closer look at what each of these strategies offers and the differences between them.

Advanced strategies for Kubernetes deployment

Kubernetes offers multiple ways to release application updates and features, depending on the use case and workload involved. In a live production environment, it is important to use deployment configurations in conjunction with routing features so that updates only affect specific versions. This enables release teams to test the effectiveness of updated features in a live environment before committing to a full version. Kubernetes supports a variety of advanced deployment policies so developers can precisely control traffic to specific versions.

Blue-Green Deployment

In a blue-green strategy, new and old instances of an application are deployed simultaneously. Users access the existing version (blue), while the new version (green) runs an identical number of instances available only to the Site Reliability Engineering (SRE) and QA teams. Once the QA team confirms that the green version passes all testing requirements, users are redirected to the new version. This is accomplished by updating the version label in the selector field of the load-balancing service.

Blue-green deployment is best suited when developers want to avoid version control issues.

Using a blue-green deployment strategy

Let us assume that the first version of the application is v1.0.0 and the second version available is v2.0.0.

Here is the service pointing to the first version:

apiVersion: v1
kind: Service
metadata:
  name: darwin-service-a
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v1.0.0
  ports:
  - name: http
    port: 80
    targetPort: 80

Here is the service pointing to the second version:

apiVersion: v1
kind: Service
metadata:
  name: darwin-service-b
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v2.0.0
  ports:
  - name: http
    port: 80
    targetPort: 80

After the requested test is executed and the second version is approved, the selector for the first version is changed to v2.0.0:

apiVersion: v1
kind: Service
metadata:
  name: darwin-service-a
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: v2.0.0
  ports:
  - name: http
    port: 80
    targetPort: 80

If the application runs as expected, v1.0.0 will be discarded.

Canary Deployment

In the canary strategy, a subset of users is routed to pods hosting the new version. This subset is gradually increased while the subset connected to the older version decreases. The behavior of the two user groups is then compared across versions; if no bugs are detected, the new version is rolled out to the remaining users.

Using a canary deployment strategy

The native Kubernetes canary deployment process involves the following:

Deploy the first version of the application:

$ kubectl apply -f darwin-v1.yaml

Scale this to the desired number of replicas:

$ kubectl scale --replicas=9 deploy darwin-v1

Deploy the second version:

$ kubectl apply -f darwin-v2.yaml

If the second version is successfully deployed, test it:

$ service=$(minikube service darwin --url)
$ while sleep 0.1; do curl "$service"; done

If the deployment is successful, scale the number of instances for version 2:

$ kubectl scale --replicas=10 deploy darwin-v2

Once all replicas are online, the first version can be deleted gracefully:

$ kubectl delete deploy darwin-v1
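The darwin-v1.yaml and darwin-v2.yaml manifests are not shown in this article; as an assumption, they could look like the sketch below, differing only in the name, version label, and image tag. Because the Service would select on the shared app label alone, traffic splits roughly by replica count (9:1 after the scaling steps above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: darwin-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: darwin
      version: v1.0.0
  template:
    metadata:
      labels:
        app: darwin          # shared label: the Service selects on app only
        version: v1.0.0
    spec:
      containers:
      - name: darwin
        image: darwin:1.0.0  # hypothetical application image
```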

A/B Deployment

A/B deployment allows administrators to route a specific subset of users to a newer version with some restrictions and/or conditions. These deployments are primarily used to evaluate the user base’s response to certain features. A/B deployments are also known as “dark launches” because users don’t know what new features the app contains during testing.

Using an A/B deployment strategy

Here’s how to perform A/B testing using an Istio service mesh, with the ability to push different versions using traffic weights:

  1. Assuming that Istio has been installed on the cluster, the first step is to deploy two versions of the application:
$ kubectl apply -f darwin-v1.yaml -f darwin-v2.yaml
  2. These versions can then be exposed through the Istio gateway, matching requests to the first service using the following command:
$ kubectl apply -f ./gateway.yaml -f ./virtualservice.yaml
  3. Istio VirtualService rules can then be applied based on weight using the following command:
$ kubectl apply -f ./virtualservice-weight.yaml

This distributes traffic between the two versions at the configured weights (for example, a 1:10 ratio). To shift traffic, edit each destination’s weight and reapply the VirtualService rules through the Kubernetes CLI.
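The virtualservice-weight.yaml file is not shown here; a sketch of what its weighted routing rule could look like follows (the host and subset names are assumptions, and the subsets would need a matching Istio DestinationRule):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: darwin
spec:
  hosts:
  - darwin
  http:
  - route:
    - destination:
        host: darwin
        subset: v1
      weight: 90           # most requests stay on the first version
    - destination:
        host: darwin
        subset: v2
      weight: 10           # a small share of requests try the new version
```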

When to use each advanced deployment strategy

Because Kubernetes use cases vary based on availability requirements, budget constraints, available resources, and other considerations, there is no one-size-fits-all deployment strategy. When choosing the right deployment strategy, there are a few things to consider:

Compare Kubernetes deployment strategies

Blue-green strategy

  • Features: Focus on progressive delivery, which is very important for testing features on the backend of the application.
  • Advantages: Enables instant push and rollback; allows administrators to change the state of the entire cluster in one upgrade; eliminates version control issues.
  • Cons: Requires twice the amount of resources and proper platform testing before production release.

Canary strategy

  • Features: Test new versions while users are still running instances of older versions; considered the best option to avoid API versioning issues.
  • Advantages: Easily monitor performance through error rate comparison; enable fast rollback; includes user experience testing.
  • Disadvantages: Fine-tuning traffic distribution is expensive; push speed is slow.

A/B strategy

  • Features: Provide users with new and old application versions, and then compare their experiences; mainly used when front-end deployment and QA testing processes are insufficient.
  • Advantages: Allows multiple versions to run in parallel; enables performance monitoring.
  • Disadvantages: Causes slow deployment; brings expensive traffic balancing.

Conclusion

Kubernetes objects are one of the core capabilities of the technology, enabling the rapid delivery of application updates and features. With deployment resources, Kubernetes administrators can set up an efficient version control system to manage versions with minimal to zero application downtime. Deployments allow administrators to update pods, roll back to earlier versions, or scale the infrastructure to support growing loads.

The advanced Kubernetes deployment strategies described in this article also enable administrators to route traffic and requests to specific versions for real-time testing and error handling.

These strategies can be used to ensure that new features work as planned before administrators and developers fully commit the changes. While deployment resources form the basis for persisting application state, it is recommended that you strive to choose the right deployment strategy, prepare adequate rollback options, and take seriously the dynamic nature of an ecosystem that relies on multiple loosely coupled services.

Check out my latest Kubernetes exam guides (CKAD, CKA, CKS).

Resources

  • Use kubectl to create a deployment (https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/)
  • Kubernetes deployment use cases (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#use-case)
  • Kubernetes deployment lifecycle states (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#deployment-status)

Author

  • Mohamed BEN HASSINE

    Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, Web, API, and Cloud technologies for over 12 years and is still going strong learning new things. He currently plays the role of Cloud/Application Architect in Paris, designing cloud-native solutions and APIs (REST, gRPC) using cutting-edge technologies (GCP, Kubernetes, Apigee, Java, Python).
