What Is Kubernetes (K8s)?

This article is an introduction to Kubernetes (hereinafter referred to as K8s), approached from the perspective of containerization technology.

What Problem Does K8s Solve?

Before introducing Kubernetes (K8s), it’s essential to grasp the problems it aims to address. K8s is tailored to streamline the management of communication between containers across hundreds or thousands of hosts, particularly in the development of large-scale services.

Prior to the emergence of K8s, Docker provided a means to swiftly launch microservices composed of containers using Docker Compose on a single server. Developers could simply craft a YAML file, configure parameters, and execute the file to initiate or halt a set of interconnected services.
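
For illustration, here is a minimal sketch of such a docker-compose.yml; the service names and image tags are hypothetical, not taken from any particular project:

```yaml
# docker-compose.yml — a minimal sketch; service names and images are
# illustrative assumptions.
version: "3.8"
services:
  web:
    image: nginx:1.25            # front-end container
    ports:
      - "8080:80"                # host:container port mapping
    depends_on:
      - api                      # start the API before the front end
  api:
    image: my-api:latest         # hypothetical application image
    environment:
      - DB_HOST=db               # reach the database by its service name
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=example
```

A single `docker compose up` then starts all three interconnected services on one host.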

While Docker Compose effectively simplified testing and deployment, its operational scope was confined to a single host. In scenarios demanding coordination across multiple hosts for large-scale services, its capabilities fell short. K8s, with its automated deployment and scaling of containers across many hosts, consequently emerged as the leading solution for managing container communication at scale.

What is K8s?

Kubernetes, often abbreviated as K8s, is a tool used to manage containers efficiently. It helps automate the process of deploying, scaling, and managing workloads across different containers. The main idea behind Kubernetes is to make it easier to run applications reliably and with less manual effort. This frees up developers to focus more on writing code rather than managing the underlying infrastructure.

Kubernetes can do everything that Docker Compose does, but on a larger scale. It can start containers, manage their connections and interactions, and distribute them across multiple servers. It also keeps an eye on the health of each container and takes action if it detects any failures, ensuring that the application keeps running smoothly.

One of Kubernetes’ handy features is its ability to automatically adjust the number of containers based on the workload. This helps optimize resource usage and ensures that the application can handle varying levels of demand without manual intervention.
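
To make this concrete, the sketch below shows a HorizontalPodAutoscaler manifest; the target Deployment name "web" and the thresholds are illustrative assumptions:

```yaml
# HorizontalPodAutoscaler sketch: scales a hypothetical "web" Deployment
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add Pods when average CPU exceeds 70%
```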

When it comes to deployment, Kubernetes uses a file (typically in YAML format) to describe the desired state of the containers and their settings. It then takes care of creating and configuring the necessary resources based on this file, making the deployment process streamlined and consistent.
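
For example, a minimal manifest describing the desired state of a small web application might look like the sketch below; the name and image are hypothetical:

```yaml
# Deployment sketch: declares the desired state — three identical Pods
# running an nginx container. Name and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # desired number of Pods
  selector:
    matchLabels:
      app: web
  template:                      # Pod template used to create each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Kubernetes continuously compares this declared state with the actual state and reconciles any difference, restarting or rescheduling Pods as needed.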

K8s Architecture And Workflow


The image below illustrates a cluster within the Kubernetes (K8s) platform. The machines within a K8s Cluster are referred to as Nodes, and each Node takes the role of either a Worker or a Master.

You can think of the Worker Node as the body’s core, responsible for executing tasks, while the Master Node serves as the brain, issuing instructions and overseeing operations.

On the right side of the image, you’ll find the Worker Node, typically equipped with more computational resources as it handles numerous applications.

On the left side of the image, you’ll see the Master Node. The Control Plane, a set of management programs, runs on the Master, overseeing scheduling and maintaining the overall Cluster’s status.

Within the Worker Node, multiple Pods are active. A Pod represents the fundamental unit for operations and deployments in K8s, allowing multiple Containers to coexist within it. Kubernetes leverages Pods to bundle and manage Containers, enhancing scheduling and deployment flexibility.

Two Basic Components Of K8s

Component 1. Master Node (Control Plane)


The Control Plane serves as the central command hub for Kubernetes (K8s) operations. It issues commands for various tasks such as scheduling Containers, managing Services, and handling API requests.

This Control Plane communicates with each Node through a dedicated API, keeping track of workload across all Nodes and issuing instructions to address any emergencies. For instance, if it detects a sudden surge in application usage, it will allocate additional computing resources accordingly, and automatically scale them down when usage decreases.

The Control Plane comprises four crucial components:

  1. Kube-API Server: This serves as the single entry point for all requests and acts as the communication bridge for each Node in the Cluster. It handles authentication, authorization, access control, and API registration.
  2. etcd: A consistent, distributed key-value store that records the configuration and overall state of the K8s Cluster. Backing up etcd makes it possible to restore the cluster state after a Control Plane failure.
  3. Kube-scheduler: Responsible for scheduling in K8s, it watches for newly created Pods and selects the most suitable Worker Node for each one based on resource requests and hardware constraints (see the Pod sketch after this list).
  4. kube-controller-manager: Serving as the automated control center of the K8s Cluster, it runs the controller processes (e.g., the Node and ReplicaSet controllers) that continuously reconcile the Cluster's actual state with the desired state.
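
As an illustration of the resource constraints the Kube-scheduler weighs, the sketch below shows a Pod declaring CPU and memory requests and limits; all names and values are hypothetical:

```yaml
# Pod sketch: the scheduler only places this Pod on a Worker Node with
# at least the requested free capacity. Values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: resource-aware-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "500m"            # guaranteed minimum, considered at scheduling
          memory: "256Mi"
        limits:
          cpu: "1"               # hard ceiling enforced at runtime
          memory: "512Mi"
```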

Component 2. Worker Node

The Worker Node serves as the operational host within Kubernetes (K8s) and is tasked with managing and executing Pods. It can be either a physical machine or a virtual machine (e.g., EC2 on AWS) and contains the necessary services to run Pods, with management overseen by the Master Node.

The services operating on a Worker Node include:

  1. Pod:
    • Pods are the smallest unit for deploying resources in K8s, simplifying the deployment and management of containerized applications.
    • A Pod can encapsulate one or more Containers that work together and share the Pod's network namespace (IP address and ports), hostname, and, optionally, storage volumes. This setup enables efficient data sharing and communication between containers (as sketched in the manifest after this list) while keeping deployment simple and secure.
  2. Kubelet:
    • Kubelet acts as the intermediary between the Worker Node and the Kube-API Server. It receives Pod specifications from the API server and ensures that Pods and their containers run as expected.
    • Kubelet also regularly gathers status information about Pods/Containers from the Worker Node (e.g., running containers, replica counts, resource configurations) and reports it to the Control Plane. Failure to receive this information may result in the Node being marked as unhealthy.
  3. Kube-proxy:
    • Kube-proxy is a network proxy service running on each Node, responsible for managing network communication rules between Pods, internal Cluster communication, and handling external requests.
    • Where the operating system provides a packet filtering layer (e.g., iptables or IPVS on Linux), Kube-proxy configures it and lets the kernel forward the traffic; otherwise, Kube-proxy forwards the traffic itself.
  4. Container Runtime:
    • The Container Runtime is a lower-level component responsible for running containers and managing them according to Kubelet’s commands.
    • Kubernetes supports any runtime that implements the Container Runtime Interface (CRI), such as containerd and CRI-O (which in turn drive a low-level runtime like runC), providing flexibility in container management.
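
Here is the multi-container Pod sketch referenced above: both containers share the Pod's network namespace (the same IP, reachable from each other via localhost) and a common volume. The images, paths, and sidecar role are illustrative assumptions:

```yaml
# Pod sketch: an nginx container plus a log-forwarding sidecar sharing
# an emptyDir volume. Names and images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}               # scratch volume shared by both containers
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-forwarder        # hypothetical sidecar
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```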

Four Advantages Of K8s

1. Lightweight

The lightweight nature of K8s allows applications to be easily deployed to different environments, such as on-premises data centers, public clouds, or hybrid cloud environments. Because K8s runs containerized workloads, each application is packaged together with its dependencies, which resolves compatibility issues between platforms and reduces the difficulty of deployment on different infrastructures.

2. Declarative Configuration

K8s allows users to declare the desired system state and manage applications and resources through a declarative configuration file (a Kubernetes Manifest). Because a declarative configuration directly describes the desired end state rather than accumulating step-by-step imperative commands, it is less error-prone.

K8s declarative configuration is written in YAML or JSON, describing the resource configuration to be applied, and is then submitted to the K8s API Server. K8s then works to make the cluster's actual operating state match the user's declared state.

K8s declarative configuration supports version control, automated deployment, rollback, scaling, and self-healing, which improves users' ability to manage large-scale distributed systems. At the same time, it provides a high-level abstraction that allows developers and operations staff to focus on the behavior and needs of the application.
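
To make this concrete, the sketch below shows the declarative fields on a Deployment that drive rolling updates and rollback; the values are illustrative assumptions:

```yaml
# Deployment fragment: rolling-update and rollback behavior is declared
# rather than scripted. Values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  revisionHistoryLimit: 5        # keep 5 old ReplicaSets for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # at most one Pod down during an update
      maxSurge: 1                # at most one extra Pod during an update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```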

3. Promote Collaboration Between Development Team And Maintenance Team

K8s promotes collaboration between development and maintenance teams by providing a unified platform for application deployment and management. Developers can define application configurations as code in Kubernetes Manifest files to achieve version control and continuous deployment, while maintenance personnel can monitor application status and build CI/CD workflows on top of the K8s automated deployment process.

4. Storage Scheduling

The Storage Orchestration function of K8s is critical to running Stateful applications because it connects containers that require storage resources to the infrastructure that can provide them.

How K8s performs storage orchestration varies with factors such as the type of storage infrastructure and how the container uses storage. For example, when an application needs to write log files to a local Volume, local storage can be used; when running on Azure, the AzureFile storage type can be used instead. This orchestration lets users provision storage according to their different needs.
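
The sketch below shows the PersistentVolumeClaim side of this: a claim requesting Azure File storage through a storage class. The class name "azurefile" is an assumption (it is the name commonly preinstalled on AKS):

```yaml
# PersistentVolumeClaim sketch: requests shared Azure File storage.
# The storage class name is an assumption about the cluster's setup.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-storage
spec:
  accessModes:
    - ReadWriteMany              # Azure Files supports shared read/write
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
```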

Comparison Of Open Source Container Orchestration Tools: K8s Vs Docker Swarm 

K8s and Docker Swarm are two mainstream container orchestration tools on the market. In this section, we will compare the functions, advantages and applicable scenarios of these two tools.

High Availability

Both tools are designed for high availability.

Load Balancing

Docker Swarm provides automatic load balancing out of the box, which K8s does not. However, users can integrate load balancing into a K8s Cluster through third-party tools or by exposing Pods through a Service, as sketched below.
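
One common approach is a Service of type LoadBalancer: on a cloud provider this provisions an external load balancer, while on-premises a tool such as MetalLB can fill the same role. The name, selector, and ports below are illustrative:

```yaml
# Service sketch: spreads external traffic across all Pods whose labels
# match the selector. Name, selector, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80                   # port exposed by the load balancer
      targetPort: 80             # container port receiving the traffic
```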

Scalability

K8s scales in units of Pods, which suits larger-scale expansion. By comparison, Docker Swarm scales in units of containers and can scale up and down more quickly.

Should I Choose K8s Or Docker Swarm?

As a popular container orchestration platform, K8s has huge community resources. In addition, major cloud providers and Docker EE also support K8s. Although K8s is more powerful, flexible, and customizable than Docker Swarm, its learning curve is also steeper. K8s therefore needs to be maintained by an experienced team; to save costs, some companies choose to hand K8s over to a hosting provider for maintenance.

Docker Swarm has the advantage of being Docker-native, with relatively simple configuration: it integrates seamlessly with the Docker Engine and can be started and deployed quickly. Compared with K8s, Docker Swarm offers users a more approachable entry point and is well suited to smaller workloads.

The choice between Docker Swarm and K8s depends on your needs, your team's technical capabilities, and the goals you want to achieve. If you have a smaller application and are looking for a solution that is simple to deploy and easy to use, Docker Swarm is a good choice. On the other hand, if you have a sufficient budget and need a solution that is feature-rich, massively scalable, and supported by a large community and the major cloud providers, K8s will be more suitable.

Conclusion

In conclusion, Kubernetes (K8s) addresses the challenges faced in managing containerized applications at scale. Through its lightweight architecture and declarative configuration approach, K8s simplifies deployment and promotes collaboration between development and maintenance teams. Its advanced storage scheduling capabilities ensure efficient resource allocation.

The architecture of K8s, consisting of Master and Worker Nodes, facilitates efficient workflow management. The Master Node, also known as the Control Plane, orchestrates tasks and ensures high availability. Meanwhile, the Worker Node executes Pods and handles workload efficiently.

When compared to other container orchestration tools like Docker Swarm, Kubernetes stands out with its robust features such as load balancing and scalability. These features make K8s an ideal choice for managing large-scale applications and ensuring seamless performance.

In essence, Kubernetes emerges as a powerful solution for addressing the complexities of container orchestration, enabling organizations to deploy and manage applications with ease while ensuring high availability, scalability, and efficient resource utilization.

Author

  • Mohamed BEN HASSINE

Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, Web, API, and Cloud technologies for over 12 years and is still going strong at learning new things. He currently plays the role of Cloud/Application Architect in Paris, designing cloud-native solutions and APIs (REST, gRPC) using cutting-edge technologies (GCP, Kubernetes, APIGEE, Java, Python).
