Getting Started with Kubernetes


In our guide, we offer a collection of recipes designed to kickstart your journey with Kubernetes, the leading container orchestration platform. Whether you’re new to Kubernetes or looking to expand your knowledge, these recipes will provide valuable insights and practical steps to enhance your experience.

Installing the Kubernetes Command-Line Interface (kubectl)

Use Case

When aiming to manage and interact with your Kubernetes cluster directly from the command line, having the Kubernetes Command-Line Interface (CLI), kubectl, installed is essential. It provides a direct way to deploy applications, inspect and manage cluster resources, and view logs.


Downloading the Latest Official Release

Linux users can install the latest stable version of kubectl with a straightforward two-step process. First, download the latest stable release using wget:

wget "https://dl.k8s.io/release/$(wget -qO - https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

Then, set the downloaded file as executable and move it to a directory included in your system’s PATH, such as /usr/local/bin, to make it globally accessible:

sudo install -m 755 kubectl /usr/local/bin/kubectl
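The official release also publishes a .sha256 checksum file alongside each binary, and it is good practice to verify the download before installing. The sketch below demonstrates the verification step on a local dummy file so it runs without network access; with a real download you would fetch kubectl.sha256 from the same release URL instead of computing it yourself.

```shell
# Checksum verification as in the official kubectl install docs, shown on a
# local stand-in file so the example works offline.
cd "$(mktemp -d)"
printf 'dummy kubectl binary\n' > kubectl   # stand-in for the downloaded binary
sha256sum kubectl > kubectl.sha256          # stand-in for the published checksum file
sha256sum --check kubectl.sha256            # prints "kubectl: OK" when the hash matches
```

If the check fails, sha256sum reports FAILED and exits nonzero, so the step also works as a guard in install scripts.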

Using Homebrew on Linux and macOS

For those who prefer using Homebrew, a popular package manager for Linux and macOS, installing kubectl is as simple as running:

brew install kubectl

After the installation, verify the successful setup of kubectl by checking its version:

kubectl version --client

This command should display the client version, confirming kubectl is ready for use.


kubectl is the official CLI tool for Kubernetes, offering a wide range of functionalities to manage your cluster. It’s open-source, giving you the option to compile the kubectl binary yourself if needed. This flexibility is particularly useful for those who might require a specific version or build of kubectl.

Special Notes for Certain Kubernetes Services Users:

  • Google Kubernetes Engine (GKE) Users: You have the option to install kubectl via the gcloud SDK: gcloud components install kubectl
  • Minikube Users: Minikube offers a convenient feature that allows you to use kubectl as a subcommand, ensuring you use a version of kubectl that matches your cluster’s version: minikube kubectl -- version --client

Setting Up Minikube for a Local Kubernetes Experience

Use Case

Whether you’re diving into Kubernetes for development, testing, or educational purposes, having a local instance can significantly streamline your learning and experimentation process. Minikube offers an efficient solution by enabling a Kubernetes environment directly on your personal computer.


Getting Started with Minikube

Minikube simplifies the process of running Kubernetes locally, making it an ideal tool for personal development and testing. To install Minikube on a Linux system, follow these steps to download and set up the Minikube CLI:

  1. Download the Latest Minikube Release: Use the wget command to fetch the most recent Minikube binary: wget -O minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
  2. Install Minikube: Make the downloaded binary executable and move it to a directory in your system’s PATH to ensure it’s accessible from any location: sudo install -m 755 minikube /usr/local/bin/minikube
  3. Verify Your Installation: After installation, confirm that Minikube is correctly installed by checking its version: minikube version This command will display the installed version of Minikube, confirming the successful setup.


Understanding Minikube’s Flexibility

Minikube can operate in various environments, including as a virtual machine, within a container, or on bare metal. This versatility is controlled through the --driver flag when initializing a Minikube cluster. If the flag is not specified, Minikube will automatically choose the optimal runtime environment available.

  • Hypervisor Support: Minikube is compatible with several hypervisors like VirtualBox, Hyperkit, Docker Desktop, and Hyper-V. Hypervisors facilitate the creation and management of virtual machines, allowing multiple VMs to share the same physical hardware resources efficiently.
  • Container Runtime Option: On Linux systems, Minikube can also leverage container runtimes like Docker Engine and Podman (currently experimental) to create clusters. This method provides excellent performance and resource utilization by running containers natively, without the overhead of a virtual machine.

Minikube for Local Development

Use Case

For developers aiming to test and develop Kubernetes applications locally, Minikube represents an invaluable tool. Following the initial setup and activation of Minikube (refer to the previous instructions), there are several commands and configurations that can enhance your development experience, offering more control and flexibility over your local Kubernetes environment.


Starting Your Local Kubernetes Cluster

Kick off your development by creating a local Kubernetes cluster with Minikube:

minikube start

By default, Minikube allocates 2 GB of RAM to your cluster. However, you might find yourself needing more resources or wanting to test under different configurations. You can customize your setup by specifying the number of CPUs, the amount of memory, and even the Kubernetes version you want to use. For instance:

minikube start --cpus=4 --memory=4096 --kubernetes-version=v1.27.0

To simulate a more robust environment, you can also increase the number of nodes in your cluster:

minikube start --cpus=2 --memory=4096 --nodes=2

Monitoring Your Cluster

After setting up, it’s essential to know how to inspect the health and status of your Minikube cluster:

minikube status

This command provides a concise summary of your cluster’s components and their operational status. To get detailed information about the Kubernetes cluster running inside Minikube, use:

kubectl cluster-info

Remember, your Minikube cluster shares resources with your host machine. Ensure your host has enough available resources to support the configurations you choose. When you’re finished with development, release the resources by stopping Minikube:

minikube stop


The Minikube CLI is equipped with a variety of commands designed to streamline your development workflow. Here are a few essential commands to familiarize yourself with:

  • Basic Commands: Start, stop, and delete your local Kubernetes cluster with ease.
  • Configuration and Management: Adjust settings and manage Minikube’s features through addons and other configurations.

Pro Tips:

  • Exploring Minikube’s Capabilities: Dive into Minikube’s built-in help for a comprehensive overview of available subcommands and features.
  • Recovery and Fresh Starts: If Minikube becomes unstable or you wish to reset your environment, you can do so by stopping and deleting your current setup. A fresh start with minikube start will reinitialize your cluster from scratch, providing a clean slate for development.

Leveraging these commands and tips will empower you to create a flexible and efficient local development environment tailored to your specific Kubernetes project needs.

Launching Your First Application on Minikube

Use Case

Now that Minikube is up and running (as outlined in previous steps), the next exciting phase is deploying your first application within the Kubernetes environment. For beginners, launching a straightforward, lightweight application is an excellent way to grasp the basics of application deployment and management on Kubernetes.


Deploying the Ghost Blogging Platform on Minikube

Ghost is a popular open-source blogging platform. It’s an ideal candidate for your first deployment due to its simplicity. To deploy Ghost on Minikube, follow these steps:

  1. Start the Ghost Application: Use kubectl to run a Ghost instance with a specific image version and set the environment to development: kubectl run ghost --image=ghost:5.59.4 --env="NODE_ENV=development"
  2. Expose the Ghost Pod: After starting the Ghost application, you’ll need to make it accessible. Expose the Ghost pod outside the cluster using the following command, which creates a service of type NodePort: kubectl expose pod ghost --port=2368 --type=NodePort
  3. Monitor the Pod’s Status: Keeping an eye on the pod’s status is crucial to understand when it’s fully operational: kubectl get pods You should see your Ghost pod listed as Running, indicating that your application is up and active.
  4. Accessing Your Application: Minikube simplifies the process of accessing your newly deployed application. Use the minikube service command to open the Ghost blogging platform in your web browser: minikube service ghost
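The two imperative commands above correspond roughly to the following declarative manifest (a sketch: the field values simply mirror the flags used above, and the run=ghost label is the one kubectl run applies by default):

```yaml
# ghost.yaml — declarative equivalent of `kubectl run` + `kubectl expose`.
apiVersion: v1
kind: Pod
metadata:
  name: ghost
  labels:
    run: ghost
spec:
  containers:
  - name: ghost
    image: ghost:5.59.4
    env:
    - name: NODE_ENV
      value: development
    ports:
    - containerPort: 2368
---
apiVersion: v1
kind: Service
metadata:
  name: ghost
spec:
  type: NodePort
  selector:
    run: ghost
  ports:
  - port: 2368
    targetPort: 2368
```

Applying it with kubectl apply -f ghost.yaml produces the same Pod and Service, with the advantage that the manifest can be version-controlled and reapplied.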


Understanding the Commands:

  • The kubectl run command is a quick and convenient way to create a Pod, the smallest deployable unit in Kubernetes that represents a single instance of a running process in your cluster.
  • The kubectl expose command is similarly a shortcut for creating a Service, an abstract way to expose an application running on a set of Pods as a network service.

With these two commands, you effectively deploy an application and make it accessible through a network.

Cleaning Up:

Once you’re done experimenting with your Ghost application or if you wish to deploy something else, it’s good practice to clean up the resources you’ve used. You can delete the Ghost pod and its associated service with the following commands:

  • To delete the Ghost pod: kubectl delete pod ghost
  • To delete the service exposing the Ghost application: kubectl delete svc ghost

Removing unused resources helps to keep your Minikube environment clean and ensures efficient use of system resources.

Running Kubernetes Locally with kind

Use Case

For developers seeking an alternative local Kubernetes environment, especially for testing and developing Kubernetes applications, kind (Kubernetes in Docker) offers a compelling solution. Originating as a tool for testing Kubernetes itself, kind now serves a broader audience, allowing individuals to experiment with Kubernetes-native tools and solutions directly on their laptops with minimal setup.


Getting Started with kind

To use kind for local Kubernetes development, the main prerequisite is having Docker installed on your system (Go is only needed if you build or install kind from source). Kind is designed to run Kubernetes clusters by using Docker containers as nodes, making it a lightweight and convenient option for local testing.

  1. Installing kind: Installation is straightforward across various platforms. For instance, on macOS, you can use Homebrew: brew install kind
  2. Creating a Local Kubernetes Cluster: Once kind is installed, creating a Kubernetes cluster is as simple as executing: kind create cluster This command initializes a default cluster with a single node, which acts as both the control plane and worker.
  3. Deleting the Cluster: When you’re finished with your testing or development work, or if you wish to start over with a fresh cluster, deleting your kind cluster is equally straightforward: kind delete cluster
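kind can also build multi-node clusters from a small configuration file. A minimal sketch of such a file (the filename kind-config.yaml is illustrative):

```yaml
# kind-config.yaml — a three-node cluster: one control plane, two workers.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Pass it at creation time with kind create cluster --config kind-config.yaml to get a cluster whose topology is closer to a real deployment than the single-node default.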


Why Choose kind?

kind’s design caters to automation and simplicity, making it an excellent choice for developers who need to rapidly deploy Kubernetes clusters for testing purposes. Its architecture ensures that users can easily integrate it into CI/CD pipelines for automated testing and development workflows. Additionally, because it runs Kubernetes in Docker containers, kind is a resource-efficient way to create and manage local Kubernetes clusters, allowing for quick iteration and testing of Kubernetes configurations and applications.


While kind is a powerful tool for local development and testing, it’s important to recognize its primary focus on testing Kubernetes itself. This focus influences certain design decisions, such as how resources are managed and allocated. However, for most development and testing scenarios, kind provides a robust and efficient environment for running Kubernetes clusters locally.

Further Resources

For more detailed information, including advanced configurations and usage scenarios, refer to the official kind Quick Start guide. This resource offers comprehensive insights into getting the most out of kind, including customizing cluster configurations, networking, and more, to suit your specific development and testing needs.

Managing kubectl Contexts for Multiple Kubernetes Clusters

Use Case

When working with Kubernetes, it’s common to interact with multiple clusters, such as development, testing, and production environments. kubectl, the command-line tool for interacting with Kubernetes, uses a concept called “contexts” to manage connections to different clusters. Understanding how to view, switch between, and manage these contexts is essential for efficiently working across various Kubernetes environments.


Viewing Available Contexts

To start, it’s helpful to know which Kubernetes clusters kubectl is configured to communicate with. Each cluster configuration, including its user authentication credentials and namespace preferences, is stored in a context. To see all the contexts available to you, run:

kubectl config get-contexts

This command lists all configured contexts, indicating the currently active context with an asterisk (*).
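That asterisk convention also makes the table easy to process in scripts. The sketch below uses a hypothetical sample of the get-contexts output (captured as a string so it runs without a live cluster) and extracts the active context with awk:

```shell
# Hypothetical sample of `kubectl config get-contexts` output; the active
# context is the row whose first column is "*".
sample='CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
*         kind-kind   kind-kind   kind-kind   default
          minikube    minikube    minikube    default'

# Pick the second column of the row marked with "*".
current=$(printf '%s\n' "$sample" | awk '$1 == "*" { print $2 }')
echo "$current"
```

Against a real cluster you would pipe kubectl config get-contexts itself into the same awk expression (or simply run kubectl config current-context, which reports the active context directly).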

Switching Contexts

If you need to switch the active context to target a different cluster (for example, moving from a local development cluster like Minikube to a testing cluster managed by kind), you can change the context using:

kubectl config use-context kind-kind

After executing this command, kubectl will direct all subsequent commands to the kind-kind cluster.


Accessing Remote Clusters

The versatility of kubectl extends beyond local clusters. It’s also designed to interact with remote Kubernetes clusters. Adjusting the kubeconfig file, which stores your cluster, context, and user credentials, can link kubectl to any accessible cluster. This flexibility allows for seamless transitions between local development work and managing resources in remote, production-grade environments.

For those managing complex configurations or working within teams, understanding the structure and capabilities of the kubeconfig file is crucial. This file is the key to configuring access to multiple clusters, defining different user credentials, and even setting namespace preferences for each context.
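To make that structure concrete, here is a minimal kubeconfig sketch showing how clusters, users, and contexts relate; the names, server address, and empty user entry are purely illustrative, not taken from a real cluster:

```yaml
# Minimal illustrative kubeconfig: one cluster, one user, one context.
apiVersion: v1
kind: Config
current-context: minikube
clusters:
- name: minikube
  cluster:
    server: https://192.168.49.2:8443
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
    namespace: default
users:
- name: minikube
  user: {}
```

A context is just a named triple of cluster, user, and default namespace; switching contexts only changes which triple subsequent kubectl commands use.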

Further Learning

For more detailed information about managing multiple Kubernetes clusters and navigating between them with kubectl, the Kubernetes official documentation offers comprehensive guides and tutorials. It covers everything from the basics of kubeconfig file structure to advanced techniques for managing cluster access in multi-user environments. This knowledge is invaluable for anyone looking to streamline their workflow and enhance their efficiency in deploying and managing applications across various Kubernetes clusters.

Simplifying Kubernetes Context and Namespace Switching with kubectx and kubens

Use Case

Frequently switching between different Kubernetes clusters and namespaces using kubectl can be cumbersome due to the lengthy command syntax. Especially for developers and operators working across multiple environments, streamlining this process is desirable.


To ease the management of Kubernetes contexts (clusters) and namespaces, kubectx and kubens are invaluable tools. These open-source scripts allow for quick switching without the need for verbose kubectl commands.

Installation via Homebrew:

For users on systems where Homebrew is available, installing these tools is straightforward:

brew install kubectx

Using kubectx for Context Switching:

To list all your available kubectl contexts:

kubectx

Switching between contexts is simplified to just entering:

kubectx <context-name>

Replace <context-name> with your desired context, such as minikube.

Using kubens for Namespace Switching:

Similarly, kubens makes listing and switching namespaces effortless:

List namespaces:

kubens

Switch to a specific namespace:

kubens <namespace-name>

Replace <namespace-name> with your target namespace, such as test.

After switching, all subsequent kubectl commands will operate within the context of the chosen namespace, removing the need to specify the namespace for each command.
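Under the hood, kubens simply rewrites the namespace field of the current context in your kubeconfig. The sketch below reproduces that edit with sed on a throwaway file (never on your real ~/.kube/config) to show what changes:

```shell
# What a namespace switch amounts to: editing the current context's
# namespace field in the kubeconfig. Done here on a temporary file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
contexts:
- context:
    cluster: minikube
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
EOF

# Switch the context's namespace from "default" to "test".
sed -i 's/^\([[:space:]]*namespace:\).*/\1 test/' "$cfg"
grep 'namespace:' "$cfg"
```

The kubectl-native way to make the same change is kubectl config set-context --current --namespace=test; kubens is a convenience wrapper around exactly this kind of kubeconfig edit.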


kubectx and kubens significantly streamline the process of switching between Kubernetes clusters and namespaces, making daily operations more efficient for developers and system administrators alike. For further details, the tools’ repository offers comprehensive insights and additional usage scenarios.


  • Mohamed BEN HASSINE

    Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, web, API, and cloud technologies for over 12 years and remains eager to learn new things. He currently works as a Cloud/Application Architect in Paris, designing cloud-native solutions and APIs (REST, gRPC) using cutting-edge technologies (GCP, Kubernetes, Apigee, Java, Python).
