Working with Kubernetes Services

Kubernetes Services Practical Guide

In this section, we’ll explore the intricacies of pod-to-pod communication within a Kubernetes cluster. We’ll delve into how applications within the cluster can find and talk to each other, and we’ll cover the methods you can use to make your pods accessible from outside the cluster’s internal network.


A Kubernetes Service assigns a stable virtual IP (VIP) to a group of pods, providing a consistent address for the containers behind it even as individual pods come and go. The VIP is not tied to any physical network interface; its only job is to direct traffic to the backing pods. The kube-proxy component on each node makes this work by watching the API server for Service and Endpoints changes and programming the node's iptables (or IPVS) rules accordingly, so traffic sent to the VIP reaches a healthy pod.


Creating a Service to Make Your Application Accessible

Use Case

You need a dependable method to enable discovery and access to your application within a Kubernetes cluster.

Solution

Create a Kubernetes Service linked to the pods that constitute your application.

Let’s say you’ve initiated an nginx deployment using kubectl create deployment nginx --image nginx:1.25.2. You can seamlessly generate a Service object to expose it with the kubectl expose command as follows:

$ kubectl expose deploy/nginx --port 80
service/nginx exposed

And to inspect the details of the Service you’ve just exposed:

$ kubectl describe svc/nginx

This will display the Service’s specifications, including its name, namespace, labels, IP addresses, port, and more. For example, you’ll find it’s assigned a ClusterIP, enabling internal cluster communication to your application.
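The output will look roughly like the following (the labels, IPs, and endpoint addresses shown here are illustrative and will differ in your cluster):

```shell
$ kubectl describe svc/nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Selector:          app=nginx
Type:              ClusterIP
IP:                10.108.51.131
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.17.0.4:80
Session Affinity:  None
```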

To view this Service in your cluster’s service list, run:

$ kubectl get svc nginx

Interacting with Your Service

For direct browser access to this service, start a proxy using kubectl proxy. This will serve requests locally:

$ kubectl proxy

Then navigate your browser to:

$ open http://localhost:8001/api/v1/namespaces/default/services/nginx/proxy/

You should be greeted with the NGINX welcome page.

Pro Tip: If you encounter issues with the service, inspect the selector labels and check that endpoints are correctly populated by running kubectl get endpoints <service-name>. If there are no endpoints, your selector might not be correctly matching any pods.
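A healthy Service will list one endpoint per matching pod, for example (illustrative output; the endpoint IPs are pod IPs and will differ in your cluster):

```shell
$ kubectl get endpoints nginx
NAME    ENDPOINTS       AGE
nginx   172.17.0.4:80   5m
```

An empty ENDPOINTS column is the classic symptom of a selector that matches no pod labels.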

For manual Service object creation, here’s an example YAML configuration for the nginx Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80

The critical element here is the selector, which ties the Service to the pods that form your application, ensuring traffic is correctly routed.

Note: Deployments and ReplicaSets, which oversee pod health and lifecycle, work alongside Services, which focus on providing stable access to those pods. Both use labels to identify pods, albeit for distinct purposes.

Further Reading: Explore the Kubernetes Service documentation and tutorials on exposing your app through a Service for comprehensive guidance.


Ensuring DNS Resolution for a Kubernetes Service

Use Case

After setting up a service as outlined in a previous guide, you want to confirm that its Domain Name System (DNS) setup is functioning as intended.

Solution

Kubernetes services, when using the ClusterIP type by default, are made available on a cluster-internal IP. Provided the DNS cluster add-on is operational, these services can be accessed through their Fully Qualified Domain Name (FQDN) following the pattern: $SERVICENAME.$NAMESPACE.svc.cluster.local.

To validate the DNS resolution:

  1. Launch an interactive shell in a container within your cluster. A straightforward method to accomplish this involves deploying a temporary busybox container via kubectl run:
$ kubectl run busybox --rm -it --image busybox:1.36 -- /bin/sh

If the prompt doesn’t appear immediately, press Enter.

  2. Inside the container, perform a DNS lookup for your service:
/ # nslookup nginx

You should see an output similar to:

Server:		10.96.0.10
Address: 10.96.0.10:53

Name: nginx.default.svc.cluster.local
Address: 10.100.34.223

The displayed IP address should match your service’s cluster IP.

  3. To exit the container, type exit and press Enter.

Discussion: By default, DNS queries are restricted to the namespace of the requesting pod. If your busybox container and the nginx service are in different namespaces, the DNS lookup will fail unless you specify the correct namespace in the format <service-name>.<namespace>. For example, to resolve a service in the “staging” namespace, use nginx.staging.
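For instance, assuming an nginx Service also existed in a hypothetical staging namespace, you could resolve it from the same busybox container like this (the returned address is illustrative):

```shell
/ # nslookup nginx.staging
Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	nginx.staging.svc.cluster.local
Address: 10.109.24.56
```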

This ensures that your services are discoverable within the cluster through DNS, a fundamental aspect of intra-cluster communication.

Adapting a Service’s Type in Kubernetes

Use Case

You’ve deployed a service with the ClusterIP type, as detailed previously, and now you seek to alter its type to either NodePort or LoadBalancer for exposing your application outside the cluster.

Solution

Use the kubectl edit command to modify the service type interactively. Assuming you have a service defined in a file named simple-nginx-svc.yaml with the initial setup as follows:

kind: Service
apiVersion: v1
metadata:
  name: webserver
spec:
  ports:
  - port: 80
  selector:
    app: nginx

First, apply your service configuration:

$ kubectl apply -f simple-nginx-svc.yaml
$ kubectl get svc/webserver

This will show your service with the type ClusterIP. To change its type to NodePort, you would:

$ kubectl edit svc/webserver

This command opens the service’s current configuration in your default text editor. In the editor, find the type field under spec and change it from ClusterIP to NodePort:

spec:
  ...
  type: NodePort

Save and close the editor. Kubernetes will apply the changes, transitioning the service to the NodePort type. You can verify this by running:

$ kubectl get svc/webserver

The output will confirm the service type has changed to NodePort, with an additional nodePort value assigned for accessing the service externally.
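Alternatively, instead of editing interactively, you can set the type declaratively in simple-nginx-svc.yaml and reapply the file. A minimal sketch of the updated manifest:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: webserver
spec:
  type: NodePort      # changed from the default ClusterIP
  ports:
  - port: 80
  selector:
    app: nginx
```

Then run kubectl apply -f simple-nginx-svc.yaml again. Keeping the manifest as the source of truth avoids drift between the file and the live object, which kubectl edit can easily introduce.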

Discussion: Transitioning the service type allows you to tailor the exposure of your applications according to your needs, whether it’s within the cluster or externally through a NodePort or a cloud provider’s LoadBalancer. However, it’s crucial to understand the nuances and potential costs, especially when employing LoadBalancers in a cloud environment, as they may incur additional charges.

By adjusting the service type as needed, you can ensure your applications are accessible in the manner that best fits your deployment strategy, while also considering security and cost implications.
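For completeness, exposing the same Service through a cloud provider’s load balancer is again just a change of type; the provider then provisions a load balancer and fills in the EXTERNAL-IP column of kubectl get svc:

```yaml
spec:
  ...
  type: LoadBalancer
```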

See Also: Dive deeper into Kubernetes documentation to explore the various service types and their respective use cases, ensuring you select the most appropriate type for your specific requirements.

Implementing an Ingress Controller in Kubernetes

Use Case

You’re aiming to grant external access to your applications running on Kubernetes without resorting to NodePort or LoadBalancer services. Your focus is on understanding and utilizing Ingress objects for this purpose.

Solution

An ingress controller functions as a reverse proxy and load balancer within your Kubernetes cluster. It efficiently directs external traffic to the appropriate pods based on hostname and/or URI path, facilitating the deployment of multiple, easily accessible applications on a single cluster.

To activate Ingress objects and establish routes from outside the cluster to your pods, an ingress controller must be deployed. Here’s how:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

For Minikube Users: Activate the ingress add-on directly:

$ minikube addons enable ingress

Shortly after, verify the successful deployment of the ingress controller in the ingress-nginx namespace:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-xpqbt        0/1     Completed   0          3m39s
ingress-nginx-admission-patch-r7cnf         0/1     Completed   1          3m39s
ingress-nginx-controller-6cc5ccb977-l9hvz   1/1     Running     0          3m39s

The output will display the ingress controller pod(s) among others, indicating readiness for Ingress object creation.

Discussion: NGINX stands out as a Kubernetes-endorsed ingress controller, yet numerous other ingress solutions exist, both open source and commercial, with many offering extensive API management features.

As Kubernetes evolves, the Gateway API is emerging as a modern alternative to the traditional ingress specification, already embraced by various gateway providers. For newcomers to ingress concepts, exploring the Gateway API could provide a more forward-looking foundation.
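As a taste of the Gateway API, here is a minimal, hypothetical HTTPRoute that would route /web traffic to a Service named web on port 8080. It assumes a Gateway named example-gateway has already been provisioned by a Gateway API implementation in your cluster; both names are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
  - name: example-gateway    # hypothetical, pre-provisioned Gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /web
    backendRefs:
    - name: web
      port: 8080
```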

See Also: The Kubernetes Ingress documentation and the ingress-nginx project documentation.

Making Services Accessible from Outside the Kubernetes Cluster

Use Case

You need to access a service within a Kubernetes cluster using a URI path from an external point.

Solution

Create an ingress controller, as described in the previous section, then create Ingress objects to configure it.

To deploy a basic service that responds with “Hello, world!” upon invocation, start by setting up the deployment:

$ kubectl create deployment web --image=gcr.io/google-samples/hello-app:2.0

Next, make the service accessible:

$ kubectl expose deployment web --port=8080

Ensure the correct creation of these resources:


$ kubectl get all -l app=web
NAME                       READY   STATUS    RESTARTS   AGE
pod/web-79b7b8f988-95tjv   1/1     Running   0          47s

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/web   ClusterIP   10.100.87.233   <none>        8080/TCP   8s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   1/1     1            1           47s

NAME                             DESIRED   CURRENT   READY   AGE
replicaset.apps/web-79b7b8f988   1         1         1       47s

You should see the deployment, service, and pod information, indicating everything is running as expected.

To direct the URI path /web to your service, create an Ingress object using the following manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-public
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host:
    http:
      paths:
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080

Apply this configuration:

$ kubectl apply -f nginx-ingress.yaml

Afterward, the Ingress object will be listed in the cluster (for example, via kubectl get ingress or in the Kubernetes dashboard).

With NGINX now accessible via a specific IP address (e.g., 192.168.49.2), the service can be reached from outside the cluster at the /web path, showing the “Hello, world!” message.
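From a machine that can reach that IP, a quick check with curl should return the app’s response (your ingress IP will differ, and the hostname line depends on the pod name):

```shell
$ curl http://192.168.49.2/web
Hello, world!
Version: 2.0.0
Hostname: web-79b7b8f988-95tjv
```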

Note on Minikube: A known Minikube issue related to the Docker driver might prevent external access to the service via the Ingress IP address. A workaround is to establish a tunnel to the cluster using minikube service web. This command, especially with the --url option, displays the tunnel URL in the terminal. Running this command will occupy the terminal, so it’s advised to execute it in a separate window. Further details on this limitation are available in the Minikube documentation.

Author

  • Mohamed BEN HASSINE

    Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, web, API, and cloud technologies for over 12 years and is still eager to learn new things. He currently works as a Cloud/Application Architect in Paris, designing cloud-native solutions and APIs (REST, gRPC) using technologies such as GCP, Kubernetes, Apigee, Java, and Python.
