Kubernetes Monitoring and Logging By Examples

This chapter covers monitoring and logging techniques for infrastructure and applications in Kubernetes, targeting two main roles. Administrators focus on cluster health: node condition, scaling needs, overall utilization, and user quotas.

Developers concentrate on the application side: resource needs, scaling, storage, and troubleshooting failing apps. We begin with accessing application logs, move on to Kubernetes probes for health checks, and close with the Metrics Server, Prometheus, and Grafana for cluster-wide insight.

Accessing Application Logs within a Container

Use Case

You need to view the logs from an application running inside a specific container in a pod.

Solution

Use the kubectl logs command. To see a summary of its options, run:

$ kubectl logs --help | more
Print the logs for a container in a pod or specified resource. If the pod has
only one container, the container name is optional.

Examples:
# Return snapshot logs from pod nginx with only one container
kubectl logs nginx
...

As the help text notes, this command prints a container’s logs; if the pod contains only one container, the container name can be omitted.

Practical usage: consider a pod created by a Deployment. To view its logs, first look up the pod’s name:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
nginx-with-pv-7d6877b8cf-mjx5m   1/1     Running   0          140m

$ kubectl logs nginx-with-pv-7d6877b8cf-mjx5m
...
2024/03/31 11:03:24 [notice] 1#1: using the "epoll" event method
2024/03/31 11:03:24 [notice] 1#1: nginx/1.23.4
2024/03/31 11:03:24 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2024/03/31 11:03:24 [notice] 1#1: OS: Linux 5.15.49-linuxkit
2024/03/31 11:03:24 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/03/31 11:03:24 [notice] 1#1: start worker processes
...

The log shows NGINX startup details such as the version, build information, and system limits.

Tip:
For pods with multiple containers, use the -c option with kubectl logs to specify which container’s logs to view.
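
For instance, assuming a pod named web with two containers, app and sidecar (hypothetical names), you could run:

# Stream (-f) the logs of the "sidecar" container only
$ kubectl logs -f web -c sidecar

# Show the last 20 lines of the previous, crashed instance of "app"
$ kubectl logs web -c app --previous --tail=20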

Discussion

For an alternative way to view Kubernetes pod logs, consider using Stern. It allows for easy log retrieval across namespaces with only a part of the pod name required, offering a simpler approach than selector-based methods.
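
For instance, assuming Stern is installed, the following tails the logs of every pod whose name contains nginx, across all namespaces:

$ stern nginx --all-namespaces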

Recovering from Broken States with Liveness Probes

Use Case

You need Kubernetes to automatically restart pods if their applications enter a broken state.

Solution

Implement a liveness probe in your pod specification. If the probe fails, the kubelet restarts the affected container. Each container within a pod can have its own liveness probe, allowing granular health checks.

Liveness probes can be:

  • A command executed within the container.
  • An HTTP or gRPC request to a path served by an internal server.
  • A TCP socket check.

Example: Here’s how to set up a basic HTTP liveness probe for an Nginx container:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25.2
    livenessProbe:
      httpGet:
        path: /
        port: 80
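
A liveness probe can instead run a command inside the container or open a TCP socket, per the list above. A minimal sketch of both styles, assuming a hypothetical application that touches /tmp/healthy while healthy and listens on port 5432; either fragment replaces the httpGet block:

# Command-style probe: fails when the marker file disappears
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  periodSeconds: 5

# TCP-style probe: fails when nothing accepts connections on the port
livenessProbe:
  tcpSocket:
    port: 5432
  periodSeconds: 5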

For more detailed examples, see the Kubernetes documentation on configuring liveness, readiness, and startup probes.


Controlling Traffic with Readiness Probes

Use Case

Although your pods are live as indicated by liveness probes, you want to ensure they only receive traffic when the application is fully ready to handle requests.

Solution

Incorporate readiness probes into your pod configurations. These probes assess if the application within the container is prepared to serve traffic. Below is a basic example using an Nginx container, where the readiness probe performs an HTTP request on port 80:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-nginx
spec:
  containers:
  - name: readiness
    image: nginx:1.25.2
    readinessProbe:
      httpGet:
        path: /
        port: 80

Discussion

Though the readiness probe in this example mirrors the liveness probe from the previous recipe, the two typically serve different purposes and should therefore differ. A liveness probe verifies that the application process is running, but the process is not necessarily ready to serve requests.

On the other hand, a readiness probe checks if the application can successfully serve requests. A pod will only be added to a service’s load balancer pool if its readiness probe passes, ensuring smooth traffic handling.
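
This gating is observable: a pod is listed in a Service’s endpoints only while its readiness probe passes. Assuming a Service named nginx that selects these pods (hypothetical), you can watch pods join and leave the pool:

$ kubectl get endpoints nginx --watch

A pod whose readiness probe is failing disappears from the ENDPOINTS list until the probe succeeds again.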


Protecting Slow-Starting Containers with Startup Probes

Use Case

You have a pod with a container that needs extra time to initialize on first launch, and you want to avoid loosening your liveness probe settings just to accommodate that one-time startup delay.

Solution

Integrate a startup probe into your pod’s configuration, setting the failureThreshold and periodSeconds sufficiently high to accommodate the container’s startup duration. Startup probes can be configured similarly to liveness probes, supporting command executions, HTTP requests, or TCP checks.

Here is an example of a pod with an Nginx container using an HTTP startup probe:

apiVersion: v1
kind: Pod
metadata:
  name: startup-nginx
spec:
  containers:
  - name: startup
    image: nginx:1.25.2
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 10

Discussion

Applications that require lengthy initialization times, such as those performing extensive database migrations, can benefit from startup probes. Setting a liveness probe for such applications without compromising their startup can be challenging.

By configuring a startup probe with an appropriate failureThreshold * periodSeconds budget (in the example above, 30 × 10s gives the container up to 300 seconds), you ensure it has enough time to become operational.

Startup probes prevent liveness and readiness probes from starting until the application is fully initialized, avoiding premature termination by the kubelet. This approach allows for the safe implementation of liveness checks on containers that are slow to start.

See Also:

  • Kubernetes documentation on container probes for detailed instructions on configuring startup, liveness, and readiness probes.
  • “Configure Liveness, Readiness, and Startup Probes” section in the Kubernetes documentation for further guidance on probe setup.

Enhancing Deployments with Liveness and Readiness Probes

Use Case

You want Kubernetes to automatically monitor your application’s health and take necessary actions if it isn’t performing as expected.

Solution

Incorporate liveness and readiness probes into your deployment to inform Kubernetes about your application’s state. Starting from a deployment manifest like webserver.yaml, these probes are added within the container specification to regularly check the app’s health and readiness.

Here’s how to modify your deployment manifest to include these probes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.2
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 2
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 2
          periodSeconds: 10

After deploying this configuration:

$ kubectl apply -f webserver.yaml
$ kubectl get pods

You can inspect the pods to see the probes in action:

$ kubectl describe pod <pod-name>

The output will show details about the liveness and readiness probes, indicating whether your application is healthy and ready to serve traffic.
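
To watch the liveness probe act, you can deliberately break the page it fetches. A sketch, assuming the stock nginx image layout (document root /usr/share/nginx/html):

# Delete the index page; requests to / start failing
$ kubectl exec <pod-name> -- rm /usr/share/nginx/html/index.html

# After failureThreshold consecutive failures (3 by default), the kubelet
# restarts the container; watch the RESTARTS column climb
$ kubectl get pods --watch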

Discussion

Liveness and readiness probes are essential for managing container health within a pod. Liveness probes help Kubernetes decide when to restart a container, while readiness probes inform it when a container is ready to start accepting traffic. For services balancing traffic across multiple pods, readiness probes ensure that only healthy pods are considered.

When configuring probes, consider the behavior of your application. Use liveness probes for containers that should be restarted upon failure, with a restart policy of Always or OnFailure. Readiness probes are crucial for determining when a pod can start receiving traffic. Startup probes are useful for applications that need time to initialize, delaying other probes’ start.

For detailed guidance on configuring these probes, refer to Kubernetes documentation on liveness, readiness, and startup probes, as well as the Kubernetes pod lifecycle and init containers documentation.

Accessing Kubernetes Metrics via CLI

Use Case

After installing the Kubernetes Metrics Server, you wish to view metrics directly through the Kubernetes CLI.

Solution

The kubectl top command is your go-to for checking resource usage of both nodes and pods within your cluster. This command provides quick insights into CPU and memory utilization.

To see node metrics:

$ kubectl top node

Example output:

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   338m         8%     1410Mi          17%

To check metrics for pods across all namespaces:

$ kubectl top pods --all-namespaces

Example output:

NAMESPACE     NAME                               CPU(cores)   MEMORY(bytes)
default       db                                 15m          440Mi
default       liveness-nginx                     1m           5Mi
default       nginx-with-pv-7d6877b8cf-mjx5m     0m           3Mi
default       readiness-nginx                    1m           3Mi
default       webserver-f4f7cb455-rhxwt          1m           4Mi
kube-system   coredns-787d4945fb-jrp8j           4m           12Mi
kube-system   etcd-minikube                      48m          52Mi
kube-system   kube-apiserver-minikube            78m          266Mi
...

These metrics offer a near-real-time view of how many resources your workloads are consuming, aiding in both troubleshooting and capacity planning.
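
kubectl top also supports sorting and per-container breakdowns, for example:

# Sort pods by CPU consumption to spot the heaviest consumers
$ kubectl top pods --all-namespaces --sort-by=cpu

# Show usage per container rather than per pod
$ kubectl top pods --containers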

TIP:
The Metrics Server may take a few minutes to fully initialize after starting. If it’s not yet ready, the top command may return errors. Once it’s operational, these commands provide valuable insight into your cluster’s performance.
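
One way to verify that the Metrics Server is serving data is to check the API it registers (assuming the standard installation in the kube-system namespace):

$ kubectl get apiservices v1beta1.metrics.k8s.io
$ kubectl get deployment metrics-server --namespace kube-system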

For a more visual representation, you can also use the Kubernetes Dashboard to view these metrics, provided it’s set up in your cluster.

Setting Up Prometheus and Grafana on Minikube

Use Case

You’re interested in monitoring your Kubernetes cluster with a comprehensive view of both system and application metrics from a unified platform.

Solution

Deploying Prometheus and Grafana on Minikube is a solid approach to achieve this. The kube-prometheus project simplifies the installation of Prometheus and Grafana on any Kubernetes cluster, including Minikube. Here’s how to set it up:

  1. Prepare Minikube: Start by configuring a fresh Minikube instance optimized for kube-prometheus:

     $ minikube delete && minikube start --kubernetes-version=v1.27.0 \
         --memory=6g --bootstrapper=kubeadm \
         --extra-config=kubelet.authentication-token-webhook=true \
         --extra-config=kubelet.authorization-mode=Webhook \
         --extra-config=scheduler.bind-address=0.0.0.0 \
         --extra-config=controller-manager.bind-address=0.0.0.0

  2. Disable the metrics-server addon on Minikube:

     $ minikube addons disable metrics-server

  3. Deploy kube-prometheus:
    • Clone the kube-prometheus project:

      $ git clone https://github.com/prometheus-operator/kube-prometheus.git

    • Navigate to the repository directory and apply the manifests:

      $ kubectl apply --server-side -f manifests/setup
      $ kubectl wait --for condition=Established --all CustomResourceDefinition \
          --namespace=monitoring
      $ kubectl apply -f manifests/

  4. Access the Prometheus and Grafana dashboards:
    • For Prometheus:

      $ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090

      Then visit localhost:9090 in your browser (an example query follows this list).
    • For Grafana:

      $ kubectl --namespace monitoring port-forward svc/grafana 3000

      Access Grafana at localhost:3000 and log in with the default credentials (admin/admin).
    • Navigate to the built-in dashboard for the Kubernetes API server, or explore and create custom dashboards as needed.
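
Once both port-forwards are running, you can try a first query in the Prometheus UI at localhost:9090. A sketch of a query that charts per-pod CPU usage in the default namespace (the metric comes from cAdvisor, which kube-prometheus scrapes out of the box):

sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) by (pod)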

Discussion

This setup offers a hands-on experience with Grafana and Prometheus, providing insights into Kubernetes’ metrics. It serves as an excellent starting point for both learning and developing more advanced monitoring solutions tailored to your specific needs. As you become more familiar with these tools, you can delve into Prometheus queries and Grafana dashboards, customizing your monitoring environment to better suit your applications and workloads.


Author

  • Mohamed BEN HASSINE

    Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, web, API, and cloud technologies for over 12 years and is still eager to learn new things. He currently works as a Cloud/Application Architect in Paris, designing cloud-native solutions and APIs (REST, gRPC) using technologies such as GCP, Kubernetes, Apigee, Java, and Python.
