Kubernetes Serverless and Event-Driven Applications : Practical Examples

Serverless is a development paradigm aligned with cloud-native principles, enabling developers to build and deploy applications without the complexities of server management. Although servers remain part of the architecture, their operational details are hidden by the platform to streamline application development.

This post provides practical guides for deploying serverless workloads on Kubernetes through the Knative stack, simplifying the path to serverless computing.

Installing the Knative Operator

Use Case

You aim to set up the Knative platform on your Kubernetes cluster to take advantage of serverless capabilities.

Solution

Leverage the Knative Operator for a streamlined deployment of Knative’s stack components, including Serving and Eventing, on your cluster. The operator introduces custom resources (CRs) for straightforward configuration, installation, upgrade, and lifecycle management of Knative.

To install Knative Operator version 1.11.4, apply the release manifest from Knative's releases page:

kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.11.4/operator.yaml

Afterwards, confirm the operator is operational:

kubectl get deployment knative-operator

You should see an output indicating the operator is successfully deployed:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
knative-operator   1/1     1            1           13s

Discussion

Knative, an open-source initiative, facilitates the deployment, operation, and management of serverless and cloud-native applications on Kubernetes. It is comprised of two core components: Serving for handling serverless workloads and Eventing for event-driven architecture support.

Though using the Knative Operator is recommended for deploying Knative components due to its ease of use and management features, you also have the option to manually deploy these components using YAML files found on their respective release pages. This flexibility allows for customized setups tailored to specific requirements or environments.
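The same operator also manages the Eventing component through its own custom resource. As a minimal sketch (assuming a knative-eventing namespace has already been created), the KnativeEventing CR mirrors the KnativeServing CR used later in this post:

```yaml
# Minimal KnativeEventing custom resource handled by the Knative Operator.
# Applying it instructs the operator to install the Eventing components
# into the target namespace, with default settings.
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
```

As with Serving, the operator reconciles this resource and reports readiness in its status, so `kubectl -n knative-eventing get KnativeEventing` can be used to watch the installation progress.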

Installing the Knative Serving Component

Use Case

After installing the Knative Operator, you’re ready to deploy the Knative Serving component to facilitate serverless application execution.

Solution

To install Knative Serving, utilize the KnativeServing custom resource (CR) defined by the Knative Operator. This process involves creating the appropriate namespace, selecting a networking layer, and setting up DNS configuration.

Create the knative-serving Namespace:

$ kubectl create ns knative-serving
namespace/knative-serving created

Prepare the Knative Serving Configuration:

For networking, we’ll opt for Kourier, a lightweight ingress solution for Knative Serving. For DNS, the sslip.io service will be used for convenience.

Create a file named serving.yaml with the following content:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    kourier:
      enabled: true
  config:
    network:
      ingress-class: "kourier.ingress.networking.knative.dev"

Deploy Knative Serving:

Apply the configuration with kubectl:

$ kubectl apply -f serving.yaml
knativeserving.operator.knative.dev/knative-serving created

Monitor the deployment status:

$ kubectl -n knative-serving get KnativeServing knative-serving -w
NAME              VERSION   READY   REASON
knative-serving   1.11.0    False   NotReady
knative-serving   1.11.0    False   NotReady
knative-serving   1.11.0    True

Alternatively, you can manually apply YAML files for Knative Serving and its components:

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.0/servi

Verify External Access via Kourier:

Ensure the Kourier service has an external IP or CNAME:

$ kubectl -n knative-serving get service kourier
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)         AGE
kourier   LoadBalancer   10.99.62.226   10.99.62.226   80:30227/T...   118s

Set Up DNS Using sslip.io:

Apply the default domain configuration for sslip.io:

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.0/serving-default-domain.yaml
job.batch/default-domain created
service/default-domain-service created

Discussion

Knative Serving introduces a high-level framework for deploying and managing serverless, stateless applications that are driven by requests. It abstracts away much of the underlying infrastructure management, allowing developers to concentrate on coding.

The use of sslip.io simplifies DNS management for accessing applications on Knative: generated URLs carry an sslip.io suffix that resolves to the IP address embedded in the hostname (for example, 10.99.62.226.sslip.io resolves to 10.99.62.226). While convenient for testing, a proper DNS setup is advised for production deployments to ensure reliable access to your services.
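To verify the installation end to end, you can deploy a minimal Knative Service. The sketch below assumes the public gcr.io/knative-samples/helloworld-go sample image; the service name and the TARGET environment variable are illustrative:

```yaml
# Minimal Knative Service. Serving builds a revision from this template,
# scales it to zero when idle, and assigns it a URL under the
# sslip.io domain configured above.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
```

After applying it with kubectl, `kubectl get ksvc hello` shows the generated URL once the service reports Ready, and requesting that URL should return a greeting from the sample container.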

See Also:

  • Guides on Installing Knative
  • How to Configure DNS for Knative

Author

  • Mohamed BEN HASSINE

Mohamed BEN HASSINE is a hands-on Cloud Solution Architect based in France. He has been working with Java, web, API, and cloud technologies for over 12 years and is still eager to learn new things. He currently works as a Cloud/Application Architect in Paris, designing cloud-native solutions and APIs (REST, gRPC) using cutting-edge technologies (GCP, Kubernetes, Apigee, Java, Python).
