Working with Services in Kubernetes

As a DevOps Engineer specializing in Kubernetes, my primary goal is to ensure seamless deployment, scaling, and management of containerized applications. Kubernetes, as an orchestration tool, plays a pivotal role in achieving this objective by abstracting away the complexities of infrastructure management. Let's delve into some examples of services in Kubernetes and how they facilitate robust application deployment and management:

Deployment: One of the fundamental workload resources in Kubernetes is the Deployment. It enables declarative updates to applications, allowing us to define the desired state of the deployed application. For instance, consider a scenario where we have a microservices-based e-commerce application. Using a Kubernetes Deployment, we can easily manage the rollout of each microservice, specifying parameters such as replica count, resource constraints, and rolling update strategies. This ensures high availability and fault tolerance while enabling seamless updates without disrupting the user experience.
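As a rough sketch of such a manifest (the image name, labels, and numbers here are illustrative assumptions, not the todo-app deployment used later in this post):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-catalog
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during an update
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: product-catalog
  template:
    metadata:
      labels:
        app: product-catalog
    spec:
      containers:
        - name: catalog
          image: example/product-catalog:1.0   # hypothetical image
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"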

Service Discovery: Kubernetes offers built-in service discovery mechanisms through services like kube-dns or CoreDNS. These services enable applications to discover and communicate with each other dynamically within the cluster. For example, suppose our e-commerce application consists of multiple microservices such as product catalog, shopping cart, and payment processing. With Kubernetes Service Discovery, each microservice can be accessed by other components using a logical endpoint (Service DNS), irrespective of the underlying pod IP addresses. This abstraction simplifies inter-service communication and allows for easier scalability and maintenance.
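For illustration, a ClusterIP Service in front of a hypothetical product-catalog Deployment might look like the sketch below; other pods in the same namespace could then reach it at http://product-catalog:8080 (or product-catalog.default.svc.cluster.local from other namespaces), regardless of which pod IPs back it:

apiVersion: v1
kind: Service
metadata:
  name: product-catalog
spec:
  type: ClusterIP            # default type; reachable only inside the cluster
  selector:
    app: product-catalog     # matches the pods created by the Deployment above
  ports:
    - port: 8080             # port other services call
      targetPort: 8080       # port the container listens on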

Horizontal Pod Autoscaler (HPA): Scalability is a critical aspect of modern applications, and Kubernetes provides the Horizontal Pod Autoscaler to automate the scaling process based on resource utilization metrics. Let's say our e-commerce platform experiences a surge in traffic during peak hours. With HPA, Kubernetes can dynamically adjust the number of replica pods for a deployment based on CPU or memory utilization thresholds. This ensures that sufficient resources are available to handle increased load, optimizing performance and cost-efficiency. For instance, if the traffic to our payment service spikes, Kubernetes HPA can automatically scale up the number of payment service pods to meet the demand and scale them down during off-peak periods, maintaining efficient resource utilization.
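A sketch of such an HPA could look like the following (autoscaling/v2 API, with illustrative names and numbers; a metrics server must be running in the cluster for the CPU metric to be available):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payment-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payment            # hypothetical payment-service Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%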

Ingress Controller: Managing external access to applications running in a Kubernetes cluster is simplified with the Ingress Controller. It acts as a smart router, enabling traffic routing based on HTTP/HTTPS rules and hostnames. Consider our e-commerce application requiring external access for customers. By configuring an Ingress resource in Kubernetes, we can define rules for routing requests to specific microservices based on URL paths or domain names. This facilitates easier management of external access and enables features like SSL termination, load balancing, and path-based routing. For instance, we can route requests to "/shop" to the shopping cart service and requests to "/checkout" to the payment processing service, providing a seamless user experience.
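As an illustration of the "/shop" and "/checkout" routing described above, an Ingress resource might look roughly like this (the hostname and Service names are hypothetical, and an ingress controller such as NGINX must be installed in the cluster for the rules to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /shop
            pathType: Prefix
            backend:
              service:
                name: shopping-cart    # hypothetical cart Service
                port:
                  number: 80
          - path: /checkout
            pathType: Prefix
            backend:
              service:
                name: payment          # hypothetical payment Service
                port:
                  number: 80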

Kubernetes offers a comprehensive set of services that empower DevOps engineers to efficiently manage containerized applications at scale. From deployment automation to service discovery, autoscaling, and traffic routing, these services play a crucial role in building resilient and scalable infrastructure for modern applications. As a DevOps Engineer, leveraging these Kubernetes services enables me to streamline deployment workflows, enhance application reliability, and ultimately deliver value to end-users.

Create a Service for your todo-app Deployment:

How to install minikube on an AWS EC2 Ubuntu machine:

Login to AWS Console:

  • Go to the AWS Management Console (https://aws.amazon.com/).

    Sign in with your AWS account.

  • Navigate to EC2:

    In the AWS Management Console, navigate to the EC2 service.

  • Launch an Instance:

    Click on the "Instances" in the left navigation pane.

    Click the "Launch Instances" button.

  • Choose an Amazon Machine Image (AMI):

    In the "Step 1: Choose an Amazon Machine Image (AMI)" section, select an Ubuntu AMI. You can search for "Ubuntu" in the search bar and choose an appropriate version (e.g., Ubuntu Server 20.04 LTS).

  • Choose an Instance Type:

    In the "Step 2: Choose an Instance Type" section, select "t2.xlarge" as the instance type.

    Click "Next: Configure Instance Details."

  • Configure Instance Details:

    In the "Step 3: Configure Instance Details" section:

    Set the "Number of instances" to 1.

    Optionally, you can configure additional settings, such as network, subnet, IAM role, etc.

    Click "Next: Add Storage."

  • Add Storage:

    In the "Step 4: Add Storage" section, you can leave the default storage settings or adjust as needed.

    Click "Next: Add Tags."

  • Add Tags:

    In the "Step 5: Add Tags" section, click "Add Tag."

    For "Key," enter "Name" and for "Value," enter "Jenkins" (or your preferred name).

    Click "Next: Configure Security Group."

  • Configure Security Group:

    In the "Step 6: Configure Security Group" section:

    Create a new security group or use an existing one.

    Add inbound rules to allow HTTP (port 80), HTTPS (port 443), and SSH (port 22) traffic.

    Click "Review and Launch."

  • Review and Launch:

    Review your configuration settings.

    Click "Launch."

  • Select Key Pair:

    In the key pair dialog, select "Choose an existing key pair" and choose the "minikube" key pair.

    Acknowledge that you have access to the private key.

    Click "Launch Instances."

  • View Instances:

    Once the instance is launched, you can view it in the EC2 dashboard.

    Wait for the instance to reach the "running" state.
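Once the instance is running, connect to it over SSH using the key pair selected earlier (the key file name and path below are assumptions; substitute your own):

ssh -i ~/.ssh/minikube.pem ubuntu@<ec2-public-ip>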

Set up minikube on Ubuntu:

On the Ubuntu machine, run the following commands to update packages, install Docker, and install minikube and kubectl.

# Update the package index and install basic dependencies
sudo apt-get update
sudo apt install -y curl wget apt-transport-https

# Install Docker and let the current user run it without sudo
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER && newgrp docker

# Download the minikube binary and place it on the PATH
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

# Download the latest stable kubectl binary and place it on the PATH
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Start a single-node Kubernetes cluster using the Docker driver
minikube start --driver=docker
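
To confirm the cluster came up, a quick sanity check with the standard status commands looks like this:

minikube status
kubectl get nodes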

Create a todo-app Project and Run the App using Kubernetes:

First, we need to clone the repository that contains the todo-app files.

git clone https://github.com/udayyysharma/node-todo-cicd.git
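
Then change into the cloned directory so that the Kubernetes manifests are in the current working directory (the directory name follows from the repository name):

cd node-todo-cicd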

After that, we apply the deployment.yml file shown below.

Deployment File:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app-deployment
  labels:
    app: node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      name: node-pod
      labels:
        app: node-app
    spec:
      containers:
        - name: node-container
          image: trainwithshubham/node-app-batch-6
          ports:
            - containerPort: 8000
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"

For the image, we use trainwithshubham/node-app-batch-6, a prebuilt node-todo-app image that is publicly available on Docker Hub.

Now apply this file.

kubectl apply -f deployment.yml
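
As a quick sanity check, you can confirm that the Deployment and its two replica pods are running (standard kubectl commands):

kubectl get deployments
kubectl get pods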

Service File:

apiVersion: v1
kind: Service
metadata:
  name: node-app-service
spec:
  type: NodePort
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30003

After that, apply the Service file.
kubectl apply -f service.yml

Now check whether the Service was applied successfully.

kubectl get svc

It was applied successfully. Next, we generate the service URL.

minikube service node-app-service --url

This URL is only reachable from inside the EC2 instance, so we also set up port forwarding to expose the app externally.

kubectl port-forward service/node-app-service --address 0.0.0.0 8080:80

Checking the node-todo-app:

Now go back to the EC2 console and update the instance's security group to allow inbound traffic on port 8080, then copy the instance's public IP and open http://<public-ip>:8080 in a browser.

Now your node-todo-app project is running.