Understanding Topics:
Persistent Volumes
Persistent Volume Claims
ConfigMaps and Secrets
Persistent Volumes:
In DevOps and containerized applications, Persistent Volumes (PVs) play a crucial role in ensuring data persistence and separating storage from compute resources. Here's why they're important, along with a real-world example:
Why we need Persistent Volumes:
Data Persistence: Containers are ephemeral by nature, meaning they can be spun up and torn down easily. However, certain applications require persistent storage for their data, configurations, or other resources. Persistent Volumes provide a way to store this data independently of the lifecycle of the containers.
Decoupling Storage from Compute: Persistent Volumes allow for the separation of concerns between storage management and application deployment. DevOps engineers can manage storage resources separately from application deployment and scaling, providing flexibility and scalability.
Data Sharing and Reusability: Persistent Volumes enable multiple containers or pods to access the same data volume simultaneously, facilitating data sharing and reuse across different parts of an application or between multiple applications.
Data Backup and Disaster Recovery: By using Persistent Volumes, data stored within containers can be backed up easily and managed more effectively. This ensures that critical data is protected and can be recovered in case of failures or disasters.
Real-World Example:
Consider a scenario where you're deploying a microservices-based application using Kubernetes. Each microservice is containerized and requires access to a database. Instead of embedding the database within each container, which would make it difficult to manage, scale, and back up, you can use Persistent Volumes to store the database data separately.
For instance, let's say you have a Kubernetes cluster running an e-commerce application. You have a microservice for user authentication that requires access to a PostgreSQL database. You can define a Persistent Volume to store the database data independently of the authentication service. This allows you to:
Scale the authentication service independently without worrying about losing data.
Back up the database separately and ensure data integrity.
Facilitate data sharing if other microservices also need access to the same database.
Upgrade or replace the authentication service without affecting the database data.
In this example, Persistent Volumes provide the necessary data persistence and separation of concerns, enabling efficient management and scaling of the application components in a DevOps environment.
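As a sketch of this idea, a Persistent Volume for the PostgreSQL data in the example above might look like the following. The name, size, and host path here are illustrative assumptions, not part of the example application:

```yaml
# Illustrative PV for the PostgreSQL example; name, size, and path are assumptions
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the data even if the claim is deleted
  hostPath:
    path: /data/postgres   # host directory that outlives any pod using it
```

Because the PV is defined independently of the authentication service, the service's pods can be deleted, scaled, or upgraded without touching the data on the host path.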
Persistent Volume Claims:
Persistent Volume Claims (PVCs) are essential in DevOps for dynamically provisioning storage resources in Kubernetes environments. A PVC is a request for storage made by a user or application within Kubernetes. Here's why they are important, along with a real-world example:
Why we need Persistent Volume Claims:
Dynamic Provisioning: PVCs enable dynamic provisioning of storage resources in Kubernetes. Instead of pre-allocating storage manually, PVCs allow applications to request storage as needed, reducing manual intervention and streamlining resource allocation.
Abstraction of Storage Details: PVCs abstract the underlying storage details from the application layer. DevOps engineers can define storage requirements at a higher level, such as storage class and access mode, without needing to worry about specific storage configurations.
Portability and Scalability: With PVCs, applications become more portable across different Kubernetes clusters and storage providers. They also facilitate scalability by allowing applications to dynamically request more storage as their requirements grow.
Efficient Resource Utilization: PVCs help in efficient resource utilization by ensuring that storage resources are only provisioned when needed. This helps in optimizing costs and resource utilization in cloud-native environments.
Real-World Example:
Consider a scenario where you're deploying a web application in a Kubernetes cluster that requires access to a MySQL database. Here's how PVCs can be used in this scenario:
Defining a Persistent Volume Claim: You define a PVC for the MySQL database with specific requirements such as storage size, access mode (e.g., ReadWriteOnce, ReadOnlyMany), and storage class.
Deployment of MySQL Pod: When you deploy the MySQL pod, its spec includes a volume that references the PVC. Kubernetes dynamically provisions a Persistent Volume based on the PVC's requirements and binds it to the claim.
Scaling the Application: As the web application scales and requires more database storage, it can dynamically request additional storage by creating more PVCs. Kubernetes handles the provisioning and binding of new Persistent Volumes transparently.
Storage Management: DevOps engineers can manage storage resources at a higher level by defining storage classes that map to different storage providers or configurations. They can also monitor PVC usage to optimize resource allocation and identify potential bottlenecks.
In this example, PVCs enable dynamic provisioning of storage resources for the MySQL database, ensuring that the web application has access to the required storage while abstracting the underlying storage details. This promotes scalability, portability, and efficient resource utilization in DevOps environments.
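A PVC for the MySQL scenario above might look like the following sketch. The claim name and storage class are illustrative assumptions; dynamic provisioning requires a matching StorageClass to exist in the cluster:

```yaml
# Illustrative PVC; the name and storageClassName are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-claim
spec:
  accessModes:
    - ReadWriteOnce            # one node may mount the volume read-write
  storageClassName: standard   # assumed StorageClass; must exist in the cluster
  resources:
    requests:
      storage: 1Gi
```

When this claim is created, Kubernetes either binds it to an existing PV that satisfies the request or, with a dynamic provisioner, creates a new PV on demand.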
ConfigMaps and Secrets:
In DevOps practices, ConfigMaps and Secrets are crucial for managing configuration data and sensitive information securely within containerized environments like Kubernetes. Here's why they are essential, along with a real-world example:
Why we need ConfigMaps and Secrets:
Configuration Management: ConfigMaps allow DevOps engineers to manage configuration data separately from application code. This separation of concerns facilitates easier configuration updates without modifying the application code, promoting flexibility and maintainability.
Environment Consistency: ConfigMaps ensure consistency across different environments (e.g., development, staging, production) by providing a centralized mechanism for storing configuration parameters. This helps in avoiding configuration drift and ensures that applications behave consistently across environments.
Sensitive Data Handling: Secrets are specifically designed to handle sensitive information such as passwords, API tokens, and encryption keys. Their values are base64-encoded and, when encryption at rest is enabled for the cluster, encrypted in etcd; they are only exposed to authorized pods, reducing the risk of leaking sensitive data.
Compliance and Security: Secrets help in maintaining compliance with security standards and regulations by ensuring that sensitive data is stored and transmitted securely within the Kubernetes cluster. Access to secrets can be restricted based on RBAC (Role-Based Access Control) policies, limiting exposure to unauthorized users.
Real-World Example:
Consider a scenario where you're deploying a microservices-based web application in Kubernetes that requires configuration parameters for connecting to a database and accessing external APIs. Here's how ConfigMaps and Secrets can be used:
ConfigMaps for Configuration Data: You create a ConfigMap containing configuration parameters such as database hostname, port, credentials, and API endpoints. This ConfigMap is mounted as a volume in the pods running your application, allowing the application to access configuration data as environment variables or files.
Secrets for Sensitive Data: For sensitive information like database passwords and API tokens, you create Secrets instead of ConfigMaps. Secrets can be encrypted at rest (when the cluster enables it) and are only accessible to authorized pods. The application retrieves sensitive data from Secrets at runtime, ensuring secure handling of critical information.
Environment-specific ConfigMaps: You create separate ConfigMaps for different environments (e.g., development, staging, production), each containing environment-specific configuration parameters. This ensures that the application behaves consistently across different environments while allowing environment-specific configurations to be easily managed.
Secret Rotation and Management: DevOps engineers implement processes for secret rotation and management to regularly update sensitive data stored in Secrets. This helps in mitigating security risks associated with long-lived secrets and ensures compliance with security best practices.
In this example, ConfigMaps and Secrets enable efficient management of configuration data and sensitive information within a Kubernetes environment, promoting consistency, security, and compliance in DevOps practices.
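As a minimal sketch of this pattern (all names and values below are illustrative assumptions, not from the example application):

```yaml
# ConfigMap for non-sensitive configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: mysql
  DB_PORT: "3306"
---
# Secret for sensitive values; data values must be base64-encoded
# ("YWRtaW4=" decodes to "admin")
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: YWRtaW4=
```

A pod can consume both as environment variables via envFrom (with configMapRef and secretRef entries) or mount them as files, so the application code never hard-codes these values.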
Prerequisites:
What Is AWS
How to create an EC2 machine with Ubuntu
How to install Kubernetes (minikube)
What Is AWS:
Amazon Web Services (AWS) is a cloud computing platform offered by Amazon.com. It provides a wide range of services, including computing power, storage options, networking, databases, machine learning, and more, all delivered over the internet. AWS allows businesses and individuals to access scalable and reliable computing resources without the need to invest in costly infrastructure.
One of the key features of AWS is its scalability. Users can easily scale their resources up or down based on demand, allowing them to accommodate fluctuating workloads without over-provisioning or experiencing downtime. This scalability is made possible by AWS's pay-as-you-go pricing model, where users only pay for the resources they consume, eliminating the need for large upfront investments.
AWS offers a vast array of services to meet the diverse needs of its customers. Some of the core services include:
Amazon Elastic Compute Cloud (EC2): EC2 provides resizable compute capacity in the cloud, allowing users to quickly deploy virtual servers to run their applications.
Amazon Simple Storage Service (S3): S3 offers scalable object storage for data backup, archival, and analytics. It provides high durability, availability, and security for storing and retrieving data.
Amazon Relational Database Service (RDS): RDS simplifies the setup, operation, and scaling of relational databases such as MySQL, PostgreSQL, and SQL Server in the cloud.
Amazon Virtual Private Cloud (VPC): VPC allows users to create isolated virtual networks within the AWS cloud, giving them control over network configuration, IP addressing, and security.
Amazon Lambda: Lambda is a serverless compute service that allows users to run code without provisioning or managing servers. It automatically scales based on incoming requests, making it ideal for event-driven applications and microservices.
Amazon Elastic Load Balancing (ELB): ELB automatically distributes incoming application traffic across multiple targets, such as EC2 instances, to ensure high availability and fault tolerance.
Security is a top priority for AWS, and the platform provides a wide range of tools and features to help users secure their data and applications. These include identity and access management (IAM), encryption, network security, compliance certifications, and more.
How to Create an EC2 Machine with Ubuntu:
Login to AWS Console:
Go to the AWS Management Console (https://aws.amazon.com/).
Sign in with your AWS account.
Navigate to EC2:
In the AWS Management Console, navigate to the EC2 service.
Launch an Instance:
Click on the "Instances" in the left navigation pane.
Click the "Launch Instances" button.
Choose an Amazon Machine Image (AMI):
In the "Step 1: Choose an Amazon Machine Image (AMI)" section, select an Ubuntu AMI. You can search for "Ubuntu" in the search bar and choose an appropriate version (e.g., Ubuntu Server 20.04 LTS).
Choose an Instance Type:
In the "Step 2: Choose an Instance Type" section, select "t2.xlarge" as the instance type.
Click "Next: Configure Instance Details."
Configure Instance Details:
In the "Step 3: Configure Instance Details" section:
Set the "Number of instances" to 1.
Optionally, you can configure additional settings, such as network, subnet, IAM role, etc.
Click "Next: Add Storage."
Add Storage:
In the "Step 4: Add Storage" section, you can leave the default storage settings or adjust as needed.
Click "Next: Add Tags."
Add Tags:
In the "Step 5: Add Tags" section, click "Add Tag."
For "Key," enter "Name" and for "Value," enter "Jenkins" (or your preferred name).
Click "Next: Configure Security Group."
Configure Security Group:
In the "Step 6: Configure Security Group" section:
Create a new security group or use an existing one.
Add inbound rules to allow HTTP (port 80), HTTPS (port 443), and SSH (port 22) traffic.
Click "Review and Launch."
Review and Launch:
Review your configuration settings.
Click "Launch."
Select Key Pair:
In the key pair dialog, select "Choose an existing key pair" and choose the "minikube" key pair.
Acknowledge that you have access to the private key.
Click "Launch Instances."
View Instances:
Once the instance is launched, you can view it in the EC2 dashboard.
Wait for the instance to reach the "running" state.
How to Install Kubernetes (minikube):
Connect to the Ubuntu machine and update packages:
sudo apt-get update
sudo apt install -y curl wget apt-transport-https
Install Docker and allow the current user to run it:
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER && newgrp docker
Install minikube:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Start a single-node cluster with the Docker driver:
minikube start --driver=docker
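Before moving on, it is worth confirming that the cluster actually came up. The commands below are standard, though the exact output varies by version:

```shell
minikube status      # components should report Running / Configured
kubectl get nodes    # the single minikube node should show STATUS "Ready"
```

If the node is not Ready, re-check that the current user is in the docker group (you may need to log out and back in after usermod).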
GitHub Project Link: https://github.com/udayyysharma/two-tier-flask-app
Build a Two-Tier Flask App Project:
First, we need to clone the repository for the app:
git clone https://github.com/udayyysharma/two-tier-flask-app.git
Then change into the "two-tier-flask-app" directory and list the files with the "ls" command.
Next, create a "mysqldata" directory. Its full path, "/home/ubuntu/two-tier-flask-app/mysqldata", will be used as the host path for the Persistent Volume.
Finally, go to the "k8s" directory and review the files there.
mysql-pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 256Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /home/ubuntu/two-tier-flask-app/mysqldata # host path where the data will be stored; make sure the mysqldata directory exists at this path
mysql-svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
two-tier-app-pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: two-tier-app-pod
spec:
  containers:
    - name: two-tier-app-pod
      image: trainwithshubham/flaskapp:latest
      env:
        - name: MYSQL_HOST
          value: "10.98.19.211" # this is the mysql service's cluster IP; make sure to change it to yours
        - name: MYSQL_PASSWORD
          value: "admin"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_DB
          value: "mydb"
      ports:
        - containerPort: 5000
      imagePullPolicy: Always
mysql-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:latest
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "admin"
            - name: MYSQL_DATABASE
              value: "mydb"
            - name: MYSQL_USER
              value: "admin"
            - name: MYSQL_PASSWORD
              value: "admin"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysqldata
              mountPath: /var/lib/mysql # container path where MySQL stores its data
      volumes:
        - name: mysqldata
          persistentVolumeClaim:
            claimName: mysql-pvc # PVC claim name
mysql-pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
two-tier-app-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-tier-app
  labels:
    app: two-tier-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: two-tier-app
  template:
    metadata:
      labels:
        app: two-tier-app
    spec:
      containers:
        - name: two-tier-app
          image: trainwithshubham/flaskapp:latest
          env:
            - name: MYSQL_HOST
              value: "10.98.19.211" # this is the mysql service's cluster IP; make sure to change it to yours
            - name: MYSQL_PASSWORD
              value: "admin"
            - name: MYSQL_USER
              value: "root"
            - name: MYSQL_DB
              value: "mydb"
          ports:
            - containerPort: 5000
          imagePullPolicy: Always
two-tier-app-svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: two-tier-app-service
spec:
  selector:
    app: two-tier-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
      nodePort: 30004
  type: NodePort
After that, apply the first file, "mysql-pv.yml":
kubectl apply -f mysql-pv.yml
Next, apply the second file, the Persistent Volume Claim:
kubectl apply -f mysql-pvc.yml
Now check that the claim is bound:
kubectl get pvc
Then apply the MySQL deployment file:
kubectl apply -f mysql-deployment.yml
Now let's create some data and check whether it persists. Open a shell inside the MySQL pod (replace the pod name with yours, as shown by "kubectl get pods"):
kubectl exec --stdin --tty mysql-5479cbccb8-w6w89 -- /bin/bash
You are now inside the MySQL container.
Next, log in to the MySQL database.
Check which databases exist.
You will see the "mydb" database; switch to it.
Then create a table.
Finally, exit MySQL.
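Inside the container, the steps above look roughly like the following. The root password ("admin") and database name ("mydb") come from mysql-deployment.yml; the table definition itself is only an illustrative assumption:

```shell
mysql -u root -p
# enter "admin" when prompted, then at the mysql> prompt:
#   SHOW DATABASES;    -- "mydb" should be listed
#   USE mydb;
#   CREATE TABLE demo (id INT AUTO_INCREMENT PRIMARY KEY, note TEXT);
#   INSERT INTO demo (note) VALUES ('hello');
#   EXIT;
```

Whatever you insert here is written under /var/lib/mysql, which is mounted from the Persistent Volume.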
Now we need to connect the backend to the database. Go to the "k8s" directory and apply the MySQL Service file (kubectl apply -f mysql-svc.yml).
To recap what we have done: we created a MySQL Deployment and exposed it with a Service, and the Deployment is backed by a Persistent Volume, so its data survives pod restarts. Generally, we do not expose the MySQL service directly to outside users. Instead, we deploy an application that is given access to the MySQL service, and then create another Service that exposes the application to users; that user-facing Service connects to the application, which in turn connects to MySQL.
Next, copy the MySQL service's cluster IP (shown by "kubectl get svc") and paste it into the MYSQL_HOST value in two-tier-app-deployment.yml.
Then apply the application deployment file (kubectl apply -f two-tier-app-deployment.yml).
To expose the application to outside users, apply the two-tier app Service file (kubectl apply -f two-tier-app-svc.yml).
Now we have the Service, the Deployment, and the database. To access the application, run:
kubectl get svc
minikube service <service name> --url
The URL returned is a local URL, reachable only from within the machine. For outside users to access the app, forward the port:
kubectl port-forward service/<service name> --address 0.0.0.0 8080:80
Finally, go to the EC2 instance's security group, allow inbound traffic on port 8080, then open http://<public-ip>:8080 in a browser.
Any data you add in the app is written to the database.
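To confirm the data really is persistent, you can delete the MySQL pod and check that your table survives once the Deployment recreates it. The label selector assumes the app=mysql label from mysql-deployment.yml; the table name is whatever you created earlier:

```shell
kubectl delete pod -l app=mysql        # the Deployment recreates the pod automatically
kubectl get pods                       # wait until the new mysql pod is Running
kubectl exec -it <new-mysql-pod-name> -- mysql -u root -padmin -e "USE mydb; SHOW TABLES;"
# the table created earlier should still be listed, because /var/lib/mysql lives on the PV
```

Because persistentVolumeReclaimPolicy is Retain, the data on the host path also survives even if the PVC itself is deleted.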