My Kubernetes Learning Journey

Written by: Ashwini | November 14, 2024 | 4 min read

After joining Greyamp Consulting as a junior DevOps consultant, I’ve been ramping up on technology stacks I haven’t had the chance to work with before. The first one I started with is Kubernetes. I recently completed the Introduction to Kubernetes course from the Linux Foundation Training! 🎉 I also went through the YouTube videos by Abhishek Veeramalla.

Kubernetes, often abbreviated as K8s, is the backbone of modern cloud-native applications. It’s a system for managing and running applications that are made up of containers. Think of containers as small, portable boxes that hold everything needed to run a piece of software, including the code and its environment. Kubernetes organizes these containers so that they work together smoothly, scale up or down as needed, and recover automatically if something goes wrong. It’s like a traffic manager for your applications, ensuring they run efficiently and are always available.

This post is the first in a series summarizing my learning about K8s. I’ll be talking through the following:

  • The architecture of K8s and how it works
  • Containers and their orchestration
  • How Kubernetes simplifies application deployment and management
  • Essential components like nodes, pods, and clusters

Let’s understand the architecture of Kubernetes and how it works.

Architecture

Kubernetes primarily consists of a Control Plane (Master Node) and a Data Plane (Worker Nodes). Here’s a description of each plane and its components:

Control Plane (Master Node):

The Control Plane is the brain of the Kubernetes cluster. It’s responsible for managing the overall state of the cluster, scheduling workloads, and orchestrating the deployment of containers. The key components of the Control Plane are:

  • API Server (kube-apiserver): The API Server is the gateway for all interactions with Kubernetes. It handles requests from users, external tools, and other components in the cluster via the Kubernetes API. Everything from creating new resources (like Pods) to scaling deployments goes through it. Think of it as the front desk of a building: whenever someone (a user or a tool) wants to communicate with the Kubernetes system, whether creating something new like a Pod or making a change, the request has to pass through the API Server first. (A minimal example of submitting a resource through the API Server follows this list.)
  • Scheduler: When a new task (Pod) is created, it's the Scheduler's job to find the best place (Worker Node) for it to run. It looks at things like how much space or resources each node has and what policies are in place, ensuring everything is balanced and running smoothly across the system.
  • Controller Manager: This component ensures the state of the cluster matches the desired state (defined by the users). It makes sure everything in the system is working as expected, according to what the user wants. For example, if a Pod stops working, the Controller Manager steps in to create a new one to keep things running smoothly, ensuring the right number of Pods are always up and running as planned.
  • etcd: etcd is a distributed key-value store that acts as the cluster’s configuration and state store. It holds all the important information about the cluster, such as settings, configurations, and the current state of everything running. It’s the place Kubernetes looks to know what’s happening and to keep everything in sync: the single source of truth for the whole system.
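
To make the API Server’s role concrete, here is a minimal sketch: a bare-bones Pod manifest that a user could submit with kubectl apply -f pod.yaml. The request lands at the API Server, which validates it and records the desired state in etcd. The name and image below are illustrative assumptions, not anything from the course.

  # pod.yaml: a minimal Pod definition (name and image are illustrative)
  apiVersion: v1
  kind: Pod
  metadata:
    name: my-first-pod
    labels:
      app: demo
  spec:
    containers:
      - name: web
        image: nginx:1.27        # pulled and run by the node's container runtime
        ports:
          - containerPort: 80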

Data Plane (Worker Nodes):

Worker Nodes are where the actual application workloads (containers) run. Each node in the cluster contains the following essential components:

  • kubelet: The kubelet is like a caretaker on each worker node in Kubernetes. It listens to the API Server for instructions and makes sure that the containers inside the Pods are running as they should be. It also keeps an eye on the health of the node and sends updates back to the Control Plane to let it know how things are going. (A sketch of a health check the kubelet acts on follows this list.)
  • kube-proxy: The kube-proxy is responsible for managing network communication within the cluster and outside of it. It routes traffic to the appropriate Pods and ensures services can communicate with each other effectively.
  • Container runtime: The container runtime, such as containerd or Docker, is the engine that runs containers in Kubernetes. It pulls container images from a storage location (called a registry) and runs them on the node. This is what actually gets the containers up and running inside the cluster.
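
As a hedged illustration of the kubelet’s health-checking, here is a Pod manifest with a liveness probe. The kubelet runs the probe on a schedule and restarts the container when it fails. The endpoint path and port are assumptions for a hypothetical web app.

  apiVersion: v1
  kind: Pod
  metadata:
    name: probed-pod             # illustrative name
  spec:
    containers:
      - name: web
        image: nginx:1.27
        livenessProbe:           # the kubelet runs this check
          httpGet:
            path: /              # assumed health endpoint
            port: 80
          initialDelaySeconds: 5 # wait before the first check
          periodSeconds: 10      # re-check every 10 seconds; on failure, restart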

Let’s Understand How It All Works

  1. When a deployment request is made, such as creating or updating a Pod, the request first goes to the API Server, which validates it and stores the desired state in etcd. (A sketch of such a desired-state request follows this list.)
  2. The API Server then invokes the Scheduler, which selects the most suitable Worker Node based on available resources like CPU and memory, and returns its decision to the API Server, which records it in etcd.
  3. Once a node is chosen, the kubelet on that node is notified. It communicates with the container runtime (containerd, Docker, or another runtime), which pulls the required container images and runs them, creating the Pod.
  4. The kubelet reports the Pod’s status back to the API Server, which writes it to etcd.
  5. Once the Pod is up and running, kube-proxy routes traffic to the Pod’s IP address, allowing smooth communication within the cluster and with external services.
  6. Throughout this process, the Controller Manager constantly monitors the system’s health. If any Pod or node fails, it automatically recreates the Pod on a healthy node, ensuring the cluster remains aligned with the desired state.
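
To tie the flow together, here is a hedged sketch of a desired state you might hand to the API Server: a Deployment asking for three replicas of a Pod. Everything here (names, image, replica count) is an illustrative assumption.

  # deployment.yaml: desired state, i.e. "keep 3 replicas of this Pod running".
  # The API Server stores it in etcd, the Scheduler places the Pods, each
  # node's kubelet starts them, and the Controller Manager keeps the actual
  # count at 3 even if a Pod or node fails.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: demo-app               # illustrative name
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: demo
    template:
      metadata:
        labels:
          app: demo
      spec:
        containers:
          - name: web
            image: nginx:1.27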

Understanding Containers and Their Orchestration

Containers are lightweight, portable units that package applications and all their dependencies, ensuring they run smoothly across different computing environments. They solve the problem of "it works on my machine" by isolating applications from the host system, making them ideal for consistent, repeatable deployments.

But managing hundreds or thousands of containers manually would be overwhelming. That’s where container orchestration comes in.

Container orchestration tools, like Kubernetes, automate tasks such as:

  • Deploying and managing containers across multiple machines
  • Scaling applications up or down based on demand (see the autoscaler sketch after this list)
  • Handling failures by restarting or rescheduling containers
  • Balancing the load between containers to optimize resource usage
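
For the scaling point above, here is a minimal sketch of a HorizontalPodAutoscaler, one way Kubernetes can add or remove replicas of a Deployment based on CPU usage. The target name and thresholds are illustrative assumptions.

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: demo-app-hpa           # illustrative name
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: demo-app             # the Deployment from the earlier sketch
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # scale out when average CPU exceeds 70%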

Kubernetes takes away the complexity of managing containers at scale, allowing you to focus on developing your application while it takes care of the infrastructure.

How Kubernetes Simplifies Application Deployment and Management

Kubernetes (K8s) revolutionizes how we deploy and manage applications by automating many traditionally manual tasks. Here’s how it makes the process easier:

  1. Automated Deployment: With Kubernetes, deploying applications is simplified through YAML configuration files that define how your application should run. Kubernetes takes this information and deploys it consistently across multiple machines.
  2. Self-Healing: If something goes wrong, like a container crashing, Kubernetes automatically detects the issue and replaces or restarts the container. This ensures minimal downtime and keeps your application running smoothly.
  3. Scaling: Whether your application needs more resources during peak traffic or fewer resources during downtime, Kubernetes scales your application up or down automatically, saving time and infrastructure costs.
  4. Load Balancing: Kubernetes distributes traffic across multiple containers, ensuring no single container is overwhelmed. It balances the load effectively, improving performance and reliability.
  5. Rollback and Updates: When you release a new version of your application, Kubernetes allows for rolling updates. It replaces old containers with new ones gradually to ensure stability. If anything goes wrong, Kubernetes can roll back to a previous stable version. (A sketch of a rolling-update configuration follows this list.)
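
As a hedged sketch of point 5, here is how a rolling update can be configured on a Deployment; the strategy fields are standard Deployment settings, and the specific numbers are illustrative.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: demo-app
  spec:
    replicas: 3
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1              # at most 1 extra Pod during the update
        maxUnavailable: 0        # never drop below the desired replica count
    selector:
      matchLabels:
        app: demo
    template:
      metadata:
        labels:
          app: demo
      spec:
        containers:
          - name: web
            image: nginx:1.27    # bump this tag to trigger a rolling update

If the new version misbehaves, kubectl rollout undo deployment/demo-app reverts to the previous revision.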

Essential Components: Nodes, Pods, and Clusters

In Kubernetes, understanding a few key components is crucial to grasp how it works:

  1. Nodes: A node is a physical or virtual machine where Kubernetes runs workloads. Each node runs one or more containers and is responsible for executing assigned tasks. There are two types of nodes:
    • Master Node: Manages the Kubernetes cluster, handling scheduling, scaling, and maintaining the desired state.
    • Worker Node: Executes tasks by running the actual application containers.
  2. Pods: A pod is the smallest and simplest Kubernetes object. It represents one or more containers that share the same network and storage. Pods group closely related containers together, so they can communicate and work as a single unit. They are ephemeral, meaning they can be created and destroyed based on the needs of the application.
  3. Clusters: A Kubernetes cluster is a group of nodes, both master and worker nodes, that work together to run containerized applications. The master node manages the cluster, while the worker nodes handle the actual workloads. Kubernetes ensures that applications are distributed across nodes, and the cluster remains healthy and balanced.
  • Node Pools are groups of nodes that share similar characteristics, like the type of machine or the region they’re running in. Node pools help you organize your cluster, allowing you to have different configurations for different workloads. For example, some node pools might be optimized for CPU-intensive tasks, while others might have more memory for different needs.
  • Replicas refer to multiple copies of the same Pod (which is a group of containers) running across different nodes. Having replicas is important because it ensures:
    1. High Availability: If one Pod fails (or the node it's running on crashes), Kubernetes can automatically shift the workload to another replica that's still running. This helps prevent downtime.
    2. Load Balancing: With multiple replicas, traffic can be distributed between them so that no single Pod gets overwhelmed by requests. (A sketch of replicas exposed behind a Service follows below.)
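
To close, here is a hedged sketch of how replicas and load balancing come together: a Service that spreads traffic across whichever Pods match its selector (here, the three replicas of the earlier Deployment sketch). The names and ports are illustrative assumptions.

  apiVersion: v1
  kind: Service
  metadata:
    name: demo-app-svc           # illustrative name
  spec:
    selector:
      app: demo                  # matches the Pods created by the Deployment
    ports:
      - port: 80                 # port the Service exposes inside the cluster
        targetPort: 80           # port the container listens on
    type: ClusterIP              # internal load balancing across the replicas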