How Kubernetes Works: An Architectural Overview

Kubernetes has revolutionized how applications are deployed, managed, and scaled in cloud environments. From its origins at Google to becoming the cornerstone of modern cloud-native infrastructure, Kubernetes has simplified container orchestration and enabled cloud-agnostic deployments.
In this blog, we will explore Kubernetes’ journey, its architecture, and the role it plays in simplifying container management.
From Virtualization to Containerization
The evolution of cloud computing began with virtualization, which allowed multiple virtual machines (VMs) to run on a single physical server. Virtual machines, however, are resource-intensive: each one carries a full guest operating system, which adds significant overhead.
Enter containerization, a lightweight alternative to VMs. Containers package an application and its dependencies into a single unit that can run anywhere. But managing these containers at scale became a daunting task. This is where container orchestration comes into play, helping automate the deployment, scaling, and management of containerized applications.
The Need for Automated Container Orchestration
Manually managing containers, especially across a large infrastructure, is not sustainable. Operations like scaling up/down, deploying applications, handling network communication, and updating configurations require a lot of manual effort.
The question arises:
Q: Is there a way to automate container orchestration?
Ans: Yes, this is where Kubernetes comes in.
A Brief History of Kubernetes
Before Kubernetes, Google used an internal system called Borg to manage its containerized applications. Borg handled container orchestration at scale, but it wasn’t open to the public. Drawing on its experience with Borg, Google built a new orchestration system and open-sourced it in 2014 as Kubernetes. The project was later donated to the Cloud Native Computing Foundation (CNCF), making it an open-source tool that anyone can contribute to and use.
Kubernetes provided a platform-agnostic approach to container orchestration, meaning it could run across different cloud environments without being tied to any specific one.
The Challenge of Multi-Cloud Deployments
Deploying an application across different cloud providers (e.g., moving from AWS to Azure or GCP) can be challenging. Each cloud provider has its own infrastructure, services, and tools, which might require reconfiguring or rewriting certain components.
This is where Kubernetes excels. Being cloud-agnostic, Kubernetes abstracts the underlying infrastructure, making it easier to move applications from one cloud provider to another without significant changes.
Kubernetes Architecture
The Kubernetes architecture is composed of two main components: the Control Plane and the Worker Nodes. Let’s dive deeper into how these components interact to manage containerized applications at scale.
1. Control Plane
The Control Plane is the brain of the Kubernetes cluster. It manages the worker nodes and ensures that the desired state of the cluster (as specified by the user) is maintained.
Key components of the Control Plane include:
- API Server: This is the entry point for all administrative tasks in Kubernetes. Developers and administrators send configuration and deployment instructions, typically written as YAML manifests, to the API Server (a minimal example follows this list).
- Controller Manager: Runs the control loops that watch the cluster state through the API Server and work to bring the current state in line with the desired state (e.g., ensuring the correct number of pods are running).
- etcd: A key-value store used as the backing store for all cluster data. It’s where Kubernetes stores information about the cluster’s state.
- Scheduler: This component assigns workloads (pods) to worker nodes based on resource availability and other constraints.
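To make this division of labor concrete, here is a minimal Pod manifest; it is a rough sketch, and the name, labels, and image are placeholders. You submit it to the API Server, which validates it and records it in etcd, and the Scheduler then picks a worker node for it (the Controller Manager comes into play for higher-level objects such as the Deployment used later in this post).
```yaml
# pod.yaml -- a minimal sketch; name, labels, and image are illustrative
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx             # the API Server validates this object and stores it in etcd
      resources:
        requests:
          cpu: "100m"          # the Scheduler uses requests like these to pick a suitable node
          memory: "128Mi"
```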
2. Worker Node
Worker Nodes are where the actual containers (in the form of pods) run. Each worker node has several essential components:
- Kubelet: The agent that runs on each worker node. It communicates with the API Server and makes sure the containers defined in the pods assigned to its node are actually running.
- Kube-proxy: Maintains the network rules on each node so that traffic sent to a Service is routed to the right pods, whether they run on the same node or a different one.
- Container Runtime: The software that actually runs the containers; the kubelet talks to it through the Container Runtime Interface (CRI). Popular runtimes include containerd and CRI-O. (A small Service example tying these pieces together follows this list.)
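To see the node-side components working together, imagine putting a Service in front of the Pod from the earlier sketch. The kubelet keeps that pod’s containers running through the container runtime, and kube-proxy on every node programs the rules that forward the Service’s traffic to matching pods. The names and labels below are again placeholders.
```yaml
# service.yaml -- routes traffic to any pod labeled app: hello
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello                 # kube-proxy forwards Service traffic to pods with this label
  ports:
    - port: 80                 # port exposed by the Service
      targetPort: 80           # port the container listens on
```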
3. Cloud Controller Manager (CCM)
For cloud-based Kubernetes deployments, the Cloud Controller Manager (CCM) allows Kubernetes to interact dynamically with the underlying cloud provider (e.g., AWS, GCP, Azure). It handles tasks such as provisioning cloud resources (like load balancers) and keeps the Kubernetes cluster integrated with the cloud provider’s APIs.
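In practice, a common trigger for the CCM is a Service of type LoadBalancer: the CCM notices the Service and asks the cloud provider to provision an external load balancer for it. The sketch below reuses the illustrative Service from above with only the type changed; the exact behavior varies by provider.
```yaml
# Same illustrative Service, now exposed through a cloud load balancer (cloud clusters only)
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: LoadBalancer           # the CCM asks the cloud provider to create a load balancer
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
```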

(Diagram: Kubernetes cluster architecture)
Example: Deploying Nginx with Kubernetes
Let’s look at how a typical deployment happens in Kubernetes. Imagine you want to deploy two Nginx containers on your cluster.
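As a rough sketch, the desired state for this scenario can be written as a Deployment manifest like the one below (the name and labels are illustrative) and submitted to the API Server, for example with kubectl apply -f nginx-deployment.yaml.
```yaml
# nginx-deployment.yaml -- asks Kubernetes to keep two Nginx pods running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                  # desired state: two Nginx pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx         # the official Nginx image
          ports:
            - containerPort: 80
```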
Here’s the step-by-step process:
1. API Server: You submit your deployment configuration (the manifest above, which specifies two Nginx replicas) to the API Server.
2. Controller Manager: The Deployment controller notices the new object and, through a ReplicaSet, creates two pods for Nginx to match the desired state.
3. Scheduler: The scheduler assigns these pods to available worker nodes based on resources like CPU and memory.
4. Worker Nodes: The pods (which house the Nginx containers) are deployed on the worker nodes. The kubelet ensures that they are running, and the kube-proxy manages the network traffic.
5. CCM (if cloud-based): If you’re using a cloud provider, the CCM ensures that any cloud-specific resources, such as a load balancer for a Service of type LoadBalancer, are created.
Conclusion
Kubernetes simplifies the complex task of managing containerized applications. By automating the orchestration of containers, Kubernetes has become the backbone of modern cloud-native infrastructure. Its ability to run across different cloud environments without being tied to one makes it a powerful tool for multi-cloud strategies.
Whether you’re deploying applications on AWS, GCP, or Azure, Kubernetes ensures that your infrastructure is both scalable and flexible.
By understanding Kubernetes’ architecture and history, you can appreciate its role in transforming modern-day infrastructure management. If you’re ready to dive into Kubernetes, the first step is to experiment with a cluster and deploy your first set of containers.