Why We Need Pods
About Pods
Pods are the smallest API object in Kubernetes; in more technical terms, they are the atomic scheduling unit of Kubernetes. But why do we need Pods? To answer this question, we first need to understand the essence of a container: a container is essentially a process. Containers are the processes of a cloud computing system, and container images are the ".exe" installation packages for that system. Kubernetes, in this analogy, acts as the operating system.
Processes and Process Groups
Let's log in to a Linux machine and execute the following command:
$ pstree -g
This command displays the tree structure of currently running processes in the system. The output might look like this:
systemd(1)-+-accounts-daemon(1984)-+-{gdbus}(1984)
| `-{gmain}(1984)
|-acpid(2044)
...
|-lxcfs(1936)-+-{lxcfs}(1936)
| `-{lxcfs}(1936)
|-mdadm(2135)
|-ntpd(2358)
|-polkitd(2128)-+-{gdbus}(2128)
| `-{gmain}(2128)
|-rsyslogd(1632)-+-{in:imklog}(1632)
| |-{in:imuxsock}(1632)
| `-{rs:main Q:Reg}(1632)
|-snapd(1942)-+-{snapd}(1942)
| |-{snapd}(1942)
| |-{snapd}(1942)
| |-{snapd}(1942)
| |-{snapd}(1942)
As you can see, in a real operating system, processes do not run in isolation; they are organized into process groups. For instance, the program rsyslogd is responsible for log processing in Linux. The main program of rsyslogd, "main", and the kernel log module "imklog" that it uses both belong to process group 1632. Together, these processes fulfill the responsibilities of the rsyslogd program.

Kubernetes essentially maps this concept of "process groups" onto container technology and makes it a "first-class citizen" of this cloud computing "operating system." Kubernetes adopts this approach because Google's engineers realized that the applications they deployed often exhibited relationships much like those between processes and process groups: the applications needed to collaborate closely, which required deploying them on the same machine.

Managing such operational relationships without the concept of a "group" would be incredibly challenging. Take rsyslogd as an example. It consists of three processes: the imklog module, the imuxsock module, and the main function process of rsyslogd itself. These three processes must run on the same machine; otherwise, their socket-based communication and file exchange would break.
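The "process group" relationship is easy to observe directly on Linux. A minimal sketch (standard Linux tools assumed): a child process inherits its parent's process group ID (PGID), which is exactly what ties collaborating processes such as rsyslogd and its modules together.

```shell
# Print this shell's PGID, then start a child and print its PGID.
# Both belong to the same process group.
parent_pgid=$(ps -o pgid= -p $$)
sleep 30 &
child_pid=$!
child_pgid=$(ps -o pgid= -p "$child_pid")
echo "parent PGID: $parent_pgid, child PGID: $child_pgid"
kill "$child_pid" 2>/dev/null
```

A Pod plays the same role for containers that the PGID plays for processes: a grouping the system itself understands and schedules as a unit.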
Inter-Container Communication
As shown in the diagram above, this Pod contains two user containers, A and B, plus an Infra container. In Kubernetes, the Infra container is designed to consume as few resources as possible, and it uses a special image called "k8s.gcr.io/pause." This image is a container, written in assembly language, that stays permanently in a "paused" state; uncompressed, it is only 100–200 KB.

Once the Infra container "holds" the Network Namespace, the user containers can join it. If you examine these containers' Namespace files on the host machine (the path to this file was mentioned earlier), they will point to exactly the same value. This means that containers A and B within the Pod can communicate directly over "localhost," and they see the same network devices as the Infra container. A Pod has only one IP address, the address associated with the Pod's Network Namespace. Naturally, all other network resources are also allocated per Pod and shared by all containers within it, and this network identity's lifecycle is tied solely to the Infra container, independent of containers A and B.

Furthermore, for all user containers in the same Pod, their incoming and outgoing traffic can be considered to pass through the Infra container. This point is crucial: if you ever develop a network plugin for Kubernetes, your focus should be on configuring the Pod's Network Namespace, not on how each user container uses your network configuration. The latter is inconsequential to the plugin.

This implies that a network plugin which relies on installing packages or configuration inside the user containers is not a viable solution: the root filesystem of the Infra container image is practically empty, leaving no room for such customization.
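The "same Namespace file" claim can be checked on any Linux host. A minimal sketch (Linux assumed): every process exposes its network namespace as a symlink under /proc/<pid>/ns/net, and processes sharing a namespace, like all containers of one Pod joining the Infra container's namespace, resolve to the same inode.

```shell
# Two processes in the same network namespace resolve to the same
# net:[inode] value; containers in a Pod would show the same effect
# when compared against the Infra (pause) container.
readlink /proc/$$/ns/net     # this shell's network namespace
readlink /proc/self/ns/net   # a child process: same inode, same namespace
```

Comparing these symlinks across container PIDs on a node is a quick way to confirm which containers share a Pod's network.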
Conversely, this also means that your network plugin does not need to care about the startup status of the user containers; it only needs to configure the Pod, that is, the Network Namespace of the Infra container.

With this design, sharing Volumes also becomes much simpler. Kubernetes defines all Volume configurations at the Pod level, so a Volume's corresponding host directory belongs to the Pod, and any container within the Pod only needs to declare a mount of that directory.

This design philosophy behind Pods, fostering a "super-close relationship" among containers, aims to encourage users to ask whether an application whose single container holds multiple, functionally unrelated components might be better expressed as multiple containers within one Pod.

To grasp this mindset, try applying it to scenarios that are hard to solve with a single container. For example, imagine an application that continuously writes log files to the /var/log directory inside its container. In this case, you can mount a Volume in the Pod onto the application container's /var/log directory, and run a sidecar container in the same Pod that mounts the same Volume onto its own /var/log directory.

From there, the sidecar's sole task is to continuously read log files from its /var/log directory and forward them to a storage backend such as MongoDB or Elasticsearch. This setup establishes a basic log collection mechanism.

As in the first example, the sidecar's primary job here revolves around file operations on the shared Volume. But don't overlook the other crucial characteristic of Pods: all containers within a Pod share the same Network Namespace. This allows much of the Pod's network configuration and management to be delegated to a sidecar, entirely bypassing the need to interfere with the user containers.
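The log-collection setup described above can be sketched as a Pod manifest. The names and images below are illustrative assumptions, not from the original text; only the shared-Volume pattern itself is the point.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-demo
spec:
  volumes:
  - name: logs
    emptyDir: {}          # Pod-level Volume, shared by both containers
  containers:
  - name: app
    image: my-app:latest          # hypothetical application image
    volumeMounts:
    - name: logs
      mountPath: /var/log         # the app writes its logs here
  - name: log-sidecar
    image: my-log-shipper:1.0     # hypothetical sidecar image
    volumeMounts:
    - name: logs
      mountPath: /var/log         # the sidecar reads the same files
```

Because the Volume is declared once at the Pod level, neither container needs to know anything about the other; each simply mounts the shared directory.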
A prime example of this is the Istio service mesh project.
Summary
In this discussion, we delved into the reasons behind the need for Pods. In essence, a Pod serves as the fundamental unit within a Kubernetes cluster, encapsulating one or more containers (typically Docker containers). These containers share network and storage resources. From the perspective of processes and process groups, a Pod can be viewed as a lightweight process group. It enables the deployment, scaling, and management of multiple closely collaborating processes (containers) as a cohesive unit, simplifying the deployment and operation of complex applications. In the next article, we will provide a more in-depth explanation of Pods.
Novita AI is the all-in-one cloud platform that empowers your AI ambitions. With seamlessly integrated APIs, serverless computing, and GPU acceleration, we provide the cost-effective tools you need to rapidly build and scale your AI-driven business. Eliminate infrastructure headaches and get started for free - Novita AI makes your AI dreams a reality.