- RNREDDY

- Sep 9

How Kubernetes Container Runtime Works
When your Pod starts in Kubernetes, the container runtime is one of the first components to take action.
What Is a Container Runtime?
A container runtime is the system-level component responsible for:
Pulling container images from a registry
Unpacking and mounting the image layers
Creating isolated environments using Linux namespaces and cgroups
Executing the container process using a low-level runtime like runc
Reporting back container state to the kubelet for health and lifecycle tracking
It runs on every Kubernetes worker node, abstracted behind the Container Runtime Interface (CRI), and acts as the bridge between the kubelet and the host operating system.
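The responsibilities above can be sketched as a pipeline. The sketch below is purely illustrative: `ToyRuntime`, its method names, and the fake layer digests are invented for this post and are not a real runtime API.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    image: str
    state: str = "created"   # created -> running -> exited
    namespaces: list = field(default_factory=lambda: ["pid", "net", "mnt", "uts", "ipc"])

class ToyRuntime:
    """Illustrative stand-in for the steps a real runtime (containerd, CRI-O) performs."""
    def __init__(self):
        self.image_store = {}   # digest-addressed layer cache
        self.containers = {}

    def pull(self, image):
        # 1. Pull image layers from a registry (stubbed)
        self.image_store[image] = ["sha256:aaa", "sha256:bbb"]

    def create(self, cid, image):
        # 2-3. Unpack layers, set up namespaces/cgroups (stubbed)
        if image not in self.image_store:
            self.pull(image)
        self.containers[cid] = Container(image=image)

    def start(self, cid):
        # 4. Exec the container process via a low-level runtime like runc (stubbed)
        self.containers[cid].state = "running"

    def status(self, cid):
        # 5. Report state back to the kubelet
        return self.containers[cid].state

rt = ToyRuntime()
rt.create("web-1", "nginx:1.25")
rt.start("web-1")
print(rt.status("web-1"))  # running
```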

Popular Container Runtime Types
Kubernetes supports several CRI-compatible runtimes. Each has specific use cases and trade-offs:
containerd (Default in GKE, EKS, k3s. Handles image pulls, snapshots, and lifecycle. Lightweight and production-ready)
CRI-O (Kubernetes-focused, minimal runtime. Used in OpenShift and upstream clusters for strict CRI compliance)
Docker Engine (Removed as a supported runtime in v1.24 with the dockershim removal. Still used for image builds, but not for running containers in Kubernetes)
gVisor (Sandbox runtime offering user-space isolation. Ideal for multi-tenant or untrusted workloads)
Kata Containers (Uses lightweight VMs for stronger isolation. Suitable for high-security environments)
Mirantis CR (Enterprise-grade fork of Docker Engine with commercial support)
Note: Kubernetes did not “drop Docker”; it dropped Docker Engine as a supported container runtime.
You can still use Docker to build images, but use containerd or CRI-O to run them inside Kubernetes.
Let’s explore the two most popular runtimes used across managed Kubernetes environments and self-hosted clusters.
1. Kubernetes Containerd Runtime Flow

Containerd is the default container runtime in many Kubernetes distributions including GKE, EKS, and k3s. It is CRI-compliant, lightweight, and purpose-built to manage images, snapshots, and container lifecycles efficiently.
Unlike Docker, containerd is designed for Kubernetes integration without including any user-facing tooling.
Step-by-step breakdown:
1. Kubelet receives pod spec
The scheduler assigns a pod to a node. The kubelet on that node parses the pod definition and determines the required container images, volume mounts, and network configuration.
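A minimal sketch of this parsing step, using an abbreviated Pod manifest as a plain dict (the manifest contents are invented for illustration):

```python
# Pull the container images and volume mounts out of a pod spec, the way
# the kubelet must before it can talk to the runtime.
pod_spec = {  # abbreviated, hypothetical Pod manifest
    "spec": {
        "containers": [
            {"name": "app", "image": "nginx:1.25",
             "volumeMounts": [{"name": "data", "mountPath": "/usr/share/nginx/html"}]},
            {"name": "sidecar", "image": "busybox:1.36", "volumeMounts": []},
        ],
        "volumes": [{"name": "data", "emptyDir": {}}],
    }
}

images = [c["image"] for c in pod_spec["spec"]["containers"]]
mounts = {c["name"]: [m["mountPath"] for m in c.get("volumeMounts", [])]
          for c in pod_spec["spec"]["containers"]}

print(images)          # ['nginx:1.25', 'busybox:1.36']
print(mounts["app"])   # ['/usr/share/nginx/html']
```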
2. Kubelet calls CRI plugin inside containerd
The kubelet interacts with containerd using the CRI gRPC API. This interaction is split into two parts:
ImageService handles pulling, storing, and listing images.
RuntimeService handles creating, starting, stopping, and removing containers.
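The split can be sketched as two interfaces. The method names (`PullImage`, `CreateContainer`, `StartContainer`) mirror RPCs in the CRI API, but the in-memory implementation below is a toy stand-in, not real gRPC code:

```python
from abc import ABC, abstractmethod

class ImageService(ABC):
    @abstractmethod
    def PullImage(self, image_ref: str) -> str: ...      # returns image digest

class RuntimeService(ABC):
    @abstractmethod
    def CreateContainer(self, sandbox_id: str, config: dict) -> str: ...
    @abstractmethod
    def StartContainer(self, container_id: str) -> None: ...

class FakeRuntime(ImageService, RuntimeService):
    """In-memory stand-in for a CRI runtime, for illustration only."""
    def __init__(self):
        self.images, self.containers = {}, {}

    def PullImage(self, image_ref):
        digest = f"sha256:fake-{image_ref}"
        self.images[image_ref] = digest
        return digest

    def CreateContainer(self, sandbox_id, config):
        cid = f"{sandbox_id}-{config['image']}"
        self.containers[cid] = "created"
        return cid

    def StartContainer(self, container_id):
        self.containers[container_id] = "running"

rt = FakeRuntime()
rt.PullImage("nginx:1.25")
cid = rt.CreateContainer("sandbox-1", {"image": "nginx:1.25"})
rt.StartContainer(cid)
print(cid, rt.containers[cid])  # sandbox-1-nginx:1.25 running
```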
3. Image pull from registry
If the required image is not available locally, containerd pulls it from the specified registry. It uses content-addressable storage to store layers efficiently.
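Content-addressable storage can be sketched in a few lines: every blob is keyed by the sha256 digest of its bytes, so a layer shared by two images is stored exactly once. The store below is a plain dict, not containerd's actual layout:

```python
import hashlib

store = {}  # digest -> blob bytes

def put_blob(data: bytes) -> str:
    digest = "sha256:" + hashlib.sha256(data).hexdigest()
    store[digest] = data   # writing identical content twice is a no-op
    return digest

base_layer = b"shared base rootfs bytes"
d1 = put_blob(base_layer)  # pulled as part of image A
d2 = put_blob(base_layer)  # pulled again as part of image B
print(d1 == d2, len(store))  # True 1 -> the layer is deduplicated
```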
4. Snapshot creation and mount
containerd uses a snapshotter (usually overlayfs) to prepare the container’s root filesystem based on the pulled image.
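The overlayfs idea is simple to sketch: read-only image layers become `lowerdir` entries and the container's writable layer becomes `upperdir`. The paths below are illustrative, not containerd's real snapshot layout:

```python
def overlay_mount_options(image_layers, upper, work):
    # overlayfs lists lowerdir entries topmost-first, so reverse the
    # base-first layer order before joining with ':'
    lower = ":".join(reversed(image_layers))
    return f"lowerdir={lower},upperdir={upper},workdir={work}"

layers = ["/snapshots/1/fs", "/snapshots/2/fs"]  # base layer first
opts = overlay_mount_options(layers, "/snapshots/42/fs", "/snapshots/42/work")
print(opts)
# lowerdir=/snapshots/2/fs:/snapshots/1/fs,upperdir=/snapshots/42/fs,workdir=/snapshots/42/work
```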
5. Container creation request
The kubelet sends a CreateContainer request through CRI. containerd creates the container environment, prepares namespaces and mounts, and sets up volume paths.
6. Container start
containerd starts the container process in an isolated environment. It then notifies the kubelet that the container is running.
Operational Notes
All interactions happen within the k8s.io namespace inside containerd.
containerd logs are accessible via journalctl -u containerd or /var/log/containerd.log.
For image and container inspection, use tools like crictl, which speaks directly to containerd's CRI socket.
containerd integrates seamlessly with Kubernetes features like liveness probes, readiness checks, and logging.
2. Kubernetes CRI-O Runtime Flow
CRI-O is a lightweight and Kubernetes-native container runtime that was built specifically to implement the CRI specification. Unlike containerd, which supports a broader container ecosystem, CRI-O is focused solely on serving Kubernetes workloads.
It is the default runtime for distributions like OpenShift and is commonly used in upstream Kubernetes clusters that prioritize strict compliance and minimalism.

1. Kubelet kicks off the flow
The kubelet sends gRPC requests over the Container Runtime Interface. It does not care which runtime sits behind the interface; it simply expects results: image pulled, container started, status updated.
CRI-O listens on the CRI socket and starts the process.
2. Image is resolved and pulled
CRI-O uses the containers/image library to:
Fetch image layers from registries
Handle signature checks (if configured)
Cache images locally using content digests
This part is modular. Whether the image is in Docker Hub, Quay, or a private registry, it goes through the same library.
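A sketch of that reference resolution: the same parsing applies whichever registry the image lives on. The defaults below follow common convention (docker.io registry, latest tag) and are not CRI-O's actual code:

```python
def parse_image_ref(ref: str):
    # Split off the registry host if the first path component looks like one
    first, _, rest = ref.partition("/")
    if rest and ("." in first or ":" in first or first == "localhost"):
        registry, remainder = first, rest
    else:
        registry, remainder = "docker.io", ref
    # Split off a tag or digest
    if "@" in remainder:
        repo, _, tag = remainder.partition("@")
    else:
        repo, _, tag = remainder.rpartition(":")
        if not repo:                      # no tag given -> default to latest
            repo, tag = remainder, "latest"
    return registry, repo, tag

print(parse_image_ref("nginx"))                      # ('docker.io', 'nginx', 'latest')
print(parse_image_ref("quay.io/crio/busybox:1.36"))  # ('quay.io', 'crio/busybox', '1.36')
```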
3. Storage is mounted
Once the image is ready, CRI-O uses the containers/storage library to:
Set up the writable container layer
Mount the image snapshot to a container-specific path
Handle volume bindings based on the pod spec
This is where overlayfs (or another supported storage driver) comes into play.
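The volume-binding step can be sketched as translating pod-spec volume mounts into host bind mounts. The host path pattern below is illustrative; in a real cluster these paths come from the kubelet's volume manager:

```python
def bind_mounts(pod_uid, volume_mounts):
    """Map pod-spec volumeMounts to OCI-style bind-mount entries (toy sketch)."""
    binds = []
    for vm in volume_mounts:
        host = f"/var/lib/kubelet/pods/{pod_uid}/volumes/{vm['name']}"
        binds.append({
            "source": host,
            "destination": vm["mountPath"],
            "options": ["rbind", "ro" if vm.get("readOnly") else "rw"],
        })
    return binds

mounts = bind_mounts("8f2a", [
    {"name": "data", "mountPath": "/data"},
    {"name": "cfg", "mountPath": "/etc/app", "readOnly": True},
])
print(mounts[1]["options"])  # ['rbind', 'ro']
```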
4. runc takes over
CRI-O then prepares the runtime configuration in OCI format. This includes:
Process command
Environment variables
Mounts and namespaces
Security settings like seccomp or SELinux
It passes this spec to runc, which uses the Linux kernel to create the container.
Once the process starts, CRI-O watches it and reports back to the kubelet.
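The shape of that OCI spec can be sketched as plain JSON. Only a handful of the real config.json fields are shown, and all values (command, paths, seccomp action) are illustrative:

```python
import json

# Toy sketch of the OCI runtime spec a CRI runtime hands to runc.
oci_config = {
    "ociVersion": "1.0.2",
    "process": {
        "args": ["/usr/sbin/nginx", "-g", "daemon off;"],   # process command
        "env": ["PATH=/usr/sbin:/usr/bin:/sbin:/bin"],      # environment variables
        "cwd": "/",
    },
    "root": {"path": "rootfs", "readonly": False},
    "mounts": [
        {"destination": "/proc", "type": "proc", "source": "proc"},
    ],
    "linux": {
        # namespaces runc will create for the container
        "namespaces": [{"type": t} for t in ("pid", "network", "mount", "ipc", "uts")],
        # security settings such as a seccomp profile
        "seccomp": {"defaultAction": "SCMP_ACT_ERRNO"},
    },
}

print(json.dumps(oci_config["process"]["args"]))
```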
Operational Notes
To inspect containers, use crictl or query CRI-O's runtime socket directly.
You can customize default runtime behavior by editing the CRI-O config file, typically located at /etc/crio/crio.conf.
SELinux and AppArmor are integrated with CRI-O and can be enforced at the pod level using annotations.
