Kubernetes 101 – Part 1: CRI, CNI, and CSI
Building a Strong Foundation for Kasten in Your Homelab
I’ve been asked a few times recently how I learned to set up Kubernetes — what resources I used, how I built my homelab, and how I got comfortable enough to start experimenting with tools like Kasten K10 for data protection.
So, with that in mind, I thought I’d write a blog series to walk you through the journey of setting up Kubernetes in your lab: why certain decisions matter, how to make those choices, and ultimately, how it all ties into protecting your workloads with Kasten (because, let’s be honest, I’ll use any opportunity to talk about it 😄).
In this first part, we’ll talk about the three foundational interfaces that make Kubernetes tick:
- CRI – Container Runtime Interface
- CNI – Container Network Interface
- CSI – Container Storage Interface
These aren’t just acronyms to memorize; they define how your containers run, communicate, and store data. Understanding them will make your Kubernetes setup more reliable and Kasten-ready.
In my lab, I use:
- CRI-O for the container runtime
- Calico for networking
- Longhorn for storage
Let’s break down why.
CRI – Container Runtime Interface
At the heart of Kubernetes, there’s something that actually runs your containers. That’s the container runtime, and the CRI (Container Runtime Interface) is how Kubernetes talks to it.
If we relate this back to traditional virtualization, think of the container runtime as your hypervisor (ESXi) and Kubernetes as the orchestrator (vCenter). It’s a common misconception that Kubernetes itself runs your containers; in reality, the runtime does.
Unlike traditional virtualization, the runtime behind the CRI can be swapped out. Imagine keeping vCenter as your management plane but replacing ESXi with Hyper-V underneath, without vCenter caring; that’s effectively what the CRI gives you. Kubernetes orchestrates containers on top of whatever runtime is configured.
Think of the CRI as a translator — the kubelet doesn’t care whether you’re using Docker, containerd, or CRI-O, as long as the runtime understands its instructions.
Common CRI options:
- containerd – lightweight and widely used (default in many distros)
- CRI-O – Kubernetes-native, Red Hat–backed, used in OpenShift
- Docker – previously common; the dockershim integration was removed in Kubernetes 1.24, so Docker Engine isn’t used directly as the runtime in modern clusters (containerd, which Docker itself builds on, fills that role)
Why I use CRI-O:
In my lab setup, I’ve leaned into CRI-O because it’s designed from the ground up for Kubernetes. It’s lightweight, secure, and integrates cleanly with SELinux and systemd — especially handy if you’re using Fedora- or CentOS-based environments.
It also matches what’s used in OpenShift, which makes it a great learning crossover if you ever work with enterprise clusters.
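To make the “translator” idea concrete, here’s roughly how a node gets pointed at a runtime. The kubelet (and the crictl debugging tool) only need the runtime’s CRI socket; the paths below are the common defaults for CRI-O and containerd, but check how your distro packages them:

```yaml
# /etc/crictl.yaml – tells crictl which CRI endpoint to talk to.
# CRI-O's default socket:
runtime-endpoint: unix:///var/run/crio/crio.sock
# For containerd you would use instead:
# runtime-endpoint: unix:///run/containerd/containerd.sock
```

If you bootstrap with kubeadm, the same socket can be passed via `--cri-socket` so the kubelet knows which runtime to use.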
CNI – Container Network Interface
Once your containers are running, they need to talk to each other across nodes, namespaces, and services. That’s where the Container Network Interface (CNI) comes in.
The CNI defines how pods get their IP addresses, how traffic moves inside your cluster, and how policies control that traffic.
Common CNI options:
- Calico – powerful, policy-driven networking (my pick)
- Flannel – simple, good for small or single-node clusters
- Cilium – eBPF-powered networking with advanced visibility
- Weave Net – easy to set up, decent for dev environments
Why I use Calico:
I like Calico because it’s the sweet spot between simplicity and control. It’s easy to deploy in a homelab but still supports advanced features like network policies and fine-grained traffic control.
Calico also gives you great visibility into pod communication, which is invaluable when debugging why something can’t reach your Kasten service or when enforcing isolation between namespaces.
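Network policies are where Calico earns its keep, so here’s a small sketch of what one looks like. This is a standard Kubernetes NetworkPolicy (Calico is what actually enforces it); the namespace name is illustrative:

```yaml
# Only allow pods in the same namespace to reach pods in this namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: demo            # illustrative namespace
spec:
  podSelector: {}            # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}    # traffic allowed only from pods in this namespace
```

Without a CNI that enforces policies, objects like this are silently ignored, which is one of the main reasons to pick Calico over a simpler plugin.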
CSI – Container Storage Interface
Now for one of the most important (and fun) pieces: storage.
The Container Storage Interface (CSI) defines how Kubernetes interacts with different storage backends. Whether you’re using local disks, NFS shares, or cloud block storage, CSI is what allows your pods to request and use Persistent Volumes (PVs).
Common CSI options:
- Longhorn – lightweight, distributed storage system (my choice)
- Rook/Ceph – enterprise-grade distributed storage
- OpenEBS – great for cloud-native and local storage experiments
- local-path-provisioner – simple and lightweight, good for testing
Why I use Longhorn:
Longhorn is an excellent fit for homelabs: it’s easy to deploy and works well on small clusters.
It’s software-defined storage (think vSAN): add a disk to your worker nodes, attach it to Longhorn, and you’re off. It also provides fault tolerance by keeping replicas of each volume on other nodes, so if a worker node fails, your workloads can be rescheduled elsewhere and reattach their data.
It also provides a web UI for those less comfortable with the CLI — you can click around and see what’s happening in the storage layer, which I found really helpful.
For Kasten specifically, this makes life much easier. Longhorn integrates natively with CSI, meaning Kasten can take real volume snapshots, perform application-consistent backups, and restore workloads seamlessly.
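Once Longhorn is installed, your pods consume it like any other CSI backend: you request a PersistentVolumeClaim against its StorageClass. A minimal sketch (the `longhorn` class name is what the default Longhorn install creates, and the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # StorageClass created by the Longhorn install
  resources:
    requests:
      storage: 5Gi
```

The CSI driver handles provisioning behind the scenes; your pod just mounts the claim.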
Why CSI matters for Kasten:
Kasten K10 hooks directly into your CSI driver to handle volume snapshots and backups. The better your CSI implementation, the more reliable your data protection will be.
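In practice, “hooking into the CSI driver” means giving the cluster a VolumeSnapshotClass that K10 is allowed to use. A sketch for Longhorn, assuming the snapshot CRDs are installed (the class name is illustrative; the `k10.kasten.io/is-snapshot-class` annotation is how K10 identifies which class to use, per Kasten’s documentation):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: longhorn-snapshot-class              # illustrative name
  annotations:
    k10.kasten.io/is-snapshot-class: "true"  # tells K10 to use this class
driver: driver.longhorn.io                   # Longhorn's CSI driver
deletionPolicy: Delete
```

With this in place, K10 can take crash-consistent (and, with hooks, application-consistent) snapshots through the same CSI path your pods already use.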
How It All Fits Together
If you visualize your Kubernetes stack, it looks something like this:
+-------------------------------------------------+
|              Kasten K10 (Data Mgmt)             |
+-------------------------------------------------+
|     CSI (Storage)      |    CNI (Networking)    |
|                CRI (Containers)                 |
+-------------------------------------------------+
|            Kubernetes Control Plane             |
+-------------------------------------------------+
Each interface plays a key role:
- CRI makes your containers run.
- CNI lets them talk.
- CSI gives them a place to store data.
Together, they form the foundation that Kasten builds upon. If these layers are stable and well configured, Kasten can do its job — protecting your applications and data — without drama.
What’s Next
In Part 2, we’ll get hands-on and actually build your first Kubernetes cluster using:
- CRI-O as the container runtime
- Calico for networking
- Longhorn for storage
Hopefully this lays the groundwork for understanding which components a Kubernetes cluster needs and why. In the next part of the series I’ll deploy this stack in my lab and share every command I use to bring the cluster to a known-good state, so please stay tuned for Part 2.
Thanks for reading — I hope you found this guide helpful. As always, stay curious and keep learning!