Kubernetes has made handling persistent storage remarkably flexible thanks to the broad support it offers for Container Storage Interface (CSI) drivers. In this post, I want to share how I set up the NFS CSI driver on my own homelab cluster, which relies on an NFS server packing a roomy 12TB of storage. If you are interested in getting your persistent volumes reliably served from an NFS backend, read on. I will use Terraform throughout because I believe infrastructure as code is the surest path to repeatability, transparency, and peace of mind.
Why NFS CSI?
With almost any storage solution now available for Kubernetes through CSI, it can be hard to pick the right one. For my own needs, NFS hits the sweet spot between compatibility, simplicity, and cost. The list of CSI drivers is quite extensive, so it’s worth checking whether your favorite backend is supported.
Installing the NFS CSI Driver With Helm and Terraform
First things first, let’s get the NFS CSI driver installed. It’s available as a Helm chart, which means you can automate its deployment easily. Below is the Terraform snippet I use:
resource "helm_release" "nfs-csi" {
name = "nfs-csi"
chart = "csi-driver-nfs"
repository = "https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts"
namespace = "kube-system"
}
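As written, this tracks whatever the latest chart version happens to be. If you want fully reproducible applies, the helm_release resource also accepts a version argument; the value below is only an example, so pin a release you have actually tested:

  # Optional, inside the helm_release block above: pin the chart
  # version for reproducible applies (example value, not a recommendation)
  version = "v4.7.0"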
Pretty straightforward. However, if you are running Kubernetes through MicroK8s, there’s a special step. MicroK8s uses a different kubelet directory than other Kubernetes distributions. To accommodate that, you’ll want to set the kubeletDir value:
  set {
    name  = "kubeletDir"
    value = "/var/snap/microk8s/common/var/lib/kubelet"
  }
This little detail can save you hours of troubleshooting if you’re on MicroK8s, so don’t overlook it.
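For reference, here’s the whole release with that value folded in, the way it looks in my MicroK8s setup:

resource "helm_release" "nfs-csi" {
  name       = "nfs-csi"
  chart      = "csi-driver-nfs"
  repository = "https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts"
  namespace  = "kube-system"

  # MicroK8s keeps kubelet state under /var/snap rather than /var/lib/kubelet
  set {
    name  = "kubeletDir"
    value = "/var/snap/microk8s/common/var/lib/kubelet"
  }
}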
Defining a Storage Class for NFS
The next step is to create a StorageClass in Kubernetes pointing to your NFS server. Again, Terraform makes this both easy and understandable:
resource "kubernetes_storage_class" "nfs" {
metadata {
name = "nfs-csi"
annotations = {
# Set as default if that's what you want
"storageclass.kubernetes.io/is-default-class" = "true"
}
}
storage_provisioner = "nfs.csi.k8s.io"
parameters = {
server = "nfs.example.com"
share = "/path/to/root/of/nfs/share"
}
depends_on = [helm_release.nfs-csi]
}
Please be sure to update server and share to reflect your own NFS server’s hostname or IP and the root directory you intend to use for persistent volumes.
Checking Your Setup
Once the CSI driver is installed and the StorageClass is created, you should see nfs-csi among your available storage classes:
> kubectl get storageclasses.storage.k8s.io
NAME                PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi (default)   nfs.csi.k8s.io   Delete          Immediate           true                   620d
When you provision a PersistentVolume (PV), you will also see the correct storage class being set:
> kubectl get pv
NAME                                       CAPACITY     ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-5c5f4815-4c69-492b-849d-45b8c5aa4060   97656250Ki   RWX            Delete           Bound    main/zabbix-db   nfs-csi                 620d
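If you want to exercise the whole path yourself, a minimal test claim does the trick. Here’s a sketch in the same Terraform style; the claim name and namespace are placeholders of my own choosing:

resource "kubernetes_persistent_volume_claim" "nfs_test" {
  metadata {
    name      = "nfs-test-claim" # placeholder name
    namespace = "default"        # placeholder namespace
  }

  spec {
    # NFS supports ReadWriteMany, so multiple pods can share the volume
    access_modes       = ["ReadWriteMany"]
    storage_class_name = kubernetes_storage_class.nfs.metadata[0].name

    resources {
      requests = {
        # With this driver the request is mostly informational; see below
        storage = "1Gi"
      }
    }
  }
}

After a terraform apply, the claim should show up as Bound in kubectl get pvc, and a matching pvc-<uuid> directory appears on the share.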
Important Considerations With NFS
There are a couple of key things you should know about using NFS for your persistent storage in Kubernetes:
- Resource requests are ignored: The storage request in your PersistentVolumeClaims (or your StatefulSets’ volumeClaimTemplates) does not limit the available size in any real way. Every PV essentially has access to the full free space of the underlying NFS share.
- How files are stored: Each PV is mapped to its own directory on the NFS server. For example, for a PV named pvc-5c5f4815-4c69-492b-849d-45b8c5aa4060, all files for that PV will appear in /path/to/root/of/nfs/share/pvc-5c5f4815-4c69-492b-849d-45b8c5aa4060 on your server.
Wrapping Up
Leveraging the NFS CSI driver on Kubernetes is a powerful, flexible approach for persistent storage, especially in home labs or environments where you control your own NFS infrastructure. Using Terraform and Helm to manage this setup not only saves time but also gives you confidence that your cluster’s storage configuration is always documented and reproducible.