How to map an NFS drive as a Persistent Volume in Kubernetes


Kubernetes allows mounting an NFS (Network File System) share in containers as a persistent volume (PV). This is useful when you need to preserve the contents of a volume across multiple mounts and container restarts or re-creations. An NFS drive can also be used to share data among containers, since it can be mounted by multiple readers and writers simultaneously.

This tutorial assumes that you already have a working Kubernetes cluster and NFS mount setup.

1. Create an NFS share on the storage server. If you don’t know how to do this, there are several resources on the internet, and most NAS systems provide a GUI for creating NFS shares. Once the share is created, note down the server IP/hostname and the mount path of the NFS share.
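
For reference, a minimal export entry for a Linux NFS server might look like this (the path and subnet below are example values; adjust them to your environment):

# /etc/exports — export the share to the local subnet (example values)
/volume/nas-drive 10.0.21.0/24(rw,sync,no_subtree_check)

Then re-export the file systems:

sudo exportfs -ra # re-read /etc/exports and apply the new export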

2. Make sure that all your Kubernetes nodes support NFS. If using Ubuntu, you may need to install the ‘nfs-common’ package on all the nodes:

sudo apt-get install nfs-common
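
You can verify that a node can see the share by querying the server’s export list (using the example server IP from the manifest below):

showmount -e 10.0.21.3 # lists the exports offered by the NFS server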

3. Create a manifest for the Persistent Volume using the NFS server IP/hostname and mount path. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided and consumed:


# nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-nas
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.21.3 # replace with your nfs server IP or hostname
    path: "/volume/nas-drive" # replace with your mount path

4. Create the Persistent Volume using kubectl:

kubectl create -f nfs-pv.yaml
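
You can confirm that the PV exists and has not yet been claimed:

kubectl get pv nfs-nas # STATUS shows Available until a claim binds to it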

5. Create a manifest for the Persistent Volume Claim. A PersistentVolumeClaim (PVC) is simply a request for storage by a user. A user can use a PVC to request storage that meets specific requirements, including size, access mode, and type (e.g., SSD), without caring about how those requirements are met. The request is satisfied once a PersistentVolume or StorageClass that meets the requirements is found in the cluster. Since we already created a PV that meets this PVC’s requirements, the claim is bound immediately:


# nfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: "" # an empty string skips dynamic provisioning so the claim binds to our pre-created PV

6. Create the PVC using kubectl:

kubectl create -f nfs-pvc.yaml
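
Verify that the claim found our PV:

kubectl get pvc nfs-pvc # STATUS shows Bound and VOLUME shows nfs-nas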

7. Use the PVC in containers. This example uses a ReplicaSet to create 3 pods:


# my-app-replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: ubuntu:xenial # change to your image
        volumeMounts:
        - name: nfs-share
          mountPath: /temp/nfs-share # the path to mount the NFS share in the container
        command: [ "/bin/bash", "-c", "--"]
        args: [ "while true; do sleep 30; done;"]
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
        - name: nfs-share 
          persistentVolumeClaim:
            claimName: nfs-pvc # the name of the PVC we created in steps 5 and 6

8. Create the pods:

kubectl create -f my-app-replicaset.yaml
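
Check that all three replicas come up:

kubectl get pods -l app=my-app # all three pods should reach Running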

Note that the same method applies whether you are deploying a bare pod or using a ReplicationController, ReplicaSet, or Deployment.

Once the pods are in the Running state, you can verify that the volume is properly mounted by creating a file in it from one pod and checking that a second pod can see it:


kubectl exec <pod1-name> -- touch /temp/nfs-share/test.txt
kubectl exec <pod2-name> -- ls /temp/nfs-share/ 
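
Because the data lives on the NFS server rather than inside the container, it also survives pod re-creation (the pod names below are placeholders):

kubectl delete pod <pod1-name> # the ReplicaSet immediately creates a replacement
kubectl exec <replacement-pod-name> -- ls /temp/nfs-share/ # test.txt is still there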

