Configure a Pod to Use a PersistentVolume for Storage

    1. You, as cluster administrator, create a PersistentVolume backed by physical storage. You do not associate the volume with any Pod.

    2. You, now taking the role of a developer / cluster user, create a PersistentVolumeClaim that is automatically bound to a suitable PersistentVolume.

    3. You create a Pod that uses the above PersistentVolumeClaim for storage.

    • You need to have a Kubernetes cluster that has only one Node, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a single-node cluster, you can create one by using Minikube.

    • Familiarize yourself with the material in Persistent Volumes.

    Create an index.html file on your Node

    Open a shell to the single Node in your cluster. How you open a shell depends on how you set up your cluster. For example, if you are using Minikube, you can open a shell to your Node by entering minikube ssh.

    In your shell on that Node, create a /mnt/data directory:

    # This assumes that your Node uses "sudo" to run commands
    # as the superuser
    sudo mkdir /mnt/data

    In the /mnt/data directory, create an index.html file:

    # This again assumes that your Node uses "sudo" to run commands
    # as the superuser
    sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"

    Note: If your Node uses a tool for superuser access other than sudo, you can usually make this work if you replace sudo with the name of the other tool.

    Test that the index.html file exists:

    cat /mnt/data/index.html

    The output should be:

    Hello from Kubernetes storage

    You can now close the shell to your Node.

    Create a PersistentVolume

    In this exercise, you create a hostPath PersistentVolume. Kubernetes supports hostPath for development and testing on a single-node cluster. A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.

    In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource like a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume. Cluster administrators can also use StorageClasses to set up dynamic provisioning.
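    For illustration, a StorageClass for dynamic provisioning might look like the following sketch. This is not part of this exercise: the class name standard-rwo is a hypothetical choice, and the provisioner and parameters shown are for GCE persistent disks via the CSI driver; substitute whatever matches your cluster's storage backend.

```yaml
# Hypothetical StorageClass sketch for dynamic provisioning.
# The provisioner and parameters depend on your environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rwo
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

    A PersistentVolumeClaim that names this class in storageClassName would then have a volume provisioned for it on demand, instead of binding to a pre-created PersistentVolume.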

    Here is the configuration file for the hostPath PersistentVolume:

    pods/storage/pv-volume.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"

    Create the PersistentVolume:

    kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml

    View information about the PersistentVolume:

    kubectl get pv task-pv-volume

    The output shows that the PersistentVolume has a STATUS of Available. This means it has not yet been bound to a PersistentVolumeClaim.

    NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    task-pv-volume   10Gi       RWO            Retain           Available           manual                  4s

    The next step is to create a PersistentVolumeClaim. Pods use PersistentVolumeClaims to request physical storage. In this exercise, you create a PersistentVolumeClaim that requests a volume of at least three gibibytes that can provide read-write access for at least one Node.

    Here is the configuration file for the PersistentVolumeClaim:

    pods/storage/pv-claim.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi

    Create the PersistentVolumeClaim:

    kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml

    After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim’s requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
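    If you want a claim to bind to one specific PersistentVolume rather than relying on this matching, you can pre-bind it by naming the volume in the claim. The following sketch (a variation, not part of this exercise) uses the names from this page; the named volume must still satisfy the claim's requirements.

```yaml
# Sketch: pre-binding a claim to a specific PersistentVolume.
# Setting spec.volumeName skips the normal search and binds
# directly to the named volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  volumeName: task-pv-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```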

    Look again at the PersistentVolume:

    kubectl get pv task-pv-volume

    Now the output shows a STATUS of Bound.

    NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
    task-pv-volume   10Gi       RWO            Retain           Bound    default/task-pv-claim   manual                  2m

    Look at the PersistentVolumeClaim:

    kubectl get pvc task-pv-claim

    The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.

    NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    task-pv-claim   Bound    task-pv-volume   10Gi       RWO            manual         30s

    Create a Pod

    The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.

    Here is the configuration file for the Pod:

    pods/storage/pv-pod.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage

    Notice that the Pod’s configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume. From the Pod’s point of view, the claim is a volume.
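    As a variation (a sketch, not part of this exercise), a Pod can also ask for the claim to be mounted read-only by setting readOnly on the claim reference in its volumes list:

```yaml
# Sketch: referencing the same claim, but mounted read-only.
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
      readOnly: true
```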

    Create the Pod:

    kubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml

    Verify that the container in the Pod is running:

    kubectl get pod task-pv-pod

    Get a shell to the container running in your Pod:

    kubectl exec -it task-pv-pod -- /bin/bash

    In your shell, verify that nginx is serving the index.html file from the hostPath volume:

    # Be sure to run these 3 commands inside the root shell that comes from
    # running "kubectl exec" in the previous step
    apt update
    apt install curl
    curl http://localhost/

    The output shows the text that you wrote to the index.html file on the hostPath volume:

    Hello from Kubernetes storage

    If you see that message, you have successfully configured a Pod to use storage from a PersistentVolumeClaim.

    Clean up

    Delete the Pod, the PersistentVolumeClaim and the PersistentVolume:

    kubectl delete pod task-pv-pod
    kubectl delete pvc task-pv-claim
    kubectl delete pv task-pv-volume

    If you don’t already have a shell open to the Node in your cluster, open a new shell the same way that you did earlier.

    In the shell on your Node, remove the file and directory that you created:

    # This assumes that your Node uses "sudo" to run commands
    # as the superuser
    sudo rm /mnt/data/index.html
    sudo rmdir /mnt/data

    You can now close the shell to your Node.

    Mounting the same PersistentVolume in two places

    You can mount the same PersistentVolume at two paths in a single Pod by using subPath:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: nginx
          volumeMounts:
            # a mount for site-data
            - name: config
              mountPath: /usr/share/nginx/html
              subPath: html
            # another mount for nginx config
            - name: config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: test-nfs-claim

    You can perform two volume mounts on your nginx container:

    • /usr/share/nginx/html for the static website
    • /etc/nginx/nginx.conf for the default config
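    A common alternative (a sketch, not part of this exercise) is to supply the nginx configuration from a ConfigMap instead of a subPath on the PersistentVolume; nginx-conf here is a hypothetical ConfigMap name that you would have created with the desired nginx.conf key.

```yaml
# Sketch: sourcing nginx.conf from a hypothetical ConfigMap
# named "nginx-conf" instead of the PersistentVolume.
volumes:
  - name: config
    configMap:
      name: nginx-conf
      items:
        - key: nginx.conf
          path: nginx.conf
```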

    Access control

    Storage configured with a group ID (GID) allows writing only by Pods using the same GID. Mismatched or missing GIDs cause permission denied errors. To reduce the need for coordination with users, an administrator can annotate a PersistentVolume with a GID. Then the GID is automatically added to any Pod that uses the PersistentVolume.

    Use the pv.beta.kubernetes.io/gid annotation as follows:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv1
      annotations:
        pv.beta.kubernetes.io/gid: "1234"

    When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod’s security context are. Every GID, whether it originates from a PersistentVolume annotation or the Pod’s specification, is applied to the first process run in each container.
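    For comparison, this is a sketch of how the same GID could be specified directly in a Pod's security context as a supplemental group; the Pod and container names here are hypothetical.

```yaml
# Sketch: applying GID 1234 via the Pod's security context
# instead of a PersistentVolume annotation.
apiVersion: v1
kind: Pod
metadata:
  name: gid-demo
spec:
  securityContext:
    supplementalGroups: [1234]
  containers:
    - name: app
      image: nginx
```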

    Note: When a Pod consumes a PersistentVolume, the GIDs associated with the PersistentVolume are not present on the Pod resource itself.

    What’s next