NFS Server CRD

    The parameters used to configure the NFS CRD are demonstrated in the example below, which is followed by a table that explains the parameters in more detail.

    Below is a very simple example that shows sharing a volume (which could be hostPath, cephFS, cephRBD, googlePD, EBS, etc.) using NFS, without any per-client or per-export configuration.

    For a PersistentVolumeClaim named googlePD-claim, which has Read/Write permissions and no squashing, the NFS CRD instance would look like the following:
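    A minimal sketch of such an instance is shown below. The apiVersion, kind, and metadata values are assumptions modeled on the Rook NFS operator's NFSServer resource and may differ in your deployment; only the googlePD-claim PVC name, the Read/Write access mode, and the absence of squashing come from the description above.

    ```yaml
    apiVersion: nfs.rook.io/v1alpha1   # assumed API group/version; adjust to your operator
    kind: NFSServer                    # assumed kind name
    metadata:
      name: nfs-server                 # placeholder server name
      namespace: nfs-server            # placeholder namespace
    spec:
      replicas: 1
      exports:
      - name: nfs-share                # placeholder export name
        server:
          accessMode: ReadWrite        # Read/Write permissions for all clients
          squash: "none"               # no user id squashing
        # the volume to export, referenced through the existing PVC
        persistentVolumeClaim:
          claimName: googlePD-claim
    ```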

    *note: if the exports.server.allowedClients.accessMode and exports.server.allowedClients.squash options are specified, they override exports.server.accessMode and exports.server.squash respectively.
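    The fragment below is a minimal sketch of that override behaviour. The field layout follows the option names in the note above; the export name, client group name, client address range, and squash values are placeholders chosen only for illustration.

    ```yaml
    exports:
    - name: nfs-share
      server:
        accessMode: ReadOnly          # default for any client not matched below
        squash: "root"                # default squash for any client not matched below
        allowedClients:
        - name: rw-clients            # placeholder client group name
          clients:
          - 172.17.0.5/16             # placeholder address range
          accessMode: ReadWrite       # overrides exports.server.accessMode for this group
          squash: "none"              # overrides exports.server.squash for this group
    ```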

    The valid options for volumes.allowedClients.squash are described below:

    Option    Description
    ------    -----------
    none      No user id squashing is performed
    rootId    UID 0 and GID 0 are squashed to the anonymous uid and anonymous GID
    root      UID 0 and GID of any value are squashed to the anonymous uid and anonymous GID
    all       All users are squashed

    The volume that needs to be exported by NFS must be attached to the NFS server pod via a PVC. Examples of volumes that can be attached are Host Path, AWS Elastic Block Store, GCE Persistent Disk, CephFS, RBD, etc. The limitations of these volumes also apply while they are shared by NFS; the limitations and other details can be found in the Kubernetes documentation for each volume type.
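    As a hedged illustration, the googlePD-claim referenced earlier could be created with a PersistentVolumeClaim along these lines; the storage class name and the requested size are placeholders:

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: googlePD-claim
    spec:
      storageClassName: standard       # placeholder class backed by a GCE Persistent Disk
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi                # placeholder size
    ```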

    This example shows how to share a volume with different options for different clients accessing the share. The EBS volume (represented by a PVC) will be exported by the NFS server for client access as /nfs-share (note that this PVC must already exist); a sketch of the corresponding configuration follows the client list below.

    The following client groups are allowed to access this share:

    • group2 includes both the network range of 172.17.0.5/16 and a host named serverX. They will all be granted Read/Write permissions with no user squash.
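    A sketch of the export section for this setup is shown below. The NFSServer kind, apiVersion, and the ebs-claim PVC name are assumptions used only for illustration; the group2 client list and its Read/Write, no-squash settings come from the bullet above.

    ```yaml
    apiVersion: nfs.rook.io/v1alpha1   # assumed API group/version
    kind: NFSServer                    # assumed kind name
    metadata:
      name: nfs-server
      namespace: nfs-server
    spec:
      replicas: 1
      exports:
      - name: nfs-share                # exported for client access as /nfs-share
        server:
          allowedClients:
          - name: group2
            clients:                   # the network range and the named host
            - 172.17.0.5/16
            - serverX
            accessMode: ReadWrite      # Read/Write permissions
            squash: "none"             # no user squash
        persistentVolumeClaim:
          claimName: ebs-claim         # placeholder name for the pre-existing EBS-backed PVC
    ```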

    Multiple volumes

    This section provides an example of how to share multiple volumes from one NFS server. These volumes can be of different types (e.g., Google PD and Ceph RBD). Below we will share an Amazon EBS volume as well as a CephFS volume, using a different configuration for each:

    • The EBS volume is named share1 and is available for all clients with Read Only access and no squash.