NFS Server CRD

    The parameters used to configure the NFS CRD are demonstrated in the example below, which is followed by a table that explains each parameter in more detail.

    Below is a very simple example that shows sharing a volume (which could be hostPath, cephFS, cephRBD, googlePD, EBS, etc.) using NFS, without any client- or per-export-based configuration.
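
    The manifest below is a minimal sketch of such a definition, not a definitive reference: the nfs.rook.io/v1alpha1 apiVersion and NFSServer kind are assumed here (as used by the Rook NFS operator), and the resource name, namespace, and claim name (rook-nfs, default-claim) are placeholders to adapt to your environment.

    ```yaml
    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      # Placeholder name and namespace; adjust for your cluster.
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      exports:
      - name: share1
        server:
          # With no allowedClients listed, these settings apply to every client.
          accessMode: ReadWrite
          squash: "none"
        # The PVC backing this export; the bound volume can be hostPath,
        # CephFS, Ceph RBD, GCE PD, EBS, etc.
        persistentVolumeClaim:
          claimName: default-claim
    ```

    The PVC referenced by claimName must already exist in the same namespace; otherwise the export has no volume to share.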

    The table below explains in detail each configuration option that is available in the NFS CRD.

    *Note: if the exports.server.allowedClients.accessMode and exports.server.allowedClients.squash options are specified, they override exports.server.accessMode and exports.server.squash, respectively.

    The squash option accepts one of the following values:

    1. none (no user ID squashing is performed)
    2. rootId (UID 0 and GID 0 are squashed to the anonymous UID and anonymous GID)
    3. all (all users are squashed)
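
    To illustrate the override rule from the note above, the spec fragment below sets server-wide defaults and then overrides them for one client group. The export name, client group name, and claim name are placeholders, and the field layout is an assumption based on the parameter paths given in the note.

    ```yaml
    spec:
      replicas: 1
      exports:
      - name: share1
        server:
          # Server-level defaults, used for clients that match no allowedClients entry.
          accessMode: ReadOnly
          squash: all
          allowedClients:
          - name: trusted
            clients: 172.17.0.5
            # For this client, the values below override the server-level
            # accessMode and squash set above.
            accessMode: ReadWrite
            squash: "none"
        persistentVolumeClaim:
          claimName: default-claim
    ```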

    The volume that needs to be exported by NFS must be attached to the NFS server pod via a PVC. Examples of volumes that can be attached are Host Path, AWS Elastic Block Store, GCE Persistent Disk, CephFS, RBD, etc. The limitations of these volumes also apply while they are shared over NFS. The limitations and other details about these volumes can be found in the Kubernetes documentation on volumes.
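
    For instance, a claim along the lines of the sketch below (the name, namespace, and size are placeholders) could be created first and then referenced from the export's persistentVolumeClaim.claimName field:

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      # Placeholder name, referenced from the export's persistentVolumeClaim.claimName.
      name: default-claim
      namespace: rook-nfs
    spec:
      # ReadWriteOnce is sufficient here because only the NFS server pod mounts the
      # volume directly; NFS clients reach the data through the server, not the PVC.
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ```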

    This section contains some examples for more advanced scenarios and configuration options.

    For example, one export can be configured with different options for different groups of clients (a configuration sketch follows this list):

    • group1 with IP address 172.17.0.5 will be given Read Only access with the root user squashed.
    • group2 includes both the network range of 172.17.0.5/16 and a host named serverX. They will all be granted Read/Write permissions with no user squash.
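
    A sketch of such an export is shown below. The field layout and the accessMode/squash spellings are assumptions based on the parameter paths mentioned earlier, and the metadata and claim names are placeholders; check them against the CRD version you are running.

    ```yaml
    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      exports:
      - name: share1
        server:
          allowedClients:
          - name: group1
            # Single host, read-only, root user squashed to the anonymous user.
            clients: 172.17.0.5
            accessMode: ReadOnly
            squash: rootId
          - name: group2
            # A network range plus a named host, read/write, no squashing.
            clients:
            - 172.17.0.5/16
            - serverX
            accessMode: ReadWrite
            squash: "none"
        persistentVolumeClaim:
          claimName: advanced-claim
    ```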

    Multiple volumes

    This section provides an example of how to share multiple volumes from one NFS server. These volumes can all be different types (e.g., Google PD and Ceph RBD). Below we will share an Amazon EBS volume as well as a CephFS volume, using differing configuration for the two:

    • The CephFS volume is named share2 and is available for all clients with Read/Write access and no squash.
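
    One possible manifest is sketched below. The share2 export follows the description above; the settings shown for the EBS-backed share1 export, along with all metadata and claim names, are illustrative assumptions.

    ```yaml
    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      exports:
      - name: share1
        server:
          # Settings for the EBS-backed export; these values are placeholders.
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          # PVC bound to an Amazon EBS volume.
          claimName: ebs-claim
      - name: share2
        server:
          # Available to all clients with Read/Write access and no squash,
          # as described in the bullet above.
          accessMode: ReadWrite
          squash: "none"
        persistentVolumeClaim:
          # PVC bound to a CephFS volume.
          claimName: cephfs-claim
    ```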