Device Plugins

    FEATURE STATE: Kubernetes v1.10 [beta]

    Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the kubelet.

    Instead of customizing the code for Kubernetes itself, vendors can implement a device plugin that you deploy either manually or as a DaemonSet. The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adapters, and other similar computing resources that may require vendor specific initialization and setup.

    The kubelet exports a Registration gRPC service:
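
    service Registration {
        rpc Register(RegisterRequest) returns (Empty) {}
    }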

    A device plugin can register itself with the kubelet through this gRPC service. During the registration, the device plugin needs to send:

    • The name of its Unix socket.
    • The Device Plugin API version against which it was built.
    • The ResourceName it wants to advertise. Here ResourceName needs to follow the extended resource naming scheme as vendor-domain/resourcetype. (For example, an NVIDIA GPU is advertised as nvidia.com/gpu.)
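
    These items map onto the fields of the RegisterRequest message in the v1beta1 device plugin API:

    message RegisterRequest {
        // Version of the API the device plugin was built against.
        string version = 1;
        // Name of the plugin's Unix socket, relative to /var/lib/kubelet/device-plugins/.
        string endpoint = 2;
        // Schedulable resource name, e.g. nvidia.com/gpu.
        string resource_name = 3;
        // Options to be communicated with the Device Manager.
        DevicePluginOptions options = 4;
    }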

    Following a successful registration, the device plugin sends the kubelet the list of devices it manages, and the kubelet is then in charge of advertising those resources to the API server as part of the kubelet node status update. For example, after a device plugin registers hardware-vendor.example/foo with the kubelet and reports two healthy devices on a node, the node status is updated to advertise that the node has 2 “Foo” devices installed and available.
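
    For illustration, here is a trimmed sketch of what that node's status could then report, assuming none of the devices are in use yet (a real node status contains many more fields):

    status:
      capacity:
        hardware-vendor.example/foo: "2"
      allocatable:
        hardware-vendor.example/foo: "2"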

    Then, users can request devices in a Container specification as they request other types of resources, with the following limitations:

    • Extended resources are only supported as integer resources and cannot be overcommitted.
    • Devices cannot be shared among Containers.

    Suppose a Kubernetes cluster is running a device plugin that advertises resource hardware-vendor.example/foo on certain nodes. Here is an example of a pod requesting this resource to run a demo workload:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      containers:
        - name: demo-container-1
          image: k8s.gcr.io/pause:2.0
          resources:
            limits:
              hardware-vendor.example/foo: 2
    #
    # This Pod needs 2 of the hardware-vendor.example/foo devices
    # and can only schedule onto a Node that's able to satisfy
    # that need.
    #
    # If the Node has more than 2 of those devices available, the
    # remainder would be available for other Pods to use.

    Device plugin implementation

    The general workflow of a device plugin includes the following steps:

    • Initialization. During this phase, the device plugin performs vendor specific initialization and setup to make sure the devices are in a ready state.

    • The plugin starts a gRPC service, with a Unix socket under host path /var/lib/kubelet/device-plugins/, that implements the following interfaces:
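
      service DevicePlugin {
          // GetDevicePluginOptions returns options to be communicated with Device Manager.
          rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}

          // ListAndWatch returns a stream of List of Devices.
          // Whenever a Device state change or a Device disappears, ListAndWatch
          // returns the new list.
          rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}

          // Allocate is called during container creation so that the Device
          // Plugin can run device specific operations and instruct Kubelet
          // of the steps to make the Device available in the container.
          rpc Allocate(AllocateRequest) returns (AllocateResponse) {}

          // GetPreferredAllocation returns a preferred set of devices to allocate
          // from a list of available ones. The resulting preferred allocation is not
          // guaranteed to be the allocation ultimately performed by the
          // device manager. It is only designed to help the device manager make a
          // more informed allocation decision when possible.
          rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}

          // PreStartContainer is called, if indicated by the device plugin during
          // the registration phase, before each container start. A device plugin
          // can run device specific operations such as resetting the device
          // before making devices available to the container.
          rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
      }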

      Note: Plugins are not required to provide useful implementations for GetPreferredAllocation() or PreStartContainer(). Flags indicating which (if any) of these calls are available should be set in the DevicePluginOptions message sent back by a call to GetDevicePluginOptions(). The kubelet will always call GetDevicePluginOptions() to see which optional functions are available, before calling any of them directly.
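
      The corresponding flags are the two booleans in the v1beta1 DevicePluginOptions message:

      message DevicePluginOptions {
          // Indicates if PreStartContainer call is required before each container start.
          bool pre_start_required = 1;
          // Indicates if GetPreferredAllocation is implemented and available for calling.
          bool get_preferred_allocation_available = 2;
      }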

    • The plugin registers itself with the kubelet through the Unix socket at host path /var/lib/kubelet/device-plugins/kubelet.sock.

    A device plugin is expected to detect kubelet restarts and re-register itself with the new kubelet instance. In the current implementation, a new kubelet instance deletes all the existing Unix sockets under /var/lib/kubelet/device-plugins when it starts. A device plugin can monitor the deletion of its Unix socket and re-register itself upon such an event.
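
    A minimal Go sketch of this register-and-watch loop, assuming the k8s.io/kubelet v1beta1 device plugin API and the fsnotify library; the socket name my-plugin.sock is hypothetical, and a real plugin would also restart its own gRPC server before re-registering:

    package main

    import (
        "context"
        "log"
        "path/filepath"
        "time"

        "github.com/fsnotify/fsnotify"
        "google.golang.org/grpc"
        pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
    )

    // register dials the kubelet's registration socket and announces this plugin.
    func register(socketName, resourceName string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "unix://"+pluginapi.KubeletSocket,
            grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            return err
        }
        defer conn.Close()
        _, err = pluginapi.NewRegistrationClient(conn).Register(ctx, &pluginapi.RegisterRequest{
            Version:      pluginapi.Version,
            Endpoint:     socketName, // relative to /var/lib/kubelet/device-plugins/
            ResourceName: resourceName,
        })
        return err
    }

    // watchKubeletRestart re-registers when the plugin's own socket is deleted,
    // which happens when a new kubelet instance cleans the directory on startup.
    func watchKubeletRestart(socketName, resourceName string) error {
        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            return err
        }
        defer watcher.Close()
        if err := watcher.Add(pluginapi.DevicePluginPath); err != nil {
            return err
        }
        sock := filepath.Join(pluginapi.DevicePluginPath, socketName)
        for event := range watcher.Events {
            if event.Name == sock && event.Op&fsnotify.Remove != 0 {
                log.Println("socket removed; kubelet probably restarted, re-registering")
                if err := register(socketName, resourceName); err != nil {
                    return err
                }
            }
        }
        return nil
    }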

    You can deploy a device plugin as a DaemonSet, as a package for your node’s operating system, or manually.

    The canonical directory /var/lib/kubelet/device-plugins requires privileged access, so a device plugin must run in a privileged security context. If you’re deploying a device plugin as a DaemonSet, /var/lib/kubelet/device-plugins must be mounted as a Volume in the plugin’s PodSpec.

    If you choose the DaemonSet approach, you can rely on Kubernetes to place the device plugin’s Pod onto Nodes, to restart the daemon Pod after failure, and to help automate upgrades.
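
    As a rough sketch of such a DaemonSet (the plugin name and image are hypothetical), the key pieces are the privileged security context and the hostPath volume:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: my-device-plugin           # hypothetical name
    spec:
      selector:
        matchLabels:
          name: my-device-plugin
      template:
        metadata:
          labels:
            name: my-device-plugin
        spec:
          containers:
            - name: my-device-plugin
              image: example.com/my-device-plugin:1.0   # hypothetical image
              securityContext:
                privileged: true
              volumeMounts:
                - name: device-plugin
                  mountPath: /var/lib/kubelet/device-plugins
          volumes:
            - name: device-plugin
              hostPath:
                path: /var/lib/kubelet/device-plugins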

    API compatibility

    Kubernetes device plugin support is in beta. The API may change before stabilization, in incompatible ways. As a project, Kubernetes recommends that device plugin developers:

    • Watch for changes in future releases.
    • Support multiple versions of the device plugin API for backward/forward compatibility.

    If you enable the DevicePlugins feature and run device plugins on nodes that need to be upgraded to a Kubernetes release with a newer device plugin API version, upgrade your device plugins to support both versions before upgrading these nodes. Taking that approach will ensure the continuous functioning of the device allocations during the upgrade.

    Monitoring device plugin resources

    FEATURE STATE: Kubernetes v1.15 [beta]

    In order to monitor resources provided by device plugins, monitoring agents need to be able to discover the set of devices that are in use on the node and obtain metadata to describe which container the metric should be associated with. Metrics exposed by device monitoring agents should follow the Kubernetes Instrumentation Guidelines, identifying containers using pod, namespace, and container Prometheus labels.

    The kubelet provides a gRPC service to enable discovery of in-use devices, and to provide metadata for these devices:

    // PodResourcesLister is a service provided by the kubelet that provides information about the
    // node resources consumed by pods and containers on the node
    service PodResourcesLister {
        rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
        rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}
    }

    The List endpoint provides information on resources of running pods, with details such as the IDs of exclusively allocated CPUs, the device IDs as reported by the device plugins, and the ID of the NUMA node where these devices are allocated. Also, for NUMA-based machines, it contains information about the memory and hugepages reserved for a container.
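
    A minimal Go sketch of a monitoring agent calling List over the pod-resources socket described below, assuming the k8s.io/kubelet podresources v1 client:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx,
            "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := podresourcesapi.NewPodResourcesListerClient(conn)
        resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
        if err != nil {
            log.Fatal(err)
        }
        // Print the exclusive CPUs and devices allocated to each container.
        for _, pod := range resp.GetPodResources() {
            for _, c := range pod.GetContainers() {
                fmt.Printf("%s/%s/%s cpus=%v\n", pod.GetNamespace(), pod.GetName(), c.GetName(), c.GetCpuIds())
                for _, dev := range c.GetDevices() {
                    fmt.Printf("  %s: %v\n", dev.GetResourceName(), dev.GetDeviceIds())
                }
            }
        }
    }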

    Note:

    cpu_ids in the ContainerResources in the List endpoint correspond to exclusive CPUs allocated to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the List endpoint needs to be used in conjunction with the GetAllocatableResources endpoint as explained below:

    1. Call GetCpuIds on all ContainerResources in the system
    2. Subtract the CPUs returned by those calls from the CPUs returned by the GetAllocatableResources call
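
    Sketched in Go against the same podresources v1 client as above, that subtraction could look like this:

    // sharedPoolCPUs returns the allocatable CPUs minus every CPU that List
    // reports as exclusively allocated to some container.
    func sharedPoolCPUs(ctx context.Context, client podresourcesapi.PodResourcesListerClient) (map[int64]bool, error) {
        alloc, err := client.GetAllocatableResources(ctx, &podresourcesapi.AllocatableResourcesRequest{})
        if err != nil {
            return nil, err
        }
        shared := make(map[int64]bool)
        for _, cpu := range alloc.GetCpuIds() {
            shared[cpu] = true
        }
        list, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
        if err != nil {
            return nil, err
        }
        for _, pod := range list.GetPodResources() {
            for _, c := range pod.GetContainers() {
                for _, cpu := range c.GetCpuIds() {
                    delete(shared, cpu) // exclusively allocated, so not in the shared pool
                }
            }
        }
        return shared, nil
    }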

    GetAllocatableResources gRPC endpoint

    FEATURE STATE: Kubernetes v1.23 [beta]

    Note: GetAllocatableResources should only be used to evaluate allocatable resources on a node. If the goal is to evaluate free/unallocated resources, it should be used in conjunction with the List() endpoint. The result obtained by GetAllocatableResources remains the same unless the underlying resources exposed to the kubelet change. This happens rarely, but when it does (for example: hotplug/hotunplug, device health changes), the client is expected to call the GetAllocatableResources endpoint again. However, calling GetAllocatableResources is not sufficient in the case of CPU and/or memory updates; the kubelet needs to be restarted to reflect the correct resource capacity and allocatable.

    // AllocatableResourcesResponse contains information about all the devices known by the kubelet
    message AllocatableResourcesResponse {
        repeated ContainerDevices devices = 1;
        repeated int64 cpu_ids = 2;
        repeated ContainerMemory memory = 3;
    }

    Starting from Kubernetes v1.23, GetAllocatableResources is enabled by default. You can disable it by turning off the KubeletPodResourcesGetAllocatable feature gate.

    Before Kubernetes v1.23, the kubelet must be started with the following flag to enable this feature:

    --feature-gates=KubeletPodResourcesGetAllocatable=true

    ContainerDevices expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using an opaque integer ID, whose value is consistent with what device plugins report when they register themselves to the kubelet.
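
    For reference, the topology field lives on the ContainerDevices message in the podresources v1 API:

    message ContainerDevices {
        string resource_name = 1;
        repeated string device_ids = 2;
        TopologyInfo topology = 3;
    }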

    The gRPC service is served over a Unix socket at /var/lib/kubelet/pod-resources/kubelet.sock. Monitoring agents for device plugin resources can be deployed as a daemon, or as a DaemonSet. The canonical directory /var/lib/kubelet/pod-resources requires privileged access, so monitoring agents must run in a privileged security context. If a device monitoring agent is running as a DaemonSet, /var/lib/kubelet/pod-resources must be mounted as a Volume in the device monitoring agent’s PodSpec.

    Support for the PodResourcesLister service requires the KubeletPodResources feature gate to be enabled. It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.

    Device Plugin integration with the Topology Manager

    FEATURE STATE: Kubernetes v1.18 [beta]

    The Topology Manager is a kubelet component that allows resources to be coordinated in a topology-aligned manner. In order to do this, the Device Plugin API was extended to include a TopologyInfo struct.

    Device Plugins that wish to leverage the Topology Manager can send back a populated TopologyInfo struct as part of the device registration, along with the device IDs and the health of the device. The device manager will then use this information to consult with the Topology Manager and make resource assignment decisions.

    TopologyInfo supports a nodes field that is either nil (the default) or a list of NUMA nodes. This lets the Device Plugin publish a device that can span NUMA nodes.
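
    In the v1beta1 device plugin API, this looks like:

    message TopologyInfo {
        repeated NUMANode nodes = 1;
    }

    message NUMANode {
        int64 ID = 1;
    }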

    An example TopologyInfo struct populated for a device by a Device Plugin:

    pluginapi.Device{
        ID:     "25102017",
        Health: pluginapi.Healthy,
        Topology: &pluginapi.TopologyInfo{
            Nodes: []*pluginapi.NUMANode{
                {ID: 0},
            },
        },
    }

    Here are some examples of device plugin implementations:

    • The AMD GPU device plugin
    • The Intel device plugins for Intel GPU, FPGA, and QAT devices
    • The KubeVirt device plugins for hardware-assisted virtualization
    • The NVIDIA GPU device plugin
    • The RDMA device plugin
    • The SR-IOV Network device plugin
    • The Xilinx FPGA device plugins

    What’s next

    • Learn about scheduling GPU resources using device plugins
    • Learn about advertising extended resources on a node
    • Learn about the Topology Manager