Dynamic Resource Allocation

    Dynamic resource allocation is a new API for requesting and sharing resources between pods and containers inside a pod. It is a generalization of the persistent volumes API for generic resources. Third-party resource drivers are responsible for tracking and allocating resources. Different kinds of resources support arbitrary parameters for defining requirements and initialization.

    Kubernetes v1.26 includes cluster-level API support for dynamic resource allocation, but it needs to be enabled explicitly. You also must install a resource driver for the specific resources that are meant to be managed using this API. If you are not running Kubernetes v1.26, check the documentation for that version of Kubernetes.

    API

    The new resource.k8s.io/v1alpha1 API group provides four new types:

    ResourceClass

    Defines which resource driver handles a certain kind of resource and provides common parameters for it. ResourceClasses are created by a cluster administrator when installing a resource driver.

    ResourceClaim

    Defines a particular resource instance that is required by a workload. Created by a user (lifecycle managed manually, can be shared between different Pods) or for individual Pods by the control plane based on a ResourceClaimTemplate (automatic lifecycle, typically used by just one Pod).

    ResourceClaimTemplate

    Defines the spec and some metadata for creating ResourceClaims. Created by a user when deploying a workload. The per-Pod ResourceClaims are then created and removed by Kubernetes automatically.

    PodScheduling

    Used internally by the control plane and resource drivers to coordinate pod scheduling when ResourceClaims need to be allocated for a Pod.

    Parameters for ResourceClass and ResourceClaim are stored in separate objects, typically using the type defined by a CRD that was created when installing a resource driver.

    The core/v1 PodSpec defines ResourceClaims that are needed for a Pod in a new resourceClaims field. Entries in that list reference either a ResourceClaim or a ResourceClaimTemplate. When referencing a ResourceClaim, all Pods using this PodSpec (for example, inside a Deployment or StatefulSet) share the same ResourceClaim instance. When referencing a ResourceClaimTemplate, each Pod gets its own instance.

    The resources.claims list for container resources defines whether a container gets access to these resource instances, which makes it possible to share resources between containers inside a Pod.

    Here is an example for a fictional resource driver. Two ResourceClaim objects will get created for this Pod and each container gets access to one of them.
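    A minimal sketch of such a manifest set, assuming a hypothetical driver named resource-driver.example.com (all object and image names below are illustrative, and real drivers typically also attach a parameters object via parametersRef):

```yaml
# A cluster administrator creates the ResourceClass when installing the driver.
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClass
metadata:
  name: resource.example.com
driverName: resource-driver.example.com
---
# A template from which per-Pod ResourceClaims are created automatically.
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaimTemplate
metadata:
  name: example-claim-template
spec:
  spec:
    resourceClassName: resource.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-claims
spec:
  # Two entries referencing the template, so two ResourceClaims get created.
  resourceClaims:
  - name: resource-0
    source:
      resourceClaimTemplateName: example-claim-template
  - name: resource-1
    source:
      resourceClaimTemplateName: example-claim-template
  containers:
  - name: container0
    image: ubuntu:22.04
    command: ["sleep", "infinity"]
    resources:
      claims:
      - name: resource-0   # this container only sees the first claim
  - name: container1
    image: ubuntu:22.04
    command: ["sleep", "infinity"]
    resources:
      claims:
      - name: resource-1   # this container only sees the second claim
```

Because each entry in resourceClaims references the ResourceClaimTemplate rather than a ResourceClaim, the control plane creates a separate ResourceClaim per entry for this Pod, and each container is granted access to exactly one of them through its resources.claims list.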

    In contrast to native resources (CPU, RAM) and extended resources (managed by a device plugin, advertised by kubelet), the scheduler has no knowledge of what dynamic resources are available in a cluster or how they could be split up to satisfy the requirements of a specific ResourceClaim. Resource drivers are responsible for that. They mark ResourceClaims as “allocated” once resources for them are reserved. This also then tells the scheduler where in the cluster a ResourceClaim is available.

    ResourceClaims can get allocated as soon as they are created (“immediate allocation”), without considering which Pods will use them. The default is to delay allocation until a Pod gets scheduled which needs the ResourceClaim (i.e. “wait for first consumer”).
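    For illustration, the allocation mode is selected per claim through the allocationMode field in the ResourceClaim spec (the class name here is a placeholder for whatever ResourceClass the installed driver provides):

```yaml
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaim
metadata:
  name: immediately-allocated-claim
spec:
  resourceClassName: resource.example.com
  # Allocate as soon as the claim is created. The default,
  # WaitForFirstConsumer, delays allocation until a Pod that
  # uses the claim gets scheduled.
  allocationMode: Immediate
```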

    As part of this process, ResourceClaims also get reserved for the Pod. Currently ResourceClaims can either be used exclusively by a single Pod or an unlimited number of Pods.

    One key feature is that Pods do not get scheduled to a node unless all of their resources are allocated and reserved. This avoids the scenario where a Pod gets scheduled onto one node and then cannot run there, which is bad because such a pending Pod also blocks all other resources like RAM or CPU that were set aside for it.

    Limitations

    The scheduler plugin must be involved in scheduling Pods which use ResourceClaims. Bypassing the scheduler by setting the nodeName field leads to Pods that the kubelet refuses to start because the ResourceClaims are not reserved or not even allocated. It may be possible to support this in the future.

    Dynamic resource allocation is an alpha feature and only enabled when the DynamicResourceAllocation feature gate and the resource.k8s.io/v1alpha1 API group are enabled. For details on that, see the --feature-gates and --runtime-config kube-apiserver parameters. kube-scheduler, kube-controller-manager and kubelet also need the feature gate.
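    As a sketch, for clusters where the control-plane components are started with command-line flags directly, enabling the feature might look like this (managed clusters and kubeadm-based setups configure these differently):

```shell
# kube-apiserver needs both the feature gate and the API group:
kube-apiserver \
  --feature-gates=DynamicResourceAllocation=true \
  --runtime-config=resource.k8s.io/v1alpha1=true

# The other components only need the feature gate:
kube-scheduler --feature-gates=DynamicResourceAllocation=true
kube-controller-manager --feature-gates=DynamicResourceAllocation=true
kubelet --feature-gates=DynamicResourceAllocation=true
```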

    A quick check whether a Kubernetes cluster supports the feature is to list ResourceClass objects with:
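```shell
kubectl get resourceclasses
```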

    If your cluster supports dynamic resource allocation, the response is either a list of ResourceClass objects or:
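```
No resources found
```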

    If not supported, this error is printed instead:
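```
error: the server doesn't have a resource type "resourceclasses"
```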

    The default configuration of kube-scheduler enables the “DynamicResources” plugin if and only if the feature gate is enabled. Custom configurations may have to be modified to include it.
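    As a sketch, explicitly enabling the plugin in a custom KubeSchedulerConfiguration might look like this (field layout assumed from the kubescheduler.config.k8s.io API):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    multiPoint:
      enabled:
      - name: DynamicResources
```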

    What’s next

    • For more information on the design, see the Dynamic Resource Allocation KEP.