Runtime Class

    This page describes the RuntimeClass resource and runtime selection mechanism.

    RuntimeClass is a feature for selecting the container runtime configuration used to run a Pod’s containers.

    You can set a different RuntimeClass between different Pods to provide a balance of performance versus security. For example, if part of your workload deserves a high level of information security assurance, you might choose to schedule those Pods so that they run in a container runtime that uses hardware virtualization. You’d then benefit from the extra isolation of the alternative runtime, at the expense of some additional overhead.

    You can also use RuntimeClass to run different Pods with the same container runtime but with different settings.

    1. Configure the CRI implementation on nodes

    The configurations available through RuntimeClass are Container Runtime Interface (CRI) implementation dependent. See the documentation for your CRI implementation for how to configure it.

    Note: RuntimeClass assumes a homogeneous node configuration across the cluster by default (which means that all nodes are configured the same way with respect to container runtimes). To support heterogeneous node configurations, see Scheduling below.

    2. Create the corresponding RuntimeClass resources

    The configurations set up in step 1 should each have an associated handler name, which identifies the configuration. The handler must be a valid DNS label name. For each handler, create a corresponding RuntimeClass object.

    The name of a RuntimeClass object must be a valid DNS subdomain name.
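    A minimal RuntimeClass manifest might look like the following; the name `myclass` and handler `myconfiguration` are placeholder values:

    ```yaml
    # RuntimeClass is a non-namespaced resource in the node.k8s.io API group.
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: myclass            # the name Pods will reference (placeholder)
    handler: myconfiguration   # the name of the corresponding CRI configuration (placeholder)
    ```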

    Note: It is recommended that RuntimeClass write operations (create/update/patch/delete) be restricted to the cluster administrator. This is typically the default. See Authorization Overview for more details.

    Once RuntimeClasses are configured for the cluster, you can specify a runtimeClassName in the Pod spec to use it. For example:
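    A Pod requesting the placeholder RuntimeClass `myclass` might look like this:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod                  # placeholder name
    spec:
      runtimeClassName: myclass    # must match the name of an existing RuntimeClass
      containers:
      - name: app                  # placeholder container name and image
        image: registry.k8s.io/pause
    ```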

    This instructs the kubelet to use the named RuntimeClass to run this Pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the Pod enters the Failed terminal phase. Look for a corresponding event for an error message.

    If no runtimeClassName is specified, the default RuntimeHandler will be used, which is equivalent to the behavior when the RuntimeClass feature is disabled.

    For more details on setting up CRI runtimes, see CRI installation.

    containerd

    Runtime handlers are configured through containerd’s configuration at /etc/containerd/config.toml. Valid handlers are configured under the runtimes section:
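    A handler entry has roughly the following shape; ${HANDLER_NAME} is a placeholder for the handler name you choose, and the exact table path can vary between containerd versions:

    ```toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
    ```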

    See containerd’s config documentation for more details.

    CRI-O

    Runtime handlers are configured through CRI-O’s configuration at /etc/crio/crio.conf. Valid handlers are configured under the crio.runtime.runtimes table. See CRI-O’s config documentation for more details.
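    By way of illustration, a CRI-O handler entry might look like this; the handler name and binary path are placeholders:

    ```toml
    [crio.runtime.runtimes.${HANDLER_NAME}]
      runtime_path = "${PATH_TO_BINARY}"
    ```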

    Scheduling

    FEATURE STATE: Kubernetes v1.16 [beta]

    By specifying the scheduling field for a RuntimeClass, you can set constraints to ensure that Pods running with this RuntimeClass are scheduled to nodes that support it. If scheduling is not set, this RuntimeClass is assumed to be supported by all nodes.

    To ensure pods land on nodes supporting a specific RuntimeClass, that set of nodes should have a common label which is then selected by the runtimeclass.scheduling.nodeSelector field. The RuntimeClass’s nodeSelector is merged with the pod’s nodeSelector in admission, effectively taking the intersection of the set of nodes selected by each. If there is a conflict, the pod will be rejected.

    If the supported nodes are tainted to prevent other RuntimeClass pods from running on the node, you can add tolerations to the RuntimeClass. As with the nodeSelector, the tolerations are merged with the pod’s tolerations in admission, effectively taking the union of the set of nodes tolerated by each.
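    Putting both together, a RuntimeClass with scheduling constraints might look like the following; the label key, taint key, name, and handler are illustrative placeholders:

    ```yaml
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: myclass             # placeholder
    handler: myconfiguration    # placeholder
    scheduling:
      nodeSelector:
        runtime-support: "true" # nodes supporting this runtime carry this label (placeholder)
      tolerations:
      - key: dedicated-runtime  # placeholder taint key on the dedicated nodes
        operator: Exists
        effect: NoSchedule
    ```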

    To learn more about configuring the node selector and tolerations, see Assign Pods to Nodes and Taints and Tolerations.

    Pod Overhead

    FEATURE STATE: Kubernetes v1.24 [stable]

    You can specify overhead resources that are associated with running a Pod. Declaring overhead allows the cluster (including the scheduler) to account for it when making decisions about Pods and resources.
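    Overhead is declared on the RuntimeClass through its overhead.podFixed field. For example (the name, handler, and resource values are illustrative):

    ```yaml
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: myclass            # placeholder
    handler: myconfiguration   # placeholder
    overhead:
      podFixed:                # fixed resources consumed per Pod, beyond container requests
        memory: "120Mi"        # illustrative values
        cpu: "250m"
    ```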