Handling retriable and non-retriable pod failures with Pod failure policy
This document shows you how to use the Pod failure policy, in combination with the default Pod backoff failure policy, to improve the control over the handling of container- or Pod-level failure within a Job.
The definition of Pod failure policy may help you to:
- better utilize the computational resources by avoiding unnecessary Pod retries.
- avoid Job failures due to Pod disruptions (such as preemption, API-initiated eviction or taint-based eviction).
You should already be familiar with the basic use of Job.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Your Kubernetes server must be at or later than version v1.25. To check the version, enter kubectl version.
With the following example, you can learn how to use Pod failure policy to avoid unnecessary Pod restarts when a Pod failure indicates a non-retriable software bug.
First, create a Job based on the config:
/controllers/job-pod-failure-policy-failjob.yaml
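That manifest is not reproduced on this page. As a rough sketch, a configuration consistent with the behavior described below (a container that runs for about 30s, then exits with code 42, and a podFailurePolicy rule that fails the Job on that exit code) could look like this; treat the exact names and values as illustrative:

# Illustrative sketch of the referenced manifest, not necessarily the exact published file.
apiVersion: batch/v1
kind: Job
metadata:
  name: job-pod-failure-policy-failjob
spec:
  completions: 8
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash"]
        args:
        - -c
        # simulate a non-retriable software bug by exiting with code 42
        - echo "Going to exit with 42 to simulate a software bug." && sleep 30 && exit 42
  backoffLimit: 6
  podFailurePolicy:
    rules:
    # fail the whole Job immediately when the main container exits with 42
    - action: FailJob
      onExitCodes:
        containerName: main
        operator: In
        values: [42]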
kubectl create -f job-pod-failure-policy-failjob.yaml
After around 30s the entire Job should be terminated. Inspect the status of the Job by running:
kubectl get jobs -l job-name=job-pod-failure-policy-failjob -o yaml
In the Job status, you can see a Failed condition with the reason field equal to PodFailurePolicy. Additionally, the message field contains more detailed information about the Job termination, such as: Container main for pod default/job-pod-failure-policy-failjob-8ckj8 failed with exit code 42 matching FailJob rule at index 0.
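Abridged, the relevant part of that output could look roughly like the following (an illustrative sketch; the Pod name suffix and other fields will differ in your cluster):

# Illustrative excerpt of the Job status, not verbatim output.
status:
  conditions:
  - type: Failed
    status: "True"
    reason: PodFailurePolicy
    message: Container main for pod default/job-pod-failure-policy-failjob-8ckj8 failed with exit code 42 matching FailJob rule at index 0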
For comparison, if the Pod failure policy were disabled, it would take 6 retries of the Pod, taking at least 2 minutes.
Delete the Job you created:
kubectl delete jobs/job-pod-failure-policy-failjob
The cluster automatically cleans up the Pods.
With the following example, you can learn how to use Pod failure policy to prevent Pod disruptions from incrementing the Pod retry counter towards the .spec.backoffLimit limit.
Caution: Timing is important for this example, so you may want to read the steps before execution. In order to trigger a Pod disruption it is important to drain the node while the Pod is running on it (within 90s since the Pod is scheduled).
Create a Job based on the config:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-pod-failure-policy-ignore
spec:
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash"]
        args:
        - -c
        # run for about 90s, then exit successfully
        - echo "Going to exit with 0 (success)." && sleep 90 && exit 0
  backoffLimit: 0
  podFailurePolicy:
    rules:
    # do not count Pod failures caused by disruptions towards backoffLimit
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
Then create the Job by running:
kubectl create -f job-pod-failure-policy-ignore.yaml
Run this command to check the nodeName the Pod is scheduled to:
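One way to capture that node name in a shell variable, assuming a single Pod currently matches the label selector (the jsonpath expression here is just one possible approach):

nodeName=$(kubectl get pods -l job-name=job-pod-failure-policy-ignore -o jsonpath='{.items[0].spec.nodeName}')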
Drain the node to evict the Pod before it completes (within 90s):
kubectl drain nodes/$nodeName --ignore-daemonsets --grace-period=0
Inspect the .status.failed field to check that the counter for the Job is not incremented:
kubectl get jobs -l job-name=job-pod-failure-policy-ignore -o yaml
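While the replacement Pod runs, the relevant part of the status could look roughly like this (illustrative values; the point is that no failed counter appears):

# Illustrative excerpt of the Job status.
status:
  active: 2
  succeeded: 0
  # no .status.failed entry: the disruption matched the Ignore rule,
  # so it does not count towards .spec.backoffLimit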
Uncordon the node:
kubectl uncordon nodes/$nodeName
The Job resumes and succeeds.
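You can confirm this by listing the Job again with the same label selector; the completions count should eventually reach the requested 4 completions:

kubectl get jobs -l job-name=job-pod-failure-policy-ignore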
For comparison, if the Pod failure policy were disabled, the Pod disruption would result in terminating the entire Job (as the .spec.backoffLimit is set to 0).
Cleaning up
Delete the Job you created:
kubectl delete jobs/job-pod-failure-policy-ignore
You could rely solely on the Pod backoff failure policy, by specifying the Job’s .spec.backoffLimit field. However, in many situations it is problematic to find a balance between setting a low value for .spec.backoffLimit to avoid unnecessary Pod retries, yet high enough to make sure the Job would not be terminated by Pod disruptions.
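For illustration, relying only on the backoff limit would mean a spec along the following lines, with no podFailurePolicy section; the name and values here are hypothetical:

apiVersion: batch/v1
kind: Job
metadata:
  name: job-backoff-only   # hypothetical example name
spec:
  backoffLimit: 6          # every Pod failure, including disruptions, consumes one retry
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash", "-c", "exit 0"]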