Indexed Job for Parallel Processing with Static Work Assignment
In this example, you will run a Kubernetes Job that uses multiple parallel worker processes. Each worker is a different container running in its own Pod. The Pods have an index number that the control plane sets automatically, which allows each Pod to identify which part of the overall task to work on.
The pod index is available in the annotation batch.kubernetes.io/job-completion-index as a string representing its decimal value. In order for the containerized task process to obtain this index, you can publish the value of the annotation using the downward API mechanism. For convenience, the control plane automatically sets the downward API to expose the index in the JOB_COMPLETION_INDEX environment variable.
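For example, a worker process can read that variable directly at startup; here is a minimal sketch (the message printed is purely illustrative):

# Runs inside a Pod of an Indexed Job; JOB_COMPLETION_INDEX is set by the control plane
echo "Processing work item number ${JOB_COMPLETION_INDEX}"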
Here is an overview of the steps in this example:
- Define a Job manifest using indexed completion. The downward API allows you to pass the pod index annotation as an environment variable or file to the container.
- Start an Indexed Job based on that manifest.
You should already be familiar with the basic, non-parallel, use of Job.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of the Kubernetes playgrounds available online.
Your Kubernetes server must be at or later than version v1.21. To check the version, enter kubectl version.
To access the work item from the worker program, you have a few options:
- Read the JOB_COMPLETION_INDEX environment variable. The Job controller automatically links this variable to the annotation containing the completion index.
- Read a file that contains the completion index.
- Assuming that you can't modify the program, you can wrap it with a script that reads the index using any of the methods above and converts it into something that the program can use as input; a sketch of such a wrapper follows this list.
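Here is a minimal sketch of such a wrapper script. The program name my-program and the fallback file path are assumptions chosen for illustration; substitute whatever your workload actually expects:

#!/bin/sh
# Prefer the JOB_COMPLETION_INDEX environment variable set by the Job controller;
# otherwise fall back to a downward API file (this path is an assumption).
INDEX="${JOB_COMPLETION_INDEX:-$(cat /input/index.txt)}"
# my-program is a hypothetical worker that takes the index as its first argument
my-program "$INDEX"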
You'll use the rev tool from the busybox container image.
As this is only an example, each Pod only does a tiny piece of work (reversing a short string). In a real workload you might, for example, create a Job that represents the task of producing 60 seconds of video based on scene data. Each work item in the video rendering Job would be to render a particular frame of that video clip. Indexed completion would mean that each Pod in the Job knows which frame to render and publish, by counting frames from the start of the clip.
Here is a sample Job manifest that uses Indexed completion mode:
apiVersion: batch/v1
kind: Job
metadata:
  name: 'indexed-job'
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: 'input'
        image: 'docker.io/library/bash'
        command:
        - "bash"
        - "-c"
        - |
          items=(foo bar baz qux xyz)
          echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt
        volumeMounts:
        - mountPath: /input
          name: input
      containers:
      - name: 'worker'
        image: 'docker.io/library/busybox'
        command:
        - "rev"
        - "/input/data.txt"
        volumeMounts:
        - mountPath: /input
          name: input
      volumes:
      - name: input
        emptyDir: {}
In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. An init container maps the index to a static value and writes it to a file that is shared with the container running the worker through an emptyDir volume. Optionally, you can define your own environment variable through the downward API to publish the index to containers. You can also choose to load a list of values from a ConfigMap as an environment variable or file.
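For instance, a container spec fragment that publishes the index under a custom environment variable name via the downward API could look like the sketch below; the variable name WORKER_INDEX is an assumption chosen for illustration:

# Fragment of a Pod template's container spec (illustrative only)
env:
- name: WORKER_INDEX   # assumed name; use whatever name your program expects
  valueFrom:
    fieldRef:
      fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']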
Alternatively, you can directly use the downward API to pass the annotation value as a volume file, as shown in the following example:
apiVersion: batch/v1
kind: Job
metadata:
  name: 'indexed-job'
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: 'worker'
        image: 'docker.io/library/busybox'
        command:
        - "rev"
        - "/input/data.txt"
        volumeMounts:
        - mountPath: /input
          name: input
      volumes:
      - name: input
        downwardAPI:
          items:
          - path: "data.txt"
            fieldRef:
              fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many can run at once, whereas .spec.completions determines how many Pods the Job creates in total. Because .spec.parallelism is less than .spec.completions, the control plane waits for some of the first Pods to complete before starting more of them.
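To create the Job, apply one of the manifests above; the file name below assumes you saved the first manifest locally as indexed-job.yaml. You can then list the Job's Pods to watch the indexes start and finish:

# Create the Job (the file name is an assumption; use the manifest you saved)
kubectl apply -f indexed-job.yaml
# List the Pods created for the Job
kubectl get pods -l job-name=indexed-job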
You can wait for the Job to succeed, with a timeout:
# The check for condition name is case insensitive
kubectl wait --for=condition=complete --timeout=300s job/indexed-job
Now, describe the Job and check that it was successful.
kubectl describe jobs/indexed-job
The output lists the Job's completion mode as Indexed and shows how many Pods succeeded.
In this example, you run the Job with custom values for each index. You can inspect the output of one of the pods:
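A minimal sketch: pick one of the Job's Pods and print its log. The Pod name below is a placeholder; substitute one of the names from the earlier Pod listing:

# Replace <pod-name> with the name of one of the Job's Pods
kubectl logs <pod-name>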
The output is the input string for that Pod's index, reversed by the rev command.