Indexed Job for Parallel Processing with Static Work Assignment

    In this example, you will run a Kubernetes Job that uses multiple parallel worker processes. Each worker is a different container running in its own Pod. The Pods have an index number that the control plane sets automatically, which allows each Pod to identify which part of the overall task to work on.

    The pod index is available in the annotation batch.kubernetes.io/job-completion-index as a string representing its decimal value. In order for the containerized task process to obtain this index, you can publish the value of the annotation using the downward API mechanism. For convenience, the control plane automatically sets the downward API to expose the index in the JOB_COMPLETION_INDEX environment variable.
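    If you want to publish the annotation yourself, for example under a different variable name, a minimal sketch of an env entry inside the Pod template could look like this (the name WORKER_INDEX is a hypothetical choice for illustration; the control plane already does the equivalent for JOB_COMPLETION_INDEX):

    # Sketch: expose the completion-index annotation via the downward API.
    # WORKER_INDEX is a hypothetical variable name.
    env:
    - name: WORKER_INDEX
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']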

    Here is an overview of the steps in this example:

    1. Define a Job manifest using indexed completion. The downward API allows you to pass the pod index annotation as an environment variable or file to the container.
    2. Start an Indexed Job based on that manifest.

    You should already be familiar with the basic, non-parallel, use of Jobs.

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use a Kubernetes playground such as Killercoda or Play with Kubernetes.

    Your Kubernetes server must be at or later than version v1.21. To check the version, enter kubectl version.

    To access the work item from the worker program, you have a few options:

    1. Read the JOB_COMPLETION_INDEX environment variable. The Job controller automatically links this variable to the annotation containing the completion index.
    2. Read a file that contains the completion index.
    3. Assuming that you can’t modify the program, you can wrap it with a script that reads the index using any of the methods above and converts it into something that the program can use as input.
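    For the third option, a minimal wrapper sketch might look like the following; the work-item list and the output path are assumptions made for illustration:

    #!/usr/bin/env bash
    # Hypothetical wrapper: map the completion index to a static work item
    # and feed it to an unmodified program (here, rev).
    set -euo pipefail
    items=(foo bar baz qux xyz)     # static work list (assumption)
    echo "${items[$JOB_COMPLETION_INDEX]}" > /tmp/data.txt
    exec rev /tmp/data.txt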

    You’ll use the rev tool from the busybox container image.
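    If you want to see what rev does before running the Job, you can try it locally (assuming rev, or a busybox shell, is available on your machine); it reverses the characters of each input line:

    echo "qux" | rev
    # prints: xuq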

    As this is only an example, each Pod only does a tiny piece of work (reversing a short string). In a real workload you might, for example, create a Job that represents the task of producing 60 seconds of video based on scene data. Each work item in the video rendering Job would be to render a particular frame of that video clip. Indexed completion would mean that each Pod in the Job knows which frame to render and publish, by counting frames from the start of the clip.

    Here is a sample Job manifest that uses Indexed completion mode:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: 'indexed-job'
    spec:
      completions: 5
      parallelism: 3
      completionMode: Indexed
      template:
        spec:
          restartPolicy: Never
          initContainers:
          - name: 'input'
            image: 'docker.io/library/bash'
            command:
            - "bash"
            - "-c"
            - |
              items=(foo bar baz qux xyz)
              echo ${items[$JOB_COMPLETION_INDEX]} > /input/data.txt
            volumeMounts:
            - mountPath: /input
              name: input
          containers:
          - name: 'worker'
            image: 'docker.io/library/busybox'
            command:
            - "rev"
            - "/input/data.txt"
            volumeMounts:
            - mountPath: /input
              name: input
          volumes:
          - name: input
            emptyDir: {}

    In the example above, you use the builtin JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. An init container maps the index to a static value and writes it to a file that is shared with the container running the worker through an emptyDir volume. Optionally, you can define your own environment variable through the downward API to publish the index to containers. You can also choose to load a list of values from a ConfigMap.
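    As a sketch of the ConfigMap approach, you could keep one work item per completion index and have the init container (or the worker itself) read the key matching $JOB_COMPLETION_INDEX from a mounted volume. The name job-input and the values below are assumptions for illustration:

    # Hypothetical ConfigMap with one entry per completion index.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: job-input        # name is an assumption
    data:
      "0": "foo"
      "1": "bar"
      "2": "baz"
      "3": "qux"
      "4": "xyz"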

    Alternatively, you can directly use the downward API to pass the annotation value as a volume file, as shown in the following example:


    apiVersion: batch/v1
    kind: Job
    metadata:
      name: 'indexed-job'
    spec:
      completions: 5
      parallelism: 3
      completionMode: Indexed
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: 'worker'
            image: 'docker.io/library/busybox'
            command:
            - "rev"
            - "/input/data.txt"
            volumeMounts:
            - mountPath: /input
              name: input
          volumes:
          - name: input
            downwardAPI:
              items:
              - path: "data.txt"
                fieldRef:
                  fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
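    To start the Job, save whichever manifest you prefer and create it with kubectl. The local file name below is an assumption; use the path of your own file:

    kubectl apply -f indexed-job.yaml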

    When you create this Job, the control plane creates a series of Pods, one for each index you specified. The value of .spec.parallelism determines how many Pods can run at once, whereas .spec.completions determines how many Pods the Job creates in total.

    Because .spec.parallelism is less than .spec.completions, the control plane waits for some of the first Pods to complete before starting more of them.
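    You can watch this happen by listing the Job's Pods; the job-name label is added by the Job controller:

    kubectl get pods -l job-name=indexed-job --watch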

    You can wait for the Job to succeed, with a timeout:

    # The check for condition name is case insensitive
    kubectl wait --for=condition=complete --timeout=300s job/indexed-job

    Now, describe the Job and check that it was successful.

    kubectl describe jobs/indexed-job

    In the output, check that the Completion Mode is Indexed, that Completed Indexes shows 0-4, and that all 5 Pods succeeded.

    In this example, you run the Job with custom values for each index. You can inspect the output of one of the pods:
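    For example, you can fetch the logs of one of the Pods. The Pod name below is a placeholder; substitute a real name from kubectl get pods:

    # Replace the suffix with the name of one of your Job's Pods
    kubectl logs indexed-job-<pod-suffix>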

    The output is that Pod's input string, reversed by rev.