Fine Parallel Processing Using a Work Queue

    In this example, as each pod is created, it picks up one unit of work from a task queue, processes it, and repeats until the end of the queue is reached.

    Here is an overview of the steps in this example:

    1. Start a storage service to hold the work queue. In this example, we use Redis to store our work items. In the previous example, we used RabbitMQ. In this example, we use Redis and a custom work-queue client library because AMQP does not provide a good way for clients to detect when a finite-length work queue is empty. In practice you would set up a store such as Redis once and reuse it for the work queues of many jobs, and other things.
    2. Create a queue, and fill it with messages. Each message represents one task to be done. In this example, a message is a string naming a work item; the worker simulates a lengthy computation on each item.
    3. Start a Job that works on tasks from the queue. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached.

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.

    Be familiar with the basic, non-parallel use of Job.

    Starting Redis

    For this example, for simplicity, we will start a single instance of Redis. See the Redis example for how to deploy Redis scalably and redundantly.

    You could also download the following files directly:
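    The Redis pod and service manifests are not reproduced in this page. A minimal sketch of what they contain might look like the following; the pod and container names here are illustrative assumptions, but the Service should be named redis so that the worker's default hostname resolves:

```yaml
# redis-pod.yaml (sketch): a single Redis instance
apiVersion: v1
kind: Pod
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  containers:
  - name: master
    image: redis
    ports:
    - containerPort: 6379
---
# redis-service.yaml (sketch): exposes the pod on the hostname "redis"
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
```

    Apply both manifests with kubectl apply before filling the queue.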

    Now let’s fill the queue with some “tasks”. In our example, our tasks are strings to be printed.

    Start a temporary interactive pod for running the Redis CLI.
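    The exact command is not included above; one common way to start such a pod, assuming the Redis Service from the previous step is reachable as redis, is:

```shell
# Run a throwaway interactive pod; the redis image includes redis-cli
kubectl run -i --tty temp --image redis --command "/bin/sh"
```

    Once the shell prompt appears, redis-cli -h redis connects to the queue.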

    Now hit enter, start the Redis CLI, and create a list with some work items in it.

```shell
redis:6379> rpush job2 "apple"
(integer) 1
redis:6379> rpush job2 "banana"
(integer) 2
redis:6379> rpush job2 "cherry"
(integer) 3
redis:6379> rpush job2 "date"
(integer) 4
redis:6379> rpush job2 "fig"
(integer) 5
redis:6379> rpush job2 "grape"
(integer) 6
redis:6379> rpush job2 "lemon"
(integer) 7
redis:6379> rpush job2 "melon"
(integer) 8
redis:6379> rpush job2 "orange"
(integer) 9
redis:6379> lrange job2 0 -1
1) "apple"
2) "banana"
3) "cherry"
4) "date"
5) "fig"
6) "grape"
7) "lemon"
8) "melon"
9) "orange"
```

    Note: if you do not have Kube DNS set up correctly, you may need to run the Redis CLI as redis-cli -h $REDIS_SERVICE_HOST instead.

    Create an Image

    Now we are ready to create an image that we will run.

    We will use a Python worker program with a Redis client to read the messages from the message queue.

    A simple Redis work queue client library is provided, called rediswq.py.

    The “worker” program in each Pod of the Job uses the work queue client library to get work. Here it is:

    application/job/redis/worker.py

```python
#!/usr/bin/env python

import time
import rediswq

host = "redis"
# Uncomment next two lines if you do not have Kube-DNS working.
# import os
# host = os.getenv("REDIS_SERVICE_HOST")

q = rediswq.RedisWQ(name="job2", host=host)
print("Worker with sessionID: " + q.sessionID())
print("Initial queue state: empty=" + str(q.empty()))
while not q.empty():
    item = q.lease(lease_secs=10, block=True, timeout=2)
    if item is not None:
        itemstr = item.decode("utf-8")
        print("Working on " + itemstr)
        time.sleep(10)  # Put your actual work here instead of sleep.
        q.complete(item)
    else:
        print("Waiting for work")
print("Queue empty, exiting")
```
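    The client library itself is not reproduced in this page. Its essential contract — lease an item for a limited time, complete it when done, and return expired leases to the queue so a crashed worker's items are not lost — can be illustrated with a small in-memory stand-in. This is a hypothetical sketch, not the real rediswq.py, which performs the queue-to-processing move atomically with a Redis operation such as RPOPLPUSH:

```python
import time
import uuid

class InMemoryWQ:
    """Illustration of a lease-based work queue (not the real rediswq.py).

    lease() moves an item from the pending list to a leased set with an
    expiry time; complete() removes it; expired leases are put back on the
    pending list, giving at-least-once delivery.
    """
    def __init__(self, items):
        self._main = list(items)          # pending items
        self._leased = {}                 # item -> lease expiry time
        self._session = uuid.uuid4().hex  # identifies this worker

    def sessionID(self):
        return self._session

    def _requeue_expired(self):
        now = time.time()
        for item, expiry in list(self._leased.items()):
            if expiry < now:              # lease ran out: worker presumed dead
                del self._leased[item]
                self._main.append(item)

    def empty(self):
        self._requeue_expired()
        return not self._main and not self._leased

    def lease(self, lease_secs=10):
        self._requeue_expired()
        if not self._main:
            return None                   # nothing pending right now
        item = self._main.pop(0)          # atomic in real Redis (RPOPLPUSH)
        self._leased[item] = time.time() + lease_secs
        return item

    def complete(self, item):
        self._leased.pop(item, None)      # drop the lease; item is done

q = InMemoryWQ(["apple", "banana"])
done = []
while not q.empty():
    item = q.lease(lease_secs=10)
    if item is not None:
        done.append(item)                 # "process" the item
        q.complete(item)
print(done)
```

    Because a lease can expire and be re-handed out, an item may be processed more than once; real workers for this pattern should therefore be idempotent.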

    You could also download worker.py, rediswq.py, and the Dockerfile, then build the image:
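    The Dockerfile is not shown in this page; a minimal sketch sufficient for this worker (the base image and file paths here are assumptions, not necessarily the file shipped with the example) would be:

```dockerfile
FROM python
RUN pip install redis
COPY worker.py /worker.py
COPY rediswq.py /rediswq.py

CMD  python worker.py
```

    With worker.py, rediswq.py, and this Dockerfile in one directory, docker build -t job-wq-2 . produces the image.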

    For the Docker Hub, tag your app image with your username and push to the Hub with the below commands. Replace <username> with your Hub username.

```shell
docker tag job-wq-2 <username>/job-wq-2
docker push <username>/job-wq-2
```

    You need to push to a public repository, or configure your cluster to be able to access your private repository. If you are using Google Container Registry, tag your app image with your project name and push to GCR:

```shell
docker tag job-wq-2 gcr.io/<project>/job-wq-2
gcloud docker -- push gcr.io/<project>/job-wq-2
```

    Here is the job definition:

    application/job/redis/job.yaml
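    The manifest file itself is not reproduced here. A sketch consistent with the kubectl describe output shown later (container named c, parallelism 2, completions left unset) would be:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
      - name: c
        image: gcr.io/myproject/job-wq-2
      restartPolicy: OnFailure
```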

    Be sure to edit the job template to change gcr.io/myproject to your own path.

    In this example, each pod works on several items from the queue and then exits when there are no more items. Since the workers themselves detect when the work queue is empty, and the Job controller does not know about the work queue, it relies on the workers to signal when they are done. The workers signal that the queue is empty by exiting with success: as soon as any worker exits successfully, the controller knows the work is done and the remaining Pods will exit soon. That is why the Job's completion count is effectively 1 (completions is left unset, so the success of any one Pod marks the Job complete). The Job controller will wait for the other pods to complete, too.

    Running the Job

    Now run the Job:

```shell
kubectl apply -f ./job.yaml
```

    Now wait a bit, then check on the job.

```shell
kubectl describe jobs/job-wq-2
```

```
Name:             job-wq-2
Selector:         controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f
Labels:           controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f
                  job-name=job-wq-2
Annotations:      <none>
Parallelism:      2
Completions:      <unset>
Start Time:       Mon, 11 Jan 2016 17:07:59 -0800
Pods Statuses:    1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       controller-uid=b1c7e4e3-92e1-11e7-b85e-fa163ee3c11f
                job-name=job-wq-2
  Containers:
   c:
    Image:              gcr.io/exampleproject/job-wq-2
    Port:
    Environment:        <none>
    Mounts:             <none>
  Volumes:              <none>
Events:
  FirstSeen  LastSeen  Count  From               SubobjectPath  Type    Reason            Message
  ---------  --------  -----  ----               -------------  ------  ------            -------
  33s        33s       1      {job-controller }                 Normal  SuccessfulCreate  Created pod: job-wq-2-lglf8
```

    Then view the logs of one of the pods:

```shell
kubectl logs pods/job-wq-2-7r7b2
```

```
Worker with sessionID: bbd72d0a-9e5c-4dd6-abf6-416cc267991f
Initial queue state: empty=False
Working on banana
Working on lemon
```

    As you can see, one of our pods worked on several work units.

    If running a queue service or modifying your containers to use a work queue is inconvenient, you may want to consider one of the other job patterns.

    If you have a continuous stream of background processing work to run, then consider running your background workers with a ReplicaSet instead, and consider running a background processing library such as Resque.