Scheduling function runs

    We assume that you have followed the standard Kubernetes deployment guide, which means that you have OpenFaaS deployed into two namespaces:

    • openfaas for the core components (UI, gateway, etc.)
    • openfaas-fn for the function deployments

    For this example, we use the sample nodeinfo function, which can be deployed using this stack file and the CLI:

        $ faas deploy
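    If you don't have a stack file to hand, the same sample function can also be deployed straight from the OpenFaaS function store (as used again later in this guide):

    ```shell
    faas-cli store deploy nodeinfo
    ```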

    We can then define a Kubernetes cron job to call this function every minute using this manifest file:

        # node-cron.yaml
        apiVersion: batch/v1beta1
        kind: CronJob
        metadata:
          name: nodeinfo
          namespace: openfaas
        spec:
          schedule: "*/1 * * * *"
          concurrencyPolicy: Forbid
          successfulJobsHistoryLimit: 1
          failedJobsHistoryLimit: 3
          jobTemplate:
            spec:
              template:
                spec:
                  containers:
                  - name: faas-cli
                    image: openfaas/faas-cli:0.8.3
                    args:
                    - /bin/sh
                    - -c
                    - echo "verbose" | faas-cli invoke nodeinfo -g http://gateway.openfaas:8080
                  restartPolicy: OnFailure

    You should also update the image tag to the latest available version of the faas-cli, which can be found on the faas-cli releases page.

    The important thing to notice is that we are using a Docker container with the faas-cli to invoke the function. This keeps the job generic and easy to adapt to other functions.

        $ kubectl apply -f node-cron.yaml
        $ kubectl -n=openfaas get cronjob nodeinfo --watch
        NAME       SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
        nodeinfo   */1 * * * *   False     0        <none>          42s
        nodeinfo   */1 * * * *   False     1        2s              44s
        nodeinfo   */1 * * * *   False     0        12s             54s
        nodeinfo   */1 * * * *   False     1        2s              1m
        nodeinfo   */1 * * * *   False     0        12s             1m

    Unfortunately, there is no one-line command in kubectl for getting the logs from a cron job. Kubernetes creates a new Job object for each run of the CronJob, so we can find the most recent run of our CronJob by listing its Job objects.
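    For example, assuming the CronJob was created in the openfaas namespace as above:

    ```shell
    kubectl -n openfaas get jobs
    ```

    Each Job name carries a suffix derived from its scheduled run time, for example nodeinfo-1529226900.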

    We can then use that Job name to get the output logs:

        $ kubectl -n openfaas logs -l "job-name=nodeinfo-1529226900"
        Hostname: nodeinfo-6fffdb4446-57mzn
        Platform: linux
        Arch: x64
        CPU count: 1
        Uptime: 997420
        [ { model: 'Intel(R) Xeon(R) CPU @ 2.20GHz',
            speed: 2199,
            times:
             { user: 360061300,
               nice: 2053900,
               sys: 142472900,
               idle: 9425509300,
               irq: 0 } } ]
        { lo:
           [ { address: '127.0.0.1',
               netmask: '255.0.0.0',
               family: 'IPv4',
               mac: '00:00:00:00:00:00',
               internal: true },
             { address: '::1',
               netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
               family: 'IPv6',
               mac: '00:00:00:00:00:00',
               scopeid: 0,
               internal: true } ],
          eth0:
           [ { address: '10.4.2.40',
               netmask: '255.255.255.0',
               family: 'IPv4',
               mac: '0a:58:0a:04:02:28',
               internal: false },
             { address: 'fe80::f08e:d8ff:fecc:9635',
               netmask: 'ffff:ffff:ffff:ffff::',
               family: 'IPv6',
               mac: '0a:58:0a:04:02:28',
               scopeid: 3,
               internal: false } ] }

    This example assumes no authentication is enabled on the gateway.

    In this example, I created the CronJob in the same namespace as the gateway. If we deploy the CronJob in a different namespace, then we need to update the job arguments to match. Fortunately, with Kubernetes DNS, this simply means namespace-qualifying the gateway parameter, e.g. faas-cli invoke nodeinfo -g http://gateway.openfaas:8080

    If you have enabled basic auth on the gateway, then the invoke command will also need to be updated to first log in the CLI client. This assumes that you have created the basic auth secret as in the Helm install guide.
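    For reference, a secret with the name and key names used in the manifest below can be created like this (the user name and password values here are only examples):

    ```shell
    kubectl -n openfaas create secret generic basic-auth \
      --from-literal=basic-auth-user=admin \
      --from-literal=basic-auth-password=changeme
    ```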

    You could then update the CronJob to log in first, like this:

        # nodeauth-cron.yaml
        apiVersion: batch/v1beta1
        kind: CronJob
        metadata:
          name: nodeinfo-auth
          namespace: openfaas
        spec:
          schedule: "*/1 * * * *"
          concurrencyPolicy: Forbid
          successfulJobsHistoryLimit: 1
          failedJobsHistoryLimit: 3
          jobTemplate:
            spec:
              template:
                spec:
                  containers:
                  - name: faas-cli
                    image: openfaas/faas-cli:0.8.3
                    env:
                    - name: USERNAME
                      valueFrom:
                        secretKeyRef:
                          name: basic-auth
                          key: basic-auth-user
                    - name: PASSWORD
                      valueFrom:
                        secretKeyRef:
                          name: basic-auth
                          key: basic-auth-password
                    args:
                    - /bin/sh
                    - -c
                    # /bin/sh -c only executes its first argument, so chain both commands in one string
                    - echo -n $PASSWORD | faas-cli login -g http://gateway.openfaas:8080 -u $USERNAME --password-stdin && echo "verbose" | faas-cli invoke nodeinfo -g http://gateway.openfaas:8080
                  restartPolicy: OnFailure
    • Deploy the connector
        curl -s https://raw.githubusercontent.com/zeerorg/cron-connector/master/yaml/kubernetes/connector-dep.yml | kubectl create --namespace openfaas -f -
    • Now annotate a function with a topic to give it a schedule

    nodeinfo.yaml

        faas-cli deploy -f nodeinfo.yaml
    • Or deploy directly from the store
        faas-cli store deploy nodeinfo \
          --annotation topic="cron-function" \
          --annotation schedule="*/5 * * * *"
    • Now check the logs
        kubectl logs -n openfaas-fn deploy/nodeinfo -f

    You'll see the function invoked every 5 minutes as per the schedule.

    To stop the invocations, remove the two annotations or remove the cron-connector deployment.
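    To remove the connector itself, delete its deployment; assuming the deployment created by connector-dep.yml is named cron-connector, that would be:

    ```shell
    kubectl delete -n openfaas deployment/cron-connector
    ```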

    Docker Swarm has no concept of scheduled tasks or cron, but we have a suitable recommendation which you can use with your OpenFaaS cluster. If you deploy a Jenkins master service, then you can use it to manage your scheduled tasks. It will handle distributed locking, concurrency, and queueing.

    Example usage:

    • Deploy a Swarm service for Jenkins
    • Define a Freestyle job for each scheduled task
    • Add a CRON entry for the schedule
    • Install the OpenFaaS CLI
    • Run faas-cli login --gateway <gateway-url>
    • Invoke the function
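    The login and invoke steps above could be scripted as the job's shell build step, for example (the gateway URL and user name here are placeholders for your own values):

    ```shell
    # Log in once, then invoke the function; adjust the gateway URL and credentials.
    echo -n "$PASSWORD" | faas-cli login --gateway http://gateway:8080 --username admin --password-stdin
    echo "verbose" | faas-cli invoke nodeinfo --gateway http://gateway:8080
    ```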

    Here is an example of how to do this with a Pipeline job.