Deployment guide for Kubernetes

    Use this guide to deploy OpenFaaS to upstream Kubernetes 1.11 or higher.

    Before deploying OpenFaaS, you should provision a Kubernetes cluster. There are many options for deploying a local or remote cluster, some of which are listed below.

    Once you have a cluster, you can follow the detailed instructions on this page.

    • Install OpenFaaS CLI
    • Deploy OpenFaaS from static YAML, via helm, or via new YAML files generated with helm template
    • Find your OpenFaaS gateway address
    • Log in, deploy a function, and try out the UI.
    Local options

    • k3s - a light-weight Kubernetes distribution ideal for edge and development - compatible with Raspberry Pi & ARM64 (Packet, AWS Graviton)
    • k3d - makes k3s available on any computer where Docker is also running
    • microk8s - a Kubernetes distribution, specifically for Ubuntu users
    • minikube - a popular, but heavy-weight option that creates a Linux virtual machine on your computer using VirtualBox or similar
    • Docker for Mac/Windows - Docker's Desktop edition has an option to run a local Kubernetes cluster

    Remote/managed options

    You can run k3s and k3d on a single node Virtual Machine so that you don't have to run Kubernetes on your own computer.

    Kubernetes services/engines:

    A separate guide is available for configuring minikube.

    Tip

    Are you using Google Kubernetes Engine (GKE)? You'll need to create an RBAC role with the following command:
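    The command itself was lost from this page. The usual form, assuming the gcloud CLI is configured with your Google account, grants your user cluster-admin so that OpenFaaS can create its RBAC resources:

```shell
# Bind the built-in cluster-admin role to your GKE user account
kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value core/account)"
```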

    Also, ensure any default load-balancer timeouts within GKE are understood and configured appropriately.

    You can install the OpenFaaS CLI using brew or a curl script.

    • via brew:

      brew install faas-cli

    • via curl:

      $ curl -sL https://cli.openfaas.com | sudo sh

    If you run the script as a normal non-root user then the script will be downloaded to the current folder.

    Pick k3sup, helm or plain YAML files

    It is recommended that new users install OpenFaaS with k3sup, which determines whether you are running on x86_64, armhf, or ARM64. Experienced and advanced users can use the helm chart, with or without tiller, to tailor their installation of OpenFaaS to suit their needs.

    A. Deploy with k3sup (fastest option)

    k3sup is a CLI tool that can install helm charts without using tiller, a component that is considered to be insecure by some companies.

    The openfaas app in k3sup will install OpenFaaS to a regular cloud computer, your laptop, a Raspberry Pi, or a 64-bit ARM machine.

    • Get k3sup

      # For MacOS / Linux:
      curl -SLsf https://get.k3sup.dev/ | sudo sh

      # For Windows (using Git Bash)
      curl -SLsf https://get.k3sup.dev/ | sh

    • Install the OpenFaaS app

    If you're using a managed cloud Kubernetes service which supplies LoadBalancers, then run the following:

      k3sup app install openfaas --load-balancer

    If you're using a local Kubernetes cluster or a VM, then run:

      k3sup app install openfaas

    After the installation you'll receive a command to retrieve your OpenFaaS URL and password.

    For cloud users, run kubectl get -n openfaas svc/gateway-external and look for EXTERNAL-IP. This is your gateway address.

    B. Deploy with Helm (for production, most configurable)

    A Helm chart is provided in the faas-netes repository. Follow the link below then come back to this page.

    Note

    Some users have security concerns about using helm charts because of the tiller component. If you fall into this category, don't worry: you can still benefit from the helm chart without using tiller.

    See the Chart readme for how to generate your own static YAML files using helm template.
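    As a rough sketch only - the exact chart path, flags, and values are documented in the Chart readme and vary by chart and helm version - rendering the chart to static YAML without tiller looks something like:

```shell
# Render the OpenFaaS chart to plain YAML locally (no tiller needed)
helm template faas-netes/chart/openfaas \
  --name openfaas \
  --namespace openfaas \
  > openfaas.yaml

# Apply the generated manifests
kubectl apply -f openfaas.yaml
```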

    C. Deploy using kubectl and plain YAML (for development-only)

    You can run these commands on your computer if you have kubectl and a KUBECONFIG file available.

    • Clone the repository

      $ git clone https://github.com/openfaas/faas-netes
    • Deploy the whole stack

    This command is split into two parts so that the OpenFaaS namespaces are always created first:

    • openfaas - for OpenFaaS services
    • openfaas-fn - for functions
      $ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

    Create a password for the gateway:
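    The commands for this step were lost from the page. A typical approach - the secret name basic-auth and its keys below are what the gateway expects by default - is to generate a random password and store it as a Kubernetes secret:

```shell
# Generate a random password and keep a copy in $PASSWORD for logging in later
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d ' ' -f 1)

# Store it as the basic-auth secret read by the OpenFaaS gateway
kubectl -n openfaas create secret generic basic-auth \
  --from-literal=basic-auth-user=admin \
  --from-literal=basic-auth-password="$PASSWORD"
```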

    Now deploy OpenFaaS:

      $ cd faas-netes && \
        kubectl apply -f ./yaml

    If you're using a remote cluster, or you're not sure, then you can also port-forward the gateway to your machine for this step.

      kubectl port-forward svc/gateway -n openfaas 31112:8080 &

    Now log in:

      export OPENFAAS_URL=http://127.0.0.1:31112

      echo -n $PASSWORD | faas-cli login --password-stdin

    Note

    For deploying on a cloud that supports Kubernetes LoadBalancers you may also want to apply the configuration in: cloud/lb.yml.

    Notes for Raspberry Pi & 32-bit ARM (armhf)

    Use k3sup to install OpenFaaS; it will determine the correct files to use for your device.

    For a complete tutorial on setting up OpenFaaS for Raspberry Pi / 32-bit ARM using Kubernetes, see Alex Ellis' blog post on the subject.

    When creating new functions please use the templates with a suffix of -armhf such as go-armhf and python-armhf to ensure you get the correct versions for your devices.
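    For example, scaffolding a function from the armhf Go template (the function name below is illustrative):

```shell
# Create a new function using the 32-bit ARM variant of the Go template
faas-cli new --lang go-armhf my-function
```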

    64-bit ARM and AWS Graviton

    For 64-bit ARM servers and devices such as ODroid-C2, Rock64, AWS Graviton, and the servers provided by Packet.net, use k3sup to install OpenFaaS; it will determine the correct files to use.

    When creating new functions please use the templates with a suffix of -arm64 such as node-arm64 to ensure you get the correct versions for your devices.

    Learn the OpenFaaS fundamentals

    The community has built a workshop with 12 self-paced hands-on labs. Use the workshop to begin learning OpenFaaS at your own pace.

    A walk-through video shows auto-scaling in action and the Prometheus UI.

    Functions can be deployed using the REST API, UI, CLI, or Function Store. Continue below to deploy your first sample function.

    Deploy functions from the OpenFaaS Function Store

    You can find many different sample functions from the community through the OpenFaaS Function Store. The Function Store is built into the UI portal and also available via the CLI.

      You may need to pass the --gateway / -g flag to each faas-cli command, or alternatively you can set an environment variable such as OPENFAAS_URL.
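      For example, pointing the CLI at a port-forwarded gateway (the address below matches the port-forward step earlier; substitute your own gateway URL):

```shell
# faas-cli reads OPENFAAS_URL instead of requiring --gateway on every command
export OPENFAAS_URL=http://127.0.0.1:31112
```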

      To search the store:

      $ faas-cli store list

      To deploy figlet:

      $ faas-cli store deploy figlet

      Now find the function deployed in the cluster and invoke it.

      $ faas-cli list
      $ echo "OpenFaaS!" | faas-cli invoke figlet

      You can also access the Function Store from the Portal UI and find a range of functions covering everything from machine-learning to network tools.

      Build your first Python function

      See the tutorial: Your first serverless Python function with OpenFaaS.

      Use the UI

      Access your gateway using the URL from the steps above.

      Click "New Function" and fill it out with the following:

      • Test the function

      Your function will appear after a few seconds and you can click "Invoke".

      The function can also be invoked through the CLI:
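      For example, invoking the figlet function deployed earlier from the Function Store:

```shell
# Pipe a payload to the function via the gateway
echo "OpenFaaS!" | faas-cli invoke figlet
```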

      Troubleshooting

      If you are running into any issues please check out the troubleshooting guide and search the documentation / past issues before raising an issue.

      Deploy with TLS

      To enable TLS while using Helm, try one of the following references:

      Use a private registry with Kubernetes

      If you are using a hosted private Docker registry, please visit the Kubernetes documentation to check how to configure it.

      If you try to deploy using faas-cli deploy it will fail because the Kubernetes kubelet component will not have credentials to authorize the docker image pull request.

      Once you have pushed an image to a private registry using faas-cli push follow the instructions below to either create a pull secret that can be referenced by each function which needs it, or create a secret for the ServiceAccount in the openfaas-fn namespace so that any functions which need it can make use of it.

      If you need to troubleshoot the use of a private image then see the Kubernetes section of the troubleshooting guide.

      Option 1 - use an ad-hoc image pull secret

      To deploy your function(s) first you need to create an Image Pull Secret with the commands below.

      Set up some environment variables:

      export DOCKER_USERNAME=<your_docker_username>
      export DOCKER_PASSWORD=<your_docker_password>
      export DOCKER_EMAIL=<your_docker_email>

      Then run this command to create the secret:

      $ kubectl create secret docker-registry dockerhub \
        -n openfaas-fn \
        --docker-username=$DOCKER_USERNAME \
        --docker-password=$DOCKER_PASSWORD \
        --docker-email=$DOCKER_EMAIL

      The secret must be created in the openfaas-fn namespace, or the equivalent if you have customised this.

      Create a sample function with a --prefix variable:

      faas-cli new --lang go private-fn --prefix=registry:port/repo
      mv private-fn.yml stack.yml

      Update the stack.yml file and add a reference to the new secret:

      secrets:
      - dockerhub

      Now deploy the function using faas-cli up.

      Option 2 - Link an image pull secret to the namespace's ServiceAccount

      Rather than specifying the pull secret for each function that needs it you can bind the secret to the namespace's ServiceAccount. With this option you do not need to update the secrets: section of the stack.yml file.

      Create the image pull secret in the openfaas-fn namespace (or equivalent):

      $ kubectl create secret docker-registry my-private-repo \
        --docker-username=$DOCKER_USERNAME \
        --docker-password=$DOCKER_PASSWORD \
        --docker-email=$DOCKER_EMAIL \
        --namespace openfaas-fn

      If needed, pass in the --docker-server address.

      Use the following command to edit the default ServiceAccount's configuration:

      $ kubectl edit serviceaccount default -n openfaas-fn

      At the bottom of the manifest add:

      imagePullSecrets:
      - name: my-private-repo

      Save the changes in the editor and this configuration will be applied.

      The OpenFaaS controller will now deploy functions with images in private repositories without having to specify the secret in the stack.yml file.

      Set a custom ImagePullPolicy

      Kubernetes allows you to control the conditions for when the Docker images for your functions are pulled onto a node. This is configured through an imagePullPolicy.

      There are three options:

      • Always - pull the Docker image from the registry every time a deployment changes
      • IfNotPresent - only pull the image if it does not exist in the local registry cache
      • Never - never attempt to pull an image

      By default, deployed functions will use an imagePullPolicy of Always, which ensures functions using static image tags (e.g. "latest" tags) are refreshed during an update. This behavior is configurable in faas-netes via the image_pull_policy environment variable.

      If you're using helm you can pass a configuration flag:
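      The flag itself was lost from this page. The value name below follows the faas-netes chart's convention, but check the values file shipped with your chart version before relying on it:

```shell
# Override the function imagePullPolicy when installing/upgrading via helm
helm upgrade openfaas openfaas/openfaas \
  --install \
  --namespace openfaas \
  --set faasnetes.imagePullPolicy=IfNotPresent
```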

      If you're using the plain YAML files then edit gateway-dep.yml and set the following for faas-netes:

      - name: image_pull_policy
        value: "IfNotPresent"

      Notes on picking an "imagePullPolicy"

      As mentioned above, the default value is Always. Every time a function is deployed or is scaled up, Kubernetes will pull a potentially updated copy of the image from the registry. If you are using static image tags like latest, this is necessary.

      When set to IfNotPresent, function deployments may not be updated when using static image tags like latest. IfNotPresent is particularly useful when developing locally with minikube. In this case, you can set your local environment to use minikube's Docker daemon so that faas-cli build builds directly into the image library used by minikube. faas-cli push is unnecessary in this workflow - use faas-cli build then faas-cli deploy.
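      Pointing your shell at minikube's Docker daemon is typically done with:

```shell
# Export DOCKER_HOST and related variables so builds land in minikube's daemon
eval $(minikube docker-env)
```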

      When set to Never, only local (or pulled) images will work. This is useful if you want to tightly control which images are available and run in your Kubernetes cluster.