Configuring the Knative Serving Operator custom resource

    Cluster administrators can install a specific version of Knative Serving by using the spec.version field.

    For example, if you want to install Knative Serving v0.23.0, you can apply the following KnativeServing custom resource:
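    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      version: "0.23.0"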

    If spec.version is not specified, the Knative Operator installs the latest available version of Knative Serving. If users specify an invalid or unavailable version, the Knative Operator will do nothing. The Knative Operator always includes the latest 3 minor release versions. For example, if the current version of the Knative Operator is v0.24.0, the earliest version of Knative Serving available through the Operator is v0.22.0.

    If Knative Serving is already managed by the Operator, updating the spec.version field in the KnativeServing resource enables upgrading or downgrading the Knative Serving version, without needing to change the Operator.

    Important

    The Knative Operator only permits upgrades or downgrades by one minor release version at a time. For example, if the current Knative Serving deployment is version v0.22.0, you must upgrade to v0.23.0 before upgrading to v0.24.0.

    Knative Serving configuration by ConfigMap

    The Operator manages the Knative Serving installation and overwrites any manual updates to the ConfigMaps that configure Knative Serving. The KnativeServing custom resource (CR) is therefore the place to set values for these ConfigMaps. Knative Serving has multiple ConfigMaps, all named with the prefix config-. The spec.config section of the KnativeServing CR has one <name> entry for each ConfigMap named config-<name>; its value is used as the ConfigMap data.

    For example, suppose the content of the ConfigMap config-domain is:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-domain
      namespace: knative-serving
    data:
      example.org: |
        selector:
          app: prod
      example.com: ""

    To apply this configuration through the Operator, specify the config-domain ConfigMap in the operator CR:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      config:
        domain:
          example.org: |
            selector:
              app: prod
          example.com: ""

    You can apply values to multiple ConfigMaps. This example sets stable-window to 60s in config-autoscaler as well as specifying config-domain:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      config:
        domain:
          example.org: |
            selector:
              app: prod
          example.com: ""
        autoscaler:
          stable-window: "60s"

    All of these ConfigMaps are created in the same namespace as the operator CR, which you can use as the single entry point to edit all of them.

    Private repository and private secrets

    You can use the spec.registry section of the operator CR to change the image references to point to a private registry or to specify imagePullSecrets:

    • default: this field defines an image reference template for all Knative images. The format is example-registry.io/custom/path/${NAME}:{CUSTOM-TAG}. If you use the same tag for all your images, the only difference is the image name. ${NAME} is a pre-defined variable in the operator corresponding to the container name. If you name the images in your private repo to align with the container names (activator, autoscaler, controller, webhook, autoscaler-hpa, net-istio-controller, and queue-proxy), the default argument should be sufficient.

    • override: a map from container name to the full registry location. This section is only needed when the registry images do not match the common naming format. For containers whose name matches a key, the value is used in preference to the image name calculated from default. If a container’s name does not match a key in override, the template in default is used.

    • imagePullSecrets: a list of Secret names used when pulling Knative container images. The Secrets must be created in the same namespace as the Knative Serving Deployments. See deploying images from a private container registry for configuration details.

    This example shows how to define custom image links in the CR using the simplified format docker.io/knative-images/${NAME}:{CUSTOM-TAG}.

    In the following example:

    • the custom tag v0.13.0 is used for all images
    • all image links are accessible without using secrets
    • images are pushed as docker.io/knative-images/${NAME}:{CUSTOM-TAG}

    First, make sure your images are pushed to docker.io/knative-images/<container-name>:v0.13.0 for each of the container names listed above. Then define your operator CR with the following content:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      registry:
        default: docker.io/knative-images/${NAME}:v0.13.0

    Download images individually without secrets

    If your custom image links cannot be expressed in a uniform format by the default template, you will need to include each link individually in the CR.

    For example, given the following images:

    Container               Docker Image
    activator               docker.io/knative-images-repo1/activator:v0.13.0
    autoscaler              docker.io/knative-images-repo2/autoscaler:v0.13.0
    controller              docker.io/knative-images-repo3/controller:v0.13.0
    webhook                 docker.io/knative-images-repo4/webhook:v0.13.0
    autoscaler-hpa          docker.io/knative-images-repo5/autoscaler-hpa:v0.13.0
    net-istio-controller    docker.io/knative-images-repo6/prefix-net-istio-controller:v0.13.0
    net-istio-webhook       docker.io/knative-images-repo6/net-istio-webhook:v0.13.0
    queue-proxy             docker.io/knative-images-repo7/queue-proxy-suffix:v0.13.0

    The Operator CR should be modified to include the full list:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      registry:
        override:
          activator: docker.io/knative-images-repo1/activator:v0.13.0
          autoscaler: docker.io/knative-images-repo2/autoscaler:v0.13.0
          controller: docker.io/knative-images-repo3/controller:v0.13.0
          webhook: docker.io/knative-images-repo4/webhook:v0.13.0
          autoscaler-hpa: docker.io/knative-images-repo5/autoscaler-hpa:v0.13.0
          net-istio-controller: docker.io/knative-images-repo6/prefix-net-istio-controller:v0.13.0
          net-istio-webhook/webhook: docker.io/knative-images-repo6/net-istio-webhook:v0.13.0
          queue-proxy: docker.io/knative-images-repo7/queue-proxy-suffix:v0.13.0

    Note

    If the container name is not unique across all Deployments, DaemonSets, and Jobs, you can prefix the container name with the parent deployment name and a slash. For example, net-istio-webhook/webhook.

    Download images with secrets

    If your image repository requires private secrets for access, include the imagePullSecrets attribute.

    This example uses a secret named regcred. You must create your own private secrets if they are required:
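    A minimal sketch of such a Secret, assuming a docker-registry credential in the knative-serving namespace (the base64-encoded .dockerconfigjson payload is a placeholder):

    apiVersion: v1
    kind: Secret
    metadata:
      name: regcred
      namespace: knative-serving
    type: kubernetes.io/dockerconfigjson
    data:
      .dockerconfigjson: <base64-encoded ~/.docker/config.json>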

    After you create this secret, edit the Operator CR by appending the following content:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      registry:
        ...
        imagePullSecrets:
          - name: regcred

    The imagePullSecrets field expects a list of secrets, so you can add multiple secrets to access the images.
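    For example, a sketch with two Secrets (the second name, regcred-2, is illustrative):

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      registry:
        ...
        imagePullSecrets:
          - name: regcred
          - name: regcred-2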

    To enable tag-to-digest resolution, the Knative Serving controller needs to access the container registry. To allow the controller to trust a self-signed registry certificate, you can use the Operator to specify the certificate using a ConfigMap or Secret.

    Specify the following fields in spec.controller-custom-certs to select a custom registry certificate:

    • name: the name of the ConfigMap or Secret.
    • type: either the string “ConfigMap” or “Secret”.
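    For illustration, a minimal sketch of such a ConfigMap; the key name cert.pem, the certificate content, and the knative-serving namespace are assumptions:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test-cert
      namespace: knative-serving
    data:
      cert.pem: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----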

    If you create a ConfigMap named test-cert containing the certificate, change your CR:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      controller-custom-certs:
        name: test-cert
        type: ConfigMap

    Replace the default istio-ingressgateway service

    To set up a custom ingress gateway, follow Step 1: Create Gateway Service and Deployment Instance.

    Step 2: Update the Knative gateway

    Update spec.ingress.istio.knative-ingress-gateway to select the labels of the new ingress gateway:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      ingress:
        istio:
          enabled: true
          knative-ingress-gateway:
            selector:
              istio: ingressgateway

    Additionally, you will need to update the Istio ConfigMap:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      ingress:
        istio:
          enabled: true
          knative-ingress-gateway:
            selector:
              istio: ingressgateway
      config:
        istio:
          gateway.knative-serving.knative-ingress-gateway: "custom-ingressgateway.custom-ns.svc.cluster.local"

    The key in spec.config.istio is in the format of gateway.<gateway_namespace>.<gateway_name>.

    Replace the knative-ingress-gateway gateway

    First, create the custom ingress gateway. The ConfigMap key used in Step 2 assumes a Gateway named knative-custom-gateway in the custom-ns namespace.
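    A minimal sketch of such a Gateway, assuming it binds to the default Istio ingress gateway pods; the selector, port, and hosts values are illustrative:

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: knative-custom-gateway
      namespace: custom-ns
    spec:
      selector:
        istio: ingressgateway
      servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
            - "*"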

    Step 2: Update Gateway ConfigMap

    You will need to update the Istio ConfigMap:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      config:
        istio:
          gateway.custom-ns.knative-custom-gateway: "istio-ingressgateway.istio-system.svc.cluster.local"

    Update spec.ingress.istio.knative-local-gateway to select the labels of the new cluster-local ingress gateway:

    Default local gateway name:

    If you use the default gateway, which is named knative-local-gateway, follow the existing guide for using a cluster-local gateway.

    Non-default local gateway name:

    If you create a custom local gateway with a name other than knative-local-gateway, update config.istio and the knative-local-gateway selector:

    This example shows a service and deployment named custom-local-gateway in the namespace istio-system, with the label custom: custom-local-gateway:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      ingress:
        istio:
          enabled: true
          knative-local-gateway:
            selector:
              custom: custom-local-gateway
      config:
        istio:
          local-gateway.knative-serving.knative-local-gateway: "custom-local-gateway.istio-system.svc.cluster.local"

    High availability

    By default, Knative Serving runs a single instance of each controller. The spec.high-availability field allows you to configure the number of replicas for the following leader-elected controllers: controller, autoscaler-hpa, and net-istio-controller. It also configures the HorizontalPodAutoscaler resources for the data plane (the activator).

    The following configuration specifies a replica count of 3 for the controllers and a minimum of 3 activators (which may scale higher if needed):

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      high-availability:
        replicas: 3

    System Resource Settings

    The operator custom resource allows you to configure system resources for the Knative system containers. Requests and limits can be configured for the following containers: activator, autoscaler, controller, webhook, autoscaler-hpa, net-istio-controller and queue-proxy.

    To override resource settings for a specific container, create an entry in the spec.resources list with the container name and the Kubernetes resource settings.

    For example, the following KnativeServing resource configures the activator to request 0.3 CPU and 100MiB of RAM, and sets hard limits of 1 CPU, 250MiB of RAM, and 4GiB of ephemeral storage:
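    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      resources:
        - container: activator
          requests:
            cpu: 300m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 250Mi
            ephemeral-storage: 4Gi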

    To add another container, autoscaler, with the same configuration, change your CR as follows:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      resources:
        - container: activator
          requests:
            cpu: 300m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 250Mi
            ephemeral-storage: 4Gi
        - container: autoscaler
          requests:
            cpu: 300m
            memory: 100Mi
          limits:
            cpu: 1000m
            memory: 250Mi
            ephemeral-storage: 4Gi

    If you would like to override some configurations for a specific deployment, you can use spec.deployments in the CR. Currently replicas, labels, annotations, nodeSelector, tolerations, and affinity are supported.

    The following KnativeServing resource overrides the webhook deployment to have 3 replicas, the label mylabel: foo, and the annotation myannotations: bar, while other system deployments have 2 replicas by using spec.high-availability:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      high-availability:
        replicas: 2
      deployments:
        - name: webhook
          replicas: 3
          labels:
            mylabel: foo
          annotations:
            myannotations: bar

    Note

    The KnativeServing resource label and annotation settings override the webhook’s labels and annotations for both Deployments and Pods.

    Override the nodeSelector

    The following KnativeServing resource overrides the webhook deployment to use the disktype: hdd nodeSelector:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      deployments:
        - name: webhook
          nodeSelector:
            disktype: hdd

    Override the tolerations

    The KnativeServing resource can override tolerations for the Knative Serving deployment resources. For example, to add the following tolerations

    tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"

    to the activator deployment, change your KnativeServing CR as follows:

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      deployments:
        - name: activator
          tolerations:
            - key: "key1"
              operator: "Equal"
              value: "value1"
              effect: "NoSchedule"

    Override the affinity

    The KnativeServing resource can override the affinity, including nodeAffinity, podAffinity, and podAntiAffinity, for the Knative Serving deployment resources. For example, to add the following nodeAffinity

    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
                - key: disktype
                  operator: In
                  values:
                    - ssd
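    to a deployment, change your KnativeServing CR in the same way as the tolerations example above. A minimal sketch, assuming the activator deployment as the target (mirroring the earlier example):

    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      deployments:
        - name: activator
          affinity:
            nodeAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 1
                  preference:
                    matchExpressions:
                      - key: disktype
                        operator: In
                        values:
                          - ssd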