Installing Gateways

    Some of Istio’s built-in configuration profiles deploy gateways during installation. For example, a call to istioctl install with the default profile will deploy an ingress gateway along with the control plane. Although fine for evaluation and simple use cases, this couples the gateway to the control plane, making management and upgrades more complicated. For production Istio deployments, it is highly recommended to decouple the gateway from the control plane to allow independent operation.

    Follow this guide to separately deploy and manage one or more gateways in a production installation of Istio.

    This guide requires the Istio control plane to be installed before proceeding.

    You can use the minimal profile, for example istioctl install --set profile=minimal, to prevent any gateways from being deployed during installation.

    Using the same mechanisms as sidecar injection, the Envoy proxy configuration for gateways can similarly be auto-injected.

    Using auto-injection for gateway deployments is recommended as it gives developers full control over the gateway deployment, while also simplifying operations. When a new upgrade is available, or a configuration has changed, gateway pods can be updated by simply restarting them. This makes the experience of operating a gateway deployment the same as operating sidecars.

    To support users with existing deployment tools, Istio provides a few different ways to deploy a gateway. Each method will produce the same result. Choose the method you are most familiar with.

    As a security best practice, it is recommended to deploy the gateway in a different namespace from the control plane.

    All methods listed below rely on injection to populate additional pod settings at runtime. In order to support this, the namespace the gateway is deployed in must not have the istio-injection=disabled label. If it does, you will see pods failing to start up while attempting to pull the auto image, which is a placeholder that is intended to be replaced when a pod is created.

    First, set up an IstioOperator configuration file, called ingress.yaml here:
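    A minimal sketch of what such a configuration might look like, assuming the empty profile (so no control plane components are installed) and a single ingress gateway named istio-ingressgateway in the istio-ingress namespace:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: ingress
    spec:
      # The empty profile installs nothing by default; only the component listed below is deployed.
      profile: empty
      components:
        ingressGateways:
        - name: istio-ingressgateway
          namespace: istio-ingress
          enabled: true
          label:
            # A unique label so that Gateway resources can select this workload
            istio: ingressgateway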

    Then install using standard istioctl commands:

    $ kubectl create namespace istio-ingress
    $ istioctl install -f ingress.yaml

    Install using standard helm commands:
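    For example, assuming the Istio Helm repository has already been added under the name istio, and using an illustrative release name of istio-ingressgateway:

    $ kubectl create namespace istio-ingress
    $ helm install istio-ingressgateway istio/gateway -n istio-ingress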

    To see possible supported configuration values, run helm show values istio/gateway. The Helm repository contains additional information on usage.

    First, set up the Kubernetes configuration, called ingress.yaml here:

    apiVersion: v1
    kind: Service
    metadata:
      name: istio-ingressgateway
      namespace: istio-ingress
    spec:
      type: LoadBalancer
      selector:
        istio: ingressgateway
      ports:
      - port: 80
        name: http
      - port: 443
        name: https
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: istio-ingressgateway
      namespace: istio-ingress
    spec:
      selector:
        matchLabels:
          istio: ingressgateway
      template:
        metadata:
          annotations:
            # Select the gateway injection template (rather than the default sidecar template)
            inject.istio.io/templates: gateway
          labels:
            # Set a unique label for the gateway. This is required to ensure Gateways can select this workload
            istio: ingressgateway
            # Enable gateway injection. If connecting to a revisioned control plane, replace with "istio.io/rev: revision-name"
            sidecar.istio.io/inject: "true"
        spec:
          containers:
          - name: istio-proxy
            image: auto # The image will automatically update each time the pod starts.
    ---
    # Set up roles to allow reading credentials for TLS
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: istio-ingressgateway-sds
      namespace: istio-ingress
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: istio-ingressgateway-sds
      namespace: istio-ingress
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: istio-ingressgateway-sds
    subjects:
    - kind: ServiceAccount
      name: default

    The sidecar.istio.io/inject label on the pod is used in this example to enable injection. Just like application sidecar injection, this can instead be controlled at the namespace level. See Controlling the injection policy for more information.
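    A sketch of the namespace-level alternative, assuming the istio-ingress namespace and a default (non-revisioned) control plane:

    # Enable injection for all pods in the namespace instead of per-pod labels
    $ kubectl label namespace istio-ingress istio-injection=enabled

    With the namespace labeled this way, the sidecar.istio.io/inject: "true" pod label shown above should no longer be needed, although the inject.istio.io/templates: gateway annotation is still required to select the gateway template.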

    Next, apply it to the cluster:
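    For example, using the namespace and file name from the configuration above:

    $ kubectl create namespace istio-ingress
    $ kubectl apply -f ingress.yaml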

    The following describes how to manage gateways after installation. For more information on their usage, follow the Ingress and Secure Gateways tasks.

    The labels on a gateway deployment’s pods are used by Gateway configuration resources, so it’s important that your Gateway selector matches these labels.

    For example, in the above deployments, the istio=ingressgateway label is set on the gateway pods. To apply a Gateway to these deployments, you need to select the same label:

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: gateway
    spec:
      selector:
        istio: ingressgateway
      ...

    Depending on your mesh configuration and use cases, you may wish to deploy gateways in different ways. A few different gateway deployment patterns are shown below. Note that more than one of these patterns can be used within the same cluster.

    Shared gateway

    In this model, a single centralized gateway is used by many applications, possibly across many namespaces. Gateway(s) in the ingress namespace delegate ownership of routes to application namespaces, but retain control over TLS configuration.

    Figure: Shared gateway

    This model works well when you have many applications you want to expose externally, as they are able to use shared infrastructure. It also works well in use cases that have the same domain or TLS certificates shared by many applications.

    Dedicated application gateway

    In this model, an application namespace has its own dedicated gateway installation. This allows giving full control and ownership to a single namespace. This level of isolation can be helpful for critical applications that have strict performance or security requirements.

    Figure: Dedicated application gateway

    Unless there is another load balancer in front of Istio, this typically means that each application will have its own IP address, which may complicate DNS configurations.

    To pick up changes to the gateway configuration, the pods can simply be restarted, using commands such as kubectl rollout restart deployment.
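    For example, assuming the deployment and namespace names used earlier in this guide:

    $ kubectl rollout restart deployment istio-ingressgateway -n istio-ingress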

    If you would like to change the control plane revision in use by the gateway, you can set the istio.io/rev label on the gateway Deployment’s pod template, which will also trigger a rolling restart.
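    A sketch of this, assuming a control plane revision named canary and the deployment from the Kubernetes YAML example above (the sidecar.istio.io/inject label is dropped in favor of the revision label, per the comment in that manifest):

    # Remove the inject label and set the revision label on the pod template;
    # the label change triggers a rolling restart of the gateway pods.
    $ kubectl -n istio-ingress patch deployment istio-ingressgateway --type merge \
        -p '{"spec":{"template":{"metadata":{"labels":{"sidecar.istio.io/inject":null,"istio.io/rev":"canary"}}}}}'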

    Figure: In place upgrade in progress

    This upgrade method depends on control plane revisions, and therefore can only be used in conjunction with a revision-based (canary) control plane upgrade.

    If you would like to more slowly control the rollout of a new control plane revision, you can run multiple versions of a gateway deployment. For example, if you want to roll out a new revision, canary, create a copy of your gateway deployment with the istio.io/rev=canary label set:
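    A minimal sketch of such a copy, based on the Kubernetes YAML deployment shown earlier (the deployment name and the injection label are the only intended changes):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      # A new name, so this deployment coexists with the original one
      name: istio-ingressgateway-canary
      namespace: istio-ingress
    spec:
      selector:
        matchLabels:
          istio: ingressgateway
      template:
        metadata:
          annotations:
            inject.istio.io/templates: gateway
          labels:
            # Same label as the original deployment, so the existing Service selects both versions
            istio: ingressgateway
            # Inject from the "canary" control plane revision instead of the default
            istio.io/rev: canary
        spec:
          containers:
          - name: istio-proxy
            image: auto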

    When this deployment is created, you will then have two versions of the gateway, both selected by the same Service:

    $ kubectl get endpoints -n istio-ingress -o "custom-columns=NAME:.metadata.name,PODS:.subsets[*].addresses[*].targetRef.name"
    NAME                   PODS
    istio-ingressgateway   istio-ingressgateway-...,istio-ingressgateway-canary-...

    Figure: Canary upgrade in progress


    Unlike application services deployed inside the mesh, you cannot use Istio traffic shifting to distribute the traffic between the gateway versions, because their traffic is coming directly from external clients that Istio does not control. Instead, you can control the distribution of traffic by the number of replicas of each deployment. If you use another load balancer in front of Istio, you may also use that to control the traffic distribution.
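    For example, a sketch of sending roughly a quarter of the traffic to the canary version by replica count (the replica numbers are illustrative and assume the external load balancer spreads connections evenly across pods):

    $ kubectl -n istio-ingress scale deployment istio-ingressgateway --replicas=3
    $ kubectl -n istio-ingress scale deployment istio-ingressgateway-canary --replicas=1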

    Because other installation methods bundle the gateway Service, which controls its external IP address, with the gateway Deployment, only the Kubernetes YAML method is supported for this upgrade method.

    A variant of the canary upgrade approach is to shift the traffic between the versions using a high level construct outside Istio, such as an external load balancer or DNS.

    Figure: Canary upgrade in progress with external traffic shifting