Single Consul Datacenter in Multiple Kubernetes Clusters

    This page describes how to deploy a single Consul datacenter in multiple Kubernetes clusters, with both servers and clients running in one cluster and only clients running in the rest of the clusters. In this example, we will use two Kubernetes clusters, but the same approach can be extended to more than two.

    Note: This deployment topology requires that your Kubernetes clusters have a flat network for both pods and nodes, so that pods or nodes from one cluster can connect to pods or nodes in another. If a flat network is not available across all Kubernetes clusters, follow the instructions for using Admin Partitions, which is a Consul Enterprise feature.

    The Helm release name must be unique for each Kubernetes cluster. The Helm chart uses the Helm release name as a prefix for the ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are identical, subsequent Consul on Kubernetes clusters overwrite existing ACL resources and cause the clusters to fail.

    Before you proceed with installation, prepare the Helm release names as environment variables for both the server and client installs to use.

    # The server release name is used as the prefix for the secrets referenced
    # later in this guide (cluster1-consul-...).
    $ export HELM_RELEASE_SERVER=cluster1
    $ export HELM_RELEASE_CLIENT=client
    ...
    $ export HELM_RELEASE_CLIENT2=client2

    First, we will deploy the Consul servers with Consul clients in the first cluster. For that, we will use the following Helm configuration:

    cluster1-config.yaml

    global:
      datacenter: dc1
      tls:
        enabled: true
        enableAutoEncrypt: true
      acls:
        manageSystemACLs: true
      gossipEncryption:
        secretName: consul-gossip-encryption-key
        secretKey: key
    connectInject:
      enabled: true
    controller:
      enabled: true
    ui:
      service:
        type: NodePort

    Note that we are deploying in a secure configuration, with gossip encryption, TLS for all components, and ACLs. We are enabling Consul service mesh and the controller for custom resources (CRDs) so that we can later use them to verify that our services can connect with each other across clusters.

    We're also setting the UI service's type to NodePort. This is needed so that we can connect to the servers from another cluster without using the servers' pod IPs, which are likely to change.

    To deploy, we first need to generate the gossip encryption key and save it as a Kubernetes secret.

    $ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen)

    Now we can install our Consul cluster with Helm:

    $ helm install ${HELM_RELEASE_SERVER} --values cluster1-config.yaml hashicorp/consul
    Once the installation finishes, export the credentials created during the install (the gossip encryption key, the server CA certificate, and the ACL bootstrap token) so that we can apply them to the second cluster:

    $ kubectl get secret consul-gossip-encryption-key ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml

    Note: If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster.

    Now we can switch to the second Kubernetes cluster where we will deploy only the Consul clients that will join the first Consul cluster.

    First, we need to apply the credentials we extracted from the first cluster to the second cluster:

    $ kubectl apply --filename cluster1-credentials.yaml

    To deploy in the second cluster, we will use the following Helm configuration:

    cluster2-config.yaml

    global:
      enabled: false
      datacenter: dc1
      acls:
        manageSystemACLs: true
        bootstrapToken:
          secretName: cluster1-consul-bootstrap-acl-token
          secretKey: token
      gossipEncryption:
        secretName: consul-gossip-encryption-key
        secretKey: key
      tls:
        enabled: true
        enableAutoEncrypt: true
        caCert:
          secretName: cluster1-consul-ca-cert
          secretKey: tls.crt
    externalServers:
      enabled: true
      # This should be any node IP of the first k8s cluster
      hosts: ["10.0.0.4"]
      # The node port of the UI's NodePort service
      httpsPort: 31557
      tlsServerName: server.dc1.consul
      # The address of the kube API server of this Kubernetes cluster
      k8sAuthMethodHost: https://kubernetes.example.com:443
    client:
      enabled: true
      join: ["provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector=\"app=consul,component=server\""]
      extraVolumes:
        - type: secret
          name: cluster1-kubeconfig
          load: false
    connectInject:
      enabled: true

    Note that we're referencing secrets from the first cluster in the ACL, gossip encryption, and TLS configuration.

    Next, we need to set up the externalServers configuration.

    The externalServers.hosts and externalServers.httpsPort refer to the IP and port of the UI’s NodePort service deployed in the first cluster. Set the externalServers.hosts to any Node IP of the first cluster, which you can see by running kubectl get nodes --output wide. Set externalServers.httpsPort to the nodePort of the cluster1-consul-ui service. In our example, the port is 31557.

    $ kubectl get service cluster1-consul-ui --context cluster1
    NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
    cluster1-consul-ui   NodePort   10.0.240.80   <none>        443:31557/TCP   40h

    We set externalServers.tlsServerName to server.dc1.consul. This is the DNS SAN (Subject Alternative Name) that is present in the Consul server's certificate. We need to set it because we're connecting to the Consul servers over the node IP, but that IP isn't present in the server's certificate. To make sure that hostname verification succeeds during the TLS handshake, we need to set the TLS server name to a DNS name that is present in the certificate.
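
    If you want to double-check which names are present in the server certificate, you can inspect its SANs. This is an optional check, and it assumes the Helm chart stored the generated server certificate in a secret named cluster1-consul-server-cert in the first cluster:

    $ kubectl get secret cluster1-consul-server-cert --context cluster1 \
        --output jsonpath='{.data.tls\.crt}' | base64 --decode | \
        openssl x509 -noout -text | grep -A 1 'Subject Alternative Name'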

    Next, we need to set externalServers.k8sAuthMethodHost to the address of the second cluster's Kubernetes API server. This should be an address that is reachable from the first cluster, so it cannot be the internal DNS name available in each Kubernetes cluster. Consul needs it so that consul login with the Kubernetes auth method works from the second cluster. More specifically, the Consul server needs to validate the Kubernetes service account whenever consul login is called, and to validate service accounts from the second cluster it must be able to reach the Kubernetes API in that cluster. The easiest way to get this address is to read it from your kubeconfig by running kubectl config view and grabbing the value of cluster.server for the second cluster.
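
    For example (output abridged; the cluster name cluster2 and the server address below are placeholders for the entries in your own kubeconfig):

    $ kubectl config view
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: DATA+OMITTED
        server: https://kubernetes.example.com:443
      name: cluster2
    ...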

    Note: The kubeconfig you're providing to the client should have minimal permissions. The cloud auto-join provider only needs permission to read pods. Please see the Kubernetes cloud auto-join documentation for more details.
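
    As a rough sketch of what "minimal permissions" can look like, the service account behind that kubeconfig could be restricted to reading pods. The names and namespace below are placeholders for illustration, not resources the Consul chart creates:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: consul-auto-join
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: consul-auto-join
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: consul-auto-join
        namespace: default
    roleRef:
      kind: Role
      name: consul-auto-join
      apiGroup: rbac.authorization.k8s.io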

    Now we’re ready to install!

    $ helm install ${HELM_RELEASE_CLIENT} --values cluster2-config.yaml hashicorp/consul
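
    Once the release is installed, you can check that the client pods in the second cluster are running. The label selector below assumes the chart's default labels (the same app=consul labels used in the join configuration above):

    $ kubectl get pods --selector app=consul,component=client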

    When transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the "consul.hashicorp.com/connect-service-upstreams" annotation, as shown in the static-client deployment below.

    Now that we have our Consul datacenter up and running across both Kubernetes clusters, we will deploy two services and verify that they can connect to each other.

    First, we'll deploy the static-server service in the first cluster:

    static-server.yaml

    ---
    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceIntentions
    metadata:
      name: static-server
    spec:
      destination:
        name: static-server
      sources:
        - name: static-client
          action: allow
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: static-server
    spec:
      type: ClusterIP
      selector:
        app: static-server
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: static-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: static-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: static-server
      template:
        metadata:
          name: static-server
          labels:
            app: static-server
          annotations:
            "consul.hashicorp.com/connect-inject": "true"
        spec:
          containers:
            - name: static-server
              image: hashicorp/http-echo:latest
              args:
                - -text="hello world"
                - -listen=:8080
              ports:
                - containerPort: 8080
                  name: http
          serviceAccountName: static-server

    Note that we're defining a ServiceIntentions resource so that our services are allowed to talk to each other.
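
    Assuming the manifest above is saved as static-server.yaml, apply it to the first cluster (the cluster1 context name follows the earlier examples):

    $ kubectl apply --filename static-server.yaml --context cluster1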

    Then we’ll deploy static-client in the second cluster with the following configuration:

    static-client.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: static-client
    spec:
      selector:
        app: static-client
      ports:
        - port: 80
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: static-client
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: static-client
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: static-client
      template:
        metadata:
          name: static-client
          labels:
            app: static-client
          annotations:
            "consul.hashicorp.com/connect-inject": "true"
            "consul.hashicorp.com/connect-service-upstreams": "static-server:1234"
        spec:
          containers:
            - name: static-client
              image: curlimages/curl:latest
              command: [ "/bin/sh", "-c", "--" ]
              args: [ "while true; do sleep 30; done;" ]
          serviceAccountName: static-client

    Once both services are up and running, we can verify that static-client can reach static-server across the clusters by calling its upstream on the local port configured in the annotation:

    $ kubectl exec deploy/static-client -- curl --silent localhost:1234
    "hello world"