Troubleshooting Multicluster

    The most common, and also the broadest, problem with multi-network installations is that cross-cluster load balancing doesn’t work. Usually this manifests as only seeing responses from the cluster-local instance of a Service:
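    For example, calling helloworld from cluster1 may return responses only from v1 (a sketch, assuming the helloworld and sleep samples from the installation guide are running in the sample namespace):

    $ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
        "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
        -- curl -sS helloworld.sample:5000/hello
    Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
    Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv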

    When following the verification guide, we would expect both v1 and v2 responses, indicating traffic is going to both clusters.

    There are many possible causes to the problem:

    In some environments it may not be apparent that a firewall is blocking traffic between your clusters. It’s possible that ICMP (ping) traffic may succeed, but HTTP and other types of traffic do not. This can appear as a timeout, or in some cases a more confusing error such as:

    upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST

    While Istio provides service discovery capabilities to make it easier, cross-cluster traffic should still succeed, even without Istio, as long as pods in each cluster are on a single network. To rule out issues with TLS/mTLS, you can do a manual traffic test using pods without Istio sidecars.

    In each cluster, create a new namespace for this test. Do not enable sidecar injection:

    $ kubectl create --context="${CTX_CLUSTER1}" namespace uninjected-sample
    $ kubectl create --context="${CTX_CLUSTER2}" namespace uninjected-sample

    Then deploy the same apps used in the verification guide:

    $ kubectl apply --context="${CTX_CLUSTER1}" \
        -f samples/helloworld/helloworld.yaml \
        -l service=helloworld -n uninjected-sample
    $ kubectl apply --context="${CTX_CLUSTER2}" \
        -f samples/helloworld/helloworld.yaml \
        -l service=helloworld -n uninjected-sample
    $ kubectl apply --context="${CTX_CLUSTER1}" \
        -f samples/helloworld/helloworld.yaml \
        -l version=v1 -n uninjected-sample
    $ kubectl apply --context="${CTX_CLUSTER2}" \
        -f samples/helloworld/helloworld.yaml \
        -l version=v2 -n uninjected-sample
    $ kubectl apply --context="${CTX_CLUSTER1}" \
        -f samples/sleep/sleep.yaml -n uninjected-sample
    $ kubectl apply --context="${CTX_CLUSTER2}" \
        -f samples/sleep/sleep.yaml -n uninjected-sample

    Verify that there is a helloworld pod running in cluster2, using the -o wide flag, so we can get the Pod IP:

    $ kubectl --context="${CTX_CLUSTER2}" -n uninjected-sample get pod -o wide
    NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
    helloworld-v2-54df5f84b-z28p5   1/1     Running   0          43s   10.100.0.1   node-2   <none>           <none>
    sleep-557747455f-jdsd8          1/1     Running   0          41s   10.100.0.2   node-2   <none>           <none>

    Take note of the IP column for helloworld. In this case, it is 10.100.0.1:

    $ REMOTE_POD_IP=10.100.0.1

    Next, attempt to send traffic from the sleep pod in cluster1 directly to this Pod IP:
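    One way to do this (a sketch, assuming the sleep sample and helloworld’s default port 5000) is to exec into the sleep pod and curl the remote Pod IP directly:

    $ kubectl exec --context="${CTX_CLUSTER1}" -n uninjected-sample -c sleep \
        "$(kubectl get pod --context="${CTX_CLUSTER1}" -n uninjected-sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
        -- curl -sS $REMOTE_POD_IP:5000/hello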

    If successful, there should be responses only from helloworld-v2. Repeat the steps, but send traffic from cluster2 to cluster1.

    [Locality load balancing](/docs/tasks/traffic-management/locality-load-balancing/failover/#configure-locality-failover) can be used to make clients prefer that traffic go to the nearest destination. If the clusters are in different localities (region/zone), locality load balancing will prefer the local cluster, and this is working as intended. If locality load balancing is disabled, or the clusters are in the same locality, there may be another issue.
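    Note that locality failover only takes effect when an outlier detection policy is configured for the destination. The following is a minimal sketch of such a policy (assuming the helloworld sample in the sample namespace); if nothing similar exists in your mesh, locality-based failover is not expected to kick in:

    $ kubectl apply --context="${CTX_CLUSTER1}" -n sample -f - <<EOF
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: helloworld
    spec:
      host: helloworld.sample.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          localityLbSetting:
            enabled: true
        # Outlier detection is required for locality load balancing to take effect.
        outlierDetection:
          consecutive5xxErrors: 1
          interval: 1s
          baseEjectionTime: 1m
    EOF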

    Cross-cluster traffic, as with intra-cluster traffic, relies on a common root of trust between the proxies. By default, each Istio installation uses its own individually generated root certificate authority. For multi-cluster, we must manually configure a shared root of trust. Follow Plug-in Certs below or read Identity and Trust Models to learn more.

    Plug-in Certs:

    To verify certs are configured correctly, you can compare the root-cert in each cluster:

    $ diff \
        <(kubectl --context="${CTX_CLUSTER1}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}') \
        <(kubectl --context="${CTX_CLUSTER2}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}')

    You can follow the Plug-in Certs guide, ensuring to run the steps for every cluster.

    If you’ve gone through the sections above and are still having issues, then it’s time to dig a little deeper.

    The following steps assume you’re following the verification guide. Before continuing, make sure both helloworld and sleep are deployed in each cluster.

    From each cluster, find the endpoints the sleep service has for helloworld:

    $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld

    Troubleshooting information differs based on the cluster that is the source of traffic:

    If the source of traffic is the primary cluster (cluster1):

    $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
    10.0.0.11:5000    HEALTHY    OK    outbound|5000||helloworld.sample.svc.cluster.local

    Only one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster. Verify that remote secrets are configured properly:

    $ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiCluster=true"

    • If the secret is missing, create it (see the sketch after these checks).
    • If the secret is present:
      • Look at the config in the secret. Make sure the cluster name is used as the data key for the remote kubeconfig.
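    If the remote secret is missing, it is normally created with istioctl create-remote-secret; a sketch, assuming cluster2 is the cluster whose endpoints are not being discovered:

    $ istioctl create-remote-secret \
        --context="${CTX_CLUSTER2}" \
        --name=cluster2 | \
        kubectl apply -f - --context="${CTX_CLUSTER1}"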
    If the source of traffic is the remote cluster (cluster2):

    $ istioctl --context $CTX_CLUSTER2 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld

    If only one endpoint is shown, the control plane cannot read endpoints from the remote cluster. Verify that remote secrets are configured properly.

    • If the secret is missing, create it.
    • If the secret is present and the endpoint is a Pod in the primary cluster:
      • Look at the config in the secret. Make sure the cluster name is used as the data key for the remote kubeconfig.
      • If the secret looks correct, check the logs of istiod for connectivity or permissions issues reaching the remote Kubernetes API server. Log messages may include Failed to add remote cluster from secret along with an error reason.
    • If the secret is present and the endpoint is a Pod in the remote cluster:
      • The proxy is reading configuration from an istiod inside the remote cluster. When a remote cluster has an in-cluster istiod, it is only meant for sidecar injection and CA. You can verify this is the problem by looking for a Service named istiod-remote in the istio-system namespace. If it’s missing, reinstall, making sure values.global.remotePilotAddress is set.
    If endpoints for the remote cluster are shown, check which addresses they contain:

    $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
    10.0.5.11:5000    HEALTHY    OK    outbound|5000||helloworld.sample.svc.cluster.local
    10.0.6.13:5000    HEALTHY    OK    outbound|5000||helloworld.sample.svc.cluster.local

    In multi-network, we expect one of the endpoint IPs to match the remote cluster’s east-west gateway public IP. Seeing multiple Pod IPs indicates one of two things:

    • The address of the gateway for the remote network cannot be determined.
    • The network of either the client or server pod cannot be determined.

    The address of the gateway for the remote network cannot be determined:

    In the remote cluster that cannot be reached, check that the Service has an External IP:

    1. $ kubectl -n istio-system get service -l "istio=eastwestgateway"
    2. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    3. istio-eastwestgateway LoadBalancer 10.8.17.119 <PENDING> 15021:31781/TCP,15443:30498/TCP,15012:30879/TCP,15017:30336/TCP 76m

    If the EXTERNAL-IP is stuck in <PENDING>, the environment may not support LoadBalancer services. In this case, it may be necessary to customize the spec.externalIPs section of the Service to manually give the Gateway an IP reachable from outside the cluster.
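    One way to do this (a sketch, assuming cluster2 is the unreachable cluster and 203.0.113.10 is an address routable from the other clusters) is to patch the gateway Service:

    $ kubectl --context="${CTX_CLUSTER2}" -n istio-system patch service istio-eastwestgateway \
        --type merge --patch '{"spec":{"externalIPs":["203.0.113.10"]}}'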

    If the external IP is present, check that the Service includes a topology.istio.io/network label with the correct value. If that is incorrect, reinstall the gateway and make sure to set the --network flag on the generation script.
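    A quick way to check the label (a sketch, run against the cluster that cannot be reached):

    $ kubectl --context="${CTX_CLUSTER2}" -n istio-system get service istio-eastwestgateway \
        -o jsonpath='{.metadata.labels.topology\.istio\.io/network}'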

    The network of either the client or server pod cannot be determined:

    On the source pod, check the proxy metadata.

    $ kubectl get pod $SLEEP_POD_NAME \
        -o jsonpath="{.spec.containers[*].env[?(@.name=='ISTIO_META_NETWORK')].value}"

    $ kubectl get pod $HELLOWORLD_POD_NAME \
        -o jsonpath="{.spec.containers[*].env[?(@.name=='ISTIO_META_NETWORK')].value}"

    If either of these values isn’t set, or has the wrong value, istiod may treat the source and destination proxies as being on the same network and send network-local endpoints. When these aren’t set, check that values.global.network was set properly during install, or that the injection webhook is configured correctly.

    Istio determines the network of a Pod using the topology.istio.io/network label which is set during injection. For non-injected Pods, Istio relies on the topology.istio.io/network label set on the system namespace in the cluster.

    In each cluster, check the network:

    $ kubectl --context="${CTX_CLUSTER1}" get ns istio-system -ojsonpath='{.metadata.labels.topology\.istio\.io/network}'
    $ kubectl --context="${CTX_CLUSTER2}" get ns istio-system -ojsonpath='{.metadata.labels.topology\.istio\.io/network}'

    If the above command doesn’t output the expected network name, set the label:
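    A sketch of setting it, assuming cluster1 should be on network1 (use the network name expected by your installation):

    $ kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1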