Transparent Proxy
Previously, service mesh users would need to explicitly define upstreams for a service as a local listener on the sidecar proxy, and dial the local listener to reach the appropriate upstream. Users would also have to set intentions to allow specific services to talk to one another. Transparent proxying reduces this duplication by determining upstreams implicitly from Service Intentions. Explicit upstreams are still supported in the proxy service registration on VMs and via the `consul.hashicorp.com/connect-service-upstreams` annotation in Kubernetes.
To support transparent proxying, Consul's CLI now has a `consul connect redirect-traffic` command to redirect traffic through an inbound and an outbound listener on the sidecar. Consul also watches Service Intentions and configures the Envoy proxy with the appropriate upstream IPs. If the default ACL policy is “allow”, then Service Intentions are not required. In Consul on Kubernetes, the traffic redirection command is automatically set up via an init container.
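For instance, a sketch of running the command directly (the proxy ID and UID values are illustrative; `-proxy-uid` identifies the OS user the sidecar runs as, so the proxy's own traffic is not re-redirected):

```shell
# Install iptables rules that redirect the workload's inbound and
# outbound traffic through the sidecar's listeners (run as root)
consul connect redirect-traffic \
  -proxy-id="web-sidecar-proxy" \
  -proxy-uid=5995
```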
- To use transparent proxy on Kubernetes, Consul-helm >= `0.32.0` and Consul-k8s >= `0.26.0` are required, in addition to Consul >= `1.10.0`.
The Kubernetes integration takes care of registering Kubernetes services with Consul, injecting a sidecar proxy, and enabling traffic redirection.
When upgrading from older versions (i.e., Consul-k8s < `0.26.0` or Consul-helm < `0.32.0`) to Consul-k8s >= `0.26.0` and Consul-helm >= `0.32.0`, please make sure to follow the upgrade steps.
Transparent proxy can be enabled in Kubernetes on the whole cluster via the Helm value:
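(A minimal `values.yaml` sketch; other chart values omitted.)

```yaml
# values.yaml for the consul-helm chart
connectInject:
  enabled: true
  transparentProxy:
    # Enable transparent proxy by default for all connect-injected Pods
    defaultEnabled: true
```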
It can also be enabled on a per-service basis via the `consul.hashicorp.com/transparent-proxy` annotation on the Pod for each service, which will override both the Helm value and the namespace label:
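(For example, on the Pod template; the annotation value is a string, per Kubernetes convention.)

```yaml
metadata:
  annotations:
    # Opt this Pod in to transparent proxy, overriding the Helm value
    # and the namespace label
    consul.hashicorp.com/transparent-proxy: "true"
```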
Traffic redirection interferes with Kubernetes HTTP health probes, since the probes expect that kubelet can reach the application container directly on the probe's endpoint. Instead, that traffic is redirected through the sidecar proxy, causing errors because kubelet itself is not encrypting that traffic using a mesh proxy. For this reason, Consul allows you to overwrite Kubernetes HTTP health probes to point to the proxy instead. This can be done using the Helm value `connectInject.transparentProxy.defaultOverwriteProbes` or the Pod annotation `consul.hashicorp.com/transparent-proxy-overwrite-probes`.
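For example, enabling probe overwriting chart-wide (a minimal sketch, reusing the `connectInject.transparentProxy` block shown above):

```yaml
connectInject:
  transparentProxy:
    # Rewrite HTTP probes on injected Pods to target the Envoy sidecar
    defaultOverwriteProbes: true
```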
Pods with transparent proxy enabled will have an init container injected that sets up traffic redirection for all inbound and outbound traffic through the sidecar proxies. This will include all traffic by default, with the ability to configure exceptions on a per-Pod basis. The following Pod annotations allow you to exclude certain traffic from redirection to the sidecar proxies:
- `consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`
- `consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`
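As a sketch, a Pod that exposes a metrics port directly and bypasses the mesh for a private CIDR (the port and CIDR values are illustrative):

```yaml
metadata:
  annotations:
    # Allow kubelet and scrapers to reach port 9090 without going
    # through the inbound listener
    consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: "9090"
    # Send traffic destined for this CIDR directly, bypassing the
    # outbound listener
    consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "10.0.0.0/8"
```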
- Traffic can only be transparently proxied when the address dialed corresponds to the address of a service in the transparent proxy's datacenter. Services can also dial explicit upstreams in other datacenters without transparent proxy, for example, by adding an annotation such as `"consul.hashicorp.com/connect-service-upstreams": "my-service:1234:dc2"` to reach an upstream service called `my-service` in the datacenter `dc2`.
- When dialing headless services, the request will be proxied using a plain TCP proxy with a 5s connection timeout. Currently, the upstream's protocol and connection timeout are not considered.
In Kubernetes, services can reach other services via their Kubernetes DNS address or via Pod IPs, and that traffic will be transparently sent through the proxy. Connect services in Kubernetes are required to have a Kubernetes service selecting the Pods.
Note: In order to use KubeDNS, the Kubernetes service name will need to match the Consul service name. This will be the case by default, unless the service Pods have the `consul.hashicorp.com/connect-service` annotation overriding the Consul service name.
Each Pod for the service will be configured with iptables rules to direct all inbound and outbound traffic through an inbound and outbound listener on the sidecar proxy. The proxy will be configured to route traffic to the appropriate upstream services based on Service Intentions. This means Connect services no longer need to use the `consul.hashicorp.com/connect-service-upstreams` annotation to configure upstreams explicitly. Once the Service Intentions are set, they can simply address the upstream services using KubeDNS.
As of Consul-k8s >= `0.26.0` and Consul-helm >= `0.32.0`, a Kubernetes service that selects application Pods is required for Connect applications, for example:
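(A minimal sketch: the selector and ports are illustrative, while the name `sample-app` and namespace `default` match the dial example below.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app
  namespace: default
spec:
  # Select the application Pods that make up the Connect service
  selector:
    app: sample-app
  ports:
    - port: 80
      targetPort: 9090
```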
In the example above, if another service wants to reach `sample-app` via transparent proxying, it can dial `sample-app.default.svc.cluster.local` using KubeDNS. If ACLs with a default “deny” policy are enabled, it also needs a ServiceIntentions resource allowing it to talk to `sample-app`.
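A sketch of such a resource, assuming the consul-k8s ServiceIntentions CRD and an illustrative source service named `web`:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: sample-app
spec:
  destination:
    name: sample-app
  sources:
    # Allow the calling service to reach sample-app over the mesh
    - name: web
      action: allow
```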
For services that are not addressed using a virtual cluster IP, the upstream service must be configured using the `DialedDirectly` option in the service-defaults config entry. Individual instance addresses can then be discovered using DNS and dialed through the transparent proxy. When this mode is enabled on the upstream, Connect certificates will be presented for mTLS and intentions will be enforced at the destination.
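A sketch of that configuration, assuming the ServiceDefaults CRD exposes the config entry's `TransparentProxy.DialedDirectly` field as `transparentProxy.dialedDirectly`:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: sample-app
spec:
  transparentProxy:
    # Permit dialing individual Pod addresses directly through the mesh
    dialedDirectly: true
```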