Destination Rule
For example, the following rule uses the least request load balancing policy for all traffic to the ratings service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
Version-specific policies can be specified by defining a named subset and overriding the settings specified at the service level. The following rule uses a round robin load balancing policy for all traffic going to a subset named testversion that is composed of endpoints (e.g., pods) with labels (version: v3).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Note: Policies specified for subsets will not take effect until a route rule explicitly sends traffic to this subset.
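For illustration, a minimal VirtualService that explicitly routes traffic to the testversion subset might look as follows (the resource name is illustrative):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo-ratings-route
spec:
  hosts:
  - ratings.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: ratings.prod.svc.cluster.local
        subset: testversion
Once such a route rule is in place, the subset-level ROUND_ROBIN policy above takes effect for the routed traffic.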
Traffic policies can be customized for specific ports as well. The following rule uses the least request load balancing policy for all traffic to port 80, and a round robin load balancing policy for traffic to port 9080.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings-port
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy: # Apply to all ports
    portLevelSettings:
    - port:
        number: 80
      loadBalancer:
        simple: LEAST_REQUEST
    - port:
        number: 9080
      loadBalancer:
        simple: ROUND_ROBIN
Destination Rules can also be customized for specific workloads. The following example shows how a destination rule can be applied to a specific workload using the workloadSelector configuration.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: configure-client-mtls-dr-with-workloadselector
spec:
  host: example.com
  workloadSelector:
    matchLabels:
      app: ratings
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 31443
      tls:
        credentialName: client-credential
        mode: MUTUAL
DestinationRule
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.
TrafficPolicy
Traffic policies to apply for a specific destination, across all destination ports. See DestinationRule for examples.
Field | Type | Description | Required |
---|---|---|---|
loadBalancer | LoadBalancerSettings | Settings controlling the load balancer algorithms. | No |
connectionPool | ConnectionPoolSettings | Settings controlling the volume of connections to an upstream service. | No |
outlierDetection | OutlierDetection | Settings controlling eviction of unhealthy hosts from the load balancing pool. | No |
tls | ClientTLSSettings | TLS related settings for connections to the upstream service. | No |
portLevelSettings | PortTrafficPolicy[] | Traffic policies specific to individual ports. Note that port level settings will override the destination-level settings. Traffic settings specified at the destination-level will not be inherited when overridden by port-level settings, i.e. default values will be applied to fields omitted in port-level traffic policies. | No |
tunnel | TunnelSettings | Configuration of tunneling TCP over other transport or application layers for the host configured in the DestinationRule. Tunnel settings can be applied to TCP or TLS routes and can’t be applied to HTTP routes. | No |
Subset
A subset of endpoints of a service. Subsets can be used for scenarios like A/B testing, or routing to a specific version of a service. Refer to the VirtualService documentation for examples of using subsets in these scenarios. In addition, traffic policies defined at the service-level can be overridden at a subset-level. The following rule uses a round robin load balancing policy for all traffic going to a subset named testversion that is composed of endpoints (e.g., pods) with labels (version: v3).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Note: Policies specified for subsets will not take effect until a route rule explicitly sends traffic to this subset.
One or more labels are typically required to identify the subset destination, however, when the corresponding DestinationRule represents a host that supports multiple SNI hosts (e.g., an egress gateway), a subset without labels may be meaningful. In this case a traffic policy with ClientTLSSettings can be used to identify a specific SNI host corresponding to the named subset.
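As a sketch of that case, a label-less subset distinguished only by its TLS/SNI settings could look like the following (host, subset name, and SNI value are illustrative, not from the source):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egress-sni-subsets
spec:
  host: my-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: foo
    trafficPolicy:
      tls:
        mode: SIMPLE
        sni: foo.example.com # identifies the SNI host for this subset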
Field | Type | Description | Required |
---|---|---|---|
name | string | Name of the subset. The service name and the subset name can be used for traffic splitting in a route rule. | Yes |
labels | map<string, string> | Labels apply a filter over the endpoints of a service in the service registry. See route rules for examples of usage. | No |
trafficPolicy | TrafficPolicy | Traffic policies that apply to this subset. Subsets inherit the traffic policies specified at the DestinationRule level. Settings specified at the subset level will override the corresponding settings specified at the DestinationRule level. | No |
LoadBalancerSettings
Load balancing policies to apply for a specific destination. See Envoy’s load balancing for more details.
For example, the following rule uses a round robin load balancing policy for all traffic going to the ratings service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
The following example sets up sticky sessions for the ratings service, using a consistent-hash load balancer with the user cookie as the hash key.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user
          ttl: 0s
Field | Type | Description | Required |
---|---|---|---|
simple | SimpleLB (oneof) | Standard load balancing algorithms that require no tuning. | No |
consistentHash | ConsistentHashLB (oneof) | Consistent Hash-based load balancing to provide soft session affinity. | No |
localityLbSetting | LocalityLoadBalancerSetting | Locality load balancer settings; this will override mesh-wide settings in entirety, meaning no merging would be performed between this object and the one in MeshConfig. | No |
warmupDurationSecs | Duration | Represents the warmup duration of the service. If set, the newly created endpoint of the service remains in warmup mode starting from its creation time for the duration of this window, and Istio progressively increases the amount of traffic for that endpoint instead of sending a proportional amount of traffic. This should be enabled for services that require warm-up time to serve full production load with reasonable latency. Currently this is only supported for ROUND_ROBIN and LEAST_REQUEST load balancers. | No |
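As a minimal sketch, enabling endpoint warmup could look like the following (the resource name and the 300s window are illustrative values, not from the source):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-warmup
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST     # warmup is supported for ROUND_ROBIN and LEAST_REQUEST
      warmupDurationSecs: 300s  # ramp traffic to new endpoints over 5 minutes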
ConnectionPoolSettings
Connection pool settings for an upstream host. The settings apply to each individual host in the upstream service. See Envoy’s circuit breaker for more details. Connection pool settings can be applied at the TCP level as well as at the HTTP level.
For example, the following rule sets a limit of 100 connections to the Redis service called myredissrv, with a connect timeout of 30ms and TCP keepalives enabled.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-redis
spec:
  host: myredissrv.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
        connectTimeout: 30ms
        tcpKeepalive:
          time: 7200s
          interval: 75s
Field | Type | Description | Required |
---|---|---|---|
tcp | TCPSettings | Settings common to both HTTP and TCP upstream connections. | No |
http | HTTPSettings | HTTP connection pool settings. | No |
OutlierDetection
A circuit breaker implementation that tracks the status of each individual host in the upstream service. Applicable to both HTTP and TCP services. For HTTP services, hosts that continually return 5xx errors for API calls are ejected from the pool for a predefined period of time. For TCP services, connection timeouts or connection failures to a given host count as an error when measuring the consecutive errors metric. See Envoy’s outlier detection for more details.
The following rule sets a connection pool size of 100 HTTP1 connections with no more than 10 req/connection to the “reviews” service. In addition, it sets a limit of 1000 concurrent HTTP2 requests and configures upstream hosts to be scanned every 5 mins so that any host that fails 7 consecutive times with a 502, 503, or 504 error code will be ejected for 15 minutes.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-cb-policy
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http2MaxRequests: 1000
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 7
      interval: 5m
      baseEjectionTime: 15m
Field | Type | Description | Required |
---|---|---|---|
splitExternalLocalOriginErrors | bool | Determines whether to distinguish local origin failures from external errors. If set to true consecutive_local_origin_failure is taken into account for outlier detection calculations. This should be used when you want to derive the outlier detection status based on the errors seen locally such as failure to connect, timeout while connecting etc. rather than the status code retuned by upstream service. This is especially useful when the upstream service explicitly returns a 5xx for some requests and you want to ignore those responses from upstream service while determining the outlier detection status of a host. Defaults to false. | No |
consecutiveLocalOriginFailures | UInt32Value | The number of consecutive locally originated failures before ejection occurs. Defaults to 5. Parameter takes effect only when split_external_local_origin_errors is set to true. | No |
consecutiveGatewayErrors | UInt32Value | Number of gateway errors before a host is ejected from the connection pool. When the upstream host is accessed over HTTP, a 502, 503, or 504 return code qualifies as a gateway error. When the upstream host is accessed over an opaque TCP connection, connect timeouts and connection error/failure events qualify as a gateway error. This feature is disabled by default or when set to the value 0. Note that consecutive_gateway_errors and consecutive_5xx_errors can be used separately or together. Because the errors counted by consecutive_gateway_errors are also included in consecutive_5xx_errors, if the value of consecutive_gateway_errors is greater than or equal to the value of consecutive_5xx_errors, consecutive_gateway_errors will have no effect. | No |
consecutive5xxErrors | UInt32Value | Number of 5xx errors before a host is ejected from the connection pool. When the upstream host is accessed over an opaque TCP connection, connect timeouts, connection error/failure and request failure events qualify as a 5xx error. This feature defaults to 5 but can be disabled by setting the value to 0. Note that consecutive_gateway_errors and consecutive_5xx_errors can be used separately or together. Because the errors counted by consecutive_gateway_errors are also included in consecutive_5xx_errors, if the value of consecutive_gateway_errors is greater than or equal to the value of consecutive_5xx_errors, consecutive_gateway_errors will have no effect. | No |
interval | Duration | Time interval between ejection sweep analysis. format: 1h/1m/1s/1ms. MUST BE >=1ms. Default is 10s. | No |
baseEjectionTime | Duration | Minimum ejection duration. A host will remain ejected for a period equal to the product of minimum ejection duration and the number of times the host has been ejected. This technique allows the system to automatically increase the ejection period for unhealthy upstream servers. format: 1h/1m/1s/1ms. MUST BE >=1ms. Default is 30s. | No |
maxEjectionPercent | int32 | Maximum % of hosts in the load balancing pool for the upstream service that can be ejected. Defaults to 10%. | No |
minHealthPercent | int32 | Outlier detection will be enabled as long as the associated load balancing pool has at least min_health_percent hosts in healthy mode. When the percentage of healthy hosts in the load balancing pool drops below this threshold, outlier detection will be disabled and the proxy will load balance across all hosts in the pool, healthy and unhealthy. The threshold can be disabled by setting it to 0%. The default is 0%. | No |
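To illustrate the local-origin fields, a hedged sketch combining them with the 5xx-based settings (the resource name and thresholds are illustrative, not from the source):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-local-origin-cb
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    outlierDetection:
      splitExternalLocalOriginErrors: true  # track local failures (connect errors, timeouts) separately
      consecutiveLocalOriginFailures: 5     # eject after 5 consecutive locally originated failures
      consecutive5xxErrors: 7               # still eject on 5xx responses from the upstream
      interval: 5m
      baseEjectionTime: 15m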
ClientTLSSettings
SSL/TLS related settings for upstream connections. See Envoy’s TLS context for more details. These settings are common to both HTTP and TCP upstreams.
For example, the following rule configures a client to use mutual TLS for connections to an upstream database cluster.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: db-mtls
spec:
  host: mydbserver.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
The following rule configures a client to use TLS when talking to a foreign service whose domain matches *.foo.com.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tls-foo
spec:
  host: "*.foo.com"
  trafficPolicy:
    tls:
      mode: SIMPLE
The following rule configures a client to use Istio mutual TLS when talking to the ratings service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ratings-istio-mtls
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
Field | Type | Description | Required |
---|---|---|---|
mode | TLSmode | Indicates whether connections to this port should be secured using TLS. The value of this field determines how TLS is enforced. | Yes |
clientCertificate | string | REQUIRED if mode is MUTUAL. The path to the file holding the client-side TLS certificate to use. Should be empty if mode is ISTIO_MUTUAL. | No |
privateKey | string | REQUIRED if mode is MUTUAL. The path to the file holding the client’s private key. Should be empty if mode is ISTIO_MUTUAL. | No |
caCertificates | string | OPTIONAL: The path to the file containing certificate authority certificates to use in verifying a presented server certificate. If omitted, the proxy will not verify the server’s certificate. Should be empty if mode is ISTIO_MUTUAL. | No |
credentialName | string | The name of the secret that holds the TLS certs for the client including the CA certificates. The secret must exist in the same namespace as the proxy using the certificates. NOTE: This field is applicable at sidecars only if the DestinationRule has a workloadSelector specified; otherwise it is applicable only at gateways, and sidecars will continue to use the certificate paths. | No |
subjectAltNames | string[] | A list of alternate names to verify the subject identity in the certificate. If specified, the proxy will verify that the server certificate’s subject alt name matches one of the specified values. If specified, this list overrides the value of subject_alt_names from the ServiceEntry. If unspecified, automatic validation of the upstream presented certificate for new upstream connections will be done based on the downstream HTTP host/authority header. | No |
sni | string | SNI string to present to the server during TLS handshake. If unspecified, SNI will be automatically set based on the downstream HTTP host/authority header for SIMPLE and MUTUAL TLS modes. | No |
insecureSkipVerify | BoolValue | InsecureSkipVerify specifies whether the proxy should skip verifying the CA signature and SAN for the server certificate corresponding to the host. This flag should only be set if global CA signature verification is enabled but verification needs to be skipped for a specific host. Defaults to false. | No |
LocalityLoadBalancerSetting
Locality load balancing settings.
Locality-weighted load balancing allows administrators to control the distribution of traffic to endpoints based on the localities of where the traffic originates and where it will terminate. These localities are specified using arbitrary labels that designate a hierarchy of localities in {region}/{zone}/{sub-zone} form. For additional detail refer to Locality Weight. The following example shows how to set up locality weights mesh-wide.
Given a mesh with workloads and their service deployed to “us-west/zone1/” and “us-west/zone2/”, this example specifies that when traffic accessing a service originates from workloads in “us-west/zone1/”, 80% of the traffic will be sent to endpoints in “us-west/zone1/” (i.e., the same zone) and the remaining 20% will go to endpoints in “us-west/zone2/”. This setup is intended to favor routing traffic to endpoints in the same locality. A similar setting is specified for traffic originating in “us-west/zone2/”.
distribute:
- from: us-west/zone1/*
  to:
    "us-west/zone1/*": 80
    "us-west/zone2/*": 20
- from: us-west/zone2/*
  to:
    "us-west/zone1/*": 20
    "us-west/zone2/*": 80
If the goal of the operator is not to distribute load across zones and regions, but rather to restrict the regionality of failover to meet other operational requirements, the operator can set a ‘failover’ policy instead of a ‘distribute’ policy.
The following example sets up a locality failover policy for regions. Assume a service resides in zones within us-east, us-west and eu-west. This example specifies that when endpoints within us-east become unhealthy, traffic should fail over to endpoints in any zone or sub-zone within eu-west; similarly, us-west should fail over to us-east.
failover:
- from: us-east
  to: eu-west
- from: us-west
  to: us-east
TrafficPolicy.PortTrafficPolicy
Traffic policies that apply to specific ports of the service
Field | Type | Description | Required |
---|---|---|---|
port | PortSelector | Specifies the number of a port on the destination service on which this policy is being applied. | No |
loadBalancer | LoadBalancerSettings | Settings controlling the load balancer algorithms. | No |
connectionPool | ConnectionPoolSettings | Settings controlling the volume of connections to an upstream service. | No |
outlierDetection | OutlierDetection | Settings controlling eviction of unhealthy hosts from the load balancing pool. | No |
tls | ClientTLSSettings | TLS related settings for connections to the upstream service. | No |
TrafficPolicy.TunnelSettings
Field | Type | Description | Required |
---|---|---|---|
protocol | string | Specifies which protocol to use for tunneling the downstream connection. Supported protocols are: CONNECT - uses HTTP CONNECT; POST - uses HTTP POST. CONNECT is used by default if not specified. HTTP version for upstream requests is determined by the service protocol defined for the proxy. | No |
targetHost | string | Specifies a host to which the downstream connection is tunneled. Target host must be an FQDN or IP address. | Yes |
targetPort | uint32 | Specifies a port to which the downstream connection is tunneled. | Yes |
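For example, a minimal sketch tunneling a TCP route through an upstream proxy via HTTP CONNECT might look as follows (the resource name, host names, and port are illustrative, not from the source):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tunnel-proxy
spec:
  host: tunnel-proxy.prod.svc.cluster.local  # the forward proxy terminating the tunnel
  trafficPolicy:
    tunnel:
      protocol: CONNECT        # default; POST is the alternative
      targetHost: db.example.com
      targetPort: 3306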
LoadBalancerSettings.ConsistentHashLB
Consistent Hash-based load balancing can be used to provide soft session affinity based on HTTP headers, cookies or other properties. The affinity to a particular destination host may be lost when one or more hosts are added/removed from the destination service.
Field | Type | Description | Required |
---|---|---|---|
httpHeaderName | string (oneof) | Hash based on a specific HTTP header. | No |
httpCookie | HTTPCookie (oneof) | Hash based on HTTP cookie. | No |
useSourceIp | bool (oneof) | Hash based on the source IP address. This is applicable for both TCP and HTTP connections. | No |
httpQueryParameterName | string (oneof) | Hash based on a specific HTTP query parameter. | No |
ringHash | RingHash (oneof) | The ring/modulo hash load balancer implements consistent hashing to backend hosts. | No |
maglev | MagLev (oneof) | The Maglev load balancer implements consistent hashing to backend hosts. | No |
minimumRingSize | uint64 | Deprecated. Use RingHash instead. | No |
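Switching the hash key from a cookie (as in the earlier example) to a request header is a one-line change; a sketch (the header name is illustrative):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bookinfo-ratings-header-hash
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-user-id  # requests with the same header value hash to the same host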
LoadBalancerSettings.ConsistentHashLB.RingHash
Field | Type | Description | Required |
---|---|---|---|
minimumRingSize | uint64 | The minimum number of virtual nodes to use for the hash ring. Defaults to 1024. Larger ring sizes result in more granular load distributions. If the number of hosts in the load balancing pool is larger than the ring size, each host will be assigned a single virtual node. | No |
LoadBalancerSettings.ConsistentHashLB.MagLev
Field | Type | Description | Required |
---|---|---|---|
tableSize | uint64 | The table size for Maglev hashing. This helps in controlling the disruption when the backend hosts change. Increasing the table size reduces the amount of disruption. | No |
LoadBalancerSettings.ConsistentHashLB.HTTPCookie
Describes an HTTP cookie that will be used as the hash key for the Consistent Hash load balancer. If the cookie is not present, it will be generated.
Field | Type | Description | Required |
---|---|---|---|
name | string | Name of the cookie. | Yes |
path | string | Path to set for the cookie. | No |
ttl | Duration | Lifetime of the cookie. | Yes |
ConnectionPoolSettings.TCPSettings
Settings common to both HTTP and TCP upstream connections.
ConnectionPoolSettings.HTTPSettings
Settings applicable to HTTP1.1/HTTP2/GRPC connections.
Field | Type | Description | Required |
---|---|---|---|
http1MaxPendingRequests | int32 | Maximum number of requests that will be queued while waiting for a ready connection pool connection. Default 1024. Refer to https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking for the conditions under which a new connection is created for HTTP2. Please note that this is applicable to both HTTP/1.1 and HTTP2. | No |
http2MaxRequests | int32 | Maximum number of active requests to a destination. Default 1024. Please note that this is applicable to both HTTP/1.1 and HTTP2. | No |
maxRequestsPerConnection | int32 | Maximum number of requests per connection to a backend. Setting this parameter to 1 disables keep alive. Default 0, meaning “unlimited”, up to 2^29. | No |
maxRetries | int32 | Maximum number of retries that can be outstanding to all hosts in a cluster at a given time. Defaults to 2^32-1. | No |
idleTimeout | Duration | The idle timeout for upstream connection pool connections. The idle timeout is defined as the period in which there are no active requests. If not set, the default is 1 hour. When the idle timeout is reached, the connection will be closed. If the connection is an HTTP/2 connection, a drain sequence will occur prior to closing the connection. Note that request based timeouts mean that HTTP/2 PINGs will not keep the connection alive. Applies to both HTTP1.1 and HTTP2 connections. | No |
h2UpgradePolicy | H2UpgradePolicy | Specify if http1.1 connection should be upgraded to http2 for the associated destination. | No |
useClientProtocol | bool | If set to true, client protocol will be preserved while initiating connection to backend. Note that when this is set to true, h2_upgrade_policy will be ineffective i.e. the client connections will not be upgraded to http2. | No |
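As an illustrative sketch, the pool limits, idle timeout, and upgrade policy described above can be combined in one rule (the resource name and values are arbitrary, not from the source):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-http-pool
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 512  # queue at most 512 requests waiting for a connection
        http2MaxRequests: 512         # cap active requests to the destination
        idleTimeout: 30m              # close pool connections idle for 30 minutes
        h2UpgradePolicy: UPGRADE      # upgrade http1.1 connections to http2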
ConnectionPoolSettings.TCPSettings.TcpKeepalive
TCP keepalive.
Field | Type | Description | Required |
---|---|---|---|
probes | uint32 | Maximum number of keepalive probes to send without response before deciding the connection is dead. Default is to use the OS level configuration (unless overridden, Linux defaults to 9.) | No |
time | Duration | The time duration a connection needs to be idle before keep-alive probes start being sent. Default is to use the OS level configuration (unless overridden, Linux defaults to 7200s (ie 2 hours.) | No |
interval | Duration | The time duration between keep-alive probes. Default is to use the OS level configuration (unless overridden, Linux defaults to 75s.) | No |
LocalityLoadBalancerSetting.Distribute
Describes how traffic originating in the ‘from’ zone or sub-zone is distributed over a set of ’to’ zones. Syntax for specifying a zone is {region}/{zone}/{sub-zone} and terminal wildcards are allowed on any segment of the specification. Examples:
* - matches all localities
us-west/* - all zones and sub-zones within the us-west region
us-west/zone-1/* - all sub-zones within us-west/zone-1
Field | Type | Description | Required |
---|---|---|---|
from | string | Originating locality, ‘/’ separated, e.g. ‘region/zone/sub_zone’. | No |
to | map<string, uint32> | Map of upstream localities to traffic distribution weights. The sum of all weights should be 100. Any locality not present will receive no traffic. | No |
LocalityLoadBalancerSetting.Failover
Specify the traffic failover policy across regions. Since zone and sub-zone failover is supported by default this only needs to be specified for regions when the operator needs to constrain traffic failover so that the default behavior of failing over to any endpoint globally does not apply. This is useful when failing over traffic across regions would not improve service health or may need to be restricted for other reasons like regulatory controls.
Field | Type | Description | Required |
---|---|---|---|
from | string | Originating region. | No |
to | string | Destination region the traffic will fail over to when endpoints in the ‘from’ region becomes unhealthy. | No |
google.protobuf.UInt32Value
Wrapper message for uint32. The JSON representation for UInt32Value is JSON number.
Field | Type | Description | Required |
---|---|---|---|
value | uint32 | The uint32 value. | No |
LoadBalancerSettings.SimpleLB
Standard load balancing algorithms that require no tuning.
Name | Description |
---|---|
UNSPECIFIED | No load balancing algorithm has been specified by the user. Istio will select an appropriate default. |
RANDOM | The random load balancer selects a random healthy host. The random load balancer generally performs better than round robin if no health checking policy is configured. |
PASSTHROUGH | This option will forward the connection to the original IP address requested by the caller without doing any form of load balancing. This option must be used with care. It is meant for advanced use cases. Refer to Original Destination load balancer in Envoy for further details. |
ROUND_ROBIN | A basic round robin load balancing policy. This is generally unsafe for many scenarios (e.g. when endpoint weighting is used) as it can overburden endpoints. In general, prefer to use LEAST_REQUEST as a drop-in replacement for ROUND_ROBIN. |
LEAST_REQUEST | The least request load balancer spreads load across endpoints, favoring endpoints with the least outstanding requests. This is generally safer and outperforms ROUND_ROBIN in nearly all cases. Prefer to use LEAST_REQUEST as a drop-in replacement for ROUND_ROBIN. |
LEAST_CONN | Deprecated. Use LEAST_REQUEST instead. |
ConnectionPoolSettings.HTTPSettings.H2UpgradePolicy
Policy for upgrading http1.1 connections to http2.
ClientTLSSettings.TLSmode
TLS connection mode
Name | Description |
---|---|
DISABLE | Do not setup a TLS connection to the upstream endpoint. |
SIMPLE | Originate a TLS connection to the upstream endpoint. |
MUTUAL | Secure connections to the upstream using mutual TLS by presenting client certificates for authentication. |
ISTIO_MUTUAL | Secure connections to the upstream using mutual TLS by presenting client certificates for authentication. Compared to MUTUAL mode, this mode uses certificates generated automatically by Istio for mTLS authentication. When this mode is used, all other fields in ClientTLSSettings should be empty. |