Enabling Rate Limits using Envoy
Set up Istio in a Kubernetes cluster by following the instructions in the installation guide.
Deploy the Bookinfo sample application.
Envoy supports two kinds of rate limiting: global and local. Global rate limiting uses a global gRPC rate limiting service to provide rate limiting for the entire mesh. Local rate limiting is used to limit the rate of requests per service instance. Local rate limiting can be used in conjunction with global rate limiting to reduce load on the global rate limiting service.
In this task you will configure Envoy to rate limit traffic to a specific path of a service using both global and local rate limits.
Envoy can be used to set up global rate limits for your mesh. Global rate limiting in Envoy uses a gRPC API for requesting quota from a rate limiting service. A reference implementation of the API, written in Go with a Redis backend, is used below.
Use the following configmap to rate limit requests to the path /productpage at 1 req/min and all other requests at 100 req/min.
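The configmap itself is not reproduced on this page, so here is a minimal sketch, assuming the descriptor format of Envoy's reference rate limit service. The ratelimit-config name and the PATH descriptor key are assumptions; they must match the configmap mounted by the rate limit service deployment and the descriptor key used in the route rate limit actions later on.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config   # assumed name; must match what the rate limit service mounts
data:
  config.yaml: |
    # The domain must match the domain set in the rate limit filter below.
    domain: productpage-ratelimit
    descriptors:
      # 1 request per minute for /productpage
      - key: PATH
        value: "/productpage"
        rate_limit:
          unit: minute
          requests_per_unit: 1
      # 100 requests per minute for every other path
      - key: PATH
        rate_limit:
          unit: minute
          requests_per_unit: 100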
Create a global rate limit service which implements Envoy's rate limit service protocol. As a reference, a demo configuration, based on a reference implementation provided by Envoy, is included in the Istio samples.
$ kubectl apply -f @samples/ratelimit/rate-limit-service.yaml@
Apply an EnvoyFilter to the ingressgateway to enable global rate limiting using Envoy's global rate limit filter.

The patch inserts the envoy.filters.http.ratelimit global envoy filter into the HTTP_FILTER chain. The rate_limit_service field specifies the external rate limit service, the outbound|8081||ratelimit.default.svc.cluster.local cluster in this case, which points to the ratelimit service deployed above.

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    # select by label in the same namespace
    labels:
      istio: ingressgateway
  configPatches:
    # The Envoy config you want to modify
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
              subFilter:
                name: "envoy.filters.http.router"
      patch:
        # Adds the Envoy Rate Limit Filter in HTTP filter chain.
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.ratelimit
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            # domain can be anything! Match it to the ratelimiter service config
            domain: productpage-ratelimit
            failure_mode_deny: true
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: outbound|8081||ratelimit.default.svc.cluster.local
                  authority: ratelimit.default.svc.cluster.local
              transport_api_version: V3
EOF
Apply another EnvoyFilter to the ingressgateway that defines the route configuration on which to rate limit. This adds rate limit actions for any route from a virtual host named *.80.
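The EnvoyFilter itself is not reproduced here; the following is a sketch of what it could look like. The filter-ratelimit-svc name and the PATH descriptor key are assumptions (the descriptor key must match the key used in the rate limit service configuration):

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: "*.80"
            route:
              action: ANY
      patch:
        operation: MERGE
        # Applies rate limit actions to every route in the virtual host.
        value:
          rate_limits:
            - actions:
                # Sends the request path as the PATH descriptor entry,
                # matching the descriptors in the rate limit service config.
                - request_headers:
                    header_name: ":path"
                    descriptor_key: "PATH"
EOF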
Envoy supports local rate limiting of L4 connections and HTTP requests. This allows you to apply rate limits at the instance level, in the proxy itself, without calling any other service.
The following EnvoyFilter
enables local rate limiting for any traffic through the productpage
service. The HTTP_FILTER
patch inserts the envoy.filters.http.local_ratelimit
into the HTTP connection manager filter chain. The local rate limit filter’s token bucket is configured to allow 10 requests/min. The filter is also configured to add an x-local-rate-limit
response header to requests that are blocked.
The statistics emitted by the local rate limit filter are disabled by default. You can enable them with the following annotations during deployment:
template:
  metadata:
    annotations:
      proxy.istio.io/config: |-
        proxyStatsMatcher:
          inclusionRegexps:
          - ".*http_local_rate_limit.*"
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-local-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: productpage
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        # apply only to inbound traffic on the sidecar
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              stat_prefix: http_local_rate_limiter
              token_bucket:
                max_tokens: 10
                tokens_per_fill: 10
                fill_interval: 60s
              filter_enabled:
                runtime_key: local_rate_limit_enabled
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              filter_enforced:
                runtime_key: local_rate_limit_enforced
                default_value:
                  numerator: 100
                  denominator: HUNDRED
              response_headers_to_add:
                - append: false
                  header:
                    key: x-local-rate-limit
                    value: 'true'
EOF
The following EnvoyFilter enables local rate limiting for any traffic to port 9080 of the productpage service. Unlike the previous configuration, there is no token_bucket included in the HTTP_FILTER patch. The token_bucket is instead defined in the second (HTTP_ROUTE) patch, which includes a typed_per_filter_config for the envoy.filters.http.local_ratelimit local envoy filter, for routes to virtual host inbound|http|9080. A sketch of this configuration follows.
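That EnvoyFilter is not reproduced here either; the following sketch reuses the token bucket and response header settings from the previous example. It keeps the filter-local-ratelimit-svc name, so applying it would replace the previous local rate limit configuration; the exact patch layout is an assumption:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-local-ratelimit-svc
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: productpage
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.local_ratelimit
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
            value:
              # No token_bucket here; the limit is attached per route below.
              stat_prefix: http_local_rate_limiter
    - applyTo: HTTP_ROUTE
      match:
        context: SIDECAR_INBOUND
        routeConfiguration:
          vhost:
            name: "inbound|http|9080"
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          typed_per_filter_config:
            envoy.filters.http.local_ratelimit:
              "@type": type.googleapis.com/udpa.type.v1.TypedStruct
              type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
              value:
                stat_prefix: http_local_rate_limiter
                token_bucket:
                  max_tokens: 10
                  tokens_per_fill: 10
                  fill_interval: 60s
                filter_enabled:
                  runtime_key: local_rate_limit_enabled
                  default_value:
                    numerator: 100
                    denominator: HUNDRED
                filter_enforced:
                  runtime_key: local_rate_limit_enforced
                  default_value:
                    numerator: 100
                    denominator: HUNDRED
                response_headers_to_add:
                  - append: false
                    header:
                      key: x-local-rate-limit
                      value: 'true'
EOF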
Send traffic to the Bookinfo sample. Visit http://$GATEWAY_URL/productpage
in your web browser or issue the following command:
$ curl -s "http://$GATEWAY_URL/productpage" -o /dev/null -w "%{http_code}\n"
429
$GATEWAY_URL is the value set in the Bookinfo example.
You will see the first request go through but every following request within a minute will get a 429 response.
Although the global rate limit at the ingress gateway limits requests to the productpage
service at 1 req/min, the local rate limit for productpage
instances allows 10 req/min. To confirm this, send internal productpage
requests, from the ratings
pod, using the following curl
command:
$ kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -s productpage:9080/productpage -o /dev/null -w "%{http_code}\n"
429
You should see no more than 10 req/min go through per instance.
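If you enabled the local rate limit statistics with the proxyStatsMatcher annotation shown earlier, you can also inspect the productpage proxy's statistics; counters under the http_local_rate_limiter stat prefix configured above should increase as requests are rate limited:

$ kubectl exec "$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')" -c istio-proxy -- pilot-agent request GET stats | grep http_local_rate_limit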