Windows-based front proxy

    Sandbox environment

    Set up your sandbox environment with Docker and Docker Compose, and clone the Envoy repository with Git.

    To get a flavor of what Envoy has to offer on Windows, we are releasing a sandbox that deploys a front Envoy and a couple of services (simple Flask apps) colocated with a running service Envoy.
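
The services behind the proxies are simple Flask apps. As a rough stand-in, a service with the same response shape can be sketched using only the Python standard library (the `SERVICE_ID`, port, and handler names below are illustrative, not taken from the sandbox):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE_ID = 1  # each container would set its own service number

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Mimic the response body shape seen in the curl transcripts below.
        body = (
            f"Hello from behind Envoy (service {SERVICE_ID})! "
            f"hostname: {socket.gethostname()}"
        ).encode()
        self.send_response(200)
        self.send_header("content-type", "text/html; charset=utf-8")
        self.send_header("content-length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port: int) -> HTTPServer:
    """Start the stand-in service on a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), ServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In the sandbox the equivalent Flask apps never receive traffic directly; requests reach them only through their colocated service Envoy.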

    The three containers will be deployed inside a virtual network called envoymesh.

    Below you can see a graphic showing the docker compose deployment:

    All incoming requests are routed via the front Envoy, which acts as a reverse proxy sitting on the edge of the envoymesh network. Ports 8080, 8443, and 8001 are exposed by docker compose (see docker-compose.yaml) to handle HTTP calls, HTTPS calls to the services, and requests to /admin, respectively.
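
The exposed ports can be sketched as the following docker-compose.yaml fragment (illustrative only; the file in examples/front-proxy is authoritative):

```yaml
services:
  front-envoy:
    ports:
      - "8080:8080"   # HTTP traffic to the services
      - "8443:8443"   # HTTPS traffic to the services
      - "8001:8001"   # Envoy admin interface
```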

    Moreover, notice that all traffic routed by the front Envoy to the service containers is actually routed to the service Envoys (routes set up in front-envoy.yaml).
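
A sketch of what such routing looks like in an Envoy route configuration — field values here are illustrative, not copied from front-envoy.yaml — matching on path prefix and forwarding to per-service clusters (each cluster pointing at a service Envoy, not at the app directly):

```yaml
route_config:
  virtual_hosts:
    - name: backend
      domains: ["*"]
      routes:
        - match: { prefix: "/service/1" }
          route: { cluster: service1 }
        - match: { prefix: "/service/2" }
          route: { cluster: service2 }
```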

    Change to the examples/front-proxy directory.
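
Assuming the repository was cloned as described above, the workflow looks roughly like this (a sketch of the usual docker-compose sandbox steps, not a verbatim transcript):

```
PS> git clone https://github.com/envoyproxy/envoy.git
PS> cd envoy/examples/front-proxy
PS> docker-compose build --pull
PS> docker-compose up -d
PS> docker-compose ps
```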

    You can now send a request to both services via the front-envoy.

    For service1:

    PS> curl -v localhost:8080/service/1
    * Trying ::1...
    * TCP_NODELAY set
    * Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.55.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 92
    < server: envoy
    < date: Wed, 05 May 2021 05:55:55 GMT
    < x-envoy-upstream-service-time: 18
    <
    Hello from behind Envoy (service 1)! hostname: 8a45bba91d83 resolvedhostname: 172.30.97.237
    * Connection #0 to host localhost left intact

    For service2:

    PS> curl -v localhost:8080/service/2
    * Trying ::1...
    * TCP_NODELAY set
    * Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 8080 (#0)
    > GET /service/2 HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.55.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 93
    < server: envoy
    < date: Wed, 05 May 2021 05:57:03 GMT
    < x-envoy-upstream-service-time: 14
    <
    Hello from behind Envoy (service 2)! hostname: 51e28eb3c8b8 resolvedhostname: 172.30.109.113
    * Connection #0 to host localhost left intact

    Notice that each request, while sent to the front Envoy, was correctly routed to the respective application.

    We can also use HTTPS to call services behind the front Envoy. For example, calling service1:
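
A minimal sketch of such a call, assuming the sandbox's HTTPS listener on port 8443 uses a self-signed certificate (hence the -k flag to skip verification; check the example's own files to confirm):

```
PS> curl -k -v https://localhost:8443/service/1
```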

    Now let’s scale up our service1 nodes to demonstrate the load balancing abilities of Envoy:

    PS> docker-compose scale service1=3
    Creating and starting example_service1_2 ... done

    Now if we send a request to service1 multiple times, the front Envoy will load balance the requests by doing a round robin of the three service1 machines:

    PS> curl -v localhost:8080/service/1
    * Trying ::1...
    * Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.55.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 93
    < server: envoy
    < date: Wed, 05 May 2021 05:58:40 GMT
    < x-envoy-upstream-service-time: 22
    <
    Hello from behind Envoy (service 1)! hostname: 8d2359ee21a8 resolvedhostname: 172.30.101.143
    * Connection #0 to host localhost left intact
    PS> curl -v localhost:8080/service/1
    * Trying ::1...
    * TCP_NODELAY set
    * Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.55.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 91
    < server: envoy
    < date: Wed, 05 May 2021 05:58:43 GMT
    < x-envoy-upstream-service-time: 11
    <
    Hello from behind Envoy (service 1)! hostname: 41e1141eebf4 resolvedhostname: 172.30.96.11
    * Connection #0 to host localhost left intact
    PS> curl -v localhost:8080/service/1
    * Trying ::1...
    * TCP_NODELAY set
    * Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.55.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < date: Wed, 05 May 2021 05:58:44 GMT
    < x-envoy-upstream-service-time: 7
    <
    Hello from behind Envoy (service 1)! hostname: 8a45bba91d83 resolvedhostname: 172.30.97.237
    * Connection #0 to host localhost left intact
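
The rotation visible above (a different hostname on each consecutive request) is plain round robin; conceptually it is nothing more than cycling through the set of healthy endpoints. A tiny illustration, with made-up endpoint names:

```python
from itertools import cycle

# Endpoint names are invented for the sketch; Envoy tracks real
# upstream hosts discovered for the service1 cluster.
endpoints = ["service1_1", "service1_2", "service1_3"]
_picker = cycle(endpoints)

def pick_endpoint() -> str:
    """Return the next upstream endpoint in strict rotation."""
    return next(_picker)
```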

    When Envoy runs, it also exposes an admin interface on a port you configure.

    In the example configs the admin listener is bound to port 8001.

    We can curl it to gain useful information:

    • /server_info provides information about the Envoy version you are running.

    • /stats provides statistics about the Envoy server.

    In the example we can enter the front-envoy container to query the admin interface:

    PS> docker-compose exec front-envoy powershell
    PS C:\> (curl http://localhost:8001/server_info -UseBasicParsing).Content
    {
      "version": "093e2ffe046313242144d0431f1bb5cf18d82544/1.15.0-dev/Clean/RELEASE/BoringSSL",
      "state": "LIVE",
      "hot_restart_version": "11.104",
      "command_line_options": {
        "base_id": "0",
        "use_dynamic_base_id": false,
        "base_id_path": "",
        "concurrency": 8,
        "config_path": "/etc/front-envoy.yaml",
        "config_yaml": "",
        "allow_unknown_static_fields": false,
        "reject_unknown_dynamic_fields": false,
        "ignore_unknown_dynamic_fields": false,
        "admin_address_path": "",
        "local_address_ip_version": "v4",
        "log_level": "info",
        "component_log_level": "",
        "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v",
        "log_format_escaped": false,
        "log_path": "",
        "service_cluster": "front-proxy",
        "service_node": "",
        "service_zone": "",
        "drain_strategy": "Gradual",
        "mode": "Serve",
        "disable_hot_restart": false,
        "enable_mutex_tracing": false,
        "restart_epoch": 0,
        "cpuset_threads": false,
        "disabled_extensions": [],
        "bootstrap_version": 0,
        "hidden_envoy_deprecated_max_stats": "0",
        "hidden_envoy_deprecated_max_obj_name_len": "0",
        "file_flush_interval": "10s",
        "drain_time": "600s",
        "parent_shutdown_time": "900s"
      },
      "uptime_current_epoch": "188s",
      "uptime_all_epochs": "188s"
    }

    Notice that we can get the number of members of upstream clusters, the number of requests they have fulfilled, information about HTTP ingress, and a plethora of other useful stats.
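
The /stats endpoint returns plain text, one `name: value` pair per line, so pulling out, say, cluster membership counters takes only a few lines of Python. The statistic names and values below are made up for illustration:

```python
def parse_envoy_stats(text: str) -> dict:
    """Parse Envoy's /stats plain-text output into a name -> value dict."""
    stats = {}
    for line in text.splitlines():
        name, sep, value = line.partition(": ")
        if not sep:
            continue  # skip lines that don't match the simple counter format
        stats[name] = int(value) if value.isdigit() else value
    return stats

# Made-up sample in the /stats "name: value" format:
sample = (
    "cluster.service1.membership_healthy: 3\n"
    "cluster.service1.upstream_rq_total: 12\n"
    "http.ingress_http.downstream_rq_2xx: 12"
)
```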

    See also

    Quick start guide to the Envoy admin interface.