Monitoring with Prometheus

    Kong Gateway supports Prometheus with the Prometheus plugin, which exposes Kong Gateway performance and proxied upstream service metrics on the /metrics endpoint.

    This guide will help you set up a test Kong Gateway instance and a Prometheus service. Then you will generate sample requests to Kong Gateway and observe the collected monitoring data.

    This guide assumes the following tools are installed locally:

    • Docker is used to run Kong Gateway, the supporting database, and Prometheus locally.
    • curl is used to send requests to Kong Gateway. curl is pre-installed on most systems.
    1. Install Kong Gateway:

      Run the installation script as shown below. The -m flag instructs the script to install a mock service that is used in this guide to generate sample metrics.
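
      If you do not have the install command handy, a typical invocation looks like this (a sketch, assuming the standard Kong quickstart script at get.konghq.com; verify the URL and flags against the Kong Gateway documentation for your version):

      curl -Ls https://get.konghq.com/quickstart | bash -s -- -m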

      Once the script completes, it reports: Kong is ready!
    2. Install the Prometheus Kong Gateway plugin:

      curl -s -X POST http://localhost:8001/plugins/ \
        --data "name=prometheus"

      You should receive a JSON response with the details of the installed plugin.
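
      If you want to double-check the plugin later, you can list the configured plugins through the Admin API (assuming the Admin API listens on the default port 8001):

      curl -s http://localhost:8001/plugins/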

    3. Create a Prometheus configuration file named prometheus.yml in the current directory, and copy the following values:

      See the Prometheus documentation for details on these settings.
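
      Because the exact values depend on your environment, the following is a minimal sketch of a prometheus.yml that scrapes Kong's metrics endpoint every few seconds. The target kong-quickstart-gateway:8001 is an assumption; point it at wherever the Kong Admin API is reachable from Prometheus:

      # Minimal Prometheus scrape configuration for Kong Gateway.
      # Assumed target: the Kong Admin API reachable as kong-quickstart-gateway:8001.
      global:
        scrape_interval: 5s
      scrape_configs:
        - job_name: kong
          static_configs:
            - targets: ['kong-quickstart-gateway:8001']

      With the file in place, you can start Prometheus in Docker and mount the configuration. The network name below is also an assumption; attach the container to the same Docker network as your Kong Gateway containers:

      docker run -d --name kong-quickstart-prometheus \
        --network=kong-quickstart-net \
        -p 9090:9090 \
        -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
        prom/prometheus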

    4. Generate sample traffic to the mock service. This allows you to observe the metrics generated by the Prometheus plugin. The following command generates 60 requests over one minute; run it in a new terminal:

      for _ in {1..60}; do curl -s localhost:8000/mock/request; sleep 1; done
    5. Kong Gateway reports system-wide performance metrics by default. Once the Prometheus plugin is installed and traffic is being proxied, it also records metrics across service, route, and upstream dimensions.
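
      To view the raw metrics, query the endpoint exposed by the plugin (assuming the Admin API is listening on the default port 8001, where the plugin publishes /metrics):

      curl -s localhost:8001/metrics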

      The response will look similar to the following snippet:

      # HELP kong_bandwidth Total bandwidth in bytes consumed per service/route in Kong
      # TYPE kong_bandwidth counter
      kong_bandwidth{service="mock",route="mock",type="egress"} 13579
      kong_bandwidth{service="mock",route="mock",type="ingress"} 540
      # HELP kong_datastore_reachable Datastore reachable from Kong, 0 is unreachable
      kong_datastore_reachable 1
      # HELP kong_http_status HTTP status codes per service/route in Kong
      # TYPE kong_http_status counter
      kong_http_status{service="mock",route="mock",code="200"} 6
      # HELP kong_latency Latency added by Kong, total request time and upstream latency for each service/route in Kong
      # TYPE kong_latency histogram
      kong_latency_bucket{service="mock",route="mock",type="kong",le="1"} 4
      kong_latency_bucket{service="mock",route="mock",type="kong",le="2"} 4

      See the Kong Prometheus Plugin documentation for details on the available metrics and configurations.

    6. Prometheus provides multiple ways to query collected metric data.

      You can use the Prometheus expression browser by opening http://localhost:9090/graph in a browser.

      You can also query Prometheus directly using its HTTP API:

      curl -s 'localhost:9090/api/v1/query?query=kong_http_status'
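
      For example, a PromQL expression such as rate(kong_http_status[1m]) returns the per-second request rate. The query below is a sketch of issuing that expression through the same API, using --data-urlencode to handle the special characters:

      curl -s 'localhost:9090/api/v1/query' --data-urlencode 'query=rate(kong_http_status[1m])'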

    Once you are done experimenting with Prometheus and Kong Gateway, you can use commands like the following to stop and remove the services created in this guide:
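
    The exact cleanup depends on how you installed everything. A minimal sketch, assuming Kong Gateway, its database, and Prometheus all run as Docker containers (the container names below are hypothetical; run docker ps to find the names used on your system):

      # Hypothetical container names; confirm with: docker ps
      docker stop kong-quickstart-gateway kong-quickstart-database kong-quickstart-prometheus
      docker rm kong-quickstart-gateway kong-quickstart-database kong-quickstart-prometheus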