Securing the Admin API

    Since its 0.12.0 release, Kong by default only accepts Admin API requests on the local interface, as specified by its default admin_listen value.
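
    That default is roughly the following; the exact value varies between Kong versions, so check the kong.conf.default shipped with your release:

        admin_listen = 127.0.0.1:8001, 127.0.0.1:8444 ssl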

    If you change this value, be sure to keep the listening footprint to a minimum in order to avoid exposing your Admin API to third parties, which could seriously compromise the security of your Kong cluster as a whole. For example, avoid binding Kong to all of your interfaces with values such as 0.0.0.0:8001.

    Layer 3/4 Network Controls

    In cases where the Admin API must be exposed beyond a localhost interface, network security best practices dictate that network-layer access be restricted as much as possible. Consider an environment in which Kong listens on a private network interface, but should only be accessed by a small subset of an IP range. In such a case, host-based firewalls (e.g. iptables) are useful in limiting input traffic ranges. For example:

        # assume that Kong is listening on the address defined below, as part of a
        # /24 CIDR block, and that only a select few hosts in this range should have access
        grep admin_listen /etc/kong/kong.conf
        admin_listen = 10.10.10.3:8001

        # explicitly allow TCP packets on port 8001 from the Kong node itself;
        # this is not necessary if Admin API requests are not sent from the node
        iptables -A INPUT -s 10.10.10.3 -m tcp -p tcp --dport 8001 -j ACCEPT

        # explicitly allow TCP packets on port 8001 from the following addresses
        iptables -A INPUT -s 10.10.10.4 -m tcp -p tcp --dport 8001 -j ACCEPT
        iptables -A INPUT -s 10.10.10.5 -m tcp -p tcp --dport 8001 -j ACCEPT

        # drop all TCP packets on port 8001 not matched by the rules above
        iptables -A INPUT -m tcp -p tcp --dport 8001 -j DROP

    Additional controls, such as similar ACLs applied at a network device level, are encouraged, but fall outside the scope of this document.

    Kong’s routing design allows it to serve as a proxy for the Admin API itself. In this manner, Kong itself can be used to provide fine-grained access control to the Admin API. Such an environment requires bootstrapping a new Service that defines the admin_listen address as the Service’s url.

    We want to expose the Admin API via the url :8000/admin-api in a controlled way. We can do so by creating a Service and Route for it from inside 127.0.0.1.
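
    A sketch of that bootstrapping, assuming the Admin API listens on 127.0.0.1:8001 and naming the Service admin-api (as in the response shown below):

        # create a Service pointing back at the Admin API itself
        curl -X POST http://127.0.0.1:8001/services \
          --data name=admin-api \
          --data url=http://127.0.0.1:8001

        # expose that Service on the proxy under /admin-api
        curl -X POST http://127.0.0.1:8001/services/admin-api/routes \
          --data 'paths[]=/admin-api'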

    We can now transparently reach the Admin API through the proxy server, from outside 127.0.0.1:

        curl myhost.dev:8000/admin-api/services
        {
          "data": [
            {
              "id": "653b21bd-4d81-4573-ba00-177cc0108dec",
              "created_at": 1422386534,
              "updated_at": 1422386534,
              "name": "admin-api",
              "retries": 5,
              "protocol": "http",
              "host": "127.0.0.1",
              "port": 8001,
              "path": "/admin-api",
              "connect_timeout": 60000,
              "write_timeout": 60000,
              "read_timeout": 60000
            }
          ],
          "total": 1
        }

    From here, simply apply desired Kong-specific security controls (such as basic or key authentication, IP restrictions, or access control lists) as you would normally to any other Kong API.

    If you are using Docker to host Kong Gateway, you can accomplish a similar task with a declarative configuration.
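
    A sketch of such a file; the admin-api Service, the key-auth plugin, and the admin consumer with its secret key are illustrative names to be adapted to your environment:

        # illustrative declarative configuration; adjust names and the key
        _format_version: "1.1"

        services:
        - name: admin-api
          url: http://127.0.0.1:8001
          routes:
          - paths:
            - /admin-api
          plugins:
          - name: key-auth

        consumers:
        - username: admin
          keyauth_credentials:
          - key: secret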

    Under this configuration, the Admin API is available through /admin-api, but only for requests accompanied by the ?apikey=secret query parameter. Assuming the file is saved as kong.yml in the current directory, the container can then be started along the following lines:

        # KONG_DATABASE=off puts Kong in DB-less mode, which is required for the
        # declarative configuration to be loaded; the proxy port is published so
        # that /admin-api is reachable from outside the container
        docker run -d \
          -e "KONG_DATABASE=off" \
          -e "KONG_DECLARATIVE_CONFIG=/home/kong/kong.yml" \
          -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
          -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
          -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
          -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
          -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
          -v "$(pwd):/home/kong" \
          -p 8000:8000 \
          kong-ee

    With a PostgreSQL database, the same setup is instead bootstrapped through the Admin API at initialization time.
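
    A sketch of those steps, assuming the admin-api Service and Route created earlier are already in place and reusing the key-auth plugin, admin consumer, and secret key from the declarative example:

        # protect the admin-api Service with key authentication
        curl -X POST http://127.0.0.1:8001/services/admin-api/plugins \
          --data name=key-auth

        # create a Consumer and the credential expected by the examples below
        curl -X POST http://127.0.0.1:8001/consumers \
          --data username=admin

        curl -X POST http://127.0.0.1:8001/consumers/admin/key-auth \
          --data key=secret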

    In both cases, once Kong is up and running, the Admin API would be available but protected:

        curl myhost.dev:8000/admin-api/services
        => HTTP/1.1 401 Unauthorized

        curl myhost.dev:8000/admin-api/services?apikey=secret
        => HTTP/1.1 200 OK
        {
          "data": [
            {
              "ca_certificates": null,
              "client_certificate": null,
              "connect_timeout": 60000,
              ...
            }
          ]
        }

    Kong is tightly coupled with Nginx as an HTTP daemon, and can thus be integrated into environments with custom Nginx configurations. In this manner, use cases with complex security/access control requirements can use the full power of Nginx/OpenResty to build server/location blocks to house the Admin API as necessary. This allows such environments to leverage native Nginx authorization and authentication mechanisms, ACL modules, etc., in addition to providing the OpenResty environment on which custom/complex security controls can be built.
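
    As a purely illustrative sketch (not Kong's generated configuration), a custom server block guarding the Admin API might combine native Nginx ACLs and basic authentication like this:

        # illustrative only; adapt addresses, the credentials file, and the
        # upstream to your own Nginx/OpenResty template
        server {
            listen 10.10.10.3:8001;

            # native Nginx access control
            allow 10.10.10.4;
            allow 10.10.10.5;
            deny  all;

            # native Nginx basic authentication
            auth_basic           "Kong Admin API";
            auth_basic_user_file /etc/kong/admin-htpasswd;

            location / {
                # Admin API bound to the loopback interface
                proxy_pass http://127.0.0.1:8001;
            }
        }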

    For more information on integrating Kong into custom Nginx configurations, see the custom Nginx configuration documentation.

    Kong Gateway users can configure role-based access control (RBAC) to secure access to the Admin API. RBAC allows for fine-grained control over resource access based on a model of user roles and permissions. Users are assigned one or more roles, each of which in turn possesses one or more permissions granting or denying access to particular resources. In this way, access to specific Admin API resources can be tightly controlled, while scaling to allow complex, case-specific uses.
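
    A rough sketch of what this can look like with Kong Gateway's RBAC Admin API; the names, token, and endpoint below are examples, so consult the RBAC documentation for the exact settings and endpoints in your version:

        # enable RBAC enforcement in kong.conf (or via KONG_ENFORCE_RBAC=on)
        enforce_rbac = on

        # create an RBAC user with an access token (example name and token)
        curl -X POST http://127.0.0.1:8001/rbac/users \
          --data name=ops-admin \
          --data user_token=ops-admin-token

        # create a role that may only read Service resources, and grant it
        curl -X POST http://127.0.0.1:8001/rbac/roles \
          --data name=read-services
        curl -X POST http://127.0.0.1:8001/rbac/roles/read-services/endpoints \
          --data endpoint=/services \
          --data actions=read
        curl -X POST http://127.0.0.1:8001/rbac/users/ops-admin/roles \
          --data roles=read-services

        # subsequent Admin API requests must carry the user's token
        curl http://127.0.0.1:8001/services \
          --header 'Kong-Admin-Token: ops-admin-token'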