Traffic Log

    Configuring access logs in Kuma is a 3-step process:

    1. Configure one or more logging backends on the Mesh
    2. Create a TrafficLog policy that selects the traffic to log and the backend to send it to
    3. Set up the log collector itself (for example a file on disk, a TCP collector such as Logstash, or an aggregator such as Loki)

    A logging backend is essentially a sink for access logs. In the current release of Kuma, a logging backend can be either a file or a TCP log collector, such as Logstash.

    type: Mesh
    name: default
    logging:
      # TrafficLog policies may leave the `backend` field undefined.
      # In that case the logs will be forwarded into the `defaultBackend` of that Mesh.
      defaultBackend: file
      # List of logging backends that can be referred to by name
      # from TrafficLog policies of that Mesh.
      backends:
        - name: logstash
          # Use the `format` field to adjust the access log format to your use case.
          format: '{"start_time": "%START_TIME%", "source": "%KUMA_SOURCE_SERVICE%", "destination": "%KUMA_DESTINATION_SERVICE%", "source_address": "%KUMA_SOURCE_ADDRESS_WITHOUT_PORT%", "destination_address": "%UPSTREAM_HOST%", "duration_millis": "%DURATION%", "bytes_received": "%BYTES_RECEIVED%", "bytes_sent": "%BYTES_SENT%"}'
          type: tcp
          # Use the `conf` field to configure a TCP logging backend.
          conf:
            # Address of a log collector.
            address: 127.0.0.1:5000
        - name: file
          type: file
          # Use the `conf` field to configure a file-based logging backend.
          conf:
            path: /tmp/access.log
          # When the `format` field is omitted, the default access log format will be used.

    You need to create a TrafficLog policy to select a subset of traffic and forward its access logs into one of the logging backends configured for that Mesh.

    apiVersion: kuma.io/v1alpha1
    kind: TrafficLog
    metadata:
      name: all-traffic # an illustrative policy name
    mesh: default
    spec:
      # This TrafficLog policy applies to all traffic in that Mesh.
      sources:
        - match:
            kuma.io/service: '*'
      destinations:
        - match:
            kuma.io/service: '*'
      # When the `backend` field is omitted, the logs will be forwarded into the `defaultBackend` of that Mesh.
    A TrafficLog policy can also target specific services. On Kubernetes:

    apiVersion: kuma.io/v1alpha1
    kind: TrafficLog
    metadata:
      name: backend-to-database-traffic
    spec:
      # This TrafficLog policy applies only to traffic from service `backend` to service `database`.
      sources:
        - match:
            kuma.io/service: backend_kuma-example_svc_8080
      destinations:
        - match:
            kuma.io/service: database_kuma-example_svc_5432
      conf:
        # Forward the logs into the logging backend named `logstash`.
        backend: logstash
    The same policy in the universal format:

    type: TrafficLog
    name: backend-to-database-traffic
    mesh: default
    # This TrafficLog policy applies only to traffic from service `backend` to service `database`.
    sources:
      - match:
          kuma.io/service: backend
    destinations:
      - match:
          kuma.io/service: database
    conf:
      # Forward the logs into the logging backend named `logstash`.
      backend: logstash
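    To apply these policies, use the standard workflow for your environment; the file name below is illustrative:

    # Kubernetes
    kubectl apply -f traffic-log.yaml

    # Universal
    kumactl apply -f traffic-log.yaml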

    When the `backend` field of a TrafficLog policy is omitted, the logs will be forwarded into the `defaultBackend` of that Mesh.

    Kuma provides a simple solution to aggregate the logs of your containers and the access logs of your data-planes.

    On Kubernetes:

    1. Install Loki

    To install Loki, run `kumactl install logging | kubectl apply -f -`. This will automatically deploy Loki in the `kuma-logging` namespace.
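    For example (the exact pod names you will see depend on your Kuma version):

    kumactl install logging | kubectl apply -f -
    # verify that the logging components started in the kuma-logging namespace
    kubectl get pods -n kuma-logging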

    2. Update the mesh

    The logging backend needs to be configured to send the access logs of your data-planes to stdout. Loki will directly retrieve the logs from stdout of your containers.

    apiVersion: kuma.io/v1alpha1
    kind: Mesh
    metadata:
      name: default
    spec:
      logging:
        defaultBackend: loki
        backends:
          - name: loki
            type: file
            conf:
              path: /dev/stdout

    3. Configure Grafana to visualize the logs

    Once the updated Mesh is applied to your Kubernetes cluster, you can configure a new datasource in Grafana to visualise your containers’ logs and your access logs.

    Use the `kubectl port-forward` command to access Grafana.
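    The exact service name, namespace and port depend on how Grafana was deployed; as a sketch, assuming it was installed with `kumactl install metrics` into the `kuma-metrics` namespace:

    # forward local port 3000 to the Grafana service (service name and port are assumptions)
    kubectl port-forward svc/grafana -n kuma-metrics 3000:80
    # then open http://localhost:3000 in a browser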

    On Universal:

    1. Install Loki

    To install Loki, use the instructions in the official Loki GitHub repository.

    2. Update the mesh

    The logging backend needs to be configured to send the access logs of your data-planes to stdout. Loki will directly retrieve the logs from stdout of your containers.

    type: Mesh
    name: default
    logging:
      defaultBackend: loki
      backends:
        - name: loki
          type: file
          conf:
            path: /dev/stdout

    3. Configure Grafana to visualize the logs

    To visualise your containers’ logs and your access logs you need to have Grafana up and running. If you don’t have Grafana, you can install it by following the official Grafana installation documentation.

    With Grafana installed, you can configure a new datasource so Grafana is able to retrieve the logs from Loki.
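    One way to do that is through Grafana's datasource provisioning files; a minimal sketch, assuming Loki is reachable from Grafana at `http://loki:3100` (Loki's default HTTP port):

    # e.g. /etc/grafana/provisioning/datasources/loki.yaml (path and URL are assumptions)
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki:3100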

    (Figure: Loki datasource configuration in Grafana.)

    At this point you can visualize your containers’ logs and your access logs in Grafana by choosing the Loki datasource in the Explore section.

    Nice to have

    If you are also using the Traffic Trace policy, you can configure a new datasource for Jaeger to visualise your traces directly in Grafana.
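    A similar provisioning sketch for a Jaeger datasource, assuming its query service is reachable at the default port 16686 (the hostname is an assumption):

    apiVersion: 1
    datasources:
      - name: Jaeger
        type: jaeger
        access: proxy
        url: http://jaeger-query:16686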

    Having your logs and traces in the same visualisation tool can come in really handy. By adding the trace ID to your application logs, you can jump from a log line straight to the related Jaeger trace. To learn more about this setup, see the Grafana documentation on Loki derived fields.

    Kuma gives you full control over the format of access logs.

    The shape of a single log record is defined by a template string that uses command operators to extract and format data about a connection or an HTTP request.

    E.g.,
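    A sketch of such a format string, built from command operators that appear elsewhere on this page:

    [%START_TIME%] %KUMA_SOURCE_SERVICE% -> %KUMA_DESTINATION_SERVICE% took %DURATION%ms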

    where %START_TIME% and %KUMA_SOURCE_SERVICE% are examples of available command operators.

    A complete set of supported command operators consists of:

    1. All command operators defined by Envoy
    2. Command operators unique to Kuma

    The latter include %KUMA_MESH%, %KUMA_SOURCE_SERVICE%, %KUMA_DESTINATION_SERVICE% and %KUMA_SOURCE_ADDRESS_WITHOUT_PORT%, all of which appear in the default format strings below.

    All access log command operators are valid to use with both TCP and HTTP traffic.

    If a command operator is specific to HTTP traffic, such as %REQ(X?Y):Z% or %RESP(X?Y):Z%, it will be replaced by the symbol "-" in case of TCP traffic.

    Internally, Kuma determines the traffic protocol based on the value of the `kuma.io/protocol` tag on the inbound interface of the destination Dataplane.
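    As a sketch in the universal format (the name, address and port are illustrative), a Dataplane that marks its inbound traffic as HTTP looks like this:

    type: Dataplane
    mesh: default
    name: backend-1
    networking:
      address: 192.168.0.1
      inbound:
        - port: 8080
          tags:
            kuma.io/service: backend
            # tells Kuma to treat traffic to this inbound as HTTP
            kuma.io/protocol: http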

    The default format string for TCP traffic is:

    [%START_TIME%] %RESPONSE_FLAGS% %KUMA_MESH% %KUMA_SOURCE_ADDRESS_WITHOUT_PORT%(%KUMA_SOURCE_SERVICE%)->%UPSTREAM_HOST%(%KUMA_DESTINATION_SERVICE%) took %DURATION%ms, sent %BYTES_SENT% bytes, received: %BYTES_RECEIVED% bytes

    The default format string for HTTP traffic is:

    [%START_TIME%] %KUMA_MESH% "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%KUMA_SOURCE_SERVICE%" "%KUMA_DESTINATION_SERVICE%" "%KUMA_SOURCE_ADDRESS_WITHOUT_PORT%" "%UPSTREAM_HOST%"

    To provide different formats for TCP and HTTP logging, you can define two separate logging backends with the same address but different formats, and then define two TrafficLog entities, one for TCP and one for HTTP, with the kuma.io/protocol: http selector. A sketch of this setup follows.
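    A minimal sketch in the universal format, assuming a single Logstash collector at 127.0.0.1:5000; the backend and policy names are illustrative, and only the HTTP-selecting TrafficLog is shown (a companion policy would point the remaining traffic at `logstash-tcp`):

    type: Mesh
    name: default
    logging:
      defaultBackend: logstash-tcp
      backends:
        - name: logstash-tcp
          type: tcp
          # `format` omitted: the default TCP format is used
          conf:
            address: 127.0.0.1:5000
        - name: logstash-http
          type: tcp
          format: '[%START_TIME%] %KUMA_MESH% "%REQ(:METHOD)% %REQ(:PATH)%" %RESPONSE_CODE%'
          conf:
            address: 127.0.0.1:5000

    # applied as a separate resource
    type: TrafficLog
    name: all-http-traffic
    mesh: default
    sources:
      - match:
          kuma.io/service: '*'
    destinations:
      - match:
          kuma.io/service: '*'
          kuma.io/protocol: http
    conf:
      backend: logstash-http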

    If you need an access log with entries in JSON format, you have to provide a template string that is a valid JSON object, e.g.

    {
      "start_time": "%START_TIME%",
      "source": "%KUMA_SOURCE_SERVICE%",
      "destination": "%KUMA_DESTINATION_SERVICE%",
      "source_address": "%KUMA_SOURCE_ADDRESS_WITHOUT_PORT%",
      "destination_address": "%UPSTREAM_HOST%",
      "duration_millis": "%DURATION%",
      "bytes_received": "%BYTES_RECEIVED%",
      "bytes_sent": "%BYTES_SENT%"
    }

    To use it with Logstash, use the json_lines codec and make sure your JSON is formatted on a single line. A sketch of the Logstash side follows.
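    As a sketch of the collector configuration (the port matches the `address: 127.0.0.1:5000` backend above; the output section is just an illustration):

    input {
      tcp {
        port => 5000
        codec => json_lines
      }
    }
    output {
      # forward the parsed access logs wherever you need them; stdout is handy for debugging
      stdout {
        codec => rubydebug
      }
    }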