Accessing CloudEvent traces

    You must have a Knative cluster running with the Eventing component installed.

    With the exception of importers, Knative Eventing tracing is configured through the config-tracing ConfigMap in the knative-eventing namespace.

    Most importers do not use the ConfigMap and instead use a static 1% sampling rate.

    You can use the config-tracing ConfigMap to configure the following Eventing components:

    • Brokers
    • Triggers
    • InMemoryChannel
    • ApiServerSource
    • PingSource
    • GitlabSource
    • PrometheusSource

    Example:

    The following example config-tracing ConfigMap samples 10% of all CloudEvents:
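
    (A sketch, assuming a Zipkin backend; the zipkin-endpoint URL is a placeholder for your collector's address.)

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: config-tracing
          namespace: knative-eventing
        data:
          backend: "zipkin"
          zipkin-endpoint: "http://zipkin.istio-system.svc.cluster.local:9411/api/v2/spans"
          sample-rate: "0.1"
          debug: "false"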

    You can configure the config-tracing ConfigMap with the following options:

    • backend: Specifies the tracing backend to use. Valid values include zipkin and stackdriver.

    • zipkin-endpoint: Specifies the URL of the Zipkin collector to which you want traces sent. Must be set if backend is set to zipkin.

    • stackdriver-project-id: Specifies the GCP project ID into which the Stackdriver traces are written. Requires backend to be set to stackdriver. If unspecified, the GCP project ID is read from GCP metadata when running on GCP.

    • sample-rate: Specifies the sampling rate. Valid values are decimals from 0 to 1 (interpreted as a float64), which indicate the probability that any given request is sampled. For example, a value of 0.5 gives each request a 50% sampling probability.

    • debug: Enables debugging. Valid values are true or false. Defaults to false when not specified. Set to true to enable debug mode, which forces the sample-rate to 1.0 and sends all spans to the server.

    To view your current configuration, run the following command:
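
        kubectl -n knative-eventing get configmap config-tracing -o yaml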

    To edit and then immediately deploy changes to your ConfigMap, run the following command:
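
        kubectl -n knative-eventing edit configmap config-tracing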

    To access the traces, you can use either the Zipkin or Jaeger tool. Details about using these tools to access traces are provided in the Knative Serving observability section.
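
    For example, if you are using Zipkin, one way to reach its UI is to port-forward the collector Service and browse to http://localhost:9411. This is a sketch; the istio-system namespace and the zipkin Service name are assumptions that depend on how Zipkin was installed in your cluster:

        kubectl port-forward -n istio-system service/zipkin 9411:9411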

    For this example, assume the following details:

    • Everything happens in the includes-incoming-trace-id-2qszn namespace.
    • The Broker is named br.
    • There are two Triggers associated with the Broker (a sketch of the logger Trigger follows this list):
      • transformer - Filters to only allow events whose type matches the type of the event emitted by the sender. Sends the event to the Kubernetes Service transformer, which replies with an identical event, except that the replied event's type is logger.
      • logger - Filters to only allow events whose type is logger. Sends the event to the Kubernetes Service logger.
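
    For reference, the logger Trigger described above might look like the following sketch, assuming the eventing.knative.dev/v1 API; all names come from the list above:

        apiVersion: eventing.knative.dev/v1
        kind: Trigger
        metadata:
          name: logger
          namespace: includes-incoming-trace-id-2qszn
        spec:
          broker: br
          filter:
            attributes:
              type: logger
          subscriber:
            ref:
              apiVersion: v1
              kind: Service
              name: logger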

    Given this scenario, the expected path and behavior of an event is as follows:

    1. The sender Pod sends the request to the Broker (a sample request is sketched after this list).
    2. Go to the Broker’s ingress Pod.
    3. Go to the imc-dispatcher Channel (imc stands for InMemoryChannel).
    4. Go to both Triggers.
      1. Go to the Broker’s filter Pod for the Trigger logger. The Trigger’s filter ignores this event.
      2. Go to the Broker’s filter Pod for the Trigger transformer. The filter passes, so the event goes to the Kubernetes Service the Trigger points at, also named transformer.
        1. The transformer Pod replies with the modified event.
        2. Go to the imc-dispatcher Channel.
        3. Go to the Broker’s ingress Pod.
        4. Go to the imc-dispatcher Channel.
        5. Go to both Triggers.
          1. Go to the Broker’s filter Pod for the Trigger transformer. The Trigger’s filter ignores the event.
          2. Go to the Broker’s filter Pod for the Trigger logger. The filter passes, so the event goes to the Kubernetes Service logger.
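
    For illustration, the initial request in step 1 could be sent manually as a binary-mode CloudEvent over HTTP. This is a sketch: the Broker URL below follows the multi-tenant Broker address pattern and may differ in your cluster, and the event type test-event is a hypothetical placeholder, because the type used by the sender is not shown above.

        # Run a throwaway curl Pod in the example namespace and POST a CloudEvent to the Broker.
        kubectl -n includes-incoming-trace-id-2qszn run sender --image=curlimages/curl \
          --rm -it --restart=Never --command -- \
          curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/includes-incoming-trace-id-2qszn/br" \
          -X POST \
          -H "Ce-Id: 1234" \
          -H "Ce-Specversion: 1.0" \
          -H "Ce-Type: test-event" \
          -H "Ce-Source: manual-sender" \
          -H "Content-Type: application/json" \
          -d '{"msg": "Hello"}'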

    This is a screenshot of the trace view in Zipkin. All the red letters have been added to the screenshot and correspond to the expectations earlier in this section.

    This is the same screenshot without the annotations.

    Raw Trace

    If you are interested, here is the raw JSON of the trace.