Accessing CloudEvent traces
You must have a Knative cluster running with the Eventing component installed.
With the exception of importers, Knative Eventing tracing is configured through the `config-tracing` ConfigMap in the `knative-eventing` namespace. Most importers do not use the ConfigMap and instead use a static 1% sampling rate.
You can use the `config-tracing` ConfigMap to configure the following Eventing components:
- Brokers
- Triggers
- InMemoryChannel
- ApiServerSource
- PingSource
- GitlabSource
- PrometheusSource
Example:

The following example `config-tracing` ConfigMap samples 10% of all CloudEvents:
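A minimal sketch of such a ConfigMap; the Zipkin endpoint shown is an assumption and depends on where your collector is installed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-tracing
  namespace: knative-eventing
data:
  backend: "zipkin"
  # Assumed collector address; adjust to match your Zipkin installation.
  zipkin-endpoint: "http://zipkin.istio-system.svc.cluster.local:9411/api/v2/spans"
  # Sample 10% of all CloudEvents.
  sample-rate: "0.1"
```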
You can configure your `config-tracing` with the following options:

- `zipkin-endpoint`: Specifies the URL of the Zipkin collector to which you want to send the traces. Must be set if `backend` is set to `zipkin`.
- `stackdriver-project-id`: Specifies the GCP project ID into which the Stackdriver traces are written. You must specify the `backend` as `stackdriver`. If unspecified, the GCP project ID is read from GCP metadata when running on GCP.
- `sample-rate`: Specifies the sampling rate. Valid values are decimals from `0` to `1` (interpreted as a float64), which indicate the probability that any given request is sampled. An example value is `0.5`, which gives each request a 50% sampling probability.
- `debug`: Enables debugging. Valid values are `true` or `false`. Defaults to `false` when not specified. Set to `true` to enable debug mode, which forces the `sample-rate` to `1.0` and sends all spans to the server.
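For example, while troubleshooting you might enable debug mode so that every span is captured regardless of the sampling rate. This is a sketch of the `data` section only; the endpoint value is an assumption:

```yaml
data:
  backend: "zipkin"
  # Assumed collector address; adjust for your installation.
  zipkin-endpoint: "http://zipkin.istio-system.svc.cluster.local:9411/api/v2/spans"
  # Debug mode forces sample-rate to 1.0, so all spans are sent.
  debug: "true"
```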
To view your current configuration:
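For example, with `kubectl`:

```bash
kubectl -n knative-eventing get configmap config-tracing -o yaml
```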
To edit and then immediately deploy changes to your ConfigMap, run the following command:
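For example:

```bash
kubectl -n knative-eventing edit configmap config-tracing
```

Saving and closing the editor applies the updated ConfigMap to the cluster immediately.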
To access the traces, use either the Zipkin or Jaeger tool. Details about using these tools to access traces are provided in the Knative Serving observability section.
For this example, assume the following details:

- Everything happens in the `includes-incoming-trace-id-2qszn` namespace.
- The Broker is named `br`.
- There are two Triggers associated with the Broker:
    - `transformer`: Filters to only allow events whose type is `transformer`. Sends the event to the Kubernetes Service `transformer`, which will reply with an identical event, except the replied event's type will be `logger`.
    - `logger`: Filters to only allow events whose type is `logger`. Sends the event to the Kubernetes Service `logger`.
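The two Triggers described above might look like the following sketch; the field values simply follow the names assumed in this example:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: transformer
  namespace: includes-incoming-trace-id-2qszn
spec:
  broker: br
  filter:
    attributes:
      type: transformer          # only events of type "transformer" pass
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: transformer
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: logger
  namespace: includes-incoming-trace-id-2qszn
spec:
  broker: br
  filter:
    attributes:
      type: logger               # only events of type "logger" pass
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: logger
```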
Given this scenario, the expected path and behavior of an event is as follows:

1. The `sender` Pod sends the request to the Broker.
1. Go to the Broker's ingress Pod.
1. Go to the `imc-dispatcher` Channel (imc stands for InMemoryChannel).
1. Go to both Triggers.
    1. Go to the Broker's filter Pod for the Trigger `logger`. The Trigger's filter ignores this event.
    1. Go to the Broker's filter Pod for the Trigger `transformer`. The filter passes, so the event goes to the Kubernetes Service pointed at, also named `transformer`.
1. The `transformer` Pod replies with the modified event.
1. Go to the InMemory dispatcher.
1. Go to the Broker's ingress Pod.
1. Go to the InMemory dispatcher.
1. Go to both Triggers.
    1. Go to the Broker's filter Pod for the Trigger `transformer`. The Trigger's filter ignores the event.
    1. Go to the Broker's filter Pod for the Trigger `logger`. The filter passes, so the event goes to the Kubernetes Service pointed at, named `logger`.
This is a screenshot of the trace view in Zipkin. All the red letters have been added to the screenshot and correspond to the expectations earlier in this section:
This is the same screenshot without the annotations.
If you are interested, here is the raw JSON of the trace.