OpenCensus Agent

    To learn how Istio handles tracing, visit this task’s overview.

    • Set up Istio by following the instructions in the Installation guide.

      The egress gateway and access logging will be enabled if you install the demo configuration profile.

    • Deploy the sleep sample app to use as a test source for sending requests. If you have enabled automatic sidecar injection, run the following command to deploy the sample app:

      $ kubectl apply -f samples/sleep/sleep.yaml

      Otherwise, manually inject the sidecar before deploying the sleep application with the following command:

      $ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml)

      You can use any pod with curl installed as a test source.

    • Set the SOURCE_POD environment variable to the name of your source pod:

      $ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
    • Install Jaeger into your cluster.

    • Deploy the Bookinfo sample application.

    Install Istio with the following configuration:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        defaultProviders:
          tracing:
          - "opencensus"
        enableTracing: true
        extensionProviders:
        - name: "opencensus"
          opencensus:
            service: "opentelemetry-collector.istio-system.svc.cluster.local"
            port: 55678
            context:
            - W3C_TRACE_CONTEXT

    With this configuration, Istio is installed with the OpenCensus Agent as the default tracer, and trace data will be sent to an OpenTelemetry backend.
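
    To apply it, one option (a sketch; the file name ocagent-tracing.yaml is illustrative) is to save the IstioOperator spec above to a file and pass it to istioctl:

    $ istioctl install -f ocagent-tracing.yaml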

    By default, Istio’s OpenCensus Agent tracing will attempt to read and write four types of trace headers:

    • B3,
    • gRPC’s binary trace header,
    • W3C Trace Context,
    • and Cloud Trace Context.

    If you supply multiple values, the proxy will attempt to read trace headers in the specified order, using the first one that parses successfully, and will write all headers. This permits interoperability between services that use different headers; e.g., a service that propagates B3 headers and one that propagates W3C Trace Context headers can participate in the same trace. In this example, we use only W3C Trace Context; see the sketch below for a multi-header variant.
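
    For illustration, a variant of the extension provider used in this task that reads both B3 and W3C Trace Context headers, preferring B3, might look like this (a sketch; only the context list differs from the configuration above):

    extensionProviders:
    - name: "opencensus"
      opencensus:
        service: "opentelemetry-collector.istio-system.svc.cluster.local"
        port: 55678
        # Headers are tried in the listed order on inbound requests;
        # all listed formats are written on outbound requests.
        context:
        - B3
        - W3C_TRACE_CONTEXT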

    In the default profile, the sampling rate is 1%. Increase it to 100% using the Telemetry API:
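
    A minimal sketch of such a Telemetry resource, assuming a mesh-wide resource named mesh-default in the istio-system root namespace:

    $ kubectl apply -f - <<EOF
    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system
    spec:
      tracing:
      # Sample every request; the default in this task's profile is 1%.
      - randomSamplingPercentage: 100.00
    EOF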

    The OpenTelemetry collector supports exporting traces to several backends by default in its core distribution. Other backends are available in the contrib distribution of the OpenTelemetry collector.

    Deploy and configure the collector to receive and export spans to the Jaeger instance:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: opentelemetry-collector
      namespace: istio-system
      labels:
        app: opentelemetry-collector
    data:
      config: |
        receivers:
          opencensus:
            endpoint: 0.0.0.0:55678
        processors:
          memory_limiter:
            limit_mib: 100
            spike_limit_mib: 10
            check_interval: 5s
        exporters:
          zipkin:
            # Export via zipkin for easy querying
            endpoint: http://zipkin.istio-system.svc:9411/api/v2/spans
          logging:
            loglevel: debug
        extensions:
          health_check:
        service:
          extensions:
          - health_check
          pipelines:
            traces:
              receivers:
              - opencensus
              processors:
              - memory_limiter
              exporters:
              - zipkin
              - logging
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: opentelemetry-collector
      namespace: istio-system
      labels:
        app: opentelemetry-collector
    spec:
      type: ClusterIP
      selector:
        app: opentelemetry-collector
      ports:
      - name: grpc-opencensus
        port: 55678
        protocol: TCP
        targetPort: 55678
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: opentelemetry-collector
      namespace: istio-system
      labels:
        app: opentelemetry-collector
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: opentelemetry-collector
      template:
        metadata:
          labels:
            app: opentelemetry-collector
        spec:
          containers:
          - name: opentelemetry-collector
            image: "otel/opentelemetry-collector:0.49.0"
            imagePullPolicy: IfNotPresent
            command:
            - "/otelcol"
            - "--config=/conf/config.yaml"
            ports:
            - name: grpc-opencensus
              containerPort: 55678
              protocol: TCP
            volumeMounts:
            - name: opentelemetry-collector-config
              mountPath: /conf
            readinessProbe:
              httpGet:
                path: /
                port: 13133
            resources:
              requests:
                cpu: 40m
                memory: 100Mi
          volumes:
          - name: opentelemetry-collector-config
            configMap:
              name: opentelemetry-collector
              items:
              - key: config
                path: config.yaml
    EOF
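
    Before generating traffic, you can confirm the collector is running with a standard kubectl query (using the app label from the manifest above):

    $ kubectl -n istio-system get pods -l app=opentelemetry-collector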

    The Remotely Accessing Telemetry Addons task details how to configure access to the Istio addons through a gateway.

    For testing (and temporary access), you may also use port-forwarding. Use the following command, assuming you’ve deployed Jaeger to the istio-system namespace:

    $ istioctl dashboard jaeger
    1. When the Bookinfo application is up and running, access http://$GATEWAY_URL/productpage one or more times to generate trace information.

      To see trace data, you must send requests to your service. The number of requests depends on Istio’s sampling rate, which can be configured using the Telemetry API. With the default sampling rate of 1%, you need to send at least 100 requests before the first trace is visible. To send 100 requests to the productpage service, use the following command:

      $ for i in $(seq 1 100); do curl -s -o /dev/null "http://$GATEWAY_URL/productpage"; done
    2. From the left-hand pane of the dashboard, select productpage.default from the Service drop-down list and click Find Traces:

      Tracing Dashboard

    3. Click on the most recent trace at the top to see the details corresponding to the latest request to /productpage:

      Detailed Trace View

    4. The trace is composed of a set of spans, where each span corresponds to a Bookinfo service invoked during the execution of a /productpage request, or to an internal Istio component, for example: istio-ingressgateway.

    As you also configured the logging exporter in the OpenTelemetry collector, you can see traces in the collector’s logs as well:
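
    For example, assuming the Deployment name opentelemetry-collector from the manifest above:

    $ kubectl -n istio-system logs deploy/opentelemetry-collector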

    1. Remove any istioctl processes that may still be running using control-C or:

      $ killall istioctl
    2. If you are not planning to explore any follow-on tasks, refer to the Bookinfo cleanup instructions to shut down the application.

    3. Remove the Jaeger addon:

      $ kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-1.15/samples/addons/jaeger.yaml
    4. Remove the OpenTelemetry Collector:

      $ kubectl delete -n istio-system cm opentelemetry-collector
      $ kubectl delete -n istio-system svc opentelemetry-collector
      $ kubectl delete -n istio-system deploy opentelemetry-collector
    5. Remove, or set to "", the meshConfig.extensionProviders and meshConfig.defaultProviders settings in your Istio install configuration.
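
       One way to do this, as a sketch mirroring the IstioOperator overlay used earlier (whether empty values fully clear previously set fields depends on how you manage your installation):

       apiVersion: install.istio.io/v1alpha1
       kind: IstioOperator
       spec:
         meshConfig:
           defaultProviders:
             # Clear the default tracing provider configured earlier.
             tracing: []
           extensionProviders: []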