Forwarding logs to external third-party logging systems

    To send logs to other log aggregators, you use the OKD Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.

    When you forward logs externally, the Red Hat OpenShift Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.

    Alternatively, you can create a config map to use the Fluentd forward protocol or the syslog protocol to send logs to external systems. However, these methods for forwarding logs are deprecated in OKD and will be removed in a future release.

    You cannot use the config map methods and the Cluster Log Forwarder in the same cluster.

    Forwarding cluster logs to external third-party systems requires a combination of outputs and pipelines specified in a ClusterLogForwarder custom resource (CR) to send logs to specific endpoints inside and outside of your OKD cluster. You can also use inputs to forward the application logs associated with a specific project to an endpoint.

    • An output is the destination for log data that you define, or where you want the logs sent. An output can be one of the following types:

      • elasticsearch. An external Elasticsearch instance. The elasticsearch output can use a TLS connection.

      • fluentdForward. An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocol. The fluentdForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS.

      • syslog. An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection.

      • cloudwatch. Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).

      • loki. Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.

      • kafka. A Kafka broker. The kafka output can use a TCP or TLS connection.

      • default. The internal OKD Elasticsearch instance. You are not required to configure the default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator.

      If the output URL scheme requires TLS (HTTPS, TLS, or UDPS), then TLS server-side authentication is enabled. To also enable client authentication, the output must name a secret in the openshift-logging project. The secret must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent. For an example command that creates such a secret, see the sketch after this list.

    • A pipeline defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:

      • application. Container logs generated by user applications running in the cluster, except infrastructure container applications.

      • infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects and journal logs sourced from the node file system.

      • audit. Audit logs generated by the node audit system, auditd, Kubernetes API server, OpenShift API server, and OVN network.

      You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.

    • An input forwards the application logs associated with a specific project to a pipeline.
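
    For example, to create the TLS secret referenced by an output, you can use a command along the following lines. This is a sketch only: the secret name and the local certificate file paths are placeholders, not values from this document.

      $ oc create secret generic es-tls-secret \
          --from-file=tls.crt=client.crt \
          --from-file=tls.key=client.key \
          --from-file=ca-bundle.crt=ca.crt \
          -n openshift-logging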

    In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter.

    Note the following:

    • If a ClusterLogForwarder CR object exists, logs are not forwarded to the default Elasticsearch instance, unless there is a pipeline with the default output.

    • By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because the internal store does not provide secure storage for audit data. If this default configuration meets your needs, do not configure the Log Forwarding API.

    • If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.

    • You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.

    • The internal OKD Elasticsearch instance does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OpenShift Logging does not comply with those regulations.

    • You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration.

    The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-project project to the internal Elasticsearch instance.

    Sample log forwarding outputs and pipelines
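
    The sample listing itself is not reproduced in this excerpt. The following sketch is consistent with the callouts below: names that the callouts mention (such as elasticsearch-secure and my-app-logs) are used as given, while the URLs, the kafka-app output name, and the label values are illustrative assumptions.

    apiVersion: "logging.openshift.io/v1"
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
       - name: elasticsearch-secure (3)
         type: "elasticsearch"
         url: https://elasticsearch.secure.com:9200
         secret:
            name: elasticsearch
       - name: elasticsearch-insecure (4)
         type: "elasticsearch"
         url: http://elasticsearch.insecure.com:9200
       - name: kafka-app (5)
         type: "kafka"
         url: tls://kafka.secure.com:9093/app-topic
      inputs: (6)
       - name: my-app-logs
         application:
            namespaces:
            - my-project
      pipelines:
       - name: audit-logs (7)
         inputRefs:
          - audit
         outputRefs:
          - elasticsearch-secure
          - default
         parse: json (8)
         labels:
           secure: "true" (9)
           datacenter: "east"
       - name: infrastructure-logs (10)
         inputRefs:
          - infrastructure
         outputRefs:
          - elasticsearch-insecure
         labels:
           datacenter: "west"
       - name: my-app (11)
         inputRefs:
          - my-app-logs
         outputRefs:
          - default
       - inputRefs: (12)
          - application
         outputRefs:
          - kafka-app
         labels:
           datacenter: "south"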

    1The name of the ClusterLogForwarder CR must be instance.
    2The namespace for the ClusterLogForwarder CR must be openshift-logging.
    3Configuration for a secure Elasticsearch output using a secret with a secure URL.
    • A name to describe the output.

    • The type of output: elasticsearch.

    • The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.

    • The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.

    4Configuration for an insecure Elasticsearch output:
    • A name to describe the output.

    • The type of output: elasticsearch.

    • The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.

    5Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL
    • A name to describe the output.

    • The type of output: kafka.

    • Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix.

    6Configuration for an input to filter application logs from the my-project project.
    7Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
    • A name to describe the pipeline.

    • The inputRefs is the log type, in this example audit.

    • The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance.

    • Optional: Labels to add to the logs.

    8Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
    9Optional: String. One or more labels to add to the logs. Quote values like “true” so they are recognized as string values, not as a boolean.
    10Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
    11Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance.
    • A name to describe the pipeline.

    • The inputRefs is a specific input: my-app-logs.

    • The outputRefs is default.

    • Optional: String. One or more labels to add to the logs.

    12Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
    • The inputRefs is the log type, in this example application.

    • The outputRefs is the name of the output to use.

    • Optional: String. One or more labels to add to the logs.

    If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OKD rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.

    Supported log data output types in OpenShift Logging 5.1

    Red Hat OpenShift Logging 5.1 provides the following output types and protocols for sending log data to target log collectors.

    Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.

    Output types     Protocols             Tested with
    elasticsearch    elasticsearch         Elasticsearch 6.8.1, Elasticsearch 6.8.4, Elasticsearch 7.12.2
    fluentdForward   fluentd forward v1    fluentd 1.7.4, logstash 7.10.1
    kafka            kafka 0.11            kafka 2.4.1, kafka 2.7.0
    syslog           RFC-3164, RFC-5424    rsyslog-8.39.0

    Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.

    Supported log data output types in OpenShift Logging 5.2

    Red Hat OpenShift Logging 5.2 provides the following output types and protocols for sending log data to target log collectors.

    Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.

    Output types        Protocols                   Tested with
    Amazon CloudWatch   REST over HTTPS             The current version of Amazon CloudWatch
    elasticsearch       elasticsearch               Elasticsearch 6.8.1, Elasticsearch 6.8.4, Elasticsearch 7.12.2
    fluentdForward      fluentd forward v1          fluentd 1.7.4, logstash 7.10.1
    Loki                REST over HTTP and HTTPS    Loki 2.3.0 deployed on OCP and Grafana Labs
    kafka               kafka 0.11                  kafka 2.4.1, kafka 2.7.0
    syslog              RFC-3164, RFC-5424          rsyslog-8.39.0

    Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.

    Forwarding logs to an external Elasticsearch instance

    You can optionally forward logs to an external Elasticsearch instance in addition to, or instead of, the internal OKD Elasticsearch instance. You are responsible for configuring the external log aggregator to receive log data from OKD.

    To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

    To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. You do not need to create a default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator.

    If you want to forward logs to only the internal OKD Elasticsearch instance, you do not need to create a ClusterLogForwarder CR.

    Prerequisites

    • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

    Procedure

    1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

      apiVersion: "logging.openshift.io/v1"
      kind: ClusterLogForwarder
      metadata:
        name: instance (1)
        namespace: openshift-logging (2)
      spec:
        outputs:
         - name: elasticsearch-insecure (3)
           type: "elasticsearch" (4)
           url: http://elasticsearch.insecure.com:9200 (5)
         - name: elasticsearch-secure
           type: "elasticsearch"
           url: https://elasticsearch.secure.com:9200 (6)
           secret:
              name: es-secret (7)
        pipelines:
         - name: application-logs (8)
           inputRefs: (9)
            - application
            - audit
           outputRefs:
            - elasticsearch-secure (10)
            - default (11)
           parse: json (12)
           labels:
             myLabel: "myValue" (13)
         - name: infrastructure-audit-logs (14)
           inputRefs:
            - infrastructure
           outputRefs:
            - elasticsearch-insecure
           labels:
             logs: "audit-infra"
      1The name of the ClusterLogForwarder CR must be instance.
      2The namespace for the ClusterLogForwarder CR must be openshift-logging.
      3Specify a name for the output.
      4Specify the elasticsearch type.
      5Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address.
      6For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret.
      7For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following “Example: Setting secret that contains a username and password.”
      8Optional: Specify a name for the pipeline.
      9Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
      10Specify the name of the output to use when forwarding logs with this pipeline.
      11Optional: Specify the default output to send the logs to the internal Elasticsearch instance.
      12Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
      13Optional: String. One or more labels to add to the logs.
      14Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      • A name to describe the pipeline.

      • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.

      • The outputRefs is the name of the output to use.

      • Optional: String. One or more labels to add to the logs.

    2. Create the CR object:

      $ oc create -f <file-name>.yaml

    Example: Setting a secret that contains a username and password

    You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.

    For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.

    1. Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default.

      apiVersion: v1
      kind: Secret
      metadata:
        name: openshift-test-secret
      data:
        username: dGVzdHVzZXJuYW1lCg==
        password: dGVzdHBhc3N3b3JkCg==
    2. Create the secret:

      $ oc create -n openshift-logging -f openshift-test-secret.yaml
    3. Specify the name of the secret in the ClusterLogForwarder CR:

      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        outputs:
         - name: elasticsearch
           type: "elasticsearch"
           url: https://elasticsearch.secure.com:9200
           secret:
              name: openshift-test-secret
    4. Create the CR object:

      $ oc create -f <file-name>.yaml

    Forwarding logs using the Fluentd forward protocol

    You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OKD.

    To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.

    Prerequisites

    • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

    Procedure

    1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance (1)
        namespace: openshift-logging (2)
      spec:
        outputs:
         - name: fluentd-server-secure (3)
           type: fluentdForward (4)
           url: 'tls://fluentdserver.security.example.com:24224' (5)
           secret: (6)
              name: fluentd-secret
              passphrase: phrase (7)
         - name: fluentd-server-insecure
           type: fluentdForward
           url: 'tcp://fluentdserver.home.example.com:24224'
        pipelines:
         - name: forward-to-fluentd-secure (8)
           inputRefs: (9)
            - application
            - audit
           outputRefs:
            - fluentd-server-secure (10)
            - default (11)
           parse: json (12)
           labels:
             clusterId: "C1234" (13)
         - name: forward-to-fluentd-insecure (14)
           inputRefs:
            - infrastructure
           outputRefs:
            - fluentd-server-insecure
           labels:
             clusterId: "C1234"
      1The name of the ClusterLogForwarder CR must be instance.
      2The namespace for the ClusterLogForwarder CR must be openshift-logging.
      3Specify a name for the output.
      4Specify the fluentdForward type.
      5Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
      6If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent.
      7Optional: Specify the password or passphrase that protects the private key file.
      8Optional: Specify a name for the pipeline.
      9Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
      10Specify the name of the output to use when forwarding logs with this pipeline.
      11Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
      12Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
      13Optional: String. One or more labels to add to the logs.
      14Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      • A name to describe the pipeline.

      • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.

      • The outputRefs is the name of the output to use.

      • Optional: String. One or more labels to add to the logs.

    2. Create the CR object:

      $ oc create -f <file-name>.yaml

    Forwarding logs using the syslog protocol

    You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OKD.

    To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

    Alternatively, you can use a config map to forward logs using the syslog RFC3164 protocol. However, this method is deprecated in OKD and will be removed in a future release.

    Prerequisites

    • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

    Procedure

    1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance (1)
        namespace: openshift-logging (2)
      spec:
        outputs:
         - name: rsyslog-east (3)
           type: syslog (4)
           syslog: (5)
             facility: local0
             rfc: RFC3164
             payloadKey: message
             severity: informational
           url: 'tls://rsyslogserver.east.example.com:514' (6)
           secret: (7)
              name: syslog-secret
         - name: rsyslog-west
           type: syslog
           syslog:
             appName: myapp
             facility: user
             msgID: mymsg
             procID: myproc
             rfc: RFC5424
             severity: debug
           url: 'udp://rsyslogserver.west.example.com:514'
        pipelines:
         - name: syslog-east (8)
           inputRefs: (9)
            - audit
            - application
           outputRefs: (10)
            - rsyslog-east
            - default (11)
           parse: json (12)
           labels:
             secure: "true" (13)
             syslog: "east"
         - name: syslog-west (14)
           inputRefs:
            - infrastructure
           outputRefs:
            - rsyslog-west
            - default
           labels:
             syslog: "west"
      1The name of the ClusterLogForwarder CR must be instance.
      2The namespace for the ClusterLogForwarder CR must be openshift-logging.
      3Specify a name for the output.
      4Specify the syslog type.
      5Optional: Specify the syslog parameters, listed below.
      6Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
      7If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent.
      8Optional: Specify a name for the pipeline.
      9Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
      10Specify the name of the output to use when forwarding logs with this pipeline.
      11Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
      12Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
      13Optional: String. One or more labels to add to the logs. Quote values like “true” so they are recognized as string values, not as a boolean.
      14Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      • A name to describe the pipeline.

      • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.

      • The outputRefs is the name of the output to use.

      • Optional: String. One or more labels to add to the logs.

    2. Create the CR object:

      $ oc create -f <file-name>.yaml

    You can configure the following parameters for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 specification.

    • facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:

      • 0 or kern for kernel messages

      • 1 or user for user-level messages, the default.

      • 2 or mail for the mail system

      • 3 or daemon for system daemons

      • 4 or auth for security/authentication messages

      • 5 or syslog for messages generated internally by syslogd

      • 6 or lpr for the line printer subsystem

      • 7 or news for the network news subsystem

      • 8 or uucp for the UUCP subsystem

      • 9 or cron for the clock daemon

      • 10 or authpriv for security authentication messages

      • 11 or ftp for the FTP daemon

      • 12 or ntp for the NTP subsystem

      • 13 or security for the syslog audit log

      • 14 or console for the syslog alert log

      • 15 or solaris-cron for the scheduling daemon

      • 16 through 23 or local0 through local7 for locally used facilities

    • Optional: payloadKey: The record field to use as payload for the syslog message.

      Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.

    • rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.

    • severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:

      • 0 or Emergency for messages indicating the system is unusable

      • 1 or Alert for messages indicating action must be taken immediately

      • 2 or Critical for messages indicating critical conditions

      • 3 or Error for messages indicating error conditions

      • 4 or Warning for messages indicating warning conditions

      • 5 or Notice for messages indicating normal but significant conditions

      • 6 or Informational for messages indicating informational messages

      • 7 or Debug for messages indicating debug-level messages, the default

    • tag: Tag specifies a record field to use as a tag on the syslog message.

    • trimPrefix: Remove the specified prefix from the tag.

    The following parameters apply to RFC5424:

    • appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.

    • msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.

    • procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.
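
    As an illustration of how these parameters fit together, the following sketch shows a syslog output that sets both the general and the RFC5424-specific fields. The output name, server URL, secret name, and field values are assumptions, not values taken from this document.

      outputs:
       - name: rsyslog-example
         type: syslog
         syslog:
           facility: local0
           rfc: RFC5424
           severity: informational
           appName: myapp
           msgID: mymsg
           procID: myproc
         url: 'tls://rsyslogserver.example.com:514'
         secret:
            name: syslog-secret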

    Forwarding logs to Amazon CloudWatch

    You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default OpenShift Logging-managed Elasticsearch log store.

    To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output.

    Procedure

    1. Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:

      apiVersion: v1
      kind: Secret
      metadata:
        name: cw-secret
        namespace: openshift-logging
      data:
        aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
        aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=
    2. Create the secret. For example:

      $ oc apply -f cw-secret.yaml
    3. Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example:

      apiVersion: "logging.openshift.io/v1"
      kind: ClusterLogForwarder
      metadata:
        name: instance (1)
        namespace: openshift-logging (2)
      spec:
        outputs:
         - name: cw (3)
           type: cloudwatch (4)
           cloudwatch:
             groupBy: logType (5)
             groupPrefix: <group prefix> (6)
             region: us-east-2 (7)
           secret:
              name: cw-secret (8)
        pipelines:
          - name: infra-logs (9)
            inputRefs: (10)
              - infrastructure
              - audit
              - application
            outputRefs:
              - cw (11)
      1The name of the ClusterLogForwarder CR must be instance.
      2The namespace for the ClusterLogForwarder CR must be openshift-logging.
      3Specify a name for the output.
      4Specify the cloudwatch type.
      5Optional: Specify how to group the logs:
      • logType creates log groups for each log type

      • namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.

      • namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.

      6Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
      7Specify the AWS region.
      8Specify the name of the secret that contains your AWS credentials.
      9Optional: Specify a name for the pipeline.
      10Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
      11Specify the name of the output to use when forwarding logs with this pipeline.
    4. Create the CR object:

      $ oc create -f <file-name>.yaml

    Example: Using ClusterLogForwarder with Amazon CloudWatch

    Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch.

    Suppose that you are running an OKD cluster named mycluster. The following command returns the cluster’s infrastructureName, which you will use to compose aws commands later on:

    $ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
    "mycluster-7977k"

    To generate log data for this example, you run a busybox Pod in a namespace called app. The busybox Pod writes a message to stdout every three seconds:

    $ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
    $ oc logs -f busybox
    My life is my message
    My life is my message
    My life is my message
    ...

    You can look up the UUID of the app namespace where the busybox Pod runs:
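
    For example (a sketch; this exact command is assumed, not part of the original excerpt):

      $ oc get namespaces/app -ojson | jq .metadata.uid
      "794e1e1a-b9f5-4958-a190-e76a9b53d7bf"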

    In your ClusterLogForwarder custom resource (CR), you configure the infrastructure, audit, and application log types as inputs to the all-logs pipeline. You also connect this pipeline to the cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region:

    apiVersion: "logging.openshift.io/v1"
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
       - name: cw
         type: cloudwatch
         cloudwatch:
           groupBy: logType
           region: us-east-2
         secret:
            name: cw-secret
      pipelines:
        - name: all-logs
          inputRefs:
            - infrastructure
            - audit
            - application
          outputRefs:
            - cw

    Each region in CloudWatch contains three levels of objects:

    • log group

      • log stream

        • log event

    With groupBy: logType in the ClusterLogForwarder CR, the three log types in the inputRefs produce three log groups in Amazon CloudWatch:

    $ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
    "mycluster-7977k.application"
    "mycluster-7977k.audit"
    "mycluster-7977k.infrastructure"

    Each of the log groups contains log streams:

    $ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
    "kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"

    $ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
    "ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
    "ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
    "ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
    ...

    $ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
    "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
    "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
    "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
    ...

    Each log stream contains log events. To see a log event from the busybox Pod, you specify its log stream from the application log group:

    $ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log
    {
      "events": [
        {
          "timestamp": 1629422704178,
          "message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}",
          "ingestionTime": 1629422744016
        },
    ...

    Example: Customizing the prefix in log group names

    In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k, with an arbitrary string like demo-group-prefix. To make this change, you update the groupPrefix field in the ClusterLogForwarder CR:

    cloudwatch:
        groupBy: logType
        groupPrefix: demo-group-prefix
        region: us-east-2

    The value of groupPrefix replaces the default infrastructureName prefix:

    $ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
    "demo-group-prefix.application"
    "demo-group-prefix.audit"
    "demo-group-prefix.infrastructure"

    Example: Naming log groups after application namespace names

    For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.

    If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.

    If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following “Example: Naming log groups after application namespace UUIDs” section instead.

    To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR:

    cloudwatch:
        groupBy: namespaceName
        region: us-east-2

    Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups.

    In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, “app”, the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application:

    $ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
    "mycluster-7977k.app"
    "mycluster-7977k.audit"
    "mycluster-7977k.infrastructure"

    If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.

    The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.

    Example: Naming log groups after application namespace UUIDs

    For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.

    If you delete an application namespace object and create a new one, CloudWatch creates a new log group.

    If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding “Example: Naming log groups after application namespace names” section instead.

    To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR:

    cloudwatch:
        groupBy: namespaceUUID
        region: us-east-2

    In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, “app”, the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application:

    $ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
    "mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
    "mycluster-7977k.audit"
    "mycluster-7977k.infrastructure"

    The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.

    Forwarding logs to Loki

    You can forward logs to an external Loki logging system in addition to, or instead of, the internal default OKD Elasticsearch instance.

    To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

    Prerequisites

    • You must have a Loki logging system running at the URL you specify with the url field in the CR.

    Procedure

    1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

      apiVersion: "logging.openshift.io/v1"
      kind: ClusterLogForwarder
      metadata:
        name: instance (1)
        namespace: openshift-logging (2)
      spec:
        outputs:
         - name: loki-insecure (3)
           type: "loki" (4)
           url: http://loki.insecure.com:9200 (5)
         - name: loki-secure
           type: "loki"
           url: https://loki.secure.com:9200 (6)
           secret:
              name: loki-secret (7)
        pipelines:
         - name: application-logs (8)
           inputRefs: (9)
            - application
            - audit
           outputRefs:
            - loki-secure (10)
           loki:
             tenantKey: kubernetes.namespace_name (11)
             labelKeys: kubernetes.labels.foo (12)
      1The name of the ClusterLogForwarder CR must be instance.
      2The namespace for the ClusterLogForwarder CR must be openshift-logging.
      3Specify a name for the output.
      4Specify the type as “loki”.
      5Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address.
      6For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret.
      7For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following “Example: Setting secret that contains a username and password.”
      8Optional: Specify a name for the pipeline.
      9Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
      10Specify the name of the output to use when forwarding logs with this pipeline.
      11Optional: Specify a meta-data key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the “Log Record Fields” link in the following “Additional resources” section.
      12Optional: Specify a list of meta-data field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z:][a-zA-Z0-9:]*. Illegal characters in meta-data keys are replaced with underscore characters to form the label name. For example, the kubernetes.labels.foo meta-data key becomes the Loki label kubernetes_labels_foo. If you do not set labelKeys, the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]. Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field using query filters.

      Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts.

    2. Create the CR object:

      $ oc create -f <file-name>.yaml

    Troubleshooting Loki “entry out of order” errors

    If Fluentd forwards a large block of messages to a Loki logging system that exceeds the rate limit, Loki generates “entry out of order” errors. To fix this issue, you update some values in the Loki server configuration file, loki.yaml.

    Conditions

    • The ClusterLogForwarder custom resource is configured to forward logs to Loki.

    • Your system sends a block of messages that is larger than 2 MB to Loki, such as:

      1. "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\
      2. .......
      3. ......
      4. ......
      5. ......
      6. \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}
    • When you enter oc logs -c fluentd, the Fluentd logs in your OpenShift Logging cluster show the following messages:

      429 Too Many Requests Ingestion rate limit exceeded (limit: 8388608 bytes/sec) while attempting to ingest '2140' lines totaling '3285284' bytes
      429 Too Many Requests Ingestion rate limit exceeded' or '500 Internal Server Error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5277702 vs. 4194304)'
    • When you open the logs on the Loki server, they display entry out of order messages.

    Procedure

    1. Update the following fields in the loki.yaml configuration file on the Loki server with the values shown here:

      • grpc_server_max_recv_msg_size: 8388608

      • chunk_target_size: 8388608

      • ingestion_rate_mb: 8

      • ingestion_burst_size_mb: 16

    2. Apply the changes in loki.yaml to the Loki server.

    Example loki.yaml file

    auth_enabled: false
    server:
      http_listen_port: 3100
      grpc_listen_port: 9096
      grpc_server_max_recv_msg_size: 8388608
    ingester:
      wal:
        enabled: true
        dir: /tmp/wal
      lifecycler:
        address: 127.0.0.1
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
        final_sleep: 0s
      chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
      chunk_target_size: 8388608
      max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
      chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
      max_transfer_retries: 0     # Chunk transfers disabled
    schema_config:
      configs:
        - from: 2020-10-24
          store: boltdb-shipper
          object_store: filesystem
          schema: v11
          index:
            prefix: index_
            period: 24h
    storage_config:
      boltdb_shipper:
        active_index_directory: /tmp/loki/boltdb-shipper-active
        cache_location: /tmp/loki/boltdb-shipper-cache
        cache_ttl: 24h            # Can be increased for faster performance over longer query periods, uses more disk space
        shared_store: filesystem
      filesystem:
        directory: /tmp/loki/chunks
    compactor:
      working_directory: /tmp/loki/boltdb-shipper-compactor
      shared_store: filesystem
    limits_config:
      reject_old_samples: true
      reject_old_samples_max_age: 12h
      ingestion_burst_size_mb: 16
    chunk_store_config:
      max_look_back_period: 0s
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s
    ruler:
      storage:
        type: local
        local:
          directory: /tmp/loki/rules
      rule_path: /tmp/loki/rules-temp
      alertmanager_url: http://localhost:9093
      ring:
        kvstore:
          store: inmemory
      enable_api: true

    Additional resources

    Forwarding application logs from specific projects

    You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OKD.

    To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.

    Prerequisites

    • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

    Procedure

    1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance (1)
        namespace: openshift-logging (2)
      spec:
        outputs:
         - name: fluentd-server-secure (3)
           type: fluentdForward (4)
           url: 'tls://fluentdserver.security.example.com:24224' (5)
           secret: (6)
              name: fluentd-secret
         - name: fluentd-server-insecure
           type: fluentdForward
           url: 'tcp://fluentdserver.home.example.com:24224'
        inputs: (7)
         - name: my-app-logs
           application:
              namespaces:
              - my-project
        pipelines:
         - name: forward-to-fluentd-insecure (8)
           inputRefs: (9)
            - my-app-logs
           outputRefs: (10)
            - fluentd-server-insecure
           parse: json (11)
           labels:
             project: "my-project" (12)
         - name: forward-to-fluentd-secure (13)
           inputRefs:
            - application
            - audit
            - infrastructure
           outputRefs:
            - fluentd-server-secure
            - default
           labels:
             clusterId: "C1234"
      1The name of the ClusterLogForwarder CR must be instance.
      2The namespace for the ClusterLogForwarder CR must be openshift-logging.
      3Specify a name for the output.
      4Specify the output type: elasticsearch, fluentdForward, syslog, or kafka.
      5Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
      6If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent.
      7Configuration for an input to filter application logs from the specified projects.
      8Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
      9The my-app-logs input.
      10The name of the output to use.
      11Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
      12Optional: String. One or more labels to add to the logs.
      13Configuration for a pipeline to send logs to other log aggregators.
      • Optional: Specify a name for the pipeline.

      • Specify which log types to forward by using the pipeline: application, infrastructure, or audit.

      • Specify the name of the output to use when forwarding logs with this pipeline.

      • Optional: Specify the default output to forward logs to the internal Elasticsearch instance.

      • Optional: String. One or more labels to add to the logs.

    2. Create the CR object:

      $ oc create -f <file-name>.yaml

    Forwarding application logs from specific pods

    As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.

    Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.

    To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.

    Procedure

    1. Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels, as shown in the following example.

      Example ClusterLogForwarder CR YAML file

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance (1)
        namespace: openshift-logging (2)
      spec:
        pipelines:
          - inputRefs: [ myAppLogData ] (3)
            outputRefs: [ default ] (4)
            parse: json (5)
        inputs: (6)
          - name: myAppLogData
            application:
              selector:
                matchLabels: (7)
                  environment: production
                  app: nginx
              namespaces: (8)
              - app1
              - app2
        outputs: (9)
          - default
        ...
      1The name of the ClusterLogForwarder CR must be instance.
      2The namespace for the ClusterLogForwarder CR must be openshift-logging.
      3Specify one or more comma-separated values from inputs[].name.
      4Specify one or more comma-separated values from outputs[].
      5Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
      6Define a unique inputs[].name for each application that has a unique set of pod labels.
      7Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
      8Optional: Specify one or more namespaces.
      9Specify one or more outputs to forward your log data to. The optional default output shown here sends log data to the internal Elasticsearch instance.
    2. Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces, as shown in the preceding example.

    3. Optional: You can send log data from additional applications that have different pod labels to the same pipeline.

      1. For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown.

      2. Update the selectors to match the pod labels of this application.

      3. Add the new inputs[].name value to inputRefs. For example:

        - inputRefs: [ myAppLogData, myOtherAppLogData ]
    4. Create the CR object:

      $ oc create -f <file-name>.yaml

    Additional resources

    • For more information on matchLabels in Kubernetes, see the Kubernetes documentation on labels and selectors.

    Collecting OVN network policy audit logs

    You can collect the OVN network policy audit logs from the /var/log/ovn/acl-audit-log.log file on OVN-Kubernetes pods and forward them to logging servers.

    Prerequisites

    • You are using OKD version 4.8 or later.

    • You are using Cluster Logging 5.2 or later.

    • You have already set up a ClusterLogForwarder custom resource (CR) object.

    • The OKD cluster is configured for OVN-Kubernetes network policy audit logging. See the following “Additional resources” section.

    Often, logging servers that store audit data must meet organizational and governmental requirements for compliance and security.

    Procedure

    1. Create or edit a YAML file that defines the ClusterLogForwarder CR object as described in other topics on forwarding logs to third-party systems.

    2. In the YAML file, add the audit log type to the inputRefs element in a pipeline. For example:

      pipelines:
        - name: audit-logs
          inputRefs:
            - audit (1)
          outputRefs:
            - secure-logging-server (2)
      1Specify audit as one of the log types to input.
      2Specify the output that connects to your logging server.
    3. Recreate the updated CR object:

      $ oc create -f <file-name>.yaml

    Verification

    Verify that audit log entries from the nodes that you are monitoring are present among the log data gathered by the logging server.

    Find an original audit log entry in /var/log/ovn/acl-audit-log.log and compare it with the corresponding log entry on the logging server.

    For example, an original log entry in /var/log/ovn/acl-audit-log.log might look like this:

    2021-07-06T08:26:58.687Z|00004|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-logging_deny-all", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:12,dl_dst=0a:58:0a:81:02:14,nw_src=10.129.2.18,nw_dst=10.129.2.20,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0

    And the corresponding OVN audit log entry you find on the logging server might look like this:

    {
      "@timestamp" : "2021-07-06T08:26:58.687000+00:00",
      "hostname" : "ip.abc.internal",
      "level" : "info",
      "message" : "2021-07-06T08:26:58.687Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:12,dl_dst=0a:58:0a:81:02:14,nw_src=10.129.2.18,nw_dst=10.129.2.20,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0"
    }

    Where:

    • @timestamp is the timestamp of the log entry.

    • hostname is the node from which the log originated.

    • level is the severity level of the log entry.

    • message is the original audit log message.

    On an Elasticsearch server, look for log entries whose indices begin with audit-00000.

    Troubleshooting

    1. Verify that your OKD cluster meets all the prerequisites.

    2. Verify that you have completed the procedure.

    3. Verify that the nodes generating OVN logs are enabled and have /var/log/ovn/acl-audit-log.log files.

    4. Check the Fluentd pod logs for issues.

    Additional resources

    • For information about configuring OVN-Kubernetes network policy audit logging, see the OKD networking documentation.

    Forwarding logs using the legacy Fluentd method

    You can use the Fluentd forward protocol to send logs to destinations outside of your OKD cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator to receive log data from OKD.

    This method for forwarding logs is deprecated in OKD and will be removed in a future release.

    The forward protocols are provided with the Fluentd image as of v1.4.0.

    To send logs using the Fluentd forward protocol, create a configuration file called secure-forward.conf that points to an external log aggregator. Then use that file to create a config map called secure-forward in the openshift-logging project, which OKD uses when forwarding the logs.

    Prerequisites

    • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

    Sample Fluentd configuration file

    <store>
      @type forward
      <security>
        self_hostname ${hostname}
        shared_key "fluent-receiver"
      </security>
      transport tls
      tls_verify_hostname false
      tls_cert_path '/etc/ocp-forward/ca-bundle.crt'
      <buffer>
        @type file
        path '/var/lib/fluentd/secureforwardlegacy'
        queued_chunks_limit_size "1024"
        chunk_limit_size "1m"
        flush_interval "5s"
        flush_at_shutdown "false"
        flush_thread_count "2"
        retry_max_interval "300"
        retry_forever true
        overflow_action "#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'throw_exception'}"
      </buffer>
      <server>
        host fluent-receiver.example.com
        port 24224
      </server>
    </store>

    Procedure

    To configure OKD to forward logs using the legacy Fluentd method:

    1. Create a configuration file named secure-forward.conf and specify parameters similar to the following within the <store> stanza:

      <store>
        @type forward
        <security>
          self_hostname ${hostname}
          shared_key <key> (1)
        </security>
        transport tls (2)
        tls_verify_hostname <value> (3)
        tls_cert_path <path_to_file> (4)
        <buffer> (5)
          @type file
          path '/var/lib/fluentd/secureforwardlegacy'
          queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
          chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"
          flush_interval "#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}"
          flush_at_shutdown "#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}"
          flush_thread_count "#{ENV['FLUSH_THREAD_COUNT'] || 2}"
          retry_max_interval "#{ENV['FORWARD_RETRY_WAIT'] || '300'}"
          retry_forever true
        </buffer>
        <server>
          name (6)
          host (7)
          hostlabel (8)
          port (9)
        </server>
        <server> (10)
          name
          host
        </server>
      </store>
      (1) Enter the shared key between nodes.
      (2) Specify tls to enable TLS validation.
      (3) Set to true to verify the server cert hostname. Set to false to ignore the server cert hostname.
      (4) Specify the path to the private CA certificate file as /etc/ocp-forward/ca_cert.pem.
      (5) Specify the Fluentd buffer parameters as needed.
      (6) Optionally, enter a name for this server.
      (7) Specify the hostname or IP of the server.
      (8) Specify the host label of the server.
      (9) Specify the port of the server.
      (10) Optionally, add additional servers. If you specify two or more servers, forward uses these server nodes in a round-robin order.

      To use Mutual TLS (mTLS) authentication, see the Fluentd documentation for information about client certificate, key parameters, and other settings.

    2. Create a config map named secure-forward in the openshift-logging project from the configuration file:

      $ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
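
      For reference, because the --from-file option uses the file name as the data key, the resulting config map is similar to the following minimal sketch (configuration contents abbreviated):

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: secure-forward
        namespace: openshift-logging
      data:
        secure-forward.conf: |
          <store>
            @type forward
            # ... remaining secure-forward.conf content ...
          </store>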

    Forwarding logs using the legacy syslog method

    You can use the syslog RFC3164 protocol to send logs to destinations outside of your OKD cluster by creating a configuration file and config map. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OKD.

    This method for forwarding logs is deprecated in OKD and will be removed in a future release.

    There are two versions of the syslog protocol:

    • out_syslog: The non-buffered implementation, which communicates through UDP, does not buffer data and writes out results immediately.

    • out_syslog_buffered: The buffered implementation, which communicates through TCP and buffers data into chunks.

    To send logs using the syslog protocol, create a configuration file called syslog.conf that contains the information needed to forward the logs. Then use that file to create a config map called syslog in the openshift-logging project, which OKD uses when forwarding the logs.

    Prerequisites

    • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

    Sample syslog configuration file

    <store>
      @type syslog_buffered
      remote_syslog rsyslogserver.example.com
      port 514
      hostname ${hostname}
      remove_tag_prefix tag
      facility local0
      severity info
      use_record true
      payload_key message
      rfc 3164
    </store>

    You can configure the following syslog parameters. For more information, see the syslog RFC3164.

    • facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:

      • 0 or kern for kernel messages

      • 1 or user for user-level messages, the default.

      • 2 or mail for the mail system

      • 3 or daemon for the system daemons

      • 4 or auth for the security/authentication messages

      • 5 or syslog for messages generated internally by syslogd

      • 6 or lpr for the line printer subsystem

      • 7 or news for the network news subsystem

      • 8 or uucp for the UUCP subsystem

      • 9 or cron for the clock daemon

      • 10 or authpriv for security authentication messages

      • 11 or ftp for the FTP daemon

      • 12 or ntp for the NTP subsystem

      • 13 or security for the syslog audit logs

      • 14 or console for the syslog alert logs

      • 15 or solaris-cron for the scheduling daemon

      • 16-23 or local0 through local7 for locally used facilities

    • payloadKey: The record field to use as payload for the syslog message.

    • rfc: The RFC to be used for sending logs using syslog.

    • severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:

      • 0 or Emergency for messages indicating the system is unusable

      • 1 or Alert for messages indicating action must be taken immediately

      • 2 or Critical for messages indicating critical conditions

      • 3 or Error for messages indicating error conditions

      • 4 or Warning for messages indicating warning conditions

      • 5 or Notice for messages indicating normal but significant conditions

      • 6 or Informational for messages indicating informational messages

      • 7 or Debug for messages indicating debug-level messages, the default

    • tag: The record field to use as a tag on the syslog message.

    • trimPrefix: The prefix to remove from the tag.

    Procedure

    To configure OKD to forward logs using the legacy configuration methods:

    1. Create a configuration file named syslog.conf and specify parameters similar to the following within the <store> stanza:

      <store>
        @type <type> (1)
        remote_syslog <syslog-server> (2)
        port 514 (3)
        hostname ${hostname}
        remove_tag_prefix <prefix> (4)
        facility <value>
        severity <value>
        use_record <value>
        payload_key message
        rfc 3164 (5)
      </store>
    2. Create a config map named syslog in the openshift-logging project from the configuration file:

      $ oc create configmap syslog --from-file=syslog.conf -n openshift-logging

    Troubleshooting log forwarding

    When you create a ClusterLogForwarder custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy.

    Prerequisites

    Procedure