Logging

    Fluent Bit supports exporting logs to a number of other providers. If you already have an existing log provider, for example Splunk, Datadog, ElasticSearch, or Stackdriver, you can follow the Fluent Bit documentation to configure log forwarders.

    Setting up log collection requires two steps:

    1. Running a log forwarding DaemonSet on each node.
    2. Running a collector somewhere in the cluster.

    Tip

    The following example uses a StatefulSet, which stores logs on a Kubernetes PersistentVolumeClaim, but you can also use a hostPath volume.

    The file defines a StatefulSet, as well as a Kubernetes Service that allows you to access and read the logs from within the cluster. The supplied configuration creates these resources in a namespace called logging.
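
    The exact contents of the file are not shown here, but a minimal sketch of the resources it defines might look like the following; the image names, the forward port (24224), and the container mount paths are illustrative assumptions rather than the exact supplied configuration:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: logging
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: log-collector
        namespace: logging
      spec:
        selector:
          app: log-collector
        ports:
          - name: http          # used by the port-forward command in the procedure below
            port: 80
            targetPort: 80
          - name: forward       # assumed port for the Fluent Bit forward protocol
            port: 24224
            targetPort: 24224
      ---
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: log-collector
        namespace: logging
      spec:
        serviceName: log-collector
        replicas: 1
        selector:
          matchLabels:
            app: log-collector
        template:
          metadata:
            labels:
              app: log-collector
          spec:
            containers:
              # Receives forwarded records and writes them to the shared volume.
              # A Fluent Bit configuration (forward input, file output) is required
              # for this to work and is omitted from this sketch.
              - name: collector
                image: fluent/fluent-bit
                ports:
                  - containerPort: 24224
                volumeMounts:
                  - name: logs
                    mountPath: /shared/logs
              # Serves the collected files over HTTP. The real configuration also
              # needs nginx set up to list directories (autoindex), omitted here.
              - name: nginx
                image: nginx
                ports:
                  - containerPort: 80
                volumeMounts:
                  - name: logs
                    mountPath: /usr/share/nginx/html
                    readOnly: true
        volumeClaimTemplates:
          - metadata:
              name: logs
            spec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi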

    Important

    Set up the collector before the forwarders. You will need the address of the collector when configuring the forwarders, and the forwarders may queue logs until the collector is ready.

    Procedure

    1. To access the logs through your web browser, enter the command:

       kubectl port-forward --namespace logging service/log-collector 8080:80
    2. Navigate to http://localhost:8080/.

    3. Optional: You can open a shell in the nginx pod and search the logs using Unix tools, as shown in the example following this procedure.
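
    The exact pod name depends on the StatefulSet name; assuming it is log-collector, its first pod is log-collector-0 and the log-serving container is named nginx:

       kubectl exec --namespace logging --stdin --tty log-collector-0 --container nginx -- /bin/sh

    Once inside, you can use tools such as grep or less on the collected log files.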

    Setting up the forwarders

    See the documentation to set up a Fluent Bit DaemonSet that forwards logs to ElasticSearch by default.

    When you create the ConfigMap during the installation steps, you must do one of the following:

    • Replace the ElasticSearch configuration with the fluent-bit-configmap.yaml, or
    • Add the following block to the ConfigMap, and change the @INCLUDE output-elasticsearch.conf line to @INCLUDE output-forward.conf:

      output-forward.conf: |
        [OUTPUT]
            # Forward collected records to the log-collector Service in the logging namespace
            Name                 forward
            Match                *
            Host                 log-collector.logging
            Require_ack_response True
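
    For example, assuming your ConfigMap follows the upstream Fluent Bit layout with a fluent-bit.conf key, the include line changes as follows (the [SERVICE] settings and the other @INCLUDE lines shown here are placeholders for whatever your ConfigMap already contains):

      fluent-bit.conf: |
        [SERVICE]
            Flush        1
            Log_Level    info
        @INCLUDE input-kubernetes.conf
        @INCLUDE filter-kubernetes.conf
        # previously: @INCLUDE output-elasticsearch.conf
        @INCLUDE output-forward.conf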

    Warning

    If you are using a local Kubernetes cluster for development, you can create a hostPath PersistentVolume to store the logs on your desktop operating system. This allows you to use your usual desktop tools on the files without needing Kubernetes-specific tools.

    The hostPath PersistentVolume will look similar to the following:
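
    (In this sketch the volume name shared-logs matches the volumeClaimTemplates shown later; the capacity and hostPath path are placeholders.)

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: shared-logs            # referenced by volumeName in the volumeClaimTemplates below
      spec:
        accessModes:
          - ReadWriteOnce
        capacity:
          storage: 1Gi               # placeholder size
        # Depending on your cluster's default StorageClass, you may also need a
        # matching storageClassName here and in the claim template.
        hostPath:
          path: /tmp/logs            # placeholder; the Kind, Docker Desktop, and Minikube sections below describe suitable paths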

    Note

    The hostPath will vary based on your Kubernetes software and host operating system.

    You must update the StatefulSet volumeClaimTemplates to reference the shared-logs volume, as shown in the following example:

    volumeClaimTemplates:
      - metadata:
          name: logs
        spec:
          accessModes: ["ReadWriteOnce"]
          volumeName: shared-logs

    Kind

    When creating your cluster, you must use a cluster configuration file and specify extraMounts for each node, as shown in the following example:
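
    (This sketch assumes a single control-plane node plus one worker; adjust the node list to match your cluster.)

      apiVersion: kind.x-k8s.io/v1alpha4
      kind: Cluster
      nodes:
        - role: control-plane
          extraMounts:
            - hostPath: ./logs            # relative to where you run `kind create cluster`
              containerPath: /shared/logs
        - role: worker
          extraMounts:
            - hostPath: ./logs
              containerPath: /shared/logs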

    You can then use /shared/logs as the spec.hostPath.path in your PersistentVolume. Note that the directory path ./logs is relative to the directory that the Kind cluster was created in.

    Docker Desktop

    Docker Desktop automatically creates some shared mounts between the host and the guest operating systems, so you only need to know the path to your home directory on the host; the exact shared path varies by host operating system and Docker Desktop version.
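
    For example, on macOS home directories live under /Users, which Docker Desktop shares with its virtual machine by default, so the PersistentVolume could use a hostPath such as the following (the path is illustrative; adjust it for your user and operating system):

      hostPath:
        path: /Users/<username>/logs   # example for macOS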

    Minikube

    Minikube requires an explicit command to mount a directory into the virtual machine (VM) running Kubernetes.

    The following command mounts the logs directory inside the current directory onto /mnt/logs in the VM:

    minikube mount ./logs:/mnt/logs
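
    You can then use /mnt/logs as the spec.hostPath.path in your PersistentVolume, for example:

      hostPath:
        path: /mnt/logs   # matches the mount target of the minikube mount command above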