Among the many features and changes in the new logging functionality is the removal of project-specific logging configurations. Instead, one now configures logging at the namespace level. Cluster-level logging remains available, but configuration options differ.
Installation
To install logging in Rancher v2.5+, refer to the installation instructions in the Rancher documentation.
In v2.5, logging configuration is centralized under the Logging menu option available in the Cluster Explorer. Both cluster-level and namespace-level logging are configured from this menu.
There are four key concepts to understand for v2.5+ logging:
- `Outputs` are a configuration resource that determines a destination for collected logs. This is where settings for aggregators such as Elasticsearch, Kafka, etc. are stored. `Outputs` are namespaced resources.
- `Flows` are a configuration resource that determines collection, filtering, and destination rules for logs. It is within a `Flow` that one configures which logs to collect, how to mutate or filter them, and which `Outputs` to send the logs to. `Flows` are namespaced resources and can connect either to an `Output` in the same namespace or to a `ClusterOutput`.
- `ClusterOutputs` serve the same function as `Outputs`, except they are cluster-scoped resources. `ClusterOutputs` are necessary when collecting logs cluster-wide, or if you wish to provide an `Output` to all namespaces in your cluster.
- `ClusterFlows` serve the same function as `Flows`, but at the cluster level. They are used to configure log collection for an entire cluster, instead of on a per-namespace level. `ClusterFlows` are also where mutations and filters are defined, with the same functionality as in `Flows`.
Cluster Logging
To configure cluster-wide logging in v2.5+, you need to set up a `ClusterFlow`. This object defines the source of logs, any transformations or filters to be applied, and finally the `Output` (or `Outputs`) for the logs.

In legacy logging, to collect logs from across the entire cluster, you only needed to enable cluster-level logging and define the desired `Output`. This basic approach remains in v2.5+ logging. To replicate legacy cluster-level logging, follow these steps (a minimal sketch follows the steps):
- Define a `ClusterOutput` according to the instructions found under Output Configuration below.
- Create a `ClusterFlow`, ensuring that it is set to be created in the `cattle-logging-system` namespace.
  - You do not need to configure any filters if you do not wish to; default behavior does not require their creation.
- Define your cluster `Output` or `Outputs`.
This will result in logs from all sources in the cluster (all pods and all system components) being collected and sent to the `Output` or `Outputs` you defined in the `ClusterFlow`.
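For reference, here is a minimal sketch of what such a pair of resources might look like. The resource names, the Elasticsearch endpoint, and the index name are assumptions for illustration only; substitute the settings for your own aggregator.

```yaml
# Hypothetical ClusterOutput: where the logs go.
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: example-clusteroutput        # hypothetical name
  namespace: cattle-logging-system
spec:
  elasticsearch:
    host: elasticsearch.example.com  # assumed endpoint
    port: 9200
    index_name: cluster-logs         # assumed index
---
# Hypothetical ClusterFlow: no match rules, so all cluster logs are collected.
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: example-clusterflow          # hypothetical name
  namespace: cattle-logging-system
spec:
  globalOutputRefs:
    - example-clusteroutput
```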
Project Logging
Logging in v2.5+ is not project-aware. This means that in order to collect logs from pods running in project namespaces, you will need to define `Flows` for those namespaces.
To collect logs from a specific namespace, follow these steps:
- Define an `Output` or `ClusterOutput` according to the instructions found under Output Configuration below.
- Create a `Flow`, ensuring that it is set to be created in the namespace in which you want to gather logs.
  - If you wish to define Include or Exclude rules, you may do so. Otherwise, removing all rules will result in the logs of all pods in the target namespace being collected.
  - You do not need to configure any filters if you do not wish to; default behavior does not require their creation.
- Define your outputs; these can be either `ClusterOutput` or `Output` objects.
This will result in logs from all sources in the namespace (pods) being collected and sent to the `Output` (or `Outputs`) you defined in your `Flow`.
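Here is a minimal sketch of a namespaced `Output` and `Flow` pair. The namespace, resource names, label selector, and Elasticsearch settings are assumptions for illustration.

```yaml
# Hypothetical Output in the application namespace.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: example-output               # hypothetical name
  namespace: my-app                  # assumed namespace to collect from
spec:
  elasticsearch:
    host: elasticsearch.example.com  # assumed endpoint
    port: 9200
---
# Hypothetical Flow: an Include rule selecting pods by label. Remove the
# match section entirely to collect logs from all pods in the namespace.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: example-flow                 # hypothetical name
  namespace: my-app
spec:
  match:
    - select:
        labels:
          app: my-app                # assumed label
  localOutputRefs:
    - example-output
```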
Output Configuration
In legacy logging, there are five logging destinations to choose from: Elasticsearch, Splunk, Kafka, Fluentd, and Syslog. With the exception of Syslog, all of these destinations are available in logging v2.5+.
Elasticsearch
In legacy logging, indices were automatically created according to the format in the "Index Patterns" section. In v2.5 logging, the default behavior has changed to logging to a single index. You can still configure index pattern functionality on the `Output` object by editing it as YAML and inputting values along the lines of the following sketch (the field names assumed here are the Logstash-style options exposed by the logging operator's Elasticsearch output):
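```yaml
spec:
  elasticsearch:
    # Assumed fields: Logstash-style daily indices with a custom prefix.
    logstash_format: true
    logstash_prefix: <desired prefix>
    logstash_dateformat: "%Y-%m-%d"
```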
Replace `<desired prefix>` with the prefix for the indices that will be created. In legacy logging, this defaulted to the name of the cluster.
(1) `client_key` and `client_cert` values must be paths to the key and cert files, respectively. These files must be mounted into the `rancher-logging-fluentd` pod in order to be used.
(2) Users can configure either `ca_file` (a path to a PEM-encoded CA certificate) or `ca_path` (a path to a directory containing CA certificates in PEM format). These files must be mounted into the `rancher-logging-fluentd` pod in order to be used.
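To illustrate, here is a hedged sketch of how these fields might look on an Elasticsearch `Output`. The scheme, host, and file paths are assumptions, and the files at those paths must already be mounted into the `rancher-logging-fluentd` pod:

```yaml
spec:
  elasticsearch:
    host: elasticsearch.example.com   # assumed endpoint
    port: 9200
    scheme: https
    # Assumed in-pod paths; mount these files (e.g., from a Secret) first.
    client_cert: /etc/ssl/es/tls.crt
    client_key: /etc/ssl/es/tls.key
    ca_file: /etc/ssl/es/ca.crt
```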
Fluentd
As of v2.5.2, it is only possible to add a single Fluentd server using the "Edit as Form" option. To add multiple servers, edit the `Output` as YAML and input multiple servers.
(1) These values are to be specified as paths to files. Those files must be mounted into the `rancher-logging-fluentd` pod in order to be used.
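Here is a hedged sketch of a multi-server Fluentd `Output` edited as YAML, assuming the logging operator's `forward` output type is used for Fluentd targets; the resource name and hostnames are assumptions:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: example-fluentd              # hypothetical name
  namespace: cattle-logging-system
spec:
  forward:
    servers:
      - host: fluentd-1.example.com  # assumed host
        port: 24224
      - host: fluentd-2.example.com  # assumed host
        port: 24224
```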
Syslog
As of v2.5.2, syslog is not supported as a destination for `Outputs` in v2.5+ logging.
Custom Log Fields
In order to add custom log fields, you will need to add the following YAML to your `Flow` configuration:
```yaml
...
spec:
  filters:
    - record_modifier:
        records:
          - foo: "bar"
```

(replace `foo: "bar"` with the custom log fields you wish to add)
System Logging
To collect system logs in v2.5+ logging:

- Gather all cluster logs without specifying any match or exclusion rules. This results in all container logs from the cluster being collected, which includes system logs.
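Concretely, this is the same `ClusterFlow` pattern sketched earlier with no match rules at all; the resource and output names here are hypothetical:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: all-logs                   # hypothetical name
  namespace: cattle-logging-system
spec:
  # No match rules: every container log in the cluster is collected,
  # including system components that run as containers.
  globalOutputRefs:
    - example-clusteroutput        # assumed ClusterOutput name
```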