Apache Kafka ingestion

    Kafka indexing tasks read events using Kafka’s own partition and offset mechanism to guarantee exactly-once ingestion. The supervisor oversees the state of the indexing tasks to:

    • coordinate handoffs
    • manage failures
    • ensure that scalability and replication requirements are maintained

    This topic covers how to submit a supervisor spec to ingest event data, also known as message data, from Kafka. See the following for more information:

    • For operations reference information to help run and maintain Apache Kafka supervisors, see Kafka supervisor operations.
    • For a walk-through, see the Loading from Apache Kafka tutorial.

    The Kafka indexing service supports transactional topics, introduced in Kafka 0.11.x, by default. The consumer for the Kafka indexing service is incompatible with older Kafka brokers. If you are using an older version, refer to the Kafka upgrade guide.

    Additionally, you can set isolation.level to read_uncommitted in consumerProperties if either:

    • You don’t need Druid to consume transactional topics.
    • You need Druid to consume from an older version of Kafka. Make sure offsets are sequential, since Druid no longer checks for offset gaps.

    If your Kafka cluster enables consumer-group-based ACLs, you can set group.id in consumerProperties to override the default autogenerated group ID.
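    For example, a consumerProperties block that combines these settings might look like the following sketch; the broker address and group ID are placeholders for your own environment:

        "consumerProperties": {
          "bootstrap.servers": "kafka-broker-host:9092",
          "isolation.level": "read_uncommitted",
          "group.id": "my-druid-group"
        }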

    To use the Kafka indexing service, load the druid-kafka-indexing-service extension on both the Overlord and the MiddleManagers. See Loading extensions for instructions on how to configure extensions.
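    For example, extensions are typically enabled through the druid.extensions.loadList property in common.runtime.properties. A minimal sketch, assuming the Kafka indexing service is the only extension you load:

        druid.extensions.loadList=["druid-kafka-indexing-service"]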

    Similar to the ingestion spec for batch ingestion, the supervisor spec configures the data ingestion for Kafka streaming ingestion. A supervisor spec has the following sections:

    • dataSchema to specify the Druid datasource name, primary timestamp, dimensions, metrics, transforms, and any necessary filters.
    • ioConfig to configure the Kafka connection settings and how Druid parses the data. Kafka-specific connection details go in the consumerProperties. The ioConfig is also where you define the input format (inputFormat) of your Kafka data. For supported formats for Kafka and information on how to configure the input format, see Data formats.
    • tuningConfig to control various tuning parameters specific to each ingestion method. For a full description of all the fields and parameters in a Kafka supervisor spec, see the Kafka supervisor reference.

    The following example demonstrates a supervisor spec for Kafka that uses the JSON input format, in which Druid parses the event contents as JSON.
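    The spec below is an illustrative sketch rather than a drop-in configuration: the metrics-kafka datasource, the metrics topic, the broker address, and the host and service dimensions are placeholders for your own environment:

        {
          "type": "kafka",
          "spec": {
            "dataSchema": {
              "dataSource": "metrics-kafka",
              "timestampSpec": {
                "column": "timestamp",
                "format": "auto"
              },
              "dimensionsSpec": {
                "dimensions": ["host", "service"]
              },
              "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "HOUR",
                "queryGranularity": "NONE"
              }
            },
            "ioConfig": {
              "topic": "metrics",
              "inputFormat": {
                "type": "json"
              },
              "consumerProperties": {
                "bootstrap.servers": "localhost:9092"
              },
              "taskCount": 1,
              "replicas": 1,
              "taskDuration": "PT1H"
            },
            "tuningConfig": {
              "type": "kafka"
            }
          }
        }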

    Kafka input format supervisor spec example

    If you want to ingest data from other fields in addition to the Kafka message contents, you can use the kafka input format. The kafka input format lets you ingest:

    • the event key field
    • event headers
    • the Kafka event value that stores the payload.

    For example, consider the following structure for a message that represents a fictitious wiki edit in a development environment:

    • Event headers: {"environment": "development"}
    • Event key: {"key": "wiki-edit"}
    • Event value: <JSON object with event payload containing the change details>
    • Event timestamp: "Nov. 10, 2021 at 14:06"

    When you use the kafka input format, you configure the way that Druid names the dimensions created from the Kafka message:

    • headerLabelPrefix: Supply a prefix for the Kafka header names to avoid conflicts with named dimensions. The default is kafka.header. Given the header in the example above, Druid maps it to the column kafka.header.environment.
    • timestampColumnName: Supply a custom name for the Kafka timestamp in the Druid schema to avoid conflicts with other time columns. The default is kafka.timestamp.
    • keyColumnName: Supply the name for the Kafka key column in Druid. The default is kafka.key.

    Additionally, you must provide information about how Druid should parse the data in the Kafka message (see the sketch after this list):

    • headerFormat: The default string format decodes UTF-8-encoded strings from the Kafka header. If you need another format, you can implement your own parser.
    • valueFormat: Define how to parse the message contents. You can use any of the Druid input formats that work for Kafka.
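    Putting these parameters together, the inputFormat section of a supervisor spec might look like the following sketch. The explicit headerLabelPrefix, timestampColumnName, and keyColumnName entries simply restate the defaults described above, and the JSON keyFormat and valueFormat are assumptions about the example payload rather than requirements:

        "inputFormat": {
          "type": "kafka",
          "headerLabelPrefix": "kafka.header.",
          "timestampColumnName": "kafka.timestamp",
          "keyColumnName": "kafka.key",
          "headerFormat": {
            "type": "string"
          },
          "keyFormat": {
            "type": "json"
          },
          "valueFormat": {
            "type": "json"
          }
        }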

    For more information on data formats, see Data formats.

    The following supervisor spec demonstrates how to ingest the Kafka header, key, and timestamp into Druid dimensions.
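    The spec below is an illustrative sketch rather than a drop-in configuration: the wiki-edits datasource and topic, the broker address, and the user and page payload dimensions are placeholders, and JSON is assumed for both the key and the value. Because the Kafka timestamp arrives as epoch milliseconds, the timestampSpec reads kafka.timestamp with the millis format:

        {
          "type": "kafka",
          "spec": {
            "dataSchema": {
              "dataSource": "wiki-edits",
              "timestampSpec": {
                "column": "kafka.timestamp",
                "format": "millis"
              },
              "dimensionsSpec": {
                "dimensions": [
                  "kafka.header.environment",
                  "kafka.key",
                  "user",
                  "page"
                ]
              },
              "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "HOUR",
                "queryGranularity": "NONE"
              }
            },
            "ioConfig": {
              "topic": "wiki-edits",
              "inputFormat": {
                "type": "kafka",
                "headerFormat": {
                  "type": "string"
                },
                "keyFormat": {
                  "type": "json"
                },
                "valueFormat": {
                  "type": "json"
                }
              },
              "consumerProperties": {
                "bootstrap.servers": "localhost:9092"
              },
              "taskCount": 1,
              "replicas": 1,
              "taskDuration": "PT1H"
            },
            "tuningConfig": {
              "type": "kafka"
            }
          }
        }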

    After Druid ingests the data, you can query the Kafka message columns with Druid SQL.
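    For example, the following sketch queries the hypothetical wiki-edits datasource from the spec above; the dotted column names must be quoted in Druid SQL:

        SELECT
          "kafka.header.environment",
          "kafka.key",
          "kafka.timestamp"
        FROM "wiki-edits"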

    For more information, see kafka data format.

    Druid starts a supervisor for a dataSource when you submit a supervisor spec. You can use the data loader in the web console or you can submit a supervisor spec to the following endpoint:

    http://<OVERLORD_IP>:<OVERLORD_PORT>/druid/indexer/v1/supervisor

    For example, you can post the spec with curl, as in the following sketch, which keeps the placeholder host and port from the endpoint above:
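        curl -X POST -H 'Content-Type: application/json' -d @supervisor-spec.json "http://<OVERLORD_IP>:<OVERLORD_PORT>/druid/indexer/v1/supervisor"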

    where supervisor-spec.json contains your Kafka supervisor spec and the placeholder host and port point to your Overlord.