The Apache Druid Indexer process is an alternative to the MiddleManager + Peon task execution system. Instead of forking a separate JVM process per-task, the Indexer runs tasks as separate threads within a single JVM process.

The Indexer is designed to be easier to configure and deploy compared to the MiddleManager + Peon system and to better enable resource sharing across tasks.

For Apache Druid Indexer Process Configuration, see Indexer Configuration.

The Indexer process shares the same HTTP endpoints as the MiddleManager.

The following resources are shared across all tasks running inside an Indexer process.

Query resources

The query processing threads and buffers are shared across all tasks. The Indexer will serve queries from a single endpoint shared by all tasks.

If query caching is enabled, the query cache is also shared across all tasks.
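
As a rough sketch, the shared query resources are sized through the Indexer's runtime properties. The property names and values below (druid.processing.numThreads, druid.processing.buffer.sizeBytes, and the druid.realtime.cache.* flags) are illustrative assumptions based on common Druid process configuration; see the Indexer Configuration reference for the authoritative settings.

```
# Illustrative Indexer runtime.properties snippet (example values, not defaults).
# These processing threads and buffers are shared by every task on the Indexer.
druid.processing.numThreads=4
druid.processing.buffer.sizeBytes=500000000

# If query caching is enabled, the cache is likewise shared across all tasks.
druid.realtime.cache.useCache=true
druid.realtime.cache.populateCache=true
```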

Server HTTP threads

One pool is exclusively used for task control messages between the Overlord and the Indexer (“chat handler threads”). The other pool is used for handling all other HTTP requests.

The size of these pools is configured by the druid.server.http.numThreads configuration (e.g., if this is set to 10, there will be 10 chat handler threads and 10 non-chat handler threads).

In addition to these two pools, 2 separate threads are allocated for lookup handling. If lookups are not used, these threads will not be used.
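
For example, the following hypothetical setting would yield the thread pools described above:

```
# Example value only. With this setting the Indexer allocates 10 chat handler
# threads for Overlord task control messages, 10 threads for all other HTTP
# requests, and 2 additional threads for lookup handling (used only if lookups
# are configured).
druid.server.http.numThreads=10
```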

Memory sharing

The Indexer uses the druid.worker.globalIngestionHeapLimitBytes configuration to impose a global heap limit across all of the tasks it is running.

This global limit is evenly divided across the number of task slots configured by druid.worker.capacity.

To apply the per-task heap limit, the Indexer will override maxBytesInMemory in task tuning configs (i.e., ignoring the default value or any user configured value). maxRowsInMemory will also be overridden to an essentially unlimited value: the Indexer does not support row limits.
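
As a sketch, assuming a hypothetical Indexer with four task slots and a 1 GB global ingestion heap limit, the per-task override works out as follows:

```
# Hypothetical values, for illustration only.
druid.worker.capacity=4
druid.worker.globalIngestionHeapLimitBytes=1000000000

# The global limit is split evenly across the 4 task slots, so each task's tuning
# config is overridden with maxBytesInMemory = 1000000000 / 4 = 250000000 bytes,
# and maxRowsInMemory is overridden to an effectively unlimited value.
```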

By default, druid.worker.globalIngestionHeapLimitBytes is set to 1/6th of the available JVM heap. This default is chosen to align with the default value of maxBytesInMemory in task tuning configs when using the MiddleManager/Peon system, which is also 1/6th of the JVM heap.

The peak in-heap usage for row data can reach approximately maxBytesInMemory * (2 + maxPendingPersists), since rows being persisted stay on the heap alongside newly ingested rows until the persist completes. The default value of maxPendingPersists is 0, which allows for one persist to run concurrently with ingestion work.
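
Continuing the hypothetical values above (maxBytesInMemory of 250000000 bytes per task, maxPendingPersists left at its default of 0), the per-task peak works out to:

```
# Peak in-heap row data per task, using the hypothetical values above:
#   maxBytesInMemory * (2 + maxPendingPersists)
#   = 250000000 * (2 + 0)
#   = 500000000 bytes (roughly 500 MB) per task
```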

The remaining portion of the heap is reserved for query processing and segment persist/merge operations, and miscellaneous heap usage.

Concurrent segment persist/merge limits

To help reduce peak memory usage, the Indexer imposes a limit on the number of concurrent segment persist/merge operations across all running tasks.

By default, the number of concurrent persist/merge operations is limited to (druid.worker.capacity / 2), rounded down. This limit can be configured with the druid.worker.numConcurrentMerges property.
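
For instance, with a hypothetical capacity of five task slots, the default limit is floor(5 / 2) = 2 concurrent persist/merge operations; setting the property explicitly overrides that default:

```
# Hypothetical example values.
druid.worker.capacity=5
# Default concurrent persist/merge limit would be floor(5 / 2) = 2.

# Explicit override of the limit (example value):
druid.worker.numConcurrentMerges=3
```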

Separate task logs are not currently supported when using the Indexer; all task log messages will instead be logged in the Indexer process log.

The Indexer currently imposes an identical memory limit on each task. In later releases, the per-task memory limit will be removed and only the global limit will apply. The limit on concurrent merges will also be removed.

The Indexer does not work properly with index_realtime task types. Therefore, it is not compatible with Tranquility. If you are using Tranquility, consider migrating to Druid’s builtin Apache Kafka or Amazon Kinesis ingestion options.