Data in Druid is stored in a custom column format known as a segment. Segments are composed of different types of columns, and the column classes are a good place to look into the storage format.
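The benefit of a column-oriented layout can be illustrated with a minimal sketch (generic Java with invented names, not Druid's actual classes): each column stores its values contiguously, so a query that aggregates one column touches only that column's array.

```java
import java.util.Arrays;

// Hypothetical illustration of column-oriented storage; the class and
// field names here are invented for this sketch, not Druid's.
public class ColumnarSketch {
    // Each "column" is stored contiguously in its own array.
    static final String[] PAGE = {"/home", "/about", "/home"};
    static final long[] COUNT = {10, 3, 7};

    // Summing one column only scans that column's array, which is
    // cache-friendly and compresses well.
    static long sumCounts() {
        return Arrays.stream(COUNT).sum();
    }

    public static void main(String[] args) {
        System.out.println(sumCounts()); // prints 20
    }
}
```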
Segment creation
Raw data is ingested in IncrementalIndex.java, and segments are created in IndexMerger.java.
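The ingest-then-persist flow can be sketched generically (hypothetical classes, not Druid's IncrementalIndex or IndexMerger): rows accumulate and roll up in a mutable in-memory index, which is then snapshotted into an immutable, sorted form.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Hypothetical sketch of the accumulate-then-persist pattern; names invented.
public class IngestSketch {
    // Mutable in-memory index: timestamp -> rolled-up count.
    static TreeMap<Long, Long> incrementalIndex = new TreeMap<>();

    static void add(long timestamp, long count) {
        // Rows with the same timestamp roll up into one entry.
        incrementalIndex.merge(timestamp, count, Long::sum);
    }

    // "Persist": snapshot the in-memory index into an immutable sorted list
    // of (timestamp, count) pairs, then reset the in-memory state.
    static List<long[]> persist() {
        List<long[]> segment = new ArrayList<>();
        incrementalIndex.forEach((t, c) -> segment.add(new long[]{t, c}));
        incrementalIndex.clear();
        return segment;
    }

    public static void main(String[] args) {
        add(1000L, 2);
        add(1000L, 3);
        add(2000L, 5);
        List<long[]> segment = persist();
        System.out.println(segment.size());   // prints 2 (distinct timestamps)
        System.out.println(segment.get(0)[1]); // prints 5 (2 + 3 rolled up)
    }
}
```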
Storage engine
Druid segments are memory mapped to be exposed for querying.
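The mechanism itself (not Druid's own IO code) is the standard java.nio memory-mapping API; a minimal, self-contained example:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    // Map the file read-only into the process address space and return
    // the byte at the given offset. The OS pages data in lazily, and the
    // mapped data does not live on the JVM heap.
    static byte readMapped(Path path, int offset) throws IOException {
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buf =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            return buf.get(offset);
        }
    }

    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("segment", ".bin");
        Files.write(path, new byte[]{1, 2, 3, 4});
        System.out.println(readMapped(path, 2)); // prints 3
        Files.deleteIfExists(path);
    }
}
```

Because the operating system manages paging, many segments can be "loaded" without exhausting JVM heap memory.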
Coordination
Most of the coordination logic for Historical processes lives in the Druid Coordinator; the starting point here is DruidCoordinator.java. Most of the coordination logic for (real-time) ingestion lives in the Druid indexing service.
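One core Coordinator responsibility, deciding which Historical should load a segment, can be sketched as a simple assignment loop (invented names and a deliberately naive sizing rule, not Druid's actual balancing strategy):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical coordinator assignment loop: give each new segment to the
// server with the most free capacity. Names are invented for this sketch.
public class CoordinatorSketch {
    // Free bytes remaining on each historical server.
    static Map<String, Long> freeBytes = new HashMap<>();

    static String assign(long segmentSize) {
        // Pick the server with the most free space, then charge it.
        String best = Collections.max(freeBytes.entrySet(),
                Map.Entry.comparingByValue()).getKey();
        freeBytes.merge(best, -segmentSize, Long::sum);
        return best;
    }

    public static void main(String[] args) {
        freeBytes.put("historical-1", 100L);
        freeBytes.put("historical-2", 60L);
        System.out.println(assign(60)); // prints historical-1 (most free space)
        System.out.println(assign(10)); // prints historical-2 (now has more free)
    }
}
```

Real balancing also accounts for replication, load rules, and move costs, but the shape of the decision loop is similar.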
Real-time Ingestion
Druid loads data through FirehoseFactory.java classes. Firehoses often wrap other firehoses: similar to the design of the query runners, each firehose adds a layer of logic. The persist and hand-off logic is in RealtimePlumber.java.
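The layered-firehose design is essentially the decorator pattern; a generic sketch (invented interfaces, not Druid's API) where a wrapper adds one piece of logic, here filtering, on top of an inner row stream:

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical row-stream interface illustrating firehose wrapping;
// this is not Druid's actual Firehose API.
public class FirehoseSketch {
    interface Firehose {
        boolean hasMore();
        String nextRow();
    }

    // Base firehose reading rows from an in-memory list.
    static Firehose fromList(List<String> rows) {
        Iterator<String> it = rows.iterator();
        return new Firehose() {
            public boolean hasMore() { return it.hasNext(); }
            public String nextRow() { return it.next(); }
        };
    }

    // Wrapping firehose that filters rows, leaving the inner one unchanged.
    // Each such wrapper adds exactly one layer of logic.
    static Firehose filtering(Firehose inner, Predicate<String> keep) {
        return new Firehose() {
            String next = advance();
            String advance() {
                while (inner.hasMore()) {
                    String row = inner.nextRow();
                    if (keep.test(row)) return row;
                }
                return null;
            }
            public boolean hasMore() { return next != null; }
            public String nextRow() { String r = next; next = advance(); return r; }
        };
    }

    public static void main(String[] args) {
        Firehose fh = filtering(fromList(List.of("a", "", "b")), s -> !s.isEmpty());
        StringBuilder sb = new StringBuilder();
        while (fh.hasMore()) sb.append(fh.nextRow());
        System.out.println(sb); // prints ab
    }
}
```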
Hadoop-based batch ingestion
The two main Hadoop indexing classes are a job that determines how many Druid segments to create, and HadoopDruidIndexerJob.java, which creates the Druid segments.
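The determine-then-index split can be illustrated with a hypothetical sizing rule (invented for this sketch, not Druid's actual partitioning logic): the first job inspects the input to decide how many segment shards are needed, and the second job writes them.

```java
public class ShardCountSketch {
    // Hypothetical sizing rule: one shard per targetRowsPerSegment rows,
    // rounded up so no rows are left over.
    static int numShards(long inputRows, long targetRowsPerSegment) {
        return (int) ((inputRows + targetRowsPerSegment - 1) / targetRowsPerSegment);
    }

    public static void main(String[] args) {
        System.out.println(numShards(12_000_000L, 5_000_000L)); // prints 3
    }
}
```

Splitting the work this way means the expensive segment-writing job can be parallelized across a known, fixed number of shards.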
Internal UIs
Druid currently has two internal UIs. One is for the Coordinator and one is for the Overlord.
At some point in the future, we will likely move the internal UI code out of core Druid.
Client libraries
We welcome contributions for new client libraries to interact with Druid. See the Community and third-party libraries page for links to existing client libraries.