A datasource is the Druid equivalent of a database table. Multitenant workloads can either use a separate datasource for each tenant, or can share one or more datasources between tenants using a “tenant_id” dimension. When deciding which path to go down, consider that each path has pros and cons.

Pros of datasources per tenant:

  • Each datasource can have its own schema, its own backfills, its own partitioning rules, and its own data loading and expiration rules.
  • Queries can be faster since there will be fewer segments to examine for a typical tenant’s query.

Pros of shared datasources:

  • Each datasource requires its own JVMs for realtime indexing.
  • Each datasource requires its own YARN resources for Hadoop batch jobs.
  • For these reasons, it can be wasteful to run a very large number of small datasources; sharing datasources across tenants avoids this per-datasource overhead.

If your multitenant cluster uses shared datasources, most of your queries will likely filter on a “tenant_id” dimension, as in the sketch below. These sorts of queries perform best when data is well-partitioned by tenant, and there are a few ways to accomplish this.
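A per-tenant query of this shape, written as a minimal native query sketch (the datasource name, interval, and tenant value are hypothetical):

```json
{
  "queryType": "timeseries",
  "dataSource": "shared_datasource",
  "intervals": ["2015-09-12/2015-09-13"],
  "granularity": "hour",
  "filter": {
    "type": "selector",
    "dimension": "tenant_id",
    "value": "tenant_42"
  },
  "aggregations": [
    { "type": "count", "name": "rows" }
  ]
}
```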

With batch indexing, you can use single-dimension partitioning to partition your data by tenant_id. Druid always partitions by time first, but the secondary partition within each time bucket will be on tenant_id.
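As a rough sketch, single-dimension partitioning can be requested through the partitionsSpec in a batch task’s tuningConfig. The fragment below assumes a native batch (index_parallel) task; exact field names vary across Druid versions, and the target segment size is illustrative:

```json
{
  "tuningConfig": {
    "type": "index_parallel",
    "forceGuaranteedRollup": true,
    "partitionsSpec": {
      "type": "single_dim",
      "partitionDimension": "tenant_id",
      "targetRowsPerSegment": 5000000
    }
  }
}
```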

With realtime indexing, you’d do this by tweaking the stream you send to Druid. For example, if you’re using Kafka then you can have your Kafka producer partition your topic by a hash of tenant_id.
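A minimal sketch of such a producer in Java, assuming the standard Kafka client: setting tenant_id as the record key is enough, because Kafka’s default partitioner chooses the partition by hashing the key. The topic name, broker address, and event payload are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TenantPartitionedProducer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      String tenantId = "tenant_42"; // hypothetical tenant
      String event = "{\"timestamp\":\"2015-09-12T00:00:00Z\",\"tenant_id\":\"" + tenantId + "\"}";
      // Keying the record by tenant_id makes the default partitioner
      // (a hash of the key) route all of this tenant's events to one partition.
      producer.send(new ProducerRecord<>("druid-events", tenantId, event));
    }
  }
}
```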

Druid’s fundamental unit of computation is a segment. Processes scan segments in parallel, and a given process can scan up to druid.processing.numThreads segments concurrently. To process more data in parallel and increase performance, add more cores to the cluster. Druid segments should be sized such that any computation over any given segment completes in at most 500ms.

Druid internally stores requests to scan segments in a priority queue. If a given query requires scanning more segments than the total number of available processors in a cluster, and many similarly expensive queries are concurrently running, we don’t want any query to be starved out. Druid’s internal processing logic will scan a set of segments from one query and release resources as soon as the scans complete. This allows for a second set of segments from another query to be scanned. By keeping segment computation time very small, we ensure that resources are constantly being yielded, and segments pertaining to different queries are all being processed.

Druid queries can optionally set a priority flag in the query context. Queries known to be slow (download or reporting style queries) can be de-prioritized, and more interactive queries can be given higher priority.
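For example, a slow reporting-style query might be submitted with a reduced priority through its context; the default priority is 0, and higher values take precedence. The query shape and values below are illustrative:

```json
{
  "queryType": "timeseries",
  "dataSource": "shared_datasource",
  "intervals": ["2015-01-01/2016-01-01"],
  "granularity": "day",
  "aggregations": [
    { "type": "count", "name": "rows" }
  ],
  "context": { "priority": -1 }
}
```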