Query context
- For Druid SQL, context parameters are provided either in a JSON object named `context` to the HTTP POST API, or as properties to the JDBC connection.
- For native queries, context parameters are provided in a JSON object named `context`.
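For example, here is a minimal sketch of a request body for the SQL HTTP POST API (sent to `/druid/v2/sql/` on the Broker or Router); the `wikipedia` datasource and the specific context keys are purely illustrative:

```json
{
  "query": "SELECT channel, COUNT(*) AS cnt FROM wikipedia GROUP BY channel",
  "context": {
    "sqlTimeZone": "America/Los_Angeles",
    "timeout": 60000
  }
}
```

Over JDBC, the same keys are passed as connection properties on the Avatica connection rather than in a JSON object.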
Note that a value set in the query context overrides both the built-in default and any value supplied through a runtime property of the form `druid.query.default.context.{property_key}` (if set).
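To illustrate that precedence with a sketch (the property name and values are only an example): if a cluster sets `druid.query.default.context.timeout=30000` in its runtime properties, every query inherits a 30-second timeout by default, but a query that supplies its own `timeout` in the context wins:

```json
{
  "query": "SELECT COUNT(*) FROM wikipedia",
  "context": {
    "timeout": 60000
  }
}
```

This query runs with a 60-second timeout, overriding the 30-second cluster-wide default.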
See SQL query context for query context parameters specific to Druid SQL queries.
See the list of GroupBy query context parameters available on the groupBy query page.
The GroupBy and Timeseries query types can run in vectorized mode, subject to the following requirements (a sample vectorizable query appears at the end of this section):
- All query-level filters must either be able to run on bitmap indexes or must offer vectorized row-matchers. These include “selector”, “bound”, “in”, “like”, “regex”, “search”, “and”, “or”, and “not”.
- All aggregators must offer vectorized implementations. These include “count”, “doubleSum”, “floatSum”, “longSum”, “longMin”, “longMax”, “doubleMin”, “doubleMax”, “floatMin”, “floatMax”, “longAny”, “doubleAny”, “floatAny”, “stringAny”, “hyperUnique”, “filtered”, “approxHistogram”, “approxHistogramFold”, and “fixedBucketsHistogram” (with numerical input).
- All virtual columns must offer vectorized implementations. Currently for expression virtual columns, support for vectorization is decided on a per expression basis, depending on the type of input and the functions used by the expression. See the currently supported list in the expression documentation.
- For GroupBy: All dimension specs must be “default” (no extraction functions or filtered dimension specs).
- For GroupBy: No multi-value dimensions.
- For Timeseries: No “descending” order.
- Only “table” datasources (not joins, subqueries, lookups, or inline datasources).
Other query types (like TopN, Scan, Select, and Search) ignore the “vectorize” parameter, and will execute without vectorization. These query types will ignore the “vectorize” parameter even if it is set to “force”.
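As a sketch, the following native Timeseries query satisfies the requirements above (a vectorizable “selector” filter, a vectorizable “longSum” aggregator, and no descending order) and requests vectorization explicitly; the datasource, interval, and column names are illustrative:

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "intervals": ["2016-06-27/2016-06-28"],
  "granularity": "hour",
  "filter": { "type": "selector", "dimension": "channel", "value": "#en.wikipedia" },
  "aggregations": [
    { "type": "longSum", "name": "added", "fieldName": "added" }
  ],
  "context": {
    "vectorize": "force",
    "vectorSize": 512
  }
}
```

Setting `"vectorize": "force"` makes the query fail rather than silently fall back if any part of it cannot be vectorized, which is a convenient way to verify that a query meets the requirements listed above.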