Router Process

    For query routing purposes alone, you typically only need the Router process once your Druid cluster grows well into the terabyte range.

    In addition to query routing, the Router also runs the web console, a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.

    For Apache Druid Router Process Configuration, see Router Configuration.

    For basic tuning guidance for the Router process, see Basic cluster tuning.

    HTTP endpoints

    For a list of API endpoints supported by the Router, see the Router API reference.

    Router as management proxy

    The Router can be configured to forward requests to the active Coordinator or Overlord process. This may be useful for setting up a highly available cluster in situations where the HTTP redirect mechanism of the inactive -> active Coordinator/Overlord does not function correctly (servers are behind a load balancer, the hostname used in the redirect is only resolvable internally, etc.).

    Enabling the management proxy

    To enable this functionality, set the following in the Router’s runtime.properties:

    druid.router.managementProxy.enabled=true

    Management proxy routing

    The management proxy supports implicit and explicit routes. Implicit routes are those where the destination can be determined from the original request path based on Druid API path conventions. For the Coordinator the convention is /druid/coordinator/* and for the Overlord the convention is /druid/indexer/*. These are convenient because they mean that using the management proxy does not require modifying the API request other than issuing the request to the Router instead of the Coordinator or Overlord. Most Druid API requests can be routed implicitly.

    Explicit routes are those where the request to the Router contains a path prefix indicating which process the request should be routed to. For the Coordinator this prefix is /proxy/coordinator and for the Overlord it is /proxy/overlord. This is required for API calls with an ambiguous destination. For example, the /status API is present on all Druid processes, so explicit routing needs to be used to indicate the proxy destination.
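    The path conventions above can be sketched as a small routing function. This is an illustration only, with hypothetical names; it is not Druid's actual implementation:

```python
# Sketch of management-proxy route resolution based on the path conventions
# described above. Illustrative only; not Druid's implementation.
def resolve_proxy_destination(path):
    """Return the proxy destination for a request path, or None if ambiguous."""
    # Explicit routes: the /proxy prefix names the destination directly.
    if path.startswith("/proxy/coordinator"):
        return "coordinator"
    if path.startswith("/proxy/overlord"):
        return "overlord"
    # Implicit routes: inferred from Druid's API path conventions.
    if path.startswith("/druid/coordinator/"):
        return "coordinator"
    if path.startswith("/druid/indexer/"):
        return "overlord"
    # Paths like /status exist on every process, so they need an explicit prefix.
    return None

print(resolve_proxy_destination("/druid/coordinator/v1/datasources"))  # coordinator
print(resolve_proxy_destination("/proxy/overlord/status"))             # overlord
print(resolve_proxy_destination("/status"))                            # None
```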

    This is summarized in the table below:

    Request path prefix    Destination    Route type
    /druid/coordinator/*   Coordinator    implicit
    /druid/indexer/*       Overlord       implicit
    /proxy/coordinator/*   Coordinator    explicit
    /proxy/overlord/*      Overlord       explicit

    Router strategies

    The Router has a configurable list of strategies for choosing which Broker a query is routed to. The following strategies are available.

    timeBoundary

    {
      "type": "timeBoundary"
    }

    Including this strategy means all timeBoundary queries are always routed to the highest priority Broker.

    priority

    Queries with a priority lower than minPriority are routed to the lowest priority Broker, and queries with a priority higher than maxPriority are routed to the highest priority Broker. By default, minPriority is 0 and maxPriority is 1. With these defaults, a query with priority 0 (the default query priority) bypasses the priority selection logic.
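    The threshold logic above can be sketched as follows (an illustration with the default thresholds, not Druid's implementation; the tier names are hypothetical):

```python
# Sketch of the "priority" strategy's selection logic with the default
# thresholds described above (minPriority = 0, maxPriority = 1).
# Illustrative only; not Druid's implementation.
def select_broker_tier(query_priority, min_priority=0, max_priority=1):
    """Return which Broker tier handles the query, or None to skip this strategy."""
    if query_priority < min_priority:
        return "lowest-priority"
    if query_priority > max_priority:
        return "highest-priority"
    return None  # priority within [minPriority, maxPriority]: strategy skipped
```

    With the defaults, a query at priority 0 falls inside the [0, 1] window, so this strategy does not apply and the Router moves on to the next one.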

    manual

    This strategy reads the brokerService parameter from the query context and routes the query to that broker service. If no valid brokerService is specified in the query context, the defaultManualBrokerService field is used to determine the target broker service, provided its value is valid and non-null. A value is considered valid if it is present in druid.router.tierToBrokerMap. This strategy can route both native and SQL queries (when enabled).

    Example: A strategy that routes queries to the Broker “druid:broker-hot” if no valid brokerService is found in the query context.

    {
      "type": "manual",
      "defaultManualBrokerService": "druid:broker-hot"
    }
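    The lookup order of the manual strategy can be sketched as follows (hypothetical helper; tier_to_broker_map stands in for druid.router.tierToBrokerMap, and this is not Druid's implementation):

```python
# Sketch of the "manual" strategy's resolution order described above.
# Illustrative only; not Druid's implementation.
def resolve_manual(query_context, tier_to_broker_map,
                   default_manual_broker_service=None):
    """Return the target broker service for a query, or None if unresolved."""
    valid_services = set(tier_to_broker_map.values())
    # 1. Prefer brokerService from the query context, if valid.
    candidate = query_context.get("brokerService")
    if candidate in valid_services:
        return candidate
    # 2. Fall back to defaultManualBrokerService, if valid and non-null.
    if default_manual_broker_service in valid_services:
        return default_manual_broker_service
    return None
```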

    JavaScript

    Allows defining arbitrary routing rules using a JavaScript function. The function is passed the configuration and the query to be executed, and returns the tier it should be routed to, or null for the default tier.

    Example: a function that sends queries containing three or more aggregators to the lowest priority Broker.

    {
      "type" : "javascript",
      "function" : "function (config, query) { if (query.getAggregatorSpecs && query.getAggregatorSpecs().size() >= 3) { var size = config.getTierToBrokerMap().values().size(); if (size > 0) { return config.getTierToBrokerMap().values().toArray()[size-1] } else { return config.getDefaultBrokerServiceName() } } else { return null } }"
    }

    Routing of SQL queries using strategies

    To enable routing of SQL queries using strategies, set druid.router.sql.enable to true. The broker service for a given SQL query is resolved using only the provided Router strategies. If not resolved using any of the strategies, the Router uses the defaultBrokerServiceName. This behavior is slightly different from native queries where the Router first tries to resolve the broker service using strategies, then load rules and finally using the defaultBrokerServiceName if still not resolved. When druid.router.sql.enable is set to false (default value), the Router uses the defaultBrokerServiceName.
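    The difference between the two resolution orders can be sketched as follows. Here strategy_result and load_rule_result stand in for the outcome of each mechanism (a broker service name, or None if it did not resolve), and the names are hypothetical, not Druid APIs:

```python
# Sketch of the broker-resolution order described above.
# Illustrative only; not Druid's implementation.
DEFAULT_BROKER = "druid:broker-cold"  # stands in for defaultBrokerServiceName

def resolve_broker(strategy_result, load_rule_result, is_sql, sql_routing_enabled):
    if is_sql:
        if not sql_routing_enabled:  # druid.router.sql.enable=false (the default)
            return DEFAULT_BROKER
        # SQL: strategies only, then the default. Load rules are not consulted.
        return strategy_result or DEFAULT_BROKER
    # Native: strategies, then load rules, then the default.
    return strategy_result or load_rule_result or DEFAULT_BROKER
```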

    Setting druid.router.sql.enable does not affect either Avatica JDBC requests or native queries. Druid always routes native queries using the strategies and load rules as documented. Druid always routes Avatica JDBC requests based on connection ID.

    All Avatica JDBC requests with a given connection ID must be routed to the same Broker, since Druid Brokers do not share connection state with each other.

    Note that when multiple Routers are used, all Routers should have identical balancer configuration to ensure that they make the same routing decisions.

    Rendezvous hash balancer

    This balancer uses Rendezvous Hashing on an Avatica request’s connection ID to assign the request to a Broker.
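    A minimal sketch of rendezvous (highest-random-weight) hashing on a connection ID follows. This is an illustration of the technique only, not Druid's implementation; the hash choice is an assumption:

```python
import hashlib

# Minimal rendezvous hashing sketch over an Avatica connection ID.
# Illustrative only; not Druid's implementation.
def pick_broker(connection_id, brokers):
    """Pick the broker with the highest hash weight for this connection ID.
    Every Router computing this independently picks the same Broker."""
    def weight(broker):
        digest = hashlib.sha256(f"{broker}:{connection_id}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(brokers, key=weight)
```

    Because the weight depends only on the (broker, connection ID) pair, requests for a given connection ID consistently land on the same Broker, which is what keeps per-connection state on a single Broker.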

    To use this balancer, specify the following property:

    druid.router.avatica.balancer.type=rendezvousHash

    If no druid.router.avatica.balancer property is set, the Router defaults to the rendezvous hash balancer.

    Consistent hash balancer

    This balancer uses Consistent Hashing on an Avatica request’s connection ID to assign the request to a Broker.
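    A minimal consistent-hash ring over connection IDs can be sketched as follows (illustrative only; Druid's consistentHash balancer differs in detail, and the hash and replica count are assumptions):

```python
import bisect
import hashlib

# Minimal consistent-hash ring sketch over Avatica connection IDs.
# Illustrative only; not Druid's implementation.
def _hash(key):
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class ConsistentHashRing:
    def __init__(self, brokers, replicas=64):
        # Building the ring is the up-front cost: every broker is hashed to
        # many virtual points, which are then sorted once.
        self._ring = sorted((_hash(f"{broker}#{i}"), broker)
                            for broker in brokers for i in range(replicas))
        self._points = [point for point, _ in self._ring]

    def pick(self, connection_id):
        # Assignment is a binary search over the precomputed ring, which is
        # why per-request assignment is fast once the ring is built.
        i = bisect.bisect(self._points, _hash(connection_id)) % len(self._ring)
        return self._ring[i][1]
```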

    To use this balancer, specify the following property:

    druid.router.avatica.balancer.type=consistentHash

    This is a non-default implementation provided for experimentation purposes. The consistent hasher has longer setup times on initialization and when the set of Brokers changes, but a faster Broker assignment time than the rendezvous hasher when tested with 5 Brokers. Benchmarks for both implementations are provided in ConsistentHasherBenchmark and RendezvousHasherBenchmark. The consistent hasher also requires locking, while the rendezvous hasher does not.

    Example production configuration

    In this example, we have two tiers in our production cluster: hot and _default_tier. Queries for the hot tier are routed through the broker-hot set of Brokers, and queries for the _default_tier are routed through the broker-cold set of Brokers. If any exceptions or network problems occur, queries are also routed to the broker-cold set of Brokers. In this example, the Router runs on a c3.2xlarge EC2 instance. We assume a common.runtime.properties already exists.

    JVM settings:

    -server
    -Xmx13g
    -XX:NewSize=256m
    -XX:MaxNewSize=256m
    -XX:+UseConcMarkSweepGC
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    -XX:+UseLargePages
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/mnt/galaxy/deploy/current/
    -Duser.timezone=UTC
    -Dfile.encoding=UTF-8
    -Djava.io.tmpdir=/mnt/tmp
    -Dcom.sun.management.jmxremote.port=17071
    -Dcom.sun.management.jmxremote.ssl=false

    Runtime.properties: