Extensions

    Production clusters generally use at least two extensions: one for deep storage and one for a metadata store. Many clusters also use additional extensions.

    Core extensions are maintained by Druid committers.

    A number of community members have contributed their own extensions to Druid that are not packaged with the default Druid tarball. If you’d like to take on maintenance for a community extension, please post on dev@druid.apache.org to let us know!

    | Name | Description | Docs |
    |------|-------------|------|
    | ambari-metrics-emitter | Ambari Metrics Emitter | |
    | druid-cassandra-storage | Apache Cassandra deep storage. | link |
    | druid-cloudfiles-extensions | Rackspace Cloudfiles deep storage and firehose. | |
    | druid-distinctcount | DistinctCount aggregator | link |
    | druid-redis-cache | A cache implementation for Druid based on Redis. | |
    | druid-time-min-max | Min/Max aggregator for timestamp. | link |
    | sqlserver-metadata-storage | Microsoft SQLServer deep storage. | |
    | graphite-emitter | Graphite metrics emitter | link |
    | statsd-emitter | StatsD metrics emitter | |
    | kafka-emitter | Kafka metrics emitter | link |
    | druid-thrift-extensions | Support thrift ingestion | |
    | druid-opentsdb-emitter | OpenTSDB metrics emitter | link |
    | materialized-view-selection, materialized-view-maintenance | Materialized View | |
    | druid-moving-average-query | Support for Moving Average and other Aggregate in Druid queries. | link |
    | druid-influxdb-emitter | InfluxDB metrics emitter | |
    | druid-momentsketch | Support for approximate quantile queries using the momentsketch library | |
    | druid-tdigestsketch | Support for approximate sketch aggregators based on T-Digest | |

    Please post on dev@druid.apache.org if you’d like an extension to be promoted to core. If a community extension is actively supported, we can promote it to core based on community feedback.

    For information on how to create your own extension, please see the Druid documentation on creating extensions.

    Apache Druid bundles all core extensions out of the box. See the list of core extensions above for your options. You can load bundled extensions by adding their names to the druid.extensions.loadList property in your common.runtime.properties. For example, to load the postgresql-metadata-storage and druid-hdfs-storage extensions, use the following configuration:
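    A minimal sketch of that property; druid.extensions.loadList takes a JSON array of extension directory names:

    ```
    # In common.runtime.properties
    druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage"]
    ```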

    These extensions are located in the extensions directory of the distribution.

    You can also load community and third-party extensions not already bundled with Druid. To do this, first download the extension and then install it into your extensions directory. You can download extensions from their distributors directly, or, if they are available from Maven, the included pull-deps tool can download them for you. To use pull-deps, specify the full Maven coordinate of the extension in the form groupId:artifactId:version. For example, for the (hypothetical) extension com.example:druid-example-extension:1.0.0, run:

    java \
      -cp "lib/*" \
      org.apache.druid.cli.Main tools pull-deps \
      --no-default-hadoop \
      -c "com.example:druid-example-extension:1.0.0"

    You only have to install the extension once. Then, add "druid-example-extension" to druid.extensions.loadList in common.runtime.properties to instruct Druid to load the extension.
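    As a sketch, continuing the hypothetical extension above, the extension name is appended to whatever loadList you already have, for example:

    ```
    # In common.runtime.properties (postgresql-metadata-storage shown as an example of an already-loaded extension)
    druid.extensions.loadList=["postgresql-metadata-storage", "druid-example-extension"]
    ```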

    If you add your extension jar to the classpath at runtime, Druid will also load it into the system. This mechanism is relatively easy to reason about, but it also means that you have to ensure that all dependency jars on the classpath are compatible. That is, Druid makes no provisions while using this method to maintain class loader isolation so you must make sure that the jars on your classpath are mutually compatible.
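    As a sketch of this classpath approach, assuming a Broker process and the hypothetical extension jar directory from the pull-deps example above, the launch command might look like:

    ```
    java \
      -cp "lib/*:extensions/druid-example-extension/*" \
      org.apache.druid.cli.Main server broker
    ```

    Because every jar here shares one class loader, a conflicting transitive dependency in the extension can break the server; the druid.extensions.loadList mechanism is generally preferred for that reason.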