Extensions

    Production clusters will generally use at least two extensions: one for deep storage and one for a metadata store. Many clusters will also use additional extensions.

    Core extensions are maintained by Druid committers.

    A number of community members have contributed their own extensions to Druid that are not packaged with the default Druid tarball. If you’d like to take on maintenance for a community extension, please post on dev@druid.apache.org to let us know!

    |Name|Description|
    |----|-----------|
    |aliyun-oss-extensions|Aliyun OSS deep storage|
    |ambari-metrics-emitter|Ambari Metrics emitter|
    |druid-cassandra-storage|Apache Cassandra deep storage|
    |druid-cloudfiles-extensions|Rackspace Cloud Files deep storage and firehose|
    |druid-distinctcount|DistinctCount aggregator|
    |druid-redis-cache|A cache implementation for Druid based on Redis|
    |druid-time-min-max|Min/Max aggregator for timestamp|
    |sqlserver-metadata-storage|Microsoft SQL Server metadata store|
    |graphite-emitter|Graphite metrics emitter|
    |statsd-emitter|StatsD metrics emitter|
    |kafka-emitter|Kafka metrics emitter|
    |druid-thrift-extensions|Support for Thrift ingestion|
    |druid-opentsdb-emitter|OpenTSDB metrics emitter|
    |materialized-view-selection, materialized-view-maintenance|Materialized views|
    |druid-moving-average-query|Support for Moving Average and other aggregate window functions in Druid queries|
    |druid-influxdb-emitter|InfluxDB metrics emitter|
    |druid-momentsketch|Support for approximate quantile queries using the momentsketch library|
    |druid-tdigestsketch|Support for approximate sketch aggregators based on T-Digest|
    |gce-extensions|GCE extensions|
    |prometheus-emitter|Exposes Druid metrics for Prometheus server collection|

    Please post on dev@druid.apache.org if you’d like an extension to be promoted to core. If a community extension is actively maintained and supported, we can promote it to core based on community feedback.

    For information on how to create your own extension, please see the extension development documentation.

    Apache Druid bundles all core extensions out of the box. See the list of core extensions for your options. You can load bundled extensions by adding their names to the druid.extensions.loadList property in your common.runtime.properties file. For example, to load the postgresql-metadata-storage and druid-hdfs-storage extensions, use the following configuration:
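The entry in common.runtime.properties would look like this (extension names taken from the example above):

```properties
# Load the PostgreSQL metadata store and HDFS deep storage extensions
druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage"]
```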

    These extensions are located in the extensions directory of the distribution.

    You can also load community and third-party extensions not already bundled with Druid. To do this, first download the extension and then install it into your extensions directory. You can download extensions from their distributors directly, or if they are available from Maven, the included pull-deps tool can download them for you. To use pull-deps, specify the full Maven coordinate of the extension in the form groupId:artifactId:version. For example, for the (hypothetical) extension com.example:druid-example-extension:1.0.0, run:

    java \
      -cp "lib/*" \
      org.apache.druid.cli.Main tools pull-deps \
      --no-default-hadoop \
      -c "com.example:druid-example-extension:1.0.0"

    You only have to install the extension once. Then, add "druid-example-extension" to druid.extensions.loadList in common.runtime.properties to instruct Druid to load the extension.
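Continuing the hypothetical example above, the resulting property would look like this (alongside any other extensions you already load):

```properties
# Instruct Druid to load the installed community extension
druid.extensions.loadList=["druid-example-extension"]
```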

    If you add your extension jar to the classpath at runtime, Druid will also load it into the system. This mechanism is relatively easy to reason about, but Druid makes no provisions for class loader isolation when you use this method, so you must ensure that all jars on your classpath are mutually compatible.
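As a sketch of this approach, assuming a Broker process and a hypothetical /opt/extra-jars directory holding your extension jar and its dependencies:

```
# Append the extension's jars to the classpath when launching the process;
# /opt/extra-jars is a hypothetical location for your extension jar
java \
  -cp "lib/*:/opt/extra-jars/*" \
  org.apache.druid.cli.Main server broker
```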