Bloom Filter

    This extension adds the ability both to construct Bloom filters from query results and to filter query results by testing against a Bloom filter. A Bloom filter is a probabilistic data structure for performing set membership checks. A Bloom filter is a good candidate to use with Druid for cases where an explicit filter is impossible, e.g. filtering a query against a set of millions of values.

    Following are some characteristics of Bloom filters:

    • Bloom filters are highly space-efficient compared to a HashSet.
    • Because of their probabilistic nature, false positives are possible: test() may return true for an element that was never actually inserted during construction.
    • False negatives are not possible: if an element is present, test() will never return false.
    • The false positive probability of this implementation is currently fixed at 5%, but increasing the number of entries the filter can hold lowers the false positive rate at the cost of a larger filter.

    This extension is currently based on org.apache.hive.common.util.BloomKFilter from hive-storage-api. Internally, this implementation uses Murmur3 as the hash algorithm.
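To make the set-membership behavior concrete, here is a minimal, self-contained sketch of a Bloom filter in plain Java. It is illustrative only: the class name, bit-index derivation, and sizes are made up for this example, and Druid's actual implementation is org.apache.hive.common.util.BloomKFilter with Murmur3 hashing.

```java
import java.util.BitSet;

// Illustrative sketch only: a toy Bloom filter. Druid's extension uses
// org.apache.hive.common.util.BloomKFilter (Murmur3); here we derive the
// k bit indexes from String.hashCode() via double hashing instead.
public class TinyBloomFilter {
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;

    public TinyBloomFilter(int numBits, int numHashes) {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    // Derive the i-th bit index with double hashing: h1 + i * h2.
    private int index(String value, int i) {
        int h1 = value.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9e3779b9; // arbitrary second mix
        return Math.floorMod(h1 + i * h2, numBits);
    }

    public void add(String value) {
        for (int i = 0; i < numHashes; i++) {
            bits.set(index(value, i));
        }
    }

    // May return true for values never added (false positive),
    // but never returns false for a value that was added.
    public boolean test(String value) {
        for (int i = 0; i < numHashes; i++) {
            if (!bits.get(index(value, i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        TinyBloomFilter filter = new TinyBloomFilter(1024, 3);
        filter.add("alice");
        filter.add("bob");
        System.out.println("alice: " + filter.test("alice")); // always true
        System.out.println("bob: " + filter.test("bob"));     // always true
    }
}
```

Note how the structure makes false negatives impossible (every bit set during add is checked by test) while false positives can occur if unrelated insertions happen to set all k bits for a value.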

    To filter a Druid query with a BloomKFilter constructed externally (for example, with Java), specify the bloom filter in the query's filter specification:

    {
      "type" : "bloom",
      "dimension" : <dimension_name>,
      "bloomKFilter" : <serialized_bytes_for_BloomKFilter>,
      "extractionFn" : <extraction_fn>
    }

    Serialized Format for BloomKFilter

    Serialized BloomKFilter format:

    • 1 byte for the number of hash functions.
    • 1 big-endian int for the number of longs in the bitset (DataOutputStream writes ints in big-endian order).
    • The big-endian longs of the BloomKFilter bitset.

    Note: org.apache.hive.common.util.BloomKFilter provides a serialize method that can be used to serialize Bloom filters to an output stream.
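The byte layout above can be reproduced with only the JDK, since DataOutputStream writes multi-byte values big-endian. This is a sketch of the layout for illustration (the class name and field values here are made up; in practice BloomKFilter's own serialize method produces this format for you):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Base64;

// Sketch of the serialized BloomKFilter layout using only the JDK.
public class BloomKFilterLayout {
    public static byte[] serialize(byte numHashFunctions, long[] bitset) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(numHashFunctions);   // 1 byte: number of hash functions
        out.writeInt(bitset.length);       // 4 bytes, big endian: number of longs
        for (long word : bitset) {
            out.writeLong(word);           // each bitset long, big endian
        }
        out.flush();
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] serialized = serialize((byte) 3, new long[] {0L, 0L});
        System.out.println(serialized.length); // 1 + 4 + 2 * 8 = 21 bytes
        // A base64 encoding of these bytes is what a query's
        // "bloomKFilter" field would carry.
        System.out.println(Base64.getEncoder().encodeToString(serialized));
    }
}
```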

    Bloom filters can be used in SQL WHERE clauses via the bloom_filter_test operator:
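For example, a query counting rows whose dimension value tests positive against an externally built filter might look like the following (the column name is illustrative, and the base64 string is left as a placeholder):

```sql
SELECT COUNT(*)
FROM druid.foo
WHERE bloom_filter_test(dim1, '<serialized_bytes_for_BloomKFilter>')
```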

    Expression and Virtual Column Support

    The bloom filter extension also adds a bloom filter expression which shares syntax with the SQL operator:

    bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>')

    Bloom Filter Query Aggregator

    | Property      | Description                                                                                        | Required? |
    |---------------|----------------------------------------------------------------------------------------------------|-----------|
    | type          | Aggregator type. Should always be bloom.                                                           | yes       |
    | name          | Output field name.                                                                                 | yes       |
    | field         | DimensionSpec to add to org.apache.hive.common.util.BloomKFilter.                                  | yes       |
    | maxNumEntries | Maximum number of distinct values supported by org.apache.hive.common.util.BloomKFilter, default 1500. | no    |

    Example

    {
      "queryType": "timeseries",
      "dataSource": "wikiticker",
      "aggregations": [
        {
          "type": "bloom",
          "name": "userBloom",
          "maxNumEntries": 100000,
          "field": {
            "type": "default",
            "dimension": "user",
            "outputType": "STRING"
          }
        }
      ]
    }

    The response will contain a serialized BloomKFilter in the userBloom output field. These values can then be set in the filter specification described above.

    Ordering results by a bloom filter aggregator, for example in a TopN query, will perform a comparatively expensive linear scan of the filter itself to count the number of set bits as a means of approximating how many items have been added to the set. As such, ordering by an alternate aggregation is recommended if possible.

    Bloom filters can be computed in SQL expressions with the BLOOM_FILTER aggregator:

    SELECT BLOOM_FILTER(<expression>, <max number of entries>) FROM druid.foo WHERE dim2 = 'abc'
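The two pieces compose: one query can build a filter with BLOOM_FILTER, and a later query can consume its serialized output via bloom_filter_test. A hypothetical sketch (table and column names are illustrative, and the base64 value is the output of the first query, left here as a placeholder):

```sql
-- Build a filter over user values seen where dim2 = 'abc'...
SELECT BLOOM_FILTER(user, 100000)
FROM druid.foo
WHERE dim2 = 'abc'

-- ...then test another datasource's column against the returned filter:
SELECT COUNT(*)
FROM druid.other
WHERE bloom_filter_test(user, '<base64_filter_from_previous_query>')
```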