Quickstart

    Before beginning the quickstart, it is helpful to read the general Druid overview and the ingestion overview, as the tutorials will refer to concepts discussed on those pages.

    You will need:

    • Java 8 (8u92+) or later
    • Linux, Mac OS X, or other Unix-like OS (Windows is not supported)

    Druid includes several example single-server configurations, along with scripts to start the Druid processes using these configurations.

    If you’re running on a small machine such as a laptop for a quick evaluation, the micro-quickstart configuration is a good choice, sized for a 4CPU/16GB RAM environment.

    If you plan to use the single-machine deployment for further evaluation beyond the tutorials, we recommend a larger configuration than micro-quickstart.

    Download the 0.18.1 release and extract the archive; the commands below are run from the apache-druid-0.18.1 package root.

    In the package, you should find:

    • LICENSE and NOTICE files
    • conf/* - example configurations for single-server and clustered setups
    • extensions/* - core Druid extensions
    • hadoop-dependencies/* - Druid Hadoop dependencies
    • lib/* - libraries and dependencies for core Druid
    • quickstart/* - configuration files, sample data, and other files for the quickstart tutorials

    The following commands assume that you are using the micro-quickstart single-machine configuration. If you are using a different configuration, the bin directory has an equivalent script for each configuration, such as bin/start-single-server-small.

    From the apache-druid-0.18.1 package root, run the following command:

        ./bin/start-micro-quickstart

    This will bring up instances of ZooKeeper and the Druid services, all running on the local machine.

    All persistent state, such as the cluster metadata store and segments for the services, is kept in the var directory under the apache-druid-0.18.1 package root. Logs for the services are located under var/sv.

    Later on, if you’d like to stop the services, press CTRL-C to exit the bin/start-micro-quickstart script, which will terminate the Druid processes.

    Once the cluster has started, you can navigate to http://localhost:8888. The Druid router process, which serves the Druid console, listens at this address.

    For the following data loading tutorials, we have included a sample data file containing Wikipedia page edit events that occurred on 2015-09-12.

    This sample data is located at quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz from the Druid package root. The page edit events are stored as JSON objects in a text file.

    The sample data has the following columns:

    • added
    • channel
    • cityName
    • comment
    • countryIsoCode
    • countryName
    • deleted
    • delta
    • isMinor
    • isNew
    • isRobot
    • isUnpatrolled
    • metroCode
    • namespace
    • page
    • regionIsoCode
    • regionName
    • user
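    Since each event is a JSON object on its own line of a gzipped text file, you can peek at the data with standard shell tools. The snippet below builds a stand-in event first so it can run anywhere; the field values are invented for illustration and are not taken from the real dataset.

```shell
# Write a stand-in event (illustrative values only, NOT real dataset contents)
# so the inspection pipeline below can be demonstrated without the Druid package.
printf '%s\n' '{"channel":"#en.wikipedia","page":"ExamplePage","user":"ExampleUser","comment":"example edit","namespace":"Main","added":36,"deleted":0,"delta":36,"isRobot":false,"isNew":false,"isMinor":true,"isUnpatrolled":false,"cityName":null,"countryName":null,"countryIsoCode":null,"regionName":null,"regionIsoCode":null,"metroCode":null}' \
  | gzip > /tmp/wikiticker-standin.json.gz

# Print the first event. The same pipeline works on the real sampled file:
#   gzip -cd quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz | head -n 1
gzip -cd /tmp/wikiticker-standin.json.gz | head -n 1
```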

    The following tutorials demonstrate various methods of loading data into Druid, including both batch and streaming use cases. All tutorials assume that you are using the micro-quickstart single-machine configuration mentioned above.

    • Loading a file - this tutorial demonstrates how to perform a batch file load, using Druid’s native batch ingestion.
    • Loading stream data from Apache Kafka - this tutorial demonstrates how to load streaming data from a Kafka topic.
    • Loading a file using Apache Hadoop - this tutorial demonstrates how to perform a batch file load, using a remote Hadoop cluster.

    If you want a clean start after stopping the services, delete the var directory and run the script again.
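    A sketch of the clean-start steps, run from the apache-druid-0.18.1 package root (the restart line is left as a comment since it only applies inside the Druid package):

```shell
# Remove all persisted state (metadata store, segments, and service logs).
rm -rf var

# Then start the services again:
# ./bin/start-micro-quickstart
```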

    Once every service has started, you are now ready to load data.

    Resetting Kafka

    If you completed the tutorial on loading stream data from Apache Kafka and wish to reset the cluster state, you should additionally clear out any Kafka state.

    Shut down the Kafka broker with CTRL-C before stopping ZooKeeper and the Druid services, and then delete the Kafka log directory at /tmp/kafka-logs.
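    Assuming the default quickstart Kafka log directory given above, the cleanup is:

```shell
# Delete Kafka's on-disk topic data (the quickstart default location).
rm -rf /tmp/kafka-logs
```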