To ensure a clean migration, shut down the non-coordinator services so that metadata state does not change while you perform the migration.

When migrating from Derby, the coordinator processes will still need to be up initially, as they host the Derby database.

Druid provides an export-metadata tool for exporting metadata from Derby into CSV files, which can then be imported into your new metadata store.

Run the tool on your existing cluster, and save the CSV files it generates. After a successful export, you can shut down the coordinator.
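As a sketch of what the export can look like (the Derby connect URI and output directory here are illustrative and assume a default local Derby setup; adjust them for your deployment):

```shell
# Hypothetical export of Derby metadata into CSV files under /tmp/csv.
# Run from the root of the Druid package; URI and paths are examples only.
cd ${DRUID_ROOT}
java -classpath "lib/*" \
  -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml \
  -Ddruid.extensions.directory="extensions" \
  org.apache.druid.cli.Main tools export-metadata \
  --connectURI "jdbc:derby://localhost:1527/var/druid/metadata.db;" \
  -o /tmp/csv
```

The output directory will then contain one CSV file per exported metadata table, which you keep for the import step.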

Before importing the existing cluster metadata, you will need to set up the new metadata store.

The MySQL and PostgreSQL extension docs have instructions for initial database setup.
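As an illustration of the kind of setup those docs describe (a sketch assuming a local MySQL server; the database name, user, and password are placeholder examples you should replace):

```shell
# Illustrative initial database setup for MySQL; all names and the
# password are examples only -- substitute your own values.
mysql -u root -e "CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'druid'@'localhost' IDENTIFIED BY 'diurd';
GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'localhost';"
```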

Druid provides a metadata-init tool for creating Druid’s metadata tables. After initializing the Druid database, you can run the commands shown below from the root of the Druid package to initialize the tables.

In the example commands below:

  • lib is the Druid lib directory
  • extensions is the Druid extensions directory
  • The --connectURI parameter corresponds to the value of druid.metadata.storage.connector.connectURI.
  • The --user parameter corresponds to the value of druid.metadata.storage.connector.user.
  • The --password parameter corresponds to the value of druid.metadata.storage.connector.password.
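For reference, these connector properties live in the common runtime properties file. A sketch for a PostgreSQL metadata store, where the URI, user, and password values are placeholders:

```properties
# conf/druid/cluster/_common/common.runtime.properties (illustrative values)
druid.metadata.storage.type=postgresql
druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```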

MySQL

  1. cd ${DRUID_ROOT}
  2. java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] -Ddruid.metadata.storage.type=mysql org.apache.druid.cli.Main tools metadata-init --connectURI="<mysql-uri>" --user <user> --password <pass> --base druid

PostgreSQL

  1. cd ${DRUID_ROOT}
  2. java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] -Ddruid.metadata.storage.type=postgresql org.apache.druid.cli.Main tools metadata-init --connectURI="<postgresql-uri>" --user <user> --password <pass> --base druid
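Note the backslash-escaped quotes in the loadList argument: the shell strips the backslashes, so the JVM receives a JSON-style list containing literal quotes. A quick way to check what the JVM will actually see:

```shell
# Print the loadList argument exactly as the shell hands it to the JVM.
echo -Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"]
# → -Ddruid.extensions.loadList=["postgresql-metadata-storage"]
```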

After initializing the tables, please refer to the import commands for your target database.
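As one illustration of what an import can look like for PostgreSQL (a sketch only; the database name, user, table, and CSV path are assumptions, and the authoritative import commands for your database are in the docs mentioned above):

```shell
# Hypothetical import of one exported table into PostgreSQL; adjust the
# database, user, table name, and CSV path for your deployment.
psql -U druid -d druid \
  -c "\copy druid_segments from '/tmp/csv/druid_segments.csv' with (format csv)"
```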