Deep storage migration

    Migration of deep storage involves the following steps at a high level:

    • Copying segments from the old deep storage to the new deep storage
    • Exporting Druid’s segments table from metadata
    • Rewriting the load specs in the exported segment data to reflect the new deep storage location
    • Reimporting the edited segment data into metadata

    To ensure a clean migration, shut down the non-coordinator services so that the metadata state does not change while you perform the migration.

    When migrating from Derby, the coordinator processes will still need to be up initially, as they host the Derby database.

    For information on what path structure to use in the new deep storage, please see deep storage migration options.

    Druid provides an Export Metadata Tool for exporting metadata from Derby into CSV files which can then be reimported.

    By setting the deep storage migration options, the tool will export CSV files in which the segment load specs have been rewritten to load from your new deep storage location.
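
    For example, a Derby export that also rewrites the load specs for an S3 destination might look roughly like the sketch below. The connect URI, output directory, bucket, and base key are placeholders, and option names such as --s3bucket and --s3baseKey come from the tool's deep storage migration options and may differ between Druid releases, so verify them against the Export Metadata Tool documentation for your version.

        # Run from the Druid installation directory so that "lib/*" resolves.
        mkdir -p /tmp/csv
        java -classpath "lib/*" \
          -Ddruid.extensions.loadList=[] \
          -Ddruid.metadata.storage.type=derby \
          org.apache.druid.cli.Main tools export-metadata \
          --connectURI "jdbc:derby://localhost:1527/var/druid/metadata.db" \
          -o /tmp/csv \
          --s3bucket my-new-druid-bucket \
          --s3baseKey druid/segments
        # The load specs in /tmp/csv/druid_segments.csv should now point at
        # s3://my-new-druid-bucket/druid/segments/... instead of the old location.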

    After generating the CSV exports with the modified segment data, you can reimport the contents of the Druid segments table from the generated CSVs.

    Please refer to the metadata migration documentation for examples of this import. Only the segments table needs to be imported.
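
    As a sketch, with a MySQL metadata store the rewritten segments CSV could be loaded as follows. The host, user, password, and database name are placeholders, local_infile must be enabled on the server, and the column list matches older Druid releases (newer releases add columns such as used_status_last_updated), so match it to your actual druid_segments schema and follow the Export Metadata Tool documentation for BLOB payload handling on your metadata store.

        # Double any backslashes so LOAD DATA does not strip them from the
        # JSON payloads, then stream the CSV into the druid_segments table.
        cat /tmp/csv/druid_segments.csv | sed 's/\\/\\\\/g' \
          | MYSQL_PWD="<password>" mysql -h metadata-db-host -u druid druid --local-infile=1 \
              -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE druid_segments
                  FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
                  (id, dataSource, created_date, start, \`end\`, partitioned, version, used, payload)"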

    Restart cluster

    After importing the segments table successfully, you can restart your cluster.