The MANIFEST file contains the history of all file operations in the DB since the last time the DB was opened, and it is replayed during DB open. If there are too many updates to replay, opening takes a long time. This can happen when:

  • SST files are too small, so file operations happen too frequently. If this is the case, try to solve the small-SST-file problem: maybe the memtable is flushed too often, which generates small L0 files, or the target file size is too small so that compaction generates small files. You can try to adjust the configuration accordingly.
  • The DB simply runs for too long and accumulates too many historical updates.

Either way, you can try to set options.max_manifest_file_size to force a new MANIFEST file to be generated when the current one reaches the maximum size, so that replay does not take too long.
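A minimal C++ sketch of how these knobs might be set; the specific sizes and the /tmp/example_db path are illustrative assumptions, not recommendations:

```cpp
#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Cap the MANIFEST size so a new one is rolled over once the limit is hit,
  // bounding how much history has to be replayed on the next DB open.
  // 64 MB is an illustrative value.
  options.max_manifest_file_size = 64 * 1024 * 1024;

  // Mitigate the "too many small SST files" case: a larger memtable produces
  // larger L0 files, and a larger target file size produces larger compaction
  // outputs. Values below are placeholders to tune for your workload.
  options.write_buffer_size = 64 * 1024 * 1024;      // memtable size
  options.target_file_size_base = 64 * 1024 * 1024;  // compaction output file size

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example_db", &db);
  if (!s.ok()) return 1;
  delete db;
  return 0;
}
```

Rolling the MANIFEST over at a bounded size trades slightly more frequent MANIFEST creation for a bounded amount of replay at the next open.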

WAL replay can also slow down DB open: data that has not yet been flushed out of the memtable is recovered from the write-ahead log. If your memtable size is large, this replay can be long, so try to shrink the memtable size.
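As a sketch (again with illustrative sizes and path), shrinking the memtable, and optionally flushing it before close, reduces how much has to be recovered at the next open:

```cpp
#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // A smaller memtable bounds how much unflushed data has to be recovered
  // from the WAL on the next open. 16 MB is an illustrative value.
  options.write_buffer_size = 16 * 1024 * 1024;
  options.max_write_buffer_number = 2;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example_db", &db);
  if (!s.ok()) return 1;

  // ... writes ...

  // Optionally flush the (default column family's) memtable before closing
  // so the next open has little or no unflushed data left to replay.
  db->Flush(rocksdb::FlushOptions());
  delete db;
  return 0;
}
```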

When options.max_open_files is set to -1, all SST files are opened during DB open, and their footer and metadata blocks are read. These are random reads from disk. If you have many files on a relatively high-latency device, especially spinning disks, those random reads can take a long time. Two options can help mitigate the problem (a sketch follows the list):

  • Set options.skip_stats_update_on_db_open=true. This allows RocksDB to do one fewer read per file.
  • Tune the LSM-tree to reduce the number of SST files; this also helps.
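A sketch combining both mitigations; skip_stats_update_on_db_open is a real DBOptions field, while the file-size values below are illustrative assumptions for producing fewer, larger files:

```cpp
#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::Options options;

  // Option 1: skip the table-properties read that updates compaction
  // statistics during open -- one fewer read per SST file.
  options.skip_stats_update_on_db_open = true;

  // Option 2: tune the LSM-tree so compaction produces fewer, larger files.
  // These sizes are placeholders to adjust for your workload.
  options.target_file_size_base = 128 * 1024 * 1024;
  options.target_file_size_multiplier = 2;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example_db", &db);
  if (!s.ok()) return 1;
  delete db;
  return 0;
}
```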