Known Issues in ArangoDB 3.2

    The RocksDB storage engine is intentionally missing the following features that are present in the MMFiles engine:

    • the datafile debugger (arango-dfdb) cannot be used with this storage engine

      RocksDB has its own crash recovery, so using the dfdb does not make sense here.

    • APIs that return collection properties or figures will return slightly different attributes for the RocksDB engine than for the MMFiles engine. For example, the attributes doCompact, indexBuckets and isVolatile are present in the MMFiles engine but not in the RocksDB engine. The memory usage figures reported for collections in the RocksDB engine are estimates, whereas they are exact for the MMFiles engine (see the arangosh example after this list).

    • transactions are limited in size. Transactions that get too big (in terms of number of operations involved or the total size of data modified by the transaction) will be committed automatically. Effectively this means that big user transactions are split into multiple smaller RocksDB transactions that are committed individually. The entire user transaction will not necessarily have ACID properties in this case.

      The threshold values for transaction sizes can be configured globally using the following startup options (an illustrative invocation is shown after this list):

      • --rocksdb.intermediate-commit-size: if the size of all operations in a transaction reaches this threshold, the transaction is committed automatically and a new transaction is started. The value is specified in bytes.

      • --rocksdb.intermediate-commit-count: if the number of operations in a transaction reaches this value, the transaction is committed automatically and a new transaction is started.

      • --rocksdb.max-transaction-size: this is an upper limit for the total number of bytes of all operations in a transaction. If the operations in a transaction consume more than this threshold value, the transaction will automatically abort with error 32 (“resource limit exceeded”).
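
      For example, the thresholds could be adjusted when starting the server. The values below are purely illustrative and not recommendations; the same options can also be set in the arangod configuration file:

        arangod --server.storage-engine rocksdb \
                --rocksdb.intermediate-commit-size 67108864 \
                --rocksdb.intermediate-commit-count 100000 \
                --rocksdb.max-transaction-size 536870912

      With these illustrative settings, a transaction is committed automatically once its operations exceed 64 MB or 100,000 operations, and a transaction whose operations exceed 512 MB in total is aborted with error 32.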
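
    The engine-specific differences in collection properties and figures mentioned above can be inspected from arangosh; the collection name below is only an example:

      db._create("example");
      db.example.properties();   // doCompact, indexBuckets and isVolatile appear only with the MMFiles engine
      db.example.figures();      // memory usage figures are estimates with the RocksDB engine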

    The following known issues will be resolved in future releases:

    • collections for which a geo index is present will use collection-level write locks even with the RocksDB engine. Reads from these collections can still be done in parallel, but writes cannot.

    • modifying documents in a collection with a geo index will cause multiple additional writes to RocksDB for maintaining the index structures

    • the number of documents reported for collections (db.<collection>.count()) may be slightly inaccurate during transactions if there are parallel transactions ongoing for the same collection that also modify the number of documents

    • AQL queries in the cluster still issue an extra locking HTTP request per shard, though this would not be necessary for the RocksDB engine in most cases

    • upgrading from 3.1 to 3.2 on Windows requires the user to manually copy the database directory to the new location and to run an upgrade on the database. Please consult the documentation for detailed instructions.

    • on some Linux systems, systemd and System V init scripts may report that the arangodb service is in good condition even though it could not be started. In this case the user needs to check /var/log/arangodb3 for further information about the failed startup.

    • ArangoDB v3.2 has been tested with OpenSSL 1.0 only and will not build against OpenSSL 1.1 when compiled on your own.