Master/Slave Architecture

    The master is the ArangoDB instance to which all data-modification operations should be directed. The slave is the ArangoDB instance that replicates the data from the master.

    Components

    Replication Logger

    Purpose

    The replication logger will write all data-modification operations into the write-ahead log. This log may then be read by clients to replay any data modification on a different server.
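    The tick-based replay mechanism can be sketched in a few lines of plain Python (an illustration only, not ArangoDB's implementation): every operation appended to the log receives a monotonically increasing tick, and a client replays everything after the last tick it has seen:

    ```python
    # Minimal sketch of a write-ahead log with tick-based replay.
    # Illustrative Python, not ArangoDB's actual implementation.

    class WriteAheadLog:
        def __init__(self):
            self.entries = []      # list of (tick, operation)
            self.last_tick = 0

        def append(self, operation):
            """Assign the next tick and append the operation to the log."""
            self.last_tick += 1
            self.entries.append((self.last_tick, operation))
            return self.last_tick

        def read_from(self, tick):
            """Return all operations logged after the given tick."""
            return [(t, op) for t, op in self.entries if t > tick]

    wal = WriteAheadLog()
    wal.append({"type": "insert", "_key": "a"})
    wal.append({"type": "update", "_key": "a"})
    wal.append({"type": "remove", "_key": "a"})

    # A client that has already seen tick 1 replays only newer operations:
    pending = wal.read_from(1)
    ```

    A client that remembers its last tick therefore never re-applies operations it has already seen.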

    Checking the state

    To query the current state of the logger, use the state command:

    require("@arangodb/replication").logger.state();

    The result might look like this:

    1. "state" : {
    2. "running" : true,
    3. "lastLogTick" : "2339941",
    4. "lastUncommittedLogTick" : "2339941",
    5. "totalEvents" : 2339941,
    6. "time" : "2019-07-02T10:30:30Z"
    7. },
    8. "server" : {
    9. "version" : "3.5.0",
    10. "serverId" : "194754235820456",
    11. "engine" : "rocksdb"
    12. },
    13. "clients" : [
    14. {
    15. "syncerId" : "158",
    16. "serverId" : "161976545824597",
    17. "expires" : "1970-01-23T09:59:10Z",
    18. "lastServedTick" : 2339908
    19. ]
    20. }

    The running attribute will always be true. In earlier versions of ArangoDB, replication logging was optional, and this attribute could have been false.

    The totalEvents attribute indicates how many log events have been logged since the start of the ArangoDB server. The lastLogTick value indicates the id of the last committed operation that was written to the server’s write-ahead log. It can be used to determine whether new operations were logged, and is also used by the replication applier for incremental fetching of data. The lastUncommittedLogTick value contains the id of the last uncommitted operation that was written to the server’s WAL. For the RocksDB storage engine, lastLogTick and lastUncommittedLogTick are identical, as the WAL only contains committed operations.

    Note: The replication logger state can also be queried via the HTTP API.

    To query which data ranges are still available for replication clients to fetch, the logger provides the firstTick and tickRanges functions:

    require("@arangodb/replication").logger.firstTick();

    This will return the minimum tick value that the server can provide to replication clients via its replication APIs. The tickRanges function returns the minimum and maximum tick values per logfile:

    1. require("@arangodb/replication").logger.tickRanges();
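    As a sketch of how a client might use this information (illustrative Python; the tickMin/tickMax field names follow the tickRanges() output, while the helper names are made up): a client can only resume replication if its last tick has not yet been pruned from the logfiles:

    ```python
    # Sketch: deciding whether a replication client can resume from a
    # given tick, based on per-logfile tick ranges. Illustrative Python;
    # the helper names first_tick/can_resume_from are hypothetical.

    def first_tick(tick_ranges):
        """Smallest tick still available across all logfiles."""
        return min(int(r["tickMin"]) for r in tick_ranges)

    def can_resume_from(tick_ranges, client_tick):
        """A client can resume only if its tick has not been pruned yet."""
        return int(client_tick) >= first_tick(tick_ranges)

    # Hypothetical sample ranges, one entry per logfile:
    ranges = [
        {"tickMin": "2339231", "tickMax": "2339903"},
        {"tickMin": "2339904", "tickMax": "2339941"},
    ]

    resume_ok = can_resume_from(ranges, "2339908")   # tick still available
    ```

    If a client falls behind the minimum available tick, it can no longer catch up incrementally and has to perform a full resynchronization.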

    Replication Applier

    Purpose

    The purpose of the replication applier is to read data from a master database’s event log and apply it locally. The applier will check the master database for new operations periodically. It performs an incremental synchronization, i.e. it only asks the master for operations that occurred after the last synchronization.

    The replication applier does not get notified by the master database when there are “new” operations available, but instead uses the pull principle. It might thus take some time (the so-called replication lag) before an operation from the master database gets shipped to, and applied in, a slave database.
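    The pull principle can be sketched as follows (illustrative Python, not the actual applier; fetch_from_master stands in for the replication API call):

    ```python
    # Sketch of the pull principle: the applier polls the master for
    # operations after its last applied tick. Illustrative Python only.

    def poll_once(fetch_from_master, last_applied_tick, apply_op):
        """One polling round of an incremental applier."""
        batch = fetch_from_master(last_applied_tick)  # ops with tick > last_applied_tick
        for tick, op in batch:
            apply_op(op)
            last_applied_tick = tick                  # advance only after applying
        return last_applied_tick

    # Simulated master event log: (tick, operation) pairs.
    master_log = [(1, "insert a"), (2, "update a"), (3, "remove a")]

    def fetch(after_tick):
        return [(t, op) for t, op in master_log if t > after_tick]

    applied = []
    tick = poll_once(fetch, 0, applied.append)   # applies all three operations
    tick = poll_once(fetch, tick, applied.append)  # second poll finds nothing new
    ```

    Between two polls, new operations accumulate on the master; that interval is the main source of the replication lag mentioned above.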

    The replication applier of a database is run in a separate thread. It may encounter problems when an operation from the master cannot be applied safely, or when the connection to the master database goes down (network outage, master database is down or unavailable etc.). In this case, the database’s replication applier thread might terminate itself. It is then up to the administrator to fix the problem and restart the database’s replication applier.

    If the replication applier cannot connect to the master database, or the communication fails at some point during the synchronization, the replication applier will try to reconnect to the master database. It will give up reconnecting only after a configurable amount of connection attempts.
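    The reconnect behavior can be sketched like this (illustrative Python; the parameter name max_connect_retries mirrors the applier's maxConnectRetries configuration option, and flaky_connect is a made-up stand-in for the real connection attempt):

    ```python
    # Sketch: retry connecting up to a configurable number of attempts
    # before giving up. Illustrative Python, not the actual applier code.
    import time

    def connect_with_retries(connect, max_connect_retries, retry_wait=0.0):
        """Try connect() until it succeeds or the retry budget is exhausted."""
        failed = 0
        while True:
            try:
                return connect()
            except ConnectionError:
                failed += 1
                if failed > max_connect_retries:
                    raise            # give up: administrator must intervene
                time.sleep(retry_wait)

    # Hypothetical connection that fails twice, then succeeds:
    attempts = {"n": 0}

    def flaky_connect():
        attempts["n"] += 1
        if attempts["n"] < 3:
            raise ConnectionError("master unreachable")
        return "connected"

    result = connect_with_retries(flaky_connect, max_connect_retries=5)
    ```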

    To query the current state of the applier of the current database, use the applier's state command:

    require("@arangodb/replication").applier.state();

    The result might look like this:

    {
      "state" : {
        "started" : "2019-03-01T11:36:33Z",
        "running" : true,
        "phase" : "running",
        "lastAppliedContinuousTick" : "2050724544",
        "lastProcessedContinuousTick" : "2050724544",
        "lastAvailableContinuousTick" : "2050724546",
        "safeResumeTick" : "2050694546",
        "ticksBehind" : 2,
        "progress" : {
          "time" : "2019-03-01T11:36:33Z",
          "message" : "fetching master log from tick 2050694546, last scanned tick 2050664547, first regular tick 2050544543, barrier: 0, open transactions: 1, chunk size 6291456",
          "failedConnects" : 0
        },
        "totalEvents" : 50010,
        "totalDocuments" : 50000,
        "totalRemovals" : 0,
        "totalResyncs" : 0,
        "totalOperationsExcluded" : 0,
        "totalApplyTime" : 1.1071290969848633,
        "averageApplyTime" : 1.1071290969848633,
        "totalFetchTime" : 0.2129514217376709,
        "averageFetchTime" : 0.10647571086883545,
        "lastError" : {
          "errorNum" : 0
        },
        "time" : "2019-03-01T11:36:34Z"
      },
      "server" : {
        "version" : "3.4.4",
        "serverId" : "46402312160836"
      },
      "endpoint" : "tcp://master.example.org"
    }

    The running attribute indicates whether the replication applier of the current database is currently running and polling the master at the configured endpoint for new events.

    The started attribute shows at what date and time the applier was started (if at all).

    The progress.failedConnects attribute shows how many failed connection attempts the replication applier currently has encountered in a row. In contrast, the totalFailedConnects attribute indicates how many failed connection attempts the applier has made in total. The totalRequests attribute shows how many requests the applier has sent to the master database in total.

    The totalEvents attribute shows how many log events the applier has read from the master. The totalDocuments and totalRemovals attributes indicate how many document operations the slave has applied locally.

    The attributes totalApplyTime and totalFetchTime show the total time the applier spent for applying data batches locally, and the total time the applier waited on data-fetching requests to the master, respectively. The averageApplyTime and averageFetchTime attributes show the average times clocked for these operations. Note that the average times will greatly be influenced by the chunk size used in the applier configuration (bigger chunk sizes mean fewer requests from the slave to the master, but the batches will include more data and take more time to create and apply).
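    As a quick sanity check of these numbers (plain arithmetic on the sample applier state above): dividing a total time by the corresponding average recovers the number of requests made so far:

    ```python
    # Values taken from the sample applier state output above.
    total_fetch_time = 0.2129514217376709
    average_fetch_time = 0.10647571086883545

    # average = total / count, so the ratio recovers the fetch count:
    fetch_requests = round(total_fetch_time / average_fetch_time)
    ```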

    The progress.message sub-attribute provides a brief hint of what the applier currently does (if it is running). The lastError attribute also has an optional errorMessage sub-attribute, showing the latest error message. The errorNum sub-attribute of the lastError attribute can be used by clients to programmatically check for errors. It should be 0 if there is no error, and it should be non-zero if the applier terminated itself due to a problem.
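    A client-side error check can be sketched like this (illustrative Python on a parsed state document, following the structure of the sample output above; the non-zero error number and message are just example values):

    ```python
    # Sketch: programmatically checking the applier state for errors.
    # The state document here mirrors the "state" object shown above.

    def applier_has_error(state):
        """True if the applier terminated itself due to a problem."""
        return state["lastError"].get("errorNum", 0) != 0

    healthy = {"lastError": {"errorNum": 0}}
    failed = {
        "lastError": {
            "errorNum": 1400,  # example value, not taken from the text above
            "errorMessage": "could not connect to master",
        }
    }
    ```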

    Below is an example of the state after the replication applier terminated itself due to (repeated) connection problems: