tikv-ctl is integrated in the tiup command. Execute the following command to call the tikv-ctl tool:
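
  1. tiup ctl tikv

Depending on your tiup version, you might also need to specify the cluster version, for example tiup ctl:&lt;cluster-version&gt; tikv.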

You can add corresponding parameters and subcommands after tiup ctl tikv.

tikv-ctl provides two operation modes:

  • Remote mode: Use the --host option to specify the service address of TiKV as the argument

    For this mode, if SSL is enabled in TiKV, tikv-ctl also needs to specify the related certificate file. For example:

    1. tikv-ctl --ca-path ca.pem --cert-path client.pem --key-path client-key.pem --host 127.0.0.1:20160 <subcommands>

    However, sometimes tikv-ctl communicates with PD instead of TiKV. In this case, you need to use the --pd option instead of --host. Here is an example:

    1. tikv-ctl --pd 127.0.0.1:2379 compact-cluster
    2. store:"127.0.0.1:20160" compact db:KV cf:default range:([], []) success!
  • Local mode: Use the --data-dir option to specify the local TiKV data directory path. In this mode, you need to stop the running TiKV instance.
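
    For example, after stopping the TiKV instance, a local-mode invocation looks like the following (the data directory path is a placeholder; the size subcommand is described later in this document):

    1. tikv-ctl --data-dir /path/to/tikv/db size -r 2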

Unless otherwise noted, all commands support both the remote mode and the local mode.

Additionally, tikv-ctl has two simple commands --to-hex and --to-escaped, which are used to make simple changes to the form of the key.

Generally, use the escaped form of the key. For example:

  1. tikv-ctl --to-escaped 0xaaff
  2. \252\377
  3. tikv-ctl --to-hex "\252\377"
  4. AAFF

When you specify the escaped form of the key on the command line, you must enclose it in double quotes. Otherwise, Bash eats the backslashes and a wrong result is returned.

This section describes the subcommands that tikv-ctl supports in detail. Some subcommands support a lot of options. For all details, run tikv-ctl --help <subcommand>.

View information of the Raft state machine

Use the raft subcommand to view the status of the Raft state machine at a specific moment. The status information includes two parts: three structs (RegionLocalState, RaftLocalState, and RegionApplyState) and the corresponding Entries of a certain piece of log.

Use the region and log subcommands to obtain the above information respectively. Both subcommands support the remote mode and the local mode. Their usage and output are as follows:

  1. tikv-ctl --host 127.0.0.1:20160 raft region -r 2
  2. region id: 2
  3. region state key: \001\003\000\000\000\000\000\000\000\002\001
  4. region state: Some(region {id: 2 region_epoch {conf_ver: 3 version: 1} peers {id: 3 store_id: 1} peers {id: 5 store_id: 4} peers {id: 7 store_id: 6}})
  5. raft state key: \001\002\000\000\000\000\000\000\000\002\002
  6. raft state: Some(hard_state {term: 307 vote: 5 commit: 314617} last_index: 314617)
  7. apply state key: \001\002\000\000\000\000\000\000\000\002\003
  8. apply state: Some(applied_index: 314617 truncated_state {index: 313474 term: 151})
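
The log subcommand is not shown above. A sketch of its usage, assuming it accepts the Region ID and the log index via the -r and -i options (run tikv-ctl raft log --help to confirm the exact flags):

  1. tikv-ctl --host 127.0.0.1:20160 raft log -r 2 -i 314617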

View the Region size

Use the size command to view the Region size:

  1. tikv-ctl --data-dir /path/to/tikv/db size -r 2
  2. region id: 2
  3. cf default region size: 799.703 MB
  4. cf write region size: 41.250 MB
  5. cf lock region size: 27616

Scan to view MVCC of a specific range

The --from and --to options of the scan command accept the escaped form of raw keys, and the --show-cf flag specifies the column families that you need to view.

  1. tikv-ctl --data-dir /path/to/tikv/db scan --from 'zm' --limit 2 --show-cf lock,default,write
  2. write cf value: start_ts: 399650102814441473 commit_ts: 399650102814441475 short_value: "20"
  3. key: zmDB:29\000\000\377\000\374\000\000\000\000\000\000\377\000H\000\000\000\000\000\000\371
  4. write cf value: start_ts: 399650105239273474 commit_ts: 399650105239273475 short_value: "\000\000\000\000\000\000\000\002"
  5. write cf value: start_ts: 399650105199951882 commit_ts: 399650105213059076 short_value: "\000\000\000\000\000\000\000\001"

View MVCC of a given key

Similar to the scan command, the mvcc command can be used to view MVCC of a given key.

  1. tikv-ctl --data-dir /path/to/tikv/db mvcc -k "zmDB:29\000\000\377\000\374\000\000\000\000\000\000\377\000H\000\000\000\000\000\000\371" --show-cf=lock,write,default
  2. key: zmDB:29\000\000\377\000\374\000\000\000\000\000\000\377\000H\000\000\000\000\000\000\371
  3. write cf value: start_ts: 399650105239273474 commit_ts: 399650105239273475 short_value: "\000\000\000\000\000\000\000\002"
  4. write cf value: start_ts: 399650105199951882 commit_ts: 399650105213059076 short_value: "\000\000\000\000\000\000\000\001"

In this command, the key is also the escaped form of raw key.

Scan raw keys

Use the --from and --to options to specify the range to scan (unbounded by default). Use --limit to limit the maximum number of keys to print (30 by default). Use --cf to specify which CF to scan (can be default, write, or lock).
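
For reference, output like the following can be produced by an invocation along these lines (the raw-scan subcommand name and the specific flag values are assumptions; adjust them to your data):

  1. tikv-ctl --data-dir /path/to/tikv/db raw-scan --from 'zt' --limit 2 --cf default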

  1. key: "zt\200\000\000\000\000\000\000\377\005_r\200\000\000\000\000\377\000\000\001\000\000\000\000\000\372\372b2,^\033\377\364", value: "\010\002\002\002%\010\004\002\010root\010\006\002\000\010\010\t\002\010\n\t\002\010\014\t\002\010\016\t\002\010\020\t\002\010\022\t\002\010\024\t\002\010\026\t\002\010\030\t\002\010\032\t\002\010\034\t\002\010\036\t\002\010 \t\002\010\"\t\002\010s\t\002\010&\t\002\010(\t\002\010*\t\002\010,\t\002\010.\t\002\0100\t\002\0102\t\002\0104\t\002"
  2. key: "zt\200\000\000\000\000\000\000\377\025_r\200\000\000\000\000\377\000\000\023\000\000\000\000\000\372\372b2,^\033\377\364", value: "\010\002\002&slow_query_log_file\010\004\002P/usr/local/mysql/data/localhost-slow.log"
  3. Total scanned keys: 2

Print a specific key value

To print the value of a key, use the print command.
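
A sketch of its usage, assuming print accepts the column family via -c and the escaped key via -k (run tikv-ctl print --help to confirm the exact flags):

  1. tikv-ctl --data-dir /path/to/tikv/db print -c default -k "zmDB:29\000\000\377\000\374\000\000\000\000\000\000\377\000H\000\000\000\000\000\000\371"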

Print some properties about Region

In order to record Region state details, TiKV writes some statistics into the SST files of Regions. To view these properties, run tikv-ctl with the region-properties sub-command:

  1. tikv-ctl --host localhost:20160 region-properties -r 2
  2. num_files: 0
  3. num_entries: 0
  4. num_deletes: 0
  5. mvcc.min_ts: 18446744073709551615
  6. mvcc.max_ts: 0
  7. mvcc.num_rows: 0
  8. mvcc.num_puts: 0
  9. mvcc.num_versions: 0
  10. mvcc.max_row_versions: 0
  11. middle_key_by_approximate_size:

The properties can be used to check whether the Region is healthy or not. If not, you can use them to fix the Region. For example, you can split the Region manually by middle_key_by_approximate_size.

Compact data of each TiKV manually

Use the compact command to manually compact data of each TiKV. If you specify the --from and --to options, their arguments are also in the escaped form of raw keys.

  • Use the --host option to specify the TiKV that needs to perform compaction.
  • Use the -d option to specify the RocksDB that performs compaction. The optional values are kv and raft.
  • The --threads option allows you to specify the concurrency of the TiKV compaction, and its default value is 8. Generally, a higher concurrency brings a faster compaction speed, but it might also affect the service. You need to choose an appropriate concurrency based on your scenario.
  • Use the --bottommost option to include or exclude the bottommost files when TiKV performs compaction. The value options are default, skip, and force. The default value is default.
    • default means that the bottommost files are included only when the Compaction Filter feature is enabled.
    • skip means that the bottommost files are excluded when TiKV performs compaction.
    • force means that the bottommost files are always included when TiKV performs compaction.
  1. tikv-ctl --data-dir /path/to/tikv/db compact -d kv

Compact data of the whole TiKV cluster manually

Use the compact-cluster command to manually compact data of the whole TiKV cluster. The flags of this command have the same meanings and usage as those of the compact command.
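
For example, a cluster-wide compaction of the kv RocksDB that forces the bottommost files to be included might look like this (the flags are taken from the compact command described above):

  1. tikv-ctl --pd 127.0.0.1:2379 compact-cluster -d kv --bottommost force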

Set a Region to tombstone

The tombstone command is usually used in circumstances where sync-log is not enabled and some data written to the Raft state machine is lost because of a power failure.

In a TiKV instance, you can use this command to set the status of some Regions to tombstone. Then when you restart the instance, those Regions are skipped to avoid the restart failure caused by damaged Raft state machines of those Regions. Those Regions need to have enough healthy replicas in other TiKV instances to be able to continue the reads and writes through the Raft mechanism.

In general cases, you can remove the corresponding Peer of this Region using the remove-peer command:
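
For example, with PD Control (pd-ctl), a sketch of such an operation looks like the following (the PD address, Region ID, and store ID are placeholders):

  1. pd-ctl -u http://127.0.0.1:2379 operator add remove-peer <region_id> <store_id>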

Then use the tikv-ctl tool to set a Region to tombstone on the corresponding TiKV instance to skip the health check for this Region at startup:

  1. tikv-ctl --data-dir /path/to/tikv/db tombstone -p 127.0.0.1:2379 -r <region_id>

However, in some cases, you cannot easily remove this Peer of this Region from PD, so you can specify the --force option in tikv-ctl to forcibly set the Peer to tombstone:

  1. tikv-ctl --data-dir /path/to/tikv/db tombstone -p 127.0.0.1:2379 -r <region_id>,<region_id> --force
  • The tombstone command only supports the local mode.
  • The argument of the -p option specifies the PD endpoints without the http prefix. The PD endpoints are specified so that tikv-ctl can query whether PD can safely switch the Region to Tombstone.

Send a consistency-check request to TiKV

Use the consistency-check command to execute a consistency check among replicas in the corresponding Raft of a specific Region. If the check fails, TiKV itself panics. If the TiKV instance specified by --host is not the Region leader, an error is reported.

  1. tikv-ctl --host 127.0.0.1:20160 consistency-check -r 2
  2. success!
  3. tikv-ctl --host 127.0.0.1:20161 consistency-check -r 2
  4. DebugClient::check_region_consistency: RpcFailure(RpcStatus { status: Unknown, details: Some("StringError(\"Leader is on store 1\")") })
  • This command only supports the remote mode.
  • Even if this command returns success!, you need to check whether TiKV panics. This is because this command is only a proposal that requests a consistency check for the leader, and you cannot know from the client whether the whole check process is successful or not.

Dump snapshot meta

This sub-command is used to parse a snapshot meta file at a given path and print the result.
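
A sketch of its usage, assuming the sub-command is invoked as dump-snap-meta (run tikv-ctl --help to confirm the exact name):

  1. tikv-ctl dump-snap-meta --path <path-to-snapshot-meta-file>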

Print the Regions where the Raft state machine reports an error

To avoid checking the Regions while TiKV is started, you can use the tombstone command to set the Regions where the Raft state machine reports an error to Tombstone. Before running this command, use the bad-regions command to find out the Regions with errors, so as to combine multiple tools for automated processing.

  1. tikv-ctl --data-dir /path/to/tikv/db bad-regions
  2. all regions are healthy

If the command is successfully executed, it prints the above information. If the command fails, it prints a list of bad Regions. Currently, the errors that can be detected include mismatches among the last index, commit index, and apply index, and the loss of Raft logs. Other conditions, such as damaged snapshot files, still need further support.

View Region properties

  • To view online the properties of Region 2 on the TiKV instance that is running on 127.0.0.1:20160:

    1. tikv-ctl --host 127.0.0.1:20160 region-properties -r 2
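
  • To view the same properties locally with the --data-dir option (a sketch; the data directory path is a placeholder), stop the TiKV instance and run:

    1. tikv-ctl --data-dir /path/to/tikv/db region-properties -r 2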

Modify the TiKV configuration dynamically

You can use the modify-tikv-config command to dynamically modify the configuration arguments. Currently, the TiKV configuration items that can be dynamically modified and the details of the modification are consistent with modifying configurations using SQL statements. For details, see Modify TiKV configuration online.

  • -n is used to specify the full name of the configuration item. For the list of configuration items that can be modified online, see Modify TiKV configuration online.
  • -v is used to specify the configuration value.
  1. tikv-ctl --host ip:port modify-tikv-config -n storage.block-cache.capacity -v 10GB

When shared block cache is disabled, set block cache size for the write CF:

  1. tikv-ctl --host ip:port modify-tikv-config -n rocksdb.writecf.block-cache-size -v 256MB

Disable automatic compactions for the default CF of RaftDB:

  1. tikv-ctl --host ip:port modify-tikv-config -n raftdb.defaultcf.disable-auto-compactions -v true

Disable sync-log in Raftstore:

  1. tikv-ctl --host ip:port modify-tikv-config -n raftstore.sync-log -v false

When the compaction rate limit causes accumulated compaction pending bytes, disable the rate-limiter-auto-tuned mode or set a higher limit for the compaction flow:

  1. tikv-ctl --host ip:port modify-tikv-config -n rocksdb.rate-bytes-per-sec -v "1GB"

Force Region to recover the service from failure of multiple replicas

Use the unsafe-recover remove-fail-stores command to remove the failed machines from the peer list of Regions. Then after you restart TiKV, these Regions can continue to provide services using the other healthy replicas. This command is usually used in circumstances where multiple TiKV stores are damaged or deleted.

The -s option accepts multiple store_id values separated by commas, and the -r flag specifies the involved Regions. If -r is not specified, the peers of all Regions located on these stores are removed by default.

  1. tikv-ctl --data-dir /path/to/tikv/db unsafe-recover remove-fail-stores -s 3 -r 1001,1002
  • This command only supports the local mode. It prints success! when successfully run.
  • You must run this command for all stores where specified Regions’ peers are located.
  • If the --all-regions option is used, usually you need to run this command on all the remaining healthy stores in the cluster. You need to ensure that the healthy stores stop providing services before recovering the damaged stores. Otherwise, the inconsistent peer lists in Region replicas will cause errors when you execute split-region or remove-peer. This further causes inconsistency between other metadata, and finally, the Regions will become unavailable.

Recover from MVCC data corruption

Use the recover-mvcc command in circumstances where TiKV cannot run normally because of MVCC data corruption. It cross-checks 3 CFs (“default”, “write”, “lock”) to recover from various kinds of inconsistency.

  • Use the -r option to specify involved Regions by region_id.
  • Use the -p option to specify PD endpoints.
  1. tikv-ctl --data-dir /path/to/tikv/db recover-mvcc -r 1001,1002 -p 127.0.0.1:2379
  • This command only supports the local mode. It prints success! when successfully run.
  • The argument of the -p option specifies the PD endpoints without the http prefix. The PD endpoints are specified to query whether the given region_id is valid or not.
  • You need to run this command for all stores where specified Regions’ peers are located.

Ldb Command

The ldb command line tool offers multiple data access and database administration commands. Some examples are listed below. For more information, refer to the help message displayed when running tikv-ctl ldb or check the documents from RocksDB.

Examples of data access sequence:

To dump an existing RocksDB in HEX:

  1. tikv-ctl ldb --hex --data-dir=/tmp/db dump

To dump the manifest of an existing RocksDB:

  1. tikv-ctl ldb --hex manifest_dump --path=/tmp/db/MANIFEST-000001

You can specify the column family that your query is against using the --column_family=<string> command-line option.

--try_load_options loads the database options file to open the database. It is recommended to always keep this option on when the database is running. If you open the database with the default options, the LSM-tree might be messed up and cannot be recovered automatically.
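
For example, a dump that loads the options file might look like the following (the flags are combined from the examples above):

  1. tikv-ctl ldb --hex --try_load_options --data-dir=/tmp/db dump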

Dump encryption metadata

Use the encryption-meta subcommand to dump encryption metadata. The subcommand can dump two types of metadata: encryption info for data files, and the list of data encryption keys used.

To dump encryption info for data files, use the encryption-meta dump-file subcommand. You need to create a TiKV config file to specify data-dir for the TiKV deployment:

  1. # conf.toml
  2. [storage]
  3. data-dir = "/path/to/tikv/data"

The --path option can be used to specify an absolute or relative path to the data file of interest. The command might give empty output if the data file is not encrypted. If --path is not provided, encryption info for all data files will be printed.

  1. tikv-ctl --config=./conf.toml encryption-meta dump-file --path=/path/to/tikv/data/db/CURRENT
  2. /path/to/tikv/data/db/CURRENT: key_id: 9291156302549018620 iv: E3C2FDBF63FC03BFC28F265D7E78283F method: Aes128Ctr

To dump data encryption keys, use the encryption-meta dump-key subcommand. In addition to data-dir, you also need to specify the current master key used in the config file. For how to configure the master key, refer to Encryption at Rest. Also, when using this command, the security.encryption.previous-master-key config is ignored, and the master key rotation is not triggered.

  1. # conf.toml
  2. [storage]
  3. data-dir = "/path/to/tikv/data"
  4. [security.encryption.master-key]
  5. type = "kms"
  6. key-id = "0987dcba-09fe-87dc-65ba-ab0987654321"
  7. region = "us-west-2"

Note that if the master key is an AWS KMS key, tikv-ctl needs to have access to the KMS key. Access to an AWS KMS key can be granted to tikv-ctl via environment variables, the AWS default config file, or an IAM role, whichever is suitable. Refer to the AWS documentation for usage.
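
For example, one way to grant access is through the standard AWS environment variables (shown with placeholders; an IAM role or the AWS default config file works as well):

  1. export AWS_ACCESS_KEY_ID=<access-key-id>
  2. export AWS_SECRET_ACCESS_KEY=<secret-access-key>
  3. tikv-ctl --config=./conf.toml encryption-meta dump-key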

The --ids option can be used to specify a comma-separated list of data encryption key IDs to print. If --ids is not provided, all data encryption keys are printed, along with the current key id, which is the id of the latest active data encryption key.

When using the command, you will see a prompt warning that the action will expose sensitive information. Type “I consent” to continue.

  1. ./tikv-ctl --config=./conf.toml encryption-meta dump-key
  2. This action will expose encryption key(s) as plaintext. Do not output the result in file on disk.
  3. Type "I consent" to continue, anything else to exit: I consent
  4. current key id: 9291156302549018620
  5. 9291156302549018620: key: 8B6B6B8F83D36BE2467ED55D72AE808B method: Aes128Ctr creation_time: 1592938357
  1. ./tikv-ctl --config=./conf.toml encryption-meta dump-key --ids=9291156302549018620
  2. This action will expose encryption key(s) as plaintext. Do not output the result in file on disk.
  3. 9291156302549018620: key: 8B6B6B8F83D36BE2467ED55D72AE808B method: Aes128Ctr creation_time: 1592938357