The overall architecture of TiKV is as follows:
*Figure: The architecture of TiKV*
- Placement Driver (PD): works as the manager of the TiKV cluster
TiKV clients interact with PD and TiKV through gRPC.
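For example, a client program can talk to the cluster through PD. The following is a minimal sketch using the Rust `tikv-client` crate and its raw (non-transactional) API; it assumes a PD endpoint at `127.0.0.1:2379` and a Tokio async runtime:

```rust
use tikv_client::RawClient;

#[tokio::main]
async fn main() -> Result<(), tikv_client::Error> {
    // The client is given PD endpoints only; it learns about TiKV nodes
    // and Region placement from PD, then talks to TiKV nodes over gRPC.
    let client = RawClient::new(vec!["127.0.0.1:2379"]).await?;

    // Write and read back a key-value pair through the raw API.
    client.put("hello".to_owned(), "world".to_owned()).await?;
    let value = client.get("hello".to_owned()).await?;
    println!("value = {:?}", value);

    Ok(())
}
```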
TiKV stores data in RocksDB, which is a persistent and fast key-value store. To learn why TiKV selects RocksDB to store data, see .
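As a rough illustration of what RocksDB itself provides, the sketch below uses the `rust-rocksdb` crate to persist and read back a key-value pair. The path and keys are arbitrary examples; this is not how TiKV lays out its data internally:

```rust
use rocksdb::DB;

fn main() -> Result<(), rocksdb::Error> {
    // Open (or create) a RocksDB instance backed by the local disk.
    let db = DB::open_default("/tmp/rocksdb-demo")?;

    // Writes go through a memtable and write-ahead log, then are
    // persisted into SST files on disk.
    db.put(b"region_1_key", b"some value")?;

    // Reads return Option<Vec<u8>>: None if the key does not exist.
    if let Some(value) = db.get(b"region_1_key")? {
        println!("read back: {}", String::from_utf8_lossy(&value));
    }

    Ok(())
}
```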
By implementing the Raft consensus algorithm, TiKV works as follows (a minimal sketch appears after this list):

- TiKV replicates data to multiple machines, ensures data consistency, and tolerates machine failures.
- TiKV becomes a distributed key-value store that automatically recovers lost replicas after machine failures and keeps applications unaffected.
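TiKV's Raft implementation lives in the `raft-rs` crate. The sketch below assumes the post-0.6 `raft` crate API with `slog` logging; it only shows how a single Raft peer is created and ticked, whereas a real Region runs several peers on different machines and applies committed entries from the Ready state:

```rust
use raft::{raw_node::RawNode, storage::MemStorage, Config};
use slog::o;

fn main() -> Result<(), raft::Error> {
    // A peer with id 1 in a one-voter configuration; TiKV Regions
    // normally run with three or more voters for fault tolerance.
    let config = Config {
        id: 1,
        ..Default::default()
    };
    config.validate()?;

    // In-memory log storage for the sketch; TiKV persists Raft logs to disk.
    let storage = MemStorage::new_with_conf_state((vec![1], vec![]));
    let logger = slog::Logger::root(slog::Discard, o!());
    let mut node = RawNode::new(&config, storage, &logger)?;

    // A real driving loop ticks the node periodically, sends its outgoing
    // messages to other peers, and applies committed entries; here we
    // only take a single tick.
    node.tick();
    Ok(())
}
```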
| Concept | Description |
| --- | --- |
| Raft Group | Each replica of a Region is called a Peer. All the Peers of a Region form a Raft group. |
| Leader | Each Raft group has a unique Leader, which is responsible for processing read and write requests from clients. |
As the manager of a TiKV cluster, the Placement Driver (PD) provides the following functions:

- Timestamp oracle: The timestamp oracle plays a significant role in the Percolator transaction model. PD implements a service that hands out timestamps in strictly increasing order, a property required for the correct operation of the snapshot isolation protocol (see the sketch after this list).
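The following is a toy sketch of the strictly-increasing property only. PD's actual timestamp oracle combines a physical clock with a logical counter and persists allocation windows, which this example does not attempt to model:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// A toy timestamp oracle: every call to `get_ts` returns a value strictly
/// larger than any previously returned one, even under concurrent callers.
struct Tso {
    last: AtomicU64,
}

impl Tso {
    fn new() -> Self {
        Tso { last: AtomicU64::new(0) }
    }

    /// fetch_add makes each increment atomic, so timestamps are unique
    /// and strictly increasing across threads.
    fn get_ts(&self) -> u64 {
        self.last.fetch_add(1, Ordering::SeqCst) + 1
    }
}

fn main() {
    let tso = Tso::new();
    let t1 = tso.get_ts();
    let t2 = tso.get_ts();
    assert!(t2 > t1); // snapshot isolation relies on this ordering
    println!("t1 = {t1}, t2 = {t2}");
}
```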