1. Write some data via go-ycsb, and then verify whether the data is replicated in triplicate by default.
  2. Add two more nodes and see how TiKV automatically rebalances replicas to efficiently use all available capacity.

Do not apply these operations in a production environment.

Make sure that you have installed TiUP before you begin.

Step 1: Start a 3-node cluster

  1. Check your TiUP version. Execute the following command:
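    `tiup -v` prints the installed version and build information:

    tiup -v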

  2. Depending on your TiUP version, execute the corresponding command to start a 3-node local cluster.

    If your TiUP version is v1.5.2 or later, execute the following command:

    tiup playground --mode tikv-slim --kv 3

    If your TiUP version is earlier than v1.5.2, execute the following command:

    tiup playground --kv 3

After you execute the command, the output is similar to the following (ports and versions may differ):

  Starting component `playground`: /home/pingcap/.tiup/components/playground/v1.5.0/tiup-playground --mode tikv-slim --kv 3
  Using the version v5.0.2 for version constraint "".
  If you'd like to use a TiDB version other than v5.0.2, cancel and retry with the following arguments:
  Specify version range: tiup playground ^5
  The nightly version: tiup playground nightly
  Playground Bootstrapping...
  Start pd instance
  Start tikv instance
  Start tikv instance
  Start tikv instance
  PD client endpoints: [127.0.0.1:2379]
  To view the Prometheus: http://127.0.0.1:33703
  To view the Grafana: http://127.0.0.1:3000

Step 2: Write data

In another terminal session, you can use go-ycsb to launch a workload.
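go-ycsb is maintained at https://github.com/pingcap/go-ycsb. If you have not fetched the source yet, a minimal sketch (the clone location is your choice):

  git clone https://github.com/pingcap/go-ycsb.git
  cd go-ycsb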

  1. Build the application from the source.

    make
  2. Load a small workload using go-ycsb.

    # By default, this workload will insert 1000 records into TiKV.
    ./bin/go-ycsb load tikv -P workloads/workloada -p tikv.pd="127.0.0.1:2379" -p tikv.type="raw"

Step 3: Verify the replication

To understand replication in TiKV, it is important to review several concepts in the architecture: TiKV splits the key space into Regions, and each Region is replicated as multiple peers (three by default) stored on different nodes, with consistency maintained by the Raft protocol.

  1. Open Grafana at http://127.0.0.1:3000 (as printed in the output of the tiup playground command), and then log in to Grafana using the username admin and the password admin.
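    Besides the Grafana dashboards, you can query PD directly to confirm that each Region has three peers. A sketch against PD's HTTP API at the client endpoint shown earlier (jq is assumed to be installed):

    # List the peers of the first Region; with the default configuration there
    # should be three, each on a different store.
    curl -s http://127.0.0.1:2379/pd/api/v1/regions | jq '.regions[0].peers'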

In the following steps, you launch a larger workload, scale the 3-node local cluster out to a 5-node cluster, and then check whether the load of the TiKV cluster is rebalanced as expected.

Step 4: Write more data

  1. Start a new terminal session and launch a larger workload with go-ycsb. For example, on a machine with 16 virtual cores, you can launch a workload by executing the following command:

    ./bin/go-ycsb load tikv -P workloads/workloada -p tikv.pd="127.0.0.1:2379" -p tikv.type="raw" -p tikv.conncount=16 -p threadcount=16 -p recordcount=1000000


Step 5: Add two more nodes

  1. Start another terminal session and use the tiup playground command to scale out the cluster.
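    Recent TiUP releases provide a scale-out subcommand for playground. The following sketch adds two more TiKV nodes to the running cluster (the flag mirrors the --kv option used at startup):

    tiup playground scale-out --kv 2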

  2. Verify the scaled-out cluster by executing the following command:

    tiup playground display

    The output is as follows:

    Pid     Role  Uptime
    ---     ----  ------
    282731  pd    4h1m23.792495134s
    282752  tikv  4h1m23.77761744s
    282757  tikv  4h1m23.761628915s
    282761  tikv  4h1m23.748199302s
    308242  tikv  9m50.551877838s
    308243  tikv  9m50.537477856s

Step 6: Verify the data rebalancing

Go to the Grafana page as mentioned above. You can find that some Regions are split and rebalanced to the two new nodes.
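You can also watch the rebalancing from the command line. The following sketch queries PD's HTTP API for the per-store Region counts, which should gradually even out across all five stores (jq is assumed to be installed):

  # Show each store's address and Region count; repeat the command to watch the counts converge.
  curl -s http://127.0.0.1:2379/pd/api/v1/stores | jq '.stores[] | {address: .store.address, region_count: .status.region_count}'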

Step 7: Stop and delete the cluster

If you do not need the local TiKV cluster anymore, you can stop and delete it.

  1. To stop the TiKV cluster, return to the terminal session in which you started the TiKV cluster. Press Ctrl + C and wait for the cluster to stop.
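  2. To delete the TiKV cluster after it has stopped, you can remove the data and components that TiUP keeps locally. A sketch using TiUP's clean subcommand (note that --all removes every locally stored instance, so use it only if nothing else in your playground needs to be kept):

    tiup clean --all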