- 1x dedicated local SSD mounted as etcd data directory
- 1.8 GB memory
- 2x CPUs
3 etcd 2.2.0 members, each running on a single machine.
Bootstrap another machine outside of the etcd cluster and run the HTTP benchmark tool with a connection reuse patch to send requests to each etcd cluster member. See the benchmark instructions for the patch and the steps to reproduce our procedures.
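The patched benchmark tool itself is not reproduced here. As a rough illustration only, the following Go sketch sends single-key PUT requests to one member's v2 keys endpoint through a shared `http.Transport`, so TCP connections are reused across requests, and then reports throughput and 90th-percentile latency. The member URL, key name, value size, and request counts are placeholder assumptions, not the settings behind the published numbers.

```go
// benchmark_sketch.go: a minimal sketch of the single-key write benchmark,
// assuming an etcd 2.2 member listening on http://10.0.0.1:2379 (placeholder).
// It is not the patched tool used for the measurements in this document.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"sort"
	"strings"
	"sync"
	"time"
)

const (
	target   = "http://10.0.0.1:2379/v2/keys/bench" // placeholder member URL and key
	clients  = 64                                   // concurrent clients, as in the table below
	requests = 10000                                // total PUT requests to send
	valSize  = 64                                   // value size in bytes
)

func main() {
	// A single shared Transport keeps TCP connections alive across requests,
	// mirroring the "connection reuse" behavior of the patched benchmark tool.
	tr := &http.Transport{MaxIdleConnsPerHost: clients}
	client := &http.Client{Transport: tr}

	value := strings.Repeat("x", valSize)
	jobs := make(chan struct{}, requests)
	for i := 0; i < requests; i++ {
		jobs <- struct{}{}
	}
	close(jobs)

	var mu sync.Mutex
	latencies := make([]time.Duration, 0, requests)

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < clients; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				form := url.Values{"value": {value}}
				begin := time.Now()
				req, _ := http.NewRequest("PUT", target, strings.NewReader(form.Encode()))
				req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
				resp, err := client.Do(req)
				if err != nil {
					continue // count only successful requests
				}
				resp.Body.Close()
				mu.Lock()
				latencies = append(latencies, time.Since(begin))
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	elapsed := time.Since(start)

	if len(latencies) == 0 {
		fmt.Println("no successful requests")
		return
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	p90 := latencies[len(latencies)*90/100]
	fmt.Printf("QPS: %.0f  90th percentile latency: %v\n",
		float64(len(latencies))/elapsed.Seconds(), p90)
}
```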
Single Key Write Performance
key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th percentile latency (ms) | latency stddev (ms) |
---|---|---|---|---|---|---|
64 | 1 | leader only | 55 | 4 | 24.51 | 13.26 |
64 | 64 | leader only | 2139 | 125 | 35.23 | 3.40 |
64 | 256 | leader only | 4581 | 581 | 70.53 | 10.22 |
256 | 1 | leader only | 56 | 4 | 22.37 | 4.33 |
256 | 64 | leader only | 2052 | 151 | 36.83 | 4.20 |
256 | 256 | leader only | 4442 | 560 | 71.59 | 10.03 |
64 | 64 | all servers | 1625 | 85 | 58.51 | 5.14 |
64 | 256 | all servers | 4461 | 298 | 89.47 | 36.48 |
256 | 64 | all servers | 1599 | 94 | 60.11 | 6.43 |
256 | 256 | all servers | 4315 | 193 | 88.98 | 7.01 |
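The average and stddev columns are presumably aggregated over repeated runs or samples; the exact procedure is in the benchmark instructions. As an arithmetic reference only, the sketch below shows how an average and a sample standard deviation like those in the table would be computed; the per-run QPS values are made-up placeholders, not real measurements.

```go
// stats_sketch.go: mean and sample standard deviation over per-run write QPS
// results. The runQPS values are illustrative placeholders only.
package main

import (
	"fmt"
	"math"
)

func main() {
	runQPS := []float64{2100, 2230, 2050, 2180, 2135}

	var sum float64
	for _, q := range runQPS {
		sum += q
	}
	mean := sum / float64(len(runQPS))

	var sq float64
	for _, q := range runQPS {
		sq += (q - mean) * (q - mean)
	}
	stddev := math.Sqrt(sq / float64(len(runQPS)-1))

	fmt.Printf("average write QPS: %.0f, write QPS stddev: %.0f\n", mean, stddev)
}
```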
Because etcd now records metrics for each API call, read QPS shows a minor decrease in most scenarios. This small performance cost was judged a reasonable investment for the breadth of monitoring and debugging information the metrics provide.
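As a rough sketch of where that monitoring information surfaces, the following program scrapes a member's metrics endpoint and prints the etcd-prefixed series. The member address is a placeholder, and the Prometheus-format `/metrics` path is assumed to be the default.

```go
// metrics_sketch.go: scrape the Prometheus-format metrics an etcd member
// exposes. The member address is a placeholder assumption.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://10.0.0.1:2379/metrics") // placeholder member URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only etcd-prefixed series to keep the output focused.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "etcd_") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```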
Write QPS to all members increases by a significant margin because followers now receive the latest commit index sooner and commit proposals more quickly.