- 1x dedicated local SSD mounted as the etcd data directory
- 1.8 GB memory
- 2x CPUs

Three etcd 2.2.0 members, each running on its own machine.

Bootstrap another machine outside of the etcd cluster and run an HTTP benchmark tool, patched for connection reuse, to send requests to each etcd cluster member. See the benchmark instructions for the patch and the steps to reproduce these measurements.
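Connection reuse matters for this kind of benchmark: without HTTP keep-alive, every request pays a fresh TCP handshake, which depresses the measured QPS. The sketch below is not the patched tool from the benchmark; it is a minimal, hypothetical illustration using a local stub server in place of a real etcd member (a real run would target a member's client URL, e.g. `http://<member>:2379/v2/keys/<key>`).

```python
import http.server
import threading
import time
from http.client import HTTPConnection

# Tiny stand-in server (hypothetical; replaces a real etcd member for this demo).
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive, so connections can be reused

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        body = b"{}"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep benchmark output clean

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

def bench(requests, reuse):
    """Send PUTs, either reusing one connection or opening a new one per request."""
    conn = HTTPConnection(host, port)
    start = time.perf_counter()
    for _ in range(requests):
        if not reuse:
            conn.close()
            conn = HTTPConnection(host, port)  # fresh TCP connection each time
        conn.request("PUT", "/v2/keys/foo", body=b"value=bar",
                     headers={"Content-Type": "application/x-www-form-urlencoded"})
        conn.getresponse().read()
    elapsed = time.perf_counter() - start
    conn.close()
    return requests / elapsed  # observed QPS

qps_reused = bench(200, reuse=True)
qps_fresh = bench(200, reuse=False)
print(f"reused connection: {qps_reused:.0f} QPS")
print(f"new connection per request: {qps_fresh:.0f} QPS")
server.shutdown()
```

Even against a loopback stub, the per-request connection setup cost is typically visible in the QPS gap between the two modes.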

## Single Key Write Performance

| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th percentile latency (ms) | latency stddev (ms) |
|-------------------|-------------------|--------------------|-------------------|------------------|--------------------------------------|---------------------|
| 64                | 1                 | leader only        | 55                | 4                | 24.51                                | 13.26               |
| 64                | 64                | leader only        | 2139              | 125              | 35.23                                | 3.40                |
| 64                | 256               | leader only        | 4581              | 581              | 70.53                                | 10.22               |
| 256               | 1                 | leader only        | 56                | 4                | 22.37                                | 4.33                |
| 256               | 64                | leader only        | 2052              | 151              | 36.83                                | 4.20                |
| 256               | 256               | leader only        | 4442              | 560              | 71.59                                | 10.03               |
| 64                | 64                | all servers        | 1625              | 85               | 58.51                                | 5.14                |
| 64                | 256               | all servers        | 4461              | 298              | 89.47                                | 36.48               |
| 256               | 64                | all servers        | 1599              | 94               | 60.11                                | 6.43                |
| 256               | 256               | all servers        | 4315              | 193              | 88.98                                | 7.01                |
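The QPS and latency columns report a mean and a standard deviation across repeated runs. As a small sketch of how such columns are derived (the sample values below are hypothetical, not taken from the benchmark), the aggregation can be reproduced with Python's `statistics` module:

```python
from statistics import mean, stdev

# Hypothetical per-run write QPS samples for one configuration;
# the benchmark table reports the mean and stddev of values like these.
qps_runs = [2139, 2010, 2254, 1987, 2160]

avg_qps = mean(qps_runs)
qps_sd = stdev(qps_runs)  # sample standard deviation
print(f"average write QPS: {avg_qps:.0f}, stddev: {qps_sd:.0f}")
# → average write QPS: 2110, stddev: 111
```

A large stddev relative to the mean (as in the 256-client rows) signals noisy runs, so single-run comparisons at that concurrency level would be unreliable.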
- Because etcd now records metrics for each API call, read QPS performance sees a minor decrease in most scenarios. We consider this small performance cost a reasonable price for the breadth of monitoring and debugging information the metrics provide.

- Write QPS to all members increases by a significant margin, because followers now receive the latest commit index sooner and commit proposals more quickly.