- 1x dedicated local SSD mounted under /var/lib/etcd
- 1.8 GB memory
- 2x CPUs
3 etcd 2.2.0-rc members, each running on a single machine.

We also use a cluster of 3 etcd 2.1.0 alpha-stage members to get baseline performance. etcd's commit head is at c7146bd5, the same commit used in the etcd 2.1 benchmark.
Percentages in parentheses show the change relative to the etcd 2.1.0 baseline.

key size in bytes | number of clients | target etcd server | write QPS | 90th percentile latency (ms) |
---|---|---|---|---|
64 | 1 | leader only | 76 (+22%) | 19.4 (-15%) |
64 | 64 | leader only | 2461 (+45%) | 31.8 (-32%) |
64 | 256 | leader only | 4275 (+1%) | 69.6 (-10%) |
256 | 1 | leader only | 64 (+20%) | 16.7 (-30%) |
256 | 64 | leader only | 2385 (+30%) | 31.5 (-19%) |
256 | 256 | leader only | 4353 (-3%) | 74.0 (+9%) |
64 | 64 | all servers | 2005 (+81%) | 49.8 (-55%) |
64 | 256 | all servers | 4868 (+35%) | 81.5 (-40%) |
256 | 64 | all servers | 1925 (+72%) | 47.7 (-59%) |
256 | 256 | all servers | 4975 (+36%) | 70.3 (-36%) |
Read QPS decreases by 5~8% in most scenarios because etcd now records store metrics for each store operation. These metrics are important for monitoring and debugging, so the cost is acceptable.
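The per-operation bookkeeping behind that overhead can be pictured as a thin wrapper around each store call. The sketch below is a minimal stand-in: etcd itself exposes these as Prometheus metrics, and the `opMetrics` type here is hypothetical.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// opMetrics is a minimal, hypothetical stand-in for etcd's
// per-operation store metrics (counters and latency totals).
type opMetrics struct {
	count   uint64
	totalNs int64
}

// instrument wraps a store operation, recording one sample per call.
// This extra work on every operation is the read-QPS cost noted above.
func (m *opMetrics) instrument(op func()) {
	start := time.Now()
	op()
	atomic.AddUint64(&m.count, 1)
	atomic.AddInt64(&m.totalNs, time.Since(start).Nanoseconds())
}

func main() {
	var gets opMetrics
	for i := 0; i < 3; i++ {
		gets.instrument(func() { /* the store Get would run here */ })
	}
	fmt.Println("get ops recorded:", gets.count) // prints "get ops recorded: 3"
}
```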
Write QPS to all servers increases by 30~80% because followers now receive the latest commit index earlier and can commit proposals faster.
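The speedup comes from followers learning the leader's commit index sooner; the commit rule itself is the standard Raft one: on an append message, a follower advances its commit index to min(leaderCommit, last log index). A simplified sketch of that rule (not etcd's actual raft code):

```go
package main

import "fmt"

// advanceCommit applies the Raft follower rule for an incoming
// AppendEntries message: commit up to min(leaderCommit, lastLogIndex).
// Simplified sketch; etcd's raft package implements the full protocol.
func advanceCommit(commitIndex, leaderCommit, lastLogIndex uint64) uint64 {
	if leaderCommit <= commitIndex {
		return commitIndex // nothing new to commit
	}
	if leaderCommit < lastLogIndex {
		return leaderCommit
	}
	return lastLogIndex
}

func main() {
	// A follower at commit index 3 with 10 log entries hears that the
	// leader has committed through index 7: it commits entries 4..7 now
	// instead of waiting for a later message.
	fmt.Println(advanceCommit(3, 7, 10)) // prints 7
}
```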