To recover from disastrous failure, etcd v3 provides snapshot and restore facilities to recreate the cluster without v3 key data loss. To recover v2 keys, refer to the v2 admin guide.
Recovering a cluster first needs a snapshot of the keyspace from an etcd member. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd data directory. For example, the following command snapshots the keyspace served by `$ENDPOINT` to the file `snapshot.db`:
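The invocation might look like the following; `$ENDPOINT` is assumed to be set to the client URL of a reachable member (e.g. `http://host1:2379`):

```shell
# Take a snapshot of the keyspace served by $ENDPOINT and write it
# to the local file snapshot.db.
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
```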
Snapshot integrity may be optionally verified at restore time. If the snapshot is taken with `etcdctl snapshot save`, it will have an integrity hash that is checked by `etcdctl snapshot restore`. If the snapshot is copied from the data directory, there is no integrity hash and it will only restore by using `--skip-hash-check`.
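For a snapshot copied out of a data directory, the hash check must be disabled explicitly. A sketch, assuming the copied file is named `db`:

```shell
# A db file copied from member/snap/db has no appended integrity hash,
# so restore only succeeds with --skip-hash-check.
ETCDCTL_API=3 etcdctl snapshot restore db --skip-hash-check
```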
A restore initializes a new member of a new cluster, with a fresh cluster configuration using `etcdctl snapshot restore`'s cluster configuration flags, but preserves the contents of the etcd keyspace. Continuing from the previous example, the following creates new etcd data directories (`m1.etcd`, `m2.etcd`, `m3.etcd`) for a three member cluster:
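A sketch of the three restores follows; `host1`, `host2`, and `host3` are placeholder hostnames, and the cluster token is an arbitrary value that must simply match across members:

```shell
# Restore each member from the same snapshot, giving every one the
# full --initial-cluster membership and its own peer URL.
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --name m1 \
  --data-dir m1.etcd \
  --initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://host1:2380
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --name m2 \
  --data-dir m2.etcd \
  --initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://host2:2380
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --name m3 \
  --data-dir m3.etcd \
  --initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://host3:2380
```

Each member is then started on its own host against its restored data directory, for example `etcd --name m1 --data-dir m1.etcd --listen-peer-urls http://host1:2380 --listen-client-urls http://host1:2379 --advertise-client-urls http://host1:2379`.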
Now the restored etcd cluster should be available and serving the keyspace given by the snapshot.
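One way to confirm this, assuming the placeholder client URLs from the sketch above, is to check member health and read back a key:

```shell
# Verify the restored cluster is healthy and serving the snapshot data.
ETCDCTL_API=3 etcdctl --endpoints http://host1:2379,http://host2:2379,http://host3:2379 endpoint health
ETCDCTL_API=3 etcdctl --endpoints http://host1:2379 get some-key
```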
Previously, etcd panicked on membership misconfiguration with wrong URLs; v3.2.15 and later return an error at restore time instead of letting the etcd server panic.