Set up a High Availability etcd cluster with kubeadm

    Kubeadm defaults to running a single-member etcd cluster in a static pod managed by the kubelet on the control plane node. This is not a high availability setup, as the etcd cluster contains only one member and cannot sustain any member becoming unavailable. This task walks through the process of creating a high availability etcd cluster of three members that can be used as an external etcd when using kubeadm to set up a Kubernetes cluster.

    • Three hosts that can talk to each other over ports 2379 and 2380. This document assumes these default ports. However, they are configurable through the kubeadm config file.
    • Each host must have docker, kubelet, and kubeadm installed.
    • Each host should have access to the Kubernetes container image registry (k8s.gcr.io), or be able to list and pull the required etcd image using kubeadm config images list/pull (see the example after this list). This guide sets up the etcd instances as static pods managed by a kubelet.
    • Some infrastructure to copy files between hosts. For example ssh and scp can satisfy this requirement.
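
    For example, assuming kubeadm is already installed on the hosts, the required images (including etcd) can be listed and pre-pulled with:

      # Show the images kubeadm needs for its default Kubernetes version.
      kubeadm config images list
      # Pull them ahead of time.
      kubeadm config images pull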

    The general approach is to generate all certs on one node and only distribute the necessary files to the other nodes.

    Note: kubeadm contains all the necessary cryptographic machinery to generate the certificates described below; no other cryptographic tooling is required for this example.

    1. Configure the kubelet to be a service manager for etcd.

      Note: You must do this on every host where etcd should be running.

      Since etcd was created first, you must override the service priority by creating a new unit file that has higher precedence than the kubeadm-provided kubelet unit file.
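
      A sketch of such an override, assuming the kubelet binary is at /usr/bin/kubelet and your container runtime uses the systemd cgroup driver (adjust both to match your hosts); the file name 20-etcd-service-manager.conf is an example chosen so it takes precedence over the kubeadm-provided 10-kubeadm.conf:

        cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
        [Service]
        ExecStart=
        # Replace "systemd" with the cgroup driver of your container runtime; the kubelet default is "cgroupfs".
        ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
        Restart=always
        EOF

        # Pick up the new unit file and restart the kubelet under it.
        systemctl daemon-reload
        systemctl restart kubelet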

      Check the kubelet status to ensure it is running.

        systemctl status kubelet
    2. Generate one kubeadm configuration file for each host that will have an etcd member running on it using the following script.

        # Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
        export HOST0=10.0.0.6
        export HOST1=10.0.0.7
        export HOST2=10.0.0.8

        # Create temp directories to store files that will end up on other hosts.
        mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

        ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
        NAMES=("infra0" "infra1" "infra2")

        for i in "${!ETCDHOSTS[@]}"; do
        HOST=${ETCDHOSTS[$i]}
        NAME=${NAMES[$i]}
        cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
        apiVersion: "kubeadm.k8s.io/v1beta3"
        kind: ClusterConfiguration
        etcd:
          local:
            serverCertSANs:
            - "${HOST}"
            peerCertSANs:
            - "${HOST}"
            extraArgs:
              initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
              initial-cluster-state: new
              name: ${NAME}
              listen-peer-urls: https://${HOST}:2380
              listen-client-urls: https://${HOST}:2379
              advertise-client-urls: https://${HOST}:2379
        EOF
        done
    3. Generate the certificate authority

      If you already have a CA then the only action is copying the CA’s crt and key file to /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key. After those files have been copied, proceed to the next step, “Create certificates for each member”.

      If you do not already have a CA then run this command on $HOST0 (where you generated the configuration files for kubeadm).
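
      With kubeadm, that command is the etcd CA certificate phase:

        kubeadm init phase certs etcd-ca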

      This creates two files:

      • /etc/kubernetes/pki/etcd/ca.crt
      • /etc/kubernetes/pki/etcd/ca.key
    4. Create certificates for each member

        kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
        kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
        kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
        kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
        cp -R /etc/kubernetes/pki /tmp/${HOST2}/
        # cleanup non-reusable certificates
        find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

        kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
        kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
        kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
        kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
        cp -R /etc/kubernetes/pki /tmp/${HOST1}/
        find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

        kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
        kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
        kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
        kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
        # No need to move the certs because they are for HOST0

        # clean up certs that should not be copied off this host
        find /tmp/${HOST2} -name ca.key -type f -delete
        find /tmp/${HOST1} -name ca.key -type f -delete
    5. Copy certificates and kubeadm configs

        USER=ubuntu
        HOST=${HOST1}
        scp -r /tmp/${HOST}/* ${USER}@${HOST}:
        ssh ${USER}@${HOST}
        USER@HOST $ sudo -Es
        root@HOST $ chown -R root:root pki
        root@HOST $ mv pki /etc/kubernetes/
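
      The same copy must also be made for the third etcd host; for example, repeat the commands above with:

        HOST=${HOST2}
        scp -r /tmp/${HOST}/* ${USER}@${HOST}:

      and then run the same ssh, chown, and mv steps on that host.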
    6. Ensure all expected files exist

      The complete list of required files on $HOST0 (which keeps the CA key, and whose kubeadm configuration file remains under /tmp/${HOST0}) is:
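
        /tmp/${HOST0}
        └── kubeadmcfg.yaml
        ---
        /etc/kubernetes/pki
        ├── apiserver-etcd-client.crt
        ├── apiserver-etcd-client.key
        └── etcd
            ├── ca.crt
            ├── ca.key
            ├── healthcheck-client.crt
            ├── healthcheck-client.key
            ├── peer.crt
            ├── peer.key
            ├── server.crt
            └── server.key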

      On $HOST1:

        $HOME
        └── kubeadmcfg.yaml
        ---
        /etc/kubernetes/pki
        ├── apiserver-etcd-client.crt
        ├── apiserver-etcd-client.key
        └── etcd
            ├── ca.crt
            ├── healthcheck-client.crt
            ├── healthcheck-client.key
            ├── peer.crt
            ├── peer.key
            ├── server.crt
            └── server.key

      On $HOST2:

        $HOME
        └── kubeadmcfg.yaml
        ---
        /etc/kubernetes/pki
        ├── apiserver-etcd-client.crt
        ├── apiserver-etcd-client.key
        └── etcd
            ├── ca.crt
            ├── healthcheck-client.crt
            ├── healthcheck-client.key
            ├── peer.crt
            ├── peer.key
            ├── server.crt
            └── server.key
    7. Create the static pod manifests

      Now that the certificates and configs are in place it’s time to create the manifests. On each host run the kubeadm command to generate a static manifest for etcd.
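
      For example, assuming the file layout from the previous step (the configuration file is still under /tmp on $HOST0 and was copied to $HOME on the other two hosts):

        root@HOST0 $ kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
        root@HOST1 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
        root@HOST2 $ kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml

      This writes the etcd static pod manifest to /etc/kubernetes/manifests/etcd.yaml on each host, and the kubelet configured in step 1 starts etcd from it.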

    8. Optional: Check the cluster health

        docker run --rm -it \
            --net host \
            -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:${ETCD_TAG} etcdctl \
            --cert /etc/kubernetes/pki/etcd/peer.crt \
            --key /etc/kubernetes/pki/etcd/peer.key \
            --cacert /etc/kubernetes/pki/etcd/ca.crt \
            --endpoints https://${HOST0}:2379 endpoint health --cluster
        ...
        https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms
        https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
        https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
      • Set ${ETCD_TAG} to the version tag of your etcd image, for example 3.4.3-0. To see the etcd image and tag that kubeadm uses, run kubeadm config images list --kubernetes-version ${K8S_VERSION}, where ${K8S_VERSION} is the Kubernetes version you are deploying.
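
    Once the cluster reports healthy, it can be consumed as an external etcd when bootstrapping the control plane with kubeadm. A minimal sketch of the relevant ClusterConfiguration fragment, assuming the example host IPs above and that ca.crt, apiserver-etcd-client.crt, and apiserver-etcd-client.key have been copied to the control plane node:

      apiVersion: kubeadm.k8s.io/v1beta3
      kind: ClusterConfiguration
      etcd:
        external:
          endpoints:
            - https://10.0.0.6:2379
            - https://10.0.0.7:2379
            - https://10.0.0.8:2379
          caFile: /etc/kubernetes/pki/etcd/ca.crt
          certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
          keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key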