Replacing an unhealthy etcd member

    This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.

    • Take an etcd backup prior to replacing an unhealthy etcd member.

    You can identify if your cluster has an unhealthy etcd member.

    Prerequisites

    • Access to the cluster as a user with the cluster-admin role.

    Procedure

    1. Check the status of the EtcdMembersAvailable status condition.
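
      A jsonpath query against the etcd cluster Operator resource (the etcd resource named cluster, which is also patched later in this procedure) prints the condition message. The exact query shown here is an illustrative sketch:

      1. $ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}{end}'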

    2. Review the output:

      1. 2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy

      This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.

    The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:

    • The machine is not running or the node is not ready

    • The etcd pod is crashlooping

    Use the following procedure to determine which of these states your etcd member is in, so that you know which procedure to follow to replace the unhealthy etcd member.

    If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
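
    While you wait, you can monitor the etcd cluster Operator from the command line. This is a standard Operator status query, shown as a convenience:

      1. $ oc get clusteroperator etcd

    The AVAILABLE, PROGRESSING, and DEGRADED columns indicate whether the Operator has finished syncing.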

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    • You have identified an unhealthy etcd member.

    Procedure

    1. Determine if the machine is not running:

      1. $ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}{end}' | grep -v running

      Example output

      1. ip-10-0-131-183.ec2.internal stopped (1)
      (1) This output lists the node and the status of the node’s machine. If the status is anything other than running, then the machine is not running.

      If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.

    2. Determine if the node is not ready.

      If either of the following scenarios is true, then the node is not ready.

      • If the machine is running, then check whether the node is unreachable:

        1. $ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}{end}{end}' | grep unreachable

        Example output

        1. ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable (1)
        (1) If the node is listed with an unreachable taint, then the node is not ready.
      • If the node is still reachable, then check whether the node is listed as NotReady:

        1. $ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"

        Example output

        1. ip-10-0-131-183.ec2.internal NotReady master 122m v1.21.0 (1)
        (1) If the node is listed as NotReady, then the node is not ready.

      If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.

    3. Determine if the etcd pod is crashlooping.

      If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.

      1. Verify that all control plane nodes (also known as the master nodes) are listed as Ready:

        1. $ oc get nodes -l node-role.kubernetes.io/master

        Example output

        1. NAME STATUS ROLES AGE VERSION
        2. ip-10-0-131-183.ec2.internal Ready master 6h13m v1.21.0
        3. ip-10-0-164-97.ec2.internal Ready master 6h13m v1.21.0
        4. ip-10-0-154-204.ec2.internal Ready master 6h13m v1.21.0
      2. Check whether the status of an etcd pod is either Error or CrashLoopBackOff:

        1. $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd

        Example output

        1. etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m (1)
        2. etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m
        3. etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m
        (1) Because the status of this pod is Error, the etcd pod is crashlooping.

      If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.

    Depending on the state of your unhealthy etcd member, use one of the following procedures:

    Replacing an unhealthy etcd member whose machine is not running or whose node is not ready

    This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.

    Prerequisites

    • You have identified the unhealthy etcd member.

    • You have verified that either the machine is not running or the node is not ready.

    • You have access to the cluster as a user with the cluster-admin role.

    • You have taken an etcd backup.

    Procedure

    1. Remove the unhealthy member.

      1. Choose a pod that is not on the affected node:

        1. $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd

        Example output

        1. etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m
        2. etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m
        3. etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m
      2. Connect to the running etcd container, passing in the name of a pod that is not on the affected node:

        In a terminal that has access to the cluster as a cluster-admin user, run the following command:

        1. $ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
      3. View the member list:

        1. sh-4.2# etcdctl member list -w table

        Example output

        1. +------------------+---------+------------------------------+---------------------------+---------------------------+
        2. | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
        3. +------------------+---------+------------------------------+---------------------------+---------------------------+
        4. | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
        5. | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
        6. | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
        7. +------------------+---------+------------------------------+---------------------------+---------------------------+

        Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
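
        As an optional convenience, you can capture these values in shell variables in the same etcd container session. This sketch assumes the node name shown in the example output above and parses the default (non-table) output of etcdctl member list:

        1. sh-4.2# UNHEALTHY_NODE=ip-10-0-131-183.ec2.internal
        2. sh-4.2# UNHEALTHY_ID=$(etcdctl member list | grep "$UNHEALTHY_NODE" | cut -d',' -f1)
        3. sh-4.2# echo "$UNHEALTHY_ID"

        Example output

        1. 6fc1e7c9db35841d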

      4. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

        1. sh-4.2# etcdctl member remove 6fc1e7c9db35841d

        Example output

        1. Member 6fc1e7c9db35841d removed from cluster baa565c8919b060e
      5. View the member list again and verify that the member was removed:

        1. sh-4.2# etcdctl member list -w table

        Example output

        1. +------------------+---------+------------------------------+---------------------------+---------------------------+
        2. | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
        3. +------------------+---------+------------------------------+---------------------------+---------------------------+
        4. | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
        5. | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
        6. +------------------+---------+------------------------------+---------------------------+---------------------------+

        You can now exit the node shell.

    2. Remove the old secrets for the unhealthy etcd member that was removed.

      1. List the secrets for the unhealthy etcd member that was removed.

        1. $ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal (1)
        (1) Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.

        There is a peer, serving, and metrics secret as shown in the following output:

        Example output

        1. etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
        2. etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
        3. etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
      2. Delete the secrets for the unhealthy etcd member that was removed.

        1. Delete the peer secret:

          1. $ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal

        2. Delete the serving secret:

          1. $ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal

        3. Delete the metrics secret:

          1. $ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal

    3. Delete and recreate the control plane machine (also known as the master machine). After this machine is recreated, a new revision is forced and etcd scales up automatically.

        If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master using the same method that was used to originally create it.

        1. Obtain the machine for the unhealthy member.

          In a terminal that has access to the cluster as a cluster-admin user, run the following command:

          1. $ oc get machines -n openshift-machine-api -o wide

          Example output

          1. NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
          2. clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped (1)
          3. clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
          4. clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
          5. clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
          6. clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
          (1) This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
        2. Save the machine configuration to a file on your file system:

          1. $ oc get machine clustername-8qw5l-master-0 \ (1)
          2. -n openshift-machine-api \
          3. -o yaml \
          4. > new-master-machine.yaml
          (1) Specify the name of the control plane machine for the unhealthy node.
        3. Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.

          1. Remove the entire status section:

            status:
              addresses:
              - address: 10.0.131.183
                type: InternalIP
              - address: ip-10-0-131-183.ec2.internal
                type: InternalDNS
              - address: ip-10-0-131-183.ec2.internal
                type: Hostname
              lastUpdated: "2020-04-20T17:44:29Z"
              nodeRef:
                kind: Node
                name: ip-10-0-131-183.ec2.internal
                uid: acca4411-af0d-4387-b73e-52b2484295ad
              phase: Running
              providerStatus:
                apiVersion: awsproviderconfig.openshift.io/v1beta1
                conditions:
                - lastProbeTime: "2020-04-20T16:53:50Z"
                  lastTransitionTime: "2020-04-20T16:53:50Z"
                  message: machine successfully created
                  reason: MachineCreationSucceeded
                  status: "True"
                  type: MachineCreation
                instanceId: i-0fdb85790d76d0c3f
                instanceState: stopped
                kind: AWSMachineProviderStatus
          2. Change the metadata.name field to a new name.

            It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.

            For example:

            apiVersion: machine.openshift.io/v1beta1
            kind: Machine
            metadata:
              ...
              name: clustername-8qw5l-master-3
              ...
          3. Update the metadata.selfLink field to use the new machine name from the previous step.

            apiVersion: machine.openshift.io/v1beta1
            kind: Machine
            metadata:
              ...
              selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machines/clustername-8qw5l-master-3
              ...
          4. Remove the spec.providerID field:

            1. providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
          5. Remove the metadata.annotations and metadata.generation fields:

            annotations:
              machine.openshift.io/instance-state: running
            ...
            generation: 2
          6. Remove the metadata.resourceVersion and metadata.uid fields:

            1. resourceVersion: "13291"
            2. uid: a282eb70-40a2-4e89-8009-d05dd420d31a
        4. Delete the machine of the unhealthy member:

          1. $ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
          (1) Specify the name of the control plane machine for the unhealthy node.
        5. Verify that the machine was deleted:

          1. $ oc get machines -n openshift-machine-api -o wide

          Example output

          1. NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
          2. clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
          3. clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
          4. clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
          5. clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
        6. Create the new machine using the new-master-machine.yaml file:
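
            Applying the saved and edited manifest is the usual way to do this with the Machine API; the file name matches the one saved earlier in this procedure:

            1. $ oc apply -f new-master-machine.yaml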

          1. Verify that the new machine has been created:

            1. $ oc get machines -n openshift-machine-api -o wide

            Example output

            1. NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
            2. clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
            3. clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
            4. clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running (1)
            5. clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
            6. clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
            7. clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
            (1) The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.

            It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
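
            To follow the progress, you can periodically re-run the machine listing above, or watch it continuously with the standard -w flag:

            1. $ oc get machines -n openshift-machine-api -o wide -w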

        Verification

        1. Verify that all etcd pods are running properly.

          In a terminal that has access to the cluster as a cluster-admin user, run the following command:

          1. $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd

          Example output

          1. etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s
          2. etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m
          3. etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m

          If the output from the previous command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

          1. $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
          (1) The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
        2. Verify that there are exactly three etcd members.

          1. Connect to the running etcd container, passing in the name of a pod that was not on the affected node:

            In a terminal that has access to the cluster as a cluster-admin user, run the following command:

            1. $ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
            1. sh-4.2# etcdctl member list -w table

            The output should list exactly three started members, including the new member on the replacement node. If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.

            Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.

        Replacing an unhealthy etcd member whose etcd pod is crashlooping

        This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.

        Prerequisites

        • You have identified the unhealthy etcd member.

        • You have verified that the etcd pod is crashlooping.

        • You have access to the cluster as a user with the cluster-admin role.

        • You have taken an etcd backup.

          It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
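
          If you still need to take the backup, it is typically created by running the cluster backup script from a debug shell on a healthy control plane node. The node name and target directory below are examples only:

          1. $ oc debug node/ip-10-0-154-204.ec2.internal

          1. sh-4.2# chroot /host

          1. sh-4.2# /usr/local/bin/cluster-backup.sh /home/core/assets/backup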

        Procedure

        1. Stop the crashlooping etcd pod.

          1. Debug the node on which the etcd pod is crashlooping.

            In a terminal that has access to the cluster as a cluster-admin user, run the following command:

            1. $ oc debug node/ip-10-0-131-183.ec2.internal (1)
            (1) Replace this with the name of the unhealthy node.
          2. Change your root directory to the host:

            1. sh-4.2# chroot /host
          3. Move the existing etcd pod file out of the kubelet manifest directory:

            1. sh-4.2# mkdir /var/lib/etcd-backup
            1. sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/
          4. Move the etcd data directory to a different location:

            1. sh-4.2# mv /var/lib/etcd/ /tmp
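
            Before you exit, you can optionally confirm that the kubelet has stopped the etcd containers on this node; crictl is available on the host, and the command should eventually return no output:

            1. sh-4.2# crictl ps | grep etcd | grep -vE "operator|guard"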

            You can now exit the node shell.

        2. Remove the unhealthy member.

          1. Choose a pod that is not on the affected node.

            In a terminal that has access to the cluster as a cluster-admin user, run the following command:

            1. $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd

            Example output

            1. etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m
            2. etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m
            3. etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m
          2. Connect to the running etcd container, passing in the name of a pod that is not on the affected node.

            In a terminal that has access to the cluster as a cluster-admin user, run the following command:

            1. $ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
          3. View the member list:

            1. sh-4.2# etcdctl member list -w table

            Example output

            1. +------------------+---------+------------------------------+---------------------------+---------------------------+
            2. | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
            3. +------------------+---------+------------------------------+---------------------------+---------------------------+
            4. | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
            5. | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
            6. | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
            7. +------------------+---------+------------------------------+---------------------------+---------------------------+

            Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.

          4. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

            1. sh-4.2# etcdctl member remove 62bcf33650a7170a

            Example output

            1. Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346
          5. View the member list again and verify that the member was removed:

            1. sh-4.2# etcdctl member list -w table

            Example output

            1. +------------------+---------+------------------------------+---------------------------+---------------------------+
            2. | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
            3. +------------------+---------+------------------------------+---------------------------+---------------------------+
            4. | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
            5. | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
            6. +------------------+---------+------------------------------+---------------------------+---------------------------+

            You can now exit the node shell.

        3. Remove the old secrets for the unhealthy etcd member that was removed.

          1. List the secrets for the unhealthy etcd member that was removed.

            1. $ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal (1)
            (1) Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.

            There is a peer, serving, and metrics secret as shown in the following output:

            Example output

            1. etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
            2. etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
            3. etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
          2. Delete the secrets for the unhealthy etcd member that was removed.

            1. Delete the peer secret:

              1. $ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
            2. Delete the serving secret:

              1. $ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
            3. Delete the metrics secret:

              1. $ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
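
            To confirm that all three secrets were removed, you can rerun the earlier query; it should return no output:

              1. $ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal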
        4. Force etcd redeployment.

          In a terminal that has access to the cluster as a cluster-admin user, run the following command:

          1. $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
          (1) The forceRedeploymentReason value must be unique, which is why a timestamp is appended.

          When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes (also known as the master nodes) have a functioning etcd pod.
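
          You can follow the redeployment by re-running the etcd pod listing used earlier until every etcd pod reports 3/3 Running:

          1. $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd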

        Verification

        • Verify that the new member is available and healthy.

          1. Connect to the running etcd container again.

            In a terminal that has access to the cluster as a cluster-admin user, run the following command: