04. Deploy the etcd Cluster

    This document describes how to deploy a three-node, highly available etcd cluster:

    • download and distribute the etcd binaries;
    • create an x509 certificate for each etcd node, used to encrypt traffic between clients (such as etcdctl) and the etcd cluster, and between etcd members;
    • create the etcd systemd unit files and configure the service parameters;
    • check that the cluster is working;

    The names and IPs of the etcd cluster nodes are:

    • zhangjun-k8s01:172.27.137.240
    • zhangjun-k8s02:172.27.137.239
    • zhangjun-k8s03:172.27.137.238

    Note: unless otherwise stated, all operations in this document are performed on the zhangjun-k8s01 node, from which files are distributed to and commands are run on the other nodes remotely.
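
    The commands below source /opt/k8s/bin/environment.sh for variables such as NODE_NAMES, NODE_IPS, ETCD_ENDPOINTS, ETCD_NODES and ETCD_DATA_DIR. That file is not reproduced in this document; the excerpt below is a minimal sketch of the entries the following steps rely on, assuming the node names and IPs listed above (the ETCD_NODES value and the data/WAL paths are illustrative assumptions):

    # /opt/k8s/bin/environment.sh (illustrative excerpt, not the original file)
    # Node names and their IPs; the two arrays must have the same length and order.
    NODE_NAMES=(zhangjun-k8s01 zhangjun-k8s02 zhangjun-k8s03)
    NODE_IPS=(172.27.137.240 172.27.137.239 172.27.137.238)

    # etcd client endpoints and the initial cluster member list (assumed values).
    export ETCD_ENDPOINTS="https://172.27.137.240:2379,https://172.27.137.239:2379,https://172.27.137.238:2379"
    export ETCD_NODES="zhangjun-k8s01=https://172.27.137.240:2380,zhangjun-k8s02=https://172.27.137.239:2380,zhangjun-k8s03=https://172.27.137.238:2380"

    # etcd data and WAL directories (placeholder paths, ideally on SSD / separate disks).
    export ETCD_DATA_DIR="/data/k8s/etcd/data"
    export ETCD_WAL_DIR="/data/k8s/etcd/wal"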

    Download the latest release from the etcd releases page:
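
    A sketch of this download step, assuming etcd v3.3.13 so that it matches the etcd-v3.3.13-linux-amd64 directory used below (the GitHub URL follows the standard etcd release layout and is an assumption):

    cd /opt/k8s/work
    wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
    tar -xvf etcd-v3.3.13-linux-amd64.tar.gz

    Then distribute the binaries to all cluster nodes: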

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp etcd-v3.3.13-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
        ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
      done

    Create the etcd certificate and private key

    Create a certificate signing request:

    cd /opt/k8s/work
    cat > etcd-csr.json <<EOF
    {
      "CN": "etcd",
      "hosts": [
        "127.0.0.1",
        "172.27.137.240",
        "172.27.137.239",
        "172.27.137.238"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "4Paradigm"
        }
      ]
    }
    EOF
    • The hosts field lists the IPs or domain names of the etcd nodes authorized to use this certificate; the IPs of all three etcd nodes must be included;

    Generate the certificate and private key:

    cd /opt/k8s/work
    cfssl gencert -ca=/opt/k8s/work/ca.pem \
        -ca-key=/opt/k8s/work/ca-key.pem \
        -config=/opt/k8s/work/ca-config.json \
        -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
    ls etcd*pem
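
    To confirm that the hosts listed in the CSR ended up in the certificate's Subject Alternative Names, the generated certificate can be inspected with openssl (an optional check, not part of the original flow):

    # Print the SAN section of the generated etcd server certificate.
    openssl x509 -in /opt/k8s/work/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"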

    Distribute the generated certificate and private key to each etcd node:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
        scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
      done
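
    Create the etcd systemd unit template file

    The per-node unit files are generated from a template, etcd.service.template, which the sed command below references but which is not reproduced in this document. The following is a minimal sketch of what such a template might contain, assuming the ETCD_* variables come from environment.sh and that ##NODE_NAME## and ##NODE_IP## are the per-node placeholders substituted later; the flag notes after the sketch refer to it:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    # Illustrative sketch of etcd.service.template (not the original file).
    # ${ETCD_*} variables expand now, from environment.sh; ##NODE_NAME##/##NODE_IP## stay literal.
    cat > etcd.service.template <<EOF
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    Documentation=https://github.com/etcd-io/etcd

    [Service]
    Type=notify
    WorkingDirectory=${ETCD_DATA_DIR}
    ExecStart=/opt/k8s/bin/etcd \\
      --data-dir=${ETCD_DATA_DIR} \\
      --wal-dir=${ETCD_WAL_DIR} \\
      --name=##NODE_NAME## \\
      --cert-file=/etc/etcd/cert/etcd.pem \\
      --key-file=/etc/etcd/cert/etcd-key.pem \\
      --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
      --client-cert-auth \\
      --peer-cert-file=/etc/etcd/cert/etcd.pem \\
      --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
      --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
      --peer-client-cert-auth \\
      --listen-peer-urls=https://##NODE_IP##:2380 \\
      --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
      --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
      --advertise-client-urls=https://##NODE_IP##:2379 \\
      --initial-cluster-token=etcd-cluster-0 \\
      --initial-cluster=${ETCD_NODES} \\
      --initial-cluster-state=new
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF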
    • WorkingDirectory and --data-dir: the working directory and data directory, set to ${ETCD_DATA_DIR}; this directory must be created before the service is started;
    • --wal-dir: the WAL directory; for better performance it is usually placed on an SSD or on a different disk from --data-dir;
    • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
    • --cert-file and --key-file: certificate and private key used for communication between the etcd server and its clients;
    • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
    • --peer-cert-file and --peer-key-file: certificate and private key used for communication between etcd peers;
    • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;

    Create and distribute the etcd systemd unit file for each node

    Substitute the variables in the template file to create a systemd unit file for each node:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for (( i=0; i < 3; i++ ))
      do
        sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service
      done
    ls *.service
    • NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and their corresponding IPs;

    Distribute the generated systemd unit files:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
      done
    • The file is renamed to etcd.service on the target node;

    Start the etcd services:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        # Create the data and WAL directories before etcd starts (ETCD_DATA_DIR/ETCD_WAL_DIR assumed from environment.sh).
        ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
        ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
      done
    • The etcd data directory and working directory must be created before the service starts (the mkdir step above);
    • On first start, each etcd process waits for the other nodes to join the cluster, so systemctl start etcd appears to hang for a while; this is normal;

    Check the startup results

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "systemctl status etcd|grep Active"
      done

    Make sure the status is active (running); otherwise check the logs to find the cause:
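
    A quick way to do this on a failing node is to read the systemd journal (standard journalctl usage):

    journalctl -u etcd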

    After the etcd cluster has been deployed, run the following command on any etcd node to verify the service status:

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
            --endpoints=https://${node_ip}:2379 \
            --cacert=/etc/kubernetes/cert/ca.pem \
            --cert=/etc/etcd/cert/etcd.pem \
            --key=/etc/etcd/cert/etcd-key.pem endpoint health
      done

    Expected output:

    >>> 172.27.137.240
    https://172.27.137.240:2379 is healthy: successfully committed proposal: took = 2.756451ms
    >>> 172.27.137.239
    https://172.27.137.239:2379 is healthy: successfully committed proposal: took = 2.025018ms
    >>> 172.27.137.238
    https://172.27.137.238:2379 is healthy: successfully committed proposal: took = 2.335097ms

    When every endpoint reports healthy, the cluster is working correctly.

    Check the current leader

    source /opt/k8s/bin/environment.sh
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
        -w table --cacert=/etc/kubernetes/cert/ca.pem \
        --cert=/etc/etcd/cert/etcd.pem \
        --key=/etc/etcd/cert/etcd-key.pem \
        --endpoints=${ETCD_ENDPOINTS} endpoint status

    +-----------------------------+------------------+---------+---------+-----------+-----------+------------+
    |          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
    +-----------------------------+------------------+---------+---------+-----------+-----------+------------+
    | https://172.27.137.240:2379 | bdda68fa64a34210 |  3.3.13 |   20 kB |     false |         2 |          8 |
    | https://172.27.137.239:2379 | 3405e3220d380204 |  3.3.13 |   20 kB |      true |         2 |          8 |
    +-----------------------------+------------------+---------+---------+-----------+-----------+------------+
    • As the table shows, the current leader is 172.27.137.239.