Deploying the Node Components

    Creating the kubeconfig Files

    Perform the following operations on each node in turn to create its configuration file:

    $ kubectl config set-cluster openeuler-k8s \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.122.154:6443 \
        --kubeconfig=k8snode1.kubeconfig
    $ kubectl config set-credentials system:node:k8snode1 \
        --client-certificate=/etc/kubernetes/pki/k8snode1.pem \
        --client-key=/etc/kubernetes/pki/k8snode1-key.pem \
        --embed-certs=true \
        --kubeconfig=k8snode1.kubeconfig
    $ kubectl config set-context default \
        --cluster=openeuler-k8s \
        --user=system:node:k8snode1 \
        --kubeconfig=k8snode1.kubeconfig
    $ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig

    Note: replace k8snode1 with the name of the corresponding node.
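    The per-node steps above can be scripted. A minimal sketch, assuming three workers named k8snode1 through k8snode3 (adjust to your cluster); it only writes the commands into a review file, which you can then run with sh on the machine that holds the CA and node certificates:

```shell
# Sketch: write the per-node kubeconfig commands into a review file.
# The node names below are assumptions; adapt them before running the file.
out=/tmp/gen-node-kubeconfigs.sh
: > "$out"
for node in k8snode1 k8snode2 k8snode3; do
  cat >> "$out" <<EOF
kubectl config set-cluster openeuler-k8s \\
  --certificate-authority=/etc/kubernetes/pki/ca.pem \\
  --embed-certs=true \\
  --server=https://192.168.122.154:6443 \\
  --kubeconfig=${node}.kubeconfig
kubectl config set-credentials system:node:${node} \\
  --client-certificate=/etc/kubernetes/pki/${node}.pem \\
  --client-key=/etc/kubernetes/pki/${node}-key.pem \\
  --embed-certs=true \\
  --kubeconfig=${node}.kubeconfig
kubectl config set-context default \\
  --cluster=openeuler-k8s \\
  --user=system:node:${node} \\
  --kubeconfig=${node}.kubeconfig
kubectl config use-context default --kubeconfig=${node}.kubeconfig
EOF
done
echo "wrote $out"
```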

    Copying the Certificates

    $ ls /etc/kubernetes/pki/
    ca.pem            k8snode1.kubeconfig  kubelet_config.yaml     kube-proxy-key.pem     kube-proxy.pem
    k8snode1-key.pem  k8snode1.pem         kube_proxy_config.yaml  kube-proxy.kubeconfig
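    Distribution of each node's credentials can be sketched as below; the node list and target directory are assumptions, and the scp commands are only written to a review file so the sketch is safe to run as-is:

```shell
# Sketch: write the certificate-distribution commands into a review file.
# Node names and paths are assumptions; review the file, then run it with sh.
out=/tmp/copy-node-certs.sh
: > "$out"
for node in k8snode1 k8snode2 k8snode3; do
  echo "scp /etc/kubernetes/pki/ca.pem \
/etc/kubernetes/pki/${node}.pem \
/etc/kubernetes/pki/${node}-key.pem \
/etc/kubernetes/pki/${node}.kubeconfig \
root@${node}:/etc/kubernetes/pki/" >> "$out"
done
echo "wrote $out"
```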

    containernetworking-plugins is used first as the CNI plugin for kubelet; plugins such as calico or flannel can be introduced later to strengthen the cluster's networking capabilities.

    # Bridge network configuration
    $ cat /etc/cni/net.d/10-bridge.conf
    {
        "cniVersion": "0.3.1",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cnio0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "gateway": "10.244.0.1"
        },
        "dns": {
            "nameservers": [
                "10.244.0.1"
            ]
        }
    }
    # Loopback network configuration
    $ cat /etc/cni/net.d/99-loopback.conf
    {
        "cniVersion": "0.3.1",
        "name": "lo",
        "type": "loopback"
    }
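    Both files are plain JSON, so a quick syntax check catches copy errors such as a dropped brace. A sketch, using /tmp/cni-check as a stand-in for /etc/cni/net.d and assuming python3 is available for its json.tool module:

```shell
# Sketch: validate the two CNI configs as JSON before installing them.
# /tmp/cni-check stands in for /etc/cni/net.d.
mkdir -p /tmp/cni-check
cat > /tmp/cni-check/10-bridge.conf <<'EOF'
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "gateway": "10.244.0.1"
    },
    "dns": {
        "nameservers": ["10.244.0.1"]
    }
}
EOF
cat > /tmp/cni-check/99-loopback.conf <<'EOF'
{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}
EOF
for f in /tmp/cni-check/*.conf; do
  # json.tool exits non-zero on malformed JSON, catching a lost brace early
  python3 -m json.tool "$f" >/dev/null && echo "ok: $f"
done
```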

    Deploying the kubelet Service

    $ cat /etc/kubernetes/pki/kubelet_config.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
    authorization:
      mode: Webhook
    clusterDNS:
    - 10.32.0.10
    clusterDomain: cluster.local
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
    tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"

    Note: the clusterDNS address is 10.32.0.10; it must be consistent with the service-cluster-ip-range configured earlier.
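    That consistency can be checked mechanically. A sketch, assuming the service range was 10.32.0.0/24 and using a /tmp copy of the relevant lines in place of the real kubelet_config.yaml:

```shell
# Sketch: check that clusterDNS sits inside the assumed service range
# 10.32.0.0/24; the /tmp file stands in for the real kubelet_config.yaml.
cfg=/tmp/kubelet_config_check.yaml
cat > "$cfg" <<'EOF'
clusterDNS:
- 10.32.0.10
EOF
dns=$(awk '/^clusterDNS:/ {getline; sub(/^- */, ""); print; exit}' "$cfg")
case "$dns" in
  10.32.0.*) echo "clusterDNS $dns is inside 10.32.0.0/24" ;;
  *)         echo "clusterDNS $dns is OUTSIDE 10.32.0.0/24" ;;
esac
```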

    Writing the systemd Configuration File

    In kubelet's systemd service file, the following arguments make kubelet use iSulad as its remote container runtime:

    --container-runtime=remote \
    --container-runtime-endpoint=unix:///var/run/isulad.sock \
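    The document only lists the two runtime arguments. For context, a sketch of a complete unit file follows; apart from those two flags, every path, flag, and dependency here is an assumption to adapt, not the document's exact service file. The sketch writes to /tmp for review:

```shell
# Sketch: a minimal kubelet.service wired to iSulad, written to /tmp for review.
# All values other than the two --container-runtime* flags are assumptions.
cat > /tmp/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
After=isulad.service
Requires=isulad.service

[Service]
ExecStart=/usr/bin/kubelet \
    --config=/etc/kubernetes/pki/kubelet_config.yaml \
    --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
    --hostname-override=k8snode1 \
    --container-runtime=remote \
    --container-runtime-endpoint=unix:///var/run/isulad.sock
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
echo "review /tmp/kubelet.service, then install it to /usr/lib/systemd/system/"
```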

    Deploying kube-proxy

    The configuration file that kube-proxy depends on:

    $ cat /etc/kubernetes/pki/kube_proxy_config.yaml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
    clusterCIDR: 10.244.0.0/16
    mode: "iptables"
    $ cat /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/
    After=network.target

    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        --config=/etc/kubernetes/pki/kube_proxy_config.yaml \
        --hostname-override=k8snode1 \
        $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    $ systemctl enable kubelet kube-proxy
    $ systemctl start kubelet kube-proxy
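    One consistency worth verifying before (re)starting the services is that kube-proxy's clusterCIDR matches the pod subnet in the CNI bridge config (10.244.0.0/16 above). A sketch, with a /tmp copy standing in for the real kube_proxy_config.yaml:

```shell
# Sketch: sanity-check the kube-proxy configuration fields.
# The /tmp copy stands in for /etc/kubernetes/pki/kube_proxy_config.yaml.
cfg=/tmp/kube_proxy_config.yaml
cat > "$cfg" <<'EOF'
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
mode: "iptables"
EOF
# clusterCIDR must match the "subnet" in /etc/cni/net.d/10-bridge.conf
grep -q 'clusterCIDR: 10.244.0.0/16' "$cfg" && echo "clusterCIDR matches the CNI subnet"
grep -q 'mode: "iptables"' "$cfg" && echo "proxy mode: iptables"
```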

    Deploy the remaining nodes in the same way.

    Verifying Cluster Status

    Wait a few minutes, then check the node status with the following command:

    $ kubectl get nodes

    Deploying coredns

    Writing the coredns Configuration File

    $ cat /etc/kubernetes/pki/dns/Corefile
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            endpoint https://192.168.122.154:6443
            kubeconfig /etc/kubernetes/pki/admin.kubeconfig default
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }

    Notes:

    • Listen on port 53;
    • Configure the kubernetes plugin: the certificates (via the kubeconfig) and the URL of the kube api.
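    A Corefile is easy to corrupt when copied by hand; one quick structural check is that its braces balance. A sketch using a cut-down scratch copy at /tmp/Corefile-check (the path, and the abbreviated contents, are only for illustration):

```shell
# Sketch: verify that '{' and '}' counts match before starting coredns.
f=/tmp/Corefile-check
cat > "$f" <<'EOF'
.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
}
EOF
open=$(grep -o '{' "$f" | wc -l)
close=$(grep -o '}' "$f" | wc -l)
[ "$open" -eq "$close" ] && echo "braces balanced" || echo "UNBALANCED: $open vs $close"
```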

    Preparing the systemd service File

    $ cat /usr/lib/systemd/system/coredns.service
    [Unit]
    Description=Kubernetes Core DNS server
    Documentation=https://github.com/coredns/coredns
    After=network.target

    [Service]
    ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile"
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target

    $ systemctl enable coredns
    $ systemctl start coredns

    Creating the Service Object for coredns

    $ cat coredns_server.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      clusterIP: 10.32.0.10
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP

    Creating the endpoint Object for coredns
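    Because coredns runs here as a systemd service outside the cluster, the kube-dns Service has no pod selector, so its endpoints must be supplied manually. A manifest sketch; the IP 192.168.122.157 is an assumption taken from the query output below, and should be replaced with the address coredns actually listens on:

```yaml
# Sketch: manual Endpoints for the selector-less kube-dns Service.
# The address below is an assumption; use the host actually running coredns.
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-dns        # must match the Service name
  namespace: kube-system
subsets:
- addresses:
  - ip: 192.168.122.157
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
```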

    # View the service object
    $ kubectl get service -n kube-system kube-dns
    NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    kube-dns   ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP,9153/TCP   51m
    # View the endpoint object
    $ kubectl get endpoints -n kube-system kube-dns
    NAME       ENDPOINTS                                                     AGE
    kube-dns   192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153   52m