Add New Nodes to a Kubernetes Cluster

    This tutorial demonstrates how to add new nodes to a single-node cluster. To scale out a multi-node cluster, the steps are basically the same.

    1. Retrieve your cluster information using KubeKey. The command below creates a configuration file.
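
      A minimal sketch, assuming the cluster was set up with KubeKey and the kk binary is in the current directory (the name of the generated file may differ on your machine):

      ./kk create config --from-cluster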

      You can skip this step if you already have the configuration file on your machine. For example, if you want to add nodes to a multi-node cluster which was set up by KubeKey, you might still have the configuration file if you have not deleted it.

    2. In the configuration file, add the information of your new nodes under hosts and roleGroups. The following example adds two new nodes (node1 and node2); master1 is the existing node.

      ···
      spec:
        hosts:
        - {name: master1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: Qcloud@123}
        - {name: node1, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: Qcloud@123}
        - {name: node2, address: 192.168.0.5, internalAddress: 192.168.0.5, user: root, password: Qcloud@123}
        roleGroups:
          etcd:
          - master1
          master:
          - master1
          worker:
          - node1
          - node2
      ···

      Note

      • For more information about the configuration file, see Edit the configuration file.
      • You are not allowed to modify the host name of existing nodes when adding new nodes.
      • Replace the host name in the example with your own.
    3. Execute the following command to add the new nodes:
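
      A minimal sketch, assuming the kk binary is in the current directory; sample.yaml stands for the configuration file created in step 1, so replace it with your own file name:

      ./kk add nodes -f sample.yaml   # replace sample.yaml with the path to your configuration file

      When the installation finishes, run kubectl get node to confirm that the new nodes are shown and Ready: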

      NAME      STATUS   ROLES           AGE   VERSION
      master1   Ready    master,worker   20d   v1.17.9
      node1     Ready    worker          31h   v1.17.9

    The steps for adding master nodes are generally the same as for adding worker nodes, except that you also need to configure a load balancer for your cluster. You can use any cloud load balancer or hardware load balancer (for example, F5). In addition, Keepalived with HAProxy, or Nginx, is an alternative for creating highly available clusters.

    1. Create a configuration file using KubeKey.
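
      As in the previous section, a minimal sketch assuming the existing cluster was set up with KubeKey and the kk binary is in the current directory:

      ./kk create config --from-cluster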

    2. Open the file. You can see that some fields are pre-populated with values. Add the information of the new nodes and your load balancer to the file. Here is an example for your reference:

      apiVersion: kubekey.kubesphere.io/v1alpha1
      kind: Cluster
      metadata:
        name: sample
      spec:
        hosts:
        # You should complete the ssh information of the hosts
        - {name: master1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: root, password: Testing123}
        - {name: master2, address: 172.16.0.5, internalAddress: 172.16.0.5, user: root, password: Testing123}
        - {name: master3, address: 172.16.0.6, internalAddress: 172.16.0.6, user: root, password: Testing123}
        - {name: worker1, address: 172.16.0.3, internalAddress: 172.16.0.3, user: root, password: Testing123}
        - {name: worker2, address: 172.16.0.4, internalAddress: 172.16.0.4, user: root, password: Testing123}
        - {name: worker3, address: 172.16.0.7, internalAddress: 172.16.0.7, user: root, password: Testing123}
        roleGroups:
          etcd:
          - master1
          - master2
          - master3
          master:
          - master1
          - master2
          worker:
          - worker1
          - worker2
        controlPlaneEndpoint:
          # If loadbalancer is used, 'address' should be set to loadbalancer's ip.
          domain: lb.kubesphere.local
          address: 172.16.0.253
          port: 6443
        kubernetes:
          version: v1.17.9
          imageRepo: kubesphere
          clusterName: cluster.local
          proxyMode: ipvs
          masqueradeAll: false
          maxPods: 110
          nodeCidrMaskSize: 24
        network:
          plugin: calico
          kubePodsCIDR: 10.233.64.0/18
          kubeServiceCIDR: 10.233.0.0/18
        registry:
          privateRegistry: ""
    3. Pay attention to the controlPlaneEndpoint field.

      • The domain name of the load balancer is lb.kubesphere.local by default for internal access. You can change it based on your needs.
      • In most cases, you need to provide the private IP address of the load balancer for the field address. However, different cloud providers may have different configurations for load balancers. For example, if you configure a Server Load Balancer (SLB) on Alibaba Cloud, the platform assigns a public IP address to the SLB, which means you need to specify the public IP address for the field address.
      • The field port indicates the port that the api-server listens on.
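
      For example, the controlPlaneEndpoint section of the configuration above, annotated (the address 172.16.0.253 is a placeholder from the example; use the address of your own load balancer):

      controlPlaneEndpoint:
        # Internal domain name for the control plane; lb.kubesphere.local by default.
        domain: lb.kubesphere.local
        # IP address of the load balancer (a public IP for some cloud SLBs, such as Alibaba Cloud SLB).
        address: 172.16.0.253
        # Port on which the api-server is exposed through the load balancer.
        port: 6443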