Deploy KubeSphere on VMware vSphere

    This tutorial walks you through an example of how to set up Keepalived and HAProxy, and how to implement high availability of master and etcd nodes using these load balancers on VMware vSphere.

    Prerequisites

    • Please make sure that you already know how to install KubeSphere on a multi-node cluster by following the multi-node installation tutorial. This tutorial focuses more on how to configure load balancers.
    • You need a VMware vSphere account to create VMs.
    • Considering data persistence, for a production environment we recommend preparing persistent storage and creating a default StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.

    Architecture

    This tutorial creates 8 virtual machines of CentOS Linux release 7.6.1810 (Core) for the default minimal installation. Every machine has 2 cores, 4 GB of memory, and 40 GB of disk space.

    Note

    You do not need to create a virtual machine for the VIP (Virtual IP) above, so only 8 virtual machines need to be created.

    You can follow the New Virtual Machine wizard to create a virtual machine to place in the VMware Host Client inventory.


    1. In the first step, Select a creation type, you can create a new virtual machine, deploy a virtual machine from an OVF or OVA file, or register an existing virtual machine directly.

    2. When you create a new virtual machine, provide a unique name for the virtual machine to distinguish it from existing virtual machines on the host you are managing.


    3. Select a compute resource and storage (datastore) for the configuration and disk files. You can select the datastore that has the most suitable properties, such as size, speed, and availability, for your virtual machine storage.


    4. Select a guest operating system. The wizard will provide the appropriate defaults for the operating system installation.


    5. Before you finish deploying a new virtual machine, you have the option to set Virtual Hardware and VM Options as needed.


    6. On the Ready to complete page, review the configuration selections that you have made for the virtual machine, and click Finish at the bottom-right corner to continue.


    Install a Load Balancer using Keepalived and HAProxy

    For a production environment, you have to prepare an external load balancer for your multiple-master cluster. If you do not have a load balancer, you can install it using Keepalived and HAProxy. If you are provisioning a development or testing environment by installing a single-master cluster, please skip this section.

    Install Keepalived and HAProxy on the two machines that serve as load balancers: host lb-0 (10.10.71.77) and host lb-1 (10.10.71.66).

    On the servers with IP 10.10.71.77 and 10.10.71.66, configure HAProxy as follows.

    Note

    The configuration of the two lb machines is the same. Make sure the backend server addresses point to your actual master nodes.

    # HAProxy Configure /etc/haproxy/haproxy.cfg
    global
        log 127.0.0.1 local2
        chroot /var/lib/haproxy
        pidfile /var/run/haproxy.pid
        maxconn 4000
        user haproxy
        group haproxy
        daemon

        # turn on stats unix socket
        stats socket /var/lib/haproxy/stats

    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
        log global
        option httplog
        option dontlognull
        timeout connect 5000
        timeout client 5000
        timeout server 5000

    #---------------------------------------------------------------------
    # main frontend which proxies to the backends
    #---------------------------------------------------------------------
    frontend kube-apiserver
        bind *:6443
        mode tcp
        option tcplog
        default_backend kube-apiserver

    #---------------------------------------------------------------------
    # round robin balancing between the various backends
    #---------------------------------------------------------------------
    backend kube-apiserver
        mode tcp
        option tcplog
        balance roundrobin
        default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
        server kube-apiserver-1 10.10.71.214:6443 check
        server kube-apiserver-2 10.10.71.73:6443 check
        server kube-apiserver-3 10.10.71.62:6443 check
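    The balance roundrobin line above means successive connections are spread evenly across the three apiserver backends. A quick shell illustration of that selection order (illustration only, not part of the setup):

    ```shell
    # Round-robin: connection i goes to backend (i mod 3).
    servers=(kube-apiserver-1 kube-apiserver-2 kube-apiserver-3)
    for i in 0 1 2 3 4 5; do
      echo "connection $i -> ${servers[$(( i % 3 ))]}"
    done
    ```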

    Check the configuration syntax before you start the service:

    haproxy -f /etc/haproxy/haproxy.cfg -c
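    The -c flag only validates the file's syntax; it does not verify that the backends are reachable. A minimal, hypothetical pre-flight sketch using the example master IPs above (bash's /dev/tcp and coreutils timeout are assumed to be available):

    ```shell
    # Check that each kube-apiserver backend in haproxy.cfg answers on port 6443.
    check_backend() {
      local host=$1 port=$2
      if timeout 2 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} reachable"
      else
        echo "${host}:${port} NOT reachable"
        return 1
      fi
    }

    for ip in 10.10.71.214 10.10.71.73 10.10.71.62; do
      check_backend "$ip" 6443 || true   # keep going even if one backend is down
    done
    ```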

    Execute the command below to restart HAProxy and enable it to start at boot:

    systemctl restart haproxy && systemctl enable haproxy

    Stop HAProxy.

    systemctl stop haproxy

    Configure Keepalived on the main load balancer lb-0 (10.10.71.77) in /etc/keepalived/keepalived.conf.

    global_defs {
        notification_email {
        }
        smtp_connect_timeout 30
        router_id LVS_DEVEL01
        vrrp_skip_check_adv_addr
        vrrp_garp_interval 0
        vrrp_gna_interval 0
    }

    vrrp_script chk_haproxy {
        script "killall -0 haproxy"
        interval 2
        weight 20
    }

    vrrp_instance haproxy-vip {
        state MASTER
        priority 100
        interface ens192
        virtual_router_id 60
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        unicast_src_ip 10.10.71.77
        unicast_peer {
            10.10.71.66
        }
        virtual_ipaddress {
            # vip
            10.10.71.67/24
        }
        track_script {
            chk_haproxy
        }
    }

    Configure Keepalived on the backup load balancer lb-1 (10.10.71.66) in /etc/keepalived/keepalived.conf.

    global_defs {
        notification_email {
        }
        router_id LVS_DEVEL02
        vrrp_skip_check_adv_addr
        vrrp_garp_interval 0
        vrrp_gna_interval 0
    }

    vrrp_script chk_haproxy {
        script "killall -0 haproxy"
        weight 20
    }

    vrrp_instance haproxy-vip {
        state BACKUP
        priority 90
        interface ens192
        virtual_router_id 60
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        unicast_src_ip 10.10.71.66
        unicast_peer {
            10.10.71.77
        }
        virtual_ipaddress {
            10.10.71.67/24
        }
        track_script {
            chk_haproxy
        }
    }
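    The failover behavior of these two files comes down to simple arithmetic: when a vrrp_script with a positive weight passes, Keepalived adds that weight to the node's base priority, and the node with the highest effective priority holds the VIP. A quick sketch of that calculation (the semantics described here are an assumption spelled out in the comments):

    ```shell
    # Effective VRRP priority: base priority + weight when the haproxy check passes.
    weight=20
    effective() {  # $1 = base priority, $2 = 1 if "killall -0 haproxy" succeeds, else 0
      echo $(( $1 + $2 * weight ))
    }

    echo "both healthy:         lb-0=$(effective 100 1)  lb-1=$(effective 90 1)"   # 120 vs 110, VIP stays on lb-0
    echo "haproxy down on lb-0: lb-0=$(effective 100 0)  lb-1=$(effective 90 1)"   # 100 vs 110, VIP drifts to lb-1
    ```

    This is why stopping HAProxy on the VIP node (as the test below does) is enough to trigger a drift: the failed check withholds the 20-point bonus.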

    Restart Keepalived and enable it to start at boot:

    systemctl restart keepalived && systemctl enable keepalived

    Use ip a s to view the VIP binding status of each lb node:

    ip a s

    Stop HAProxy on the node that currently holds the VIP:

    systemctl stop haproxy

    Use ip a s again to check the VIP binding of each lb node, and confirm whether the VIP has drifted to the other node:

    ip a s
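    To check for drift without scanning the full ip a s output, you can grep for the VIP directly; a minimal sketch, assuming the VIP 10.10.71.67 from the configuration above:

    ```shell
    # Report whether this node currently holds the VIP.
    VIP="10.10.71.67"
    if ip -4 addr show 2>/dev/null | grep -qw "$VIP"; then
      echo "VIP $VIP is bound to this node"
    else
      echo "VIP $VIP is not on this node"
    fi
    ```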

    Alternatively, use the command below:

    systemctl status -l keepalived

    Download KubeKey

    Follow the step below to download KubeKey.

    Download KubeKey from its GitHub release page or use the following command directly:

    curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -

    If you have a poor network connection to Googleapis, run the following command first to make sure you download KubeKey from the correct zone, and then run the command above again:

    export KKZONE=cn

    Note

    After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run export KKZONE=cn again before you proceed with the steps below.

    Note

    The commands above download the latest release (v1.2.0) of KubeKey. You can change the version number in the command to download a specific version.

    Make kk executable:

    chmod +x kk

    With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.

    Create a configuration file for a Kubernetes cluster with KubeSphere installed (for example, --with-kubesphere v3.2.0):

    ./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.0

    Note

    • Recommended Kubernetes versions for KubeSphere 3.2.0: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see Support Matrix.

    • If you do not add the flag --with-kubesphere in the command in this step, KubeSphere will not be deployed unless you install it using the addons field in the configuration file or add this flag again when you use ./kk create cluster later.

    • If you add the flag --with-kubesphere without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

    A default file config-sample.yaml will be created. Modify it according to your environment.

    vi config-sample.yaml

    apiVersion: kubekey.kubesphere.io/v1alpha1
    kind: Cluster
    metadata:
      name: config-sample
    spec:
      hosts:
      - {name: master1, address: 10.10.71.214, internalAddress: 10.10.71.214, password: [email protected]!}
      - {name: master2, address: 10.10.71.73, internalAddress: 10.10.71.73, password: [email protected]!}
      - {name: master3, address: 10.10.71.62, internalAddress: 10.10.71.62, password: [email protected]!}
      - {name: node1, address: 10.10.71.75, internalAddress: 10.10.71.75, password: [email protected]!}
      - {name: node2, address: 10.10.71.76, internalAddress: 10.10.71.76, password: [email protected]!}
      - {name: node3, address: 10.10.71.79, internalAddress: 10.10.71.79, password: [email protected]!}
      roleGroups:
        etcd:
        - master1
        - master2
        - master3
        master:
        - master1
        - master2
        - master3
        worker:
        - node1
        - node2
        - node3
      controlPlaneEndpoint:
        domain: lb.kubesphere.local
        # vip
        address: "10.10.71.67"
        port: "6443"
      kubernetes:
        version: v1.21.5
        imageRepo: kubesphere
        clusterName: cluster.local
        masqueradeAll: false  # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
        maxPods: 110  # maxPods is the number of Pods that can run on this kubelet. [Default: 110]
        nodeCidrMaskSize: 24  # The internal network size allocated to each node on your network. [Default: 24]
        proxyMode: ipvs  # mode specifies which proxy mode to use. [Default: ipvs]
      network:
        plugin: calico
        calico:
          ipipMode: Always  # IPIP mode to use for the IPv4 pool created at startup. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
          vxlanMode: Never  # VXLAN mode to use for the IPv4 pool created at startup. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
          vethMTU: 1440  # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. [Default: 1440]
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
      registry:
        registryMirrors: []
        insecureRegistries: []
        privateRegistry: ""
      storage:
        defaultStorageClass: localVolume
        localVolume:
          storageClassName: local

    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.2.0
    spec:
      local_registry: ""
      persistence:
        storageClass: ""
      authentication:
        jwtSecret: ""
      etcd:
        monitoring: true  # Whether to install the etcd monitoring dashboard.
        endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9  # etcd cluster endpoint IPs.
        port: 2379  # etcd port.
        tlsEnable: true
      common:
        mysqlVolumeSize: 20Gi  # MySQL PVC size.
        minioVolumeSize: 20Gi  # MinIO PVC size.
        etcdVolumeSize: 20Gi  # etcd PVC size.
        openldapVolumeSize: 2Gi  # OpenLDAP PVC size.
        redisVolumSize: 2Gi  # Redis PVC size.
        es:  # Storage backend for logging, tracing, events and auditing.
          elasticsearchMasterReplicas: 1  # The total number of master nodes. Even numbers are not allowed.
          elasticsearchDataReplicas: 1  # The total number of data nodes.
          elasticsearchMasterVolumeSize: 4Gi  # Volume size of Elasticsearch master nodes.
          elasticsearchDataVolumeSize: 20Gi  # Volume size of Elasticsearch data nodes.
          logMaxAge: 7  # Log retention time in the built-in Elasticsearch. It is 7 days by default.
          elkPrefix: logstash  # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
          # externalElasticsearchUrl:
          # externalElasticsearchPort:
      console:
        enableMultiLogin: false  # Enable/disable simultaneous logins. It allows an account to be used by different users at the same time.
        port: 30880
      alerting:  # Whether to install the KubeSphere alerting system. It enables users to customize alerting policies to send messages to receivers in time, with different time intervals and alerting levels to choose from.
        enabled: false
      auditing:  # Whether to install the KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities that happened on the platform, initiated by different tenants.
        enabled: false
      devops:  # Whether to install the KubeSphere DevOps system. It provides an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image and Binary-to-Image.
        enabled: false
        jenkinsMemoryLim: 2Gi  # Jenkins memory limit.
        jenkinsMemoryReq: 1500Mi  # Jenkins memory request.
        jenkinsVolumeSize: 8Gi  # Jenkins volume size.
        jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
        jenkinsJavaOpts_Xmx: 512m
        jenkinsJavaOpts_MaxRAM: 2g
      events:  # Whether to install the KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
        enabled: false
      logging:  # Whether to install the KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
        enabled: false
        logsidecarReplicas: 2
      metrics_server:  # Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
        enabled: true
      monitoring:
        prometheusReplicas: 1  # Prometheus replicas are responsible for monitoring different segments of the data source and providing high availability.
        prometheusMemoryRequest: 400Mi  # Prometheus memory request.
        prometheusVolumeSize: 20Gi  # Prometheus PVC size.
        alertmanagerReplicas: 1  # AlertManager replicas.
      multicluster:
        clusterRole: none  # host | member | none # You can install a solo cluster, or specify it as the role of a host or member cluster.
      networkpolicy:  # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
        enabled: false
      notification:  # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, WeChat Work, and Slack.
        enabled: false
      openpitrix:  # Whether to install the KubeSphere App Store. It provides an app store for Helm-based applications and offers application lifecycle management.
        enabled: false
      servicemesh:  # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offers visualization for the traffic topology.
        enabled: false
    Create a cluster using the configuration file you customized above:

    ./kk create cluster -f config-sample.yaml

    Verify the Multi-node Installation

    Inspect the installation logs by executing the command below:

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

    If you can see the welcome log below, the installation is successful. Your cluster is up and running.

    **************************************************
    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################
    Console: http://10.10.71.214:30880
    Account: admin
    Password: [email protected]

    NOTES:
    1. After you log into the console, please check the
       monitoring status of service components in
       the "Cluster Management". If any service is not
       ready, please wait patiently until all components
       are up and running.
    2. Please change the default password after login.

    #####################################################
    https://kubesphere.io             2020-08-15 23:32:12
    #####################################################

    Enable Pluggable Components (Optional)

    The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see Enable Pluggable Components for more details.
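    Each pluggable component is toggled through an enabled field in the ClusterConfiguration shown earlier. As a sketch, the following hypothetical fragment would turn on the DevOps system (the surrounding keys come from the file above):

    ```yaml
    spec:
      devops:
        enabled: true  # changed from false; the installer then deploys the DevOps system
    ```

    The change can be made either in config-sample.yaml before running ./kk create cluster, or on a running cluster by editing the ks-installer ClusterConfiguration resource in the kubesphere-system namespace.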