Air-gapped Installation On Linux

    Step 1: Prepare Linux Hosts

    Please see the hardware and operating system requirements below. To get started with multi-node installation, you need to prepare at least three hosts according to the following requirements.

    Note

    KubeKey uses /var/lib/docker as the default directory where all Docker related files, including images, are stored. It is recommended that you add additional storage volumes of at least 100G mounted to /var/lib/docker and /mnt/registry respectively. See the fdisk command for reference.

    Node requirements

    • It is recommended that your OS be clean (without any other software installed); otherwise, there may be conflicts.
    • Ensure that the disk of each node is at least 100G.
    • All nodes must be accessible through SSH.
    • Time synchronization is required across all nodes.
    • sudo, curl, and openssl must be available on all nodes.
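
    The following is a minimal sketch for checking these prerequisites on each node before installation; it assumes systemd-based time synchronization (timedatectl), which may differ in your environment:

    # Verify that the required commands are present.
    for cmd in sudo curl openssl; do
      command -v "$cmd" >/dev/null || echo "missing: $cmd"
    done

    # Check that the system clock is synchronized.
    timedatectl status | grep -i synchronized

    # Confirm that the root volume has enough space (at least 100G is recommended).
    df -h /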

    KubeKey can install Kubernetes and KubeSphere together. The dependencies that need to be installed may differ depending on the Kubernetes version to be installed. You can refer to the list below to see whether you need to install the relevant dependencies on your nodes in advance.

    Dependency   Kubernetes Version ≥ 1.18    Kubernetes Version < 1.18
    socat        Required                     Optional but recommended
    conntrack    Required                     Optional but recommended
    ebtables     Optional but recommended     Optional but recommended
    ipset        Optional but recommended     Optional but recommended

    Note

    • In an air-gapped environment, you can install these dependencies from a private package repository, an RPM package (for CentOS), or a Deb package (for Debian); see the sketch after this note.
    • It is recommended that you create an OS image file with all relevant dependencies installed in advance. In this way, you can use the image file directly to install the OS on each machine, improving deployment efficiency without worrying about dependency issues.
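
    A minimal sketch of installing the dependencies from local packages copied onto each node is shown below; the package file names are placeholders, not exact versions:

    # CentOS / RHEL: install from local RPM packages.
    rpm -ivh socat-*.rpm conntrack-tools-*.rpm ebtables-*.rpm ipset-*.rpm

    # Debian / Ubuntu: install from local Deb packages.
    dpkg -i socat_*.deb conntrack_*.deb ebtables_*.deb ipset_*.deb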

    Container runtimes

    Your cluster must have an available container runtime. For air-gapped installation, you must install Docker or another container runtime yourself before you create a cluster; a sketch follows the list below.

    • Make sure the DNS address in /etc/resolv.conf is available. Otherwise, it may cause DNS issues in the cluster.
    • If your network configuration uses a firewall or security groups, you must ensure infrastructure components can communicate with each other through specific ports. It is recommended that you turn off the firewall. For more information, refer to Port Requirements.
    • Supported CNI plugins: Calico and Flannel. Others (such as Cilium and Kube-OVN) may also work, but note that they have not been fully tested.
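
    Below is a minimal sketch of installing Docker from a static binary package in an air-gapped environment; the package version, download URL, and the way the daemon is started are assumptions, and a proper systemd unit is preferable in practice:

    # On an Internet-connected machine, download a Docker static binary package, for example:
    #   curl -LO https://download.docker.com/linux/static/stable/x86_64/docker-20.10.8.tgz
    # Transfer the .tgz file to each node, then install and start Docker:
    tar -xzf docker-20.10.8.tgz
    cp docker/* /usr/bin/
    dockerd &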

    Example machines

    This example includes three hosts as below, with the master node serving as the taskbox.

    Host IP        Host Name   Role
    192.168.0.2    master      control plane, etcd, worker (taskbox)
    192.168.0.3    node1       worker
    192.168.0.4    node2       worker

    Step 2: Prepare a Private Image Registry

    You can use Harbor or any other private image registry. This tutorial uses Docker registry as an example with self-signed certificates (if you have your own private image registry, you can skip this step).

    Use self-signed certificates

    1. Generate your own certificate by executing the following commands:

      openssl req \
      -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
      -x509 -days 36500 -out certs/domain.crt
    2. Make sure you specify a domain name in the field Common Name when you are generating your own certificate. For instance, the field is set to dockerhub.kubekey.local in this example.

    Run the following commands to start the Docker registry:

    docker run -d \
      --restart=always \
      --name registry \
      -v "$(pwd)"/certs:/certs \
      -v /mnt/registry:/var/lib/registry \
      -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
      -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
      -p 443:443 \
      registry:2

    Configure the registry

    1. Add an entry to /etc/hosts to map the hostname (i.e. the registry domain name; in this case, it is dockerhub.kubekey.local) to the private IP address of your machine as below.

      # docker registry
      192.168.0.2 dockerhub.kubekey.local
    2. Execute the following commands to copy the certificate to a specified directory and make Docker trust it.

      mkdir -p /etc/docker/certs.d/dockerhub.kubekey.local
      cp certs/domain.crt /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt

      Note

      The path of the certificate is related to the domain name. When you copy the certificate, use your actual domain name in the path if it is different from the one set above.

    3. To verify whether the private registry works, you can first copy an image to your local machine, and then use docker push and docker pull to test it, as sketched below.
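
    A minimal verification sketch is shown below; the alpine image and the library/ project path are arbitrary choices, and the image is assumed to be already available locally in the air-gapped environment:

    # Tag a locally available image with the private registry domain and push it.
    docker tag alpine:3.14 dockerhub.kubekey.local/library/alpine:3.14
    docker push dockerhub.kubekey.local/library/alpine:3.14

    # Remove the local copy and pull it back to confirm the registry and its certificate work.
    docker rmi dockerhub.kubekey.local/library/alpine:3.14
    docker pull dockerhub.kubekey.local/library/alpine:3.14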

    Step 3: Download KubeKey

    Download KubeKey on a machine that has access to the Internet, transfer it to your taskbox machine, and make the kk file executable:

    chmod +x kk

    Step 4: Prepare Installation Images

    As you install KubeSphere and Kubernetes on Linux, you need to prepare an image package containing all the necessary images and download the Kubernetes binary file in advance.

    1. Download the image list file images-list.txt from a machine that has access to the Internet through the following command:

      curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/images-list.txt

      Note

      This file lists images under ##+modulename based on different modules. You can add your own images to this file following the same rule (see the sketch after this list). To view the complete file, see the Appendix below.

    2. Download offline-installation-tool.sh.

      curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/offline-installation-tool.sh
    3. Make the .sh file executable.

      chmod +x offline-installation-tool.sh
    4. You can execute the command ./offline-installation-tool.sh -h to see how to use the script:

      root@master:/home/ubuntu# ./offline-installation-tool.sh -h
      Usage:

        ./offline-installation-tool.sh [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES-VERSION]

      Description:
        -b                     : save kubernetes' binaries.
        -d IMAGES-DIR          : the dir of files (tar.gz) which generated by `docker save`. default: ./kubesphere-images
        -l IMAGES-LIST         : text file with list of images.
        -r PRIVATE-REGISTRY    : target private registry:port.
        -s                     : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
        -v KUBERNETES-VERSION  : download kubernetes' binaries. default: v1.17.9
        -h                     : usage message
    5. Download the Kubernetes binary file.

      ./offline-installation-tool.sh -b -v v1.21.5

      If you cannot access the object storage service of Google, run the following command instead, which adds an environment variable to change the download source.

      export KKZONE=cn;./offline-installation-tool.sh -b -v v1.21.5

      Note

      • You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.2.0: v1.19.x, v1.20.x, v1.21.x or v1.22.x (experimental). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.21.5 by default. For more information about supported Kubernetes versions, see Support Matrix.

      • After you run the script, a folder kubekey is automatically created. Note that this folder and kk must be placed in the same directory when you create the cluster later.

    6. Pull images using offline-installation-tool.sh.

      ./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images

      Note

      You can choose to pull only the images you need. For example, you can delete ##k8s-images and the related images under it in images-list.txt if you already have a Kubernetes cluster.
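
    For reference, a custom module and image can be appended to images-list.txt as sketched below; the module name and image are hypothetical and only illustrate the ##+modulename rule:

    # Append a custom module header and your own image (both hypothetical examples).
    printf '%s\n' '##my-custom-images' 'example.com/myteam/myapp:v1.0' >> images-list.txt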

    Step 5: Push Images to Your Private Registry

    Transfer your packaged image file to your local machine and execute the following command to push the images to your private registry:

    ./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local

    Note

    The domain name in the command above is dockerhub.kubekey.local. Make sure you use your own registry address if it is different.

    Step 6: Create a Cluster

    In this tutorial, KubeSphere is installed on multiple nodes, so you need to specify a configuration file to add host information. Besides, for air-gapped installation, pay special attention to .spec.registry.privateRegistry, which must be set to your own registry address. See the example configuration file below for more information.

    Create an example configuration file

    Execute the following command to generate an example configuration file for installation:

    ./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]

    For example:

    ./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.0 -f config-sample.yaml

    Note

    • Make sure the Kubernetes version is the one you downloaded.

    • If you do not add the flag --with-kubesphere in the command in this step, KubeSphere will not be deployed unless you install it using the addons field in the configuration file or add this flag again when you use ./kk create cluster later.

    Edit the generated configuration file config-sample.yaml. Here is an example for your reference:

    Warning

    For air-gapped installation, you must specify privateRegistry, which is dockerhub.kubekey.local in this example.

    apiVersion: kubekey.kubesphere.io/v1alpha1
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Qcloud@123}
      - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, password: Qcloud@123}
      - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, password: Qcloud@123}
      roleGroups:
        etcd:
        - master
        master:
        - master
        worker:
        - master
        - node1
        - node2
      controlPlaneEndpoint:
        domain: lb.kubesphere.local
        address: ""
        port: 6443
      kubernetes:
        version: v1.21.5
        imageRepo: kubesphere
        clusterName: cluster.local
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
      registry:
        registryMirrors: []
        insecureRegistries: []
        privateRegistry: dockerhub.kubekey.local # Add the private image registry address here.
      addons: []

    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.2.0
    spec:
      persistence:
        storageClass: ""
      authentication:
      zone: ""
      local_registry: ""
      etcd:
        monitoring: false
        endpointIps: localhost
        port: 2379
      common:
        redis:
          enabled: false
          redisVolumSize: 2Gi
        openldap:
          enabled: false
          openldapVolumeSize: 2Gi
        minioVolumeSize: 20Gi
        monitoring:
          endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
        es:
          elasticsearchMasterVolumeSize: 4Gi
          elasticsearchDataVolumeSize: 20Gi
          logMaxAge: 7
          elkPrefix: logstash
          basicAuth:
            enabled: false
            username: ""
            password: ""
          externalElasticsearchUrl: ""
          externalElasticsearchPort: ""
      console:
        enableMultiLogin: true
        port: 30880
      alerting:
        enabled: false
        # thanosruler:
        #   replicas: 1
        #   resources: {}
      auditing:
        enabled: false
      devops:
        enabled: false
        jenkinsMemoryLim: 2Gi
        jenkinsMemoryReq: 1500Mi
        jenkinsVolumeSize: 8Gi
        jenkinsJavaOpts_Xms: 512m
        jenkinsJavaOpts_Xmx: 512m
        jenkinsJavaOpts_MaxRAM: 2g
      events:
        enabled: false
        ruler:
          enabled: true
          replicas: 2
      logging:
        enabled: false
        logsidecar:
          enabled: true
          replicas: 2
      metrics_server:
        enabled: false
      monitoring:
        storageClass: ""
        prometheusMemoryRequest: 400Mi
        prometheusVolumeSize: 20Gi
      multicluster:
        clusterRole: none
      network:
        networkpolicy:
          enabled: false
        ippool:
          type: none
        topology:
          type: none
      notification:
        enabled: false
      openpitrix:
        store:
          enabled: false
      servicemesh:
        enabled: false
      kubeedge:
        enabled: false
        cloudCore:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          cloudhubPort: "10000"
          cloudhubQuicPort: "10001"
          cloudhubHttpsPort: "10002"
          cloudstreamPort: "10003"
          tunnelPort: "10004"
          cloudHub:
            advertiseAddress:
            - ""
            nodeLimit: "100"
          service:
            cloudhubNodePort: "30000"
            cloudhubQuicNodePort: "30001"
            cloudhubHttpsNodePort: "30002"
            cloudstreamNodePort: "30003"
            tunnelNodePort: "30004"
        edgeWatcher:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          edgeWatcherAgent:
            nodeSelector: {"node-role.kubernetes.io/worker": ""}
            tolerations: []

    Info

    For more information about these parameters, see Kubernetes Cluster Configuration. To enable pluggable components in config-sample.yaml, refer to Enable Pluggable Components for more details.

    Step 7: Start Installation

    You can execute the following command after you make sure that all the steps above are completed.

    ./kk create cluster -f config-sample.yaml

    Warning

    After you transfer the executable file kk and the folder kubekey that contains the Kubernetes binary file to the taskbox machine for installation, they must be placed in the same directory before you execute the command above.
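
    As a minimal sketch (the host, paths, and the use of scp are assumptions; any offline transfer method works), the artifacts can be transferred and checked like this:

    # Copy kk, the kubekey folder, and the configuration file to the taskbox (the master node in this example).
    ssh ubuntu@192.168.0.2 mkdir -p /home/ubuntu/kubesphere-offline
    scp -r kk kubekey config-sample.yaml ubuntu@192.168.0.2:/home/ubuntu/kubesphere-offline/

    # On the taskbox, kk and the kubekey folder must be in the same directory.
    cd /home/ubuntu/kubesphere-offline && ls
    # Expected: config-sample.yaml  kk  kubekey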

    Step 8: Verify Installation

    When the installation finishes, you can see the content as follows:

    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################

    Console: http://192.168.0.2:30880
    Account: admin
    Password: P@88w0rd

    NOTES:
      1. After you log into the console, please check the
         monitoring status of service components in
         the "Cluster Management". If any service is not
         ready, please wait patiently until all components
         are up and running.
      2. Please change the default password after login.

    #####################################################
    https://kubesphere.io             20xx-xx-xx xx:xx:xx
    #####################################################

    Now, you will be able to access the web console of KubeSphere through http://{IP}:30880 with the default account and password admin/P@88w0rd.
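
    If the welcome output does not appear or you want to follow the installation progress, a common approach (a sketch assuming kubectl is configured on the taskbox) is to watch the ks-installer logs and the component Pods:

    # Follow the installer logs until the welcome message is printed.
    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

    # Check that all components are up and running.
    kubectl get pod --all-namespaces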

    (Screenshot: kubesphere-login)

    Appendix

    Image list of KubeSphere 3.2.0

    ##k8s-images
    kubesphere/kube-apiserver:v1.22.1
    kubesphere/kube-controller-manager:v1.22.1
    kubesphere/kube-proxy:v1.22.1
    kubesphere/kube-scheduler:v1.22.1
    kubesphere/kube-apiserver:v1.21.5
    kubesphere/kube-controller-manager:v1.21.5
    kubesphere/kube-proxy:v1.21.5
    kubesphere/kube-controller-manager:v1.20.10
    kubesphere/kube-proxy:v1.20.10
    kubesphere/kube-scheduler:v1.20.10
    kubesphere/kube-apiserver:v1.19.9
    kubesphere/kube-controller-manager:v1.19.9
    kubesphere/kube-proxy:v1.19.9
    kubesphere/kube-scheduler:v1.19.9
    kubesphere/pause:3.5
    kubesphere/pause:3.4.1
    coredns/coredns:1.8.0
    calico/cni:v3.20.0
    calico/kube-controllers:v3.20.0
    calico/node:v3.20.0
    calico/pod2daemon-flexvol:v3.20.0
    calico/typha:v3.20.0
    kubesphere/flannel:v0.12.0
    openebs/provisioner-localpv:2.10.1
    openebs/linux-utils:2.10.0
    kubesphere/k8s-dns-node-cache:1.15.12
    ##kubesphere-images
    kubesphere/ks-installer:v3.2.0
    kubesphere/ks-apiserver:v3.2.0
    kubesphere/ks-console:v3.2.0
    kubesphere/ks-controller-manager:v3.2.0
    kubesphere/kubectl:v1.20.0
    kubesphere/kubefed:v0.8.1
    kubesphere/tower:v0.2.0
    kubesphere/kubectl:v1.19.1
    minio/minio:RELEASE.2019-08-07T01-59-21Z
    minio/mc:RELEASE.2019-08-07T23-14-43Z
    csiplugin/snapshot-controller:v4.0.0
    kubesphere/nginx-ingress-controller:v0.48.1
    mirrorgooglecontainers/defaultbackend-amd64:1.4
    kubesphere/metrics-server:v0.4.2
    redis:5.0.12-alpine
    haproxy:2.0.22-alpine
    alpine:3.14
    osixia/openldap:1.3.0
    kubesphere/netshoot:v1.0
    ##kubeedge-images
    kubeedge/cloudcore:v1.7.2
    kubesphere/edge-watcher:v0.1.1
    kubesphere/edge-watcher-agent:v0.1.0
    ##gatekeeper-images
    openpolicyagent/gatekeeper:v3.5.2
    ##openpitrix-images
    kubesphere/openpitrix-jobs:v3.2.0
    ##kubesphere-devops-images
    kubesphere/devops-apiserver:v3.2.0
    kubesphere/devops-controller:v3.2.0
    kubesphere/devops-tools:v3.2.0
    kubesphere/ks-jenkins:v3.2.0-2.249.1
    jenkins/jnlp-slave:3.27-1
    kubesphere/builder-base:v3.2.0
    kubesphere/builder-nodejs:v3.2.0
    kubesphere/builder-maven:v3.2.0
    kubesphere/builder-go:v3.2.0
    kubesphere/s2ioperator:v3.2.0
    kubesphere/s2irun:v3.2.0
    kubesphere/s2i-binary:v3.2.0
    kubesphere/tomcat85-java11-centos7:v3.2.0
    kubesphere/tomcat85-java11-runtime:v3.2.0
    kubesphere/tomcat85-java8-centos7:v3.2.0
    kubesphere/tomcat85-java8-runtime:v3.2.0
    kubesphere/java-11-centos7:v3.2.0
    kubesphere/java-8-centos7:v3.2.0
    kubesphere/java-8-runtime:v3.2.0
    kubesphere/java-11-runtime:v3.2.0
    kubesphere/nodejs-8-centos7:v3.2.0
    kubesphere/nodejs-6-centos7:v3.2.0
    kubesphere/nodejs-4-centos7:v3.2.0
    kubesphere/python-36-centos7:v3.2.0
    kubesphere/python-35-centos7:v3.2.0
    kubesphere/python-34-centos7:v3.2.0
    kubesphere/python-27-centos7:v3.2.0
    ##kubesphere-monitoring-images
    jimmidyson/configmap-reload:v0.3.0
    prom/prometheus:v2.26.0
    kubesphere/prometheus-config-reloader:v0.43.2
    kubesphere/prometheus-operator:v0.43.2
    kubesphere/kube-rbac-proxy:v0.8.0
    kubesphere/kube-state-metrics:v1.9.7
    prom/node-exporter:v0.18.1
    kubesphere/k8s-prometheus-adapter-amd64:v0.6.0
    prom/alertmanager:v0.21.0
    thanosio/thanos:v0.18.0
    grafana/grafana:7.4.3
    kubesphere/kube-rbac-proxy:v0.8.0
    kubesphere/notification-manager-operator:v1.4.0
    kubesphere/notification-manager:v1.4.0
    kubesphere/notification-tenant-sidecar:v3.2.0
    ##kubesphere-logging-images
    kubesphere/elasticsearch-curator:v5.7.6
    kubesphere/elasticsearch-oss:6.7.0-1
    kubesphere/fluentbit-operator:v0.11.0
    docker:19.03
    kubesphere/fluent-bit:v1.8.3
    kubesphere/log-sidecar-injector:1.1
    elastic/filebeat:6.7.0
    kubesphere/kube-events-operator:v0.3.0
    kubesphere/kube-events-exporter:v0.3.0
    kubesphere/kube-events-ruler:v0.3.0
    kubesphere/kube-auditing-operator:v0.2.0
    kubesphere/kube-auditing-webhook:v0.2.0
    ##istio-images
    istio/pilot:1.11.1
    istio/proxyv2:1.11.1
    jaegertracing/jaeger-operator:1.27
    jaegertracing/jaeger-agent:1.27
    jaegertracing/jaeger-collector:1.27
    jaegertracing/jaeger-query:1.27
    jaegertracing/jaeger-es-index-cleaner:1.27
    kubesphere/kiali-operator:v1.38.1
    kubesphere/kiali:v1.38
    ##example-images
    busybox:1.31.1
    nginx:1.14-alpine
    joosthofman/wget:1.0
    nginxdemos/hello:plain-text
    wordpress:4.8-apache
    mirrorgooglecontainers/hpa-example:latest
    java:openjdk-8-jre-alpine
    fluent/fluentd:v1.4.2-2.0
    perl:latest
    kubesphere/examples-bookinfo-productpage-v1:1.16.2
    kubesphere/examples-bookinfo-reviews-v1:1.16.2
    kubesphere/examples-bookinfo-reviews-v2:1.16.2
    kubesphere/examples-bookinfo-details-v1:1.16.2
    kubesphere/examples-bookinfo-ratings-v1:1.16.3
    ##weave-scope-images