Import an AWS EKS Cluster

    Prerequisites

    • You have a Kubernetes cluster with KubeSphere installed, and prepared this cluster as the host cluster. For more information about how to prepare a host cluster, refer to Prepare a host cluster.
    • You have an EKS cluster to be used as the member cluster.

    Import an EKS Cluster

    You need to deploy KubeSphere on your EKS cluster first. For more information about how to deploy KubeSphere on EKS, refer to Deploy KubeSphere on AWS EKS.

    Step 1: Prepare the EKS Member Cluster

    1. In order to manage the member cluster from the host cluster, you need to make the jwtSecret the same between them. Therefore, get it first by executing the following command on your host cluster.
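
      # Read the jwtSecret from the kubesphere-config ConfigMap, where KubeSphere stores it
      kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret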

      The output is similar to the following:

      jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
    2. Log in to the KubeSphere console of the EKS cluster as admin. Click Platform in the upper-left corner and then select Cluster Management.

    3. Go to CRDs, enter ClusterConfiguration in the search bar, and then press Enter on your keyboard. Click ClusterConfiguration to go to its detail page.

    4. Click the three-dot icon on the right and then select Edit YAML to edit ks-installer. Set jwtSecret to the value retrieved from your host cluster:

      authentication:
        jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g

      Note

      Make sure you use the value of your own jwtSecret. You need to wait for a while so that the changes can take effect.
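
      To confirm that the change has taken effect, you can read the value back on the member cluster (a quick check, assuming KubeSphere keeps it in the kubesphere-config ConfigMap as on the host):

      # Re-read the jwtSecret on the EKS member cluster; it should match the host's value
      kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep jwtSecret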

    Step 2: Generate a New kubeconfig File

    1. Amazon EKS doesn’t provide a built-in kubeconfig file the way a standard kubeadm cluster does. Nevertheless, you can create a kubeconfig file by referring to this document. The generated kubeconfig file will look like the following:

      apiVersion: v1
      clusters:
      - cluster:
          server: <endpoint-url>
          certificate-authority-data: <base64-encoded-ca-cert>
        name: kubernetes
      contexts:
      - context:
          cluster: kubernetes
          user: aws
        name: aws
      current-context: aws
      kind: Config
      preferences: {}
      users:
      - name: aws
        user:
          exec:
            apiVersion: client.authentication.k8s.io/v1alpha1
            command: aws
            args:
              - "eks"
              - "get-token"
              - "--cluster-name"
              - "<cluster-name>"
              # - "--role"
              # - "<role-arn>"
            # env:
              # - name: AWS_PROFILE
              #   value: "<aws-profile>"

      However, this automatically generated kubeconfig file requires the aws command (the AWS CLI) to be installed on every computer that uses it.
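
      For reference, if the AWS CLI is already configured with credentials for your account, such a kubeconfig can be generated with a single command (substitute your own region and cluster name):

      # Adds or updates an entry for this cluster in ~/.kube/config
      aws eks update-kubeconfig --region <region> --name <cluster-name>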

    2. Run the following commands on your local computer to get the token of the ServiceAccount kubesphere created by KubeSphere. This ServiceAccount has cluster-admin access to the cluster, and its token will be used as the new kubeconfig token.

      # Decode the token of the kubesphere ServiceAccount from its secret
      TOKEN=$(kubectl -n kubesphere-system get secret $(kubectl -n kubesphere-system get sa kubesphere -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
      # Add the token as a credential and make it the user of the current context
      kubectl config set-credentials kubesphere --token=${TOKEN}
      kubectl config set-context --current --user=kubesphere
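
      Note that on Kubernetes 1.24 and later, a token secret is no longer created automatically for a ServiceAccount, so the first command above may find no secret. In that case, you can request a token explicitly (a sketch using kubectl's TokenRequest support):

      # Issue a time-limited token for the kubesphere ServiceAccount (Kubernetes 1.24+)
      TOKEN=$(kubectl -n kubesphere-system create token kubesphere)
      kubectl config set-credentials kubesphere --token=${TOKEN}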
    3. Retrieve the new kubeconfig file by running the following command:
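
      # Print the kubeconfig (assuming the default location ~/.kube/config)
      cat ~/.kube/config

      The output is similar to the following: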

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZ...S0tLQo=
          server: https://*.sk1.cn-north-1.eks.amazonaws.com.cn
        name: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
      contexts:
      - context:
          cluster: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
          user: kubesphere
        name: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
      current-context: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
      kind: Config
      preferences: {}
      users:
      - name: arn:aws-cn:eks:cn-north-1:660450875567:cluster/EKS-LUSLVMT6
        user:
          exec:
            apiVersion: client.authentication.k8s.io/v1alpha1
            args:
            - --region
            - cn-north-1
            - eks
            - get-token
            - --cluster-name
            - EKS-LUSLVMT6
            command: aws
            env: null
      - name: kubesphere
        user:
          token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImlCRHF4SlE5a0JFNDlSM2xKWnY1Vkt5NTJrcDNqRS1Ta25IYkg1akhNRmsifQ.eyJpc3M................9KQtFULW544G-FBwURd6ArjgQ3Ay6NHYWZe3gWCHLmag9gF-hnzxequ7oN0LiJrA-al1qGeQv-8eiOFqX3RPCQgbybmix8qw5U6f-Rwvb47-xA

      You can run the following command to verify that the new kubeconfig has access to the EKS cluster:

      kubectl get nodes

      The output is similar to this (node names, ages, and versions will vary):
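
      NAME                                          STATUS   ROLES    AGE   VERSION
      ip-10-0-47-38.cn-north-1.compute.internal     Ready    <none>   13h   v1.18.9-eks-d1db3c
      ip-10-0-87-132.cn-north-1.compute.internal    Ready    <none>   13h   v1.18.9-eks-d1db3c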

    Step 3: Import the EKS Member Cluster

    1. Log in to the KubeSphere console on your host cluster as admin. Click Platform in the upper-left corner and then select Cluster Management. On the Cluster Management page, click Add Cluster.

    2. Enter the basic information based on your needs and click Next.

    3. In Connection Method, select Direct connection, fill in the kubeconfig contents generated above for the EKS member cluster, and then click Create.

    4. Wait for cluster initialization to finish.