This section describes how to install a Kubernetes cluster according to the best practices for the Rancher server environment.

Prerequisites

These instructions assume you have set up three nodes, a load balancer, and a DNS record, as described in the section on setting up the infrastructure for a high-availability RKE2 cluster.

Note that in order for RKE2 to work correctly with the load balancer, you need to set up two listeners: one for the supervisor on port 9345, and one for the Kubernetes API on port 6443.
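For illustration only, a TCP load balancer in front of the three server nodes could be configured roughly as follows (shown here as an NGINX stream configuration; the node IPs are placeholders, and any load balancer that forwards both ports works equally well):

    # Illustrative NGINX stream (TCP) configuration; node IPs are placeholders.
    stream {
        upstream rke2_supervisor {
            server 10.0.0.1:9345;
            server 10.0.0.2:9345;
            server 10.0.0.3:9345;
        }
        upstream rke2_apiserver {
            server 10.0.0.1:6443;
            server 10.0.0.2:6443;
            server 10.0.0.3:6443;
        }
        server {
            listen 9345;              # supervisor / node registration
            proxy_pass rke2_supervisor;
        }
        server {
            listen 6443;              # Kubernetes API
            proxy_pass rke2_apiserver;
        }
    }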

Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the support maintenance terms. To specify the RKE2 version, use the INSTALL_RKE2_VERSION environment variable when running the RKE2 installation script.
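For example (the version string is a placeholder; substitute a release supported by your Rancher version):

    curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="vX.YY.Z+rke2rN" sh -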

Installing Kubernetes

RKE2 server runs with embedded etcd, so you will not need to set up an external datastore to run in HA mode.

On the first node, you should set up the configuration file with your own pre-shared secret as the token. The token argument can be set on startup.

If you do not specify a pre-shared secret, RKE2 will generate one and place it at /var/lib/rancher/rke2/server/node-token.

Here is an example of what the RKE2 config file (at /etc/rancher/rke2/config.yaml) would look like if you are following this guide:
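(The values below are placeholders: use your own pre-shared secret as the token, and include the DNS name of your load balancer in the tls-san list so that the generated certificates cover it.)

    token: my-shared-secret
    tls-san:
      - my-kubernetes-domain.com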

After that, run the RKE2 installation script, then enable and start the rke2-server service:

    curl -sfL https://get.rke2.io | sh -   # set INSTALL_RKE2_VERSION here to pin a specific version
    systemctl enable rke2-server.service
    systemctl start rke2-server.service

Repeat the same commands on your second and third RKE2 server nodes. On those nodes, the configuration file also needs a server directive pointing at the cluster through the load balancer on port 9345, so that they join the first server rather than starting clusters of their own; see the sketch below.
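As a rough sketch (the host name and token value are placeholders), the configuration file at /etc/rancher/rke2/config.yaml on those additional nodes might look like this:

    token: my-shared-secret                        # same pre-shared secret as the first node
    server: https://my-kubernetes-domain.com:9345  # load balancer DNS name, supervisor port
    tls-san:
      - my-kubernetes-domain.com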

Once you've launched the rke2 server process on all server nodes, ensure that the cluster has come up properly by listing the nodes, as shown below.
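For example, using the kubectl binary and kubeconfig that RKE2 places on each server node (the same paths used in the pod check below):

    /var/lib/rancher/rke2/bin/kubectl \
        --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes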

Then test the health of the cluster pods:

    /var/lib/rancher/rke2/bin/kubectl \
        --kubeconfig /etc/rancher/rke2/rke2.yaml get pods --all-namespaces

Result: You have successfully set up an RKE2 Kubernetes cluster.

When you installed RKE2 on each Rancher server node, a kubeconfig file was created on the node at /etc/rancher/rke2/rke2.yaml. This file contains credentials for full access to the cluster, and you should save this file in a secure location.

  1. Install kubectl, a Kubernetes command-line tool.
  2. Copy the file at /etc/rancher/rke2/rke2.yaml and save it to your local machine, for example as ~/.kube/config/rke2.yaml (the path used in the kubectl example below).
  3. In the kubeconfig file, the server directive is defined as localhost. Change it to the DNS name of your load balancer, using port 6443. (The Kubernetes API server is reached on port 6443, while the Rancher server is reached on ports 80 and 443.) Here is an example rke2.yaml:
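(Certificate and key data are elided below, and the load balancer DNS name is a placeholder.)

    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: <BASE64-CA-DATA>
        server: https://my-kubernetes-domain.com:6443   # DNS name of the load balancer, port 6443
      name: default
    contexts:
    - context:
        cluster: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        client-certificate-data: <BASE64-CERT-DATA>
        client-key-data: <BASE64-KEY-DATA>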

Result: You can now use kubectl to manage your RKE2 cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using kubectl:

    kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces

For more information about the kubeconfig file, refer to the official Kubernetes documentation about organizing cluster access using kubeconfig files.

Now that you have set up the kubeconfig file, you can use kubectl to access the cluster from your local machine.

Check that all the required pods and containers are healthy before you continue:
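For example, reusing the kubeconfig saved in the previous step; all pods should show a Running or Completed status:

    kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces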

Result: You have confirmed that you can access the cluster with kubectl and that the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster.

Currently, RKE2 deploys nginx-ingress as a Deployment rather than a DaemonSet, so the ingress controller does not necessarily run on every server node. This can impact the Rancher installation, because not all servers can then proxy requests to the Rancher pods.

To rectify that, place the following file in /var/lib/rancher/rke2/server/manifests on any of the server nodes:

    apiVersion: helm.cattle.io/v1
    kind: HelmChartConfig
    metadata:
      name: rke2-ingress-nginx
      namespace: kube-system
    spec:
      valuesContent: |-
        controller:
          kind: DaemonSet
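
If you want to confirm that the override took effect, one quick check (the exact DaemonSet name is an assumption here and may differ between RKE2 versions) is to list the DaemonSets in kube-system and verify that the nginx ingress controller now appears there with one pod per node:

    kubectl --kubeconfig ~/.kube/config/rke2.yaml -n kube-system get daemonset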