To use this option you’ll need access to the servers you intend to use in your Kubernetes cluster. Provision each server according to the requirements, which include some hardware specifications and Docker. After you install Docker on each server, you will also run the command provided in the Rancher UI on each server to turn it into a Kubernetes node.
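For example, a minimal sketch of installing Docker on one host, assuming an Ubuntu server and the distribution’s docker.io package (your Kubernetes version may require a specific supported Docker release, so check the requirements first):

        # Install and enable Docker from the Ubuntu repositories (illustrative)
        sudo apt-get update
        sudo apt-get install -y docker.io
        sudo systemctl enable --now docker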

    This section describes how to set up a custom cluster.

    Creating a Cluster with Custom Nodes

    Begin creation of a custom cluster by provisioning a Linux host. Your host can be:

    • A cloud-hosted virtual machine (VM)
    • An on-prem VM
    • A bare-metal server

    If you want to reuse a node from a previous custom cluster, clean the node before using it in a cluster again. If you reuse a node that hasn’t been cleaned, cluster provisioning may fail.
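    A hedged sketch of cleaning a previously used node; the exact set of leftover directories depends on your Kubernetes distribution and network provider, so treat this list as illustrative rather than exhaustive:

        # Remove all containers left over from the previous cluster
        docker rm -f $(docker ps -qa)
        # Remove state directories commonly left behind (illustrative list)
        sudo rm -rf /etc/kubernetes /etc/cni /opt/cni \
          /var/lib/etcd /var/lib/kubelet /var/lib/rancher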

    Provision the host according to the installation requirements.

    Clusters won’t begin provisioning until all three node roles (worker, etcd, and control plane) are present.

    1. Choose Custom.

    2. Enter a Cluster Name.

    3. Use Cluster Options to choose the Kubernetes version, the network provider, and whether to enable project network isolation. To see more cluster options, click Show advanced options.

      Note: If you plan to use Windows nodes as Kubernetes workers, see the Windows cluster requirements.

    4. Click Next.

    5. From Node Role, choose the roles that you want filled by a cluster node. You must provision at least one node for each role: etcd, worker, and control plane. All three roles are required for a custom cluster to finish provisioning. For more information on roles, see Kubernetes Cluster Node Roles.

    6. Copy the command displayed on screen to your clipboard.

    7. Log in to your Linux host using your preferred shell, such as PuTTY or a remote terminal connection, and run the command you copied to your clipboard. A sketch of the command’s typical shape follows this list.

      Note: Repeat steps 5-7 if you want to dedicate specific hosts to specific node roles. Repeat the steps as many times as needed.

    8. When you finish running the command(s) on your Linux host(s), click Done.
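    The exact image tag, token, and checksum in the command come from your Rancher UI; its shape is typically a docker run of the rancher-agent container, sketched here with placeholder values. The role flags at the end correspond to the Node Role checkboxes you selected:

        # Placeholder values; copy the real command from the Rancher UI
        sudo docker run -d --privileged --restart=unless-stopped --net=host \
          -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
          rancher/rancher-agent:<VERSION> \
          --server https://<RANCHER_SERVER> --token <TOKEN> --ca-checksum <CHECKSUM> \
          --etcd --controlplane --worker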

    Result:

    You can access your cluster after its state is updated to Active.

    Active clusters are assigned two Projects:

    • Default, containing the default namespace
    • System, containing the cattle-system, ingress-nginx, kube-public, and kube-system namespaces
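    Once you have kubectl access to the cluster (see Optional Next Steps below), a quick hedged way to confirm these namespaces exist:

        kubectl get namespaces
        # Expect default alongside cattle-system, ingress-nginx, kube-public, and kube-system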

    If you have configured your cluster to use Amazon as its cloud provider, tag your AWS resources with a cluster ID.

    Amazon Documentation: Tagging Your Amazon EC2 Resources

    The following resources need to be tagged with a ClusterID:

    • Nodes: All hosts added in Rancher.
    • Subnet: The subnet used for your cluster.
    • Security Group: The security group used for your cluster.

      Note: Do not tag multiple security groups. Tagging multiple groups generates an error when creating an Elastic Load Balancer.
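    A hedged way to check that only one security group carries the cluster tag, using the AWS CLI (the cluster ID is a placeholder for your own value):

        # List security groups tagged for this cluster; expect exactly one ID
        aws ec2 describe-security-groups \
          --filters "Name=tag-key,Values=kubernetes.io/cluster/<CLUSTERID>" \
          --query "SecurityGroups[].GroupId"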

    The tag that should be used is:

    Key=kubernetes.io/cluster/<CLUSTERID>, Value=owned

    <CLUSTERID> can be any string you choose. However, the same string must be used on every resource you tag. Setting the tag value to owned informs the cluster that all resources tagged with the <CLUSTERID> are owned and managed by this cluster.
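    A hedged example of applying the tag with the AWS CLI; the resource IDs here are hypothetical placeholders for your own subnet, security group, and instances:

        # Tag the subnet, security group, and node instances with cluster ownership
        aws ec2 create-tags \
          --resources subnet-0abc123 sg-0abc123 i-0abc123 \
          --tags Key=kubernetes.io/cluster/<CLUSTERID>,Value=owned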

    If you share resources between clusters, you can change the tag to:

    Key=kubernetes.io/cluster/<CLUSTERID>, Value=shared

    Optional Next Steps

    • Access your cluster with the kubectl CLI: Follow these steps to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
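    A minimal sketch, assuming you have downloaded the cluster’s kubeconfig file from the Rancher UI (the file path here is hypothetical):

        # Point kubectl at the downloaded kubeconfig and verify the nodes
        export KUBECONFIG=$HOME/.kube/my-custom-cluster.yaml
        kubectl get nodes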