Rancher Nodes

The following tables list the ports that need to be open to and from nodes that are running the Rancher server.

The port requirements differ based on the Rancher server architecture.

As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s, RKE, or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution’s documentation for the port requirements for cluster nodes.


The K3s server needs port 6443 to be accessible by the nodes.

The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.
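
If your nodes run a host firewall such as firewalld, these ports have to be opened explicitly. The following is a minimal sketch, assuming firewalld on the K3s nodes and the default Flannel VXLAN backend (adjust the ports if you use a custom CNI):

```bash
# On the K3s server node: allow access to the Kubernetes API
sudo firewall-cmd --permanent --add-port=6443/tcp

# On every node: allow Flannel VXLAN traffic between nodes
# (not needed if you provide your own custom CNI)
sudo firewall-cmd --permanent --add-port=8472/udp

# On every node: allow the metrics server to reach the kubelet
sudo firewall-cmd --permanent --add-port=10250/tcp

# Apply the new rules
sudo firewall-cmd --reload
```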

The following tables break down the port requirements for inbound and outbound traffic:

Inbound Rules for Rancher Server Nodes

Outbound Rules for Rancher Nodes

| Protocol | Port | Destination | Description |
| --- | --- | --- | --- |
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using Node Driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |

Ports for Rancher Server Nodes on RKE


Typically Rancher is installed on three RKE nodes that all have the etcd, control plane and worker roles.

The following tables break down the port requirements for traffic between the Rancher nodes:

Rules for traffic between Rancher nodes

| Protocol | Port | Description |
| --- | --- | --- |
| TCP | 443 | Rancher agents |
| TCP | 2379 | etcd client requests |
| TCP | 2380 | etcd peer communication |
| TCP | 6443 | Kubernetes apiserver |
| UDP | 8472 | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | Metrics server communication with all nodes |
| TCP | 10254 | Ingress controller livenessProbe/readinessProbe |
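
A quick way to verify that these ports are reachable between the Rancher nodes is a simple connection test from one node to the others. The following is a rough sketch using netcat; the address 10.0.0.12 is a placeholder for another node's private IP:

```bash
# Run from one Rancher node against another node's private IP (placeholder address)
nc -zv 10.0.0.12 443    # Rancher agents
nc -zv 10.0.0.12 2379   # etcd client requests
nc -zv 10.0.0.12 6443   # Kubernetes apiserver
nc -zvu 10.0.0.12 8472  # Canal/Flannel VXLAN (UDP checks are best-effort)
```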

The following tables break down the port requirements for inbound and outbound traffic:

Inbound Rules for Rancher Nodes

| Protocol | Port | Source | Description |
| --- | --- | --- | --- |
| TCP | 22 | RKE CLI | SSH provisioning of node by RKE |
| TCP | 80 | Load Balancer/Reverse Proxy | HTTP traffic to Rancher UI/API |
| TCP | 443 | Load Balancer/Reverse Proxy | HTTPS traffic to Rancher UI/API |
| TCP | 6443 | Kubernetes API clients | HTTPS traffic to Kubernetes API |

Outbound Rules for Rancher Nodes

Ports for Rancher Server Nodes on RancherD or RKE2

The RancherD (or RKE2) server needs ports 6443 and 9345 to be accessible by other nodes in the cluster.

All nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.

Important: The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.
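
For example, with firewalld you can restrict the VXLAN port to the internal node network instead of exposing it publicly. The following is a sketch that assumes firewalld and a placeholder node CIDR of 10.10.0.0/16:

```bash
# Allow VXLAN traffic only from other cluster nodes (placeholder CIDR)
sudo firewall-cmd --permanent \
  --add-rich-rule='rule family="ipv4" source address="10.10.0.0/16" port port="8472" protocol="udp" accept'
sudo firewall-cmd --reload
```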

Inbound Rules for RancherD or RKE2 Server Nodes

| Protocol | Port | Source | Description |
| --- | --- | --- | --- |
| TCP | 9345 | RancherD/RKE2 agent nodes | Kubernetes API |
| TCP | 6443 | RancherD/RKE2 agent nodes | Kubernetes API |
| UDP | 8472 | RancherD/RKE2 server and agent nodes | Required only for Flannel VXLAN |
| TCP | 10250 | RancherD/RKE2 server and agent nodes | kubelet |
| TCP | 2379 | RancherD/RKE2 server nodes | etcd client port |
| TCP | 2380 | RancherD/RKE2 server nodes | etcd peer port |
| TCP | 30000-32767 | RancherD/RKE2 server and agent nodes | NodePort port range |
| HTTP | 8080 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| HTTPS | 8443 | Hosted/registered Kubernetes; any source that needs to be able to use the Rancher UI or API | Rancher agent, Rancher UI/API, kubectl. Not needed if you have a load balancer doing TLS termination. |

Typically all outbound traffic is allowed.

Ports for Rancher Server in Docker


The following tables break down the port requirements for Rancher nodes, for inbound and outbound traffic:

Inbound Rules for Rancher Node

| Protocol | Port | Source | Description |
| --- | --- | --- | --- |
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| TCP | 443 | Hosted/registered Kubernetes; any source that needs to be able to use the Rancher UI or API | Rancher agent, Rancher UI/API, kubectl |

Outbound Rules for Rancher Node

| Protocol | Port | Destination | Description |
| --- | --- | --- | --- |
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |

Downstream Kubernetes Cluster Nodes

Downstream Kubernetes clusters run your apps and services. This section describes what ports need to be opened on the nodes in downstream clusters so that Rancher can communicate with them.

The port requirements differ depending on how the downstream cluster was launched. Each of the tabs below lists the ports that need to be opened for different cluster types.

The following diagram depicts the ports that are opened for each cluster type.

Port Requirements for the Rancher Management Plane


The following table depicts the port requirements for Rancher Launched Kubernetes clusters with nodes created in an infrastructure provider.

Ports for Rancher Launched Kubernetes Clusters using Custom Nodes


The following table depicts the port requirements for Rancher Launched Kubernetes clusters with custom nodes.

| From / To | Rancher Nodes | etcd Plane Nodes | Control Plane Nodes | Worker Plane Nodes | External Rancher Load Balancer | Internet |
| --- | --- | --- | --- | --- | --- | --- |
| Rancher Nodes (1) | | | | | | git.rancher.io (2) |
| etcd Plane Nodes | 443 TCP (3) | 2379 TCP<br/>2380 TCP<br/>8472 UDP<br/>4789 UDP (6)<br/>9099 TCP (4) | 6443 TCP | | 443 TCP | |
| Control Plane Nodes | 443 TCP (3) | 2379 TCP<br/>2380 TCP | 6443 TCP<br/>8472 UDP<br/>4789 UDP (6)<br/>9099 TCP (4)<br/>10254 TCP (4) | 10250 TCP | 443 TCP | |
| Worker Plane Nodes | 443 TCP (3) | | 6443 TCP | 8472 UDP<br/>4789 UDP (6)<br/>9099 TCP (4)<br/>10254 TCP (4) | 443 TCP | |
| Kubernetes API Clients | | | 6443 TCP (5) | | | |
| Workload Clients or Load Balancer | | | | 30000-32767 TCP/UDP (NodePort)<br/>80 TCP (Ingress)<br/>443 TCP (Ingress) | | |
Notes:

1. Nodes running standalone server or Rancher HA deployment.
2. Required to fetch Rancher chart library.
3. Only without external load balancer in front of Rancher.
4. Local traffic to the node itself (not across nodes).
5. Only if Authorized Cluster Endpoints are activated.
6. Only if using Overlay mode on Windows cluster.

Ports for Hosted Kubernetes Clusters


The following table depicts the port requirements for hosted clusters.

| From / To | Rancher Nodes | Hosted / Imported Cluster | External Rancher Load Balancer | Internet |
| --- | --- | --- | --- | --- |
| Rancher Nodes (1) | | Kubernetes API Endpoint Port (2) | | git.rancher.io (3)<br/>8443 TCP<br/>9443 TCP |
| Hosted / Imported Cluster | 443 TCP (4)(5) | | 443 TCP (5) | |
| Kubernetes API Clients | | Cluster / Provider Specific (6) | | |
| Workload Client | | Cluster / Provider Specific (7) | | |
Notes:

1. Nodes running standalone server or Rancher HA deployment.
2. Only for hosted clusters.
3. Required to fetch Rancher chart library.
4. Only without external load balancer.
5. From worker nodes.
6. For direct access to the Kubernetes API without Rancher.
7. Usually Ingress backed by infrastructure load balancer and/or nodeport.

Ports for Registered Clusters

Note: Registered clusters were called imported clusters before Rancher v2.5.


The following table depicts the port requirements for registered clusters.

| From / To | Rancher Nodes | Hosted / Imported Cluster | External Rancher Load Balancer | Internet |
| --- | --- | --- | --- | --- |
| Rancher Nodes (1) | | Kubernetes API Endpoint Port (2) | | git.rancher.io (3)<br/>8443 TCP<br/>9443 TCP |
| Hosted / Imported Cluster | 443 TCP (4)(5) | | 443 TCP (5) | |
| Kubernetes API Clients | | Cluster / Provider Specific (6) | | |
| Workload Client | | Cluster / Provider Specific (7) | | |
Notes:

1. Nodes running standalone server or Rancher HA deployment.
2. Only for hosted clusters.
3. Required to fetch Rancher chart library.
4. Only without external load balancer.
5. From worker nodes.
6. For direct access to the Kubernetes API without Rancher.
7. Usually Ingress backed by infrastructure load balancer and/or nodeport.

Other Port Considerations

These ports are typically opened on your Kubernetes nodes, regardless of what type of cluster it is.


Local Node Traffic

Ports marked as local traffic (e.g., 9099 TCP) in the above requirements are used for Kubernetes healthchecks (livenessProbe and readinessProbe). These healthchecks are executed on the node itself. In most cloud environments, this local traffic is allowed by default.

However, this traffic may be blocked when:

  • You have applied strict host firewall policies on the node.
  • You are using nodes that have multiple interfaces (multihomed).

In these cases, you have to explicitly allow this traffic in your host firewall, or, in the case of public/private cloud hosted machines (e.g., AWS or OpenStack), in your security group configuration. Keep in mind that when a security group is used as the source or destination in a security group rule, the rule only applies to the private interface of the nodes/instances.
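
To check whether this local traffic actually reaches the node, you can query the health endpoints on the node itself. The following is a rough sketch that assumes Canal and the NGINX ingress controller with their default health ports (the endpoint paths may differ in your setup):

```bash
# Canal/Felix health endpoint on the node itself (port 9099)
curl -sf http://127.0.0.1:9099/liveness && echo "canal ok"

# NGINX ingress controller health endpoint on the node itself (port 10254)
curl -sf http://127.0.0.1:10254/healthz && echo "ingress ok"
```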

Rancher AWS EC2 Security Group

When using the AWS EC2 node driver to provision cluster nodes in Rancher, you can choose to let Rancher create a security group called rancher-nodes. The following rules are automatically added to this security group.

| Type | Protocol | Port Range | Source/Destination | Rule Type |
| --- | --- | --- | --- | --- |
| SSH | TCP | 22 | 0.0.0.0/0 | Inbound |
| HTTP | TCP | 80 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 2376 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 2379-2380 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 4789 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 6443 | 0.0.0.0/0 | Inbound |
| Custom UDP Rule | UDP | 8472 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10250-10252 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10256 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 30000-32767 | 0.0.0.0/0 | Inbound |
| Custom UDP Rule | UDP | 30000-32767 | 0.0.0.0/0 | Inbound |
| All traffic | All | All | 0.0.0.0/0 | Outbound |
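
If you manage the security group yourself instead of letting Rancher create it, equivalent rules can be added with the AWS CLI. The following is a brief sketch for two of the rules above; the group ID sg-0123456789abcdef0 is a placeholder:

```bash
# HTTPS from anywhere (Rancher agents, UI/API)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# etcd traffic allowed only from nodes in the same security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 2379-2380 \
  --source-group sg-0123456789abcdef0
```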

Opening SUSE Linux Ports

SUSE Linux may have a firewall that blocks all ports by default. To open the ports needed for adding the host to a custom cluster:

  1. SSH into the instance.
  2. Start YaST in text mode (for example, by running sudo yast):

  3. Navigate to Security and Users > Firewall > Zones:public > Ports. To navigate within the text-mode interface, use Tab and the arrow keys.

  4. To open the required ports, enter them into the TCP Ports and UDP Ports fields. In this example, ports 9796 and 10250 are also opened for monitoring. The resulting fields should look similar to the following:

    • TCP Ports: 22, 80, 443, 2376, 2379, 2380, 6443, 9099, 9796, 10250, 10254, 30000-32767
    • UDP Ports: 8472, 30000-32767
  5. When all required ports are entered, select Accept.

  6. Alternatively, on SUSE Linux releases that still use SuSEfirewall2 instead of firewalld, SSH into the instance.

  7. Edit /etc/sysconfig/SuSEfirewall2 and open the required ports. In this example, ports 9796 and 10250 are also opened for monitoring:

     FW_SERVICES_EXT_TCP="22 80 443 2376 2379 2380 6443 9099 9796 10250 10254 30000:32767"
     FW_SERVICES_EXT_UDP="8472 30000:32767"
     FW_ROUTE=yes
  8. Restart the firewall with the new ports by running SuSEfirewall2.

Result: The node has the open ports required to be added to a custom cluster.
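
To confirm that the firewall changes took effect, you can list the ports that are now open on the node. The following is a quick check, assuming the node uses firewalld (on SuSEfirewall2-based nodes, inspect /etc/sysconfig/SuSEfirewall2 instead):

```bash
# List the ports currently opened in the default firewalld zone
sudo firewall-cmd --list-ports
```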