1. Requirements
- SSH user - The SSH user used to access the nodes must be a member of the docker group:
See the Docker documentation on managing Docker as a non-root user for how to set up access to Docker without using the root user.
The following kernel modules must be loaded. You can check whether a module is loaded with any of the following methods (a combined check is sketched after the module list below):
modprobe module_name
lsmod | grep module_name
grep module_name /lib/modules/$(uname -r)/modules.builtin (if it is a built-in module)

Module name
---
br_netfilter
ip6_udp_tunnel
ip_set
ip_set_hash_ip
ip_set_hash_net
iptable_filter
iptable_nat
iptable_mangle
iptable_raw
nf_conntrack_netlink
nf_conntrack
nf_conntrack_ipv4
nf_defrag_ipv4
nf_nat
nf_nat_ipv4
nf_nat_masquerade_ipv4
nfnetlink
udp_tunnel
veth
vxlan
x_tables
xt_addrtype
xt_conntrack
xt_comment
xt_mark
xt_multiport
xt_nat
xt_recent
xt_set
xt_statistic
xt_tcpudp
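A minimal shell sketch that checks every module in the list above and reports any that are neither loaded nor built into the kernel:

for module in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net \
    iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink \
    nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 \
    nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype \
    xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set \
    xt_statistic xt_tcpudp; do
  # Skip modules that are compiled into the kernel
  if grep -q "/${module}.ko" /lib/modules/$(uname -r)/modules.builtin; then
    continue
  fi
  # Report modules that are not currently loaded
  if ! lsmod | grep -q "^${module} "; then
    echo "module ${module} is not loaded"
  fi
done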
The following sysctl settings must be applied:
net.bridge.bridge-nf-call-iptables=1
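A sketch of applying the setting immediately and persisting it across reboots, assuming a distribution that reads /etc/sysctl.d at boot (the file name 90-rke.conf is arbitrary):

# Apply immediately
sysctl -w net.bridge.bridge-nf-call-iptables=1
# Persist across reboots
echo "net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.d/90-rke.conf
sysctl --system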
2. Red Hat Enterprise Linux (RHEL) / Oracle Enterprise Linux (OEL) / CentOS
If you are using Red Hat Enterprise Linux, Oracle Enterprise Linux, or CentOS, the root user cannot be used as the SSH user. Follow the instructions below to set up Docker correctly, based on how you installed Docker on the node.
- Using docker-ce
To check whether docker-ce or docker-ee is installed, query the installed packages with:
rpm -q docker-ce
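With docker-ce, the Docker socket is owned by the docker group, so a sketch of granting the (non-root) SSH user access, assuming the default group name:

usermod -aG docker <user_name>

Log out and back in afterwards for the group membership to take effect.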
- Using RHEL/CentOS maintained Docker
To check whether this package is installed:
rpm -q docker
If you are using the Docker package supplied by Red Hat/CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json to include the following:
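A minimal sketch of the expected content, assuming the goal is the socket group ownership shown below; the group option tells the Docker daemon which group should own /var/run/docker.sock:

{
    "group": "dockerroot"
}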
Restart Docker after editing or creating the file (on systemd-based systems: systemctl restart docker). After the restart, check the group permissions of the Docker socket with ls -l /var/run/docker.sock; the group should show as dockerroot:
srw-rw----. 1 root dockerroot 0 Jul 4 09:57 /var/run/docker.sock
Add the SSH user you want to use to this group; this user cannot be the root user:
usermod -aG dockerroot <user_name>
To verify that the user is configured correctly, log out of the node, log back in with the SSH user, and run docker ps:
ssh <user_name>@node
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3. Red Hat Atomic
Before trying to use RKE with Red Hat Atomic nodes, a couple of updates to the OS need to be made to get RKE working.
- OpenSSH version
By default, Atomic installs OpenSSH 6.4, which does not support SSH tunneling, a core RKE requirement; OpenSSH needs to be upgraded.
- Creating the Docker group
You can either follow the Docker installation instructions or use one of Rancher's install scripts to install Docker, as sketched below. For RHEL, refer to the linked instructions.
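As an illustration, a sketch using one of Rancher's install scripts for the Docker version confirmed below (the URL follows Rancher's install-docker release pattern; pick the script matching your target version):

curl https://releases.rancher.com/install-docker/17.03.sh | sh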
Confirm the installed Docker version:
docker version --format '{{.Server.Version}}'
17.03.2-ce
- OpenSSH 7.0+ - OpenSSH must be installed on each node.
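To check the OpenSSH version installed on a node (client and server ship in the same OpenSSH release):

ssh -V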
RKE node: Node that runs the rke commands
RKE node - Outbound rules
Protocol | Port | Source | Destination | Description
---|---|---|---|---
TCP | 22 | RKE node | Any node configured in the Cluster Configuration File | SSH provisioning of node by RKE
TCP | 6443 | RKE node | controlplane nodes | Kubernetes apiserver
etcd nodes: Nodes with the role etcd
etcd nodes - Outbound rules
Protocol | Port | Destination | Description
---|---|---|---
TCP | 443 | Rancher nodes | Rancher agent
TCP | 2379 | etcd nodes | etcd client requests
TCP | 2380 | etcd nodes | etcd peer communication
TCP | 6443 | controlplane nodes | Kubernetes apiserver
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | etcd node itself (local traffic, not across nodes; see Information on local node traffic below) | Canal/Flannel livenessProbe/readinessProbe
controlplane nodes: Nodes with the role controlplane
controlplane nodes - Outbound rules
Protocol | Port | Destination | Description
---|---|---|---
TCP | 443 | Rancher nodes | Rancher agent
TCP | 2379 | etcd nodes | etcd client requests
TCP | 2380 | etcd nodes | etcd peer communication
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | controlplane node itself (local traffic, not across nodes; see Information on local node traffic below) | Canal/Flannel livenessProbe/readinessProbe
TCP | 10250 | etcd nodes, controlplane nodes, worker nodes | kubelet
TCP | 10254 | controlplane node itself (local traffic, not across nodes; see Information on local node traffic below) | Ingress controller livenessProbe/readinessProbe
worker nodes: Nodes with the role worker
worker nodes - Outbound rules
Protocol | Port | Destination | Description
---|---|---|---
TCP | 443 | Rancher nodes | Rancher agent
TCP | 6443 | controlplane nodes | Kubernetes apiserver
UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking
TCP | 9099 | worker node itself (local traffic, not across nodes; see Information on local node traffic below) | Canal/Flannel livenessProbe/readinessProbe
TCP | 10254 | worker node itself (local traffic, not across nodes; see Information on local node traffic below) | Ingress controller livenessProbe/readinessProbe
Information on local node traffic
The health-check ports referenced in the tables above (TCP/9099 and TCP/10254) are only used for probes addressed to the node itself; this traffic does not cross nodes.
If you are using an external firewall, make sure port TCP/6443 is open between the machine you are using to run rke and the nodes that you are going to use in the cluster.
Opening port TCP/6443 using iptables
# Open TCP/6443 for all
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
# Open TCP/6443 for one specific IP
iptables -A INPUT -p tcp -s your_ip_here --dport 6443 -j ACCEPT
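Rules added with iptables -A do not survive a reboot. A minimal sketch of persisting them on RHEL/CentOS, assuming the iptables-services package is installed and manages /etc/sysconfig/iptables:

iptables-save > /etc/sysconfig/iptables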
Opening port TCP/6443 using firewalld
# Open TCP/6443 for all
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --reload
# Open TCP/6443 for one specific IP
firewall-cmd --permanent --zone=public --add-rich-rule='
  rule family="ipv4"
  source address="your_ip_here/32"
  port protocol="tcp" port="6443" accept'
firewall-cmd --reload
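To verify the result, a sketch assuming the public zone (rich rules are listed separately from plain ports):

firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-rich-rules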