1 - Requirements


    • SSH user - The SSH user used to access the nodes must be a member of the docker group:

    See the Docker documentation on managing Docker as a non-root user to learn how to configure access to Docker without using the root user.
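
    As a sketch, adding the SSH user to the docker group typically looks like the following (<user_name> is a placeholder; this mirrors the dockerroot variant shown later in this section):

    # Add the SSH user to the docker group; log out and back in for it to take effect
    usermod -aG docker <user_name>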

    • The following kernel modules must be loaded. Whether a module is present can be checked with any of the methods below (a combined check script follows the list):

      • modprobe module_name
      • lsmod | grep module_name
      • grep module_name /lib/modules/$(uname -r)/modules.builtin, if it is a built-in module

    The required modules are: br_netfilter, ip6_udp_tunnel, ip_set, ip_set_hash_ip, ip_set_hash_net, iptable_filter, iptable_nat, iptable_mangle, iptable_raw, nf_conntrack_netlink, nf_conntrack, nf_conntrack_ipv4, nf_defrag_ipv4, nf_nat, nf_nat_ipv4, nf_nat_masquerade_ipv4, nfnetlink, udp_tunnel, veth, vxlan, x_tables, xt_addrtype, xt_conntrack, xt_comment, xt_mark, xt_multiport, xt_nat, xt_recent, xt_set, xt_statistic, xt_tcpudp
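
    The combined check mentioned above might look like this sketch; it treats a module as present if lsmod shows it loaded or modules.builtin shows it compiled into the kernel:

    for module in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net \
        iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink \
        nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 \
        nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype \
        xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set \
        xt_statistic xt_tcpudp; do
      # Present either as a loaded module or built into the running kernel
      if lsmod | grep -q "^${module} " || \
         grep -q "/${module}.ko" "/lib/modules/$(uname -r)/modules.builtin"; then
        echo "ok:      ${module}"
      else
        echo "missing: ${module}"
      fi
    done
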
    • The following sysctl settings must be applied:

    net.bridge.bridge-nf-call-iptables=1
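
    As a sketch, the setting can be applied immediately with sysctl -w and persisted under /etc/sysctl.d/ (the file name 90-rke.conf below is an arbitrary choice):

    # Apply immediately
    sysctl -w net.bridge.bridge-nf-call-iptables=1
    # Persist across reboots
    echo "net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.d/90-rke.conf
    sysctl --system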

    2. Red Hat Enterprise Linux (RHEL) / Oracle Enterprise Linux (OEL) / CentOS

    If you are using Red Hat Enterprise Linux, Oracle Enterprise Linux, or CentOS, you cannot use the root user as the SSH user. Follow the instructions below to set up Docker correctly, according to how you installed Docker on your nodes.

    • Using docker-ce

    To check whether docker-ce or docker-ee is installed, query the installed packages with the following command:

    rpm -q docker-ce
    • Using RHEL/CentOS-maintained Docker
    rpm -q docker

    If you are using the Docker package supplied by Red Hat/CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json to include the following:
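
    Based on the dockerroot socket group described in the next paragraph, a minimal /etc/docker/daemon.json would contain:

    {
        "group": "dockerroot"
    }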

    Restart Docker after editing or creating the file. Once Docker has restarted, check the group permissions of the Docker socket (/var/run/docker.sock); the group should show as dockerroot:

    ls -l /var/run/docker.sock
    srw-rw----. 1 root dockerroot 0 Jul 4 09:57 /var/run/docker.sock

    Add the SSH user you want to use to this group; this user must not be the root user.

    usermod -aG dockerroot <user_name>

    To verify that the user is configured correctly, log out of the node, log back in with the SSH user, and run docker ps:

    ssh <user_name>@node
    docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

    3. Red Hat Atomic

    Before attempting to use RKE with Red Hat Atomic nodes, the operating system needs a few updates for RKE to work properly.

    • OpenSSH version

    By default, Atomic installs OpenSSH 6.4, which does not support SSH tunneling, a core RKE requirement; openssh must be upgraded.
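
    To see which OpenSSH version a node is running, the client prints its version with ssh -V; for example:

    # Prints something like OpenSSH_6.4p1; 7.0 or later is required
    ssh -V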

    • Creating the Docker group

    You can follow the installation instructions, or install Docker using one of Rancher's install scripts. For RHEL, refer to the corresponding instructions.

    Confirm the installed Docker version:

    docker version --format '{{.Server.Version}}'
    17.03.2-ce
    • OpenSSH 7.0+ - OpenSSH must be installed on each node.

    RKE node: node that runs the rke commands

    RKE node - Outbound rules

    Protocol | Port | Source   | Destination                                       | Description
    TCP      | 22   | RKE node | Any node configured in Cluster Configuration File | SSH provisioning of node by RKE
    TCP      | 6443 | RKE node | controlplane nodes                                | Kubernetes apiserver
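
    A quick reachability check for the rules above can be done with a TCP port probe; the sketch below assumes nc (netcat) is installed and the host names are placeholders:

    # From the RKE node: SSH provisioning port on a cluster node
    nc -zv <cluster_node> 22
    # From the RKE node: Kubernetes apiserver port on a controlplane node
    nc -zv <controlplane_node> 6443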

    etcd nodes: nodes with the role etcd

    etcd nodes - Outbound rules

    Protocol | Port | Destination                                                                | Description
    TCP      | 443  | Rancher nodes                                                              | Rancher agent
    TCP      | 2379 | etcd nodes                                                                 | etcd client requests
    TCP      | 2380 | etcd nodes                                                                 | etcd peer communication
    TCP      | 6443 | controlplane nodes                                                         | Kubernetes apiserver
    UDP      | 8472 | etcd nodes, controlplane nodes, worker nodes                               | Canal/Flannel VXLAN overlay networking
    TCP      | 9099 | etcd node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe

    controlplane nodes: nodes with the role controlplane

    controlplane nodes - Inbound rules

    controlplane nodes - Outbound rules

    Protocol | Port  | Destination                                                                         | Description
    TCP      | 443   | Rancher nodes                                                                       | Rancher agent
    TCP      | 2379  | etcd nodes                                                                          | etcd client requests
    TCP      | 2380  | etcd nodes                                                                          | etcd peer communication
    UDP      | 8472  | etcd nodes, controlplane nodes, worker nodes                                        | Canal/Flannel VXLAN overlay networking
    TCP      | 9099  | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
    TCP      | 10250 | etcd nodes, controlplane nodes, worker nodes                                        | kubelet
    TCP      | 10254 | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe

    worker nodes: nodes with the role worker

    worker nodes - Outbound rules

    Protocol | Port  | Destination                                                                   | Description
    TCP      | 443   | Rancher nodes                                                                 | Rancher agent
    TCP      | 6443  | controlplane nodes                                                            | Kubernetes apiserver
    UDP      | 8472  | etcd nodes, controlplane nodes, worker nodes                                  | Canal/Flannel VXLAN overlay networking
    TCP      | 9099  | worker node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
    TCP      | 10254 | worker node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe

    Information on local node traffic

    If you are using an external firewall, make sure port TCP/6443 is open between the machine you are using to run rke and the nodes that you are going to use in the cluster.

    Opening port TCP/6443 with iptables:

    # Open TCP/6443 for all
    iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
    # Open TCP/6443 for one specific IP
    iptables -A INPUT -p tcp -s your_ip_here --dport 6443 -j ACCEPT
    Opening port TCP/6443 with firewall-cmd (firewalld):

    # Open TCP/6443 for all
    firewall-cmd --zone=public --add-port=6443/tcp --permanent
    firewall-cmd --reload
    # Open TCP/6443 for one specific IP
    firewall-cmd --permanent --zone=public --add-rich-rule='
      rule family="ipv4"
      source address="your_ip_here/32"
      port protocol="tcp" port="6443" accept'
    firewall-cmd --reload