Prerequisites for TiDB in Kubernetes

    Configure the firewall

    It is recommended that you disable the firewall.

    If you cannot stop the firewalld service, to ensure the normal operation of Kubernetes, take the following steps:

    1. Enable the following ports on the master, and then restart the service:

      firewall-cmd --permanent --add-port=6443/tcp
      firewall-cmd --permanent --add-port=2379-2380/tcp
      firewall-cmd --permanent --add-port=10250/tcp
      firewall-cmd --permanent --add-port=10251/tcp
      firewall-cmd --permanent --add-port=10252/tcp
      firewall-cmd --permanent --add-port=10255/tcp
      firewall-cmd --permanent --add-port=8472/udp
      firewall-cmd --add-masquerade --permanent
      # Set it when you need to expose NodePort on the master node.
      firewall-cmd --permanent --add-port=30000-32767/tcp
      systemctl restart firewalld
    2. Enable the following ports on the nodes, and then restart the service:

      firewall-cmd --permanent --add-port=10255/tcp
      firewall-cmd --permanent --add-port=8472/udp
      firewall-cmd --permanent --add-port=30000-32767/tcp
      firewall-cmd --add-masquerade --permanent
      systemctl restart firewalld
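
    To confirm that the rules took effect, you can list the opened ports and the masquerade state. This is a quick sanity check, and the exact output depends on your zone configuration:

```shell
# List the ports opened in the default zone; the ports added
# above (for example 10255/tcp and 8472/udp) should appear here.
firewall-cmd --list-ports
# Confirm that masquerading is enabled; "yes" is expected.
firewall-cmd --query-masquerade
```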

    Configure iptables

    Set the FORWARD chain to ACCEPT, and add the setting to your startup script so that it persists across reboots:

    iptables -P FORWARD ACCEPT

    Disable SELinux

    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

    Turn off swap

    To make kubelet work properly, turn off swap and comment out the swap-related line in the /etc/fstab file.
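
    A typical way to do this is the following sketch; adjust the sed pattern to match the swap entries in your own /etc/fstab:

```shell
# Turn off all active swap devices immediately.
swapoff -a
# Comment out swap entries in /etc/fstab so swap stays off after reboot.
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# Verify: the Swap totals should all be 0.
free -h
```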

    Configure kernel parameters

    Configure the kernel parameters as follows. You can also adjust them according to your environment:

    modprobe br_netfilter

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.core.somaxconn = 32768
    vm.swappiness = 0
    net.ipv4.tcp_syncookies = 0
    net.ipv4.ip_forward = 1
    fs.file-max = 1000000
    fs.inotify.max_user_watches = 1048576
    fs.inotify.max_user_instances = 1024
    net.ipv4.conf.all.rp_filter = 1
    net.ipv4.neigh.default.gc_thresh1 = 80000
    net.ipv4.neigh.default.gc_thresh2 = 90000
    net.ipv4.neigh.default.gc_thresh3 = 100000
    EOF

    sysctl --system
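
    After reloading, you can spot-check a few of the values to confirm the configuration was picked up:

```shell
# Each command prints the current value, which should match k8s.conf.
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward
sysctl net.core.somaxconn
```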

    Configure the irqbalance service

    The irqbalance service distributes the interrupts of each device across different CPUs. This avoids the performance bottleneck caused by sending all interrupt requests to the same CPU.

    systemctl enable irqbalance
    systemctl start irqbalance

    Configure the CPUfreq governor mode

    Set the governor to performance mode to make full use of the CPU:

    cpupower frequency-set --governor performance
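
    You can verify the active policy afterwards; on systems without the cpupower tool, the sysfs interface shows the same information:

```shell
# Show the current cpufreq policy (requires cpupower).
cpupower frequency-info --policy
# Alternatively, read the governor of CPU0 directly from sysfs;
# "performance" is expected after the command above.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```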

    Configure ulimit

    The TiDB cluster uses many file descriptors by default. The ulimit of the worker nodes must be greater than or equal to 1048576:

    cat <<EOF >> /etc/security/limits.conf
    root soft nofile 1048576
    root hard nofile 1048576
    root soft stack 10240
    EOF

    Docker service

    It is recommended to install Docker CE 18.09.6 or a later version. See Install Docker for details.

    After the installation, take the following steps:

    1. Save the Docker data to a separate disk. The data mainly contains images and the container logs. To implement this, set the data-root parameter in the daemon configuration and restart Docker:

      cat > /etc/docker/daemon.json <<EOF
      {
        "data-root": "/data1/docker"
      }
      EOF
      systemctl restart docker

      The above commands set the data directory of Docker to /data1/docker.

    2. Set ulimit for the Docker daemon:

      1. Create the systemd drop-in directory for the docker service:

        mkdir -p /etc/systemd/system/docker.service.d
      2. Create a file named /etc/systemd/system/docker.service.d/limit-nofile.conf, and configure the value of the LimitNOFILE parameter. The value must be a number equal to or greater than 1048576.

        cat > /etc/systemd/system/docker.service.d/limit-nofile.conf <<EOF
        [Service]
        LimitNOFILE=1048576
        EOF

        DO NOT set the value of LimitNOFILE to infinity. Due to a bug in systemd, infinity is interpreted as 65536 in some systemd versions.

      3. Reload the configuration.

        systemctl daemon-reload && systemctl restart docker

    Kubernetes service

    To deploy a multi-master, highly available cluster, see Kubernetes documentation.

    The required configuration of the Kubernetes master depends on the number of nodes in the cluster: the more nodes, the more resources the master consumes. You can adjust the configuration as the cluster grows.

    Nodes in a Kubernetes cluster    Kubernetes master configuration
    1-5                              1 vCPU, 4 GB memory
    6-10                             2 vCPUs, 8 GB memory
    11-100                           4 vCPUs, 16 GB memory
    101-250                          8 vCPUs, 32 GB memory
    251-500                          16 vCPUs, 64 GB memory
    501-5000                         32 vCPUs, 128 GB memory

    After kubelet is installed, take the following steps:

    1. Save the kubelet data to a separate disk (it can share the same disk with Docker). The data mainly contains the data used by emptyDir volumes. To implement this, set the --root-dir parameter:

      echo "KUBELET_EXTRA_ARGS=--root-dir=/data1/kubelet" > /etc/sysconfig/kubelet
      systemctl restart kubelet

      The above commands set the data directory of kubelet to /data1/kubelet.

    2. Reserve compute resources via kubelet, to ensure that the operating system processes and the Kubernetes system processes have enough resources to operate under heavy workloads. This maintains the stability of the entire node.
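
      One way to do this is with the --system-reserved and --kube-reserved kubelet flags. The flag names are standard kubelet options, but the amounts below are placeholder values for illustration, not recommendations from this document:

```shell
# Placeholder amounts: tune cpu/memory to your node size.
# KUBELET_EXTRA_ARGS is a single variable, so combine these flags with
# any flags (such as --root-dir) set in the previous step on one line.
echo "KUBELET_EXTRA_ARGS=--root-dir=/data1/kubelet --system-reserved=cpu=500m,memory=1Gi --kube-reserved=cpu=500m,memory=1Gi" > /etc/sysconfig/kubelet
systemctl restart kubelet
```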

    TiDB cluster’s requirements for resources

    In a production environment, avoid deploying TiDB instances on the Kubernetes master, or deploy as few TiDB instances as possible. Because of NIC bandwidth limits, if the NIC of the master node works at full capacity, the heartbeat reports between the worker nodes and the master node are affected, which might lead to serious problems.