Deploy TiDB Operator in Kubernetes

    Before deploying TiDB Operator, make sure the following requirements are met:

    TiDB Operator runs in the Kubernetes cluster. Refer to the Kubernetes setup documentation to set up a Kubernetes cluster, and make sure that the Kubernetes version is v1.12 or higher. If you want to deploy a very simple Kubernetes cluster for testing purposes, consult the corresponding documentation.

    For public cloud environments, refer to the corresponding cloud-specific deployment documents.

    TiDB Operator uses persistent volumes to persist the data of the TiDB cluster (including the database, monitoring data, and backup data), so the Kubernetes cluster must provide at least one kind of persistent volume.
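
    For example, you can check which storage classes (and therefore which kinds of persistent volumes) the cluster can provision:

    kubectl get storageclass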

    It is recommended to enable RBAC in the Kubernetes cluster.

    Install Helm and configure it with the official PingCAP chart repository.
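
    For example, with Helm 3 installed, the PingCAP chart repository can be added like this (the repository URL matches the chart download address used in the offline installation below):

    helm repo add pingcap https://charts.pingcap.org/
    helm repo update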

    TiDB Operator uses Custom Resource Definition (CRD) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the CRD, which is a one-time job in your Kubernetes cluster.
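
    If the server can access the Internet, you can create the CRDs directly from the online manifest, for example (the URL is the same one used in the offline steps below):

    kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.3.2/manifests/crd.yaml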

    If the server cannot access the Internet, download the crd.yaml file on a machine with Internet access, copy it to the server, and then install it:

    wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.3.2/manifests/crd.yaml
    kubectl create -f ./crd.yaml

    Note

    For Kubernetes versions earlier than v1.16, only the v1beta1 CRD is supported, so you need to change crd.yaml in the commands above to crd_v1beta1.yaml.

    To verify that the CRDs are created, run the following command. If the output is similar to the example below, the installation is successful:

    kubectl get crd

    NAME                                 CREATED AT
    backups.pingcap.com                  2020-06-11T07:59:40Z
    backupschedules.pingcap.com          2020-06-11T07:59:41Z
    tidbclusterautoscalers.pingcap.com   2020-06-11T07:59:42Z
    tidbclusters.pingcap.com             2020-06-11T07:59:38Z
    tidbinitializers.pingcap.com         2020-06-11T07:59:42Z
    tidbmonitors.pingcap.com             2020-06-11T07:59:41Z

    To deploy TiDB Operator quickly, you can refer to the quick start guide. This section describes how to customize the deployment of TiDB Operator.

    After creating CRDs in the step above, there are two methods to deploy TiDB Operator on your Kubernetes cluster: online and offline.

    When you use TiDB Operator, tidb-scheduler is not mandatory. Refer to tidb-scheduler and default-scheduler to confirm whether you need to deploy tidb-scheduler. If you do not need tidb-scheduler, you can configure scheduler.create: false in the values.yaml file, so tidb-scheduler is not deployed.
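
    For example, a minimal values.yaml fragment that skips deploying tidb-scheduler might look like this (a sketch based on the scheduler.create switch described above):

    scheduler:
      create: false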

    Online deployment

    1. Get the values.yaml file of the tidb-operator chart you want to deploy:

      mkdir -p ${HOME}/tidb-operator && \
      helm inspect values pingcap/tidb-operator --version=${chart_version} > ${HOME}/tidb-operator/values-tidb-operator.yaml

      Note

      ${chart_version} represents the chart version of TiDB Operator, for example, v1.3.2.

    2. Configure TiDB Operator

      TiDB Operator manages all TiDB clusters in the Kubernetes cluster by default. If you only need it to manage clusters in a specific namespace, you can set clusterScoped: false in values.yaml.

      Note

      After setting clusterScoped: false, TiDB Operator will still operate Nodes, Persistent Volumes, and Storage Classes in the Kubernetes cluster by default. If the role that deploys TiDB Operator does not have the permissions to operate these resources, you can set the corresponding permission requests under controllerManager.clusterPermissions to false to disable TiDB Operator's operations on these resources (see the example after this step).

      You can modify other items such as limits, requests, and replicas as needed.
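
      As a sketch, the namespace-scoped configuration described above might look like the following in values.yaml; the exact keys under controllerManager.clusterPermissions (nodes, persistentVolumes, storageClasses) are assumptions derived from the resource types listed in the note:

      clusterScoped: false
      controllerManager:
        clusterPermissions:
          # assumed keys; each disables the cluster-wide permission request for that resource
          nodes: false
          persistentVolumes: false
          storageClasses: false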

    3. Deploy TiDB Operator
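
      A typical install command, reusing the chart version and the customized values file from step 1, looks like the following sketch (adjust flags to your environment):

      helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=${chart_version} -f ${HOME}/tidb-operator/values-tidb-operator.yaml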

      Note

      If the corresponding tidb-admin namespace does not exist, you can create the namespace first by running the kubectl create namespace tidb-admin command.

    4. Upgrade TiDB Operator

      If you need to upgrade the TiDB Operator, modify the ${HOME}/tidb-operator/values-tidb-operator.yaml file, and then execute the following command to upgrade:

      helm upgrade tidb-operator pingcap/tidb-operator --namespace=tidb-admin -f ${HOME}/tidb-operator/values-tidb-operator.yaml

    Offline installation

    If your server cannot access the Internet, install TiDB Operator offline by the following steps:

    1. Download the tidb-operator chart

      If the server has no access to the Internet, you cannot configure the Helm repository to install the TiDB Operator component and other applications. In this case, download the chart file needed for cluster installation on a machine with Internet access, and then copy it to the target server.

      Use the following command to download the tidb-operator chart file:

      wget http://charts.pingcap.org/tidb-operator-v1.3.2.tgz

      Copy the tidb-operator-v1.3.2.tgz file to the target server and extract it to the current directory:

      tar zxvf tidb-operator-v1.3.2.tgz
    2. Download the Docker images used by TiDB Operator

      If the server has no access to the Internet, you need to download all Docker images used by TiDB Operator on a machine with Internet access, upload them to the server, and then use docker load to install the Docker images on the server.

      pingcap/tidb-operator:v1.3.2
      pingcap/tidb-backup-manager:v1.3.2
      bitnami/kubectl:latest
      pingcap/advanced-statefulset:v0.3.3
      k8s.gcr.io/kube-scheduler:v1.16.9

      Among them, k8s.gcr.io/kube-scheduler:v1.16.9 must be consistent with the kube-scheduler version of your Kubernetes cluster; you do not need to download it separately.

      Next, download all these images on the machine with Internet access using the following commands:
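
      The following is a sketch; the .tar file names are chosen to match the docker load commands below (the advanced-statefulset file name is an assumption):

      docker pull pingcap/tidb-operator:v1.3.2
      docker pull pingcap/tidb-backup-manager:v1.3.2
      docker pull bitnami/kubectl:latest
      docker pull pingcap/advanced-statefulset:v0.3.3

      docker save -o tidb-operator-v1.3.2.tar pingcap/tidb-operator:v1.3.2
      docker save -o tidb-backup-manager-v1.3.2.tar pingcap/tidb-backup-manager:v1.3.2
      docker save -o bitnami-kubectl.tar bitnami/kubectl:latest
      docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3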

      Next, upload these Docker images to the server, and execute docker load to install these Docker images on the server:

      docker load -i tidb-operator-v1.3.2.tar
      docker load -i tidb-backup-manager-v1.3.2.tar
      docker load -i bitnami-kubectl.tar
      docker load -i advanced-statefulset-v0.3.3.tar
    3. Configure TiDB Operator

      TiDB Operator embeds a kube-scheduler to implement a custom scheduler. If you need to deploy tidb-scheduler, modify the ./tidb-operator/values.yaml file to configure the Docker image’s name and version of this built-in kube-scheduler component. For example, if kube-scheduler in your Kubernetes cluster uses the image k8s.gcr.io/kube-scheduler:v1.16.9, set ./tidb-operator/values.yaml as follows:

      scheduler:
        serviceAccount: tidb-scheduler
        logLevel: 2
        replicas: 1
        schedulerName: tidb-scheduler
        resources:
          limits:
            cpu: 250m
            memory: 150Mi
          requests:
            cpu: 80m
            memory: 50Mi
        kubeSchedulerImageName: k8s.gcr.io/kube-scheduler
        kubeSchedulerImageTag: v1.16.9
      ...

      You can modify other items such as limits, requests, and replicas as needed.

    4. Install TiDB Operator

      Install TiDB Operator using the following command:

      helm install tidb-operator ./tidb-operator --namespace=tidb-admin

      Note

      If the corresponding tidb-admin namespace does not exist, you can create the namespace first by running the kubectl create namespace tidb-admin command.

    5. Upgrade TiDB Operator

      If you need to upgrade TiDB Operator, modify the ./tidb-operator/values.yaml file, and then execute the following command to upgrade:

      helm upgrade tidb-operator ./tidb-operator --namespace=tidb-admin

    To customize TiDB Operator, modify ${HOME}/tidb-operator/values-tidb-operator.yaml. The remaining sections of this document use values.yaml to refer to ${HOME}/tidb-operator/values-tidb-operator.yaml.

    TiDB Operator contains two components:

    • tidb-controller-manager
    • tidb-scheduler

    These two components are stateless and are deployed via Deployment. You can customize the resource limits, requests, and replicas in the values.yaml file.
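
    For example, the tidb-controller-manager resources and replica count might be adjusted with a fragment like the following; the key layout mirrors the scheduler example above, and the specific values are illustrative assumptions:

    controllerManager:
      replicas: 1
      resources:
        requests:
          cpu: 80m
          memory: 50Mi
        limits:
          cpu: 250m
          memory: 150Mi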

    After modifying values.yaml, run the following command to apply the modification:
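
    For the online deployment, this is the same helm upgrade command shown in the upgrade step above, for example:

    helm upgrade tidb-operator pingcap/tidb-operator --namespace=tidb-admin -f ${HOME}/tidb-operator/values-tidb-operator.yaml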