Get Started with TiDB Operator in Kubernetes

    Warning

    This document is for demonstration purposes only. Do not follow it in production environments. For deployment in production environments, see the documents listed in the See also section.

    You can follow these steps to deploy TiDB Operator and a TiDB cluster:

    1. Create a test Kubernetes cluster
    2. Deploy TiDB Operator
    3. Deploy a TiDB cluster and its monitoring services
    4. Connect to TiDB
    5. Upgrade the TiDB cluster
    6. Destroy the TiDB cluster and the Kubernetes cluster

    Step 1. Create a test Kubernetes cluster

    This section describes two ways to create a simple Kubernetes cluster that you can use to test TiDB clusters managed by TiDB Operator. Choose whichever best matches your environment.

    • Use kind to deploy a Kubernetes cluster in Docker. This is the common and recommended way.
    • Use minikube to deploy a Kubernetes cluster running locally in a VM.

    Alternatively, you can deploy a Kubernetes cluster in Google Kubernetes Engine on Google Cloud Platform by following the corresponding tutorial.

    Create a Kubernetes cluster using kind

    This section shows how to deploy a Kubernetes cluster using kind.

    kind is a popular tool for running local Kubernetes clusters using Docker containers as cluster nodes. For the available cluster-node image tags, see the kindest/node repository on Docker Hub. The latest version of kind is used by default.

    Before deployment, make sure the following requirements are satisfied:

    • Docker: version >= 17.03
    • kubectl: version >= 1.12
    • kind: version >= 0.8.0
    • For Linux, the value of the sysctl parameter net.ipv4.ip_forward should be set to 1.
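    On Linux, you can check the current value before creating the cluster. This is a minimal sketch; the parameter name net.ipv4.ip_forward is the usual requirement for kind, so verify it against the documentation for your kind version:

```shell
# Print the current IPv4 forwarding setting (Linux only); it must be 1 for kind.
# If it prints 0, enable it with: sudo sysctl -w net.ipv4.ip_forward=1
cat /proc/sys/net/ipv4/ip_forward
```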

    The following is an example of using kind v0.8.1 to create a cluster:

    kind create cluster

    Expected output

    Creating cluster "kind" ...
    Ensuring node image (kindest/node:v1.18.2) 🖼
    Preparing nodes 📦
    Writing configuration 📜
    Starting control-plane 🕹️
    Installing CNI 🔌
    Installing StorageClass 💾
    Set kubectl context to "kind-kind"
    You can now use your cluster with:

    kubectl cluster-info --context kind-kind

    Thanks for using kind! 😊

    Check whether the cluster is successfully created:

    kubectl cluster-info

    Expected output

    Kubernetes master is running at https://127.0.0.1:51026
    KubeDNS is running at https://127.0.0.1:51026/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    You are now ready to deploy TiDB Operator.

    Create a Kubernetes cluster using minikube

    You can create a Kubernetes cluster in a VM using minikube, which supports macOS, Linux, and Windows.

    Before deployment, make sure the following requirements are satisfied:

    • minikube: version 1.0.0 or later. A newer version such as v1.24 is recommended. minikube requires a compatible hypervisor. For details, refer to the minikube installation instructions.
    • kubectl: version >= 1.12

    Start a minikube Kubernetes cluster

    After minikube is installed, run the following command to start a minikube Kubernetes cluster:

    minikube start

    Expected output

    You should see output like the following, with some differences depending on your OS and hypervisor:

    😄 minikube v1.24.0 on Darwin 12.1
    Automatically selected the docker driver. Other choices: hyperkit, virtualbox, ssh
    👍 Starting control plane node minikube in cluster minikube
    🚜 Pulling base image ...
    💾 Downloading Kubernetes v1.22.3 preload ...
    > gcr.io/k8s-minikube/kicbase: 355.78 MiB / 355.78 MiB 100.00% 4.46 MiB p/
    > preloaded-images-k8s-v13-v1...: 501.73 MiB / 501.73 MiB 100.00% 5.18 MiB
    🔥 Creating docker container (CPUs=2, Memory=1985MB) ...
    🐳 Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
    Generating certificates and keys ...
    Booting up control plane ...
    Configuring RBAC rules ...
    🔎 Verifying Kubernetes components...
    Using image gcr.io/k8s-minikube/storage-provisioner:v5
    🌟 Enabled addons: storage-provisioner, default-storageclass

    Use kubectl to interact with the cluster

    To interact with the cluster, you can use kubectl, which is included as a sub-command in minikube. To make the kubectl command available, you can either add the following alias definition command to your shell profile, or run the following alias definition command after opening a new shell.

    alias kubectl='minikube kubectl --'

    Run the following command to check the status of Kubernetes and ensure that kubectl can connect to it:

    kubectl cluster-info

    Expected output

    Kubernetes master is running at https://192.168.64.2:8443
    KubeDNS is running at https://192.168.64.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    You are now ready to deploy TiDB Operator.

    Step 2. Deploy TiDB Operator

    You need to install TiDB Operator CRDs first, and then install TiDB Operator.

    Install TiDB Operator CRDs

    TiDB Operator includes a number of Custom Resource Definitions (CRDs) that implement different components of the TiDB cluster.

    Run the following command to install the CRDs into your cluster:

    kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.3.2/manifests/crd.yaml

    Expected output

    customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com created
    Note

    For Kubernetes earlier than 1.16, only v1beta1 CRD is supported. Therefore, you need to change crd.yaml in the preceding command to crd_v1beta1.yaml.

    Install TiDB Operator

    This section describes how to install TiDB Operator using Helm.

    1. Add the PingCAP repository:

       helm repo add pingcap https://charts.pingcap.org/

       Expected output

       "pingcap" has been added to your repositories

    2. Create a namespace for TiDB Operator:

       kubectl create namespace tidb-admin

       Expected output

       namespace/tidb-admin created

    3. Install TiDB Operator:

       helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.3.2

    To confirm that the TiDB Operator components are running, run the following command:

    kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator

    Expected output

    NAME                                       READY   STATUS    RESTARTS   AGE
    tidb-controller-manager-6d8d5c6d64-b8lv4   1/1     Running   0          2m22s
    tidb-scheduler-644d59b46f-4f6sb            2/2     Running   0          2m22s

    As soon as all Pods are in the “Running” state, proceed to the next step.

    Step 3. Deploy a TiDB cluster and its monitoring services

    This section describes how to deploy a TiDB cluster and its monitoring services.

    Deploy a TiDB cluster

    kubectl create namespace tidb-cluster && \
    kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml

    Expected output

    namespace/tidb-cluster created
    tidbcluster.pingcap.com/basic created

    If you need to deploy a TiDB cluster on an ARM64 machine, refer to the instructions for deploying a TiDB cluster on ARM64 machines.

    Deploy the TiDB monitoring services:

    kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml

    Expected output

    tidbmonitor.pingcap.com/basic created

    View the Pod status

    watch kubectl get po -n tidb-cluster

    Expected output

    NAME                              READY   STATUS    RESTARTS   AGE
    basic-discovery-6bb656bfd-xl5pb   1/1     Running   0          9m9s
    basic-monitor-5fc8589c89-gvgjj    3/3     Running   0          8m58s
    basic-pd-0                        1/1     Running   0          9m8s
    basic-tidb-0                      2/2     Running   0          7m14s
    basic-tikv-0                      1/1     Running   0          8m13s

    Wait until all Pods for all services are started. As soon as Pods of each type (-pd, -tikv, and -tidb) are in the “Running” state, you can press Ctrl+C to return to the command line and go on to connect to your TiDB cluster.

    Step 4. Connect to TiDB

    Because TiDB supports the MySQL protocol and most of its syntax, you can connect to TiDB using the MySQL client.

    Install the MySQL client

    To connect to TiDB, you need a MySQL-compatible client installed on the host where kubectl is installed. This can be the mysql executable from an installation of MySQL Server, MariaDB Server, Percona Server, or a standalone client executable from the package of your operating system.

    Forward port 4000

    You can connect to TiDB by first forwarding a port from the local host to the TiDB service in Kubernetes.

    First, get a list of services in the tidb-cluster namespace:

    kubectl get svc -n tidb-cluster

    Expected output

    NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
    basic-discovery          ClusterIP   10.101.69.5      <none>        10261/TCP            10m
    basic-grafana            ClusterIP   10.106.41.250    <none>        3000/TCP             10m
    basic-monitor-reloader   ClusterIP   10.99.157.225    <none>        9089/TCP             10m
    basic-pd                 ClusterIP   10.104.43.232    <none>        2379/TCP             10m
    basic-pd-peer            ClusterIP   None             <none>        2380/TCP             10m
    basic-prometheus         ClusterIP   10.106.177.227   <none>        9090/TCP             10m
    basic-tidb               ClusterIP   10.99.24.91      <none>        4000/TCP,10080/TCP   8m40s
    basic-tidb-peer          ClusterIP   None             <none>        10080/TCP            8m40s
    basic-tikv-peer          ClusterIP   None             <none>        20160/TCP            9m39s

    In this case, the TiDB service is called basic-tidb. Run the following command to forward this port from the local host to the cluster:

    kubectl port-forward -n tidb-cluster svc/basic-tidb 14000:4000 > pf14000.out &

    If the port 14000 is already occupied, you can replace it with an available port. This command runs in the background and writes its output to a file named pf14000.out, so you can continue to run commands in the current shell session.
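    If you are unsure which local ports are free, one option (assuming python3 is available on the host) is to let the operating system pick an unused port for you:

```shell
# Bind to port 0 so the OS assigns an unused TCP port, then print it.
port=$(python3 -c 'import socket; s = socket.socket(); s.bind(("127.0.0.1", 0)); print(s.getsockname()[1]); s.close()')
echo "local port $port is free"
```

    You can then use that port as the local side of the kubectl port-forward mapping, for example ${port}:4000.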

    Connect to the TiDB service

    Note

    To connect to TiDB (< v4.0.7) using a MySQL 8.0 client, if the user account has a password, you must explicitly specify --default-auth=mysql_native_password. This is because mysql_native_password is no longer the default authentication plugin in MySQL 8.0.

    mysql --comments -h 127.0.0.1 -P 14000 -u root

    Expected output

    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 76
    Server version: 5.7.25-TiDB-v4.0.0 MySQL Community Server (Apache License 2.0)

    Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql>

    After connecting to the cluster, you can run the following commands to try out some of the features available in TiDB. Note that some commands require TiDB 4.0 or later. If you have deployed an earlier version, you need to upgrade the TiDB cluster.

    Create a hello_world table

    mysql> use test;
    mysql> create table hello_world (id int unsigned not null auto_increment primary key, v varchar(32));
    Query OK, 0 rows affected (0.17 sec)

    mysql> select * from information_schema.tikv_region_status where db_name=database() and table_name='hello_world'\G
    *************************** 1. row ***************************
    REGION_ID: 2
    START_KEY: 7480000000000000FF3700000000000000F8
    END_KEY:
    TABLE_ID: 55
    DB_NAME: test
    TABLE_NAME: hello_world
    IS_INDEX: 0
    INDEX_ID: NULL
    INDEX_NAME: NULL
    EPOCH_CONF_VER: 5
    WRITTEN_BYTES: 0
    READ_BYTES: 0
    APPROXIMATE_SIZE: 1
    APPROXIMATE_KEYS: 0
    1 row in set (0.03 sec)

    Query the TiDB version

    mysql> select tidb_version()\G
    *************************** 1. row ***************************
    tidb_version(): Release Version: v5.4.0
    Edition: Community
    Git Commit Hash: 4a1b2e9fe5b5afb1068c56de47adb07098d768d6
    UTC Build Time: 2021-11-24 13:32:39
    GoVersion: go1.16.4
    Race Enabled: false
    TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
    Check Table Before Drop: false
    1 row in set (0.01 sec)

    Query the TiKV store status

    This command is effective only in TiDB 4.0 or later versions. If your TiDB does not support the command, you need to upgrade your TiDB cluster.

    mysql> select * from information_schema.cluster_info\G
    *************************** 1. row ***************************
    TYPE: tidb
    INSTANCE: basic-tidb-0.basic-tidb-peer.tidb-cluster.svc:4000
    STATUS_ADDRESS: basic-tidb-0.basic-tidb-peer.tidb-cluster.svc:10080
    VERSION: 5.2.1
    GIT_HASH: 689a6b6439ae7835947fcaccf329a3fc303986cb
    START_TIME: 2020-05-28T22:50:11Z
    UPTIME: 3m21.459090928s
    *************************** 2. row ***************************
    TYPE: pd
    INSTANCE: basic-pd:2379
    STATUS_ADDRESS: basic-pd:2379
    VERSION: 5.2.1
    GIT_HASH: 56d4c3d2237f5bf6fb11a794731ed1d95c8020c2
    START_TIME: 2020-05-28T22:45:04Z
    UPTIME: 8m28.459091915s
    *************************** 3. row ***************************
    TYPE: tikv
    INSTANCE: basic-tikv-0.basic-tikv-peer.tidb-cluster.svc:20160
    STATUS_ADDRESS: 0.0.0.0:20180
    VERSION: 5.2.1
    GIT_HASH: 198a2cea01734ce8f46d55a29708f123f9133944
    START_TIME: 2020-05-28T22:48:21Z
    UPTIME: 5m11.459102648s
    3 rows in set (0.01 sec)

    You can forward the port for Grafana to access the Grafana dashboard locally:

    kubectl port-forward -n tidb-cluster svc/basic-grafana 3000 > pf3000.out &

    You can access the Grafana dashboard at http://localhost:3000 on the host where you run kubectl. The default username and password in Grafana are both admin.

    Note that if you run kubectl in a Docker container or on a remote host instead of your local host, you cannot access the Grafana dashboard at http://localhost:3000 from your browser. In this case, you can run the following command to listen on all addresses:

    kubectl port-forward --address 0.0.0.0 -n tidb-cluster svc/basic-grafana 3000 > pf3000.out &

    Then access Grafana through http://${remote-server-IP}:3000.

    For more information about monitoring the TiDB cluster in TiDB Operator, refer to the TiDB monitoring documentation.

    Step 5. Upgrade the TiDB cluster

    TiDB Operator also makes it easy to perform a rolling upgrade of the TiDB cluster. This section describes how to upgrade your TiDB cluster to the “nightly” release.

    Before that, you need to get familiar with the kubectl sub-command kubectl patch, which applies a specification change directly to the running cluster resources. There are several patch strategies, each with its own capabilities, limitations, and allowed formats. For details, see the Kubernetes documentation on patching.
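    To see what a JSON merge patch does before applying one, you can sketch its effect locally. This example assumes python3 is available; the spec fields shown are illustrative, not the full TidbCluster spec:

```shell
# A merge patch replaces the top-level keys it names and leaves the rest intact.
python3 -c 'import json; spec = {"version": "v5.4.0", "pd": {"replicas": 1}}; spec.update({"version": "nightly"}); print(json.dumps(spec, sort_keys=True))'
# → {"pd": {"replicas": 1}, "version": "nightly"}
```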

    Modify the TiDB cluster version

    In this case, you can use a JSON merge patch to update the version of the TiDB cluster to “nightly”:

    kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec": {"version": "nightly"} }'

    Expected output

    tidbcluster.pingcap.com/basic patched

    Wait for Pods to restart

    To follow the progress of the cluster as its components are upgraded, run the following command. You should see some Pods transitioning to Terminating, then to ContainerCreating, and then back to Running.

    watch kubectl get po -n tidb-cluster

    Expected output

    NAME                              READY   STATUS        RESTARTS   AGE
    basic-discovery-6bb656bfd-7lbhx   1/1     Running       0          24m
    basic-pd-0                        1/1     Terminating   0          5m31s
    basic-tidb-0                      2/2     Running       0          2m19s
    basic-tikv-0                      1/1     Running       0          4m13s

    Forward the TiDB service port

    After all Pods have been restarted, you can see that the version number of the cluster has changed.

    Note that you need to reset any port forwarding you set up in a previous step, because the Pods it forwarded to have been destroyed and recreated.

    kubectl port-forward -n tidb-cluster svc/basic-tidb 24000:4000 > pf24000.out &

    If the port 24000 is already occupied, you can replace it with an available port.

    Check the TiDB cluster version

    mysql --comments -h 127.0.0.1 -P 24000 -u root -e 'select tidb_version()\G'

    Expected output

    Note that nightly is not a fixed version. Running the command above at a different time might return different results.

    *************************** 1. row ***************************
    tidb_version(): Release Version: v5.4.0-alpha-445-g778e188fa
    Edition: Community
    Git Commit Hash: 778e188fa7af4f48497ff9e05ca6681bf9a5fa16
    Git Branch: master
    UTC Build Time: 2021-12-17 17:02:49
    GoVersion: go1.16.4
    Race Enabled: false
    TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
    Check Table Before Drop: false

    Step 6. Destroy the TiDB cluster and the Kubernetes cluster

    After you finish testing, you can destroy the TiDB cluster and the Kubernetes cluster.

    This section introduces how to destroy a TiDB cluster.

    Stop kubectl port forwarding

    If you still have running kubectl processes that are forwarding ports, list them:

    pgrep -lfa kubectl

    Identify the process IDs in the output and kill them.
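    If you prefer a single command, the following sketch ends all matching processes at once (the pattern is an assumption; adjust it if you started the forwards differently):

```shell
# Kill any kubectl port-forward processes; '|| true' keeps the command
# succeeding even when no matching process exists.
pkill -f 'kubectl port-forward' || true
echo "port-forward processes stopped"
```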

    Delete the TiDB cluster

    kubectl delete tc basic -n tidb-cluster

    The tc in this command is a short name for tidbclusters.

    Delete TiDB monitoring services

    kubectl delete tidbmonitor basic -n tidb-cluster

    Delete PV data

    If your deployment has persistent data storage, deleting the TiDB cluster does not remove the data in the cluster. If you do not need the data, run the following commands to clean it:

    kubectl delete pvc -n tidb-cluster -l app.kubernetes.io/instance=basic,app.kubernetes.io/managed-by=tidb-operator && \
    kubectl get pv -l app.kubernetes.io/namespace=tidb-cluster,app.kubernetes.io/managed-by=tidb-operator,app.kubernetes.io/instance=basic -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

    Delete namespaces

    To ensure that there are no lingering resources, delete the namespace used for your TiDB cluster:

    kubectl delete namespace tidb-cluster

    Destroy the Kubernetes cluster

    The method of destroying a Kubernetes cluster depends on how you created it.

    To destroy a Kubernetes cluster created using kind, run the following command:

    kind delete cluster

    To destroy a Kubernetes cluster created using minikube, run the following command:

    minikube delete

    See also

    If you want to deploy a TiDB cluster in production environments, refer to the following documents:

    On public clouds: