Deploy TiDB on Azure AKS
To deploy TiDB Operator and the TiDB cluster in a self-managed Kubernetes environment, refer to Deploy TiDB Operator and Deploy TiDB on General Kubernetes.
Before deploying a TiDB cluster on Azure AKS, perform the following operations:
Install Helm 3 for deploying TiDB Operator.
Deploy a Kubernetes (AKS) cluster and install and configure the Azure CLI (az).
Refer to use Ultra disks to create a new cluster that can use Ultra disks or to enable Ultra disks in an existing cluster.
Acquire AKS service permissions.
If the Kubernetes version of the cluster is earlier than 1.21, install the aks-preview CLI extension to use Ultra disks, and register the EnableAzureDiskFileCSIDriver feature in your subscription.
Install the aks-preview CLI extension:
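az extension add --name aks-preview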
Register the EnableAzureDiskFileCSIDriver feature:

az feature register --name EnableAzureDiskFileCSIDriver --namespace Microsoft.ContainerService --subscription ${your-subscription-id}
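You can optionally check the registration status and, once the feature shows as registered, refresh the Microsoft.ContainerService provider. The following standard az commands are a sketch of that check:

az feature show --name EnableAzureDiskFileCSIDriver --namespace Microsoft.ContainerService --subscription ${your-subscription-id}
az provider register --namespace Microsoft.ContainerService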
Create an AKS cluster and a node pool
Most of the TiDB cluster components use Azure disk as storage. According to AKS Best Practices, when creating an AKS cluster, it is recommended to ensure that each node pool uses one availability zone (at least 3 in total).
To create an AKS cluster with the Azure Disk CSI Driver enabled, run the following command:
Note
If the Kubernetes version of the cluster is earlier than 1.21, you need to append the --aks-custom-headers flag to enable the EnableAzureDiskFileCSIDriver feature, as shown in the following command:
# create AKS cluster
az aks create \
--resource-group ${resourceGroup} \
--name ${clusterName} \
--location ${location} \
--generate-ssh-keys \
--vm-set-type VirtualMachineScaleSets \
--load-balancer-sku standard \
--node-count 3 \
--zones 1 2 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true
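After the cluster is created, you can configure kubectl to access it with a standard az command (the variables match those used above):

az aks get-credentials --resource-group ${resourceGroup} --name ${clusterName}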
Create component node pools
After creating an AKS cluster, run the following commands to create component node pools. Each node pool may take two to five minutes to create. It is recommended to enable Ultra disks in the TiKV node pool. For more details about cluster configuration, refer to the az aks documentation and the az aks nodepool documentation.
To create a node pool for TiDB Operator and the monitoring components, run the following command:
az aks nodepool add --name admin \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--zones 1 2 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 1 \
--labels dedicated=admin
Create a PD node pool with nodeType being Standard_F4s_v2 or higher:

az aks nodepool add --name pd \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 1 2 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 3 \
--labels dedicated=pd \
--node-taints dedicated=pd:NoSchedule
Create a TiDB node pool with nodeType being Standard_F8s_v2 or higher. You can set --node-count to 2 because only two TiDB nodes are required by default. You can also scale out this node pool by modifying this parameter at any time if necessary.

az aks nodepool add --name tidb \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 1 2 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 2 \
--labels dedicated=tidb \
--node-taints dedicated=tidb:NoSchedule
Create a TiKV node pool with nodeType being Standard_E8s_v4 or higher:

az aks nodepool add --name tikv \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 1 2 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 3 \
--labels dedicated=tikv \
--node-taints dedicated=tikv:NoSchedule \
--enable-ultra-ssd
Deploy component node pools in availability zones
The Azure AKS cluster deploys nodes across multiple zones using “best effort zone balance”. If you want to apply “strict zone balance” (not currently supported in AKS), you can deploy one node pool in one zone. For example:
Create TiKV node pool 1 in zone 1:
az aks nodepool add --name tikv1 \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 1 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 1 \
--labels dedicated=tikv \
--node-taints dedicated=tikv:NoSchedule \
--enable-ultra-ssd
Create TiKV node pool 2 in zone 2:
az aks nodepool add --name tikv2 \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 2 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 1 \
--labels dedicated=tikv \
--node-taints dedicated=tikv:NoSchedule \
--enable-ultra-ssd
Create TiKV node pool 3 in zone 3:
az aks nodepool add --name tikv3 \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 1 \
--labels dedicated=tikv \
--node-taints dedicated=tikv:NoSchedule \
--enable-ultra-ssd
Warning
About node pool scale-in:
- You can manually scale an AKS cluster in or out to run a different number of nodes. When you scale in, nodes are carefully cordoned and drained to minimize disruption to running applications. Refer to Scale the node count in an Azure Kubernetes Service (AKS) cluster.
Configure StorageClass
To improve disk IO performance, it is recommended to add mountOptions in StorageClass to configure nodelalloc and noatime. Refer to Mount the data disk ext4 filesystem with options on the target machines that deploy TiKV.
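For reference, the following is a minimal sketch of a StorageClass carrying these mount options. The name and skuname here are illustrative; see the UltraSSD storage class later in this document for a complete definition:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-premium-perf   # illustrative name
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_LRS
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - nodelalloc,noatime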
Deploy TiDB Operator
Deploy TiDB Operator in the AKS cluster by referring to the Deploy TiDB Operator section.
Deploy a TiDB cluster and the monitoring component
This section describes how to deploy a TiDB cluster and its monitoring component in Azure AKS.
Create namespace
To create a namespace to deploy the TiDB cluster, run the following command:
kubectl create namespace tidb-cluster
Note
A namespace is a virtual cluster backed by the same physical cluster. This document takes tidb-cluster as an example. If you want to use other namespaces, modify the corresponding arguments of -n or --namespace.
First, download the sample TidbCluster and TidbMonitor configuration files:
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-cluster.yaml && \
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aks/tidb-monitor.yaml
Note
By default, TiDB LoadBalancer in tidb-cluster.yaml is set to “internal”, indicating that the LoadBalancer is only accessible within the cluster virtual network, not externally. To access TiDB over the MySQL protocol, you need to use a bastion to access the internal host of the cluster or use kubectl port-forward. You can delete the “internal” configuration in the tidb-cluster.yaml file to expose the LoadBalancer publicly by default. However, note that this practice may expose TiDB to security risks.
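As an illustration of what the “internal” setting refers to, the TiDB service section of tidb-cluster.yaml typically carries an Azure internal load balancer annotation along the following lines (a sketch; the exact annotations in the sample file may differ):

spec:
  tidb:
    service:
      type: LoadBalancer
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true" # remove to expose the LoadBalancer publicly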
To deploy the TidbCluster and TidbMonitor CRs in the AKS cluster, run the following command:
kubectl apply -f tidb-cluster.yaml -n tidb-cluster && \
kubectl apply -f tidb-monitor.yaml -n tidb-cluster
After the YAML files above are applied to the Kubernetes cluster, TiDB Operator creates the desired TiDB cluster and its monitoring component according to the configuration.
View the cluster status
To view the status of the TiDB cluster, run the following command:
kubectl get pods -n tidb-cluster
When all the pods are in the Running or Ready state, the TiDB cluster is successfully started. For example:
NAME READY STATUS RESTARTS AGE
tidb-discovery-5cb8474d89-n8cxk 1/1 Running 0 47h
tidb-monitor-6fbcc68669-dsjlc 3/3 Running 0 47h
tidb-pd-0 1/1 Running 0 47h
tidb-pd-1 1/1 Running 0 46h
tidb-pd-2 1/1 Running 0 46h
tidb-tidb-0 2/2 Running 0 47h
tidb-tidb-1 2/2 Running 0 46h
tidb-tikv-0 1/1 Running 0 47h
tidb-tikv-1 1/1 Running 0 47h
tidb-tikv-2 1/1 Running 0 47h
Access the database
After deploying a TiDB cluster, you can access the TiDB database to test or develop applications.
Access method
- Access via Bastion
The LoadBalancer created for your TiDB cluster resides in an intranet. You can create a Bastion in the cluster virtual network to connect to an internal host and then access the database.
Note
In addition to the bastion host, you can also connect an existing host to the cluster virtual network by virtual network peering. If the AKS cluster is created in an existing virtual network, you can use hosts in this virtual network to access the database.
- Access via SSH
You can create an SSH connection to a Linux node and then access the database.
- Access via node-shell
You can use tools such as node-shell to connect to nodes in the cluster and then access the database.
Access via the MySQL client
After accessing the internal host via SSH, you can access the TiDB cluster through the MySQL client.
Install the MySQL client on the host:
sudo yum install mysql -y
Connect the client to the TiDB cluster:
mysql --comments -h ${tidb-lb-ip} -P 4000 -u root
${tidb-lb-ip} is the LoadBalancer IP address of the TiDB service. To obtain it, run the kubectl get svc basic-tidb -n tidb-cluster command. The EXTERNAL-IP field returned is the IP address.

For example:
$ mysql --comments -h 20.240.0.7 -P 4000 -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 1189
Server version: 5.7.25-TiDB-v4.0.2 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> show status;
+--------------------+--------------------------------------+
| Variable_name | Value |
+--------------------+--------------------------------------+
| Ssl_cipher | |
| Ssl_cipher_list | |
| Ssl_verify_mode | 0 |
| Ssl_version | |
| ddl_schema_version | 22 |
| server_id | ed4ba88b-436a-424d-9087-977e897cf5ec |
+--------------------+--------------------------------------+
6 rows in set (0.00 sec)
Note
- The default authentication plugin of MySQL 8.0 is updated from mysql_native_password to caching_sha2_password. Therefore, if you use a MySQL 8.0 client to access the TiDB service (earlier than v4.0.7) with password authentication, you need to specify the --default-auth=mysql_native_password parameter.
- By default, TiDB (starting from v4.0.2) periodically shares usage details with PingCAP to help understand how to improve the product. For details about what is shared and how to disable the sharing, see Telemetry.
Access the Grafana monitoring dashboard
Obtain the LoadBalancer IP address of Grafana:
kubectl -n tidb-cluster get svc basic-grafana
In the output, the EXTERNAL-IP column is the LoadBalancer IP address.
You can access the ${grafana-lb}:3000 address using your web browser to view monitoring metrics. Replace ${grafana-lb} with the LoadBalancer IP address.
Note
The default Grafana username and password are both admin.
Access TiDB Dashboard
See Access TiDB Dashboard for instructions about how to securely allow access to TiDB Dashboard.
Upgrade
To upgrade the TiDB cluster, execute the following command:
kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"version":"${version}"}}'
The upgrade process does not finish immediately. You can view the upgrade progress by running the kubectl get pods -n tidb-cluster --watch command.
Scale out
Before scaling out the cluster, you need to scale out the corresponding node pool so that the new instances have enough resources for operation.
This section describes how to scale out the AKS node pool and TiDB components.
When scaling out TiKV, the node pools must be scaled out evenly among the availability zones. The following example shows how to scale out the TiKV node pool of the ${clusterName} cluster to 6 nodes:
az aks nodepool scale \
--resource-group ${resourceGroup} \
--cluster-name ${clusterName} \
--name ${nodePoolName} \
--node-count 6
For more information on node pool management, refer to the az aks nodepool documentation.
Scale out TiDB components
After scaling out the AKS node pool, run the kubectl edit tc basic -n tidb-cluster command and set the replicas of each component to the desired value. The scaling-out process is then completed.
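As an alternative to interactive editing, a one-line patch can achieve the same result. The following is a sketch; the component and replica count shown here are illustrative:

kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"tikv":{"replicas":6}}}'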
Deploy TiFlash/TiCDC
TiFlash is the columnar storage extension of TiKV. TiCDC is a tool for replicating the incremental data of TiDB by pulling TiKV change logs.
The two components are not required in the deployment. This section shows a quick start example.
Add node pools
Add a node pool for TiFlash and TiCDC respectively. You can set --node-count as required.
Create a TiFlash node pool with nodeType being Standard_E8s_v4 or higher:

az aks nodepool add --name tiflash \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 1 2 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 3 \
--labels dedicated=tiflash \
--node-taints dedicated=tiflash:NoSchedule
Create a TiCDC node pool with nodeType being Standard_E16s_v4 or higher:

az aks nodepool add --name ticdc \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size ${nodeType} \
--zones 1 2 3 \
--aks-custom-headers EnableAzureDiskFileCSIDriver=true \
--node-count 3 \
--labels dedicated=ticdc \
--node-taints dedicated=ticdc:NoSchedule
Configure and deploy
To deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml. The following is an example:

spec:
...
tiflash:
baseImage: pingcap/tiflash
maxFailoverCount: 0
replicas: 1
storageClaims:
- resources:
requests:
storage: 100Gi
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: tiflash
For other parameters, refer to the TiFlash configuration documentation.
Warning
TiDB Operator automatically mounts PVs in the order of the configuration in the storageClaims list. Therefore, if you need to add disks for TiFlash, make sure that you add the disks only to the end of the original configuration in the list. In addition, you must not alter the order of the original configuration.

To deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml. The following is an example:

spec:
...
ticdc:
baseImage: pingcap/ticdc
replicas: 1
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: ticdc
Modify replicas as required.
Finally, run the kubectl -n tidb-cluster apply -f tidb-cluster.yaml command to update the TiDB cluster configuration.
For detailed CR configuration, refer to API references and Configure a TiDB Cluster.
Deploy TiDB Enterprise Edition
To deploy TiDB/PD/TiKV/TiFlash/TiCDC Enterprise Edition, configure spec.[tidb|pd|tikv|tiflash|ticdc].baseImage in tidb-cluster.yaml as the enterprise image. The enterprise image format is pingcap/[tidb|pd|tikv|tiflash|ticdc]-enterprise.
For example:
spec:
...
pd:
baseImage: pingcap/pd-enterprise
...
tikv:
baseImage: pingcap/tikv-enterprise
Use other Azure disk types
Azure disks support multiple volume types. Among them, UltraSSD delivers low latency and high throughput, and can be enabled by performing the following steps:
Enable Ultra disks in an existing cluster and create a storage class for UltraSSD:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ultra
provisioner: disk.csi.azure.com
parameters:
skuname: UltraSSD_LRS # alias: storageaccounttype, available values: Standard_LRS, Premium_LRS, StandardSSD_LRS, UltraSSD_LRS
cachingMode: None
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
mountOptions:
- nodelalloc,noatime
You can add more Driver Parameters as required.
In tidb-cluster.yaml, specify the ultra storage class to apply for the UltraSSD volume type through the storageClassName field. The following is a TiKV configuration example you can refer to:
spec:
tikv:
...
storageClassName: ultra
You can use any supported Azure disk type. It is recommended to use Premium_LRS or UltraSSD_LRS.
For more information about the storage class configuration and Azure disk types, refer to Storage Classes and Azure Disk Types.
Use local storage
Use Azure LRS disks for storage in a production environment. To simulate bare-metal performance, some Azure instance types provide additional NVMe SSD local store volumes. You can choose such instances for the TiKV node pool to achieve higher IOPS and lower latency.
Note
- You cannot dynamically change the storage class of a running TiDB cluster. In this case, create a new cluster for testing.
- Local NVMe Disks are ephemeral. Data will be lost on these disks if you stop/deallocate your node. When the node is reconstructed, you need to migrate data in TiKV. If you do not want to migrate data, it is recommended not to use the local disk in a production environment.
For instance types that provide local disks, refer to the Lsv2-series. The following takes Standard_L8s_v2 as an example:
Create a node pool with local storage for TiKV.
Modify the instance type of the TiKV node pool in the az aks nodepool add command to Standard_L8s_v2, as shown in the sketch after this step.

If the TiKV node pool already exists, you can either delete the old pool and then create a new one, or change the pool name to avoid conflict.
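A minimal sketch of the command, reusing the flags from the earlier TiKV node pool (adjust the pool name, zones, and node count to your environment):

az aks nodepool add --name tikv \
--cluster-name ${clusterName} \
--resource-group ${resourceGroup} \
--node-vm-size Standard_L8s_v2 \
--zones 1 2 3 \
--node-count 3 \
--labels dedicated=tikv \
--node-taints dedicated=tikv:NoSchedule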
Deploy the local volume provisioner.
You need to use the local-volume-provisioner to discover and manage the local storage. Run the following command to deploy and create a local-storage storage class:

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.3.2/manifests/eks/local-volume-provisioner.yaml
Use local storage.
After the steps above, the local volume provisioner can discover all the local NVMe SSD disks in the cluster.
Add the tikv.storageClassName field to the tidb-cluster.yaml file and set the value of the field to local-storage, as in the sketch below.

For more information, refer to the deployment steps described earlier in this document.
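A minimal sketch of the relevant fragment:

spec:
  tikv:
    ...
    storageClassName: local-storage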