# TiDB FAQs in Kubernetes
The default time zone setting for each component container of a TiDB cluster in Kubernetes is UTC. To modify this setting, take the steps below based on your cluster status:
For a cluster to be created

Configure the `.spec.timezone` attribute in the TidbCluster CR. For example:

```yaml
spec:
  timezone: Asia/Shanghai  # example value; use the IANA time zone you need
  ...
```

Then deploy the TiDB cluster.
For a running cluster
If the TiDB cluster is already running, first upgrade the cluster, and then configure it to support the new time zone.
Upgrade the TiDB cluster:
Configure the `.spec.timezone` attribute in the TidbCluster CR. For example:

```yaml
spec:
  timezone: Asia/Shanghai  # example value; use the IANA time zone you need
  ...
```

Then upgrade the TiDB cluster.
Configure TiDB to support the new time zone:
Refer to the time zone support documentation to modify TiDB service time zone settings.
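To see what a time zone change does and does not change, the sketch below (plain Python; a fixed UTC+8 offset stands in for a zone such as `Asia/Shanghai`, and the variable names are illustrative) shows that a time zone setting only alters how an instant is rendered, not the instant itself:

```python
from datetime import datetime, timezone, timedelta

# Containers default to UTC. A time zone setting changes how the same
# instant is displayed, not the underlying point in time.
shanghai = timezone(timedelta(hours=8), "Asia/Shanghai")  # fixed UTC+8 offset

instant = datetime(2024, 1, 1, 16, 0, tzinfo=timezone.utc)  # 16:00 UTC
local = instant.astimezone(shanghai)                        # same instant, UTC+8

print(instant.isoformat())  # 2024-01-01T16:00:00+00:00
print(local.isoformat())    # 2024-01-02T00:00:00+08:00
print(instant == local)     # True: equal instants, different rendering
```

Both values compare equal because aware datetimes are compared by instant; only the human-readable rendering differs between the two zones.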
## Can HPA or VPA be configured on TiDB components?
## What scenarios require manual intervention when I use TiDB Operator to orchestrate a TiDB cluster?
Besides the operation of the Kubernetes cluster itself, there are the following two scenarios that might require manual intervention when using TiDB Operator:
- Adjusting the cluster after the auto-failover of TiKV. See Auto-Failover for details;
- Maintaining or dropping the specified Kubernetes nodes. See the Kubernetes node maintenance documentation for details.
To achieve high availability and data safety, it is recommended that you deploy the TiDB cluster in at least three availability zones in a production environment.
In terms of the deployment topology relationship between the TiDB cluster and TiDB services, TiDB Operator supports the following three deployment modes. Each mode has its own merits and demerits, so your choice must be based on actual application needs.
- Deploy the TiDB cluster and TiDB services in the same Kubernetes cluster of the same VPC;
- Deploy the TiDB cluster and TiDB services in different Kubernetes clusters of the same VPC;
- Deploy the TiDB cluster and TiDB services in different Kubernetes clusters of different VPCs.
## Does TiDB Operator support TiSpark?
TiDB Operator does not yet support automatically orchestrating TiSpark.
If you want to add the TiSpark component to TiDB in Kubernetes, you must maintain Spark on your own in the same Kubernetes cluster. You must ensure that Spark can access the IPs and ports of the PD and TiKV instances, and install the TiSpark plugin for Spark. The TiSpark documentation offers a detailed guide for installing the TiSpark plugin.
To maintain Spark in Kubernetes, refer to Spark on Kubernetes.
## How to check the configuration of the TiDB cluster?
To check the configuration of the PD, TiKV, and TiDB components of the current cluster, run the following commands:
Check the PD configuration file:

```shell
kubectl exec -it ${pod_name} -n ${namespace} -- cat /etc/pd/pd.toml
```
Check the TiKV configuration file:

```shell
kubectl exec -it ${pod_name} -n ${namespace} -- cat /etc/tikv/tikv.toml
```
Three possible reasons:

- Insufficient resources or an HA policy keeps the Pod stuck in the `Pending` state. Refer to Deployment Failures for more details.
- A `taint` is applied to some nodes, which prevents the Pod from being scheduled to these nodes unless the Pod has the matching `toleration`. Refer to the taints and tolerations documentation for more details.
- A scheduling conflict causes the Pod to be stuck in the `ContainerCreating` state. In such cases, check whether more than one TiDB Operator is deployed in the Kubernetes cluster. Conflicts occur when the custom schedulers in multiple TiDB Operators schedule the same Pod in different phases.

You can execute the following command to verify whether more than one TiDB Operator is deployed. If more than one record is returned, delete the extra TiDB Operator to resolve the scheduling conflict.
```shell
kubectl get deployment --all-namespaces | grep tidb-scheduler
```
## How does TiDB ensure data safety and reliability?
To ensure persistent storage of data, TiDB clusters deployed by TiDB Operator use persistent volumes provided by the Kubernetes cluster as the storage.
To ensure data safety in case one node is down, PD and TiKV use the Raft consensus algorithm to replicate the stored data as multiple replicas across nodes.

At the bottom layer, TiKV replicates data using log replication and the state machine model. For write requests, data is written to the Leader node first, and then the Leader node replicates the command to its Follower nodes as a log. When most of the nodes in the cluster have received this log from the Leader node, the log is committed and the state machine changes accordingly.
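The commit rule above can be sketched as follows. This is a minimal illustration of majority-based log replication, not TiKV's actual implementation; the `Node` class and `write` helper are invented for the example:

```python
# Raft-style log replication sketch: writes go to the Leader first, the
# Leader replicates the entry to Followers, and the entry is committed
# once a majority of the cluster holds it in its log.

class Node:
    def __init__(self, up=True):
        self.up = up        # whether the node is reachable
        self.log = []       # replicated log entries
        self.state = {}     # the state machine (applied writes)

def write(leader, followers, key, value):
    entry = (key, value)
    leader.log.append(entry)        # 1. data is written to the Leader first
    acks = 1                        # the Leader holds one replica
    for f in followers:
        if f.up:                    # a down node cannot receive the log
            f.log.append(entry)
            acks += 1
    cluster_size = 1 + len(followers)
    if acks > cluster_size // 2:    # 2. a majority holds the log: committed
        for node in [leader] + followers:
            if node.up:
                node.state[key] = value  # 3. the state machine changes
        return True
    return False

leader = Node()
followers = [Node(), Node(up=False)]       # one of three replicas is down
print(write(leader, followers, "k", "v"))  # True: 2 of 3 is still a majority
print(leader.state)                        # {'k': 'v'}
```

With three replicas, losing a single node leaves a majority of two, so writes still commit; losing two nodes would leave only the Leader, and the write would stay uncommitted.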
## If the Ready field of a TidbCluster is false, does it mean that the corresponding TidbCluster is unavailable?

After you execute the command, if the output shows that the Ready field of a TidbCluster is false, it does not mean that the corresponding TiDB cluster is unavailable, because the cluster might be in any of the following states:
- Upgrading
- Scaling