Upgrade a TiDB Cluster in Kubernetes
This document describes how to upgrade a TiDB cluster in Kubernetes using rolling updates.
Kubernetes provides the rolling update feature to update your application with zero downtime.
When you perform a rolling update, TiDB Operator serially deletes an old Pod and creates the corresponding new Pod in the order of PD, TiKV, and TiDB. After the new Pod runs normally, TiDB Operator proceeds with the next Pod.
During the rolling update, TiDB Operator automatically completes Leader transfer for PD and TiKV. Under the highly available deployment topology (minimum requirements: PD * 3, TiKV * 3, TiDB * 2), performing a rolling update to PD and TiKV servers does not impact the running application. If your client supports retrying stale connections, performing a rolling update to TiDB servers does not impact the application, either.
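For reference, a minimal sketch of a TidbCluster CR that satisfies this highly available topology. The cluster name, namespace, and version are illustrative placeholders, and fields such as storage and resource requests are omitted:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic              # illustrative cluster name
  namespace: tidb-cluster  # illustrative namespace
spec:
  version: v5.4.0
  pd:
    baseImage: pingcap/pd
    replicas: 3            # minimum for a highly available PD
  tikv:
    baseImage: pingcap/tikv
    replicas: 3            # minimum for a highly available TiKV
  tidb:
    baseImage: pingcap/tidb
    replicas: 2            # minimum for zero-downtime TiDB rolling updates
```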
Warning
- Before upgrading, refer to the TiDB documentation to confirm that there are no DDL operations in progress.
1. In the TidbCluster Custom Resource (CR), modify the image configurations of all components of the cluster to be upgraded.
   Usually, all components in a cluster are in the same version. You can upgrade the TiDB cluster simply by modifying `spec.version`. If you need to use different versions for different components, modify `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`.

   The `version` field has the following formats:

   - `spec.version`: the format is `imageTag`, such as `v5.4.0`
   - `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v5.4.0`
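   For example, a minimal sketch of upgrading all components at once, assuming a TidbCluster named `basic` in the `tidb-cluster` namespace (both placeholders) and the `tc` short name registered by the TidbCluster CRD:

   ```shell
   # Option 1: edit the CR interactively and change spec.version
   kubectl edit tc basic -n tidb-cluster

   # Option 2: patch spec.version directly
   kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"version":"v5.4.0"}}'
   ```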
2. Check the upgrade progress:
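   For example, you can watch the Pod status, with `${namespace}` as a placeholder for your cluster's namespace:

   ```shell
   watch kubectl -n ${namespace} get pod -o wide
   ```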
   After all the Pods finish rebuilding and their status becomes `Running`, the upgrade is completed.
Note

To upgrade the TiDB cluster using its Enterprise Edition, set `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.baseImage` to the corresponding enterprise image. For example, change `spec.pd.baseImage` from `pingcap/pd` to `pingcap/pd-enterprise`.
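A minimal sketch of the corresponding CR fragment; the version tag is illustrative:

```yaml
spec:
  version: v5.4.0
  pd:
    baseImage: pingcap/pd-enterprise  # Enterprise Edition image for PD
```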
If the PD cluster is unavailable due to PD configuration errors, PD image tag errors, NodeAffinity issues, or other causes, you might not be able to upgrade the TiDB cluster successfully. In such cases, you can force an upgrade of the cluster to recover the cluster functionality.
The steps of force upgrade are as follows:
1. Set the `annotation` for the cluster:
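   A sketch of the command, using `${cluster_name}` and `${namespace}` as placeholders and the `tidb.pingcap.com/force-upgrade` annotation recognized by TiDB Operator:

   ```shell
   kubectl annotate --overwrite tc ${cluster_name} -n ${namespace} tidb.pingcap.com/force-upgrade=true
   ```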
2. Change the related PD configuration to make sure that PD turns into a normal state.
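   After the cluster recovers, remove the annotation so that TiDB Operator can resume its normal behavior. In kubectl, a trailing `-` deletes an annotation key:

   ```shell
   kubectl annotate tc ${cluster_name} -n ${namespace} tidb.pingcap.com/force-upgrade-
   ```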