Restore Data from PV
The restore method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. The underlying implementation uses BR to restore the data. BR stands for Backup & Restore, a command-line tool for distributed backup and recovery of TiDB cluster data.
After backing up TiDB cluster data to PVs using BR, if you need to restore the backup SST (key-value pairs) files from PVs to a TiDB cluster, you can follow the steps in this document to restore the data using BR.
> **Note:**
>
> BR is only applicable to TiDB v3.1 or later releases.
Before restoring backup data on PVs to TiDB using BR, take the following steps to prepare the restore environment:
1. Download the `backup-rbac.yaml` file from the TiDB Operator repository, and use it to create the role-based access control (RBAC) resources that the restore requires (see the first example command after this list).
2. Make sure that the NFS server is accessible from your Kubernetes cluster.
3. For a TiDB version earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or a later version, skip these preparation steps.
    - Make sure that you have the `SELECT` and `UPDATE` privileges on the `mysql.tidb` table of the target database so that the `Restore` CR can adjust the GC time before and after the restore. For such a pre-v4.0.8 cluster, the `Restore` CR also needs database credentials; see the second example after this list.
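For the first preparation step, you can create the RBAC resources directly from the downloaded manifest. A minimal example, assuming the manifest is saved locally as `backup-rbac.yaml` and the target cluster runs in the `test2` namespace used in the examples below:

```shell
# Create the ServiceAccount, Role, and RoleBinding that the restore job uses
kubectl apply -f backup-rbac.yaml -n test2
```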
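For a cluster earlier than v4.0.8, the `Restore` CR reads the database credentials from a Kubernetes Secret referenced by its `.spec.to` field. A sketch of creating such a Secret, assuming the illustrative name `restore-demo2-tidb-secret` and a database user that holds the privileges above:

```shell
# Store the password of the database user in a Secret in the test2 namespace;
# the user name itself is set in the `.spec.to.user` field of the Restore CR
kubectl create secret generic restore-demo2-tidb-secret --from-literal=password=${password} --namespace=test2
```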
Create the `Restore` custom resource (CR), and restore the specified data to your cluster:
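A sketch of this step follows. The CR name `demo2-restore-nfs`, the `backup-nfs` prefix, and the NFS server address are illustrative placeholders; the commented-out fields are the optional parameters discussed in the notes below:

```shell
kubectl apply -f restore.yaml
```

The `restore.yaml` file may look as follows:

```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: Restore
metadata:
  name: demo2-restore-nfs
  namespace: test2
spec:
  br:
    cluster: demo2
    clusterNamespace: test2
    # logLevel: info
    # statusAddr: ${status_addr}
    # concurrency: 4
    # rateLimit: 0
    # timeAgo: ${time}
    # checksum: true
    # sendCredToTikv: true
  local:
    # The backup data is read from local://${.spec.local.volume.nfs.path}/${.spec.local.prefix}/
    prefix: backup-nfs
    volume:
      name: nfs
      nfs:
        server: ${nfs_server_ip}
        path: /nfs
    volumeMount:
      name: nfs
      mountPath: /nfs
  # For a TiDB version earlier than v4.0.8, also configure `.spec.to`
  # so that the Restore CR can adjust the GC time:
  # to:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: restore-demo2-tidb-secret
```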
When configuring `restore.yaml`, note the following:

- The example above restores data from the `local://${.spec.local.volume.nfs.path}/${.spec.local.prefix}/` directory on NFS to the `demo2` TiDB cluster in the `test2` namespace. For more information about PV configuration, refer to Local storage fields.
- Some parameters in `spec.br` are optional, such as `logLevel`, `statusAddr`, `concurrency`, `rateLimit`, `checksum`, `timeAgo`, and `sendCredToTikv`. For more information about `.spec.br`, refer to BR fields.
- For v4.0.8 or a later version, BR can automatically adjust `tikv_gc_life_time`. You do not need to configure the `spec.to` field in the `Restore` CR.
- For more information about the CR fields, refer to Restore CR fields.
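After the CR is created, you can follow the restore progress through the CR status. For example, in the `test2` namespace used above:

```shell
# Check the status of the Restore CR
kubectl get restore -n test2 -o wide
```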
If you encounter any problem during the restore process, refer to the backup and restore troubleshooting documentation.