Back up Data to S3-Compatible Storage Using BR
The backup method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. In the underlying implementation, BR is used to get the backup data of the TiDB cluster and then send the data to AWS storage. BR stands for Backup & Restore, a command-line tool for distributed backup and recovery of TiDB cluster data.
If you have the following backup needs, you can use BR to make an ad-hoc or scheduled full backup of the TiDB cluster data to S3-compatible storages.
- To back up a large volume of data at a fast speed
- To get a direct backup of data as SST files (key-value pairs)
For other backup needs, refer to to choose an appropriate backup method.
Note
- BR is only applicable to TiDB v3.1 or later releases.
- Data that is backed up using BR can only be restored to TiDB instead of other databases.
Ad-hoc backup supports both full backup and incremental backup.
To get an ad-hoc backup, you need to create a Custom Resource (CR) object to describe the backup details. Then, TiDB Operator performs the specific backup operation based on this Backup object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle the error manually.
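For example, if a backup fails, you can inspect the Backup CR to see the recorded failure reason before handling the error; a typical check, using the example CR name from the steps below, is:
kubectl describe bk demo1-backup-s3 -n test1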
This document provides an example of how to back up the data of the demo1 TiDB cluster in the test1 Kubernetes namespace to AWS storage. The following are the detailed steps.
Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the test1 namespace:
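The command typically takes the following form, assuming backup-rbac.yaml has been saved to the current directory:
kubectl apply -f backup-rbac.yaml -n test1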
Grant permissions to the remote storage.
- If you are using Amazon S3 to back up your cluster, you can grant permissions in three methods. For more information, refer to .
- If you are using other S3-compatible storage (such as Ceph and MinIO) to back up your cluster, you can grant permissions by using AccessKey and SecretKey.
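For example, when using AccessKey and SecretKey, the credentials are usually stored in a Kubernetes Secret. The following sketch assumes the Secret name s3-secret, which is the secretName referenced in the manifests below; replace the placeholder values with your own credentials:
kubectl create secret generic s3-secret --from-literal=access_key=${access_key} --from-literal=secret_key=${secret_key} --namespace=test1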
For a TiDB version earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or a later version, skip these preparation steps.
- Make sure that you have the SELECT and UPDATE privileges on the mysql.tidb table of the backup database so that the Backup CR can adjust the GC time before and after the backup (a sample GRANT statement is shown after these steps).
- Create the backup-demo1-tidb-secret secret to store the account and password to access the TiDB cluster:
kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=${password} --namespace=test1
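The SELECT and UPDATE privileges mentioned in the first step can be granted with a statement such as the following, issued through any MySQL client; ${tidb_user} here is assumed to be the account referenced by the Backup CR:
mysql -h ${tidb_host} -P ${tidb_port} -u root -p -e "GRANT SELECT, UPDATE ON mysql.tidb TO '${tidb_user}'@'%';"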
Depending on which method you choose to grant permissions to the remote storage when preparing for the ad-hoc backup, export your data to the S3-compatible storage by doing one of the following:
Method 1: If you grant permissions by importing AccessKey and SecretKey, create the Backup CR to back up cluster data as described below:
kubectl apply -f backup-aws-s3.yaml
The content of backup-aws-s3.yaml is as follows:
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  br:
    cluster: demo1
    clusterNamespace: test1
    # logLevel: info
    # statusAddr: ${status_addr}
    # concurrency: 4
    # rateLimit: 0
    # timeAgo: ${time}
    # checksum: true
    # sendCredToTikv: true
    # options:
    # - --lastbackupts=420134118382108673
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  from:
    host: ${tidb_host}
    port: ${tidb_port}
    user: ${tidb_user}
    secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    secretName: s3-secret
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Method 2: If you grant permissions by associating IAM with Pod, create the Backup CR to back up cluster data as described below:
kubectl apply -f backup-aws-s3.yaml
The content of backup-aws-s3.yaml is as follows:
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
spec:
  backupType: full
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
    # logLevel: info
    # statusAddr: ${status_addr}
    # concurrency: 4
    # rateLimit: 0
    # timeAgo: ${time}
    # checksum: true
    # options:
    # - --lastbackupts=420134118382108673
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  from:
    host: ${tidb_host}
    port: ${tidb_port}
    user: ${tidb_user}
    secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Method 3: If you grant permissions by associating IAM with ServiceAccount, create the Backup CR to back up cluster data as described below:
kubectl apply -f backup-aws-s3.yaml
The content of backup-aws-s3.yaml depends on the data you want to back up. See the examples after the configuration notes below, starting with Back up data of all clusters.
When configuring backup-aws-s3.yaml, note the following:
- Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp --lastbackupts in spec.br.options (see the example after this list). For the limitations of incremental backup, refer to .
- Some parameters in .spec.br are optional, such as logLevel and statusAddr. For more information about BR configuration, refer to BR fields.
- For v4.0.8 or a later version, BR can automatically adjust tikv_gc_life_time. You do not need to configure the spec.tikvGCLifeTime and spec.from fields in the Backup CR.
- For more information about the Backup CR fields, refer to .
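For incremental backup, the commit timestamp of the previous backup is recorded in that backup's CR status; assuming the previous ad-hoc backup was created by the demo1-backup-s3 CR, you can read it with a command like the following and pass the returned value to --lastbackupts in spec.br.options of the next Backup CR:
kubectl get bk demo1-backup-s3 -n test1 -o jsonpath='{.status.commitTs}'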
After you create the CR, TiDB Operator starts the backup automatically. You can view the backup status by running the following command:
kubectl get bk -n test1 -o wide
Back up data of all clusters
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Back up data of a single database
The following example backs up data of the db1 database.
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.*"
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Back up data of a single table
The following example backs up data of the db1.table1 table.
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
Back up data of multiple tables using the table filter
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3
  namespace: test1
spec:
  backupType: full
  serviceAccount: tidb-backup-manager
  tableFilter:
  - "db1.table1"
  - "db1.table2"
  # ...
  br:
    cluster: demo1
    sendCredToTikv: false
    clusterNamespace: test1
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  # from:
  #   host: ${tidb_host}
  #   port: ${tidb_port}
  #   user: ${tidb_user}
  #   secretName: backup-demo1-tidb-secret
  s3:
    provider: aws
    region: us-west-1
    bucket: my-bucket
    prefix: my-folder
You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid excessive backup items. A scheduled full backup is described by a custom BackupSchedule
CR object. A full backup is triggered at each backup time point. Its underlying implementation is the ad-hoc full backup.
The steps to prepare for a scheduled full backup are the same as those of preparing for an ad-hoc backup.
Depending on which method you choose to grant permissions to the remote storage, perform a scheduled full backup by doing one of the following:
Method 1: If you grant permissions by importing AccessKey and SecretKey, create the BackupSchedule CR, and back up cluster data as described below:
kubectl apply -f backup-scheduler-aws-s3.yaml
The content of backup-scheduler-aws-s3.yaml follows the same structure as the Method 2 and Method 3 examples below.
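A minimal sketch, assuming the same s3-secret Secret and backup-demo1-tidb-secret created in the preparation steps above; adjust the values to your environment:
---
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo1-backup-schedule-s3
  namespace: test1
spec:
  #maxBackups: 5
  #pause: true
  maxReservedTime: "3h"
  schedule: "*/2 * * * *"
  backupTemplate:
    backupType: full
    br:
      cluster: demo1
      clusterNamespace: test1
      # sendCredToTikv: true
    # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
    from:
      host: ${tidb_host}
      port: ${tidb_port}
      user: ${tidb_user}
      secretName: backup-demo1-tidb-secret
    s3:
      provider: aws
      secretName: s3-secret
      region: us-west-1
      bucket: my-bucket
      prefix: my-folder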
Method 2: If you grant permissions by associating IAM with the Pod, create the BackupSchedule CR, and back up cluster data as described below:
kubectl apply -f backup-scheduler-aws-s3.yaml
The content of backup-scheduler-aws-s3.yaml is as follows:
---
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo1-backup-schedule-s3
  namespace: test1
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
spec:
  #maxBackups: 5
  #pause: true
  maxReservedTime: "3h"
  schedule: "*/2 * * * *"
  backupTemplate:
    backupType: full
    br:
      cluster: demo1
      sendCredToTikv: false
      clusterNamespace: test1
      # logLevel: info
      # statusAddr: ${status_addr}
      # concurrency: 4
      # rateLimit: 0
      # timeAgo: ${time}
      # checksum: true
    # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
    from:
      host: ${tidb_host}
      port: ${tidb_port}
      user: ${tidb_user}
      secretName: backup-demo1-tidb-secret
    s3:
      provider: aws
      region: us-west-1
      bucket: my-bucket
      prefix: my-folder
Method 3: If you grant permissions by associating IAM with ServiceAccount, create the BackupSchedule CR, and back up cluster data as described below:
kubectl apply -f backup-scheduler-aws-s3.yaml
The content of backup-scheduler-aws-s3.yaml is as follows:
---
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo1-backup-schedule-s3
  namespace: test1
spec:
  #maxBackups: 5
  #pause: true
  maxReservedTime: "3h"
  schedule: "*/2 * * * *"
  serviceAccount: tidb-backup-manager
  backupTemplate:
    backupType: full
    br:
      cluster: demo1
      sendCredToTikv: false
      clusterNamespace: test1
      # logLevel: info
      # statusAddr: ${status_addr}
      # concurrency: 4
      # rateLimit: 0
      # timeAgo: ${time}
      # checksum: true
    # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
    from:
      host: ${tidb_host}
      port: ${tidb_port}
      user: ${tidb_user}
      secretName: backup-demo1-tidb-secret
    s3:
      provider: aws
      region: us-west-1
      bucket: my-bucket
      prefix: my-folder
From the above content in backup-scheduler-aws-s3.yaml, you can see that the backupSchedule configuration consists of two parts. One is the unique configuration of backupSchedule, and the other is backupTemplate.
- For the unique configuration of backupSchedule, refer to (see also the pause example after this list).
- backupTemplate specifies the configuration related to the cluster and remote storage, which is the same as the spec configuration of the Backup CR.
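For example, an existing schedule can be paused or resumed by toggling spec.pause without editing the full manifest; a sketch, assuming bks is the short name of the BackupSchedule CR (analogous to bk for Backup):
kubectl patch bks demo1-backup-schedule-s3 -n test1 --type merge -p '{"spec":{"pause":true}}'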
After creating the scheduled full backup, use the following command to check the backup status:
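Using the bks short name mentioned above, the command typically is:
kubectl get bks -n test1 -o wide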
You can use the following command to check all the backup items:
kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-s3 -n test1
If you no longer need the backup CR, refer to .