Back up Data to S3-Compatible Storage Using BR

    The backup method described in this document is implemented based on CustomResourceDefinitions (CRDs) in TiDB Operator. In the underlying implementation, BR gets the backup data of the TiDB cluster and then sends the data to the AWS storage. BR stands for Backup & Restore, a command-line tool for distributed backup and restore of TiDB cluster data.

    If you have the following backup needs, you can use BR to make an ad-hoc or scheduled full backup of the TiDB cluster data to S3-compatible storage.

    • To back up a large volume of data at a fast speed
    • To get a direct backup of data as SST files (key-value pairs)

    For other backup needs, refer to the backup and restore overview to choose an appropriate backup method.

    Note

    • BR is only applicable to TiDB v3.1 or later releases.
    • Data that is backed up using BR can only be restored to TiDB instead of other databases.

    Ad-hoc backup

    Ad-hoc backup supports both full backup and incremental backup.

    To perform an ad-hoc backup, you need to create a Backup Custom Resource (CR) object to describe the backup details. TiDB Operator then performs the specific backup operation based on this Backup object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle the error manually.

    This document provides an example of how to back up the data of the demo1 TiDB cluster in the test1 Kubernetes namespace to AWS storage. The detailed steps are as follows.

    1. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the test1 namespace:
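      The command itself is not shown in this copy. Assuming backup-rbac.yaml has been downloaded to the current working directory, it would typically be:

      ```shell
      kubectl apply -f backup-rbac.yaml -n test1
      ```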

    2. Grant permissions to the remote storage.

      • If you are using Amazon S3 to back up your cluster, you can grant permissions using one of three methods. For more information, refer to the documentation on granting AWS account permissions.
      • If you are using other S3-compatible storage (such as Ceph or MinIO) to back up your cluster, you can grant permissions by using AccessKey and SecretKey.
    3. For a TiDB version earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or a later version, skip these preparation steps.

      1. Make sure that the database account used for the backup has the SELECT and UPDATE privileges on the mysql.tidb table, so that the Backup CR can adjust the GC time before and after the backup.

      2. Create the backup-demo1-tidb-secret secret to store the account and password to access the TiDB cluster:

        ```shell
        kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=${password} --namespace=test1
        ```
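      The privilege requirement in the first preparation step can be granted with a statement like the following; backup_user is a placeholder for whatever account you configure in the Backup CR:

      ```sql
      -- Grant the privileges BR needs to adjust the GC time (placeholder account name)
      GRANT SELECT, UPDATE ON mysql.tidb TO 'backup_user'@'%';
      ```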

    Depending on which method you choose to grant permissions to the remote storage when preparing for the ad-hoc backup, export your data to the S3-compatible storage by doing one of the following:

    • Method 1: If you grant permissions by importing AccessKey and SecretKey, create the Backup CR to back up cluster data as described below:

      ```shell
      kubectl apply -f backup-aws-s3.yaml
      ```

      The content of backup-aws-s3.yaml is as follows:

      ```yaml
      ---
      apiVersion: pingcap.com/v1alpha1
      kind: Backup
      metadata:
        name: demo1-backup-s3
        namespace: test1
      spec:
        backupType: full
        br:
          cluster: demo1
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status_addr}
          # concurrency: 4
          # rateLimit: 0
          # timeAgo: ${time}
          # checksum: true
          # sendCredToTikv: true
          # options:
          # - --lastbackupts=420134118382108673
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        from:
          host: ${tidb_host}
          port: ${tidb_port}
          user: ${tidb_user}
          secretName: backup-demo1-tidb-secret
        s3:
          provider: aws
          secretName: s3-secret
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder
      ```
    • Method 2: If you grant permissions by associating IAM with Pod, create the Backup CR to back up cluster data as described below:

      ```shell
      kubectl apply -f backup-aws-s3.yaml
      ```

      The content of backup-aws-s3.yaml is as follows:

      ```yaml
      ---
      apiVersion: pingcap.com/v1alpha1
      kind: Backup
      metadata:
        name: demo1-backup-s3
        namespace: test1
        annotations:
          iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
      spec:
        backupType: full
        br:
          cluster: demo1
          sendCredToTikv: false
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status_addr}
          # concurrency: 4
          # rateLimit: 0
          # timeAgo: ${time}
          # checksum: true
          # options:
          # - --lastbackupts=420134118382108673
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        from:
          host: ${tidb_host}
          port: ${tidb_port}
          user: ${tidb_user}
          secretName: backup-demo1-tidb-secret
        s3:
          provider: aws
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder
      ```
    • Method 3: If you grant permissions by associating IAM with ServiceAccount, create the Backup CR to back up cluster data as described below:

      ```shell
      kubectl apply -f backup-aws-s3.yaml
      ```

      The content of backup-aws-s3.yaml is as follows:
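      Based on the Method 2 example and the ServiceAccount-based examples later in this document, the CR would look along these lines (a sketch; the optional br fields are omitted for brevity):

      ```yaml
      ---
      apiVersion: pingcap.com/v1alpha1
      kind: Backup
      metadata:
        name: demo1-backup-s3
        namespace: test1
      spec:
        backupType: full
        serviceAccount: tidb-backup-manager
        br:
          cluster: demo1
          sendCredToTikv: false
          clusterNamespace: test1
        s3:
          provider: aws
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder
      ```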

    When configuring backup-aws-s3.yaml, note the following:

    • Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp --lastbackupts in spec.br.options. For the limitations of incremental backup, refer to the BR documentation.
    • Some parameters in .spec.br are optional, such as logLevel and statusAddr. For more information about BR configuration, refer to BR fields.
    • For TiDB v4.0.8 or a later version, BR can automatically adjust tikv_gc_life_time. You do not need to configure the spec.tikvGCLifeTime and spec.from fields in the Backup CR.
    • For more information about the Backup CR fields, refer to the Backup CR field reference.
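    For example, building on the Backup CR above, an incremental backup would uncomment and set the options field (the timestamp shown is the illustrative value used in this document):

    ```yaml
    spec:
      br:
        options:
        - --lastbackupts=420134118382108673
    ```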

    After you create the CR, TiDB Operator starts the backup automatically. You can view the backup status by running the following command:

    ```shell
    kubectl get bk -n test1 -o wide
    ```

    Back up data of all clusters

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-backup-s3
      namespace: test1
    spec:
      backupType: full
      serviceAccount: tidb-backup-manager
      br:
        cluster: demo1
        sendCredToTikv: false
        clusterNamespace: test1
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # from:
      #   host: ${tidb_host}
      #   port: ${tidb_port}
      #   user: ${tidb_user}
      #   secretName: backup-demo1-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-folder
    ```

    Back up data of a single database

    The following example backs up data of the db1 database.

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-backup-s3
      namespace: test1
    spec:
      backupType: full
      serviceAccount: tidb-backup-manager
      tableFilter:
      - "db1.*"
      br:
        cluster: demo1
        sendCredToTikv: false
        clusterNamespace: test1
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # from:
      #   host: ${tidb_host}
      #   port: ${tidb_port}
      #   user: ${tidb_user}
      #   secretName: backup-demo1-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-folder
    ```

    Back up data of a single table

    The following example backs up data of the db1.table1 table.

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-backup-s3
      namespace: test1
    spec:
      backupType: full
      serviceAccount: tidb-backup-manager
      tableFilter:
      - "db1.table1"
      br:
        cluster: demo1
        sendCredToTikv: false
        clusterNamespace: test1
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # from:
      #   host: ${tidb_host}
      #   port: ${tidb_port}
      #   user: ${tidb_user}
      #   secretName: backup-demo1-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-folder
    ```

    Back up data of multiple tables using the table filter

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-backup-s3
      namespace: test1
    spec:
      backupType: full
      serviceAccount: tidb-backup-manager
      tableFilter:
      - "db1.table1"
      - "db1.table2"
      # ...
      br:
        cluster: demo1
        sendCredToTikv: false
        clusterNamespace: test1
      # from:
      #   host: ${tidb_host}
      #   port: ${tidb_port}
      #   user: ${tidb_user}
      #   secretName: backup-demo1-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-folder
    ```

    Scheduled full backup

    You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid excessive backup items. A scheduled full backup is described by a custom BackupSchedule CR object. A full backup is triggered at each backup time point. Its underlying implementation is the ad-hoc full backup.

    The steps to prepare for a scheduled full backup are the same as those to prepare for an ad-hoc backup.

    Depending on which method you choose to grant permissions to the remote storage, perform a scheduled full backup by doing one of the following:

    • Method 1: If you grant permissions by importing AccessKey and SecretKey, create the BackupSchedule CR, and back up cluster data as described below:

      ```shell
      kubectl apply -f backup-scheduler-aws-s3.yaml
      ```

      The content of backup-scheduler-aws-s3.yaml is as follows:
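      Following the pattern of the ad-hoc Method 1 example, a BackupSchedule CR using AccessKey and SecretKey would look along these lines (a sketch; the s3.secretName value is the same assumption used in that example):

      ```yaml
      ---
      apiVersion: pingcap.com/v1alpha1
      kind: BackupSchedule
      metadata:
        name: demo1-backup-schedule-s3
        namespace: test1
      spec:
        #maxBackups: 5
        #pause: true
        maxReservedTime: "3h"
        schedule: "*/2 * * * *"
        backupTemplate:
          backupType: full
          br:
            cluster: demo1
            clusterNamespace: test1
          s3:
            provider: aws
            secretName: s3-secret
            region: us-west-1
            bucket: my-bucket
            prefix: my-folder
      ```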

    • Method 2: If you grant permissions by associating IAM with the Pod, create the BackupSchedule CR, and back up cluster data as described below:

      ```shell
      kubectl apply -f backup-scheduler-aws-s3.yaml
      ```

      The content of backup-scheduler-aws-s3.yaml is as follows:

      ```yaml
      ---
      apiVersion: pingcap.com/v1alpha1
      kind: BackupSchedule
      metadata:
        name: demo1-backup-schedule-s3
        namespace: test1
        annotations:
          iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
      spec:
        #maxBackups: 5
        #pause: true
        maxReservedTime: "3h"
        schedule: "*/2 * * * *"
        backupTemplate:
          backupType: full
          br:
            cluster: demo1
            sendCredToTikv: false
            clusterNamespace: test1
            # logLevel: info
            # statusAddr: ${status_addr}
            # concurrency: 4
            # rateLimit: 0
            # timeAgo: ${time}
            # checksum: true
          # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
          from:
            host: ${tidb_host}
            port: ${tidb_port}
            user: ${tidb_user}
            secretName: backup-demo1-tidb-secret
          s3:
            provider: aws
            region: us-west-1
            bucket: my-bucket
            prefix: my-folder
      ```
    • Method 3: If you grant permissions by associating IAM with ServiceAccount, create the BackupSchedule CR, and back up cluster data as described below:

      ```shell
      kubectl apply -f backup-scheduler-aws-s3.yaml
      ```

      The content of backup-scheduler-aws-s3.yaml is as follows:

      ```yaml
      ---
      apiVersion: pingcap.com/v1alpha1
      kind: BackupSchedule
      metadata:
        name: demo1-backup-schedule-s3
        namespace: test1
      spec:
        #maxBackups: 5
        #pause: true
        maxReservedTime: "3h"
        schedule: "*/2 * * * *"
        serviceAccount: tidb-backup-manager
        backupTemplate:
          backupType: full
          br:
            cluster: demo1
            sendCredToTikv: false
            clusterNamespace: test1
            # logLevel: info
            # statusAddr: ${status_addr}
            # concurrency: 4
            # rateLimit: 0
            # timeAgo: ${time}
            # checksum: true
          # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
          from:
            host: ${tidb_host}
            port: ${tidb_port}
            user: ${tidb_user}
            secretName: backup-demo1-tidb-secret
          s3:
            provider: aws
            region: us-west-1
            bucket: my-bucket
            prefix: my-folder
      ```

    As you can see from the content of backup-scheduler-aws-s3.yaml above, the backupSchedule configuration consists of two parts: the unique configuration of backupSchedule, and backupTemplate.

    • For the unique configuration of backupSchedule, refer to the BackupSchedule CR field reference.
    • backupTemplate specifies the configuration related to the cluster and remote storage, which is the same as the spec configuration of the Backup CR.

    After creating the scheduled full backup, use the following command to check the backup status:
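    Assuming bks is the short name for the backupschedules resource (as bk is used for backups above), the status check would be:

    ```shell
    kubectl get bks -n test1 -o wide
    ```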

    You can use the following command to check all the backup items:

    ```shell
    kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-s3 -n test1
    ```

    If you no longer need the backup CR, refer to the instructions on deleting the Backup CR.