# Deploy the HTAP Storage Engine TiFlash for an Existing TiDB Cluster
This document is applicable to scenarios in which you already have a TiDB cluster and need to use TiDB HTAP capabilities by deploying TiFlash, such as the following:
- Hybrid workload scenarios with online real-time analytic processing
- Real-time stream processing scenarios
- Data hub scenarios
If you need to deploy TiFlash for an existing TiDB cluster, do the following:
> **Note:**
>
> If your server does not have external network access, you can download the required Docker image on a machine that does have external network access, upload the Docker image to your server, and then use `docker load` to install the Docker image on the server. For details, see Deploy the TiDB cluster.
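As a sketch of that offline workflow (the image tag `${version}` is a placeholder that you must replace with the actual TiFlash version):

```shell
# On the machine with external network access:
docker pull pingcap/tiflash:${version}
docker save -o tiflash-${version}.tar pingcap/tiflash:${version}

# After uploading the tar file to the target server:
docker load -i tiflash-${version}.tar
```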
Edit the `TidbCluster` Custom Resource (CR) and add the TiFlash configuration, as in the following example:

```yaml
spec:
  tiflash:
    # To deploy the enterprise edition of TiFlash, change the value of `baseImage` to `pingcap/tiflash-enterprise`.
    baseImage: pingcap/tiflash
    maxFailoverCount: 0
    replicas: 1
    storageClaims:
    - resources:
        requests:
          storage: 100Gi
      storageClassName: local-storage
```
TiFlash supports mounting multiple Persistent Volumes (PVs). If you want to configure multiple PVs for TiFlash, configure multiple `resources` items in `tiflash.storageClaims`, each `resources` with a separate `requests.storage` and `storageClassName`. For example:

```yaml
tiflash:
  baseImage: pingcap/tiflash
  maxFailoverCount: 0
  replicas: 1
  storageClaims:
  - resources:
      requests:
        storage: 100Gi
    storageClassName: local-storage
  - resources:
      requests:
        storage: 100Gi
    storageClassName: local-storage
```
Configure the relevant parameters of `spec.tiflash.config` in the TidbCluster CR. For example:

```yaml
spec:
  tiflash:
    config:
      config: |
        [flash]
          [flash.flash_cluster]
            log = "/data0/logs/flash_cluster_manager.log"
        [logger]
          count = 10
          level = "information"
          errorlog = "/data0/logs/error.log"
          log = "/data0/logs/server.log"
```
For more TiFlash parameters that can be configured, refer to the TiFlash configuration documentation.
> **Note:**
>
> For different TiFlash versions, note the following configuration differences:
>
> - If the TiFlash version is v4.0.4 or earlier, you need to set `spec.tiflash.config.config.flash.service_addr` to `${clusterName}-tiflash-POD_NUM.${clusterName}-tiflash-peer.${namespace}.svc:3930` in the TidbCluster CR, where `${clusterName}` and `${namespace}` need to be replaced with the actual values.
> - If the TiFlash version is v4.0.5 or later, you do not need to manually configure `spec.tiflash.config.config.flash.service_addr`.
> - If you upgrade from TiFlash v4.0.4 or an earlier version to v4.0.5 or a later version, you need to delete the `spec.tiflash.config.config.flash.service_addr` configuration from the `TidbCluster` CR.
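For versions earlier than v4.0.5, the setting above maps onto the TOML block in `spec.tiflash.config.config`. A sketch of what that might look like (the `${clusterName}`, `POD_NUM`, and `${namespace}` placeholders must be replaced with the actual values for your cluster):

```yaml
spec:
  tiflash:
    config:
      config: |
        [flash]
          # Required only for TiFlash v4.0.4 or earlier;
          # remove this line after upgrading to v4.0.5 or later.
          service_addr = "${clusterName}-tiflash-POD_NUM.${clusterName}-tiflash-peer.${namespace}.svc:3930"
```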
Edit the TidbCluster Custom Resource (CR):

```shell
kubectl edit tc ${cluster_name} -n ${namespace}
```
TiDB Operator automatically mounts PVs in the order of the items in the `storageClaims` list. If you need to add more `resources` items for TiFlash, make sure to append the new items only to the end of the list, and DO NOT modify the order of the original items. For example:

```yaml
tiflash:
  baseImage: pingcap/tiflash
  maxFailoverCount: 0
  replicas: 1
  storageClaims:
  - resources:
      requests:
        storage: 100Gi
    storageClassName: local-storage
  - resources:
      requests:
        storage: 100Gi
    storageClassName: local-storage
  - resources:                       # newly added
      requests:                      # newly added
        storage: 100Gi               # newly added
    storageClassName: local-storage  # newly added
```
Manually delete the TiFlash StatefulSet, and then wait for TiDB Operator to recreate the TiFlash StatefulSet with the new configuration.
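The StatefulSet can be deleted with `kubectl delete`, using the same label selectors that appear in the removal steps later in this document:

```shell
kubectl delete statefulsets -n ${namespace} -l app.kubernetes.io/component=tiflash,app.kubernetes.io/instance=${cluster_name}
```

TiDB Operator detects the deletion and recreates the StatefulSet; the TiFlash Pods restart with the updated `storageClaims`.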
If your TiDB cluster no longer needs the TiDB HTAP storage engine TiFlash, take the following steps to remove TiFlash:
Adjust the number of replicas of the tables replicated to the TiFlash cluster.

To completely remove TiFlash, you need to set the number of replicas of all tables replicated to TiFlash to `0`. To connect to the TiDB service, refer to the steps in Access the TiDB Cluster in Kubernetes.

To adjust the number of replicas of the tables replicated to the TiFlash cluster, run the following command:

```sql
ALTER TABLE <db_name>.<table_name> SET TIFLASH REPLICA 0;
```
Wait for the TiFlash replicas of the related tables to be deleted.

Connect to the TiDB service and run the following command. If you cannot find the replication information of the related tables, the replicas are deleted:

```sql
SELECT * FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = '<db_name>' AND TABLE_NAME = '<table_name>';
```
To remove the TiFlash Pods, run the following command to set `spec.tiflash.replicas` to `0`:

```shell
kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"tiflash":{"replicas": 0}}}'
```
Check the state of the TiFlash Pods and TiFlash stores.

To check the TiFlash Pods, run the following command:

```shell
kubectl get pod -n ${namespace} -l app.kubernetes.io/component=tiflash,app.kubernetes.io/instance=${cluster_name}
```

If the output is empty, the Pods of the TiFlash cluster are deleted successfully.

To check whether the TiFlash stores are in the `Tombstone` state, run the following command:

```shell
kubectl get tidbcluster ${cluster_name} -n ${namespace} -o yaml
```

The `status.tiflash` field in the output lists the TiFlash stores and their states. Only after you successfully delete all Pods of the TiFlash cluster and all the TiFlash stores have changed to the `Tombstone` state can you perform the next operation.
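As an illustrative sketch of what `status.tiflash` might look like once all stores are tombstoned (the store ID and address below are hypothetical, and real output contains additional fields):

```yaml
status:
  tiflash:
    tombstoneStores:
      "88":
        id: "88"
        ip: ${cluster_name}-tiflash-0.${cluster_name}-tiflash-peer.${namespace}.svc
        state: Tombstone
```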
Delete the TiFlash StatefulSet.
To modify the TidbCluster CR and delete the `spec.tiflash` field, run the following command:

```shell
kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type json -p '[{"op":"remove", "path":"/spec/tiflash"}]'
```
To delete the TiFlash StatefulSet, run the following command:

```shell
kubectl delete statefulsets -n ${namespace} -l app.kubernetes.io/component=tiflash,app.kubernetes.io/instance=${cluster_name}
```

To check whether the StatefulSet of the TiFlash cluster is deleted successfully, run the following command:

```shell
kubectl get sts -n ${namespace} -l app.kubernetes.io/component=tiflash,app.kubernetes.io/instance=${cluster_name}
```

If the output is empty, the StatefulSet of the TiFlash cluster is deleted successfully.
(Optional) Delete the PVCs and PVs.

If you confirm that you no longer need the data in TiFlash and want to delete it, strictly follow the steps below to delete the data in TiFlash.

Delete the PVC objects corresponding to the PVs:

```shell
kubectl delete pvc -n ${namespace} -l app.kubernetes.io/component=tiflash,app.kubernetes.io/instance=${cluster_name}
```
If the PV reclaim policy is `Retain`, the corresponding PV is still retained after you delete the PVC object. If you want to delete the PV, set the reclaim policy of the PV to `Delete`, and the PV is then deleted and recycled automatically:

```shell
kubectl patch pv ${pv_name} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```

In the above command, `${pv_name}` represents the PV name of the TiFlash cluster. You can check the PV name by running the following command:
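A plausible form of that command, reusing the component and instance labels that appear on the other TiFlash resources in this document (the exact labels on your PVs may differ depending on how they were provisioned):

```shell
kubectl get pv -l app.kubernetes.io/component=tiflash,app.kubernetes.io/instance=${cluster_name}
```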