Quick Start in Kubernetes

    Prerequisites

    • Helm 3.1.0+
    • Kubernetes 1.12+
    • PV provisioner support in the underlying infrastructure
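
    To confirm that the client-side tooling meets the requirements above, a quick check (a minimal sketch; the server version reported by kubectl must also be 1.12 or later):

        helm version
        kubectl version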

    Install dolphinscheduler

    To install the chart with the release name dolphinscheduler into the test namespace, run the following command:

        $ helm install dolphinscheduler . -n test

    This command deploys DolphinScheduler on the Kubernetes cluster with the default configuration. The Appendix: Configuration section below lists the parameters that can be configured during installation.

    Tip: to list all releases, run helm list.

    The PostgreSQL (user root, password root, database dolphinscheduler) and ZooKeeper services will start by default.

    If ingress.enabled in values.yaml is set to true, just visit http://${ingress.host}/dolphinscheduler in your browser.
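
    A minimal sketch of the corresponding values.yaml section, assuming the default chart values listed in the appendix (ingress.enabled, ingress.host and ingress.path; the host below is only an example):

        ingress:
          enabled: true
          host: "dolphinscheduler.org"
          path: "/dolphinscheduler"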

    Tip: if you run into problems accessing through ingress, contact the Kubernetes administrator and check the Ingress resources.

    Otherwise, when api.service.type=ClusterIP, you need to run a port-forward command:

        $ kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345
        $ kubectl port-forward --address 0.0.0.0 -n test svc/dolphinscheduler-api 12345:12345  # with test namespace

    Then access the web UI (the local address is http://127.0.0.1:12345/dolphinscheduler).

    Or, when api.service.type=NodePort, you need to run the following commands:

        NODE_IP=$(kubectl get no -n {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
        NODE_PORT=$(kubectl get svc {{ template "dolphinscheduler.fullname" . }}-api -n {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}")
        echo http://$NODE_IP:$NODE_PORT/dolphinscheduler
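
    If the service is still of type ClusterIP, it can be switched with a Helm override (a sketch; api.service.type is a regular chart value from the appendix, so --set or an edited values.yaml both work):

        $ helm upgrade dolphinscheduler . -n test --set api.service.type=NodePort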

    Then access the web UI: http://$NODE_IP:$NODE_PORT/dolphinscheduler

    The default username is admin and the default password is dolphinscheduler123.

    Please refer to the Quick Start section of the user manual to learn how to use DolphinScheduler.

    Uninstall dolphinscheduler

    To uninstall the release named dolphinscheduler, run:

        $ helm uninstall dolphinscheduler

    This command removes all the Kubernetes components associated with dolphinscheduler (except PVCs) and deletes the release.

    To delete the PVCs associated with dolphinscheduler, run:

        $ kubectl delete pvc -l app.kubernetes.io/instance=dolphinscheduler

    Note: deleting the PVCs also deletes all data. Please be careful!

    The configuration file is values.yaml, and the Appendix: Configuration table lists the configurable parameters of DolphinScheduler and their default values.
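
    Any parameter in that table can be overridden at install or upgrade time; a minimal sketch (the override file name is only an example):

        $ helm install dolphinscheduler . -n test --set timezone=Asia/Shanghai
        $ helm install dolphinscheduler . -n test -f custom-values.yaml  # or keep overrides in a file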

    Support Matrix

    How to view the logs of a pod container?

    List all pods (alias po):

        kubectl get po
        kubectl get po -n test  # with test namespace

    View the logs of the pod container named dolphinscheduler-master-0:

        kubectl logs dolphinscheduler-master-0
        kubectl logs -f dolphinscheduler-master-0  # follow the log output
        kubectl logs --tail 10 dolphinscheduler-master-0 -n test  # show the last 10 lines of the log

    How to scale api, master and worker on Kubernetes?

    List all deployments (alias deploy):

        kubectl get deploy
        kubectl get deploy -n test  # with test namespace

    Scale api to 3 replicas:

        kubectl scale --replicas=3 deploy dolphinscheduler-api
        kubectl scale --replicas=3 deploy dolphinscheduler-api -n test  # with test namespace

    List all statefulsets (alias sts):

        kubectl get sts
        kubectl get sts -n test  # with test namespace

    Scale master to 2 replicas:

        kubectl scale --replicas=2 sts dolphinscheduler-master
        kubectl scale --replicas=2 sts dolphinscheduler-master -n test  # with test namespace

    Scale worker to 6 replicas (the commands are missing here; see the sketch below):
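
    By analogy with the master statefulset above, the worker commands would presumably be:

        kubectl scale --replicas=6 sts dolphinscheduler-worker
        kubectl scale --replicas=6 sts dolphinscheduler-worker -n test  # with test namespace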

    How to replace PostgreSQL with MySQL as DolphinScheduler's database?

    Because of commercial licensing, we cannot ship the MySQL driver directly.

    If you want to use MySQL, you can build a new image based on the official image apache/dolphinscheduler.

    1. Download the MySQL driver mysql-connector-java-5.1.49.jar (version >= 5.1.47 is required)

    2. Create a new Dockerfile to add the MySQL driver:

        FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.6
        COPY mysql-connector-java-5.1.49.jar /opt/dolphinscheduler/lib

    3. Build a new image that includes the MySQL driver:

        docker build -t apache/dolphinscheduler:mysql-driver .

    4. Push the docker image apache/dolphinscheduler:mysql-driver to a docker registry

    5. Modify the repository field under image in values.yaml and update tag to mysql-driver

    6. Modify postgresql enabled to false in values.yaml

    7. Modify the externalDatabase configuration in values.yaml (especially host, username and password):

        externalDatabase:
          type: "mysql"
          driver: "com.mysql.jdbc.Driver"
          host: "localhost"
          port: "3306"
          username: "root"
          database: "dolphinscheduler"
          params: "useUnicode=true&characterEncoding=UTF-8"

    8. Deploy dolphinscheduler (see Install dolphinscheduler)
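
    The same overrides can also be passed on the command line instead of editing values.yaml; a sketch using --set (the host and password values are placeholders):

        $ helm install dolphinscheduler . -n test \
            --set image.tag=mysql-driver \
            --set postgresql.enabled=false \
            --set externalDatabase.type=mysql \
            --set externalDatabase.host=mysql.example.com \
            --set externalDatabase.username=root \
            --set externalDatabase.password=changeme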

    How to support MySQL datasources in the Datasource Center?

    1. Download the MySQL driver mysql-connector-java-5.1.49.jar (version >= 5.1.47 is required)

    2. Create a new Dockerfile to add the MySQL driver:

        FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.6
        COPY mysql-connector-java-5.1.49.jar /opt/dolphinscheduler/lib

    3. Build a new image that includes the MySQL driver:

        docker build -t apache/dolphinscheduler:mysql-driver .

    4. Push the docker image apache/dolphinscheduler:mysql-driver to a docker registry

    5. Modify the repository field under image in values.yaml and update tag to mysql-driver

    6. Deploy dolphinscheduler (see Install dolphinscheduler)

    7. Add a MySQL datasource in the Datasource Center

    How to support Oracle datasources in the Datasource Center?

    Because of commercial licensing, we cannot ship the Oracle driver directly.

    If you want to add an Oracle datasource, you can build a new image based on the official image apache/dolphinscheduler.

    1. Download the Oracle driver (for example ojdbc8-19.9.0.0.jar)

    2. Create a new Dockerfile to add the Oracle driver:

        FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.6
        COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib

    3. Build a new image that includes the Oracle driver:

        docker build -t apache/dolphinscheduler:oracle-driver .

    4. Push the docker image apache/dolphinscheduler:oracle-driver to a docker registry

    5. Modify the repository field under image in values.yaml and update tag to oracle-driver

    6. Deploy dolphinscheduler (see Install dolphinscheduler)

    7. Add an Oracle datasource in the Datasource Center

    How to support Python 2 pip and a custom requirements.txt?

    1. Create a new Dockerfile to install pip:

        FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.6
        COPY requirements.txt /tmp
        RUN apt-get update && \
            apt-get install -y --no-install-recommends python-pip && \
            pip install --no-cache-dir -r /tmp/requirements.txt && \
            rm -rf /var/lib/apt/lists/*

    This command installs the default pip 18.1. If you want to upgrade pip, just add one more line:

        pip install --no-cache-dir -U pip && \

    2. Build a new image that includes pip:

        docker build -t apache/dolphinscheduler:pip .

    3. Push the docker image apache/dolphinscheduler:pip to a docker registry

    4. Modify the repository field under image in values.yaml and update tag to pip

    5. Deploy dolphinscheduler (see Install dolphinscheduler)

    6. Verify pip under a new Python task
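
    A minimal way to check, for example from a Shell task or by exec-ing into a worker pod (requests here is a hypothetical entry from requirements.txt; use whatever you actually installed):

        pip list | grep requests
        python -c "import requests; print(requests.__version__)"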

    How to support Python 3?

    1. Create a new Dockerfile to install Python 3:

        FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.6
        RUN apt-get update && \
            apt-get install -y --no-install-recommends python3 && \
            rm -rf /var/lib/apt/lists/*

    This command installs the default Python 3.7.3. If you also want to install pip3, just replace python3 with python3-pip.

    2. Build a new image that includes Python 3:

        docker build -t apache/dolphinscheduler:python3 .

    3. Push the docker image apache/dolphinscheduler:python3 to a docker registry

    4. Modify the repository field under image in values.yaml and update tag to python3

    5. Modify PYTHON_HOME to /usr/bin/python3 in values.yaml

    6. Deploy dolphinscheduler (see Install dolphinscheduler)
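
    A sketch of the corresponding values.yaml change (PYTHON_HOME lives under common.configmap, as listed in the appendix):

        common:
          configmap:
            PYTHON_HOME: "/usr/bin/python3"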

    To add Spark support, take Spark 2.4.7 as an example:

    1. Download the Spark 2.4.7 release binary spark-2.4.7-bin-hadoop2.7.tgz

    2. Make sure common.sharedStoragePersistence.enabled is turned on

    3. Deploy dolphinscheduler (see Install dolphinscheduler)

    4. Copy the Spark 2.4.7 binary package into the Docker container:

        kubectl cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
        kubectl cp -n test spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft  # with test namespace

    Because the storage volume sharedStoragePersistence is mounted at /opt/soft, none of the files under /opt/soft will be lost.

    5. Log in to the container and make sure SPARK_HOME2 exists:

        kubectl exec -it dolphinscheduler-worker-0 bash
        kubectl exec -n test -it dolphinscheduler-worker-0 bash  # with test namespace
        cd /opt/soft
        tar zxf spark-2.4.7-bin-hadoop2.7.tgz
        rm -f spark-2.4.7-bin-hadoop2.7.tgz
        ln -s spark-2.4.7-bin-hadoop2.7 spark2  # or just mv; SPARK_HOME2 defaults to /opt/soft/spark2
        $SPARK_HOME2/bin/spark-submit --version

    If everything goes well, the last command should print the Spark version information.

    6. Verify Spark under a Shell task:

        $SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.11-2.4.7.jar

    Check whether the task log contains the output Pi is roughly 3.146015

    7. Verify Spark under a Spark task

    The file spark-examples_2.11-2.4.7.jar needs to be uploaded to the Resource Center first; then create a Spark task and set:

    • Spark Version: SPARK2
    • Main Class: org.apache.spark.examples.SparkPi
    • Main Package: spark-examples_2.11-2.4.7.jar
    • Deploy Mode: local

    Likewise, check whether the task log contains the output Pi is roughly 3.146015

    8. Verify Spark on YARN

    Spark on YARN (deploy mode cluster or client) requires Hadoop support. Similar to the Spark support above, supporting Hadoop is almost the same as the previous steps.

    Make sure $HADOOP_HOME and $HADOOP_CONF_DIR exist.

    In fact, submitting applications with spark-submit works the same way for Spark 1, 2 and 3. In other words, SPARK_HOME2 means the second SPARK_HOME rather than the HOME of SPARK2, so simply setting SPARK_HOME2=/path/to/spark3 is enough.
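
    A sketch of the corresponding values.yaml override, assuming Spark 3 is unpacked to /opt/soft/spark3 inside the shared volume (common.configmap.SPARK_HOME2 is listed in the appendix):

        common:
          configmap:
            SPARK_HOME2: "/opt/soft/spark3"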

    Take Spark 3.1.1 as an example:

    1. Download the Spark 3.1.1 release binary spark-3.1.1-bin-hadoop2.7.tgz

    2. Make sure common.sharedStoragePersistence.enabled is turned on

    3. Deploy dolphinscheduler (see Install dolphinscheduler)

    4. Copy the Spark 3.1.1 binary package into the Docker container:

        kubectl cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft
        kubectl cp -n test spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker-0:/opt/soft  # with test namespace

    5. Log in to the container and make sure SPARK_HOME2 exists:

        kubectl exec -it dolphinscheduler-worker-0 bash
        kubectl exec -n test -it dolphinscheduler-worker-0 bash  # with test namespace
        cd /opt/soft
        tar zxf spark-3.1.1-bin-hadoop2.7.tgz
        rm -f spark-3.1.1-bin-hadoop2.7.tgz
        ln -s spark-3.1.1-bin-hadoop2.7 spark2  # or just mv
        $SPARK_HOME2/bin/spark-submit --version

    If everything goes well, the last command should print the Spark version information.

    6. Verify Spark under a Shell task:

        $SPARK_HOME2/bin/spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME2/examples/jars/spark-examples_2.12-3.1.1.jar

    Check whether the task log contains the output Pi is roughly 3.146015

    How to support shared storage between the Master, Worker and Api services?

    For example, the Master, Worker and Api services may use Hadoop at the same time.

    1. Modify the following configuration items in values.yaml:

        common:
          sharedStoragePersistence:
            enabled: true
            mountPath: "/opt/soft"
            accessModes:
            - "ReadWriteMany"
            storageClassName: "-"
            storage: "20Gi"

    storageClassName and storage need to be modified to actual values.

    Note: storageClassName must support the access mode ReadWriteMany.

    2. Copy Hadoop into the directory /opt/soft (a sketch follows this list)

    3. Make sure $HADOOP_HOME and $HADOOP_CONF_DIR are correct
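
    One way to get Hadoop into the shared volume, following the same kubectl cp pattern as the Spark example above (the tarball name and version are placeholders):

        kubectl cp -n test hadoop-2.7.7.tar.gz dolphinscheduler-worker-0:/opt/soft
        kubectl exec -n test -it dolphinscheduler-worker-0 bash
        cd /opt/soft && tar zxf hadoop-2.7.7.tar.gz && ln -s hadoop-2.7.7 hadoop  # HADOOP_HOME defaults to /opt/soft/hadoop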

    How to support local file storage instead of HDFS and S3?

    Modify the following configuration items in values.yaml:

        common:
          configmap:
            RESOURCE_STORAGE_TYPE: "HDFS"
            RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
            FS_DEFAULT_FS: "file:///"
          fsFileResourcePersistence:
            enabled: true
            accessModes:
            - "ReadWriteMany"
            storageClassName: "-"
            storage: "20Gi"

    storageClassName and storage need to be modified to actual values.

    How to support S3 resource storage, such as MinIO?

    Take MinIO as an example: modify the following configuration items in values.yaml:

        common:
          configmap:
            RESOURCE_STORAGE_TYPE: "S3"
            RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
            FS_DEFAULT_FS: "s3a://BUCKET_NAME"
            FS_S3A_ENDPOINT: "http://MINIO_IP:9000"
            FS_S3A_ACCESS_KEY: "MINIO_ACCESS_KEY"
            FS_S3A_SECRET_KEY: "MINIO_SECRET_KEY"

    BUCKET_NAME, MINIO_IP, MINIO_ACCESS_KEY and MINIO_SECRET_KEY need to be modified to actual values.

    How to support SkyWalking?

    Modify the SKYWALKING configuration items in values.yaml:

        common:
          configmap:
            SKYWALKING_ENABLE: "true"
            SW_AGENT_COLLECTOR_BACKEND_SERVICES: "127.0.0.1:11800"
            SW_GRPC_LOG_SERVER_HOST: "127.0.0.1"
            SW_GRPC_LOG_SERVER_PORT: "11800"

    Appendix: Configuration

    Parameter | Description | Default
    timezone | World time and date for cities in all time zones | Asia/Shanghai
    image.repository | Docker image repository for the DolphinScheduler | apache/dolphinscheduler
    image.tag | Docker image version for the DolphinScheduler | latest
    image.pullPolicy | Image pull policy. One of Always, Never, IfNotPresent | IfNotPresent
    image.pullSecret | Image pull secret. An optional reference to a secret in the same namespace to use for pulling any of the images | nil
    postgresql.enabled | If there is no external PostgreSQL, DolphinScheduler will use an internal PostgreSQL by default | true
    postgresql.postgresqlUsername | The username for internal PostgreSQL | root
    postgresql.postgresqlPassword | The password for internal PostgreSQL | root
    postgresql.postgresqlDatabase | The database for internal PostgreSQL | dolphinscheduler
    postgresql.persistence.enabled | Set postgresql.persistence.enabled to true to mount a new volume for internal PostgreSQL | false
    postgresql.persistence.size | PersistentVolumeClaim size | 20Gi
    postgresql.persistence.storageClass | PostgreSQL data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
    externalDatabase.type | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database type will use it | postgresql
    externalDatabase.driver | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database driver will use it | org.postgresql.Driver
    externalDatabase.host | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database host will use it | localhost
    externalDatabase.port | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database port will use it | 5432
    externalDatabase.username | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database username will use it | root
    externalDatabase.password | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database password will use it | root
    externalDatabase.database | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database name will use it | dolphinscheduler
    externalDatabase.params | If an external PostgreSQL exists and postgresql.enabled is set to false, DolphinScheduler's database params will use it | characterEncoding=utf8
    zookeeper.enabled | If there is no external Zookeeper, DolphinScheduler will use an internal Zookeeper by default | true
    zookeeper.fourlwCommandsWhitelist | A list of comma separated Four Letter Words commands to use | srvr,ruok,wchs,cons
    zookeeper.persistence.enabled | Set zookeeper.persistence.enabled to true to mount a new volume for internal Zookeeper | false
    zookeeper.persistence.size | PersistentVolumeClaim size | 20Gi
    zookeeper.persistence.storageClass | Zookeeper data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
    zookeeper.zookeeperRoot | Specify dolphinscheduler root directory in Zookeeper | /dolphinscheduler
    externalZookeeper.zookeeperQuorum | If an external Zookeeper exists and zookeeper.enabled is set to false, specify the Zookeeper quorum | 127.0.0.1:2181
    externalZookeeper.zookeeperRoot | If an external Zookeeper exists and zookeeper.enabled is set to false, specify the dolphinscheduler root directory in Zookeeper | /dolphinscheduler
    common.configmap.DOLPHINSCHEDULER_OPTS | The jvm options for dolphinscheduler, suitable for all servers | ""
    common.configmap.DATA_BASEDIR_PATH | User data directory path, self configuration, please make sure the directory exists and has read/write permissions | /tmp/dolphinscheduler
    common.configmap.RESOURCE_STORAGE_TYPE | Resource storage type: HDFS, S3, NONE | HDFS
    common.configmap.RESOURCE_UPLOAD_PATH | Resource store on HDFS/S3 path, please make sure the directory exists on hdfs and has read/write permissions | /dolphinscheduler
    common.configmap.FS_DEFAULT_FS | Resource storage file system like file:///, hdfs://mycluster:8020 or s3a://dolphinscheduler | file:///
    common.configmap.FS_S3A_ENDPOINT | S3 endpoint when common.configmap.RESOURCE_STORAGE_TYPE is set to S3 | s3.xxx.amazonaws.com
    common.configmap.FS_S3A_ACCESS_KEY | S3 access key when common.configmap.RESOURCE_STORAGE_TYPE is set to S3 | xxxxxxx
    common.configmap.FS_S3A_SECRET_KEY | S3 secret key when common.configmap.RESOURCE_STORAGE_TYPE is set to S3 | xxxxxxx
    common.configmap.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE | Whether to start up kerberos | false
    common.configmap.JAVA_SECURITY_KRB5_CONF_PATH | The java.security.krb5.conf path | /opt/krb5.conf
    common.configmap.LOGIN_USER_KEYTAB_USERNAME | The login user from keytab username | hdfs@HADOOP.COM
    common.configmap.LOGIN_USER_KEYTAB_PATH | The login user from keytab path | /opt/hdfs.keytab
    common.configmap.KERBEROS_EXPIRE_TIME | The kerberos expire time, the unit is hour | 2
    common.configmap.HDFS_ROOT_USER | The HDFS root user who must have the permission to create directories under the HDFS root path | hdfs
    common.configmap.RESOURCE_MANAGER_HTTPADDRESS_PORT | Set resource manager httpaddress port for yarn | 8088
    common.configmap.YARN_RESOURCEMANAGER_HA_RM_IDS | If resourcemanager HA is enabled, please set the HA IPs | nil
    common.configmap.YARN_APPLICATION_STATUS_ADDRESS | If resourcemanager is single, you only need to replace ds1 with the actual resourcemanager hostname, otherwise keep the default |
    common.configmap.SKYWALKING_ENABLE | Set whether to enable skywalking | false
    common.configmap.SW_AGENT_COLLECTOR_BACKEND_SERVICES | Set agent collector backend services for skywalking | 127.0.0.1:11800
    common.configmap.SW_GRPC_LOG_SERVER_HOST | Set grpc log server host for skywalking | 127.0.0.1
    common.configmap.SW_GRPC_LOG_SERVER_PORT | Set grpc log server port for skywalking | 11800
    common.configmap.HADOOP_HOME | Set HADOOP_HOME for DolphinScheduler's task environment | /opt/soft/hadoop
    common.configmap.HADOOP_CONF_DIR | Set HADOOP_CONF_DIR for DolphinScheduler's task environment | /opt/soft/hadoop/etc/hadoop
    common.configmap.SPARK_HOME1 | Set SPARK_HOME1 for DolphinScheduler's task environment | /opt/soft/spark1
    common.configmap.SPARK_HOME2 | Set SPARK_HOME2 for DolphinScheduler's task environment | /opt/soft/spark2
    common.configmap.PYTHON_HOME | Set PYTHON_HOME for DolphinScheduler's task environment | /usr/bin/python
    common.configmap.JAVA_HOME | Set JAVA_HOME for DolphinScheduler's task environment | /usr/local/openjdk-8
    common.configmap.HIVE_HOME | Set HIVE_HOME for DolphinScheduler's task environment | /opt/soft/hive
    common.configmap.FLINK_HOME | Set FLINK_HOME for DolphinScheduler's task environment | /opt/soft/flink
    common.configmap.DATAX_HOME | Set DATAX_HOME for DolphinScheduler's task environment | /opt/soft/datax
    common.sharedStoragePersistence.enabled | Set common.sharedStoragePersistence.enabled to true to mount a shared storage volume for Hadoop, Spark binaries and so on | false
    common.sharedStoragePersistence.mountPath | The mount path for the shared storage volume | /opt/soft
    common.sharedStoragePersistence.accessModes | PersistentVolumeClaim access modes, must be ReadWriteMany | [ReadWriteMany]
    common.sharedStoragePersistence.storageClassName | Shared storage persistent volume storage class, must support the access mode ReadWriteMany | -
    common.sharedStoragePersistence.storage | PersistentVolumeClaim size | 20Gi
    common.fsFileResourcePersistence.enabled | Set common.fsFileResourcePersistence.enabled to true to mount a new file resource volume for api and worker | false
    common.fsFileResourcePersistence.accessModes | PersistentVolumeClaim access modes, must be ReadWriteMany | [ReadWriteMany]
    common.fsFileResourcePersistence.storageClassName | Resource persistent volume storage class, must support the access mode ReadWriteMany | -
    common.fsFileResourcePersistence.storage | PersistentVolumeClaim size | 20Gi
    master.podManagementPolicy | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | Parallel
    master.replicas | Replicas is the desired number of replicas of the given Template | 3
    master.annotations | The annotations for master server | {}
    master.affinity | If specified, the pod's scheduling constraints | {}
    master.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
    master.tolerations | If specified, the pod's tolerations | {}
    master.resources | The resource limit and request config for master server | {}
    master.configmap.MASTER_SERVER_OPTS | The jvm options for master server | -Xms1g -Xmx1g -Xmn512m
    master.configmap.MASTER_EXEC_THREADS | Master execute thread number to limit process instances | 100
    master.configmap.MASTER_EXEC_TASK_NUM | Master execute task number in parallel per process instance | 20
    master.configmap.MASTER_DISPATCH_TASK_NUM | Master dispatch task number per batch | 3
    master.configmap.MASTER_HOST_SELECTOR | Master host selector to select a suitable worker, optional values include Random, RoundRobin, LowerWeight | LowerWeight
    master.configmap.MASTER_HEARTBEAT_INTERVAL | Master heartbeat interval, the unit is second | 10
    master.configmap.MASTER_TASK_COMMIT_RETRYTIMES | Master commit task retry times | 5
    master.configmap.MASTER_TASK_COMMIT_INTERVAL | Master commit task interval, the unit is second | 1
    master.configmap.MASTER_MAX_CPULOAD_AVG | Master max cpuload avg; the master server can schedule only when this value is higher than the system cpu load average | -1 (the number of cpu cores * 2)
    master.configmap.MASTER_RESERVED_MEMORY | Master reserved memory; the master server can schedule only when this value is lower than the system available memory, the unit is G | 0.3
    master.livenessProbe.enabled | Turn on and off liveness probe | true
    master.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30
    master.livenessProbe.periodSeconds | How often to perform the probe | 30
    master.livenessProbe.timeoutSeconds | When the probe times out | 5
    master.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    master.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    master.readinessProbe.enabled | Turn on and off readiness probe | true
    master.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 30
    master.readinessProbe.periodSeconds | How often to perform the probe | 30
    master.readinessProbe.timeoutSeconds | When the probe times out | 5
    master.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    master.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    master.persistentVolumeClaim.enabled | Set master.persistentVolumeClaim.enabled to true to mount a new volume for master | false
    master.persistentVolumeClaim.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
    master.persistentVolumeClaim.storageClassName | Master logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
    master.persistentVolumeClaim.storage | PersistentVolumeClaim size | 20Gi
    worker.podManagementPolicy | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | Parallel
    worker.replicas | Replicas is the desired number of replicas of the given Template | 3
    worker.annotations | The annotations for worker server | {}
    worker.affinity | If specified, the pod's scheduling constraints | {}
    worker.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
    worker.tolerations | If specified, the pod's tolerations | {}
    worker.resources | The resource limit and request config for worker server | {}
    worker.configmap.LOGGER_SERVER_OPTS | The jvm options for logger server | -Xms512m -Xmx512m -Xmn256m
    worker.configmap.WORKER_SERVER_OPTS | The jvm options for worker server | -Xms1g -Xmx1g -Xmn512m
    worker.configmap.WORKER_EXEC_THREADS | Worker execute thread number to limit task instances | 100
    worker.configmap.WORKER_HEARTBEAT_INTERVAL | Worker heartbeat interval, the unit is second | 10
    worker.configmap.WORKER_MAX_CPULOAD_AVG | Worker max cpuload avg; tasks can be dispatched to the worker server only when this value is higher than the system cpu load average | (the number of cpu cores * 2)
    worker.configmap.WORKER_RESERVED_MEMORY | Worker reserved memory; tasks can be dispatched to the worker server only when this value is lower than the system available memory, the unit is G | 0.3
    worker.configmap.WORKER_GROUPS | Worker groups | default
    worker.livenessProbe.enabled | Turn on and off liveness probe | true
    worker.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30
    worker.livenessProbe.periodSeconds | How often to perform the probe | 30
    worker.livenessProbe.timeoutSeconds | When the probe times out | 5
    worker.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    worker.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    worker.readinessProbe.enabled | Turn on and off readiness probe | true
    worker.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 30
    worker.readinessProbe.periodSeconds | How often to perform the probe | 30
    worker.readinessProbe.timeoutSeconds | When the probe times out | 5
    worker.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    worker.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    worker.persistentVolumeClaim.enabled | Set worker.persistentVolumeClaim.enabled to true to enable persistentVolumeClaim for worker | false
    worker.persistentVolumeClaim.dataPersistentVolume.enabled | Set worker.persistentVolumeClaim.dataPersistentVolume.enabled to true to mount a data volume for worker | false
    worker.persistentVolumeClaim.dataPersistentVolume.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
    worker.persistentVolumeClaim.dataPersistentVolume.storageClassName | Worker data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
    worker.persistentVolumeClaim.dataPersistentVolume.storage | PersistentVolumeClaim size | 20Gi
    worker.persistentVolumeClaim.logsPersistentVolume.enabled | Set worker.persistentVolumeClaim.logsPersistentVolume.enabled to true to mount a logs volume for worker | false
    worker.persistentVolumeClaim.logsPersistentVolume.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
    worker.persistentVolumeClaim.logsPersistentVolume.storageClassName | Worker logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
    worker.persistentVolumeClaim.logsPersistentVolume.storage | PersistentVolumeClaim size | 20Gi
    alert.replicas | Replicas is the desired number of replicas of the given Template | 1
    alert.strategy.type | Type of deployment. Can be "Recreate" or "RollingUpdate" | RollingUpdate
    alert.strategy.rollingUpdate.maxSurge | The maximum number of pods that can be scheduled above the desired number of pods | 25%
    alert.strategy.rollingUpdate.maxUnavailable | The maximum number of pods that can be unavailable during the update | 25%
    alert.annotations | The annotations for alert server | {}
    alert.affinity | If specified, the pod's scheduling constraints | {}
    alert.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
    alert.tolerations | If specified, the pod's tolerations | {}
    alert.resources | The resource limit and request config for alert server | {}
    alert.configmap.ALERT_SERVER_OPTS | The jvm options for alert server | -Xms512m -Xmx512m -Xmn256m
    alert.configmap.XLS_FILE_PATH | XLS file path | /tmp/xls
    alert.configmap.MAIL_SERVER_HOST | Mail server host | nil
    alert.configmap.MAIL_SERVER_PORT | Mail server port | nil
    alert.configmap.MAIL_SENDER | Mail sender | nil
    alert.configmap.MAIL_USER | Mail user | nil
    alert.configmap.MAIL_PASSWD | Mail password | nil
    alert.configmap.MAIL_SMTP_STARTTLS_ENABLE | Mail SMTP STARTTLS enable | false
    alert.configmap.MAIL_SMTP_SSL_ENABLE | Mail SMTP SSL enable | false
    alert.configmap.MAIL_SMTP_SSL_TRUST | Mail SMTP SSL trust | nil
    alert.configmap.ENTERPRISE_WECHAT_ENABLE | Enterprise WeChat enable | false
    alert.configmap.ENTERPRISE_WECHAT_CORP_ID | Enterprise WeChat corp id | nil
    alert.configmap.ENTERPRISE_WECHAT_SECRET | Enterprise WeChat secret | nil
    alert.configmap.ENTERPRISE_WECHAT_AGENT_ID | Enterprise WeChat agent id | nil
    alert.configmap.ENTERPRISE_WECHAT_USERS | Enterprise WeChat users | nil
    alert.livenessProbe.enabled | Turn on and off liveness probe | true
    alert.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30
    alert.livenessProbe.periodSeconds | How often to perform the probe | 30
    alert.livenessProbe.timeoutSeconds | When the probe times out | 5
    alert.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    alert.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    alert.readinessProbe.enabled | Turn on and off readiness probe | true
    alert.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 30
    alert.readinessProbe.periodSeconds | How often to perform the probe | 30
    alert.readinessProbe.timeoutSeconds | When the probe times out | 5
    alert.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    alert.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    alert.persistentVolumeClaim.enabled | Set alert.persistentVolumeClaim.enabled to true to mount a new volume for alert | false
    alert.persistentVolumeClaim.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
    alert.persistentVolumeClaim.storageClassName | Alert logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
    alert.persistentVolumeClaim.storage | PersistentVolumeClaim size | 20Gi
    api.replicas | Replicas is the desired number of replicas of the given Template | 1
    api.strategy.type | Type of deployment. Can be "Recreate" or "RollingUpdate" | RollingUpdate
    api.strategy.rollingUpdate.maxSurge | The maximum number of pods that can be scheduled above the desired number of pods | 25%
    api.strategy.rollingUpdate.maxUnavailable | The maximum number of pods that can be unavailable during the update | 25%
    api.annotations | The annotations for api server | {}
    api.affinity | If specified, the pod's scheduling constraints | {}
    api.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
    api.tolerations | If specified, the pod's tolerations | {}
    api.resources | The resource limit and request config for api server | {}
    api.configmap.API_SERVER_OPTS | The jvm options for api server | -Xms512m -Xmx512m -Xmn256m
    api.livenessProbe.enabled | Turn on and off liveness probe | true
    api.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30
    api.livenessProbe.periodSeconds | How often to perform the probe | 30
    api.livenessProbe.timeoutSeconds | When the probe times out | 5
    api.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    api.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    api.readinessProbe.enabled | Turn on and off readiness probe | true
    api.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 30
    api.readinessProbe.periodSeconds | How often to perform the probe | 30
    api.readinessProbe.timeoutSeconds | When the probe times out | 5
    api.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
    api.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
    api.persistentVolumeClaim.enabled | Set api.persistentVolumeClaim.enabled to true to mount a new volume for api | false
    api.persistentVolumeClaim.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
    api.persistentVolumeClaim.storageClassName | Api logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
    api.persistentVolumeClaim.storage | PersistentVolumeClaim size | 20Gi
    api.service.type | type determines how the Service is exposed. Valid options are ExternalName, ClusterIP, NodePort and LoadBalancer | ClusterIP
    api.service.clusterIP | clusterIP is the IP address of the service and is usually assigned randomly by the master | nil
    api.service.nodePort | nodePort is the port on each node on which this service is exposed when type=NodePort | nil
    api.service.externalIPs | externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service | []
    api.service.externalName | externalName is the external reference that kubedns or equivalent will return as a CNAME record for this service | nil
    api.service.loadBalancerIP | loadBalancerIP when service.type is LoadBalancer. LoadBalancer will get created with the IP specified in this field | nil
    api.service.annotations | annotations may need to be set when service.type is LoadBalancer | {}
    ingress.enabled | Enable ingress | false
    ingress.host | Ingress host | dolphinscheduler.org
    ingress.path | Ingress path | /dolphinscheduler
    ingress.tls.enabled | Enable ingress tls | false
    ingress.tls.secretName | Ingress tls secret name | dolphinscheduler-tls