
[Improvement][Helm] using helm-docs to generate docs automatically (#15299)

Branch: 3.2.1-prepare
Gallardot committed 11 months ago (via GitHub)
Commit: 1c1d4bd592
Changed files (lines changed):
  1. .github/workflows/docs.yml (35)
  2. .licenserc.yaml (1)
  3. deploy/kubernetes/dolphinscheduler/.helmignore (37)
  4. deploy/kubernetes/dolphinscheduler/README.md (350)
  5. deploy/kubernetes/dolphinscheduler/README.md.gotmpl (11)
  6. deploy/kubernetes/dolphinscheduler/values.yaml (570)
  7. docs/docs/en/guide/installation/kubernetes.md (257)
  8. docs/docs/zh/guide/installation/kubernetes.md (257)
  9. pom.xml (32)

.github/workflows/docs.yml (35 lines changed)

@@ -62,6 +62,41 @@ jobs:
          for file in $(find . -name "*.md" -not \( -path ./deploy/terraform/aws/README.md -prune \)); do
            markdown-link-check -c .dlc.json -q "$file"
          done
  paths-filter:
    name: Helm-Doc-Path-Filter
    runs-on: ubuntu-latest
    outputs:
      helm-doc: ${{ steps.filter.outputs.helm-doc }}
    steps:
      - uses: actions/checkout@v2
      - uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
        id: filter
        with:
          filters: |
            helm-doc:
              - 'deploy/**'
  helm-doc:
    name: Helm-Doc-Execute
    needs: paths-filter
    if: ${{ (needs.paths-filter.outputs.helm-doc == 'true') || (github.event_name == 'push') }}
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Generating helm-doc
        run: |
          ./mvnw validate -P helm-doc -pl :dolphinscheduler
      - name: Check helm-doc
        run: |
          DIFF=$(git diff ${GITHUB_WORKSPACE}/deploy/kubernetes/*md)
          if [ ! -z "$DIFF" ]; then
            echo "###### ERROR: helm-doc is not up to date ######"
            echo "Please execute './mvnw validate -P helm-doc -pl :dolphinscheduler' in your clone, of your fork, of the project, and commit an updated deploy/kubernetes/README.md for the chart."
            echo "###### ERROR: helm-doc is not up to date ######"
          fi
          git diff --exit-code
  result:
    name: Docs
    runs-on: ubuntu-latest
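Contributors can reproduce this CI check before pushing. A minimal sketch, assuming the `helm-doc` Maven profile added to pom.xml in this commit wraps helm-docs as the workflow suggests:

```bash
# Regenerate deploy/kubernetes/dolphinscheduler/README.md from its .gotmpl template
./mvnw validate -P helm-doc -pl :dolphinscheduler

# The CI job fails when the regenerated README differs from the committed one
git diff --exit-code deploy/kubernetes/dolphinscheduler/README.md
```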

.licenserc.yaml (1 line changed)

@@ -50,5 +50,6 @@ header:
    - tools/dependencies/known-dependencies.txt
    - '**/banner.txt'
    - '.terraform.lock.hcl'
    - deploy/kubernetes/dolphinscheduler/README.md.gotmpl
comment: on-failure

deploy/kubernetes/dolphinscheduler/.helmignore (37 lines changed)

@@ -0,0 +1,37 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

deploy/kubernetes/dolphinscheduler/README.md (350 lines changed)

@@ -0,0 +1,350 @@
## About DolphinScheduler for Kubernetes
Apache DolphinScheduler is a distributed, easily extensible visual DAG workflow scheduling system, dedicated to solving the complex dependencies in data processing and making the scheduling system available out of the box.
This chart bootstraps all the components needed to run Apache DolphinScheduler on a Kubernetes Cluster using [Helm](https://helm.sh).
## QuickStart in Kubernetes
Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/installation/kubernetes.md).
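As an illustration of what the quick start covers, installing the chart from a source checkout might look like the following sketch; the release name, namespace, and the dependency-update step are assumptions, and the linked guide is authoritative:

```bash
cd deploy/kubernetes/dolphinscheduler
# The chart pulls its bundled dependencies (postgresql, zookeeper, minio, ...) from bitnami
helm repo add bitnami https://charts.bitnami.com/bitnami
helm dependency update .
helm install dolphinscheduler . --namespace dolphinscheduler --create-namespace
```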
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| alert.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
| alert.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
| alert.enabled | bool | `true` | Enable or disable the Alert-Server component |
| alert.env.JAVA_OPTS | string | `"-Xms512m -Xmx512m -Xmn256m"` | The jvm options for alert server |
| alert.livenessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container liveness. Container will be restarted if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| alert.livenessProbe.enabled | bool | `true` | Turn on and off liveness probe |
| alert.livenessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| alert.livenessProbe.initialDelaySeconds | string | `"30"` | Delay before liveness probe is initiated |
| alert.livenessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| alert.livenessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| alert.livenessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| alert.nodeSelector | object | `{}` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: [assign-pod-node](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) |
| alert.persistentVolumeClaim | object | `{"accessModes":["ReadWriteOnce"],"enabled":false,"storage":"20Gi","storageClassName":"-"}` | PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace. More info: [persistentvolumeclaims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) |
| alert.persistentVolumeClaim.accessModes | list | `["ReadWriteOnce"]` | `PersistentVolumeClaim` access modes |
| alert.persistentVolumeClaim.enabled | bool | `false` | Set `alert.persistentVolumeClaim.enabled` to `true` to mount a new volume for `alert` |
| alert.persistentVolumeClaim.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| alert.persistentVolumeClaim.storageClassName | string | `"-"` | `Alert` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| alert.readinessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| alert.readinessProbe.enabled | bool | `true` | Turn on and off readiness probe |
| alert.readinessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| alert.readinessProbe.initialDelaySeconds | string | `"30"` | Delay before readiness probe is initiated |
| alert.readinessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| alert.readinessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| alert.readinessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| alert.replicas | int | `1` | Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. |
| alert.resources | object | `{}` | Compute Resources required by this container. More info: [manage-resources-containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) |
| alert.service.annotations | object | `{}` | annotations may need to be set when you want Prometheus to scrape metrics but have not installed the Prometheus operator |
| alert.service.serviceMonitor | object | `{"annotations":{},"enabled":false,"interval":"15s","labels":{},"path":"/actuator/prometheus"}` | serviceMonitor for prometheus operator |
| alert.service.serviceMonitor.annotations | object | `{}` | serviceMonitor.annotations ServiceMonitor annotations |
| alert.service.serviceMonitor.enabled | bool | `false` | Enable or disable alert-server serviceMonitor |
| alert.service.serviceMonitor.interval | string | `"15s"` | serviceMonitor.interval interval at which metrics should be scraped |
| alert.service.serviceMonitor.labels | object | `{}` | serviceMonitor.labels ServiceMonitor extra labels |
| alert.service.serviceMonitor.path | string | `"/actuator/prometheus"` | serviceMonitor.path path of the metrics endpoint |
| alert.strategy | object | `{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"}` | The deployment strategy to use to replace existing pods with new ones. |
| alert.strategy.rollingUpdate.maxSurge | string | `"25%"` | The maximum number of pods that can be scheduled above the desired number of pods |
| alert.strategy.rollingUpdate.maxUnavailable | string | `"25%"` | The maximum number of pods that can be unavailable during the update |
| alert.strategy.type | string | `"RollingUpdate"` | Type of deployment. Can be "Recreate" or "RollingUpdate" |
| alert.tolerations | list | `[]` | Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass. |
| api.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
| api.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
| api.enabled | bool | `true` | Enable or disable the API-Server component |
| api.env.JAVA_OPTS | string | `"-Xms512m -Xmx512m -Xmn256m"` | The jvm options for api server |
| api.livenessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container liveness. Container will be restarted if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| api.livenessProbe.enabled | bool | `true` | Turn on and off liveness probe |
| api.livenessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| api.livenessProbe.initialDelaySeconds | string | `"30"` | Delay before liveness probe is initiated |
| api.livenessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| api.livenessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| api.livenessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| api.nodeSelector | object | `{}` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: [assign-pod-node](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) |
| api.persistentVolumeClaim | object | `{"accessModes":["ReadWriteOnce"],"enabled":false,"storage":"20Gi","storageClassName":"-"}` | PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace. More info: [persistentvolumeclaims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) |
| api.persistentVolumeClaim.accessModes | list | `["ReadWriteOnce"]` | `PersistentVolumeClaim` access modes |
| api.persistentVolumeClaim.enabled | bool | `false` | Set `api.persistentVolumeClaim.enabled` to `true` to mount a new volume for `api` |
| api.persistentVolumeClaim.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| api.persistentVolumeClaim.storageClassName | string | `"-"` | `api` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| api.readinessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| api.readinessProbe.enabled | bool | `true` | Turn on and off readiness probe |
| api.readinessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| api.readinessProbe.initialDelaySeconds | string | `"30"` | Delay before readiness probe is initiated |
| api.readinessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| api.readinessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| api.readinessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| api.replicas | string | `"1"` | Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. |
| api.resources | object | `{}` | Compute Resources required by this container. More info: [manage-resources-containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) |
| api.service.annotations | object | `{}` | annotations may need to be set when service.type is LoadBalancer, e.g. service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT |
| api.service.clusterIP | string | `""` | clusterIP is the IP address of the service and is usually assigned randomly by the master |
| api.service.externalIPs | list | `[]` | externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service |
| api.service.externalName | string | `""` | externalName is the external reference that kubedns or equivalent will return as a CNAME record for this service, requires Type to be ExternalName |
| api.service.loadBalancerIP | string | `""` | loadBalancerIP when service.type is LoadBalancer. LoadBalancer will get created with the IP specified in this field |
| api.service.nodePort | string | `""` | nodePort is the port on each node on which this api service is exposed when type=NodePort |
| api.service.pythonNodePort | string | `""` | pythonNodePort is the port on each node on which this python api service is exposed when type=NodePort |
| api.service.serviceMonitor | object | `{"annotations":{},"enabled":false,"interval":"15s","labels":{},"path":"/dolphinscheduler/actuator/prometheus"}` | serviceMonitor for prometheus operator |
| api.service.serviceMonitor.annotations | object | `{}` | serviceMonitor.annotations ServiceMonitor annotations |
| api.service.serviceMonitor.enabled | bool | `false` | Enable or disable api-server serviceMonitor |
| api.service.serviceMonitor.interval | string | `"15s"` | serviceMonitor.interval interval at which metrics should be scraped |
| api.service.serviceMonitor.labels | object | `{}` | serviceMonitor.labels ServiceMonitor extra labels |
| api.service.serviceMonitor.path | string | `"/dolphinscheduler/actuator/prometheus"` | serviceMonitor.path path of the metrics endpoint |
| api.service.type | string | `"ClusterIP"` | type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer |
| api.strategy | object | `{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"}` | The deployment strategy to use to replace existing pods with new ones. |
| api.strategy.rollingUpdate.maxSurge | string | `"25%"` | The maximum number of pods that can be scheduled above the desired number of pods |
| api.strategy.rollingUpdate.maxUnavailable | string | `"25%"` | The maximum number of pods that can be unavailable during the update |
| api.strategy.type | string | `"RollingUpdate"` | Type of deployment. Can be "Recreate" or "RollingUpdate" |
| api.taskTypeFilter.enabled | bool | `false` | Enable or disable the task type filter. If set to true, the API-Server will return tasks of a specific type set in api.taskTypeFilter.task Note: This feature only filters tasks to return a specific type on the WebUI. However, you can still create any task that DolphinScheduler supports via the API. |
| api.taskTypeFilter.task | object | `{}` | ref: [task-type-config.yaml](https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-api/src/main/resources/task-type-config.yaml) |
| api.tolerations | list | `[]` | Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass. |
| common.configmap.DATAX_LAUNCHER | string | `"/opt/soft/datax/bin/datax.py"` | Set `DATAX_LAUNCHER` for DolphinScheduler's task environment |
| common.configmap.DATA_BASEDIR_PATH | string | `"/tmp/dolphinscheduler"` | User data directory path, self configuration, please make sure the directory exists and have read write permissions |
| common.configmap.DOLPHINSCHEDULER_OPTS | string | `""` | The jvm options for dolphinscheduler, suitable for all servers |
| common.configmap.FLINK_HOME | string | `"/opt/soft/flink"` | Set `FLINK_HOME` for DolphinScheduler's task environment |
| common.configmap.HADOOP_CONF_DIR | string | `"/opt/soft/hadoop/etc/hadoop"` | Set `HADOOP_CONF_DIR` for DolphinScheduler's task environment |
| common.configmap.HADOOP_HOME | string | `"/opt/soft/hadoop"` | Set `HADOOP_HOME` for DolphinScheduler's task environment |
| common.configmap.HIVE_HOME | string | `"/opt/soft/hive"` | Set `HIVE_HOME` for DolphinScheduler's task environment |
| common.configmap.JAVA_HOME | string | `"/opt/java/openjdk"` | Set `JAVA_HOME` for DolphinScheduler's task environment |
| common.configmap.PYTHON_LAUNCHER | string | `"/usr/bin/python/bin/python3"` | Set `PYTHON_LAUNCHER` for DolphinScheduler's task environment |
| common.configmap.RESOURCE_UPLOAD_PATH | string | `"/dolphinscheduler"` | Resource store on HDFS/S3 path, please make sure the directory exists on hdfs and have read write permissions |
| common.configmap.SPARK_HOME | string | `"/opt/soft/spark"` | Set `SPARK_HOME` for DolphinScheduler's task environment |
| common.fsFileResourcePersistence.accessModes | list | `["ReadWriteMany"]` | `PersistentVolumeClaim` access modes, must be `ReadWriteMany` |
| common.fsFileResourcePersistence.enabled | bool | `false` | Set `common.fsFileResourcePersistence.enabled` to `true` to mount a new file resource volume for `api` and `worker` |
| common.fsFileResourcePersistence.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| common.fsFileResourcePersistence.storageClassName | string | `"-"` | Resource persistent volume storage class, must support the access mode: `ReadWriteMany` |
| common.sharedStoragePersistence.accessModes | list | `["ReadWriteMany"]` | `PersistentVolumeClaim` access modes, must be `ReadWriteMany` |
| common.sharedStoragePersistence.enabled | bool | `false` | Set `common.sharedStoragePersistence.enabled` to `true` to mount a shared storage volume for Hadoop, Spark binary and etc |
| common.sharedStoragePersistence.mountPath | string | `"/opt/soft"` | The mount path for the shared storage volume |
| common.sharedStoragePersistence.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| common.sharedStoragePersistence.storageClassName | string | `"-"` | Shared Storage persistent volume storage class, must support the access mode: ReadWriteMany |
| conf.auto | bool | `false` | Auto restart: if true, all components will be restarted automatically after the common configuration is updated; if false, you need to restart the components manually. Default is false |
| conf.common."alert.rpc.port" | int | `50052` | rpc port |
| conf.common."appId.collect" | string | `"log"` | way to collect applicationId: log, aop |
| conf.common."conda.path" | string | `"/opt/anaconda3/etc/profile.d/conda.sh"` | set path of conda.sh |
| conf.common."data-quality.jar.name" | string | `"dolphinscheduler-data-quality-dev-SNAPSHOT.jar"` | data quality option |
| conf.common."data.basedir.path" | string | `"/tmp/dolphinscheduler"` | user data local directory path, please make sure the directory exists and have read write permissions |
| conf.common."datasource.encryption.enable" | bool | `false` | datasource encryption enable |
| conf.common."datasource.encryption.salt" | string | `"!@#$%^&*"` | datasource encryption salt |
| conf.common."development.state" | bool | `false` | development state |
| conf.common."hadoop.security.authentication.startup.state" | bool | `false` | whether to startup kerberos |
| conf.common."java.security.krb5.conf.path" | string | `"/opt/krb5.conf"` | java.security.krb5.conf path |
| conf.common."kerberos.expire.time" | int | `2` | kerberos expire time, the unit is hour |
| conf.common."login.user.keytab.path" | string | `"/opt/hdfs.headless.keytab"` | login user from keytab path |
| conf.common."login.user.keytab.username" | string | `"hdfs-mycluster@ESZ.COM"` | login user from keytab username |
| conf.common."ml.mlflow.preset_repository" | string | `"https://github.com/apache/dolphinscheduler-mlflow"` | mlflow task plugin preset repository |
| conf.common."ml.mlflow.preset_repository_version" | string | `"main"` | mlflow task plugin preset repository version |
| conf.common."resource.alibaba.cloud.access.key.id" | string | `"<your-access-key-id>"` | alibaba cloud access key id, required if you set resource.storage.type=OSS |
| conf.common."resource.alibaba.cloud.access.key.secret" | string | `"<your-access-key-secret>"` | alibaba cloud access key secret, required if you set resource.storage.type=OSS |
| conf.common."resource.alibaba.cloud.oss.bucket.name" | string | `"dolphinscheduler"` | oss bucket name, required if you set resource.storage.type=OSS |
| conf.common."resource.alibaba.cloud.oss.endpoint" | string | `"https://oss-cn-hangzhou.aliyuncs.com"` | oss bucket endpoint, required if you set resource.storage.type=OSS |
| conf.common."resource.alibaba.cloud.region" | string | `"cn-hangzhou"` | alibaba cloud region, required if you set resource.storage.type=OSS |
| conf.common."resource.aws.access.key.id" | string | `"minioadmin"` | The AWS access key. if resource.storage.type=S3 or use EMR-Task, This configuration is required |
| conf.common."resource.aws.region" | string | `"ca-central-1"` | The AWS Region to use. if resource.storage.type=S3 or use EMR-Task, This configuration is required |
| conf.common."resource.aws.s3.bucket.name" | string | `"dolphinscheduler"` | The name of the bucket. You need to create them by yourself. Otherwise, the system cannot start. All buckets in Amazon S3 share a single namespace; ensure the bucket is given a unique name. |
| conf.common."resource.aws.s3.endpoint" | string | `"http://minio:9000"` | You need to set this parameter when private cloud s3. If S3 uses public cloud, you only need to set resource.aws.region or set to the endpoint of a public cloud such as S3.cn-north-1.amazonaws.com.cn |
| conf.common."resource.aws.secret.access.key" | string | `"minioadmin"` | The AWS secret access key. if resource.storage.type=S3 or use EMR-Task, This configuration is required |
| conf.common."resource.azure.client.id" | string | `"minioadmin"` | azure storage account name, required if you set resource.storage.type=ABS |
| conf.common."resource.azure.client.secret" | string | `"minioadmin"` | azure storage account key, required if you set resource.storage.type=ABS |
| conf.common."resource.azure.subId" | string | `"minioadmin"` | azure storage subId, required if you set resource.storage.type=ABS |
| conf.common."resource.azure.tenant.id" | string | `"minioadmin"` | azure storage tenantId, required if you set resource.storage.type=ABS |
| conf.common."resource.hdfs.fs.defaultFS" | string | `"hdfs://mycluster:8020"` | if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir |
| conf.common."resource.hdfs.root.user" | string | `"hdfs"` | if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path |
| conf.common."resource.manager.httpaddress.port" | int | `8088` | resourcemanager port, the default value is 8088 if not specified |
| conf.common."resource.storage.type" | string | `"S3"` | resource storage type: HDFS, S3, OSS, GCS, ABS, NONE |
| conf.common."resource.storage.upload.base.path" | string | `"/dolphinscheduler"` | resource store on HDFS/S3 path, resource file will store to this base path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended |
| conf.common."sudo.enable" | bool | `true` | use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions |
| conf.common."support.hive.oneSession" | bool | `false` | Whether hive SQL is executed in the same session |
| conf.common."task.resource.limit.state" | bool | `false` | Task resource limit state |
| conf.common."yarn.application.status.address" | string | `"http://ds1:%s/ws/v1/cluster/apps/%s"` | if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname |
| conf.common."yarn.job.history.status.address" | string | `"http://ds1:19888/ws/v1/history/mapreduce/jobs/%s"` | job history status url when application number threshold is reached(default 10000, maybe it was set to 1000) |
| conf.common."yarn.resourcemanager.ha.rm.ids" | string | `"192.168.xx.xx,192.168.xx.xx"` | if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty |
| externalDatabase.database | string | `"dolphinscheduler"` | The database of external database |
| externalDatabase.driverClassName | string | `"org.postgresql.Driver"` | The driverClassName of external database |
| externalDatabase.enabled | bool | `false` | If an external database exists and `postgresql.enabled` is set to false, the external database will be used; otherwise DolphinScheduler's internal database will be used. |
| externalDatabase.host | string | `"localhost"` | The host of external database |
| externalDatabase.params | string | `"characterEncoding=utf8"` | The params of external database |
| externalDatabase.password | string | `"root"` | The password of external database |
| externalDatabase.port | string | `"5432"` | The port of external database |
| externalDatabase.type | string | `"postgresql"` | The type of external database, supported types: postgresql, mysql |
| externalDatabase.username | string | `"root"` | The username of external database |
| externalRegistry.registryPluginName | string | `"zookeeper"` | If an external registry exists and `zookeeper.enabled`, `registryEtcd.enabled`, and `registryJdbc.enabled` are all set to false, specify the external registry plugin name |
| externalRegistry.registryServers | string | `"127.0.0.1:2181"` | If an external registry exists and `zookeeper.enabled`, `registryEtcd.enabled`, and `registryJdbc.enabled` are all set to false, specify the external registry servers |
| image.alert | string | `"dolphinscheduler-alert-server"` | alert-server image |
| image.api | string | `"dolphinscheduler-api"` | api-server image |
| image.master | string | `"dolphinscheduler-master"` | master image |
| image.pullPolicy | string | `"IfNotPresent"` | Image pull policy. Options: Always, Never, IfNotPresent |
| image.pullSecret | string | `""` | Specify a imagePullSecrets |
| image.registry | string | `"apache/dolphinscheduler"` | Docker image repository for the DolphinScheduler |
| image.tag | string | `"latest"` | Docker image version for the DolphinScheduler |
| image.tools | string | `"dolphinscheduler-tools"` | tools image |
| image.worker | string | `"dolphinscheduler-worker"` | worker image |
| ingress.annotations | object | `{}` | Ingress annotations |
| ingress.enabled | bool | `false` | Enable ingress |
| ingress.host | string | `"dolphinscheduler.org"` | Ingress host |
| ingress.path | string | `"/dolphinscheduler"` | Ingress path |
| ingress.tls.enabled | bool | `false` | Enable ingress tls |
| ingress.tls.secretName | string | `"dolphinscheduler-tls"` | Ingress tls secret name |
| initImage | object | `{"busybox":"busybox:1.30.1","pullPolicy":"IfNotPresent"}` | Used to detect whether dolphinscheduler dependent services such as database are ready |
| initImage.busybox | string | `"busybox:1.30.1"` | Specify initImage repository |
| initImage.pullPolicy | string | `"IfNotPresent"` | Image pull policy. Options: Always, Never, IfNotPresent |
| master.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
| master.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
| master.enabled | bool | `true` | Enable or disable the Master component |
| master.env.JAVA_OPTS | string | `"-Xms1g -Xmx1g -Xmn512m"` | The jvm options for master server |
| master.env.MASTER_DISPATCH_TASK_NUM | string | `"3"` | Master dispatch task number per batch |
| master.env.MASTER_EXEC_TASK_NUM | string | `"20"` | Master execute task number in parallel per process instance |
| master.env.MASTER_EXEC_THREADS | string | `"100"` | Master execute thread number to limit process instances |
| master.env.MASTER_FAILOVER_INTERVAL | string | `"10m"` | Master failover interval, the unit is minute |
| master.env.MASTER_HEARTBEAT_ERROR_THRESHOLD | string | `"5"` | Master heartbeat error threshold |
| master.env.MASTER_HEARTBEAT_INTERVAL | string | `"10s"` | Master heartbeat interval, the unit is second |
| master.env.MASTER_HOST_SELECTOR | string | `"LowerWeight"` | Master host selector to select a suitable worker, optional values include Random, RoundRobin, LowerWeight |
| master.env.MASTER_KILL_APPLICATION_WHEN_HANDLE_FAILOVER | string | `"true"` | Master kill application when handle failover |
| master.env.MASTER_MAX_CPU_LOAD_AVG | string | `"1"` | Master max CPU load avg; the master server can only schedule when the system CPU load average is lower than this value |
| master.env.MASTER_RESERVED_MEMORY | string | `"0.3"` | Master reserved memory; the master server can only schedule when available system memory is higher than this value. The unit is G |
| master.env.MASTER_STATE_WHEEL_INTERVAL | string | `"5s"` | master state wheel interval, the unit is second |
| master.env.MASTER_TASK_COMMIT_INTERVAL | string | `"1s"` | master commit task interval, the unit is second |
| master.env.MASTER_TASK_COMMIT_RETRYTIMES | string | `"5"` | Master commit task retry times |
| master.livenessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container liveness. Container will be restarted if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| master.livenessProbe.enabled | bool | `true` | Turn on and off liveness probe |
| master.livenessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| master.livenessProbe.initialDelaySeconds | string | `"30"` | Delay before liveness probe is initiated |
| master.livenessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| master.livenessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| master.livenessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| master.nodeSelector | object | `{}` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: [assign-pod-node](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) |
| master.persistentVolumeClaim | object | `{"accessModes":["ReadWriteOnce"],"enabled":false,"storage":"20Gi","storageClassName":"-"}` | PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name. |
| master.persistentVolumeClaim.accessModes | list | `["ReadWriteOnce"]` | `PersistentVolumeClaim` access modes |
| master.persistentVolumeClaim.enabled | bool | `false` | Set `master.persistentVolumeClaim.enabled` to `true` to mount a new volume for `master` |
| master.persistentVolumeClaim.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| master.persistentVolumeClaim.storageClassName | string | `"-"` | `Master` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| master.podManagementPolicy | string | `"Parallel"` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. |
| master.readinessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| master.readinessProbe.enabled | bool | `true` | Turn on and off readiness probe |
| master.readinessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| master.readinessProbe.initialDelaySeconds | string | `"30"` | Delay before readiness probe is initiated |
| master.readinessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| master.readinessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| master.readinessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| master.replicas | string | `"3"` | Replicas is the desired number of replicas of the given Template. |
| master.resources | object | `{}` | Compute Resources required by this container. More info: [manage-resources-containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) |
| master.service.annotations | object | `{}` | annotations may need to be set when you want Prometheus to scrape metrics but have not installed the Prometheus operator |
| master.service.serviceMonitor | object | `{"annotations":{},"enabled":false,"interval":"15s","labels":{},"path":"/actuator/prometheus"}` | serviceMonitor for prometheus operator |
| master.service.serviceMonitor.annotations | object | `{}` | serviceMonitor.annotations ServiceMonitor annotations |
| master.service.serviceMonitor.enabled | bool | `false` | Enable or disable master serviceMonitor |
| master.service.serviceMonitor.interval | string | `"15s"` | serviceMonitor.interval interval at which metrics should be scraped |
| master.service.serviceMonitor.labels | object | `{}` | serviceMonitor.labels ServiceMonitor extra labels |
| master.service.serviceMonitor.path | string | `"/actuator/prometheus"` | serviceMonitor.path path of the metrics endpoint |
| master.tolerations | list | `[]` | Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass. |
| minio.auth.rootPassword | string | `"minioadmin"` | minio password |
| minio.auth.rootUser | string | `"minioadmin"` | minio username |
| minio.defaultBuckets | string | `"dolphinscheduler"` | minio default buckets |
| minio.enabled | bool | `true` | Deploy minio and configure it as the default storage for DolphinScheduler, note this is for demo only, not for production. |
| minio.persistence.enabled | bool | `false` | Set minio.persistence.enabled to true to mount a new volume for internal minio |
| mysql.auth.database | string | `"dolphinscheduler"` | mysql database |
| mysql.auth.params | string | `"characterEncoding=utf8"` | mysql params |
| mysql.auth.password | string | `"ds"` | mysql password |
| mysql.auth.username | string | `"ds"` | mysql username |
| mysql.driverClassName | string | `"com.mysql.cj.jdbc.Driver"` | mysql driverClassName |
| mysql.enabled | bool | `false` | If no external MySQL exists, DolphinScheduler will use an internal MySQL by default |
| mysql.primary.persistence.enabled | bool | `false` | Set mysql.primary.persistence.enabled to true to mount a new volume for internal MySQL |
| mysql.primary.persistence.size | string | `"20Gi"` | `PersistentVolumeClaim` size |
| mysql.primary.persistence.storageClass | string | `"-"` | MySQL data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| postgresql.driverClassName | string | `"org.postgresql.Driver"` | The driverClassName for internal PostgreSQL |
| postgresql.enabled | bool | `true` | If no external PostgreSQL exists, DolphinScheduler will use an internal PostgreSQL by default |
| postgresql.params | string | `"characterEncoding=utf8"` | The params for internal PostgreSQL |
| postgresql.persistence.enabled | bool | `false` | Set postgresql.persistence.enabled to true to mount a new volume for internal PostgreSQL |
| postgresql.persistence.size | string | `"20Gi"` | `PersistentVolumeClaim` size |
| postgresql.persistence.storageClass | string | `"-"` | PostgreSQL data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| postgresql.postgresqlDatabase | string | `"dolphinscheduler"` | The database for internal PostgreSQL |
| postgresql.postgresqlPassword | string | `"root"` | The password for internal PostgreSQL |
| postgresql.postgresqlUsername | string | `"root"` | The username for internal PostgreSQL |
| registryEtcd.authority | string | `""` | Etcd authority |
| registryEtcd.enabled | bool | `false` | If you want to use etcd as your registry center, change this value to true and set zookeeper.enabled to false |
| registryEtcd.endpoints | string | `""` | Etcd endpoints |
| registryEtcd.namespace | string | `"dolphinscheduler"` | Etcd namespace |
| registryEtcd.passWord | string | `""` | Etcd passWord |
| registryEtcd.ssl.certFile | string | `"etcd-certs/ca.crt"` | CertFile file path |
| registryEtcd.ssl.enabled | bool | `false` | If your Etcd server has configured with ssl, change this value to true. About certification files you can see [here](https://github.com/etcd-io/jetcd/blob/main/docs/SslConfig.md) for how to convert. |
| registryEtcd.ssl.keyCertChainFile | string | `"etcd-certs/client.crt"` | keyCertChainFile file path |
| registryEtcd.ssl.keyFile | string | `"etcd-certs/client.pem"` | keyFile file path |
| registryEtcd.user | string | `""` | Etcd user |
| registryJdbc.enabled | bool | `false` | If you want to use JDBC as your registry center, change this value to true and set zookeeper.enabled and registryEtcd.enabled to false |
| registryJdbc.hikariConfig.driverClassName | string | `"com.mysql.cj.jdbc.Driver"` | By default the registry uses the same database as DolphinScheduler; set this value to use a different JDBC driver for the registry |
| registryJdbc.hikariConfig.enabled | bool | `false` | By default the registry uses the same database as DolphinScheduler; to use another database, change `enabled` to `true` and adjust the other configs |
| registryJdbc.hikariConfig.jdbcurl | string | `"jdbc:mysql://"` | By default the registry uses the same database as DolphinScheduler; set this value to point the JDBC registry at a different database URL |
| registryJdbc.hikariConfig.password | string | `""` | By default the registry uses the same database as DolphinScheduler; set this value to use a different password for the JDBC registry |
| registryJdbc.hikariConfig.username | string | `""` | By default the registry uses the same database as DolphinScheduler; set this value to use a different username for the JDBC registry |
| registryJdbc.termExpireTimes | int | `3` | Used to calculate the registry term expire time |
| registryJdbc.termRefreshInterval | string | `"2s"` | Interval at which the ephemeral data and locks are refreshed |
| security.authentication.ldap.basedn | string | `"dc=example,dc=com"` | LDAP base dn |
| security.authentication.ldap.password | string | `"password"` | LDAP password |
| security.authentication.ldap.ssl.enable | bool | `false` | LDAP ssl switch |
| security.authentication.ldap.ssl.jksbase64content | string | `""` | LDAP jks file base64 content. If you use macOS, please run `base64 -b 0 -i /path/to/your.jks`. If you use Linux, please run `base64 -w 0 /path/to/your.jks`. If you use Windows, please run `certutil -f -encode /path/to/your.jks`. Then copy the base64 content into the field below as a single line |
| security.authentication.ldap.ssl.truststore | string | `"/opt/ldapkeystore.jks"` | LDAP jks file absolute path, do not change this value |
| security.authentication.ldap.ssl.truststorepassword | string | `""` | LDAP jks password |
| security.authentication.ldap.urls | string | `"ldap://ldap.forumsys.com:389/"` | LDAP urls |
| security.authentication.ldap.user.admin | string | `"read-only-admin"` | Admin user account when you log-in with LDAP |
| security.authentication.ldap.user.emailattribute | string | `"mail"` | LDAP user email attribute |
| security.authentication.ldap.user.identityattribute | string | `"uid"` | LDAP user identity attribute |
| security.authentication.ldap.user.notexistaction | string | `"CREATE"` | Action when the LDAP user does not exist. Default value: CREATE. Optional values: CREATE, DENY |
| security.authentication.ldap.username | string | `"cn=read-only-admin,dc=example,dc=com"` | LDAP username |
| security.authentication.type | string | `"PASSWORD"` | Authentication types (supported types: PASSWORD,LDAP,CASDOOR_SSO) |
| timezone | string | `"Asia/Shanghai"` | World time and date for cities in all time zones |
| worker.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
| worker.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
| worker.enabled | bool | `true` | Enable or disable the Worker component |
| worker.env.WORKER_EXEC_THREADS | string | `"100"` | Worker execute thread number to limit task instances |
| worker.env.WORKER_HEARTBEAT_INTERVAL | string | `"10s"` | Worker heartbeat interval, the unit is second |
| worker.env.WORKER_HEART_ERROR_THRESHOLD | string | `"5"` | Worker heartbeat error threshold |
| worker.env.WORKER_HOST_WEIGHT | string | `"100"` | Worker host weight to dispatch tasks |
| worker.env.WORKER_MAX_CPU_LOAD_AVG | string | `"1"` | Worker max CPU load avg; the worker server can only be dispatched tasks when the system CPU load average is lower than this value |
| worker.env.WORKER_RESERVED_MEMORY | string | `"0.3"` | Worker reserved memory; the worker server can only be dispatched tasks when available system memory is higher than this value. The unit is G |
| worker.keda.advanced | object | `{}` | Specify HPA related options |
| worker.keda.cooldownPeriod | int | `30` | How many seconds KEDA will wait before scaling to zero. Note that HPA has a separate cooldown period for scale-downs |
| worker.keda.enabled | bool | `false` | Enable or disable the Keda component |
| worker.keda.maxReplicaCount | int | `3` | Maximum number of workers created by keda |
| worker.keda.minReplicaCount | int | `0` | Minimum number of workers created by keda |
| worker.keda.namespaceLabels | object | `{}` | Keda namespace labels |
| worker.keda.pollingInterval | int | `5` | How often KEDA polls the DolphinScheduler DB to report new scale requests to the HPA |
| worker.livenessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container liveness. Container will be restarted if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| worker.livenessProbe.enabled | bool | `true` | Turn on and off liveness probe |
| worker.livenessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| worker.livenessProbe.initialDelaySeconds | string | `"30"` | Delay before liveness probe is initiated |
| worker.livenessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| worker.livenessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| worker.livenessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| worker.nodeSelector | object | `{}` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: [assign-pod-node](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) |
| worker.persistentVolumeClaim | object | `{"dataPersistentVolume":{"accessModes":["ReadWriteOnce"],"enabled":false,"storage":"20Gi","storageClassName":"-"},"enabled":false,"logsPersistentVolume":{"accessModes":["ReadWriteOnce"],"enabled":false,"storage":"20Gi","storageClassName":"-"}}` | PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name. |
| worker.persistentVolumeClaim.dataPersistentVolume.accessModes | list | `["ReadWriteOnce"]` | `PersistentVolumeClaim` access modes |
| worker.persistentVolumeClaim.dataPersistentVolume.enabled | bool | `false` | Set `worker.persistentVolumeClaim.dataPersistentVolume.enabled` to `true` to mount a data volume for `worker` |
| worker.persistentVolumeClaim.dataPersistentVolume.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| worker.persistentVolumeClaim.dataPersistentVolume.storageClassName | string | `"-"` | `Worker` data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| worker.persistentVolumeClaim.enabled | bool | `false` | Set `worker.persistentVolumeClaim.enabled` to `true` to enable `persistentVolumeClaim` for `worker` |
| worker.persistentVolumeClaim.logsPersistentVolume.accessModes | list | `["ReadWriteOnce"]` | `PersistentVolumeClaim` access modes |
| worker.persistentVolumeClaim.logsPersistentVolume.enabled | bool | `false` | Set `worker.persistentVolumeClaim.logsPersistentVolume.enabled` to `true` to mount a logs volume for `worker` |
| worker.persistentVolumeClaim.logsPersistentVolume.storage | string | `"20Gi"` | `PersistentVolumeClaim` size |
| worker.persistentVolumeClaim.logsPersistentVolume.storageClassName | string | `"-"` | `Worker` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| worker.podManagementPolicy | string | `"Parallel"` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. |
| worker.readinessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
| worker.readinessProbe.enabled | bool | `true` | Turn on and off readiness probe |
| worker.readinessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
| worker.readinessProbe.initialDelaySeconds | string | `"30"` | Delay before readiness probe is initiated |
| worker.readinessProbe.periodSeconds | string | `"30"` | How often to perform the probe |
| worker.readinessProbe.successThreshold | string | `"1"` | Minimum consecutive successes for the probe |
| worker.readinessProbe.timeoutSeconds | string | `"5"` | When the probe times out |
| worker.replicas | string | `"3"` | Replicas is the desired number of replicas of the given Template. |
| worker.resources | object | `{}` | Compute Resources required by this container. More info: [manage-resources-containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) |
| worker.service.annotations | object | `{}` | annotations may need to be set when you want Prometheus to scrape metrics but have not installed the Prometheus operator |
| worker.service.serviceMonitor | object | `{"annotations":{},"enabled":false,"interval":"15s","labels":{},"path":"/actuator/prometheus"}` | serviceMonitor for prometheus operator |
| worker.service.serviceMonitor.annotations | object | `{}` | serviceMonitor.annotations ServiceMonitor annotations |
| worker.service.serviceMonitor.enabled | bool | `false` | Enable or disable worker serviceMonitor |
| worker.service.serviceMonitor.interval | string | `"15s"` | serviceMonitor.interval interval at which metrics should be scraped |
| worker.service.serviceMonitor.labels | object | `{}` | serviceMonitor.labels ServiceMonitor extra labels |
| worker.service.serviceMonitor.path | string | `"/actuator/prometheus"` | serviceMonitor.path path of the metrics endpoint |
| worker.tolerations | list | `[]` | Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass. |
| zookeeper.enabled | bool | `true` | If no external registry exists, the ZooKeeper registry will be used by default. |
| zookeeper.fourlwCommandsWhitelist | string | `"srvr,ruok,wchs,cons"` | A list of comma-separated Four Letter Words commands to use |
| zookeeper.persistence.enabled | bool | `false` | Set `zookeeper.persistence.enabled` to true to mount a new volume for internal ZooKeeper |
| zookeeper.persistence.size | string | `"20Gi"` | PersistentVolumeClaim size |
| zookeeper.persistence.storageClass | string | `"-"` | ZooKeeper data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning |
| zookeeper.service.port | int | `2181` | The port of zookeeper |
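To show how the keys above compose in practice, here is an illustrative install that swaps the bundled PostgreSQL for an external one. The host and credentials are placeholders, not values shipped with this chart:

```bash
# Disable the internal PostgreSQL and point DolphinScheduler at an existing instance
helm install dolphinscheduler . \
  --set postgresql.enabled=false \
  --set externalDatabase.enabled=true \
  --set externalDatabase.type=postgresql \
  --set externalDatabase.host=pg.example.internal \
  --set externalDatabase.port=5432 \
  --set externalDatabase.username=dolphinscheduler \
  --set externalDatabase.password=changeme \
  --set externalDatabase.database=dolphinscheduler
```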

deploy/kubernetes/dolphinscheduler/README.md.gotmpl (11 lines changed)

@@ -0,0 +1,11 @@
## About DolphinScheduler for Kubernetes
{{ template "chart.description" . }}
This chart bootstraps all the components needed to run Apache DolphinScheduler on a Kubernetes Cluster using [Helm](https://helm.sh).
## QuickStart in Kubernetes
Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/installation/kubernetes.md).
{{ template "chart.valuesSection" . }}

deploy/kubernetes/dolphinscheduler/values.yaml (570 lines changed)

File diff suppressed because it is too large.

docs/docs/en/guide/installation/kubernetes.md (257 lines changed)

@@ -528,259 +528,4 @@ helm install dolphinscheduler-gpu-worker . \
## Appendix-Configuration
Ref: [DolphinScheduler Helm Charts](https://github.com/apache/dolphinscheduler/blob/dev/deploy/kubernetes/dolphinscheduler/README.md) <!-- markdown-link-check-disable-line -->
| Parameter | Description | Default |
|-----------|-------------|---------|
| `timezone` | World time and date for cities in all time zones | `Asia/Shanghai` |
| <br/> | | |
| `image.repository` | Docker image repository for the DolphinScheduler | `apache/dolphinscheduler` |
| `image.tag` | Docker image version for the DolphinScheduler | `latest` |
| `image.pullPolicy` | Image pull policy. Options: Always, Never, IfNotPresent | `IfNotPresent` |
| `image.pullSecret` | Image pull secret. An optional reference to secret in the same namespace to use for pulling any of the images | `nil` |
| <br/> | | |
| `postgresql.enabled` | If not exists external PostgreSQL, by default, the DolphinScheduler will use a internal PostgreSQL | `true` |
| `postgresql.postgresqlUsername` | The username for internal PostgreSQL | `root` |
| `postgresql.postgresqlPassword` | The password for internal PostgreSQL | `root` |
| `postgresql.postgresqlDatabase` | The database for internal PostgreSQL | `dolphinscheduler` |
| `postgresql.persistence.enabled` | Set `postgresql.persistence.enabled` to `true` to mount a new volume for internal PostgreSQL | `false` |
| `postgresql.persistence.size` | `PersistentVolumeClaim` size | `20Gi` |
| `postgresql.persistence.storageClass` | PostgreSQL data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
| `minio.enabled` | Deploy minio and configure it as the default storage for DolphinScheduler, note this is for demo only, not for production. | `false` |
| `externalDatabase.type` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database type will use it | `postgresql` |
| `externalDatabase.driver` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database driver will use it | `org.postgresql.Driver` |
| `externalDatabase.host` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database host will use it | `localhost` |
| `externalDatabase.port` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database port will use it | `5432` |
| `externalDatabase.username` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database username will use it | `root` |
| `externalDatabase.password` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database password will use it | `root` |
| `externalDatabase.database` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database database will use it | `dolphinscheduler` |
| `externalDatabase.params` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database params will use it | `characterEncoding=utf8` |
| <br/> | | |
| `zookeeper.enabled` | If not exists external ZooKeeper, by default, the DolphinScheduler will use a internal ZooKeeper | `true` |
| `zookeeper.service.port` | The port of zookeeper | `2181` |
| `zookeeper.fourlwCommandsWhitelist` | A list of comma separated Four Letter Words commands to use | `srvr,ruok,wchs,cons` |
| `zookeeper.persistence.enabled` | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for internal ZooKeeper | `false` |
| `zookeeper.persistence.size` | `PersistentVolumeClaim` size | `20Gi` |
| `zookeeper.persistence.storageClass` | ZooKeeper data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
| `registryEtcd.enabled` | Set to `true` to use Etcd as the registry center, and set `zookeeper.enabled` to `false` (see the sketch after this table) | `false` |
| `registryEtcd.endpoints` | Etcd endpoints | `""` |
| `registryEtcd.namespace` | Etcd namespace | `dolphinscheduler` |
| `registryEtcd.user` | Etcd user | `""` |
| `registryEtcd.passWord` | Etcd password | `""` |
| `registryEtcd.authority` | Etcd authority | `""` |
| `registryEtcd.ssl.enabled` | Set to `true` if your Etcd server is configured with SSL. See [here](https://github.com/etcd-io/jetcd/blob/main/docs/SslConfig.md) for how to convert the certificate files | `false` |
| `registryEtcd.ssl.certFile` | CertFile file path | `etcd-certs/ca.crt` |
| `registryEtcd.ssl.keyCertChainFile` | keyCertChainFile file path | `etcd-certs/client.crt` |
| `registryEtcd.ssl.keyFile` | keyFile file path | `etcd-certs/client.pem` |
| `registryJdbc.enabled` | Set to `true` to use JDBC as the registry center, and set `zookeeper.enabled` and `registryEtcd.enabled` to `false` | `false` |
| `registryJdbc.termRefreshInterval` | The interval at which the ephemeral data/lock is refreshed | `2s` |
| `registryJdbc.termExpireTimes` | Used together with `registryJdbc.termRefreshInterval` to calculate the expire time | `3` |
| `registryJdbc.hikariConfig.driverClassName` | The driver class name for the JDBC registry. Defaults to DolphinScheduler's own database if you don't change this value | `com.mysql.cj.jdbc.Driver` |
| `registryJdbc.hikariConfig.jdbcurl` | The JDBC URL for the JDBC registry. Defaults to DolphinScheduler's own database if you don't change this value | `jdbc:mysql://` |
| `registryJdbc.hikariConfig.username` | The username for the JDBC registry. Defaults to DolphinScheduler's own database if you don't change this value | `""` |
| `registryJdbc.hikariConfig.password` | The password for the JDBC registry. Defaults to DolphinScheduler's own database if you don't change this value | `""` |
| `externalRegistry.registryPluginName` | If an external registry is used and `zookeeper.enabled`, `registryEtcd.enabled` and `registryJdbc.enabled` are all `false`, specify the external registry plugin name | `zookeeper` |
| `externalRegistry.registryServers` | If an external registry is used and `zookeeper.enabled`, `registryEtcd.enabled` and `registryJdbc.enabled` are all `false`, specify the external registry servers | `127.0.0.1:2181` |
| <br/> | | |
| `security.authentication.type` | Authentication type (supported types: PASSWORD, LDAP, CASDOOR_SSO); see the LDAP sketch after this table | `PASSWORD` |
| `security.authentication.ldap.urls` | LDAP urls | `ldap://ldap.forumsys.com:389/` |
| `security.authentication.ldap.basedn` | LDAP base dn | `dc=example,dc=com` |
| `security.authentication.ldap.username` | LDAP username | `cn=read-only-admin,dc=example,dc=com` |
| `security.authentication.ldap.password` | LDAP password | `password` |
| `security.authentication.ldap.user.admin` | Admin user account when you log in with LDAP | `read-only-admin` |
| `security.authentication.ldap.user.identityattribute` | LDAP user identity attribute | `uid` |
| `security.authentication.ldap.user.emailattribute` | LDAP user email attribute | `mail` |
| `security.authentication.ldap.user.notexistaction` | Action when the LDAP user does not exist. Optional values: CREATE, DENY | `CREATE` |
| `security.authentication.ldap.ssl.enable` | LDAP SSL switch | `false` |
| `security.authentication.ldap.ssl.truststore` | LDAP JKS file absolute path, do not change this value | `/opt/ldapkeystore.jks` |
| `security.authentication.ldap.ssl.jksbase64content` | LDAP JKS file base64 content | `""` |
| `security.authentication.ldap.ssl.truststorepassword` | LDAP JKS password | `""` |
| <br/> | | |
| `common.configmap.DOLPHINSCHEDULER_OPTS` | The JVM options for DolphinScheduler, applied to all servers | `""` |
| `common.configmap.DATA_BASEDIR_PATH` | User data directory path, self-configured; make sure the directory exists and has read/write permissions | `/tmp/dolphinscheduler` |
| `common.configmap.RESOURCE_STORAGE_TYPE` | Resource storage type: HDFS, S3, OSS, GCS, ABS, NONE | `HDFS` |
| `common.configmap.RESOURCE_UPLOAD_PATH` | Resource storage path on HDFS/S3; make sure the directory exists on HDFS and has read/write permissions | `/dolphinscheduler` |
| `common.configmap.FS_DEFAULT_FS` | Resource storage file system like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler` | `file:///` |
| `common.configmap.FS_S3A_ENDPOINT` | S3 endpoint when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3` | `s3.xxx.amazonaws.com` |
| `common.configmap.FS_S3A_ACCESS_KEY` | S3 access key when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3` | `xxxxxxx` |
| `common.configmap.FS_S3A_SECRET_KEY` | S3 secret key when `common.configmap.RESOURCE_STORAGE_TYPE` is set to `S3` | `xxxxxxx` |
| `common.configmap.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE` | Whether to start up Kerberos | `false` |
| `common.configmap.JAVA_SECURITY_KRB5_CONF_PATH` | The java.security.krb5.conf path | `/opt/krb5.conf` |
| `common.configmap.LOGIN_USER_KEYTAB_USERNAME` | The keytab username of the login user | `hdfs@HADOOP.COM` |
| `common.configmap.LOGIN_USER_KEYTAB_PATH` | The keytab path of the login user | `/opt/hdfs.keytab` |
| `common.configmap.KERBEROS_EXPIRE_TIME` | The Kerberos expire time, in hours | `2` |
| `common.configmap.HDFS_ROOT_USER` | The HDFS root user who must have the permission to create directories under the HDFS root path | `hdfs` |
| `common.configmap.RESOURCE_MANAGER_HTTPADDRESS_PORT` | The ResourceManager HTTP address port for YARN | `8088` |
| `common.configmap.YARN_RESOURCEMANAGER_HA_RM_IDS` | If ResourceManager HA is enabled, set the HA IPs | `nil` |
| `common.configmap.YARN_APPLICATION_STATUS_ADDRESS` | If there is a single ResourceManager, replace `ds1` with the actual ResourceManager hostname; otherwise keep the default | `http://ds1:%s/ws/v1/cluster/apps/%s` |
| `common.configmap.HADOOP_HOME` | Set `HADOOP_HOME` for DolphinScheduler's task environment | `/opt/soft/hadoop` |
| `common.configmap.HADOOP_CONF_DIR` | Set `HADOOP_CONF_DIR` for DolphinScheduler's task environment | `/opt/soft/hadoop/etc/hadoop` |
| `common.configmap.SPARK_HOME` | Set `SPARK_HOME` for DolphinScheduler's task environment | `/opt/soft/spark` |
| `common.configmap.PYTHON_LAUNCHER` | Set `PYTHON_LAUNCHER` for DolphinScheduler's task environment | `/usr/bin/python` |
| `common.configmap.JAVA_HOME` | Set `JAVA_HOME` for DolphinScheduler's task environment | `/opt/java/openjdk` |
| `common.configmap.HIVE_HOME` | Set `HIVE_HOME` for DolphinScheduler's task environment | `/opt/soft/hive` |
| `common.configmap.FLINK_HOME` | Set `FLINK_HOME` for DolphinScheduler's task environment | `/opt/soft/flink` |
| `common.configmap.DATAX_LAUNCHER` | Set `DATAX_LAUNCHER` for DolphinScheduler's task environment | `/opt/soft/datax` |
| `common.sharedStoragePersistence.enabled` | Set `common.sharedStoragePersistence.enabled` to `true` to mount a shared storage volume for Hadoop, Spark binaries, etc. | `false` |
| `common.sharedStoragePersistence.mountPath` | The mount path for the shared storage volume | `/opt/soft` |
| `common.sharedStoragePersistence.accessModes` | `PersistentVolumeClaim` access modes, must be `ReadWriteMany` | `[ReadWriteMany]` |
| `common.sharedStoragePersistence.storageClassName` | Shared Storage persistent volume storage class, must support the access mode: ReadWriteMany | `-` |
| `common.sharedStoragePersistence.storage` | `PersistentVolumeClaim` size | `20Gi` |
| `common.fsFileResourcePersistence.enabled` | Set `common.fsFileResourcePersistence.enabled` to `true` to mount a new file resource volume for `api` and `worker` | `false` |
| `common.fsFileResourcePersistence.accessModes` | `PersistentVolumeClaim` access modes, must be `ReadWriteMany` | `[ReadWriteMany]` |
| `common.fsFileResourcePersistence.storageClassName` | Resource persistent volume storage class, must support the access mode: ReadWriteMany | `-` |
| `common.fsFileResourcePersistence.storage` | `PersistentVolumeClaim` size | `20Gi` |
| <br/> | | |
| `master.enabled` | Enable or disable the Master component | `true` |
| `master.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
| `master.replicas` | Replicas is the desired number of replicas of the given Template | `3` |
| `master.annotations` | The `annotations` for master server | `{}` |
| `master.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `master.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `master.tolerations` | If specified, the pod's tolerations | `{}` |
| `master.resources` | The `resource` limit and request config for master server | `{}` |
| `master.env.JAVA_OPTS` | The JVM options for master server | `-Xms1g -Xmx1g -Xmn512m` |
| `master.env.MASTER_EXEC_THREADS` | Master execute thread number to limit process instances | `100` |
| `master.env.MASTER_EXEC_TASK_NUM` | Master execute task number in parallel per process instance | `20` |
| `master.env.MASTER_DISPATCH_TASK_NUM` | Master dispatch task number per batch | `3` |
| `master.env.MASTER_HOST_SELECTOR` | Master host selector to select a suitable worker, optional values include Random, RoundRobin, LowerWeight | `LowerWeight` |
| `master.env.MASTER_HEARTBEAT_INTERVAL` | Master heartbeat interval, the unit is second | `10s` |
| `master.env.MASTER_TASK_COMMIT_RETRYTIMES` | Master commit task retry times | `5` |
| `master.env.MASTER_TASK_COMMIT_INTERVAL` | Master commit task interval, the unit is second | `1s` |
| `master.env.MASTER_MAX_CPULOAD_AVG` | Master max CPU load average; the master server can only schedule when the system CPU load average is lower than this value | `-1` (`the number of cpu cores * 2`) |
| `master.env.MASTER_RESERVED_MEMORY` | Master reserved memory; the master server can only schedule when available system memory is higher than this value. The unit is G | `0.3` |
| `master.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
| `master.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `master.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `master.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `master.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
| `master.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
| `master.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `master.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `master.persistentVolumeClaim.enabled` | Set `master.persistentVolumeClaim.enabled` to `true` to mount a new volume for `master` | `false` |
| `master.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` access modes | `[ReadWriteOnce]` |
| `master.persistentVolumeClaim.storageClassName` | `Master` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
| `master.persistentVolumeClaim.storage` | `PersistentVolumeClaim` size | `20Gi` |
| <br/> | | |
| `worker.enabled` | Enable or disable the Worker component | `true` |
| `worker.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
| `worker.replicas` | Replicas is the desired number of replicas of the given Template | `3` |
| `worker.annotations` | The `annotations` for worker server | `{}` |
| `worker.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `worker.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `worker.tolerations` | If specified, the pod's tolerations | `{}` |
| `worker.resources` | The `resource` limit and request config for worker server | `{}` |
| `worker.env.WORKER_EXEC_THREADS` | Worker execute thread number to limit task instances | `100` |
| `worker.env.WORKER_HEARTBEAT_INTERVAL` | Worker heartbeat interval, the unit is second | `10s` |
| `worker.env.WORKER_MAX_CPU_LOAD_AVG` | Worker max CPU load average; tasks can only be dispatched to the worker when the system CPU load average is lower than this value | `-1` (`the number of cpu cores * 2`) |
| `worker.env.WORKER_RESERVED_MEMORY` | Worker reserved memory; tasks can only be dispatched to the worker when available system memory is higher than this value. The unit is G | `0.3` |
| `worker.env.HOST_WEIGHT` | Worker host weight to dispatch tasks | `100` |
| `worker.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
| `worker.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `worker.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `worker.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `worker.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `worker.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `worker.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
| `worker.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
| `worker.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `worker.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `worker.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `worker.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `worker.persistentVolumeClaim.enabled` | Set `worker.persistentVolumeClaim.enabled` to `true` to enable `persistentVolumeClaim` for `worker` | `false` |
| `worker.persistentVolumeClaim.dataPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.dataPersistentVolume.enabled` to `true` to mount a data volume for `worker` | `false` |
| `worker.persistentVolumeClaim.dataPersistentVolume.accessModes` | `PersistentVolumeClaim` access modes | `[ReadWriteOnce]` |
| `worker.persistentVolumeClaim.dataPersistentVolume.storageClassName` | `Worker` data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
| `worker.persistentVolumeClaim.dataPersistentVolume.storage` | `PersistentVolumeClaim` size | `20Gi` |
| `worker.persistentVolumeClaim.logsPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.logsPersistentVolume.enabled` to `true` to mount a logs volume for `worker` | `false` |
| `worker.persistentVolumeClaim.logsPersistentVolume.accessModes` | `PersistentVolumeClaim` access modes | `[ReadWriteOnce]` |
| `worker.persistentVolumeClaim.logsPersistentVolume.storageClassName` | `Worker` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
| `worker.persistentVolumeClaim.logsPersistentVolume.storage` | `PersistentVolumeClaim` size | `20Gi` |
| <br/> | | |
| `alert.enabled` | Enable or disable the Alert-Server component | `true` |
| `alert.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
| `alert.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
| `alert.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
| `alert.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
| `alert.annotations` | The `annotations` for alert server | `{}` |
| `alert.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `alert.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `alert.tolerations` | If specified, the pod's tolerations | `{}` |
| `alert.resources` | The `resource` limit and request config for alert server | `{}` |
| `alert.configmap.ALERT_SERVER_OPTS` | The JVM options for alert server | `-Xms512m -Xmx512m -Xmn256m` |
| `alert.configmap.XLS_FILE_PATH` | XLS file path | `/tmp/xls` |
| `alert.configmap.MAIL_SERVER_HOST` | Mail `SERVER HOST` | `nil` |
| `alert.configmap.MAIL_SERVER_PORT` | Mail `SERVER PORT` | `nil` |
| `alert.configmap.MAIL_SENDER` | Mail `SENDER` | `nil` |
| `alert.configmap.MAIL_USER` | Mail `USER` | `nil` |
| `alert.configmap.MAIL_PASSWD` | Mail `PASSWORD` | `nil` |
| `alert.configmap.MAIL_SMTP_STARTTLS_ENABLE` | Mail `SMTP STARTTLS` enable | `false` |
| `alert.configmap.MAIL_SMTP_SSL_ENABLE` | Mail `SMTP SSL` enable | `false` |
| `alert.configmap.MAIL_SMTP_SSL_TRUST` | Mail `SMTP SSL TRUST` | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_ENABLE` | `Enterprise Wechat` enable | `false` |
| `alert.configmap.ENTERPRISE_WECHAT_CORP_ID` | `Enterprise Wechat` corp id | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_SECRET` | `Enterprise Wechat` secret | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_AGENT_ID` | `Enterprise Wechat` agent id | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_USERS` | `Enterprise Wechat` users | `nil` |
| `alert.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
| `alert.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `alert.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `alert.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `alert.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `alert.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `alert.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
| `alert.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
| `alert.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `alert.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `alert.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `alert.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `alert.persistentVolumeClaim.enabled` | Set `alert.persistentVolumeClaim.enabled` to `true` to mount a new volume for `alert` | `false` |
| `alert.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` access modes | `[ReadWriteOnce]` |
| `alert.persistentVolumeClaim.storageClassName` | `Alert` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
| `alert.persistentVolumeClaim.storage` | `PersistentVolumeClaim` size | `20Gi` |
| <br/> | | |
| `api.enabled` | Enable or disable the API-Server component | `true` |
| `api.replicas` | Replicas is the desired number of replicas of the given Template | `1` |
| `api.strategy.type` | Type of deployment. Can be "Recreate" or "RollingUpdate" | `RollingUpdate` |
| `api.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
| `api.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
| `api.annotations` | The `annotations` for api server | `{}` |
| `api.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `api.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `api.tolerations` | If specified, the pod's tolerations | `{}` |
| `api.resources` | The `resource` limit and request config for api server | `{}` |
| `api.configmap.API_SERVER_OPTS` | The JVM options for api server | `-Xms512m -Xmx512m -Xmn256m` |
| `api.livenessProbe.enabled` | Turn on and off liveness probe | `true` |
| `api.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `api.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `api.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `api.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `api.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `api.readinessProbe.enabled` | Turn on and off readiness probe | `true` |
| `api.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
| `api.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `api.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `api.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `api.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `api.persistentVolumeClaim.enabled` | Set `api.persistentVolumeClaim.enabled` to `true` to mount a new volume for `api` | `false` |
| `api.persistentVolumeClaim.accessModes` | `PersistentVolumeClaim` access modes | `[ReadWriteOnce]` |
| `api.persistentVolumeClaim.storageClassName` | `api` logs data persistent volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | `-` |
| `api.persistentVolumeClaim.storage` | `PersistentVolumeClaim` size | `20Gi` |
| `api.service.type` | `type` determines how the Service is exposed. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer | `ClusterIP` |
| `api.service.clusterIP` | `clusterIP` is the IP address of the service and is usually assigned randomly by the master | `nil` |
| `api.service.nodePort` | `nodePort` is the port on each node on which this service is exposed when type=NodePort | `nil` |
| `api.service.externalIPs` | `externalIPs` is a list of IP addresses for which nodes in the cluster will also accept traffic for this service | `[]` |
| `api.service.externalName` | `externalName` is the external reference that kubedns or equivalent will return as a CNAME record for this service | `nil` |
| `api.service.loadBalancerIP` | `loadBalancerIP` when service.type is LoadBalancer. LoadBalancer will get created with the IP specified in this field | `nil` |
| `api.service.annotations` | `annotations` may need to be set when service.type is LoadBalancer | `{}` |
| `api.taskTypeFilter.enabled` | Enable or disable the task type filter. If set to `true`, the API-Server will only return the task types set in `api.taskTypeFilter.task`. Note: this feature only filters which task types are shown on the WebUI; you can still create any task that DolphinScheduler supports via the API | `false` |
| `api.taskTypeFilter.task` | Task type ref: [task-type-config.yaml](https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-api/src/main/resources/task-type-config.yaml) | `{}` |
| <br/> | | |
| `ingress.enabled` | Enable ingress (see the sketch after this table) | `false` |
| `ingress.host` | Ingress host | `dolphinscheduler.org` |
| `ingress.path` | Ingress path | `/dolphinscheduler` |
| `ingress.tls.enabled` | Enable ingress tls | `false` |
| `ingress.tls.secretName` | Ingress tls secret name | `dolphinscheduler-tls` |
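
For example, the `externalDatabase.*` rows above translate into a small override file. The following is a minimal sketch, not a verified deployment: the host, username and password are placeholders, and only keys listed in the table are used.

```bash
# Minimal sketch: disable the bundled PostgreSQL and point DolphinScheduler
# at an external one. Host and credentials below are placeholders.
cat > values-external-db.yaml <<'EOF'
postgresql:
  enabled: false                  # turn off the internal PostgreSQL
externalDatabase:
  type: postgresql
  driver: org.postgresql.Driver
  host: pg.example.org            # placeholder host
  port: 5432
  username: dolphinscheduler      # placeholder username
  password: changeit              # placeholder password
  database: dolphinscheduler
  params: characterEncoding=utf8
EOF
```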
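Likewise, switching the registry center from the bundled ZooKeeper to Etcd only touches the `zookeeper.enabled` and `registryEtcd.*` keys; a sketch with placeholder endpoints:

```bash
# Sketch: use an external Etcd cluster as the registry center. The endpoints
# are placeholders; TLS (registryEtcd.ssl.*) is left at its defaults.
cat > values-etcd.yaml <<'EOF'
zookeeper:
  enabled: false                  # turn off the internal ZooKeeper
registryEtcd:
  enabled: true
  endpoints: "etcd-0.example.org:2379,etcd-1.example.org:2379"   # placeholders
  namespace: dolphinscheduler
EOF
```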
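The `security.authentication.*` rows nest the same way in a values file. A hedged sketch for LDAP login, where the server URL, base DN and bind credentials are placeholders rather than working values:

```bash
# Sketch: switch authentication from PASSWORD to LDAP. The keys mirror the
# security.authentication.* rows in the table; all values are placeholders.
cat > values-ldap.yaml <<'EOF'
security:
  authentication:
    type: LDAP
    ldap:
      urls: ldap://ldap.example.org:389/        # placeholder server
      basedn: dc=example,dc=org                 # placeholder base DN
      username: cn=admin,dc=example,dc=org      # placeholder bind user
      password: changeit                        # placeholder password
      user:
        admin: read-only-admin
        identityattribute: uid
        emailattribute: mail
        notexistaction: CREATE
EOF
```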
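The `ingress.*` rows at the end of the table follow the same pattern; a sketch assuming the TLS secret `dolphinscheduler-tls` (the chart default) already exists in the target namespace, with a placeholder hostname:

```bash
# Sketch: expose the API server through an ingress with TLS. The hostname is
# a placeholder; the secret name matches the chart default from the table.
cat > values-ingress.yaml <<'EOF'
ingress:
  enabled: true
  host: dolphinscheduler.example.org   # placeholder hostname
  path: /dolphinscheduler
  tls:
    enabled: true
    secretName: dolphinscheduler-tls
EOF
```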
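Tying it together, a usage sketch assuming the commands run from the chart directory (`deploy/kubernetes/dolphinscheduler`) and that the override files above exist; the release name and namespace are arbitrary choices, and any single key from the table can still be overridden inline with `--set`:

```bash
# Install or upgrade the release with the override files created above.
helm upgrade --install dolphinscheduler . \
  --namespace dolphinscheduler --create-namespace \
  -f values-external-db.yaml \
  -f values-ldap.yaml \
  -f values-ingress.yaml \
  --set timezone=Asia/Shanghai \
  --set master.replicas=3

# Inspect the deployed release afterwards.
helm status dolphinscheduler --namespace dolphinscheduler
```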

257
docs/docs/zh/guide/installation/kubernetes.md

@ -527,259 +527,4 @@ helm install dolphinscheduler-gpu-worker . \
## Appendix: Configuration
Refer to [DolphinScheduler Helm Charts](https://github.com/apache/dolphinscheduler/blob/dev/deploy/kubernetes/dolphinscheduler/README.md) <!-- markdown-link-check-disable-line -->
*(The table removed here was identical to the configuration table above.)*

32
pom.xml

@ -828,5 +828,37 @@
<docker.push.skip>false</docker.push.skip>
</properties>
</profile>
<profile>
<id>helm-doc</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${exec-maven-plugin.version}</version>
<executions>
<execution>
<id>helm-doc</id>
<goals>
<goal>exec</goal>
</goals>
<phase>validate</phase>
<configuration>
<executable>docker</executable>
<workingDirectory>${project.basedir}</workingDirectory>
<arguments>
<argument>run</argument>
<argument>--rm</argument>
<argument>--volume</argument>
<argument>${project.basedir}/deploy/kubernetes:/helm-docs</argument>
<argument>jnorwood/helm-docs:v1.11.3</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
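
For reference, the profile above only wraps a Docker invocation of helm-docs behind Maven's `validate` phase. Assuming Docker is available locally, a roughly equivalent direct invocation, mirroring the arguments in the profile, would be:

```bash
# Regenerate the chart README through the new Maven profile
# (exec-maven-plugin runs helm-docs in Docker during the validate phase)...
./mvnw validate -P helm-doc

# ...or run helm-docs directly against the chart directory.
docker run --rm \
  --volume "$(pwd)/deploy/kubernetes:/helm-docs" \
  jnorwood/helm-docs:v1.11.3
```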
