# DolphinScheduler

DolphinScheduler is a distributed and easy-to-extend visual DAG workflow scheduling system, dedicated to solving the complex dependencies in data processing and making the scheduling system available out of the box.

## Introduction

This chart bootstraps a DolphinScheduler distributed deployment on a Kubernetes cluster using the Helm package manager.
## Prerequisites
- Helm 3.1.0+
- Kubernetes 1.12+
- PV provisioner support in the underlying infrastructure
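Before installing, it can be worth confirming these prerequisites from your workstation; this is only a quick sanity check, and the exact output depends on your environment:

```bash
# Helm client version (should be 3.1.0 or later)
$ helm version

# Kubernetes client and server versions (server should be 1.12 or later)
$ kubectl version

# Storage classes available for dynamic PV provisioning
$ kubectl get storageclass
```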
## Installing the Chart

To install the chart with the release name `dolphinscheduler`:
```bash
$ git clone https://github.com/apache/incubator-dolphinscheduler.git
$ cd incubator-dolphinscheduler/docker/kubernetes/dolphinscheduler
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm dependency update .
$ helm install dolphinscheduler .
```
These commands deploy DolphinScheduler on the Kubernetes cluster in the default configuration. The Configuration section lists the parameters that can be configured during installation.
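For example, any parameter from the Configuration section below can be overridden at install time with `--set`, or collected into a custom values file passed with `-f`; the values here are purely illustrative:

```bash
# Override individual parameters on the command line
$ helm install dolphinscheduler . \
    --set timezone=UTC \
    --set master.replicas=2 \
    --set worker.replicas=2

# Or keep overrides in a separate file (e.g. my-values.yaml) and pass it with -f
$ helm install dolphinscheduler . -f my-values.yaml
```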
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart

To uninstall/delete the `dolphinscheduler` deployment:

```bash
$ helm uninstall dolphinscheduler
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
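Note that PersistentVolumeClaims created through the persistence options (for example the internal PostgreSQL or ZooKeeper volumes) are typically not removed by `helm uninstall`. A minimal clean-up sketch, assuming the claims carry the release name, would be:

```bash
# List claims left behind by the release
$ kubectl get pvc | grep dolphinscheduler

# Delete a leftover claim by name (this permanently removes its data)
$ kubectl delete pvc <pvc-name>
```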
## Configuration

The following table lists the configurable parameters of the DolphinScheduler chart and their default values.
| Parameter | Description | Default |
| --- | --- | --- |
| `timezone` | World time and date for cities in all time zones | `Asia/Shanghai` |
| `image.registry` | Docker image registry for DolphinScheduler | `docker.io` |
| `image.repository` | Docker image repository for DolphinScheduler | `dolphinscheduler` |
| `image.tag` | Docker image version for DolphinScheduler | `1.2.1` |
| `image.imagePullPolicy` | Image pull policy. One of `Always`, `Never`, `IfNotPresent` | `IfNotPresent` |
| `image.pullSecrets` | An optional list of references to secrets in the same namespace to use for pulling any of the images | `[]` |
| `postgresql.enabled` | If no external PostgreSQL is configured, DolphinScheduler uses this internal PostgreSQL by default | `true` |
| `postgresql.postgresqlUsername` | The username for the internal PostgreSQL | `root` |
| `postgresql.postgresqlPassword` | The password for the internal PostgreSQL | `root` |
| `postgresql.postgresqlDatabase` | The database for the internal PostgreSQL | `dolphinscheduler` |
| `postgresql.persistence.enabled` | Set `postgresql.persistence.enabled` to `true` to mount a new volume for the internal PostgreSQL | `false` |
| `postgresql.persistence.size` | PersistentVolumeClaim size | `20Gi` |
| `postgresql.persistence.storageClass` | PostgreSQL data Persistent Volume storage class. If set to `"-"`, `storageClassName: ""` is used, which disables dynamic provisioning | `-` |
| `externalDatabase.type` | The database type used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `postgresql` |
| `externalDatabase.driver` | The database driver used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `org.postgresql.Driver` |
| `externalDatabase.host` | The database host used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `localhost` |
| `externalDatabase.port` | The database port used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `5432` |
| `externalDatabase.username` | The database username used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `root` |
| `externalDatabase.password` | The database password used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `root` |
| `externalDatabase.database` | The database name used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `dolphinscheduler` |
| `externalDatabase.params` | The database connection parameters used when an external PostgreSQL is configured (set `postgresql.enabled` to `false`) | `characterEncoding=utf8` |
| `zookeeper.enabled` | If no external ZooKeeper is configured, DolphinScheduler uses this internal ZooKeeper by default | `true` |
| `zookeeper.taskQueue` | The task queue for master and worker | `zookeeper` |
| `zookeeper.persistence.enabled` | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for the internal ZooKeeper | `false` |
| `zookeeper.persistence.size` | PersistentVolumeClaim size | `20Gi` |
| `zookeeper.persistence.storageClass` | ZooKeeper data Persistent Volume storage class. If set to `"-"`, `storageClassName: ""` is used, which disables dynamic provisioning | `-` |
| `externalZookeeper.taskQueue` | The task queue for master and worker when an external ZooKeeper is configured (set `zookeeper.enabled` to `false`) | `zookeeper` |
| `externalZookeeper.zookeeperQuorum` | The ZooKeeper quorum when an external ZooKeeper is configured (set `zookeeper.enabled` to `false`) | `127.0.0.1:2181` |
| `externalZookeeper.zookeeperRoot` | The ZooKeeper root path for master and worker when an external ZooKeeper is configured (set `zookeeper.enabled` to `false`) | `dolphinscheduler` |
| `common.configmap.DOLPHINSCHEDULER_ENV_PATH` | Extra env file path | `/tmp/dolphinscheduler/env` |
| `common.configmap.DOLPHINSCHEDULER_DATA_BASEDIR_PATH` | File upload path of DolphinScheduler | `/tmp/dolphinscheduler/files` |
| `common.configmap.RESOURCE_STORAGE_TYPE` | Resource storage type; supported types are `S3`, `HDFS` and `NONE` | `NONE` |
| `common.configmap.RESOURCE_UPLOAD_PATH` | The base path for resource storage | `/ds` |
| `common.configmap.FS_DEFAULT_FS` | The default filesystem for resources; for S3 this is the `s3a` prefix plus the bucket name | `s3a://xxxx` |
| `common.configmap.FS_S3A_ENDPOINT` | The S3 endpoint; required when the resource storage type is `S3` | `s3.xxx.amazonaws.com` |
| `common.configmap.FS_S3A_ACCESS_KEY` | The access key for your S3 bucket | `xxxxxxx` |
| `common.configmap.FS_S3A_SECRET_KEY` | The secret key for your S3 bucket | `xxxxxxx` |
| `master.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
| `master.replicas` | The desired number of master replicas | `3` |
| `master.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `master.tolerations` | If specified, the pod's tolerations | `{}` |
| `master.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `master.jvmOptions` | The JVM options for the master server | `""` |
| `master.resources` | The resource limit and request config for the master server | `{}` |
| `master.annotations` | The annotations for the master server | `{}` |
| `master.configmap.MASTER_EXEC_THREADS` | Master execute thread number | `100` |
| `master.configmap.MASTER_EXEC_TASK_NUM` | Number of tasks the master executes in parallel | `20` |
| `master.configmap.MASTER_HEARTBEAT_INTERVAL` | Master heartbeat interval | `10` |
| `master.configmap.MASTER_TASK_COMMIT_RETRYTIMES` | Master task commit retry times | `5` |
| `master.configmap.MASTER_TASK_COMMIT_INTERVAL` | Master task commit interval | `1000` |
| `master.configmap.MASTER_MAX_CPULOAD_AVG` | The master server only works when the CPU load average is lower than this value (default: the number of CPU cores * 2) | `100` |
| `master.configmap.MASTER_RESERVED_MEMORY` | The master server only works when available memory is larger than this reserved value (default: physical memory * 1/10, unit is G) | `0.1` |
| `master.livenessProbe.enabled` | Turn on and off the liveness probe | `true` |
| `master.livenessProbe.initialDelaySeconds` | Delay before the liveness probe is initiated | `30` |
| `master.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `master.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `master.readinessProbe.enabled` | Turn on and off the readiness probe | `true` |
| `master.readinessProbe.initialDelaySeconds` | Delay before the readiness probe is initiated | `30` |
| `master.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `master.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `master.persistentVolumeClaim.enabled` | Set `master.persistentVolumeClaim.enabled` to `true` to mount a new volume for the master | `false` |
| `master.persistentVolumeClaim.accessModes` | PersistentVolumeClaim access modes | `[ReadWriteOnce]` |
| `master.persistentVolumeClaim.storageClassName` | Master logs data Persistent Volume storage class. If set to `"-"`, `storageClassName: ""` is used, which disables dynamic provisioning | `-` |
| `master.persistentVolumeClaim.storage` | PersistentVolumeClaim size | `20Gi` |
| `worker.podManagementPolicy` | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | `Parallel` |
| `worker.replicas` | The desired number of worker replicas | `3` |
| `worker.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `worker.tolerations` | If specified, the pod's tolerations | `{}` |
| `worker.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `worker.jvmOptions` | The JVM options for the worker server | `""` |
| `worker.resources` | The resource limit and request config for the worker server | `{}` |
| `worker.annotations` | The annotations for the worker server | `{}` |
| `worker.configmap.WORKER_EXEC_THREADS` | Worker execute thread number | `100` |
| `worker.configmap.WORKER_HEARTBEAT_INTERVAL` | Worker heartbeat interval | `10` |
| `worker.configmap.WORKER_FETCH_TASK_NUM` | Number of tasks the worker fetches at a time | `3` |
| `worker.configmap.WORKER_MAX_CPULOAD_AVG` | The worker server only works when the CPU load average is lower than this value (default: the number of CPU cores * 2) | `100` |
| `worker.configmap.WORKER_RESERVED_MEMORY` | The worker server only works when available memory is larger than this reserved value (default: physical memory * 1/10, unit is G) | `0.1` |
| `worker.configmap.DOLPHINSCHEDULER_DATA_BASEDIR_PATH` | User data directory path (user-configurable); make sure the directory exists and has read/write permissions | `/tmp/dolphinscheduler` |
| `worker.configmap.DOLPHINSCHEDULER_ENV` | System environment path (user-configurable); see `values.yaml` | `[]` |
| `worker.livenessProbe.enabled` | Turn on and off the liveness probe | `true` |
| `worker.livenessProbe.initialDelaySeconds` | Delay before the liveness probe is initiated | `30` |
| `worker.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `worker.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `worker.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `worker.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `worker.readinessProbe.enabled` | Turn on and off the readiness probe | `true` |
| `worker.readinessProbe.initialDelaySeconds` | Delay before the readiness probe is initiated | `30` |
| `worker.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `worker.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `worker.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `worker.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `worker.persistentVolumeClaim.enabled` | Set `worker.persistentVolumeClaim.enabled` to `true` to enable PersistentVolumeClaims for the worker | `false` |
| `worker.persistentVolumeClaim.dataPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.dataPersistentVolume.enabled` to `true` to mount a data volume for the worker | `false` |
| `worker.persistentVolumeClaim.dataPersistentVolume.accessModes` | PersistentVolumeClaim access modes | `[ReadWriteOnce]` |
| `worker.persistentVolumeClaim.dataPersistentVolume.storageClassName` | Worker data Persistent Volume storage class. If set to `"-"`, `storageClassName: ""` is used, which disables dynamic provisioning | `-` |
| `worker.persistentVolumeClaim.dataPersistentVolume.storage` | PersistentVolumeClaim size | `20Gi` |
| `worker.persistentVolumeClaim.logsPersistentVolume.enabled` | Set `worker.persistentVolumeClaim.logsPersistentVolume.enabled` to `true` to mount a logs volume for the worker | `false` |
| `worker.persistentVolumeClaim.logsPersistentVolume.accessModes` | PersistentVolumeClaim access modes | `[ReadWriteOnce]` |
| `worker.persistentVolumeClaim.logsPersistentVolume.storageClassName` | Worker logs data Persistent Volume storage class. If set to `"-"`, `storageClassName: ""` is used, which disables dynamic provisioning | `-` |
| `worker.persistentVolumeClaim.logsPersistentVolume.storage` | PersistentVolumeClaim size | `20Gi` |
| `alert.strategy.type` | Type of deployment. Can be `Recreate` or `RollingUpdate` | `RollingUpdate` |
| `alert.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
| `alert.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
| `alert.replicas` | The desired number of alert server replicas | `1` |
| `alert.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `alert.tolerations` | If specified, the pod's tolerations | `{}` |
| `alert.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `alert.jvmOptions` | The JVM options for the alert server | `""` |
| `alert.resources` | The resource limit and request config for the alert server | `{}` |
| `alert.annotations` | The annotations for the alert server | `{}` |
| `alert.configmap.ALERT_PLUGIN_DIR` | Alert plugin path | `/opt/dolphinscheduler/alert/plugin` |
| `alert.configmap.XLS_FILE_PATH` | XLS file path | `/tmp/xls` |
| `alert.configmap.MAIL_SERVER_HOST` | Mail server host | `nil` |
| `alert.configmap.MAIL_SERVER_PORT` | Mail server port | `nil` |
| `alert.configmap.MAIL_SENDER` | Mail sender | `nil` |
| `alert.configmap.MAIL_USER` | Mail user | `nil` |
| `alert.configmap.MAIL_PASSWD` | Mail password | `nil` |
| `alert.configmap.MAIL_SMTP_STARTTLS_ENABLE` | Mail SMTP STARTTLS enable | `false` |
| `alert.configmap.MAIL_SMTP_SSL_ENABLE` | Mail SMTP SSL enable | `false` |
| `alert.configmap.MAIL_SMTP_SSL_TRUST` | Mail SMTP SSL trust | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_ENABLE` | Enterprise WeChat enable | `false` |
| `alert.configmap.ENTERPRISE_WECHAT_CORP_ID` | Enterprise WeChat corp id | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_SECRET` | Enterprise WeChat secret | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_AGENT_ID` | Enterprise WeChat agent id | `nil` |
| `alert.configmap.ENTERPRISE_WECHAT_USERS` | Enterprise WeChat users | `nil` |
| `alert.livenessProbe.enabled` | Turn on and off the liveness probe | `true` |
| `alert.livenessProbe.initialDelaySeconds` | Delay before the liveness probe is initiated | `30` |
| `alert.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `alert.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `alert.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `alert.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `alert.readinessProbe.enabled` | Turn on and off the readiness probe | `true` |
| `alert.readinessProbe.initialDelaySeconds` | Delay before the readiness probe is initiated | `30` |
| `alert.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `alert.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `alert.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `alert.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `alert.persistentVolumeClaim.enabled` | Set `alert.persistentVolumeClaim.enabled` to `true` to mount a new volume for the alert server | `false` |
| `alert.persistentVolumeClaim.accessModes` | PersistentVolumeClaim access modes | `[ReadWriteOnce]` |
| `alert.persistentVolumeClaim.storageClassName` | Alert logs data Persistent Volume storage class. If set to `"-"`, `storageClassName: ""` is used, which disables dynamic provisioning | `-` |
| `alert.persistentVolumeClaim.storage` | PersistentVolumeClaim size | `20Gi` |
| `api.strategy.type` | Type of deployment. Can be `Recreate` or `RollingUpdate` | `RollingUpdate` |
| `api.strategy.rollingUpdate.maxSurge` | The maximum number of pods that can be scheduled above the desired number of pods | `25%` |
| `api.strategy.rollingUpdate.maxUnavailable` | The maximum number of pods that can be unavailable during the update | `25%` |
| `api.replicas` | The desired number of API server replicas | `1` |
| `api.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node | `{}` |
| `api.tolerations` | If specified, the pod's tolerations | `{}` |
| `api.affinity` | If specified, the pod's scheduling constraints | `{}` |
| `api.jvmOptions` | The JVM options for the API server | `""` |
| `api.resources` | The resource limit and request config for the API server | `{}` |
| `api.annotations` | The annotations for the API server | `{}` |
| `api.livenessProbe.enabled` | Turn on and off the liveness probe | `true` |
| `api.livenessProbe.initialDelaySeconds` | Delay before the liveness probe is initiated | `30` |
| `api.livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `api.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `api.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `api.livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `api.readinessProbe.enabled` | Turn on and off the readiness probe | `true` |
| `api.readinessProbe.initialDelaySeconds` | Delay before the readiness probe is initiated | `30` |
| `api.readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `api.readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `5` |
| `api.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
| `api.readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
| `api.persistentVolumeClaim.enabled` | Set `api.persistentVolumeClaim.enabled` to `true` to mount a new volume for the API server | `false` |
| `api.persistentVolumeClaim.accessModes` | PersistentVolumeClaim access modes | `[ReadWriteOnce]` |
| `api.persistentVolumeClaim.storageClassName` | API logs data Persistent Volume storage class. If set to `"-"`, `storageClassName: ""` is used, which disables dynamic provisioning | `-` |
| `api.persistentVolumeClaim.storage` | PersistentVolumeClaim size | `20Gi` |
| `ingress.enabled` | Enable ingress | `false` |
| `ingress.host` | Ingress host | `dolphinscheduler.org` |
| `ingress.path` | Ingress path | `/` |
| `ingress.tls.enabled` | Enable ingress TLS | `false` |
| `ingress.tls.hosts` | Ingress TLS hosts | `dolphinscheduler.org` |
| `ingress.tls.secretName` | Ingress TLS secret name | `dolphinscheduler-tls` |
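As an illustration of how these parameters combine, the following install command disables the bundled PostgreSQL in favour of an existing database and exposes the API through an ingress; the host, credentials, and database values are placeholders only, not recommended settings:

```bash
$ helm install dolphinscheduler . \
    --set postgresql.enabled=false \
    --set externalDatabase.host=192.168.x.x \
    --set externalDatabase.port=5432 \
    --set externalDatabase.username=dolphinscheduler \
    --set externalDatabase.password=changeme \
    --set externalDatabase.database=dolphinscheduler \
    --set ingress.enabled=true \
    --set ingress.host=dolphinscheduler.example.com
```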
For more information please refer to the chart documentation.