Dolphin Scheduler

Dolphin Scheduler is a distributed, easily extensible visual DAG workflow scheduling system. It is dedicated to solving the complex dependencies in data processing and to providing a scheduling system that works out of the box for data processing.

Introduction

This chart bootstraps a Dolphin Scheduler distributed deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.10+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name dolphinscheduler:

$ git clone https://github.com/apache/incubator-dolphinscheduler.git
$ cd incubator-dolphinscheduler/kubernetes/dolphinscheduler
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm dependency update .
$ helm install --name dolphinscheduler .

These commands deploy Dolphin Scheduler on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.
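
Parameters can also be overridden at install time with Helm's --set flag or a custom values file. For example (the overrides below are illustrative only; see the configuration table for the available keys):

$ helm install --name dolphinscheduler --set image.tag=1.2.1 --set master.replicas=3 .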

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the dolphinscheduler deployment:

$ helm delete --purge dolphinscheduler

The command removes all the Kubernetes components associated with the chart and deletes the release.
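
Note: PersistentVolumeClaims created while persistence was enabled are typically not removed by helm delete. As a rough sketch (claim names depend on your release and the chart templates), list and remove them manually if you want to reclaim the storage:

$ kubectl get pvc
$ kubectl delete pvc <pvc-name>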

Configuration

The following table lists the configurable parameters of the Dolphin Scheduler chart and their default values.

Parameter | Description | Default
--- | --- | ---
timezone | World time and date for cities in all time zones | Asia/Shanghai
image.registry | Docker image registry for Dolphin Scheduler | docker.io
image.repository | Docker image repository for Dolphin Scheduler | dolphinscheduler
image.tag | Docker image version for Dolphin Scheduler | 1.2.1
image.imagePullPolicy | Image pull policy. One of Always, Never, IfNotPresent | IfNotPresent
imagePullSecrets | ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images | []
postgresql.enabled | If no external PostgreSQL is configured, Dolphin Scheduler uses an internal PostgreSQL by default | true
postgresql.postgresqlUsername | The username for the internal PostgreSQL | root
postgresql.postgresqlPassword | The password for the internal PostgreSQL | root
postgresql.postgresqlDatabase | The database for the internal PostgreSQL | dolphinscheduler
postgresql.persistence.enabled | Set postgresql.persistence.enabled to true to mount a new volume for the internal PostgreSQL | false
postgresql.persistence.size | PersistentVolumeClaim size | 20Gi
postgresql.persistence.storageClass | PostgreSQL data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
externalDatabase.host | External PostgreSQL host, used when postgresql.enabled is set to false | localhost
externalDatabase.port | External PostgreSQL port, used when postgresql.enabled is set to false | 5432
externalDatabase.username | External PostgreSQL username, used when postgresql.enabled is set to false | root
externalDatabase.password | External PostgreSQL password, used when postgresql.enabled is set to false | root
externalDatabase.database | External PostgreSQL database name, used when postgresql.enabled is set to false | dolphinscheduler
zookeeper.enabled | If no external Zookeeper is configured, Dolphin Scheduler uses an internal Zookeeper by default | true
zookeeper.taskQueue | Specify the task queue for master and worker | zookeeper
zookeeper.persistence.enabled | Set zookeeper.persistence.enabled to true to mount a new volume for the internal Zookeeper | false
zookeeper.persistence.size | PersistentVolumeClaim size | 20Gi
zookeeper.persistence.storageClass | Zookeeper data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
externalZookeeper.taskQueue | Task queue for master and worker when using an external Zookeeper (zookeeper.enabled set to false) | zookeeper
externalZookeeper.zookeeperQuorum | Zookeeper quorum when using an external Zookeeper (zookeeper.enabled set to false) | 127.0.0.1:2181
master.podManagementPolicy | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | Parallel
master.replicas | Replicas is the desired number of replicas of the given Template | 3
master.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
master.tolerations | If specified, the pod's tolerations | {}
master.affinity | If specified, the pod's scheduling constraints | {}
master.configmap.MASTER_EXEC_THREADS | Master execute thread number | 100
master.configmap.MASTER_EXEC_TASK_NUM | Number of tasks the master executes in parallel | 20
master.configmap.MASTER_HEARTBEAT_INTERVAL | Master heartbeat interval | 10
master.configmap.MASTER_TASK_COMMIT_RETRYTIMES | Master commit task retry times | 5
master.configmap.MASTER_TASK_COMMIT_INTERVAL | Master commit task interval | 1000
master.configmap.MASTER_MAX_CPULOAD_AVG | The master server can only work when the CPU load average is less than this value (default value: the number of CPU cores * 2) | 100
master.configmap.MASTER_RESERVED_MEMORY | The master server can only work when free memory is larger than this reserved memory, in G (default value: physical memory * 1/10) | 0.1
master.livenessProbe.enabled | Turn on and off the liveness probe | true
master.livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 30
master.livenessProbe.periodSeconds | How often to perform the probe | 30
master.livenessProbe.timeoutSeconds | When the probe times out | 5
master.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
master.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
master.readinessProbe.enabled | Turn on and off the readiness probe | true
master.readinessProbe.initialDelaySeconds | Delay before the readiness probe is initiated | 30
master.readinessProbe.periodSeconds | How often to perform the probe | 30
master.readinessProbe.timeoutSeconds | When the probe times out | 5
master.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
master.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
master.persistentVolumeClaim.enabled | Set master.persistentVolumeClaim.enabled to true to mount a new volume for the master | false
master.persistentVolumeClaim.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
master.persistentVolumeClaim.storageClassName | Master logs data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
master.persistentVolumeClaim.storage | PersistentVolumeClaim size | 20Gi
worker.podManagementPolicy | PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down | Parallel
worker.replicas | Replicas is the desired number of replicas of the given Template | 3
worker.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
worker.tolerations | If specified, the pod's tolerations | {}
worker.affinity | If specified, the pod's scheduling constraints | {}
worker.configmap.WORKER_EXEC_THREADS | Worker execute thread number | 100
worker.configmap.WORKER_HEARTBEAT_INTERVAL | Worker heartbeat interval | 10
worker.configmap.WORKER_FETCH_TASK_NUM | Number of tasks submitted at a time | 3
worker.configmap.WORKER_MAX_CPULOAD_AVG | The worker server can only work when the CPU load average is less than this value (default value: the number of CPU cores * 2) | 100
worker.configmap.WORKER_RESERVED_MEMORY | The worker server can only work when free memory is larger than this reserved memory, in G (default value: physical memory * 1/10) | 0.1
worker.configmap.DOLPHINSCHEDULER_DATA_BASEDIR_PATH | User data directory path; configure it yourself and make sure the directory exists with read/write permissions | /tmp/dolphinscheduler
worker.configmap.DOLPHINSCHEDULER_ENV | System env path; configure it yourself, please read values.yaml | []
worker.livenessProbe.enabled | Turn on and off the liveness probe | true
worker.livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 30
worker.livenessProbe.periodSeconds | How often to perform the probe | 30
worker.livenessProbe.timeoutSeconds | When the probe times out | 5
worker.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
worker.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
worker.readinessProbe.enabled | Turn on and off the readiness probe | true
worker.readinessProbe.initialDelaySeconds | Delay before the readiness probe is initiated | 30
worker.readinessProbe.periodSeconds | How often to perform the probe | 30
worker.readinessProbe.timeoutSeconds | When the probe times out | 5
worker.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
worker.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
worker.persistentVolumeClaim.enabled | Set worker.persistentVolumeClaim.enabled to true to enable persistentVolumeClaim for the worker | false
worker.persistentVolumeClaim.dataPersistentVolume.enabled | Set worker.persistentVolumeClaim.dataPersistentVolume.enabled to true to mount a data volume for the worker | false
worker.persistentVolumeClaim.dataPersistentVolume.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
worker.persistentVolumeClaim.dataPersistentVolume.storageClassName | Worker data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
worker.persistentVolumeClaim.dataPersistentVolume.storage | PersistentVolumeClaim size | 20Gi
worker.persistentVolumeClaim.logsPersistentVolume.enabled | Set worker.persistentVolumeClaim.logsPersistentVolume.enabled to true to mount a logs volume for the worker | false
worker.persistentVolumeClaim.logsPersistentVolume.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
worker.persistentVolumeClaim.logsPersistentVolume.storageClassName | Worker logs data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
worker.persistentVolumeClaim.logsPersistentVolume.storage | PersistentVolumeClaim size | 20Gi
alert.strategy.type | Type of deployment. Can be "Recreate" or "RollingUpdate" | RollingUpdate
alert.strategy.rollingUpdate.maxSurge | The maximum number of pods that can be scheduled above the desired number of pods | 25%
alert.strategy.rollingUpdate.maxUnavailable | The maximum number of pods that can be unavailable during the update | 25%
alert.replicas | Replicas is the desired number of replicas of the given Template | 1
alert.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
alert.tolerations | If specified, the pod's tolerations | {}
alert.affinity | If specified, the pod's scheduling constraints | {}
alert.configmap.XLS_FILE_PATH | XLS file path | /tmp/xls
alert.configmap.MAIL_SERVER_HOST | Mail server host | nil
alert.configmap.MAIL_SERVER_PORT | Mail server port | nil
alert.configmap.MAIL_SENDER | Mail sender | nil
alert.configmap.MAIL_USER | Mail user | nil
alert.configmap.MAIL_PASSWD | Mail password | nil
alert.configmap.MAIL_SMTP_STARTTLS_ENABLE | Mail SMTP STARTTLS enable | false
alert.configmap.MAIL_SMTP_SSL_ENABLE | Mail SMTP SSL enable | false
alert.configmap.MAIL_SMTP_SSL_TRUST | Mail SMTP SSL trust | nil
alert.configmap.ENTERPRISE_WECHAT_ENABLE | Enterprise WeChat enable | false
alert.configmap.ENTERPRISE_WECHAT_CORP_ID | Enterprise WeChat corp id | nil
alert.configmap.ENTERPRISE_WECHAT_SECRET | Enterprise WeChat secret | nil
alert.configmap.ENTERPRISE_WECHAT_AGENT_ID | Enterprise WeChat agent id | nil
alert.configmap.ENTERPRISE_WECHAT_USERS | Enterprise WeChat users | nil
alert.livenessProbe.enabled | Turn on and off the liveness probe | true
alert.livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 30
alert.livenessProbe.periodSeconds | How often to perform the probe | 30
alert.livenessProbe.timeoutSeconds | When the probe times out | 5
alert.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
alert.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
alert.readinessProbe.enabled | Turn on and off the readiness probe | true
alert.readinessProbe.initialDelaySeconds | Delay before the readiness probe is initiated | 30
alert.readinessProbe.periodSeconds | How often to perform the probe | 30
alert.readinessProbe.timeoutSeconds | When the probe times out | 5
alert.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
alert.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
alert.persistentVolumeClaim.enabled | Set alert.persistentVolumeClaim.enabled to true to mount a new volume for alert | false
alert.persistentVolumeClaim.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
alert.persistentVolumeClaim.storageClassName | Alert logs data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
alert.persistentVolumeClaim.storage | PersistentVolumeClaim size | 20Gi
api.strategy.type | Type of deployment. Can be "Recreate" or "RollingUpdate" | RollingUpdate
api.strategy.rollingUpdate.maxSurge | The maximum number of pods that can be scheduled above the desired number of pods | 25%
api.strategy.rollingUpdate.maxUnavailable | The maximum number of pods that can be unavailable during the update | 25%
api.replicas | Replicas is the desired number of replicas of the given Template | 1
api.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
api.tolerations | If specified, the pod's tolerations | {}
api.affinity | If specified, the pod's scheduling constraints | {}
api.livenessProbe.enabled | Turn on and off the liveness probe | true
api.livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 30
api.livenessProbe.periodSeconds | How often to perform the probe | 30
api.livenessProbe.timeoutSeconds | When the probe times out | 5
api.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
api.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
api.readinessProbe.enabled | Turn on and off the readiness probe | true
api.readinessProbe.initialDelaySeconds | Delay before the readiness probe is initiated | 30
api.readinessProbe.periodSeconds | How often to perform the probe | 30
api.readinessProbe.timeoutSeconds | When the probe times out | 5
api.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
api.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
api.persistentVolumeClaim.enabled | Set api.persistentVolumeClaim.enabled to true to mount a new volume for api | false
api.persistentVolumeClaim.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
api.persistentVolumeClaim.storageClassName | API logs data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
api.persistentVolumeClaim.storage | PersistentVolumeClaim size | 20Gi
frontend.strategy.type | Type of deployment. Can be "Recreate" or "RollingUpdate" | RollingUpdate
frontend.strategy.rollingUpdate.maxSurge | The maximum number of pods that can be scheduled above the desired number of pods | 25%
frontend.strategy.rollingUpdate.maxUnavailable | The maximum number of pods that can be unavailable during the update | 25%
frontend.replicas | Replicas is the desired number of replicas of the given Template | 1
frontend.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node | {}
frontend.tolerations | If specified, the pod's tolerations | {}
frontend.affinity | If specified, the pod's scheduling constraints | {}
frontend.livenessProbe.enabled | Turn on and off the liveness probe | true
frontend.livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 30
frontend.livenessProbe.periodSeconds | How often to perform the probe | 30
frontend.livenessProbe.timeoutSeconds | When the probe times out | 5
frontend.livenessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
frontend.livenessProbe.successThreshold | Minimum consecutive successes for the probe | 1
frontend.readinessProbe.enabled | Turn on and off the readiness probe | true
frontend.readinessProbe.initialDelaySeconds | Delay before the readiness probe is initiated | 30
frontend.readinessProbe.periodSeconds | How often to perform the probe | 30
frontend.readinessProbe.timeoutSeconds | When the probe times out | 5
frontend.readinessProbe.failureThreshold | Minimum consecutive failures for the probe | 3
frontend.readinessProbe.successThreshold | Minimum consecutive successes for the probe | 1
frontend.persistentVolumeClaim.enabled | Set frontend.persistentVolumeClaim.enabled to true to mount a new volume for frontend | false
frontend.persistentVolumeClaim.accessModes | PersistentVolumeClaim access modes | [ReadWriteOnce]
frontend.persistentVolumeClaim.storageClassName | Frontend logs data Persistent Volume storage class. If set to "-", storageClassName: "", which disables dynamic provisioning | -
frontend.persistentVolumeClaim.storage | PersistentVolumeClaim size | 20Gi
ingress.enabled | Enable ingress | false
ingress.host | Ingress host | dolphinscheduler.org
ingress.path | Ingress path | /
ingress.tls.enabled | Enable ingress TLS | false
ingress.tls.hosts | Ingress TLS hosts | dolphinscheduler.org
ingress.tls.secretName | Ingress TLS secret name | dolphinscheduler-tls
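
For example, to point the chart at an existing PostgreSQL instead of the bundled one, disable the internal database and fill in the externalDatabase values. The snippet below is only a sketch using the keys from the table above; the file name, host, and credentials are placeholders:

# values-external-db.yaml (hypothetical file name)
postgresql:
  enabled: false
externalDatabase:
  host: "postgres.example.com"
  port: 5432
  username: "dolphinscheduler"
  password: "changeme"
  database: "dolphinscheduler"

$ helm install --name dolphinscheduler -f values-external-db.yaml .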

For more information, please refer to the chart documentation.