dolphinscheduler.queue.impl    zookeeper
Task queue implementation; default "zookeeper".
zookeeper.dolphinscheduler.root    /dolphinscheduler
DolphinScheduler root node in ZooKeeper.
zookeeper.session.timeout    300    (int)
ZooKeeper session timeout.
zookeeper.connection.timeout    300    (int)
ZooKeeper connection timeout.
zookeeper.retry.base.sleep    100    (int)
Base sleep time between connection retries.
zookeeper.retry.max.sleep    30000    (int)
Maximum sleep time between connection retries.
zookeeper.retry.maxtime    5    (int)
Maximum number of connection retries.
res.upload.startup.type    NONE    (value list: HDFS, S3, NONE)
Resource upload startup type: HDFS, S3, or NONE.
hdfs.root.user    hdfs
User who has permission to create directories under the HDFS root path.
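Taken together, the ZooKeeper settings above form one block of the properties file. A minimal sketch using only the defaults listed in this table:

    # ZooKeeper task queue and coordination settings (defaults from this table)
    dolphinscheduler.queue.impl=zookeeper
    zookeeper.dolphinscheduler.root=/dolphinscheduler
    zookeeper.session.timeout=300
    zookeeper.connection.timeout=300
    zookeeper.retry.base.sleep=100
    zookeeper.retry.max.sleep=30000
    zookeeper.retry.maxtime=5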
data.store2hdfs.basepath    /dolphinscheduler
Base directory on HDFS where resource files are stored. Configure it yourself;
make sure the directory exists on HDFS and has read/write permissions.
"/dolphinscheduler" is recommended.
data.basedir.path    /tmp/dolphinscheduler
Local user data directory. Configure it yourself; make sure the directory
exists and has read/write permissions.
hadoop.security.authentication.startup.state    false    (value list: true, false)
Whether Kerberos authentication is enabled at startup.
java.security.krb5.conf.path    /opt/krb5.conf
Path to the java.security.krb5.conf file.
login.user.keytab.username    hdfs-mycluster@ESZ.COM
Principal used by loginUserFromKeytab.
login.user.keytab.path    /opt/hdfs.headless.keytab
Keytab file used by loginUserFromKeytab.
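When the cluster requires Kerberos, the four entries above are usually changed as a group. A sketch, reusing the example principal and paths from this table:

    # Enable Kerberos authentication for HDFS access
    hadoop.security.authentication.startup.state=true
    java.security.krb5.conf.path=/opt/krb5.conf
    login.user.keytab.username=hdfs-mycluster@ESZ.COM
    login.user.keytab.path=/opt/hdfs.headless.keytab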
resource.view.suffixs    txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties
File suffixes that can be viewed in the resource center.
fs.defaultFS    hdfs://mycluster:8020
HA or single NameNode. For NameNode HA, copy core-site.xml and hdfs-site.xml
into the conf directory. S3 is also supported, for example: s3a://dolphinscheduler.
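To illustrate the two HDFS forms side by side (a sketch; "mycluster" is the HA nameservice ID resolved via the copied core-site.xml and hdfs-site.xml, and "namenode1:8020" is a hypothetical single-NameNode address):

    # NameNode HA: use the nameservice ID; requires core-site.xml and
    # hdfs-site.xml in the conf directory
    fs.defaultFS=hdfs://mycluster:8020
    # Single NameNode: point directly at the NameNode host and port
    # fs.defaultFS=hdfs://namenode1:8020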
fs.s3a.endpoint    http://host:9010
Required for S3: the S3 endpoint.
fs.s3a.access.key    A3DXS30FO22544RE
Required for S3: the S3 access key.
fs.s3a.secret.key    OloCLq3n+8+sdPHUhJ21XrSxTC+JK
Required for S3: the S3 secret key.
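For S3-backed storage, the endpoint and credentials above pair with an s3a default filesystem. A sketch using the example values from this table (the credentials are placeholders):

    # Store resource files on S3 (credentials are placeholders)
    res.upload.startup.type=S3
    fs.defaultFS=s3a://dolphinscheduler
    fs.s3a.endpoint=http://host:9010
    fs.s3a.access.key=A3DXS30FO22544RE
    fs.s3a.secret.key=OloCLq3n+8+sdPHUhJ21XrSxTC+JK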
loggerserver.rpc.port    50051    (int)
LoggerServer RPC port.