| Parameter | Default | Type | Description |
| --- | --- | --- | --- |
| `dolphinscheduler.queue.impl` | `zookeeper` | | Task queue implementation; default is "zookeeper" |
| `zookeeper.dolphinscheduler.root` | `/dolphinscheduler` | | DolphinScheduler root directory in ZooKeeper |
| `zookeeper.session.timeout` | `300` | int | ZooKeeper session timeout |
| `zookeeper.connection.timeout` | `30000` | int | ZooKeeper connection timeout |
| `zookeeper.retry.base.sleep` | `100` | int | Base sleep time between ZooKeeper retries |
| `zookeeper.retry.max.sleep` | `30000` | int | Maximum sleep time between ZooKeeper retries |
| `zookeeper.retry.maxtime` | `10` | int | Maximum number of ZooKeeper retries |
| `res.upload.startup.type` | `NONE` | value-list | Resource upload startup type: HDFS, S3, or NONE |
| `hdfs.root.user` | `hdfs` | | User with permission to create directories under the HDFS root path |
| `data.store2hdfs.basepath` | `/dolphinscheduler` | | Base HDFS directory where resource files are stored. Configure it yourself and make sure the directory exists on HDFS with read/write permissions; "/dolphinscheduler" is recommended |
| `data.basedir.path` | `/tmp/dolphinscheduler` | | Local user data directory. Configure it yourself and make sure the directory exists with read/write permissions |
| `hadoop.security.authentication.startup.state` | `false` | value-list | Whether to enable Hadoop security (Kerberos) authentication at startup: true or false |
| `java.security.krb5.conf.path` | `/opt/krb5.conf` | | Path to java.security.krb5.conf |
| `login.user.keytab.username` | `hdfs-mycluster@ESZ.COM` | | LoginUserFromKeytab user |
| `login.user.keytab.path` | `/opt/hdfs.headless.keytab` | | LoginUserFromKeytab path |
| `resource.view.suffixs` | `txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties` | | File suffixes that can be viewed in the resource center |
| `fs.defaultFS` | `hdfs://mycluster:8020` | | HA or single NameNode. For NameNode HA, copy core-site.xml and hdfs-site.xml to the conf directory. S3 is also supported, e.g. `s3a://dolphinscheduler` |
| `fs.s3a.endpoint` | `http://host:9010` | | S3 endpoint (required for S3) |
| `fs.s3a.access.key` | `A3DXS30FO22544RE` | | S3 access key (required for S3) |
| `fs.s3a.secret.key` | `OloCLq3n+8+sdPHUhJ21XrSxTC+JK` | | S3 secret key (required for S3) |
| `loggerserver.rpc.port` | `50051` | int | Logger server RPC port |
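The `zookeeper.retry.*` settings work together as the ZooKeeper client's retry behavior; taken together they likely describe an exponential-backoff policy (base sleep, maximum sleep, maximum number of retries). A sketch of the group as it would appear in a properties file, using the documented defaults:

```properties
# ZooKeeper coordination settings (values are the documented defaults)
zookeeper.dolphinscheduler.root=/dolphinscheduler
zookeeper.session.timeout=300
zookeeper.connection.timeout=30000

# Retry behavior: start at 100 ms, back off up to 30000 ms, retry at most 10 times
zookeeper.retry.base.sleep=100
zookeeper.retry.max.sleep=30000
zookeeper.retry.maxtime=10
```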
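For example, to store uploaded resources on HDFS, a minimal configuration might look like the following. The cluster name `mycluster` and the paths shown are the documented example values; adjust them to your environment:

```properties
# Enable resource upload to HDFS
res.upload.startup.type=HDFS

# NameNode address (for NameNode HA, also copy core-site.xml and hdfs-site.xml into the conf directory)
fs.defaultFS=hdfs://mycluster:8020

# User allowed to create directories under the HDFS root path
hdfs.root.user=hdfs

# HDFS directory where resource files are stored; must exist with read/write permissions
data.store2hdfs.basepath=/dolphinscheduler
```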
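Similarly, a sketch of an S3-backed setup, assuming a reachable S3-compatible endpoint. The endpoint and credentials below are the illustrative placeholders from the table above, not real values:

```properties
# Enable resource upload to S3
res.upload.startup.type=S3

# S3 bucket as the default filesystem, plus the S3-compatible endpoint
fs.defaultFS=s3a://dolphinscheduler
fs.s3a.endpoint=http://host:9010

# S3 credentials (replace with your own)
fs.s3a.access.key=A3DXS30FO22544RE
fs.s3a.secret.key=OloCLq3n+8+sdPHUhJ21XrSxTC+JK
```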
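When the cluster uses Kerberos, the authentication properties tie together as below. The realm and keytab paths shown are the documented example values, not universal defaults:

```properties
# Turn on Kerberos authentication at startup
hadoop.security.authentication.startup.state=true

# Kerberos client configuration
java.security.krb5.conf.path=/opt/krb5.conf

# Principal and keytab used for the keytab-based login
login.user.keytab.username=hdfs-mycluster@ESZ.COM
login.user.keytab.path=/opt/hdfs.headless.keytab
```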