Distributed scheduling framework.

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# user data local directory path, please make sure the directory exists and has read/write permissions
data.basedir.path=/tmp/dolphinscheduler
# resource storage type: HDFS, S3, NONE
resource.storage.type=S3
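# for example, to store resources on HDFS instead (a sketch only; also adjust fs.defaultFS and hdfs.root.user below):
#resource.storage.type=HDFS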
# resource storage path on HDFS/S3; resource files will be stored under this path. Please make sure the directory exists on HDFS/S3 and has read/write permissions; "/dolphinscheduler" is recommended
resource.upload.path=/dolphinscheduler
# whether to enable Kerberos
hadoop.security.authentication.startup.state=false
# java.security.krb5.conf path
java.security.krb5.conf.path=/opt/krb5.conf
# keytab username of the login user
login.user.keytab.username=hdfs-mycluster@ESZ.COM
# keytab path of the login user
login.user.keytab.path=/opt/hdfs.headless.keytab
# Kerberos expiration time, in hours
kerberos.expire.time=2
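# a minimal sketch of enabling Kerberos, assuming a hypothetical realm and keytab; point the paths above at your cluster's files:
#hadoop.security.authentication.startup.state=true
#login.user.keytab.username=hdfs-yourcluster@YOUR.REALM
#login.user.keytab.path=/opt/your-hdfs.headless.keytab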
# resource view suffixes
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
hdfs.root.user=hdfs
# if resource.storage.type=S3, use a value like s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to the conf dir
fs.defaultFS=s3a://dolphinscheduler
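# for example, when resource.storage.type=HDFS (hypothetical namenode address):
#fs.defaultFS=hdfs://mycluster:8020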
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088
# if resourcemanager HA is enabled, set the HA IPs here; if resourcemanager runs standalone, keep this value empty
yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
# if resourcemanager HA is enabled or resourcemanager is not used, keep the default value; if resourcemanager runs standalone, just replace ds1 with the actual resourcemanager hostname
yarn.application.status.address=http://ds1:%s/ws/v1/cluster/apps/%s
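# for example, with a single resourcemanager on a hypothetical host named yourRM1 (the first %s is presumably filled with resource.manager.httpaddress.port, the second with the application id):
#yarn.application.status.address=http://yourRM1:%s/ws/v1/cluster/apps/%s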
# job history status URL when the application number threshold is reached (default 10000, may be set to 1000)
yarn.job.history.status.address=http://ds1:19888/ws/v1/history/mapreduce/jobs/%s
# datasource encryption enable
datasource.encryption.enable=false
# datasource encryption salt
datasource.encryption.salt=!@#$%^&*
# whether to use sudo: if true, the executing user is the tenant user and the deploy user needs sudo permissions; if false, the executing user is the deploy user and no sudo permissions are needed
sudo.enable=true
# preferred network interface, e.g. eth0; default: empty
#dolphin.scheduler.network.interface.preferred=
# network IP priority strategy (default, inner or outer); default: default
#dolphin.scheduler.network.priority.strategy=default
# system env path
#dolphinscheduler.env.path=dolphinscheduler_env.sh
# development state
development.state=false
# alert rpc port
alert.rpc.port=50052
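# aws access configuration, used when resource.storage.type=S3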
aws.access.key.id=accessKey123
aws.secret.access.key=secretKey123
aws.region=us-east-1
aws.endpoint=http://s3:9000
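# for example, when pointing at AWS-hosted S3 instead of a local S3-compatible service (region must match the bucket):
#aws.endpoint=https://s3.us-east-1.amazonaws.com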
# Task resource limit state
task.resource.limit.state=false