* Use try-with-resources to close resources, and add a heartbeat error threshold so that a failed heartbeat check does not prevent the worker from closing
* Move the heartbeat error threshold to application.yml
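A minimal sketch of how such a threshold could work; the class, method and property names below are illustrative assumptions, not the actual DolphinScheduler code:
```java
// Illustrative sketch only: count consecutive heartbeat failures and react
// once a configurable threshold (read from application.yml) is exceeded.
public class HeartbeatTask implements Runnable {

    private final int heartbeatErrorThreshold; // hypothetical property injected from application.yml
    private int consecutiveFailures = 0;

    public HeartbeatTask(int heartbeatErrorThreshold) {
        this.heartbeatErrorThreshold = heartbeatErrorThreshold;
    }

    @Override
    public void run() {
        try {
            reportHeartbeat();            // hypothetical call to the registry
            consecutiveFailures = 0;      // reset on success
        } catch (Exception e) {
            consecutiveFailures++;
            if (consecutiveFailures >= heartbeatErrorThreshold) {
                handleHeartbeatFailure(); // hypothetical: only react after repeated failures
            }
        }
    }

    private void reportHeartbeat() throws Exception { /* send heartbeat to registry */ }

    private void handleHeartbeatFailure() { /* e.g. stop or close the worker */ }
}
```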
* Optimize the master log: add the workflow instance id and task instance id to the log output
* Use MDC to set the workflow info in log4j
* Add workflowInstanceId and taskInstanceId to the MDC
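As context for the MDC change above, a minimal SLF4J sketch; the surrounding class and method are illustrative, only `MDC.put`/`MDC.remove` and the key names come from the bullets. A logging pattern could then reference the ids via `%X{workflowInstanceId}` / `%X{taskInstanceId}`:
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class WorkflowLogDemo {

    private static final Logger logger = LoggerFactory.getLogger(WorkflowLogDemo.class);

    // Put the ids into the MDC before processing so every log line produced by this
    // thread can carry them, and always clean up afterwards.
    public void process(int workflowInstanceId, int taskInstanceId) {
        try {
            MDC.put("workflowInstanceId", String.valueOf(workflowInstanceId));
            MDC.put("taskInstanceId", String.valueOf(taskInstanceId));
            logger.info("start to handle task");
        } finally {
            MDC.remove("workflowInstanceId");
            MDC.remove("taskInstanceId");
        }
    }
}
```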
* [Fix-10181] Fix the logic for judging whether the tenant exists
Add a parameter description to configuration.md
Add a test method
Add the configuration item 'tenant-distributed-user' to the worker application.yaml to make it suitable for distributed users. If it is false, the original logic remains unchanged.
At present, since the users are managed as distributed users, creating users in Linux should not be allowed in this mode.
Use the Linux `id` command to get user information, which covers users in the /etc/passwd file as well as cached sssd users.
For example, `id test` returns:
1. user exists in /etc/passwd or LDAP: uid=1030(test) gid=1030(test) groups=1030(test)
2. user exists in neither /etc/passwd nor LDAP: id: test: no such user
Windows and macOS cannot be tested for now.
Co-authored-by: ouyangl <ouyangl@tebon.com.cn>
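A minimal sketch of the tenant-existence check described above, assuming a hypothetical helper that simply runs `id <user>` and inspects the exit code (not the project's actual implementation):
```java
import java.io.IOException;

public class OsUserCheck {

    /**
     * Hypothetical helper: returns true when the OS can resolve the user,
     * whether it comes from /etc/passwd, LDAP or the sssd cache.
     * `id <user>` exits with 0 when the user exists, and non-zero
     * ("id: <user>: no such user") otherwise.
     */
    public static boolean userExists(String userName) {
        try {
            Process process = new ProcessBuilder("id", userName)
                    .redirectErrorStream(true)
                    .start();
            return process.waitFor() == 0;
        } catch (IOException e) {
            return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```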
* [common] Make dolphinscheduler_env.sh work
* Change the dist tarball `dolphinscheduler_env.sh` location
from `bin/` to `conf/`, so users can finish their
configuration changes in one single directory,
and we only need to add `$DOLPHINSCHEDULER_HOME/conf`
when we start our server instead of adding both
`$DOLPHINSCHEDULER_HOME/conf` and `$DOLPHINSCHEDULER_HOME/bin`
* Change the path of `dolphinscheduler_env.sh` in `start.sh`
* Change the setting order of `dolphinscheduler_env.sh`
* `bin/env/dolphinscheduler_env.sh` will overwrite `<server>/conf/dolphinscheduler_env.sh`
when starting the server via `bin/dolphinscheduler_daemon.sh` or `bin/install.sh`
* Change the related docs
Currently the size of our distribution package is close to
800 MB; this patch migrates the python gateway server into
the api server.
The distribution package size before and after this patch is:
```sh
# before
796M apache-dolphinscheduler-2.0.4-SNAPSHOT-bin.tar.gz
# after
647M apache-dolphinscheduler-2.0.4-SNAPSHOT-bin.tar.gz
```
* feat(resource manager): extend s3 to the storage of ds
1. fix some spelling issues
2. extend the supported storage types
3. add the s3utils to manage resources
4. automatically inject the storage according to your config
* fix(resource manager): update the dependency
* fix(resource manager): extend s3 to the storage of ds
fix the constant in HadoopUtils
* fix(resource manager): extend s3 to the storage of ds
1. fix some spelling issues
2. delete the wildcard imports
* fix(resource manager): merge the unit tests:
1. TenantServiceImpl
2. ResourceServiceImpl
3. UserServiceImpl
* fix(resource manager): extend s3 to the storage of ds
merge the resourceServiceTest
* fix(resource manager): cancel the test methods
createTenant and verifyTenant
* fix(resource manager): merge the code following the Sonar check results
* fix(resource manager): extend s3 to the storage of ds
fix the spelling issues
* fix(resource manager): extend s3 to the storage of ds
revert the common.properties
* fix(resource manager): extend s3 to the storage of ds
update the storageConfig with None
* fix(resource manager): extend s3 to the storage of ds
fix the resourceType check
* fix(resource manager): extend s3 to the storage of ds
undo the compile-mysql
* fix(resource manager): extend s3 to the storage of ds
delete hadoop aws
* fix(resource manager): extend s3 to the storage of ds
update the known-dependencies to remove aws 1.7.4
update the e2e file-manager and common.properties
* fix(resource manager): extend s3 to the storage of ds
update the aws-region
* fix(resource manager): extend s3 to the storage of ds
fix the storageconfig init
* fix(resource manager): update e2e docker-compose
update e2e docker-compose
* fix(resource manager): extend s3 to the storage of ds
revert the e2e common.properties
print the resource type in PropertyUtils
* fix(resource manager): extend s3 to the storage of ds
1.println the properties
* fix(resource manager): println the s3 info
* fix(resource manager): extend s3 to the storage of ds
delete the info and upgrade the s3 info to e2e
* fix(resource manager): extend s3 to the storage of ds
add the bucket init
* fix(resource manager): extend s3 to the storage of ds
1. fix some spelling issues
2. delete the wildcard imports
* fix(resource manager): extend s3 to the storage of ds
upgrade the s3 endpoint
* fix(resource manager): withPathStyleAccessEnabled(true)
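For reference, a path-style S3 client like the one these commits describe could be built roughly as follows with the AWS SDK for Java v1 builder; the endpoint, region and credential values are placeholders, not the actual DolphinScheduler configuration:
```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientFactory {

    // Sketch only: build an S3 client against a custom endpoint (e.g. a MinIO-style
    // endpoint such as http://s3:9000), where path-style access must be enabled.
    public static AmazonS3 buildClient(String endpoint, String region,
                                       String accessKey, String secretKey) {
        return AmazonS3ClientBuilder.standard()
                .withPathStyleAccessEnabled(true)
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, region))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
                .build();
    }
}
```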
* fix(resource manager): extend s3 to the storage of ds
1. fix some spelling issues
2. delete the wildcard imports
* fix(resource manager): upgrade the s3client builder
* fix(resource manager): correct the s3 reference to point to the s3 client
* fix(resource manager): update the constant BUCKET_NAME
* fix(resource manager): e2e s3 endpoint -> s3:9000
* fix(resource manager): extend s3 to the storage of ds
1. fix some spelling issues
2. delete the wildcard imports
* style(resource manager): add info to createBucket
* style(resource manager): debug the log
* ci(resource manager): test
test s3
* ci(ci): add INSERT INTO dolphinscheduler.t_ds_tenant (id, tenant_code, description, queue_id, create_time, update_time) VALUES(1, 'root', NULL, 1, NULL, NULL); to h2.sql
* fix(resource manager): update the h2 sql
* fix(resource manager): solve to delete the tenant
* style(resource manager): merge the style and delete the unused s3 config
* fix(resource manager): extend s3 to the storage of ds
update renaming of resources when using s3
* fix(resource manager): extend s3 to the storage of ds
1.fix the code style of QuartzImpl
* fix(resource manager): extend s3 to the storage of ds
1. import restore_type to CommonUtils
* fix(resource manager): update the work thread
* fix(resource manager): update the baseTaskProcessor
* fix(resource manager): upgrade dolphinscheduler-standalone-server.xml
* fix(resource manager): add user Info to dolphinscheduler_h2.sql
* fix(resource manager): merge the resourceType to NONE
* style: upgrade the log level to info
* fix(resource manager): sync the h2.sql
* fix(resource manager): update the merge the user tenant
* fix(resource manager): merge the resourcesServiceImpl
* fix(resource manager):
when the storage is s3, the directory can't be renamed
* fix(resource manager): in s3, the directory cannot be renamed
* fix(resource manager): delete the deleteRenameDirectory in E2E
* fix(resource manager): check the style and recovered the test
* fix(resource manager): delete the log.print(LoginUser)
* [python] Add an integration test for the python gateway server
* Build the Java code and create the standalone server image in GA
* Add a component to start Docker in Python
* Run the examples to make sure they work
close: #8035
* Fix build docker image working directory
* Fix working directory
* convert dates according to the timezone
* remove @JsonFormat
* add unit test
* fix time preview in scheduler
* optimization & add env config
Co-authored-by: caishunfeng <534328519@qq.com>
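A minimal sketch of the kind of timezone-aware conversion these commits imply; the method and formatter pattern are illustrative assumptions, not the project's actual utility:
```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class TimezoneDemo {

    // Render the same instant in the user's timezone instead of relying on a
    // fixed @JsonFormat pattern on the entity.
    public static String format(Date date, String userTimezone) {
        ZonedDateTime zoned = Instant.ofEpochMilli(date.getTime())
                .atZone(ZoneId.of(userTimezone));
        return zoned.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        // 2022-01-01 00:00:00 UTC rendered for a user in Asia/Shanghai -> 2022-01-01 08:00:00
        Date utcMidnight = Date.from(Instant.parse("2022-01-01T00:00:00Z"));
        System.out.println(format(utcMidnight, "Asia/Shanghai"));
    }
}
```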
For now, the Python API can only communicate with the python gateway server
on the same host; this patch makes it work across different hosts,
and exports the Java gateway settings to the configuration file
Co-authored-by: kezhenxu94 <kezhenxu94@apache.org>
Co-authored-by: ruanwenjun <861923274@qq.com>
* Split the components into individual package
A follow-up PR will be made to build dedicated Docker images for each
component, so that every component Docker image has minimal jars, which
is easy to maintain and good for security fixes.