* [Feature-3327][ui] Add the function of re-uploading files in the resource center (#3394)
* Before creating a workflow, clear the canvas
* [Fix-3256][ui] Cherry-pick commit from dev to fix admin user info update error (#3306)
* [Improvement-3327][api] Support re-uploading the resource file (#3395)
* [Fix-3390][server] Running a hive sql task needs to find the hdfs path correctly (#3396)
* Update soft version
* hive UDF function: modify the background color
* Fix master task dependency check bug (#3473)
* Cancel spark task version check (#3406)
* [Bug][ui] Fix front-end bug #3413
* [Feature][ambari_plugin] Support one worker belonging to different worker groups when executing the install script (#3410)
* Optimize dag
* Update actions.js (#3401)
* [Fix-3256][ui] Fix admin user info update error (#3425) (#3428)
* [PROPOSAL-3139] Datasource selection changes from radio to select
* [BUG FIX] issue #3256; reset createUser.vue
* [Fix-3433][api] Fixed that releasing an imported process definition whose version is below 1.3.0 fails
* Add a check for dag connections
* [Fix-3423][dao][sql] Fixed that the resource file of the task node can't be found when upgrading from 1.2.0 to 1.3.x (#3454)
* Remove node deep monitoring
* [Fix-3423][dao] If the worker group id is null, don't set the value of the worker group (#3460)
* [ui] Code optimization
* [fix-3058][ui] Move rtTargetArr to jsPlumbHandle.js
* [optimization][ui] Prevent the shell script input box from being empty
* [Fix-3462][api] If the login user is admin, list all udfs; add a test for QueryUdfFuncList and fix code smells (#3465)
* [Fix-3463][api] Fixed that running a sql task fails after renaming the udf resource (#3482)
* [fixBug-3058][ui] Fix connection abnormalities in historical workflow instance data
* [Feature-3327][ui] Add the function of re-uploading files in the udf subdirectory
* [maven-release-plugin] Prepare release 1.3.2 and prepare for the next development iteration
* Fix ci_e2e failure (#3497); later Revert "fix ci_e2e fail (#3497)" (reverts commit e367f90bb7)
* [Fix-3469][api] Filter resources by program type; list python and jar files (#3498)
* Test release 1.3.2 version rollback (#3499, #3503, #3504)
* [Feature] JVM parameter optimization, related issue #3370
* [Fix-3469][ui] Maintain resource values and filter resources according to program type
* Fix ds multi-level directory in zk, which led to failure to assign work
* Add login user check for some actions in the api
* [Hotfix][ci] Fix e2e ci docker image build error
* Modify tag 1.3.0 to HEAD (#3525)
* Remove the OGNL part of the mybatis notice (#3526)
* Release 1.3.2 version rollback (#3527)
* [ui] Modify the delay loading time of the script input box
* Forbid general users to create tokens (#3533); extended so general users can't create, delete, or update tokens (#3538)
* [ui] Forbid selecting non-existent resources and modify the tree display data format
* If a task is null, set the task type to null instead of "null"
* [Fix-3536][api] If the user doesn't have a tenant, creating a resource directory will throw an NPE (#3537)
* [ui] Add loading to the resource-delete OK button; change the number of homepage display cursors
* [Fix-3616][Server] When the worker's ack/response to the master throws an exception, retry asynchronously (#3748, #3776)
* [fixbug][ui] Repair the master and worker management instrument display
* [Fix-3238][docker] Fix that a folder cannot be created in docker standalone mode (#3741)
* [fixbug][ui] Disable non-existent or deleted resources
* [fixBug-3621][ui] If the workflow instance is in executing status, forbid selecting it
* [fix-3553][ui] On clicking a workflow connection, select the entire path
* Fix that the batch delete selection on the workflow definition and workflow instance pages cannot be canceled once selected
* [Improvement-3720][ui] js mailbox verification fix
* [Fix-3549][Server][sqlTask] The alias column in the query SQL does not take effect (#3784, #3786)
* [Fix-3124][docker] Fix that a docker image cannot be built on windows (#3765)
* [Fix-3258][Security][Worker group manage] Cannot get create time and update time; reports DateTimeParseException (#3787)
* [BugFixed] issue #3258 (#3265): update WorkerGroupServiceTest.java; delete UserState.java, ResourceSyncService.java, core-site.xml, and hdfs-site.xml
* [fixBug-3792][ui] Click on the sidebar to adapt the width of the pie chart on the project homepage
* [Bug-3713][HadoopUtils] catfile method stream not closed (#3715, #3810)
* Delete invalid field executorcores; modify the verification prompt
* dag: add close button; update dag.vue
* Update CLICK_SAVE_WORKFLOW_BUTTON xpath and CreateWorkflowLocator.java (submit workflow button, CLICK_ADD_BUTTON, remove print)
* Setting '-XX:+DisableExplicitGC' causes netty memory leaks; also update '-XX:LargePageSizeInBytes=128M' to '-XX:LargePageSizeInBytes=10M' (reverted once, reverts commit 3a2cba7a, then re-applied)
* Update dolphinscheduler-daemon.sh
* [Fix-#3487][api, dao] Cherry-pick from dev to fix duplicate folder names being created under multithreading
* [Hotfix-3131][api] Fix the "new tenant already exists" prompt; add test cases (#3132)
* Set up JDK 11 for SonarCloud in the github action; fix javadoc errors with JDK 11 (#3052)
* [fixBug-3621][ui] Select the batch checkbox to unfilter instances in the executing state
* Verify that the tenant name cannot contain special characters
* [fixBug-3840][ui] The tenant code only allows letters or a combination of letters and numbers
* [Fix-#3702][api] When re-uploading a resource file without changing its name or description, replace the original resource file (#3862)
* [fixbug-3621][ui] Forbid checking workflow instances in the ready-to-stop or ready-to-suspend state
* [fixbug-3887][ui] Fix missing English translation of re-uploaded files
* Add process definition name verification (#3879)
* Revert "[1.3.3-release][fix-3835][ui] When the tenantName contains "<", the tenant drop-down list is blank"; revert pr 3872
* [FIX-3617][Service] After subtask fault tolerance, 2 task instances are generated (#3830); refactor the sub-work command process, add process service unit tests, and move ut from java8 to java11
* [Fix-#3487][sql] Add dolphinscheduler_dml.sql under 1.3.3_schema (#3907); update uc_dolphin_T_t_ds_resources_un
* [FIX-3836][1.3.3-release-API] Fix the error message of the process definition name validation interface (#3899)
* [FIX_#3789][remote] Cherry-pick from dev to support netty heart beat (#3913)
* Repair: check box cannot be canceled
* [fix-3843][api] When updating a workflow definition whose name already exists, make the prompt friendly (#3918)
* Re-modify the workflow definition name and add a check
* [#3931][ui] Field name optimization for spark, flink, and mr
* Change version from 1.3.2-SNAPSHOT to 1.3.3-SNAPSHOT (#3934)
* [maven-release-plugin] Prepare release 1.3.3 and prepare for the next development iteration
* [ambari-plugin] Change version 1.3.2 to 1.3.3 (#3935)
* Fix bug 3615: after the task is executed successfully but the next task has not been submitted, stop the master
* [fixBug-3964][ui] Switching back and forth over the timeout alarm leaves the selected value empty
* Solve too many open files: close logClientService (#3971)
* Fix #3966: sub process does not send an alert mail after the process instance ends (#3972)
* [Fix-#3618][server] Release the file handle after the task finishes executing (#3975)
* [Fix-#3958][api] Files should not be created in the directory of an authorized file
* [FIX-3966] The timeout warning does not take effect in sub_process; add timeout warnings for sub_process/dependent tasks (#3982)
* Update worker group to inherit from the parent
* Fix import of admin user data in dolphinscheduler_mysql.sql
* [FIX-3929] The condition task would post wrong tasks on failover (#3999); remove stdout from the logback configuration; add a skip-node judge
* [FIX-3929] Because of a missing lock, start-up failover would dispatch two identical tasks (#4004)
* Revert pom version to 1.3.3-release
* Fix bug 4010: remove failed condition tasks from the error-task-list (#4011)
* Merge from 1.3.3-release; refactor code style and unit tests

Co-authored-by: break60 <790061044@qq.com>
Co-authored-by: wuchunfu <319355703@qq.com>
Co-authored-by: lgcareer <18610854716@163.com>
Co-authored-by: xingchun-chen <55787491+xingchun-chen@users.noreply.github.com>
Co-authored-by: lenboo <baoliang@analysys.com.cn>
Co-authored-by: qiaozhanwei <qiaozhanwei@analysys.com.cn>
Co-authored-by: Yelli <amarantine@my.com>
Co-authored-by: Eights-Li <yelli.hl@gmail.com>
Co-authored-by: JinyLeeChina <42576980+JinyLeeChina@users.noreply.github.com>
Co-authored-by: dailidong <dailidong66@gmail.com>
Co-authored-by: qiaozhanwei <qiaozhanwei@outlook.com>
Co-authored-by: XiaotaoYi <v-xiayi@hotmail.com>
Co-authored-by: Yichao Yang <1048262223@qq.com>
Co-authored-by: zhuangchong <zhuangchong8@163.com>
Co-authored-by: BoYiZhang <39816903+BoYiZhang@users.noreply.github.com>
Co-authored-by: muzhongjiang <mu_zhongjiang@163.com>
Co-authored-by: Jave-Chen <baicai.chen@gmail.com>
Co-authored-by: zhuangchong <zhuangchong6@163.com>
Co-authored-by: zhuangchong <37063904+zhuangchong@users.noreply.github.com>
Co-authored-by: Kirs <acm_master@163.com>
Co-authored-by: lgcareer <lgcareer@apache.org>
Co-authored-by: wulingqi <wulingqi@baijiahulian.com>
Authored by bao liang 4 years ago, committed by GitHub.
157 changed files with 4699 additions and 1493 deletions
@@ -1,467 +0,0 @@
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements. See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License. You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
-->
<configuration>
    <property>
        <name>spring.datasource.initialSize</name>
        <value>5</value>
        <description>
            Init connection number
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.minIdle</name>
        <value>5</value>
        <description>
            Min connection number
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.maxActive</name>
        <value>50</value>
        <description>
            Max connection number
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.maxWait</name>
        <value>60000</value>
        <description>
            Max wait time to get a connection, in milliseconds.
            If maxWait is configured, fair locks are enabled by default and concurrency efficiency decreases.
            If necessary, unfair locks can be used by setting the useUnfairLock attribute to true.
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.timeBetweenEvictionRunsMillis</name>
        <value>60000</value>
        <description>
            Interval, in milliseconds, between checks that close idle connections
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.timeBetweenConnectErrorMillis</name>
        <value>60000</value>
        <description>
            Interval, in milliseconds, at which the destroy thread checks connections and closes the
            physical connection if the idle time is greater than or equal to minEvictableIdleTimeMillis.
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.minEvictableIdleTimeMillis</name>
        <value>300000</value>
        <description>
            The longest time a connection remains idle without being evicted, in milliseconds
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.validationQuery</name>
        <value>SELECT 1</value>
        <description>
            The SQL used to check whether a connection is valid; it must be a query statement.
            If validationQuery is null, testOnBorrow, testOnReturn, and testWhileIdle will not work.
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.validationQueryTimeout</name>
        <value>3</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Timeout, in seconds, for checking whether a connection is valid
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.testWhileIdle</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            When applying for a connection, if the connection has been idle longer than
            timeBetweenEvictionRunsMillis, run validationQuery to check whether it is still valid
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.testOnBorrow</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Run validationQuery to check whether the connection is valid when borrowing a connection
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.testOnReturn</name>
        <value>false</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Run validationQuery to check whether the connection is valid when a connection is returned
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.defaultAutoCommit</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.keepAlive</name>
        <value>false</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
        </description>
|
||||||
<on-ambari-upgrade add="true"/> |
|
||||||
</property> |
|
||||||
|
|
||||||
<property> |
|
||||||
<name>spring.datasource.poolPreparedStatements</name> |
|
||||||
<value>true</value> |
|
||||||
<value-attributes> |
|
||||||
<type>boolean</type> |
|
||||||
</value-attributes> |
|
||||||
<description> |
|
||||||
Open PSCache, specify count PSCache for every connection |
|
||||||
</description> |
|
||||||
<on-ambari-upgrade add="true"/> |
|
||||||
</property> |
|
||||||
<property> |
|
||||||
<name>spring.datasource.maxPoolPreparedStatementPerConnectionSize</name> |
|
||||||
<value>20</value> |
|
||||||
<value-attributes> |
|
||||||
<type>int</type> |
|
||||||
</value-attributes> |
|
||||||
<description></description> |
|
||||||
<on-ambari-upgrade add="true"/> |
|
||||||
</property> |
|
||||||
<property> |
|
||||||
<name>spring.datasource.spring.datasource.filters</name> |
|
||||||
<value>stat,wall,log4j</value> |
|
||||||
<description></description> |
|
||||||
<on-ambari-upgrade add="true"/> |
|
||||||
</property> |
|
||||||
<property> |
|
||||||
<name>spring.datasource.connectionProperties</name> |
|
||||||
<value>druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000</value> |
|
||||||
<description></description> |
|
||||||
<on-ambari-upgrade add="true"/> |
|
||||||
</property> |
|
||||||
|
|
||||||
    <property>
        <name>mybatis-plus.mapper-locations</name>
        <value>classpath*:/org.apache.dolphinscheduler.dao.mapper/*.xml</value>
        <description>
            Locations of the MyBatis mapper XML files
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.typeEnumsPackage</name>
        <value>org.apache.dolphinscheduler.*.enums</value>
        <description>
            Package to scan for enum types
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.typeAliasesPackage</name>
        <value>org.apache.dolphinscheduler.dao.entity</value>
        <description>
            Entity scan; multiple packages are separated by a comma or semicolon
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.global-config.db-config.id-type</name>
        <value>AUTO</value>
        <value-attributes>
            <type>value-list</type>
            <entries>
                <entry>
                    <value>AUTO</value>
                    <label>AUTO</label>
                </entry>
                <entry>
                    <value>INPUT</value>
                    <label>INPUT</label>
                </entry>
                <entry>
                    <value>ID_WORKER</value>
                    <label>ID_WORKER</label>
                </entry>
                <entry>
                    <value>UUID</value>
                    <label>UUID</label>
                </entry>
            </entries>
            <selection-cardinality>1</selection-cardinality>
        </value-attributes>
        <description>
            Primary key type. AUTO: database auto-increment ID;
            INPUT: user-supplied ID;
            ID_WORKER: globally unique numeric ID;
            UUID: globally unique UUID
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.global-config.db-config.field-strategy</name>
        <value>NOT_NULL</value>
        <value-attributes>
            <type>value-list</type>
            <entries>
                <entry>
                    <value>IGNORED</value>
                    <label>IGNORED</label>
                </entry>
                <entry>
                    <value>NOT_NULL</value>
                    <label>NOT_NULL</label>
                </entry>
                <entry>
                    <value>NOT_EMPTY</value>
                    <label>NOT_EMPTY</label>
                </entry>
            </entries>
            <selection-cardinality>1</selection-cardinality>
        </value-attributes>
        <description>
            Field strategy. IGNORED: no check;
            NOT_NULL: include only non-null fields;
            NOT_EMPTY: include only non-null, non-empty fields
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.global-config.db-config.column-underline</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Map underscored column names to camelCase entity fields
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.global-config.db-config.logic-delete-value</name>
        <value>1</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Value that marks a row as logically deleted
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.global-config.db-config.logic-not-delete-value</name>
        <value>0</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Value that marks a row as not logically deleted
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.global-config.db-config.banner</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Print the MyBatis-Plus banner on startup
        </description>
        <on-ambari-upgrade add="true"/>
    </property>

    <property>
        <name>mybatis-plus.configuration.map-underscore-to-camel-case</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Map underscored column names to camelCase properties in result sets
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.configuration.cache-enabled</name>
        <value>false</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Globally enable or disable the MyBatis second-level cache
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.configuration.call-setters-on-nulls</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Call setters even when the column value is null
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>mybatis-plus.configuration.jdbc-type-for-null</name>
        <value>null</value>
        <description>
            JDBC type used for null parameters
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.exec.threads</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Master execute thread number</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.exec.task.num</name>
        <value>20</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Number of tasks a master executes in parallel</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.heartbeat.interval</name>
        <value>10</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Master heartbeat interval, in seconds</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.task.commit.retryTimes</name>
        <value>5</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Number of times the master retries committing a task</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.task.commit.interval</name>
        <value>1000</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Master task commit interval, in milliseconds</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.max.cpuload.avg</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>The master can work only when the CPU load average is below this value. Default: number of CPU cores * 2</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.reserved.memory</name>
        <value>0.1</value>
        <value-attributes>
            <type>float</type>
        </value-attributes>
        <description>The master can work only when the available memory is larger than this reserve. Default: physical memory * 1/10, in GB</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.exec.threads</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Worker execute thread number</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.heartbeat.interval</name>
        <value>10</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Worker heartbeat interval, in seconds</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.fetch.task.num</name>
        <value>3</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Number of tasks the worker fetches from the queue at a time</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.max.cpuload.avg</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>The worker can work only when the CPU load average is below this value. Default: number of CPU cores * 2</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.reserved.memory</name>
        <value>0.1</value>
        <value-attributes>
            <type>float</type>
        </value-attributes>
        <description>The worker can work only when the available memory is larger than this reserve. Default: physical memory * 1/10, in GB</description>
        <on-ambari-upgrade add="true"/>
    </property>

</configuration>
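For reference, Ambari renders each `<property>` above into a plain `key=value` line (the `.j2` templates later in this patch do the rendering). An illustrative fragment of the resulting properties file, using keys and defaults taken from the entries above:

```properties
# Illustrative rendered output; keys and values come from the <property> entries above
spring.datasource.timeBetweenEvictionRunsMillis=60000
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1
spring.datasource.testWhileIdle=true
master.exec.threads=100
worker.exec.threads=100
```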
@@ -0,0 +1,206 @@
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements. See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License. You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<configuration>
    <property>
        <name>spring.datasource.initialSize</name>
        <value>5</value>
        <description>
            Initial connection number
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.minIdle</name>
        <value>5</value>
        <description>
            Minimum connection number
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.maxActive</name>
        <value>50</value>
        <description>
            Maximum connection number
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.maxWait</name>
        <value>60000</value>
        <description>
            Maximum wait time, in milliseconds, to get a connection.
            Once maxWait is configured, a fair lock is enabled by default, which reduces concurrency;
            if necessary, an unfair lock can be used by setting the useUnfairLock attribute to true.
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.timeBetweenEvictionRunsMillis</name>
        <value>60000</value>
        <description>
            Interval, in milliseconds, between checks that close idle connections
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.timeBetweenConnectErrorMillis</name>
        <value>60000</value>
        <description>
            Interval, in milliseconds, at which the destroy thread checks connections and closes the
            physical connection if the connection has been idle for at least minEvictableIdleTimeMillis
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.minEvictableIdleTimeMillis</name>
        <value>300000</value>
        <description>
            The longest time, in milliseconds, a connection may remain idle before it is evicted
        </description>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.validationQuery</name>
        <value>SELECT 1</value>
        <description>
            The SQL used to check whether a connection is valid; it must be a query statement.
            If validationQuery is null, testOnBorrow, testOnReturn, and testWhileIdle will not work.
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.validationQueryTimeout</name>
        <value>3</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Timeout, in seconds, for the connection validity check
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.testWhileIdle</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            When borrowing a connection, if it has been idle longer than
            timeBetweenEvictionRunsMillis, run validationQuery to check whether it is still valid
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.testOnBorrow</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Run validationQuery to check whether a connection is valid when it is borrowed from the pool
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.testOnReturn</name>
        <value>false</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Run validationQuery to check whether a connection is valid when it is returned to the pool
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.defaultAutoCommit</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Default auto-commit state of connections created by the pool
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.keepAlive</name>
        <value>false</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Whether to keep idle connections alive by periodically validating them
        </description>
        <on-ambari-upgrade add="true"/>
    </property>

    <property>
        <name>spring.datasource.poolPreparedStatements</name>
        <value>true</value>
        <value-attributes>
            <type>boolean</type>
        </value-attributes>
        <description>
            Open PSCache (PreparedStatement caching) for every connection
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.maxPoolPreparedStatementPerConnectionSize</name>
        <value>20</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Maximum number of cached prepared statements per connection
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.filters</name>
        <value>stat,wall,log4j</value>
        <description>
            Druid filters to enable
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>spring.datasource.connectionProperties</name>
        <value>druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000</value>
        <description>
            Connection properties passed to the Druid driver
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
</configuration>
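The three `test*` flags above interact with the eviction interval. A minimal sketch of the borrow-time decision they configure (the function and parameter names are illustrative, not Druid's internal API):

```python
def should_validate_on_borrow(test_on_borrow: bool,
                              test_while_idle: bool,
                              idle_millis: int,
                              time_between_eviction_runs_millis: int) -> bool:
    """Return True if the pool should run validationQuery before handing
    out a connection, per the property descriptions above."""
    if test_on_borrow:
        # testOnBorrow validates every borrowed connection
        return True
    if test_while_idle and idle_millis >= time_between_eviction_runs_millis:
        # testWhileIdle validates only connections idle past the check interval
        return True
    return False
```

With the defaults above (`testOnBorrow=true`) every borrow is validated; turning it off trades safety for throughput and leaves validation to `testWhileIdle`.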
@@ -0,0 +1,88 @@
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements. See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License. You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<configuration>
    <property>
        <name>master.exec.threads</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Master execute thread number</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.exec.task.num</name>
        <value>20</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Number of tasks a master executes in parallel</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.heartbeat.interval</name>
        <value>10</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Master heartbeat interval, in seconds</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.task.commit.retryTimes</name>
        <value>5</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Number of times the master retries committing a task</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.task.commit.interval</name>
        <value>1000</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Master task commit interval, in milliseconds</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.max.cpuload.avg</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>The master can work only when the CPU load average is below this value. Default: number of CPU cores * 2</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>master.reserved.memory</name>
        <value>0.3</value>
        <description>The master can work only when the available memory is larger than this reserve. Default: physical memory * 1/10, in GB</description>
        <on-ambari-upgrade add="true"/>
    </property>

    <property>
        <name>master.listen.port</name>
        <value>5678</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Master listen port</description>
        <on-ambari-upgrade add="true"/>
    </property>
</configuration>
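The two overload thresholds above gate whether the server takes new work. A hedged sketch of that guard (names are illustrative; the real check lives in the master's resource-monitoring code):

```python
def can_accept_work(load_avg: float, available_memory_gb: float,
                    max_cpuload_avg: float = 100.0,
                    reserved_memory_gb: float = 0.3) -> bool:
    """Overload guard implied by master.max.cpuload.avg and master.reserved.memory:
    accept new work only while the load average is below the threshold and the
    available memory stays above the configured reserve (in GB)."""
    return load_avg < max_cpuload_avg and available_memory_gb > reserved_memory_gb
```

For example, a host at load average 3.5 with 8 GB free passes, while a host with only 0.1 GB free is rejected regardless of load.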
@@ -0,0 +1,67 @@
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements. See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License. You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<configuration>
    <property>
        <name>worker.exec.threads</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Worker execute thread number</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.heartbeat.interval</name>
        <value>10</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Worker heartbeat interval, in seconds</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.max.cpuload.avg</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>The worker can work only when the CPU load average is below this value. Default: number of CPU cores * 2</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.reserved.memory</name>
        <value>0.3</value>
        <description>The worker can work only when the available memory is larger than this reserve. Default: physical memory * 1/10, in GB</description>
        <on-ambari-upgrade add="true"/>
    </property>

    <property>
        <name>worker.listen.port</name>
        <value>1234</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>Worker listen port</description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>worker.groups</name>
        <value>default</value>
        <description>Default worker group</description>
        <on-ambari-upgrade add="true"/>
    </property>
</configuration>
@@ -0,0 +1,76 @@
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements. See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License. You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<configuration>
    <property>
        <name>zookeeper.dolphinscheduler.root</name>
        <value>/dolphinscheduler</value>
        <description>
            DolphinScheduler root directory (znode) in ZooKeeper
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>300</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            ZooKeeper session timeout
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>zookeeper.connection.timeout</name>
        <value>300</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            ZooKeeper connection timeout
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>zookeeper.retry.base.sleep</name>
        <value>100</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Base sleep time between connection retries, in milliseconds
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>zookeeper.retry.max.sleep</name>
        <value>30000</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Maximum sleep time between connection retries, in milliseconds
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
    <property>
        <name>zookeeper.retry.maxtime</name>
        <value>5</value>
        <value-attributes>
            <type>int</type>
        </value-attributes>
        <description>
            Maximum number of connection retries
        </description>
        <on-ambari-upgrade add="true"/>
    </property>
</configuration>
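Assuming these retry settings feed a Curator-style exponential backoff (base sleep, max sleep, max retries), the randomized sleep before each retry is bounded roughly as sketched below; the helper is illustrative, not DolphinScheduler's actual code:

```python
def max_backoff_millis(retry_count: int,
                       base_sleep_millis: int = 100,
                       max_sleep_millis: int = 30000) -> int:
    """Upper bound of the sleep before retry number `retry_count` (0-based):
    base * 2^(retry_count + 1), capped at the configured maximum sleep."""
    return min(base_sleep_millis * (1 << (retry_count + 1)), max_sleep_millis)
```

So with `zookeeper.retry.base.sleep=100` and `zookeeper.retry.max.sleep=30000`, early retries back off within a few hundred milliseconds while later ones are capped at 30 seconds.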
@@ -0,0 +1,20 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

{% for key, value in dolphin_datasource_map.iteritems() -%}
{{key}}={{value}}
{% endfor %}
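The template above iterates `dolphin_datasource_map` (built by the plugin's params script; `iteritems()` implies the Python 2 runtime Ambari's Jinja uses) and emits one `key=value` line per entry. A minimal Python 3 sketch of the same rendering, with an assumed sample map:

```python
# Sample map; in Ambari this dict is assembled by the plugin's params script
dolphin_datasource_map = {
    "spring.datasource.initialSize": "5",
    "spring.datasource.maxActive": "50",
}

def render_properties(config_map: dict) -> str:
    """Emit one key=value line per map entry, like the Jinja loop above."""
    return "".join(f"{key}={value}\n" for key, value in config_map.items())
```

Calling `render_properties(dolphin_datasource_map)` yields the properties-file text that gets written to the service's conf directory.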
@@ -0,0 +1,20 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

{% for key, value in dolphin_master_map.iteritems() -%}
{{key}}={{value}}
{% endfor %}
@@ -0,0 +1,20 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

{% for key, value in dolphin_worker_map.iteritems() -%}
{{key}}={{value}}
{% endfor %}
@@ -0,0 +1,20 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

{% for key, value in dolphin_zookeeper_map.iteritems() -%}
{{key}}={{value}}
{% endfor %}
@@ -0,0 +1,23 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.common.enums;

public enum Event {
    ACK,
    RESULT;
}
@@ -0,0 +1,67 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.dao.upgrade;

import org.apache.dolphinscheduler.common.utils.ConnectionUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.Map;

/**
 * resource dao
 */
public class ResourceDao {
    public static final Logger logger = LoggerFactory.getLogger(ResourceDao.class);

    /**
     * list all resources
     * @param conn connection
     * @return map whose key is full_name and whose value is id
     */
    Map<String, Integer> listAllResources(Connection conn) {
        Map<String, Integer> resourceMap = new HashMap<>();

        String sql = "SELECT id,full_name FROM t_ds_resources";
        ResultSet rs = null;
        PreparedStatement pstmt = null;
        try {
            pstmt = conn.prepareStatement(sql);
            rs = pstmt.executeQuery();

            while (rs.next()) {
                Integer id = rs.getInt(1);
                String fullName = rs.getString(2);
                resourceMap.put(fullName, id);
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
            throw new RuntimeException("sql: " + sql, e);
        } finally {
            ConnectionUtils.releaseResource(rs, pstmt, conn);
        }

        return resourceMap;
    }
}
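ResourceDao closes its `ResultSet`, `PreparedStatement`, and `Connection` by hand in a `finally` block through `ConnectionUtils.releaseResource`. Since all three JDBC types implement `AutoCloseable`, the same cleanup guarantee can be sketched with try-with-resources, which closes resources in reverse declaration order even when the query throws. The `Res` stand-in type below substitutes for the JDBC objects so the sketch runs without a database; it is illustrative, not project code.

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    // Records the order in which resources are closed.
    static final List<String> closed = new ArrayList<>();

    // Stand-in for Connection / PreparedStatement / ResultSet.
    static class Res implements AutoCloseable {
        final String name;
        Res(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Res conn = new Res("conn");
             Res stmt = new Res("stmt");
             Res rs   = new Res("rs")) {
            // ... execute the query and build the full_name -> id map here ...
        }
        System.out.println(closed); // closed in reverse order: [rs, stmt, conn]
    }
}
```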
@@ -0,0 +1,73 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.remote.command;

import org.apache.dolphinscheduler.common.utils.JSONUtils;

import java.io.Serializable;

/**
 * db task ack request command
 */
public class DBTaskAckCommand implements Serializable {

    private int taskInstanceId;
    private int status;

    public DBTaskAckCommand(int status, int taskInstanceId) {
        this.status = status;
        this.taskInstanceId = taskInstanceId;
    }

    public int getTaskInstanceId() {
        return taskInstanceId;
    }

    public void setTaskInstanceId(int taskInstanceId) {
        this.taskInstanceId = taskInstanceId;
    }

    public int getStatus() {
        return status;
    }

    public void setStatus(int status) {
        this.status = status;
    }

    /**
     * package the ack as a command
     * @return command
     */
    public Command convert2Command() {
        Command command = new Command();
        command.setType(CommandType.DB_TASK_ACK);
        byte[] body = JSONUtils.toJsonByteArray(this);
        command.setBody(body);
        return command;
    }

    @Override
    public String toString() {
        return "DBTaskAckCommand{" +
                "taskInstanceId=" + taskInstanceId +
                ", status=" + status +
                '}';
    }
}
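`convert2Command` packs the two int fields into a JSON byte[] body via `JSONUtils.toJsonByteArray`, and the receiving processor parses it back with `JSONUtils.parseObject`. A minimal self-contained sketch of that round trip follows; `toJson` and `field` are hand-rolled stand-ins for the real JSONUtils calls (not the project's API), kept deliberately simple for flat integer-only payloads.

```java
import java.nio.charset.StandardCharsets;

public class AckRoundTrip {
    // Stand-in for JSONUtils.toJsonByteArray: serialize the two ack fields.
    static byte[] toJson(int status, int taskInstanceId) {
        String json = String.format(
                "{\"taskInstanceId\":%d,\"status\":%d}", taskInstanceId, status);
        return json.getBytes(StandardCharsets.UTF_8);
    }

    // Stand-in for JSONUtils.parseObject: extract one integer field by name.
    static int field(byte[] body, String name) {
        String json = new String(body, StandardCharsets.UTF_8);
        int start = json.indexOf("\"" + name + "\":") + name.length() + 3;
        int end = start;
        while (end < json.length() && Character.isDigit(json.charAt(end))) {
            end++;
        }
        return Integer.parseInt(json.substring(start, end));
    }

    public static void main(String[] args) {
        byte[] body = toJson(7, 42); // hypothetical status code 7, task instance 42
        System.out.println(field(body, "taskInstanceId")); // 42
        System.out.println(field(body, "status"));         // 7
    }
}
```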
@@ -0,0 +1,71 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.remote.command;

import org.apache.dolphinscheduler.common.utils.JSONUtils;

import java.io.Serializable;

/**
 * db task final result response command
 */
public class DBTaskResponseCommand implements Serializable {

    private int taskInstanceId;
    private int status;

    public DBTaskResponseCommand(int status, int taskInstanceId) {
        this.status = status;
        this.taskInstanceId = taskInstanceId;
    }

    public int getStatus() {
        return status;
    }

    public void setStatus(int status) {
        this.status = status;
    }

    public int getTaskInstanceId() {
        return taskInstanceId;
    }

    public void setTaskInstanceId(int taskInstanceId) {
        this.taskInstanceId = taskInstanceId;
    }

    /**
     * package response command
     * @return command
     */
    public Command convert2Command() {
        Command command = new Command();
        command.setType(CommandType.DB_TASK_RESPONSE);
        byte[] body = JSONUtils.toJsonByteArray(this);
        command.setBody(body);
        return command;
    }

    @Override
    public String toString() {
        return "DBTaskResponseCommand{" +
                "taskInstanceId=" + taskInstanceId +
                ", status=" + status +
                '}';
    }
}
@@ -0,0 +1,39 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.log;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.Marker;

import static ch.qos.logback.classic.ClassicConstants.FINALIZE_SESSION_MARKER;

/**
 * Task log appender
 */
public class TaskLogAppender extends FileAppender<ILoggingEvent> {
    @Override
    protected void append(ILoggingEvent event) {
        Marker marker = event.getMarker();
        if (marker != null && marker.equals(FINALIZE_SESSION_MARKER)) {
            stop();
        }
        super.subAppend(event);
    }
}
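The appender stops itself when an event carries logback's `FINALIZE_SESSION_MARKER`, yet still writes that final event through `super.subAppend`. The self-contained sketch below mimics that contract with plain strings in place of logback's `Marker` and `ILoggingEvent` types; the class and field names are illustrative, not the project's.

```java
import java.util.ArrayList;
import java.util.List;

public class MarkerStopAppender {
    // Stand-in sentinel for logback's FINALIZE_SESSION_MARKER.
    static final String FINALIZE_MARKER = "FINALIZE_SESSION";

    private final List<String> lines = new ArrayList<>();
    private boolean started = true;

    public void append(String marker, String message) {
        if (FINALIZE_MARKER.equals(marker)) {
            started = false;   // mirrors stop() in the real appender
        }
        lines.add(message);    // the finalizing event itself is still written
    }

    public boolean isStarted() {
        return started;
    }

    public List<String> lines() {
        return lines;
    }
}
```

This matches the order in the real `append`: the stop happens first, then the event is appended, so the last line of a task's log is never lost.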
@@ -0,0 +1,94 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.worker.cache;

import org.apache.dolphinscheduler.common.enums.Event;
import org.apache.dolphinscheduler.remote.command.Command;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Response cache: caches the result commands the worker sends to the master
 */
public class ResponceCache {

    private static final ResponceCache instance = new ResponceCache();

    private ResponceCache() {}

    public static ResponceCache get() {
        return instance;
    }

    private Map<Integer, Command> ackCache = new ConcurrentHashMap<>();
    private Map<Integer, Command> responseCache = new ConcurrentHashMap<>();

    /**
     * cache a command until the master confirms it
     * @param taskInstanceId taskInstanceId
     * @param command command
     * @param event event type, ACK or RESULT
     */
    public void cache(Integer taskInstanceId, Command command, Event event) {
        switch (event) {
            case ACK:
                ackCache.put(taskInstanceId, command);
                break;
            case RESULT:
                responseCache.put(taskInstanceId, command);
                break;
            default:
                throw new IllegalArgumentException("invalid event type : " + event);
        }
    }

    /**
     * remove an entry from the ack cache
     * @param taskInstanceId taskInstanceId
     */
    public void removeAckCache(Integer taskInstanceId) {
        ackCache.remove(taskInstanceId);
    }

    /**
     * remove an entry from the response cache
     * @param taskInstanceId taskInstanceId
     */
    public void removeResponseCache(Integer taskInstanceId) {
        responseCache.remove(taskInstanceId);
    }

    /**
     * get the ack cache
     * @return ack cache
     */
    public Map<Integer, Command> getAckCache() {
        return ackCache;
    }

    /**
     * get the response cache
     * @return response cache
     */
    public Map<Integer, Command> getResponseCache() {
        return responseCache;
    }
}
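Together with the two processors that follow, this cache supports an at-least-once delivery scheme: the worker caches each ack/result command before sending, and the entry is evicted only once the master confirms receipt, so anything still cached is a candidate for re-send. A minimal stand-in of that pattern (String payloads instead of `Command` objects; class and method names here are illustrative, not the project's):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RetryCache {
    private final Map<Integer, String> pending = new ConcurrentHashMap<>();

    public void send(int taskInstanceId, String command) {
        pending.put(taskInstanceId, command); // cache BEFORE transmitting
        // ... transmit command to the master here ...
    }

    public void onAck(int taskInstanceId) {
        pending.remove(taskInstanceId);       // master confirmed: stop retrying
    }

    // Everything still here was never confirmed and should be re-sent.
    public Map<Integer, String> unconfirmed() {
        return pending;
    }
}
```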
@@ -0,0 +1,56 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.worker.processor;

import io.netty.channel.Channel;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.Preconditions;
import org.apache.dolphinscheduler.remote.command.*;
import org.apache.dolphinscheduler.remote.processor.NettyRequestProcessor;
import org.apache.dolphinscheduler.server.worker.cache.ResponceCache;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * db task ack processor
 */
public class DBTaskAckProcessor implements NettyRequestProcessor {

    private final Logger logger = LoggerFactory.getLogger(DBTaskAckProcessor.class);

    @Override
    public void process(Channel channel, Command command) {
        Preconditions.checkArgument(CommandType.DB_TASK_ACK == command.getType(),
                String.format("invalid command type : %s", command.getType()));

        DBTaskAckCommand taskAckCommand = JSONUtils.parseObject(
                command.getBody(), DBTaskAckCommand.class);

        if (taskAckCommand == null) {
            return;
        }

        if (taskAckCommand.getStatus() == ExecutionStatus.SUCCESS.getCode()) {
            ResponceCache.get().removeAckCache(taskAckCommand.getTaskInstanceId());
        }
    }
}
@@ -0,0 +1,58 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.worker.processor;

import io.netty.channel.Channel;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.Preconditions;
import org.apache.dolphinscheduler.remote.command.Command;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.command.DBTaskResponseCommand;
import org.apache.dolphinscheduler.remote.processor.NettyRequestProcessor;
import org.apache.dolphinscheduler.server.worker.cache.ResponceCache;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * db task response processor
 */
public class DBTaskResponseProcessor implements NettyRequestProcessor {

    private final Logger logger = LoggerFactory.getLogger(DBTaskResponseProcessor.class);

    @Override
    public void process(Channel channel, Command command) {
        Preconditions.checkArgument(CommandType.DB_TASK_RESPONSE == command.getType(),
                String.format("invalid command type : %s", command.getType()));

        DBTaskResponseCommand taskResponseCommand = JSONUtils.parseObject(
                command.getBody(), DBTaskResponseCommand.class);

        if (taskResponseCommand == null) {
            return;
        }

        if (taskResponseCommand.getStatus() == ExecutionStatus.SUCCESS.getCode()) {
            ResponceCache.get().removeResponseCache(taskResponseCommand.getTaskInstanceId());
        }
    }
}