
[Feature][JsonSplit-api] merge code from dev to json2 (#6023)

* [BUG-#5678][Registry] fix registry init node miss (#5686)

* [Improvement][UI] Update the update time after the user information is successfully modified (#5684)

* improve

editing the user info succeeds, but the update time shown is not the latest.

* Improved shell task execution result log information, adding process.waitFor() and process.exitValue() information to the original log (#5691)

Co-authored-by: shenglm <shenglm840722@126.com>

* [Feature-#5565][Master Worker-Server] Global Param passed by sense dependencies (#5603)

* add globalParams new plan with varPool

* add unit test

* add python task varPoolParams


Co-authored-by: wangxj <wangxj31>

* Issue robot translation judgment changed to Chinese (#5694)



Co-authored-by: chenxingchun <438044805@qq.com>

* the update function should use post instead of get (#5703)

* enhance form verify (#5696)

* checkState only supports %s not {} (#5711)

* [Fix-5701] When deleting a user, the accessToken associated with the user should also be deleted (#5697)

* update

* fix the codestyle error

* fix the compile error

* support rollback

* [Fix-5699][UI] Fix update user error in user information (#5700)

* [Improvement] the automatically generated SPI service name in alert-plugin is wrong (#5676)

* bug fix

the auto-generated SPI service can't be recognized



* include a new method

* [Improvement-5622][project management] Modify the title (#5723)

* [Fix-5714] When updating the existing alarm instance, the creation time shouldn't be updated (#5715)



* add a new init method.

* [Fix#5758] There are some problems in the API documentation that need to be improved (#5759)

* add the necessary parameters

* openapi improve

* fix code style error

* [FIX-#5721][master-server] Global params parameter missing (#5757)



Co-authored-by: wangxj <wangxj31>

* [Fix-5738][UI] The cancel button in the pop-up dialog of `batch copy` and `batch move` doesn't work. (#5739)

* Update relatedItems.vue

* Update relatedItems.vue

* [Improvement#5741][Worker] Improve task process status log (#5776)

* [Improvement-5773][server] need to support two parameters related to task (#5774)

* add some new parameter for task

* restore official properties

* improve imports

* modify a variable's name

Co-authored-by: jiang hua <jiang.hua@zhaopin.com.cn>

* [FIX-5786][Improvement][Server] When the Worker shuts down, the MasterServer cannot handle the Remove event correctly and throws NPE

* [Improvement][Worker] Task log may be lost #5775 (#5783)

* [Improvement #5725][CheckStyle] upgrade checkstyle file (#5789)

* [Improvement #5725][CheckStyle] upgrade checkstyle file
  Upgrade checkstyle.xml to support checkstyle version 8.24+

* change ci checkstyle version

* [Fix-5795][Improvement][Server] The starttime field in the HttpTask log is not displayed as expected. (#5796)

* improve timestamp format

make the starttime in the HttpTask log easier to read.


* fix bad code smell and update the note.

* [Improvement #5621][job instance] start-time and end-time (#5621) (#5797)

·the list of workflow instances is sorted by start time and end time
·This closes #5621

* fix (#5803)

Co-authored-by: shuangbofu <fusb@tuya.com>

* fix: Remove duplicate "registryClient.close" method calls (#5805)

Co-authored-by: wen-hemin <wenhemin@apache.com>

* [Improvement][SPI] support loading a single plugin (#5794)

change load operation of 'registry.plugin.dir'

* [Improvement][Api Module] refactor registry client, remove spring annotation (#5814)

* fix: refactor registry client, remove spring annotation

* fix UT

* fix UT

* fix checkstyle

* fix UT

* fix UT

* fix UT

* fix: Rename RegistryCenterUtils method name

Co-authored-by: wen-hemin <wenhemin@apache.com>

* [Fix-5699][UI] Fix update user error in user information introduced by #5700 (#5735)

* [Fix-5726] When we used the UI page, we found some problems, such as parameter validation, and parameter updates that show success but don't actually work (#5727)

* enhance the validation in UI

* enhance form verification

* simplify disable condition

* fix: Remove unused class (#5833)

Co-authored-by: wen-hemin <wenhemin@apache.com>

* [fix-5737] [Bug][Datasource] datasource other param check error (#5835)

Co-authored-by: wanggang <wanggy01@servyou.com.cn>

* [Fix-5719][K8s] Fix Ingress tls: got map expected array when TLS is enabled on Kubernetes

* [Fix-5825][BUG][WEB] the resource tree in the process definition of latest dev branch can't display correctly (#5826)

* resources-shows-error

* fix codestyle error

* add license header for new js

* fix codesmell

* [Improvement-5852][server] Support the two task-related parameters for the remaining task types. (#5867)

* provide two system parameters to support the remaining task types

* provide two system parameters to support the remaining task types

* improve test conversion

* [Improvement][Fix-5769][UI] When we try to delete an existing DAG, the console in the web browser shows an exception (#5770)

* fix bug

* cache the this variable

* Avoid self name

* fix code style compile error

* [Fix-5781][UT] Fix test coverage in sonar (#5817)

* build(UT): make jacoco run in offline instrumentation

issue: #5781

* build(UT): remove the jacoco agent dependency in microbench

issue: #5781

* [Fix-5808][Server] When we try to transfer data using DataX between different types of data sources, the worker will exit with ClassCastException (#5809)

* bug fix

* fix bug

* simplify the code format

* add a new parameter to make it easier to understand.

* [Fix-5830][Improvement][UI] Improve the selection style in dag edit dialog (#5829)

* improve the selection style

* update another file

* remove unnecessary css part.

* [Fix-5904][upgrade] fix dev branch upgrade MySQL SQL script error (#5821)

* fix dev branch upgrade MySQL SQL script error.

* Update naming convention.

* [Improvement][Api Module] refactor DataSourceParam and DependentParam, remove spring annotation (#5832)

* fix: refactor api utils class, remove spring annotation.

* fix: Optimization comments

Co-authored-by: wen-hemin <wenhemin@apache.com>

* correct the wrong annotation: the queue is implemented as a Java priority blocking queue, not a ZK queue (#5906)

Co-authored-by: ywang46 <ywang46@paypal.com>

* Add a Gitter chat badge to README.md (#5883)

* Add Gitter badge

* Update README.md

Co-authored-by: David <dailidong66@gmail.com>

* ci: improve maven connection in CI builds (#5924)

issue: #5921

* [Improvement][Master] fix typo (#5934)

·fix typo in MasterBaseTaskExecThread

* [Fix-5886][server] Enhanced scheduler delete check (#5936)

* Add: Name verification removes leading and trailing spaces.

* Update: wrong word: 'WAITTING' -> 'WAITING'

* Add: Strengthen verification

Co-authored-by: Squid <2824638304@qq.com>

* [Improvement-5880][api] Optimized data structure of pagination query API results (#5895)

* [5880][refactor]Optimized data structure of pagination query API results
- refactor PageInfo and delete returnDataListPaging in API
- modify the related Controller and Service and the corresponding Test

* Merge branch 'dev' of github.com:apache/dolphinscheduler into dev

 Conflicts:
	dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessDefinitionServiceImpl.java

Co-authored-by: 蔡泽华 <sorea1k@163.com>

* [IMPROVEMENT] fix MySQL comment error (#5959)

* [Improvement][Api] fix typo (#5960)

* [Improvement #5621][job instance] start-time and end-time (#5621)
·the list of workflow instances is sorted by start time and end time
·This closes #5621

* [FIX-5975] fix queryLastRunningProcess SQL in ProcessInstanceMapper.xml (#5980)

* [NEW FEATURE][FIX-4385] compensation task add the ability to configure parallelism (#5912)

* update

* web improved

* improve the ui

* add the ability to configure the parallelism

* update variables

* enhance the ut and add necessary note

* fix code style

* fix code style issue

* ensure the complement task in parallel mode runs the right number of tasks (a standalone sketch of this chunking appears after the ExecutorServiceImpl diff below).

* [Improvement][dao] When I search for the keyword description, the web UI shows empty (#5952)

* [Bug][WorkerServer] SqlTask NullPointerException #5549

* [Improvement][dao] When I search for the keyword Modify User, the web UI shows empty #5428

* [Improvement][dao] When I search for the keyword Modify User, the web UI shows empty #5428

* [Improvement][dao] When I search for the keyword Modify User, the web UI shows empty #5428

* [Improvement][dao] When I search for the keyword Modify User, the web UI shows empty #5428

* [Improvement][dao] When I search for the keyword Modify User, the web UI shows empty #5428

* [Improvement][dao] When I search for the keyword Modify User, the web UI shows empty #5428

* [Improvement][dao] When I search for the keyword description, the web UI shows empty #5428

* fix the README typo (#5998)

* Fix unchecked type conversions

* Use indentation level reported by checkstyle

* Reorganize CI workflows to reduce wasted time and resources (#6011)

* Add standalone server module to make it easier to develop (#6022)

* fix ut

Co-authored-by: Kirs <acm_master@163.com>
Co-authored-by: kyoty <echohlne@gmail.com>
Co-authored-by: ji04xiaogang <ji04xiaogang@163.com>
Co-authored-by: shenglm <shenglm840722@126.com>
Co-authored-by: wangxj3 <857234426@qq.com>
Co-authored-by: xingchun-chen <55787491+xingchun-chen@users.noreply.github.com>
Co-authored-by: chenxingchun <438044805@qq.com>
Co-authored-by: Shiwen Cheng <chengshiwen0103@gmail.com>
Co-authored-by: Jianchao Wang <akingchao@qq.com>
Co-authored-by: Tanvi Moharir <74228962+tanvimoharir@users.noreply.github.com>
Co-authored-by: Hua Jiang <jianghuachinacom@163.com>
Co-authored-by: jiang hua <jiang.hua@zhaopin.com.cn>
Co-authored-by: Wenjun Ruan <861923274@qq.com>
Co-authored-by: Tandoy <56899730+Tandoy@users.noreply.github.com>
Co-authored-by: 傅双波 <786183073@qq.com>
Co-authored-by: shuangbofu <fusb@tuya.com>
Co-authored-by: wen-hemin <39549317+wen-hemin@users.noreply.github.com>
Co-authored-by: wen-hemin <wenhemin@apache.com>
Co-authored-by: geosmart <geosmart@hotmail.com>
Co-authored-by: wanggang <wanggy01@servyou.com.cn>
Co-authored-by: AzureCN <colorazure@163.com>
Co-authored-by: 深刻 <tsund@qq.com>
Co-authored-by: zhuangchong <37063904+zhuangchong@users.noreply.github.com>
Co-authored-by: Yao WANG <Yao.MR.CN@gmail.com>
Co-authored-by: ywang46 <ywang46@paypal.com>
Co-authored-by: The Gitter Badger <badger@gitter.im>
Co-authored-by: David <dailidong66@gmail.com>
Co-authored-by: Squidyu <1297554122@qq.com>
Co-authored-by: Squid <2824638304@qq.com>
Co-authored-by: soreak <60459867+soreak@users.noreply.github.com>
Co-authored-by: 蔡泽华 <sorea1k@163.com>
Co-authored-by: yimaixinchen <yimaixinchen@163.com>
Co-authored-by: atai-555 <74188560+atai-555@users.noreply.github.com>
Co-authored-by: didiaode18 <563646039@qq.com>
Co-authored-by: Roy <yongjuncao1213@gmail.com>
Co-authored-by: lyxell <alyxell@kth.se>
Co-authored-by: Wenjun Ruan <wenjun@apache.org>
Co-authored-by: kezhenxu94 <kezhenxu94@apache.org>
Co-authored-by: JinyLeeChina <297062848@qq.com>
52 changed files:

1. .github/actions/reviewdog-setup (1 line changed)
2. .github/actions/sanity-check/action.yml (53 lines changed)
3. .github/workflows/backend.yml (38 lines changed)
4. .github/workflows/e2e.yml (22 lines changed)
5. .github/workflows/frontend.yml (31 lines changed)
6. .github/workflows/unit-test.yml (100 lines changed)
7. .gitmodules (6 lines changed)
8. .licenserc.yaml (5 lines changed)
9. README.md (3 lines changed)
10. docker/build/hooks/build (4 lines changed)
11. docker/build/hooks/build.bat (4 lines changed)
12. dolphinscheduler-alert-plugin/dolphinscheduler-alert-email/pom.xml (11 lines changed)
13. dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/ExecutorController.java (7 lines changed)
14. dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/ExecutorService.java (3 lines changed)
15. dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ExecutorServiceImpl.java (37 lines changed)
16. dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TenantServiceImpl.java (4 lines changed)
17. dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/UsersServiceImpl.java (4 lines changed)
18. dolphinscheduler-api/src/main/resources/i18n/messages_en_US.properties (1 line changed)
19. dolphinscheduler-api/src/main/resources/i18n/messages_zh_CN.properties (1 line changed)
20. dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/ExecutorControllerTest.java (49 lines changed)
21. dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/ExecutorServiceTest.java (14 lines changed)
22. dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/DbType.java (66 lines changed)
23. dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/graph/DAG.java (4 lines changed)
24. dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java (18 lines changed)
25. dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/datasource/SpringConnectionFactory.java (1 line changed)
26. dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessDefinitionMapper.xml (4 lines changed)
27. dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessInstanceMapper.xml (4 lines changed)
28. dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProjectMapper.xml (4 lines changed)
29. dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/WorkFlowLineageMapper.xml (3 lines changed)
30. dolphinscheduler-dao/src/test/java/org/apache/dolphinscheduler/dao/utils/ResourceProcessDefinitionUtilsTest.java (2 lines changed)
31. dolphinscheduler-dist/pom.xml (5 lines changed)
32. dolphinscheduler-dist/release-docs/LICENSE (1 line changed)
33. dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/dispatch/executor/NettyExecutorManager.java (2 lines changed)
34. dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterBaseTaskExecThread.java (2 lines changed)
35. dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/sql/SqlTask.java (14 lines changed)
36. dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/MasterExecThreadTest.java (6 lines changed)
37. dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/http/HttpTaskTest.java (4 lines changed)
38. dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/quartz/ProcessScheduleJob.java (4 lines changed)
39. dolphinscheduler-standalone-server/pom.xml (52 lines changed)
40. dolphinscheduler-standalone-server/src/main/java/org/apache/dolphinscheduler/server/StandaloneServer.java (82 lines changed)
41. dolphinscheduler-standalone-server/src/main/resources/registry.properties (22 lines changed)
42. dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/start.vue (68 lines changed)
43. dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js (4 lines changed)
44. dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js (3 lines changed)
45. install.sh (103 lines changed)
46. pom.xml (18 lines changed)
47. script/dolphinscheduler-daemon.sh (4 lines changed)
48. sql/create/release-1.0.0_schema/mysql/dolphinscheduler_ddl.sql (4 lines changed)
49. sql/dolphinscheduler_h2.sql (943 lines changed)
50. style/checkstyle-suppressions.xml (24 lines changed)
51. style/checkstyle.xml (6 lines changed)
52. tools/dependencies/known-dependencies.txt (1 line changed)

.github/actions/reviewdog-setup (1 line changed)

@@ -0,0 +1 @@
+Subproject commit 2fc905b1875f2e6b91c4201a4dc6eaa21b86547e

.github/actions/sanity-check/action.yml (53 lines changed)

@@ -0,0 +1,53 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+name: "Sanity Check"
+description: |
+  Action to perform some very basic lightweight checks, like code styles, license headers, etc.,
+  and fail fast to avoid wasting resources running heavyweight checks, like unit tests, e2e tests.
+inputs:
+  token:
+    description: 'The GitHub API token'
+    required: false
+runs:
+  using: "composite"
+  steps:
+    - name: Check License Header
+      uses: apache/skywalking-eyes@a63f4afcc287dfb3727ecc45a4afc55a5e69c15f
+    - uses: ./.github/actions/reviewdog-setup
+      with:
+        reviewdog_version: v0.10.2
+    - shell: bash
+      run: ./mvnw -B -q checkstyle:checkstyle-aggregate
+    - shell: bash
+      env:
+        REVIEWDOG_GITHUB_API_TOKEN: ${{ inputs.token }}
+      run: |
+        if [[ -n "${{ inputs.token }}" ]]; then
+          reviewdog -f=checkstyle \
+            -reporter="github-pr-check" \
+            -filter-mode="added" \
+            -fail-on-error="true" < target/checkstyle-result.xml
+        fi

.github/workflows/ci_backend.yml → .github/workflows/backend.yml (38 lines changed)

@@ -19,8 +19,10 @@ name: Backend
 on:
   push:
+    branches:
+      - dev
     paths:
-      - '.github/workflows/ci_backend.yml'
+      - '.github/workflows/backend.yml'
       - 'package.xml'
       - 'pom.xml'
       - 'dolphinscheduler-alert/**'
@@ -31,7 +33,7 @@ on:
       - 'dolphinscheduler-server/**'
   pull_request:
     paths:
-      - '.github/workflows/ci_backend.yml'
+      - '.github/workflows/backend.yml'
       - 'package.xml'
      - 'pom.xml'
       - 'dolphinscheduler-alert/**'
@@ -41,20 +43,34 @@ on:
       - 'dolphinscheduler-rpc/**'
       - 'dolphinscheduler-server/**'
 
+concurrency:
+  group: backend-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
-  Compile-check:
+  build:
+    name: Build
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
         with:
-          submodule: true
-      - name: Check License Header
-        uses: apache/skywalking-eyes@ec88b7d850018c8983f87729ea88549e100c5c82
-      - name: Set up JDK 1.8
-        uses: actions/setup-java@v1
-        with:
-          java-version: 1.8
-      - name: Compile
-        run: mvn -B clean compile install -Prelease -Dmaven.test.skip=true
+          submodules: true
+      - name: Sanity Check
+        uses: ./.github/actions/sanity-check
+        with:
+          token: ${{ secrets.GITHUB_TOKEN }} # We only need to pass this token in one workflow
+      - uses: actions/cache@v2
+        with:
+          path: ~/.m2/repository
+          key: ${{ runner.os }}-maven
+      - name: Build and Package
+        run: |
+          ./mvnw -B clean install \
+                 -Prelease \
+                 -Dmaven.test.skip=true \
+                 -Dcheckstyle.skip=true \
+                 -Dhttp.keepAlive=false \
+                 -Dmaven.wagon.http.pool=false \
+                 -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
       - name: Check dependency license
         run: tools/dependencies/check-LICENSE.sh

.github/workflows/ci_e2e.yml → .github/workflows/e2e.yml (22 lines changed)

@@ -20,26 +20,26 @@ env:
   DOCKER_DIR: ./docker
   LOG_DIR: /tmp/dolphinscheduler
 
-name: e2e Test
-jobs:
-  build:
-    name: Test
+name: Test
+concurrency:
+  group: e2e-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+jobs:
+  test:
+    name: E2E
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
         with:
-          submodule: true
-      - name: Check License Header
-        uses: apache/skywalking-eyes@ec88b7d850018c8983f87729ea88549e100c5c82
+          submodules: true
+      - name: Sanity Check
+        uses: ./.github/actions/sanity-check
       - uses: actions/cache@v1
         with:
           path: ~/.m2/repository
-          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
-          restore-keys: |
-            ${{ runner.os }}-maven-
+          key: ${{ runner.os }}-maven
       - name: Build Image
         run: |
           sh ./docker/build/hooks/build

.github/workflows/ci_frontend.yml → .github/workflows/frontend.yml (31 lines changed)

@@ -19,31 +19,44 @@ name: Frontend
 on:
   push:
+    branches:
+      - dev
     paths:
-      - '.github/workflows/ci_frontend.yml'
+      - '.github/workflows/frontend.yml'
       - 'dolphinscheduler-ui/**'
   pull_request:
     paths:
-      - '.github/workflows/ci_frontend.yml'
+      - '.github/workflows/frontend.yml'
       - 'dolphinscheduler-ui/**'
 
+defaults:
+  run:
+    working-directory: dolphinscheduler-ui
+
+concurrency:
+  group: frontend-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
-  Compile-check:
+  build:
+    name: Build
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ubuntu-latest, macos-latest]
+        os: [ ubuntu-latest, macos-latest ]
     steps:
       - uses: actions/checkout@v2
         with:
-          submodule: true
+          submodules: true
+      - if: matrix.os == 'ubuntu-latest'
+        name: Sanity Check
+        uses: ./.github/actions/sanity-check
      - name: Set up Node.js
-        uses: actions/setup-node@v1
+        uses: actions/setup-node@v2
         with:
-          version: 8
-      - name: Compile
+          node-version: 8
+      - name: Compile and Build
         run: |
-          cd dolphinscheduler-ui
           npm install node-sass --unsafe-perm
           npm install
           npm run lint

.github/workflows/ci_ut.yml → .github/workflows/unit-test.yml (100 lines changed)

@@ -15,103 +15,91 @@
 # limitations under the License.
 #
 
+name: Test
+
 on:
   pull_request:
+    paths-ignore:
+      - '**/*.md'
+      - 'dolphinscheduler-ui'
   push:
+    paths-ignore:
+      - '**/*.md'
+      - 'dolphinscheduler-ui'
     branches:
       - dev
 
 env:
   LOG_DIR: /tmp/dolphinscheduler
 
-name: Unit Test
+concurrency:
+  group: unit-test-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
 
 jobs:
-  build:
-    name: Build
+  unit-test:
+    name: Unit Test
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
         with:
-          submodule: true
-      - name: Check License Header
-        uses: apache/skywalking-eyes@ec88b7d850018c8983f87729ea88549e100c5c82
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Only enable review / suggestion here
-      - uses: actions/cache@v1
+          submodules: true
+      - name: Sanity Check
+        uses: ./.github/actions/sanity-check
+      - name: Set up JDK 1.8
+        uses: actions/setup-java@v2
+        with:
+          java-version: 8
+          distribution: 'adopt'
+      - uses: actions/cache@v2
         with:
           path: ~/.m2/repository
-          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
-          restore-keys: |
-            ${{ runner.os }}-maven-
+          key: ${{ runner.os }}-maven
       - name: Bootstrap database
         run: |
           sed -i "/image: bitnami\/postgresql/a\    ports:\n    - 5432:5432" $(pwd)/docker/docker-swarm/docker-compose.yml
           sed -i "/image: bitnami\/zookeeper/a\    ports:\n    - 2181:2181" $(pwd)/docker/docker-swarm/docker-compose.yml
           docker-compose -f $(pwd)/docker/docker-swarm/docker-compose.yml up -d dolphinscheduler-zookeeper dolphinscheduler-postgresql
           until docker logs docker-swarm_dolphinscheduler-postgresql_1 2>&1 | grep 'listening on IPv4 address'; do echo "waiting for postgresql ready ..."; sleep 1; done
-          docker run --rm --network docker-swarm_dolphinscheduler -v $(pwd)/sql/dolphinscheduler_postgre.sql:/docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql bitnami/postgresql:latest bash -c "PGPASSWORD=root psql -h docker-swarm_dolphinscheduler-postgresql_1 -U root -d dolphinscheduler -v ON_ERROR_STOP=1 -f /docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql"
-      - name: Set up JDK 1.8
-        uses: actions/setup-java@v1
-        with:
-          java-version: 1.8
-      - name: Git fetch unshallow
-        run: |
-          git fetch --unshallow
-          git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
-          git fetch origin
-      - name: Compile
-        run: |
-          export MAVEN_OPTS='-Dmaven.repo.local=.m2/repository -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -Xmx5g'
-          mvn clean verify -B -Dmaven.test.skip=false
+          docker run --rm --network docker-swarm_dolphinscheduler -v $(pwd)/sql/dolphinscheduler_postgre.sql:/docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql bitnami/postgresql:11.11.0 bash -c "PGPASSWORD=root psql -h docker-swarm_dolphinscheduler-postgresql_1 -U root -d dolphinscheduler -v ON_ERROR_STOP=1 -f /docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql"
+      - name: Run Unit tests
+        run: ./mvnw clean verify -B -Dmaven.test.skip=false
       - name: Upload coverage report to codecov
-        run: |
-          CODECOV_TOKEN="09c2663f-b091-4258-8a47-c981827eb29a" bash <(curl -s https://codecov.io/bash)
+        run: CODECOV_TOKEN="09c2663f-b091-4258-8a47-c981827eb29a" bash <(curl -s https://codecov.io/bash)
       # Set up JDK 11 for SonarCloud.
-      - name: Set up JDK 1.11
-        uses: actions/setup-java@v1
+      - name: Set up JDK 11
+        uses: actions/setup-java@v2
         with:
-          java-version: 1.11
+          java-version: 11
+          distribution: 'adopt'
       - name: Run SonarCloud Analysis
         run: >
-          mvn --batch-mode verify sonar:sonar
+          ./mvnw --batch-mode verify sonar:sonar
           -Dsonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
           -Dmaven.test.skip=true
+          -Dcheckstyle.skip=true
           -Dsonar.host.url=https://sonarcloud.io
           -Dsonar.organization=apache
           -Dsonar.core.codeCoveragePlugin=jacoco
           -Dsonar.projectKey=apache-dolphinscheduler
           -Dsonar.login=e4058004bc6be89decf558ac819aa1ecbee57682
           -Dsonar.exclusions=dolphinscheduler-ui/src/**/i18n/locale/*.js,dolphinscheduler-microbench/src/**/*
+          -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
         env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
       - name: Collect logs
+        continue-on-error: true
         run: |
           mkdir -p ${LOG_DIR}
           docker-compose -f $(pwd)/docker/docker-swarm/docker-compose.yml logs dolphinscheduler-postgresql > ${LOG_DIR}/db.txt
-        continue-on-error: true
-
-  Checkstyle:
-    name: Check code style
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v2
-        with:
-          submodule: true
-      - name: check code style
-        env:
-          WORKDIR: ./
-          REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          CHECKSTYLE_CONFIG: style/checkstyle.xml
-          REVIEWDOG_VERSION: v0.10.2
-        run: |
-          wget -O - -q https://github.com/checkstyle/checkstyle/releases/download/checkstyle-8.43/checkstyle-8.43-all.jar > /opt/checkstyle.jar
-          wget -O - -q https://raw.githubusercontent.com/reviewdog/reviewdog/master/install.sh | sh -s -- -b /opt ${REVIEWDOG_VERSION}
-          java -jar /opt/checkstyle.jar "${WORKDIR}" -c "${CHECKSTYLE_CONFIG}" -f xml \
-          | /opt/reviewdog -f=checkstyle \
-          -reporter="${INPUT_REPORTER:-github-pr-check}" \
-          -filter-mode="${INPUT_FILTER_MODE:-added}" \
-          -fail-on-error="${INPUT_FAIL_ON_ERROR:-false}"
+      - name: Upload logs
+        uses: actions/upload-artifact@v2
+        continue-on-error: true
+        with:
+          name: unit-test-logs
+          path: ${LOG_DIR}

.gitmodules (6 lines changed)

@@ -21,3 +21,9 @@
 [submodule ".github/actions/lable-on-issue"]
 	path = .github/actions/lable-on-issue
 	url = https://github.com/xingchun-chen/labeler
+[submodule ".github/actions/translate-on-issue"]
+	path = .github/actions/translate-on-issue
+	url = https://github.com/xingchun-chen/translation-helper.git
+[submodule ".github/actions/reviewdog-setup"]
+	path = .github/actions/reviewdog-setup
+	url = https://github.com/reviewdog/action-setup

.licenserc.yaml (5 lines changed)

@@ -40,5 +40,10 @@ header:
     - '**/.gitignore'
     - '**/LICENSE'
     - '**/NOTICE'
+    - '**/node_modules/**'
+    - '.github/actions/comment-on-issue/**'
+    - '.github/actions/lable-on-issue/**'
+    - '.github/actions/reviewdog-setup/**'
+    - '.github/actions/translate-on-issue/**'
 
 comment: on-failure

README.md (3 lines changed)

@@ -8,6 +8,7 @@ Dolphin Scheduler Official Website
 [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apache-dolphinscheduler&metric=alert_status)](https://sonarcloud.io/dashboard?id=apache-dolphinscheduler)
 [![Twitter Follow](https://img.shields.io/twitter/follow/dolphinschedule.svg?style=social&label=Follow)](https://twitter.com/dolphinschedule)
 [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw)
+[![Join the chat at https://gitter.im/apache-dolphinscheduler/community](https://badges.gitter.im/apache-dolphinscheduler/community.svg)](https://gitter.im/apache-dolphinscheduler/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
 
@@ -91,7 +92,7 @@ We would like to express our deep gratitude to all the open-source projects used
 You are very welcome to communicate with the developers and users of Dolphin Scheduler. There are two ways to find them:
 1. Join the Slack channel by [this invitation link](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw).
-2. Follow the [Twitter account of Dolphin Scheduler](https://twitter.com/dolphinschedule) and get the latest news on time.
+2. Follow the [Twitter account of DolphinScheduler](https://twitter.com/dolphinschedule) and get the latest news on time.
 
 ### Contributor over time

docker/build/hooks/build (4 lines changed)

@@ -39,8 +39,8 @@ echo "Repo: $DOCKER_REPO"
 echo -e "Current Directory is $(pwd)\n"
 
 # maven package(Project Directory)
-echo -e "mvn -B clean compile package -Prelease -Dmaven.test.skip=true"
-mvn -B clean compile package -Prelease -Dmaven.test.skip=true
+echo -e "./mvnw -B clean package -Prelease -Dmaven.test.skip=true -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120"
+./mvnw -B clean package -Prelease -Dmaven.test.skip=true -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
 
 # mv dolphinscheduler-bin.tar.gz file to docker/build directory
 echo -e "mv $(pwd)/dolphinscheduler-dist/target/apache-dolphinscheduler-${VERSION}-bin.tar.gz $(pwd)/docker/build/\n"

docker/build/hooks/build.bat (4 lines changed)

@@ -39,8 +39,8 @@ echo "Repo: %DOCKER_REPO%"
 echo "Current Directory is %cd%"
 
 :: maven package(Project Directory)
-echo "call mvn clean compile package -Prelease"
-call mvn clean compile package -Prelease -DskipTests=true
+echo "mvn clean package -Prelease -DskipTests=true -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120"
+call mvn clean package -Prelease -DskipTests=true -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
 if "%errorlevel%"=="1" goto :mvnFailed
 
 :: move dolphinscheduler-bin.tar.gz file to docker/build directory

dolphinscheduler-alert-plugin/dolphinscheduler-alert-email/pom.xml (11 lines changed)

@@ -31,17 +31,6 @@
     <packaging>dolphinscheduler-plugin</packaging>
 
     <dependencies>
-        <dependency>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>jackson-annotations</artifactId>
-            <scope>provided</scope>
-        </dependency>
-        <dependency>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>jackson-databind</artifactId>
-            <scope>provided</scope>
-        </dependency>
         <dependency>
             <groupId>org.apache.commons</groupId>
             <artifactId>commons-collections4</artifactId>

dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/ExecutorController.java (7 lines changed)

@@ -99,6 +99,7 @@ public class ExecutorController extends BaseController {
         @ApiImplicitParam(name = "processInstancePriority", value = "PROCESS_INSTANCE_PRIORITY", required = true, dataType = "Priority"),
         @ApiImplicitParam(name = "workerGroup", value = "WORKER_GROUP", dataType = "String", example = "default"),
         @ApiImplicitParam(name = "timeout", value = "TIMEOUT", dataType = "Int", example = "100"),
+        @ApiImplicitParam(name = "expectedParallelismNumber", value = "EXPECTED_PARALLELISM_NUMBER", dataType = "Int", example = "8")
     })
     @PostMapping(value = "start-process-instance")
     @ResponseStatus(HttpStatus.OK)
@@ -118,7 +119,8 @@ public class ExecutorController extends BaseController {
                                        @RequestParam(value = "processInstancePriority", required = false) Priority processInstancePriority,
                                        @RequestParam(value = "workerGroup", required = false, defaultValue = "default") String workerGroup,
                                        @RequestParam(value = "timeout", required = false) Integer timeout,
-                                       @RequestParam(value = "startParams", required = false) String startParams) {
+                                       @RequestParam(value = "startParams", required = false) String startParams,
+                                       @RequestParam(value = "expectedParallelismNumber", required = false) Integer expectedParallelismNumber) {
 
         if (timeout == null) {
             timeout = Constants.MAX_TASK_TIMEOUT;
@@ -128,8 +130,7 @@ public class ExecutorController extends BaseController {
             startParamMap = JSONUtils.toMap(startParams);
         }
         Map<String, Object> result = execService.execProcessInstance(loginUser, projectCode, processDefinitionCode, scheduleTime, execType, failureStrategy,
-                startNodeList, taskDependType, warningType,
-                warningGroupId, runMode, processInstancePriority, workerGroup, timeout, startParamMap);
+                startNodeList, taskDependType, warningType, warningGroupId, runMode, processInstancePriority, workerGroup, timeout, startParamMap, expectedParallelismNumber);
         return returnDataList(result);
     }
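
For context, a sketch of how a client might exercise the extended endpoint. This is a hypothetical call, assuming a locally running API server; the host, port, context path, project code, and session id are placeholders rather than values taken from this change, and only expectedParallelismNumber is new here:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client call; host, project code, and session id are placeholders.
public class StartProcessInstanceExample {
    public static void main(String[] args) throws Exception {
        String form = "processDefinitionCode=1"
                + "&failureStrategy=CONTINUE"
                + "&warningType=NONE"
                + "&runMode=RUN_MODE_PARALLEL"
                + "&expectedParallelismNumber=8";   // the parameter introduced by this change
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:12345/dolphinscheduler/projects/1/executors/start-process-instance"))
                .header("sessionId", "<session-id>")
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}

When the parameter is omitted, the server derives the command count from the schedule dates themselves, as the ExecutorServiceImpl diff below shows.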

dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/ExecutorService.java (3 lines changed)

@@ -52,6 +52,7 @@ public interface ExecutorService {
     * @param runMode run mode
     * @param timeout timeout
     * @param startParams the global param values which pass to new process instance
+    * @param expectedParallelismNumber the expected parallelism number when execute complement in parallel mode
     * @return execute process instance code
     */
    Map<String, Object> execProcessInstance(User loginUser, long projectCode,
@@ -60,7 +61,7 @@ public interface ExecutorService {
                                            TaskDependType taskDependType, WarningType warningType, int warningGroupId,
                                            RunMode runMode,
                                            Priority processInstancePriority, String workerGroup, Integer timeout,
-                                           Map<String, String> startParams);
+                                           Map<String, String> startParams, Integer expectedParallelismNumber);
 
    /**
     * check whether the process definition can be executed

dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ExecutorServiceImpl.java (37 lines changed)

@@ -51,6 +51,7 @@ import org.apache.dolphinscheduler.dao.entity.Schedule;
 import org.apache.dolphinscheduler.dao.entity.Tenant;
 import org.apache.dolphinscheduler.dao.entity.User;
 import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
+import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
 import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
 import org.apache.dolphinscheduler.service.process.ProcessService;
 import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
@@ -89,6 +90,11 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
     @Autowired
     private MonitorService monitorService;
 
+    @Autowired
+    private ProcessInstanceMapper processInstanceMapper;
+
     @Autowired
     private ProcessService processService;
@@ -100,7 +106,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
     * @param processDefinitionCode process definition code
     * @param cronTime cron time
     * @param commandType command type
-    * @param failureStrategy failuer strategy
+    * @param failureStrategy failure strategy
     * @param startNodeList start nodelist
     * @param taskDependType node dependency type
     * @param warningType warning type
@@ -110,6 +116,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
     * @param runMode run mode
     * @param timeout timeout
     * @param startParams the global param values which pass to new process instance
+    * @param expectedParallelismNumber the expected parallelism number when execute complement in parallel mode
     * @return execute process instance code
     */
    @Override
@@ -119,7 +126,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
                                                   TaskDependType taskDependType, WarningType warningType, int warningGroupId,
                                                   RunMode runMode,
                                                   Priority processInstancePriority, String workerGroup, Integer timeout,
-                                                  Map<String, String> startParams) {
+                                                  Map<String, String> startParams, Integer expectedParallelismNumber) {
        Project project = projectMapper.queryByCode(projectCode);
        //check user access for project
        Map<String, Object> result = projectService.checkProjectAndAuth(loginUser, project, projectCode);
@@ -156,7 +163,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
         */
        int create = this.createCommand(commandType, processDefinition.getCode(),
                taskDependType, failureStrategy, startNodeList, cronTime, warningType, loginUser.getId(),
-                warningGroupId, runMode, processInstancePriority, workerGroup, startParams);
+                warningGroupId, runMode, processInstancePriority, workerGroup, startParams, expectedParallelismNumber);
 
        if (create > 0) {
            processDefinition.setWarningGroupId(warningGroupId);
@@ -485,7 +492,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
                              String startNodeList, String schedule, WarningType warningType,
                              int executorId, int warningGroupId,
                              RunMode runMode, Priority processInstancePriority, String workerGroup,
-                              Map<String, String> startParams) {
+                              Map<String, String> startParams, Integer expectedParallelismNumber) {
 
        /**
         * instantiate command schedule instance
@@ -542,21 +549,31 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
                return processService.createCommand(command);
            } else if (runMode == RunMode.RUN_MODE_PARALLEL) {
                List<Schedule> schedules = processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefineCode);
-                List<Date> listDate = new LinkedList<>();
+                LinkedList<Date> listDate = new LinkedList<>();
                if (!CollectionUtils.isEmpty(schedules)) {
                    for (Schedule item : schedules) {
                        listDate.addAll(CronUtils.getSelfFireDateList(start, end, item.getCrontab()));
                    }
                }
                if (!CollectionUtils.isEmpty(listDate)) {
-                    // loop by schedule date
-                    for (Date date : listDate) {
-                        cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, DateUtils.dateToString(date));
-                        cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, DateUtils.dateToString(date));
+                    int effectThreadsCount = expectedParallelismNumber == null ? listDate.size() : Math.min(listDate.size(), expectedParallelismNumber);
+                    logger.info("In parallel mode, current expectedParallelismNumber:{}", effectThreadsCount);
+
+                    int chunkSize = listDate.size() / effectThreadsCount;
+                    listDate.addFirst(start);
+                    listDate.addLast(end);
+
+                    for (int i = 0; i < effectThreadsCount; i++) {
+                        int rangeStart = i == 0 ? i : (i * chunkSize);
+                        int rangeEnd = i == effectThreadsCount - 1 ? listDate.size() - 1
+                                : rangeStart + chunkSize + 1;
+                        cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, DateUtils.dateToString(listDate.get(rangeStart)));
+                        cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, DateUtils.dateToString(listDate.get(rangeEnd)));
                        command.setCommandParam(JSONUtils.toJsonString(cmdParam));
                        processService.createCommand(command);
                    }
-                    return listDate.size();
+
+                    return effectThreadsCount;
                } else {
                    // loop by day
                    int runCunt = 0;
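
A minimal standalone sketch of the index arithmetic in the new RUN_MODE_PARALLEL branch above; the class and variable names are illustrative, only the formulas are taken from the diff:

// Illustrative demo of the complement date-range chunking; not project code.
public class ComplementChunkDemo {
    public static void main(String[] args) {
        int fireDates = 10;                       // dates matched by the cron between start and end
        Integer expectedParallelismNumber = 3;    // what the new request parameter supplies

        // Same capping rule as the diff: never more commands than fire dates.
        int effectThreadsCount = expectedParallelismNumber == null
                ? fireDates
                : Math.min(fireDates, expectedParallelismNumber);
        int chunkSize = fireDates / effectThreadsCount;

        // The real code prepends the overall start date and appends the end date,
        // so the addressable index space is fireDates + 2 entries long.
        int size = fireDates + 2;
        for (int i = 0; i < effectThreadsCount; i++) {
            int rangeStart = i == 0 ? 0 : i * chunkSize;
            int rangeEnd = i == effectThreadsCount - 1 ? size - 1 : rangeStart + chunkSize + 1;
            System.out.printf("command %d covers date indexes [%d, %d]%n", i, rangeStart, rangeEnd);
        }
    }
}

With 10 fire dates and a parallelism of 3 this prints [0, 4], [3, 7] and [6, 11]; adjacent ranges share boundary indexes, so each generated command receives a closed start/end date range rather than a single date.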

dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TenantServiceImpl.java (4 lines changed)

@@ -155,8 +155,8 @@ public class TenantServiceImpl extends BaseServiceImpl implements TenantService
     * updateProcessInstance tenant
     *
     * @param loginUser login user
-    * @param id tennat id
-    * @param tenantCode tennat code
+    * @param id tenant id
+    * @param tenantCode tenant code
     * @param queueId queue id
     * @param desc description
     * @return update result code

dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/UsersServiceImpl.java (4 lines changed)

@@ -313,7 +313,7 @@ public class UsersServiceImpl extends BaseServiceImpl implements UsersService {
     *
     * @param loginUser login user
     * @param pageNo page number
-    * @param searchVal search avlue
+    * @param searchVal search value
     * @param pageSize page size
     * @return user list page
     */
@@ -347,7 +347,7 @@ public class UsersServiceImpl extends BaseServiceImpl implements UsersService {
     * @param userName user name
     * @param userPassword user password
     * @param email email
-    * @param tenantId tennat id
+    * @param tenantId tenant id
     * @param phone phone
     * @param queue queue
     * @return update result code

dolphinscheduler-api/src/main/resources/i18n/messages_en_US.properties (1 line changed)

@@ -171,6 +171,7 @@ PROCESS_INSTANCE_START_TIME=process instance start time
 PROCESS_INSTANCE_END_TIME=process instance end time
 PROCESS_INSTANCE_SIZE=process instance size
 PROCESS_INSTANCE_PRIORITY=process instance priority
+EXPECTED_PARALLELISM_NUMBER=custom parallelism to set the complement task threads
 UPDATE_SCHEDULE_NOTES=update schedule
 SCHEDULE_ID=schedule id
 ONLINE_SCHEDULE_NOTES=online schedule

dolphinscheduler-api/src/main/resources/i18n/messages_zh_CN.properties (1 line changed)

@@ -157,6 +157,7 @@ RECEIVERS=收件人
 RECEIVERS_CC=收件人(抄送)
 WORKER_GROUP_ID=Worker Server分组ID
 PROCESS_INSTANCE_PRIORITY=流程实例优先级
+EXPECTED_PARALLELISM_NUMBER=补数任务自定义并行度
 UPDATE_SCHEDULE_NOTES=更新定时
 SCHEDULE_ID=定时ID
 ONLINE_SCHEDULE_NOTES=定时上线

dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/ExecutorControllerTest.java (49 lines changed)

@@ -22,23 +22,16 @@ import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.
 import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
 
 import org.apache.dolphinscheduler.api.enums.ExecuteType;
-import org.apache.dolphinscheduler.api.enums.Status;
-import org.apache.dolphinscheduler.api.service.ExecutorService;
 import org.apache.dolphinscheduler.api.utils.Result;
-import org.apache.dolphinscheduler.common.Constants;
 import org.apache.dolphinscheduler.common.enums.FailureStrategy;
 import org.apache.dolphinscheduler.common.enums.WarningType;
 import org.apache.dolphinscheduler.common.utils.JSONUtils;
 
-import java.util.HashMap;
-import java.util.Map;
-
 import org.junit.Assert;
+import org.junit.Ignore;
 import org.junit.Test;
-import org.mockito.Mockito;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-import org.springframework.boot.test.mock.mockito.MockBean;
 import org.springframework.http.MediaType;
 import org.springframework.test.web.servlet.MvcResult;
 import org.springframework.util.LinkedMultiValueMap;
@@ -51,79 +44,65 @@ public class ExecutorControllerTest extends AbstractControllerTest {
     private static Logger logger = LoggerFactory.getLogger(ExecutorControllerTest.class);
 
-    @MockBean
-    private ExecutorService executorService;
-
+    @Ignore
     @Test
     public void testStartProcessInstance() throws Exception {
-        Map<String, Object> resultData = new HashMap<>();
-        resultData.put(Constants.STATUS, Status.SUCCESS);
-        Mockito.when(executorService.execProcessInstance(Mockito.any(), Mockito.anyLong(), Mockito.anyLong(), Mockito.any(),
-                Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any(), Mockito.anyInt(),
-                Mockito.any(), Mockito.any(), Mockito.any(), Mockito.anyInt(), Mockito.any())).thenReturn(resultData);
-
         MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
-        paramsMap.add("processDefinitionCode", "1");
+        paramsMap.add("processDefinitionId", "40");
         paramsMap.add("scheduleTime", "");
         paramsMap.add("failureStrategy", String.valueOf(FailureStrategy.CONTINUE));
         paramsMap.add("startNodeList", "");
         paramsMap.add("taskDependType", "");
         paramsMap.add("execType", "");
         paramsMap.add("warningType", String.valueOf(WarningType.NONE));
-        paramsMap.add("warningGroupId", "1");
+        paramsMap.add("warningGroupId", "");
         paramsMap.add("receivers", "");
         paramsMap.add("receiversCc", "");
         paramsMap.add("runMode", "");
         paramsMap.add("processInstancePriority", "");
-        paramsMap.add("workerGroupId", "1");
+        paramsMap.add("workerGroupId", "");
         paramsMap.add("timeout", "");
 
-        MvcResult mvcResult = mockMvc.perform(post("/projects/{projectCode}/executors/start-process-instance", 1L)
+        MvcResult mvcResult = mockMvc.perform(post("/projects/{projectName}/executors/start-process-instance", "cxc_1113")
                .header("sessionId", sessionId)
                .params(paramsMap))
                .andExpect(status().isOk())
                .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
                .andReturn();
 
        Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
-        Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
+        Assert.assertTrue(result != null && result.isSuccess());
        logger.info(mvcResult.getResponse().getContentAsString());
    }
 
+    @Ignore
    @Test
    public void testExecute() throws Exception {
-        Map<String, Object> resultData = new HashMap<>();
-        resultData.put(Constants.STATUS, Status.SUCCESS);
-        Mockito.when(executorService.execute(Mockito.any(), Mockito.anyLong(), Mockito.anyInt(), Mockito.any())).thenReturn(resultData);
-
        MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
        paramsMap.add("processInstanceId", "40");
        paramsMap.add("executeType", String.valueOf(ExecuteType.NONE));
 
-        MvcResult mvcResult = mockMvc.perform(post("/projects/{projectCode}/executors/execute", 1L)
+        MvcResult mvcResult = mockMvc.perform(post("/projects/{projectName}/executors/execute", "cxc_1113")
                .header("sessionId", sessionId)
                .params(paramsMap))
                .andExpect(status().isOk())
                .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
                .andReturn();
 
        Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
-        Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
+        Assert.assertTrue(result != null && result.isSuccess());
        logger.info(mvcResult.getResponse().getContentAsString());
    }
 
    @Test
-    public void testStartCheck() throws Exception {
-        Map<String, Object> resultData = new HashMap<>();
-        resultData.put(Constants.STATUS, Status.SUCCESS);
-        Mockito.when(executorService.startCheckByProcessDefinedCode(Mockito.anyLong())).thenReturn(resultData);
-
-        MvcResult mvcResult = mockMvc.perform(post("/projects/{projectCode}/executors/start-check", 1L)
+    public void testStartCheckProcessDefinition() throws Exception {
+        MvcResult mvcResult = mockMvc.perform(post("/projects/{projectName}/executors/start-check", "cxc_1113")
                .header(SESSION_ID, sessionId)
-                .param("processDefinitionCode", "1"))
+                .param("processDefinitionId", "40"))
                .andExpect(status().isOk())
                .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
                .andReturn();
 
        Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
-        Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
+        Assert.assertTrue(result != null && result.isSuccess());
        logger.info(mvcResult.getResponse().getContentAsString());
    }

14
dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/ExecutorServiceTest.java

@@ -158,7 +158,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(1)).createCommand(any(Command.class));
@@ -176,7 +176,7 @@ public class ExecutorServiceTest {
                 null, "n1,n2",
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(1)).createCommand(any(Command.class));
@@ -194,7 +194,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.START_PROCESS_INSTANCE_ERROR, result.get(Constants.STATUS));
         verify(processService, times(0)).createCommand(any(Command.class));
     }
@@ -211,7 +211,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(1)).createCommand(any(Command.class));
     }
@@ -228,7 +228,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_PARALLEL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(31)).createCommand(any(Command.class));
@@ -246,7 +246,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_PARALLEL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 15);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(15)).createCommand(any(Command.class));
@@ -261,7 +261,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_PARALLEL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(result.get(Constants.STATUS), Status.MASTER_NOT_EXISTS);
     }
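The new trailing argument threaded through every execProcessInstance call is the expected parallelism, fed from the UI's expectedParallelismNumber field introduced in start.vue below (0 meaning no custom cap). The verify counts spell out the intended complement semantics: a 31-day parallel complement creates 31 commands uncapped, but only 15 once the parallelism is capped at 15. A tiny illustration of that relationship, assuming the cap simply bounds the number of generated commands, which is what the two verify counts imply:

    // Illustration only: expected number of complement commands for a date
    // range of `days` days, given an expected parallelism (0 = no cap).
    static int expectedCommandCount(int days, int expectedParallelism) {
        if (expectedParallelism <= 0) {
            return days;                                 // one command per day
        }
        return Math.min(days, expectedParallelism);      // cap at requested parallelism
    }

    // expectedCommandCount(31, 0)  == 31  -> verify(..., times(31))
    // expectedCommandCount(31, 15) == 15  -> verify(..., times(15))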

dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/DbType.java

@@ -14,65 +14,45 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
 package org.apache.dolphinscheduler.common.enums;

-import com.baomidou.mybatisplus.annotation.EnumValue;
+import static java.util.stream.Collectors.toMap;

-import java.util.HashMap;
+import java.util.Arrays;
 import java.util.Map;

-/**
- * data base types
- */
-public enum DbType {
-    /**
-     * 0 mysql
-     * 1 postgresql
-     * 2 hive
-     * 3 spark
-     * 4 clickhouse
-     * 5 oracle
-     * 6 sqlserver
-     * 7 db2
-     * 8 presto
-     */
-    MYSQL(0, "mysql"),
-    POSTGRESQL(1, "postgresql"),
-    HIVE(2, "hive"),
-    SPARK(3, "spark"),
-    CLICKHOUSE(4, "clickhouse"),
-    ORACLE(5, "oracle"),
-    SQLSERVER(6, "sqlserver"),
-    DB2(7, "db2"),
-    PRESTO(8, "presto");
+import com.baomidou.mybatisplus.annotation.EnumValue;
+import com.google.common.base.Functions;

-    DbType(int code, String descp) {
+public enum DbType {
+    MYSQL(0),
+    POSTGRESQL(1),
+    HIVE(2),
+    SPARK(3),
+    CLICKHOUSE(4),
+    ORACLE(5),
+    SQLSERVER(6),
+    DB2(7),
+    PRESTO(8),
+    H2(9);

+    DbType(int code) {
         this.code = code;
-        this.descp = descp;
     }

     @EnumValue
     private final int code;
-    private final String descp;

     public int getCode() {
         return code;
     }

-    public String getDescp() {
-        return descp;
-    }
-
-    private static HashMap<Integer, DbType> DB_TYPE_MAP =new HashMap<>();
-
-    static {
-        for (DbType dbType:DbType.values()){
-            DB_TYPE_MAP.put(dbType.getCode(),dbType);
-        }
-    }
+    private static final Map<Integer, DbType> DB_TYPE_MAP =
+            Arrays.stream(DbType.values()).collect(toMap(DbType::getCode, Functions.identity()));

-    public static DbType of(int type){
-        if(DB_TYPE_MAP.containsKey(type)){
+    public static DbType of(int type) {
+        if (DB_TYPE_MAP.containsKey(type)) {
             return DB_TYPE_MAP.get(type);
         }
         return null;
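Two things change here beyond style: H2 joins the enum (code 9) so the standalone server's embedded database is a first-class DbType, and the code-to-enum lookup is now built eagerly into a final field instead of a mutable static HashMap filled in a static block, so there is no initialization window in which of() could observe a half-filled map. A quick usage sketch:

    // Lookup by the persisted integer code; unknown codes still yield null.
    DbType h2 = DbType.of(9);        // DbType.H2, new in this change
    DbType unknown = DbType.of(42);  // null -> callers must handle the miss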

dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/graph/DAG.java

@@ -433,11 +433,9 @@ public class DAG<Node, NodeInfo, EdgeInfo> {
      */
     private Set<Node> getNeighborNodes(Node node, final Map<Node, Map<Node, EdgeInfo>> edges) {
         final Map<Node, EdgeInfo> neighborEdges = edges.get(node);
         if (neighborEdges == null) {
-            return Collections.EMPTY_MAP.keySet();
+            return Collections.emptySet();
         }
         return neighborEdges.keySet();
     }
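Behaviorally the two forms are identical (both return an immutable empty set); the win is type safety. Collections.EMPTY_MAP is a raw Map, so keySet() on it yields a raw Set and an unchecked-assignment warning at the call site, while Collections.emptySet() is generic and infers Set<Node>. A minimal sketch of the difference:

    import java.util.Collections;
    import java.util.Set;

    class EmptySetDemo {
        @SuppressWarnings("unchecked")
        static Set<String> rawStyle() {
            return Collections.EMPTY_MAP.keySet(); // raw type: needs the suppression above
        }

        static Set<String> genericStyle() {
            return Collections.emptySet();         // inferred as Set<String>, no warning
        }
    }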

dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java

@@ -34,15 +34,7 @@ import java.util.Set;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

-/**
- * property utils
- * single instance
- */
 public class PropertyUtils {

-    /**
-     * logger
-     */
     private static final Logger logger = LoggerFactory.getLogger(PropertyUtils.class);

     private static final Properties properties = new Properties();
@@ -55,9 +47,6 @@ public class PropertyUtils {
         loadPropertyFile(COMMON_PROPERTIES_PATH);
     }

-    /**
-     * init properties
-     */
     public static synchronized void loadPropertyFile(String... propertyFiles) {
         for (String fileName : propertyFiles) {
             try (InputStream fis = PropertyUtils.class.getResourceAsStream(fileName);) {
@@ -68,6 +57,13 @@ public class PropertyUtils {
                 System.exit(1);
             }
         }
+
+        // Override from system properties
+        System.getProperties().forEach((k, v) -> {
+            final String key = String.valueOf(k);
+            logger.info("Overriding property from system property: {}", key);
+            PropertyUtils.setValue(key, String.valueOf(v));
+        });
     }

     /**
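With this loop in place, any JVM system property shadows the value loaded from the property files, which is what lets the standalone server further down inject its H2 datasource and embedded ZooKeeper address via System.setProperty instead of edited configuration files. A minimal sketch of the effect, assuming a registry.properties that ships registry.servers=remote-zk:2181 and the existing PropertyUtils.getString accessor:

    // Equivalent to launching the JVM with -Dregistry.servers=127.0.0.1:2181
    System.setProperty("registry.servers", "127.0.0.1:2181");

    // The override runs at the end of loadPropertyFile, so after loading,
    // the system property wins over the value from the file.
    PropertyUtils.loadPropertyFile("/registry.properties");
    String servers = PropertyUtils.getString("registry.servers"); // "127.0.0.1:2181"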

dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/datasource/SpringConnectionFactory.java

@@ -166,6 +166,7 @@ public class SpringConnectionFactory {
         Properties properties = new Properties();
         properties.setProperty("MySQL", "mysql");
         properties.setProperty("PostgreSQL", "pg");
+        properties.setProperty("h2", "h2");
         databaseIdProvider.setProperties(properties);
         return databaseIdProvider;
     }
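This mapping feeds MyBatis' databaseId mechanism: each key is matched against the database product name reported by the JDBC connection, and the value becomes the databaseId that mapper statements can select on. A sketch of what the enclosing bean presumably looks like, assuming MyBatis' stock VendorDatabaseIdProvider:

    import java.util.Properties;

    import org.apache.ibatis.mapping.DatabaseIdProvider;
    import org.apache.ibatis.mapping.VendorDatabaseIdProvider;
    import org.springframework.context.annotation.Bean;

    @Bean
    public DatabaseIdProvider databaseIdProvider() {
        VendorDatabaseIdProvider databaseIdProvider = new VendorDatabaseIdProvider();
        Properties properties = new Properties();
        properties.setProperty("MySQL", "mysql");
        properties.setProperty("PostgreSQL", "pg");
        properties.setProperty("h2", "h2"); // lets mappers target the embedded H2
        databaseIdProvider.setProperties(properties);
        return databaseIdProvider;
    }

The WorkFlowLineageMapper change below goes the other way: dropping databaseId="mysql" makes that statement available under every vendor id, H2 included.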

dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessDefinitionMapper.xml

@@ -77,7 +77,9 @@
         left join t_ds_user tu on td.user_id = tu.id
         where td.project_code = #{projectCode}
         <if test=" searchVal != null and searchVal != ''">
-            and td.name like concat('%', #{searchVal}, '%')
+            AND (td.name like concat('%', #{searchVal}, '%')
+                OR td.description like concat('%', #{searchVal}, '%')
+            )
         </if>
         <if test=" userId != 0">
             and td.user_id = #{userId}

dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessInstanceMapper.xml

@@ -191,8 +191,8 @@
             </foreach>
         </if>
         <if test="startTime!=null and endTime != null ">
-            and (schedule_time <![CDATA[ >= ]]> #{startTime} and schedule_time <![CDATA[ <= ]]> #{endTime}
-            or start_time <![CDATA[ >= ]]> #{startTime} and start_time <![CDATA[ <= ]]> #{endTime})
+            and ((schedule_time <![CDATA[ >= ]]> #{startTime} and schedule_time <![CDATA[ <= ]]> #{endTime})
+            or (start_time <![CDATA[ >= ]]> #{startTime} and start_time <![CDATA[ <= ]]> #{endTime}))
         </if>
         order by start_time desc limit 1
     </select>
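The old predicate already grouped the way the author intended, because AND binds tighter than OR in SQL; the added parentheses make that grouping explicit rather than change the result. The same precedence rule in Java form, as a quick illustration:

    class PredicateGrouping {
        // AND (&&) binds tighter than OR (||), in SQL as in Java, so both
        // forms group identically; the parentheses only make it explicit.
        static boolean inRange(boolean schedGe, boolean schedLe, boolean startGe, boolean startLe) {
            boolean implicitGrouping = schedGe && schedLe || startGe && startLe;
            boolean explicitGrouping = (schedGe && schedLe) || (startGe && startLe);
            assert implicitGrouping == explicitGrouping; // equal for all inputs
            return explicitGrouping;
        }
    }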

dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProjectMapper.xml

@@ -88,7 +88,9 @@
         )
         </if>
         <if test="searchName!=null and searchName != ''">
-            and p.name like concat('%', #{searchName}, '%')
+            AND (p.name LIKE concat('%', #{searchName}, '%')
+                OR p.description LIKE concat('%', #{searchName}, '%')
+            )
         </if>
         order by p.create_time desc
     </select>

dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/WorkFlowLineageMapper.xml

@@ -64,8 +64,7 @@
         and project_code = #{projectCode}
     </select>
-    <select id="queryWorkFlowLineageByCode" resultType="org.apache.dolphinscheduler.dao.entity.WorkFlowLineage"
-            databaseId="mysql">
+    <select id="queryWorkFlowLineageByCode" resultType="org.apache.dolphinscheduler.dao.entity.WorkFlowLineage">
         select tepd.id as work_flow_id,tepd.name as work_flow_name,
         "" as source_work_flow_id,
         tepd.release_state as work_flow_publish_status,

dolphinscheduler-dao/src/test/java/org/apache/dolphinscheduler/dao/utils/ResourceProcessDefinitionUtilsTest.java

@@ -31,7 +31,7 @@ public class ResourceProcessDefinitionUtilsTest {
     @Test
     public void getResourceProcessDefinitionMapTest(){
         List<Map<String,Object>> mapList = new ArrayList<>();
-        Map<String,Object> map = new HashMap();
+        Map<String,Object> map = new HashMap<>();
         map.put("code",1L);
         map.put("resource_ids","1,2,3");
         mapList.add(map);

dolphinscheduler-dist/pom.xml

@@ -37,6 +37,11 @@
             <artifactId>dolphinscheduler-server</artifactId>
         </dependency>
+        <dependency>
+            <groupId>org.apache.dolphinscheduler</groupId>
+            <artifactId>dolphinscheduler-standalone-server</artifactId>
+        </dependency>
         <dependency>
             <groupId>org.apache.dolphinscheduler</groupId>
             <artifactId>dolphinscheduler-api</artifactId>

dolphinscheduler-dist/release-docs/LICENSE

@@ -249,6 +249,7 @@ The text of each license is also included at licenses/LICENSE-[project].txt.
     curator-client 4.3.0: https://mvnrepository.com/artifact/org.apache.curator/curator-client/4.3.0, Apache 2.0
     curator-framework 4.3.0: https://mvnrepository.com/artifact/org.apache.curator/curator-framework/4.3.0, Apache 2.0
     curator-recipes 4.3.0: https://mvnrepository.com/artifact/org.apache.curator/curator-recipes/4.3.0, Apache 2.0
+    curator-test 2.12.0: https://mvnrepository.com/artifact/org.apache.curator/curator-test/2.12.0, Apache 2.0
     datanucleus-api-jdo 4.2.1: https://mvnrepository.com/artifact/org.datanucleus/datanucleus-api-jdo/4.2.1, Apache 2.0
     datanucleus-core 4.1.6: https://mvnrepository.com/artifact/org.datanucleus/datanucleus-core/4.1.6, Apache 2.0
     datanucleus-rdbms 4.1.7: https://mvnrepository.com/artifact/org.datanucleus/datanucleus-rdbms/4.1.7, Apache 2.0

dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/dispatch/executor/NettyExecutorManager.java

@@ -178,7 +178,7 @@ public class NettyExecutorManager extends AbstractExecutorManager<Boolean>{
      * @return nodes
      */
     private Set<String> getAllNodes(ExecutionContext context){
-        Set<String> nodes = Collections.EMPTY_SET;
+        Set<String> nodes = Collections.emptySet();
         /**
          * executor type
          */

dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterBaseTaskExecThread.java

@@ -191,7 +191,7 @@ public class MasterBaseTaskExecThread implements Callable<Boolean> {
     }

     /**
-     * dispatcht task
+     * dispatch task
      *
      * @param taskInstance taskInstance
      * @return whether submit task success

dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/sql/SqlTask.java

@@ -42,6 +42,8 @@ import org.apache.dolphinscheduler.server.worker.task.AbstractTask;
 import org.apache.dolphinscheduler.service.alert.AlertClientService;
 import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;

+import org.apache.commons.collections.MapUtils;
+
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
@@ -271,11 +273,11 @@ public class SqlTask extends AbstractTask {
     public String setNonQuerySqlReturn(String updateResult, List<Property> properties) {
         String result = null;
-        for (Property info :properties) {
+        for (Property info : properties) {
             if (Direct.OUT == info.getDirect()) {
-                List<Map<String,String>> updateRL = new ArrayList<>();
-                Map<String,String> updateRM = new HashMap<>();
-                updateRM.put(info.getProp(),updateResult);
+                List<Map<String, String>> updateRL = new ArrayList<>();
+                Map<String, String> updateRM = new HashMap<>();
+                updateRM.put(info.getProp(), updateResult);
                 updateRL.add(updateRM);
                 result = JSONUtils.toJsonString(updateRL);
                 break;
@@ -490,6 +492,10 @@ public class SqlTask extends AbstractTask {
     public void printReplacedSql(String content, String formatSql, String rgex, Map<Integer, Property> sqlParamsMap) {
         //parameter print style
         logger.info("after replace sql , preparing : {}", formatSql);
+        if (MapUtils.isEmpty(sqlParamsMap)) {
+            logger.info("sqlParamsMap should not be Empty");
+            return;
+        }
         StringBuilder logPrint = new StringBuilder("replaced sql , parameters:");
         if (sqlParamsMap == null) {
             logger.info("printReplacedSql: sqlParamsMap is null.");
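Commons Collections' MapUtils.isEmpty treats a null map the same as an empty one, so the new early return also covers the null case, and the pre-existing sqlParamsMap == null branch just below it can no longer be reached and could be dropped. A minimal sketch of the semantics:

    import java.util.Collections;
    import java.util.HashMap;

    import org.apache.commons.collections.MapUtils;

    class MapUtilsDemo {
        public static void main(String[] args) {
            System.out.println(MapUtils.isEmpty(null));                   // true
            System.out.println(MapUtils.isEmpty(Collections.emptyMap())); // true
            HashMap<String, String> m = new HashMap<>();
            m.put("k", "v");
            System.out.println(MapUtils.isEmpty(m));                      // false
        }
    }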

dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/MasterExecThreadTest.java

@@ -101,8 +101,8 @@ public class MasterExecThreadTest {
         cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, "2020-01-20 23:00:00");
         Mockito.when(processInstance.getCommandParam()).thenReturn(JSONUtils.toJsonString(cmdParam));
         ProcessDefinition processDefinition = new ProcessDefinition();
-        processDefinition.setGlobalParamMap(Collections.EMPTY_MAP);
-        processDefinition.setGlobalParamList(Collections.EMPTY_LIST);
+        processDefinition.setGlobalParamMap(Collections.emptyMap());
+        processDefinition.setGlobalParamList(Collections.emptyList());
         Mockito.when(processInstance.getProcessDefinition()).thenReturn(processDefinition);
         Mockito.when(processInstance.getProcessDefinitionCode()).thenReturn(processDefinitionCode);
@@ -257,7 +257,7 @@ public class MasterExecThreadTest {
     }

     private List<Schedule> zeroSchedulerList() {
-        return Collections.EMPTY_LIST;
+        return Collections.emptyList();
     }

     private List<Schedule> oneSchedulerList() {

dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/http/HttpTaskTest.java

@@ -55,8 +55,6 @@ import org.springframework.context.ApplicationContext;
 public class HttpTaskTest {
     private static final Logger logger = LoggerFactory.getLogger(HttpTaskTest.class);

     private HttpTask httpTask;

     private ProcessService processService;
@@ -166,7 +166,7 @@ public class HttpTaskTest {
         } catch (IOException e) {
             e.printStackTrace();
-        };
+        }
     }

     @Test

dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/quartz/ProcessScheduleJob.java

@@ -75,8 +75,8 @@ public class ProcessScheduleJob implements Job {
         // query schedule
         Schedule schedule = getProcessService().querySchedule(scheduleId);
-        if (schedule == null) {
-            logger.warn("process schedule does not exist in db,delete schedule job in quartz, projectId:{}, scheduleId:{}", projectId, scheduleId);
+        if (schedule == null || ReleaseState.OFFLINE == schedule.getReleaseState()) {
+            logger.warn("process schedule does not exist in db or process schedule offline,delete schedule job in quartz, projectId:{}, scheduleId:{}", projectId, scheduleId);
             deleteJob(projectId, scheduleId);
             return;
         }

dolphinscheduler-standalone-server/pom.xml

@@ -0,0 +1,52 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ Licensed to the Apache Software Foundation (ASF) under one or more
~ contributor license agreements. See the NOTICE file distributed with
~ this work for additional information regarding copyright ownership.
~ The ASF licenses this file to You under the Apache License, Version 2.0
~ (the "License"); you may not use this file except in compliance with
~ the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>1.3.6-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-standalone-server</artifactId>
<dependencies>
<dependency>
<groupId>org.apache.dolphinscheduler</groupId>
<artifactId>dolphinscheduler-server</artifactId>
</dependency>
<dependency>
<groupId>org.apache.dolphinscheduler</groupId>
<artifactId>dolphinscheduler-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-test</artifactId>
<version>${curator.test}</version>
<exclusions>
<exclusion>
<groupId>org.javassist</groupId>
<artifactId>javassist</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</project>

dolphinscheduler-standalone-server/src/main/java/org/apache/dolphinscheduler/server/StandaloneServer.java

@@ -0,0 +1,82 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server;
import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_DRIVER_CLASS_NAME;
import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_PASSWORD;
import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_URL;
import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_USERNAME;
import org.apache.dolphinscheduler.api.ApiApplicationServer;
import org.apache.dolphinscheduler.common.utils.ScriptRunner;
import org.apache.dolphinscheduler.dao.datasource.ConnectionFactory;
import org.apache.dolphinscheduler.server.master.MasterServer;
import org.apache.dolphinscheduler.server.worker.WorkerServer;
import org.apache.curator.test.TestingServer;
import java.io.FileReader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.sql.DataSource;
import org.h2.tools.Server;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
@SpringBootApplication
public class StandaloneServer {
private static final Logger LOGGER = LoggerFactory.getLogger(StandaloneServer.class);
public static void main(String[] args) throws Exception {
System.setProperty("spring.profiles.active", "api");
final Path temp = Files.createTempDirectory("dolphinscheduler_");
LOGGER.info("H2 database directory: {}", temp);
System.setProperty(
SPRING_DATASOURCE_DRIVER_CLASS_NAME,
org.h2.Driver.class.getName()
);
System.setProperty(
SPRING_DATASOURCE_URL,
String.format("jdbc:h2:tcp://localhost/%s", temp.toAbsolutePath())
);
System.setProperty(SPRING_DATASOURCE_USERNAME, "sa");
System.setProperty(SPRING_DATASOURCE_PASSWORD, "");
Server.createTcpServer("-ifNotExists").start();
final DataSource ds = ConnectionFactory.getInstance().getDataSource();
final ScriptRunner runner = new ScriptRunner(ds.getConnection(), true, true);
runner.runScript(new FileReader("sql/dolphinscheduler_h2.sql"));
final TestingServer server = new TestingServer(true);
System.setProperty("registry.servers", server.getConnectString());
Thread.currentThread().setName("Standalone-Server");
new SpringApplicationBuilder(
ApiApplicationServer.class,
MasterServer.class,
WorkerServer.class
).run(args);
}
}
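Taken together, this one main method wires up everything the standalone server needs: an H2 TCP server seeded from sql/dolphinscheduler_h2.sql via ScriptRunner, an in-process Curator TestingServer standing in for ZooKeeper (its connect string is pushed into registry.servers, which the PropertyUtils system-property override above then honors), and a single Spring context assembled from the api, master, and worker entry classes. This is also why curator-test loses its test scope in the root pom.xml below: TestingServer is now referenced from main code. The matching launch hook is the standalone-server case added to script/dolphinscheduler-daemon.sh further down.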

dolphinscheduler-standalone-server/src/main/resources/registry.properties

@@ -0,0 +1,22 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This file is only to override the production configurations in standalone server.
registry.plugin.dir=./dolphinscheduler-dist/target/dolphinscheduler-dist-1.3.6-SNAPSHOT/lib/plugin/registry/zookeeper
registry.plugin.name=zookeeper
registry.servers=127.0.0.1:2181

dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/start.vue

@@ -116,12 +116,31 @@
             {{$t('Mode of execution')}}
           </div>
           <div class="cont">
-            <el-radio-group v-model="runMode" style="margin-top: 7px;">
+            <el-radio-group @change="_updateParallelStatus" style="margin-top: 7px;"
+                            v-model="runMode">
               <el-radio :label="'RUN_MODE_SERIAL'">{{$t('Serial execution')}}</el-radio>
               <el-radio :label="'RUN_MODE_PARALLEL'">{{$t('Parallel execution')}}</el-radio>
             </el-radio-group>
           </div>
         </div>
+        <div class="clearfix list" style="margin:-6px 0 16px 0" v-if="runMode === 'RUN_MODE_PARALLEL'">
+          <div class="text" style="padding-top: 6px;">
+            <em @click="_showParallelismInfo" class="ans el-icon-warning"></em>
+            {{$t('Parallelism')}}
+          </div>
+          <div class="cont" style="padding-top: 8px;">
+            <el-checkbox @change="_updateEnableCustomParallel" size="small"
+                         v-model="enableCustomParallelism">{{$t('Custom Parallelism')}}
+              <el-input :disabled="!enableCustomParallelism"
+                        :placeholder="$t('Please enter Parallelism')"
+                        ref="parallelismInput"
+                        size="mini"
+                        type="input"
+                        v-model="parallismNumber">
+              </el-input>
+            </el-checkbox>
+          </div>
+        </div>
         <div class="clearfix list">
           <div class="text">
             {{$t('Schedule date')}}
@@ -164,6 +183,7 @@
 </template>
 <script>
   import _ from 'lodash'
+  import i18n from '@/module/i18n'
   import dayjs from 'dayjs'
   import store from '@/conf/home/store'
   import { warningTypeList } from './util'
@@ -188,6 +208,8 @@
         scheduleTime: '',
         spinnerLoading: false,
         execType: false,
+        enableCustomParallelism: false,
+        parallismNumber: null,
         taskDependType: 'TASK_POST',
         runMode: 'RUN_MODE_SERIAL',
         processInstancePriority: 'MEDIUM',
@@ -208,13 +230,33 @@
     },
     methods: {
       ...mapMutations('dag', ['setIsDetails', 'resetParams']),
+      _showParallelismInfo () {
+        this.$message.warning({
+          dangerouslyUseHTMLString: true,
+          message: `<p style='font-size: 14px;'>${i18n.$t('Parallelism tip')}</p>`
+        })
+      },
       _onLocalParams (a) {
         this.udpList = a
       },
       _datepicker (val) {
         this.scheduleTime = val
       },
+      _verification () {
+        if (this.enableCustomParallelism && !this.parallismNumber) {
+          this.$message.warning(`${i18n.$t('Parallelism number should be positive integer')}`)
+          return false
+        }
+        if (this.parallismNumber && !(/(^[1-9]\d*$)/.test(this.parallismNumber))) {
+          this.$message.warning(`${i18n.$t('Parallelism number should be positive integer')}`)
+          return false
+        }
+        return true
+      },
       _start () {
+        if (!this._verification()) {
+          return
+        }
         this.spinnerLoading = true
         let startParams = {}
         for (const item of this.udpList) {
@@ -234,7 +276,8 @@
           runMode: this.runMode,
           processInstancePriority: this.processInstancePriority,
           workerGroup: this.workerGroup,
-          startParams: !_.isEmpty(startParams) ? JSON.stringify(startParams) : ''
+          startParams: !_.isEmpty(startParams) ? JSON.stringify(startParams) : '',
+          expectedParallelismNumber: this.parallismNumber
         }
         // Executed from the specified node
         if (this.sourceType === 'contextmenu') {
@@ -262,6 +305,19 @@
           })
         })
       },
+      _updateParallelStatus () {
+        this.enableCustomParallelism = false
+        this.parallismNumber = null
+      },
+      _updateEnableCustomParallel () {
+        if (!this.enableCustomParallelism) {
+          this.parallismNumber = null
+        } else {
+          this.$nextTick(() => {
+            this.$refs.parallelismInput.focus()
+          })
+        }
+      },
       _getGlobalParams () {
         this.store.dispatch('dag/getProcessDetails', this.startData.id).then(res => {
           this.definitionGlobalParams = _.cloneDeep(this.store.state.dag.globalParams)
@@ -325,6 +381,14 @@
       display: block;
     }
   }
+  .ans {
+    color: #0097e0;
+    font-size: 14px;
+    vertical-align: middle;
+    cursor: pointer;
+  }
   .list {
     margin-bottom: 14px;
     .text {
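The _verification guard only lets strictly positive integers through: the regex ^[1-9]\d*$ rejects 0, negatives, decimals, and leading zeros before expectedParallelismNumber is sent to the backend. The same check in Java form, purely as an illustration:

    import java.util.regex.Pattern;

    class ParallelismCheck {
        private static final Pattern POSITIVE_INT = Pattern.compile("^[1-9]\\d*$");

        static boolean isValidParallelism(String input) {
            return input != null && POSITIVE_INT.matcher(input).matches();
        }
        // isValidParallelism("15") == true; "0", "-3", "1.5", "007" are all rejected
    }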

dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js

@@ -125,7 +125,11 @@ export default {
   'Slot Number': 'Slot Number',
   'Please enter Slot number': 'Please enter Slot number',
   Parallelism: 'Parallelism',
+  'Custom Parallelism': 'Configure parallelism',
   'Please enter Parallelism': 'Please enter Parallelism',
+  'Parallelism tip': 'If there are a large number of tasks requiring complement, you can use the custom parallelism to ' +
+    'set the complement task thread to a reasonable value to avoid too large impact on the server.',
+  'Parallelism number should be positive integer': 'Parallelism number should be positive integer',
   'TaskManager Number': 'TaskManager Number',
   'Please enter TaskManager number': 'Please enter TaskManager number',
   'App Name': 'App Name',

dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js

@@ -125,7 +125,10 @@ export default {
   'Slot Number': 'Slot数量',
   'Please enter Slot number': '请输入Slot数量',
   Parallelism: '并行度',
+  'Custom Parallelism': '自定义并行度',
   'Please enter Parallelism': '请输入并行度',
+  'Parallelism number should be positive integer': '并行度必须为正整数',
+  'Parallelism tip': '如果存在大量任务需要补数时,可以利用自定义并行度将补数的任务线程设置成合理的数值,避免对服务器造成过大的影响',
   'TaskManager Number': 'TaskManager数量',
   'Please enter TaskManager number': '请输入TaskManager数量',
   'App Name': '任务名称',

install.sh

@@ -1,103 +0,0 @@
#!/bin/sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
workDir=`dirname $0`
workDir=`cd ${workDir};pwd`
source ${workDir}/conf/config/install_config.conf
# 1.replace file
echo "1.replace file"
txt=""
if [[ "$OSTYPE" == "darwin"* ]]; then
# Mac OSX
txt="''"
fi
datasourceDriverClassname="com.mysql.jdbc.Driver"
if [ $dbtype == "postgresql" ];then
datasourceDriverClassname="org.postgresql.Driver"
fi
sed -i ${txt} "s@^spring.datasource.driver-class-name=.*@spring.datasource.driver-class-name=${datasourceDriverClassname}@g" conf/datasource.properties
sed -i ${txt} "s@^spring.datasource.url=.*@spring.datasource.url=jdbc:${dbtype}://${dbhost}/${dbname}?characterEncoding=UTF-8\&allowMultiQueries=true@g" conf/datasource.properties
sed -i ${txt} "s@^spring.datasource.username=.*@spring.datasource.username=${username}@g" conf/datasource.properties
sed -i ${txt} "s@^spring.datasource.password=.*@spring.datasource.password=${password}@g" conf/datasource.properties
sed -i ${txt} "s@^#\?zookeeper.quorum=.*@zookeeper.quorum=${zkQuorum}@g" conf/zookeeper.properties
sed -i ${txt} "s@^#\?zookeeper.dolphinscheduler.root=.*@zookeeper.dolphinscheduler.root=${zkRoot}@g" conf/zookeeper.properties
sed -i ${txt} "s@^data.basedir.path=.*@data.basedir.path=${dataBasedirPath}@g" conf/common.properties
sed -i ${txt} "s@^resource.storage.type=.*@resource.storage.type=${resourceStorageType}@g" conf/common.properties
sed -i ${txt} "s@^resource.upload.path=.*@resource.upload.path=${resourceUploadPath}@g" conf/common.properties
sed -i ${txt} "s@^hadoop.security.authentication.startup.state=.*@hadoop.security.authentication.startup.state=${kerberosStartUp}@g" conf/common.properties
sed -i ${txt} "s@^java.security.krb5.conf.path=.*@java.security.krb5.conf.path=${krb5ConfPath}@g" conf/common.properties
sed -i ${txt} "s@^login.user.keytab.username=.*@login.user.keytab.username=${keytabUserName}@g" conf/common.properties
sed -i ${txt} "s@^login.user.keytab.path=.*@login.user.keytab.path=${keytabPath}@g" conf/common.properties
sed -i ${txt} "s@^kerberos.expire.time=.*@kerberos.expire.time=${kerberosExpireTime}@g" conf/common.properties
sed -i ${txt} "s@^hdfs.root.user=.*@hdfs.root.user=${hdfsRootUser}@g" conf/common.properties
sed -i ${txt} "s@^fs.defaultFS=.*@fs.defaultFS=${defaultFS}@g" conf/common.properties
sed -i ${txt} "s@^fs.s3a.endpoint=.*@fs.s3a.endpoint=${s3Endpoint}@g" conf/common.properties
sed -i ${txt} "s@^fs.s3a.access.key=.*@fs.s3a.access.key=${s3AccessKey}@g" conf/common.properties
sed -i ${txt} "s@^fs.s3a.secret.key=.*@fs.s3a.secret.key=${s3SecretKey}@g" conf/common.properties
sed -i ${txt} "s@^resource.manager.httpaddress.port=.*@resource.manager.httpaddress.port=${resourceManagerHttpAddressPort}@g" conf/common.properties
sed -i ${txt} "s@^yarn.resourcemanager.ha.rm.ids=.*@yarn.resourcemanager.ha.rm.ids=${yarnHaIps}@g" conf/common.properties
sed -i ${txt} "s@^yarn.application.status.address=.*@yarn.application.status.address=http://${singleYarnIp}:%s/ws/v1/cluster/apps/%s@g" conf/common.properties
sed -i ${txt} "s@^yarn.job.history.status.address=.*@yarn.job.history.status.address=http://${singleYarnIp}:19888/ws/v1/history/mapreduce/jobs/%s@g" conf/common.properties
sed -i ${txt} "s@^sudo.enable=.*@sudo.enable=${sudoEnable}@g" conf/common.properties
# the following configurations may be commented, so ddd #\? to ensure successful sed
sed -i ${txt} "s@^#\?worker.tenant.auto.create=.*@worker.tenant.auto.create=${workerTenantAutoCreate}@g" conf/worker.properties
sed -i ${txt} "s@^#\?alert.listen.host=.*@alert.listen.host=${alertServer}@g" conf/worker.properties
sed -i ${txt} "s@^#\?alert.plugin.dir=.*@alert.plugin.dir=${alertPluginDir}@g" conf/alert.properties
sed -i ${txt} "s@^#\?server.port=.*@server.port=${apiServerPort}@g" conf/application-api.properties
# 2.create directory
echo "2.create directory"
if [ ! -d $installPath ];then
sudo mkdir -p $installPath
sudo chown -R $deployUser:$deployUser $installPath
fi
# 3.scp resources
echo "3.scp resources"
sh ${workDir}/script/scp-hosts.sh
if [ $? -eq 0 ]
then
echo 'scp copy completed'
else
echo 'scp copy failed to exit'
exit 1
fi
# 4.stop server
echo "4.stop server"
sh ${workDir}/script/stop-all.sh
# 5.delete zk node
echo "5.delete zk node"
sh ${workDir}/script/remove-zk-node.sh $zkRoot
# 6.startup
echo "6.startup"
sh ${workDir}/script/start-all.sh

pom.xml

@@ -97,7 +97,7 @@
         <mssql.jdbc.version>6.1.0.jre8</mssql.jdbc.version>
         <presto.jdbc.version>0.238.1</presto.jdbc.version>
         <spotbugs.version>3.1.12</spotbugs.version>
-        <checkstyle.version>3.0.0</checkstyle.version>
+        <checkstyle.version>3.1.2</checkstyle.version>
         <zookeeper.version>3.4.14</zookeeper.version>
         <curator.test>2.12.0</curator.test>
         <frontend-maven-plugin.version>1.6</frontend-maven-plugin.version>
@@ -206,6 +206,11 @@
             <artifactId>dolphinscheduler-server</artifactId>
             <version>${project.version}</version>
         </dependency>
+        <dependency>
+            <groupId>org.apache.dolphinscheduler</groupId>
+            <artifactId>dolphinscheduler-standalone-server</artifactId>
+            <version>${project.version}</version>
+        </dependency>
         <dependency>
             <groupId>org.apache.dolphinscheduler</groupId>
             <artifactId>dolphinscheduler-common</artifactId>
@@ -310,7 +315,6 @@
             <groupId>org.apache.curator</groupId>
             <artifactId>curator-test</artifactId>
             <version>${curator.test}</version>
-            <scope>test</scope>
         </dependency>
         <dependency>
             <groupId>commons-codec</groupId>
@@ -661,7 +665,6 @@
             <artifactId>javax.mail</artifactId>
             <version>1.6.2</version>
         </dependency>
     </dependencies>
 </dependencyManagement>
@@ -900,7 +903,6 @@
                         <include>**/api/utils/ResultTest.java</include>
                         <include>**/common/graph/DAGTest.java</include>
                         <include>**/common/os/OshiTest.java</include>
-                        <include>**/common/os/OSUtilsTest.java</include>
                         <include>**/common/shell/ShellExecutorTest.java</include>
                         <include>**/common/task/DataxParametersTest.java</include>
                         <include>**/common/task/EntityTestUtils.java</include>
@@ -920,7 +922,6 @@
                         <include>**/common/utils/JSONUtilsTest.java</include>
                         <include>**/common/utils/LoggerUtilsTest.java</include>
                         <include>**/common/utils/NetUtilsTest.java</include>
-                        <include>**/common/utils/OSUtilsTest.java</include>
                         <include>**/common/utils/ParameterUtilsTest.java</include>
                         <include>**/common/utils/TimePlaceholderUtilsTest.java</include>
                         <include>**/common/utils/PreconditionsTest.java</include>
@@ -1066,7 +1067,6 @@
                         <include>**/plugin/alert/email/EmailAlertChannelFactoryTest.java</include>
                         <include>**/plugin/alert/email/EmailAlertChannelTest.java</include>
                         <include>**/plugin/alert/email/ExcelUtilsTest.java</include>
-                        <include>**/plugin/alert/email/MailUtilsTest.java</include>
                         <include>**/plugin/alert/email/template/DefaultHTMLTemplateTest.java</include>
                         <include>**/plugin/alert/dingtalk/DingTalkSenderTest.java</include>
                         <include>**/plugin/alert/dingtalk/DingTalkAlertChannelFactoryTest.java</include>
@@ -1154,15 +1154,13 @@
                     <dependency>
                         <groupId>com.puppycrawl.tools</groupId>
                         <artifactId>checkstyle</artifactId>
-                        <version>8.18</version>
+                        <version>8.45</version>
                     </dependency>
                 </dependencies>
                 <configuration>
                     <consoleOutput>true</consoleOutput>
                     <encoding>UTF-8</encoding>
                     <configLocation>style/checkstyle.xml</configLocation>
-                    <suppressionsLocation>style/checkstyle-suppressions.xml</suppressionsLocation>
-                    <suppressionsFileExpression>checkstyle.suppressions.file</suppressionsFileExpression>
                     <failOnViolation>true</failOnViolation>
                     <violationSeverity>warning</violationSeverity>
                     <includeTestSourceDirectory>true</includeTestSourceDirectory>
@@ -1170,7 +1168,6 @@
                         <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory>
                     </sourceDirectories>
                     <excludes>**\/generated-sources\/</excludes>
-                    <skip>true</skip>
                 </configuration>
                 <executions>
                     <execution>
@@ -1216,5 +1213,6 @@
         <module>dolphinscheduler-remote</module>
         <module>dolphinscheduler-service</module>
         <module>dolphinscheduler-microbench</module>
+        <module>dolphinscheduler-standalone-server</module>
     </modules>
 </project>

script/dolphinscheduler-daemon.sh

@@ -16,7 +16,7 @@
 # limitations under the License.
 #

-usage="Usage: dolphinscheduler-daemon.sh (start|stop|status) <api-server|master-server|worker-server|alert-server> "
+usage="Usage: dolphinscheduler-daemon.sh (start|stop|status) <api-server|master-server|worker-server|alert-server|standalone-server> "

 # if no args specified, show usage
 if [ $# -le 1 ]; then
@@ -87,6 +87,8 @@ elif [ "$command" = "zookeeper-server" ]; then
   #note: this command just for getting a quick experience,not recommended for production. this operation will start a standalone zookeeper server
   LOG_FILE="-Dlogback.configurationFile=classpath:logback-zookeeper.xml"
   CLASS=org.apache.dolphinscheduler.service.zk.ZKServer
+elif [ "$command" = "standalone-server" ]; then
+  CLASS=org.apache.dolphinscheduler.server.StandaloneServer
 else
   echo "Error: No command named '$command' was found."
   exit 1

sql/create/release-1.0.0_schema/mysql/dolphinscheduler_ddl.sql

@@ -113,9 +113,9 @@ CREATE TABLE `t_escheduler_master_server` (
   `host` varchar(45) DEFAULT NULL COMMENT 'ip',
   `port` int(11) DEFAULT NULL COMMENT 'port',
   `zk_directory` varchar(64) DEFAULT NULL COMMENT 'the server path in zk directory',
-  `res_info` varchar(256) DEFAULT NULL COMMENT 'json resource information:{"cpu":xxx,"memroy":xxx}',
+  `res_info` varchar(255) DEFAULT NULL COMMENT 'json resource information:{"cpu":xxx,"memory":xxx}',
   `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `last_heartbeat_time` datetime DEFAULT NULL COMMENT 'last head beat time',
+  `last_heartbeat_time` datetime DEFAULT NULL COMMENT 'last heart beat time',
   PRIMARY KEY (`id`)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

sql/dolphinscheduler_h2.sql

@@ -0,0 +1,943 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
SET FOREIGN_KEY_CHECKS=0;
-- ----------------------------
-- Table structure for QRTZ_JOB_DETAILS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_JOB_DETAILS;
CREATE TABLE QRTZ_JOB_DETAILS (
SCHED_NAME varchar(120) NOT NULL,
JOB_NAME varchar(200) NOT NULL,
JOB_GROUP varchar(200) NOT NULL,
DESCRIPTION varchar(250) DEFAULT NULL,
JOB_CLASS_NAME varchar(250) NOT NULL,
IS_DURABLE varchar(1) NOT NULL,
IS_NONCONCURRENT varchar(1) NOT NULL,
IS_UPDATE_DATA varchar(1) NOT NULL,
REQUESTS_RECOVERY varchar(1) NOT NULL,
JOB_DATA blob,
PRIMARY KEY (SCHED_NAME,JOB_NAME,JOB_GROUP)
);
-- ----------------------------
-- Table structure for QRTZ_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_TRIGGERS;
CREATE TABLE QRTZ_TRIGGERS (
SCHED_NAME varchar(120) NOT NULL,
TRIGGER_NAME varchar(200) NOT NULL,
TRIGGER_GROUP varchar(200) NOT NULL,
JOB_NAME varchar(200) NOT NULL,
JOB_GROUP varchar(200) NOT NULL,
DESCRIPTION varchar(250) DEFAULT NULL,
NEXT_FIRE_TIME bigint(13) DEFAULT NULL,
PREV_FIRE_TIME bigint(13) DEFAULT NULL,
PRIORITY int(11) DEFAULT NULL,
TRIGGER_STATE varchar(16) NOT NULL,
TRIGGER_TYPE varchar(8) NOT NULL,
START_TIME bigint(13) NOT NULL,
END_TIME bigint(13) DEFAULT NULL,
CALENDAR_NAME varchar(200) DEFAULT NULL,
MISFIRE_INSTR smallint(2) DEFAULT NULL,
JOB_DATA blob,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
CONSTRAINT QRTZ_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, JOB_NAME, JOB_GROUP) REFERENCES QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP)
);
-- ----------------------------
-- Table structure for QRTZ_BLOB_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS;
CREATE TABLE QRTZ_BLOB_TRIGGERS (
SCHED_NAME varchar(120) NOT NULL,
TRIGGER_NAME varchar(200) NOT NULL,
TRIGGER_GROUP varchar(200) NOT NULL,
BLOB_DATA blob,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
);
-- ----------------------------
-- Records of QRTZ_BLOB_TRIGGERS
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_CALENDARS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_CALENDARS;
CREATE TABLE QRTZ_CALENDARS (
SCHED_NAME varchar(120) NOT NULL,
CALENDAR_NAME varchar(200) NOT NULL,
CALENDAR blob NOT NULL,
PRIMARY KEY (SCHED_NAME,CALENDAR_NAME)
);
-- ----------------------------
-- Records of QRTZ_CALENDARS
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_CRON_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS;
CREATE TABLE QRTZ_CRON_TRIGGERS (
SCHED_NAME varchar(120) NOT NULL,
TRIGGER_NAME varchar(200) NOT NULL,
TRIGGER_GROUP varchar(200) NOT NULL,
CRON_EXPRESSION varchar(120) NOT NULL,
TIME_ZONE_ID varchar(80) DEFAULT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
CONSTRAINT QRTZ_CRON_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
);
-- ----------------------------
-- Records of QRTZ_CRON_TRIGGERS
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_FIRED_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS;
CREATE TABLE QRTZ_FIRED_TRIGGERS (
SCHED_NAME varchar(120) NOT NULL,
ENTRY_ID varchar(200) NOT NULL,
TRIGGER_NAME varchar(200) NOT NULL,
TRIGGER_GROUP varchar(200) NOT NULL,
INSTANCE_NAME varchar(200) NOT NULL,
FIRED_TIME bigint(13) NOT NULL,
SCHED_TIME bigint(13) NOT NULL,
PRIORITY int(11) NOT NULL,
STATE varchar(16) NOT NULL,
JOB_NAME varchar(200) DEFAULT NULL,
JOB_GROUP varchar(200) DEFAULT NULL,
IS_NONCONCURRENT varchar(1) DEFAULT NULL,
REQUESTS_RECOVERY varchar(1) DEFAULT NULL,
PRIMARY KEY (SCHED_NAME,ENTRY_ID)
);
-- ----------------------------
-- Records of QRTZ_FIRED_TRIGGERS
-- ----------------------------
-- ----------------------------
-- Records of QRTZ_JOB_DETAILS
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_LOCKS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_LOCKS;
CREATE TABLE QRTZ_LOCKS (
SCHED_NAME varchar(120) NOT NULL,
LOCK_NAME varchar(40) NOT NULL,
PRIMARY KEY (SCHED_NAME,LOCK_NAME)
);
-- ----------------------------
-- Records of QRTZ_LOCKS
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_PAUSED_TRIGGER_GRPS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS;
CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS (
SCHED_NAME varchar(120) NOT NULL,
TRIGGER_GROUP varchar(200) NOT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_GROUP)
);
-- ----------------------------
-- Records of QRTZ_PAUSED_TRIGGER_GRPS
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_SCHEDULER_STATE
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE;
CREATE TABLE QRTZ_SCHEDULER_STATE (
SCHED_NAME varchar(120) NOT NULL,
INSTANCE_NAME varchar(200) NOT NULL,
LAST_CHECKIN_TIME bigint(13) NOT NULL,
CHECKIN_INTERVAL bigint(13) NOT NULL,
PRIMARY KEY (SCHED_NAME,INSTANCE_NAME)
);
-- ----------------------------
-- Records of QRTZ_SCHEDULER_STATE
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_SIMPLE_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS;
CREATE TABLE QRTZ_SIMPLE_TRIGGERS (
SCHED_NAME varchar(120) NOT NULL,
TRIGGER_NAME varchar(200) NOT NULL,
TRIGGER_GROUP varchar(200) NOT NULL,
REPEAT_COUNT bigint(7) NOT NULL,
REPEAT_INTERVAL bigint(12) NOT NULL,
TIMES_TRIGGERED bigint(10) NOT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
CONSTRAINT QRTZ_SIMPLE_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
);
-- ----------------------------
-- Records of QRTZ_SIMPLE_TRIGGERS
-- ----------------------------
-- ----------------------------
-- Table structure for QRTZ_SIMPROP_TRIGGERS
-- ----------------------------
DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS;
CREATE TABLE QRTZ_SIMPROP_TRIGGERS (
SCHED_NAME varchar(120) NOT NULL,
TRIGGER_NAME varchar(200) NOT NULL,
TRIGGER_GROUP varchar(200) NOT NULL,
STR_PROP_1 varchar(512) DEFAULT NULL,
STR_PROP_2 varchar(512) DEFAULT NULL,
STR_PROP_3 varchar(512) DEFAULT NULL,
INT_PROP_1 int(11) DEFAULT NULL,
INT_PROP_2 int(11) DEFAULT NULL,
LONG_PROP_1 bigint(20) DEFAULT NULL,
LONG_PROP_2 bigint(20) DEFAULT NULL,
DEC_PROP_1 decimal(13,4) DEFAULT NULL,
DEC_PROP_2 decimal(13,4) DEFAULT NULL,
BOOL_PROP_1 varchar(1) DEFAULT NULL,
BOOL_PROP_2 varchar(1) DEFAULT NULL,
PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
CONSTRAINT QRTZ_SIMPROP_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
);
-- ----------------------------
-- Records of QRTZ_SIMPROP_TRIGGERS
-- ----------------------------
-- ----------------------------
-- Records of QRTZ_TRIGGERS
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_access_token
-- ----------------------------
DROP TABLE IF EXISTS t_ds_access_token;
CREATE TABLE t_ds_access_token (
id int(11) NOT NULL AUTO_INCREMENT,
user_id int(11) DEFAULT NULL,
token varchar(64) DEFAULT NULL,
expire_time datetime DEFAULT NULL,
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
);
-- ----------------------------
-- Records of t_ds_access_token
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_alert
-- ----------------------------
DROP TABLE IF EXISTS t_ds_alert;
CREATE TABLE t_ds_alert (
id int(11) NOT NULL AUTO_INCREMENT,
title varchar(64) DEFAULT NULL,
content text,
alert_status tinyint(4) DEFAULT '0',
log text,
alertgroup_id int(11) DEFAULT NULL,
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_alert
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_alertgroup
-- ----------------------------
DROP TABLE IF EXISTS t_ds_alertgroup;
CREATE TABLE t_ds_alertgroup(
id int(11) NOT NULL AUTO_INCREMENT,
alert_instance_ids varchar (255) DEFAULT NULL,
create_user_id int(11) DEFAULT NULL,
group_name varchar(255) DEFAULT NULL,
description varchar(255) DEFAULT NULL,
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id),
UNIQUE KEY t_ds_alertgroup_name_un (group_name)
) ;
-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_command
-- ----------------------------
DROP TABLE IF EXISTS t_ds_command;
CREATE TABLE t_ds_command (
id int(11) NOT NULL AUTO_INCREMENT,
command_type tinyint(4) DEFAULT NULL,
process_definition_id int(11) DEFAULT NULL,
command_param text,
task_depend_type tinyint(4) DEFAULT NULL,
failure_strategy tinyint(4) DEFAULT '0',
warning_type tinyint(4) DEFAULT '0',
warning_group_id int(11) DEFAULT NULL,
schedule_time datetime DEFAULT NULL,
start_time datetime DEFAULT NULL,
executor_id int(11) DEFAULT NULL,
update_time datetime DEFAULT NULL,
process_instance_priority int(11) DEFAULT NULL,
worker_group varchar(64) ,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_command
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_datasource
-- ----------------------------
DROP TABLE IF EXISTS t_ds_datasource;
CREATE TABLE t_ds_datasource (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(64) NOT NULL,
note varchar(255) DEFAULT NULL,
type tinyint(4) NOT NULL,
user_id int(11) NOT NULL,
connection_params text NOT NULL,
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id),
UNIQUE KEY t_ds_datasource_name_un (name, type)
) ;
-- ----------------------------
-- Records of t_ds_datasource
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_error_command
-- ----------------------------
DROP TABLE IF EXISTS t_ds_error_command;
CREATE TABLE t_ds_error_command (
id int(11) NOT NULL,
command_type tinyint(4) DEFAULT NULL,
executor_id int(11) DEFAULT NULL,
process_definition_id int(11) DEFAULT NULL,
command_param text,
task_depend_type tinyint(4) DEFAULT NULL,
failure_strategy tinyint(4) DEFAULT '0',
warning_type tinyint(4) DEFAULT '0',
warning_group_id int(11) DEFAULT NULL,
schedule_time datetime DEFAULT NULL,
start_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
process_instance_priority int(11) DEFAULT NULL,
worker_group varchar(64),
message text,
PRIMARY KEY (id)
);
-- ----------------------------
-- Records of t_ds_error_command
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_process_definition
-- ----------------------------
DROP TABLE IF EXISTS t_ds_process_definition;
CREATE TABLE t_ds_process_definition (
id int(11) NOT NULL AUTO_INCREMENT,
code bigint(20) NOT NULL,
name varchar(255) DEFAULT NULL,
version int(11) DEFAULT NULL,
description text,
project_code bigint(20) NOT NULL,
release_state tinyint(4) DEFAULT NULL,
user_id int(11) DEFAULT NULL,
global_params text,
flag tinyint(4) DEFAULT NULL,
locations text,
connects text,
warning_group_id int(11) DEFAULT NULL,
timeout int(11) DEFAULT '0',
tenant_id int(11) NOT NULL DEFAULT '-1',
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id),
UNIQUE KEY process_unique (name,project_code) USING BTREE,
UNIQUE KEY code_unique (code)
) ;
-- ----------------------------
-- Records of t_ds_process_definition
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_process_definition_log
-- ----------------------------
DROP TABLE IF EXISTS t_ds_process_definition_log;
CREATE TABLE t_ds_process_definition_log (
id int(11) NOT NULL AUTO_INCREMENT,
code bigint(20) NOT NULL,
name varchar(200) DEFAULT NULL,
version int(11) DEFAULT NULL,
description text,
project_code bigint(20) NOT NULL,
release_state tinyint(4) DEFAULT NULL,
user_id int(11) DEFAULT NULL,
global_params text,
flag tinyint(4) DEFAULT NULL,
locations text,
connects text,
warning_group_id int(11) DEFAULT NULL,
timeout int(11) DEFAULT '0',
tenant_id int(11) NOT NULL DEFAULT '-1',
operator int(11) DEFAULT NULL,
operate_time datetime DEFAULT NULL,
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
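-- Example (illustrative only, hypothetical code value): code identifies a
-- workflow across edits while version counts the revisions, and every
-- saved revision is kept in the _log table, so an old one can be read back:
-- SELECT name, version, global_params
-- FROM t_ds_process_definition_log
-- WHERE code = 1234567890 AND version = 2;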
-- ----------------------------
-- Table structure for t_ds_task_definition
-- ----------------------------
DROP TABLE IF EXISTS t_ds_task_definition;
CREATE TABLE t_ds_task_definition (
id int(11) NOT NULL AUTO_INCREMENT,
code bigint(20) NOT NULL,
name varchar(200) DEFAULT NULL,
version int(11) DEFAULT NULL,
description text,
project_code bigint(20) NOT NULL,
user_id int(11) DEFAULT NULL,
task_type varchar(50) NOT NULL,
task_params longtext,
flag tinyint(2) DEFAULT NULL,
task_priority tinyint(4) DEFAULT NULL,
worker_group varchar(200) DEFAULT NULL,
fail_retry_times int(11) DEFAULT NULL,
fail_retry_interval int(11) DEFAULT NULL,
timeout_flag tinyint(2) DEFAULT '0',
timeout_notify_strategy tinyint(4) DEFAULT NULL,
timeout int(11) DEFAULT '0',
delay_time int(11) DEFAULT '0',
resource_ids varchar(255) DEFAULT NULL,
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id,code),
UNIQUE KEY task_unique (name,project_code) USING BTREE
) ;
-- ----------------------------
-- Table structure for t_ds_task_definition_log
-- ----------------------------
DROP TABLE IF EXISTS t_ds_task_definition_log;
CREATE TABLE t_ds_task_definition_log (
id int(11) NOT NULL AUTO_INCREMENT,
code bigint(20) NOT NULL,
name varchar(200) DEFAULT NULL,
version int(11) DEFAULT NULL,
description text,
project_code bigint(20) NOT NULL,
user_id int(11) DEFAULT NULL,
task_type varchar(50) NOT NULL,
task_params text,
flag tinyint(2) DEFAULT NULL,
task_priority tinyint(4) DEFAULT NULL,
worker_group varchar(200) DEFAULT NULL,
fail_retry_times int(11) DEFAULT NULL,
fail_retry_interval int(11) DEFAULT NULL,
timeout_flag tinyint(2) DEFAULT '0',
timeout_notify_strategy tinyint(4) DEFAULT NULL,
timeout int(11) DEFAULT '0',
delay_time int(11) DEFAULT '0',
resource_ids varchar(255) DEFAULT NULL,
operator int(11) DEFAULT NULL,
operate_time datetime DEFAULT NULL,
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Table structure for t_ds_process_task_relation
-- ----------------------------
DROP TABLE IF EXISTS t_ds_process_task_relation;
CREATE TABLE t_ds_process_task_relation (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(200) DEFAULT NULL,
process_definition_version int(11) DEFAULT NULL,
project_code bigint(20) NOT NULL,
process_definition_code bigint(20) NOT NULL,
pre_task_code bigint(20) NOT NULL,
pre_task_version int(11) NOT NULL,
post_task_code bigint(20) NOT NULL,
post_task_version int(11) NOT NULL,
condition_type tinyint(2) DEFAULT NULL,
condition_params text,
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Table structure for t_ds_process_task_relation_log
-- ----------------------------
DROP TABLE IF EXISTS t_ds_process_task_relation_log;
CREATE TABLE t_ds_process_task_relation_log (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(200) DEFAULT NULL,
process_definition_version int(11) DEFAULT NULL,
project_code bigint(20) NOT NULL,
process_definition_code bigint(20) NOT NULL,
pre_task_code bigint(20) NOT NULL,
pre_task_version int(11) NOT NULL,
post_task_code bigint(20) NOT NULL,
post_task_version int(11) NOT NULL,
condition_type tinyint(2) DEFAULT NULL,
condition_params text,
operator int(11) DEFAULT NULL,
operate_time datetime DEFAULT NULL,
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
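-- Example (illustrative only, hypothetical codes): each relation row is
-- one DAG edge, so the downstream tasks of a task are found by matching
-- its code against pre_task_code:
-- SELECT post_task_code
-- FROM t_ds_process_task_relation
-- WHERE process_definition_code = 1234567890
--   AND pre_task_code = 9876543210;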
-- ----------------------------
-- Table structure for t_ds_process_instance
-- ----------------------------
DROP TABLE IF EXISTS t_ds_process_instance;
CREATE TABLE t_ds_process_instance (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(255) DEFAULT NULL,
process_definition_version int(11) DEFAULT NULL,
process_definition_code bigint(20) NOT NULL,
state tinyint(4) DEFAULT NULL,
recovery tinyint(4) DEFAULT NULL,
start_time datetime DEFAULT NULL,
end_time datetime DEFAULT NULL,
run_times int(11) DEFAULT NULL,
host varchar(135) DEFAULT NULL,
command_type tinyint(4) DEFAULT NULL,
command_param text,
task_depend_type tinyint(4) DEFAULT NULL,
max_try_times tinyint(4) DEFAULT '0',
failure_strategy tinyint(4) DEFAULT '0',
warning_type tinyint(4) DEFAULT '0',
warning_group_id int(11) DEFAULT NULL,
schedule_time datetime DEFAULT NULL,
command_start_time datetime DEFAULT NULL,
global_params text,
flag tinyint(4) DEFAULT '1',
update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
is_sub_process int(11) DEFAULT '0',
executor_id int(11) NOT NULL,
history_cmd text,
process_instance_priority int(11) DEFAULT NULL,
worker_group varchar(64) DEFAULT NULL,
timeout int(11) DEFAULT '0',
tenant_id int(11) NOT NULL DEFAULT '-1',
var_pool longtext,
PRIMARY KEY (id)
) ;
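-- Example (illustrative only): an instance records both the code and the
-- version of the workflow it ran, so the exact definition it used can be
-- recovered from the definition log:
-- SELECT i.name AS instance_name, d.name AS definition_name
-- FROM t_ds_process_instance i
-- JOIN t_ds_process_definition_log d
--   ON d.code = i.process_definition_code
--  AND d.version = i.process_definition_version;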
-- ----------------------------
-- Records of t_ds_process_instance
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_project
-- ----------------------------
DROP TABLE IF EXISTS t_ds_project;
CREATE TABLE t_ds_project (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(100) DEFAULT NULL,
code bigint(20) NOT NULL,
description varchar(200) DEFAULT NULL,
user_id int(11) DEFAULT NULL,
flag tinyint(4) DEFAULT '1',
create_time datetime NOT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_project
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_queue
-- ----------------------------
DROP TABLE IF EXISTS t_ds_queue;
CREATE TABLE t_ds_queue (
id int(11) NOT NULL AUTO_INCREMENT,
queue_name varchar(64) DEFAULT NULL,
queue varchar(64) DEFAULT NULL,
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_queue
-- ----------------------------
INSERT INTO t_ds_queue VALUES ('1', 'default', 'default', null, null);
-- ----------------------------
-- Table structure for t_ds_relation_datasource_user
-- ----------------------------
DROP TABLE IF EXISTS t_ds_relation_datasource_user;
CREATE TABLE t_ds_relation_datasource_user (
id int(11) NOT NULL AUTO_INCREMENT,
user_id int(11) NOT NULL,
datasource_id int(11) DEFAULT NULL,
perm int(11) DEFAULT '1',
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_relation_datasource_user
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_relation_process_instance
-- ----------------------------
DROP TABLE IF EXISTS t_ds_relation_process_instance;
CREATE TABLE t_ds_relation_process_instance (
id int(11) NOT NULL AUTO_INCREMENT,
parent_process_instance_id int(11) DEFAULT NULL,
parent_task_instance_id int(11) DEFAULT NULL,
process_instance_id int(11) DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_relation_process_instance
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_relation_project_user
-- ----------------------------
DROP TABLE IF EXISTS t_ds_relation_project_user;
CREATE TABLE t_ds_relation_project_user (
id int(11) NOT NULL AUTO_INCREMENT,
user_id int(11) NOT NULL,
project_id int(11) DEFAULT NULL,
perm int(11) DEFAULT '1',
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_relation_project_user
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_relation_resources_user
-- ----------------------------
DROP TABLE IF EXISTS t_ds_relation_resources_user;
CREATE TABLE t_ds_relation_resources_user (
id int(11) NOT NULL AUTO_INCREMENT,
user_id int(11) NOT NULL,
resources_id int(11) DEFAULT NULL,
perm int(11) DEFAULT '1',
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_relation_resources_user
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_relation_udfs_user
-- ----------------------------
DROP TABLE IF EXISTS t_ds_relation_udfs_user;
CREATE TABLE t_ds_relation_udfs_user (
id int(11) NOT NULL AUTO_INCREMENT,
user_id int(11) NOT NULL,
udf_id int(11) DEFAULT NULL,
perm int(11) DEFAULT '1',
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Table structure for t_ds_resources
-- ----------------------------
DROP TABLE IF EXISTS t_ds_resources;
CREATE TABLE t_ds_resources (
id int(11) NOT NULL AUTO_INCREMENT,
alias varchar(64) DEFAULT NULL,
file_name varchar(64) DEFAULT NULL,
description varchar(255) DEFAULT NULL,
user_id int(11) DEFAULT NULL,
type tinyint(4) DEFAULT NULL,
size bigint(20) DEFAULT NULL,
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
pid int(11) DEFAULT NULL,
full_name varchar(64) DEFAULT NULL,
is_directory tinyint(4) DEFAULT NULL,
PRIMARY KEY (id),
UNIQUE KEY t_ds_resources_un (full_name,type)
) ;
-- ----------------------------
-- Records of t_ds_resources
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_schedules
-- ----------------------------
DROP TABLE IF EXISTS t_ds_schedules;
CREATE TABLE t_ds_schedules (
id int(11) NOT NULL AUTO_INCREMENT,
process_definition_id int(11) NOT NULL,
start_time datetime NOT NULL,
end_time datetime NOT NULL,
timezone_id varchar(40) DEFAULT NULL,
crontab varchar(255) NOT NULL,
failure_strategy tinyint(4) NOT NULL,
user_id int(11) NOT NULL,
release_state tinyint(4) NOT NULL,
warning_type tinyint(4) NOT NULL,
warning_group_id int(11) DEFAULT NULL,
process_instance_priority int(11) DEFAULT NULL,
worker_group varchar(64) DEFAULT '',
create_time datetime NOT NULL,
update_time datetime NOT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_schedules
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_session
-- ----------------------------
DROP TABLE IF EXISTS t_ds_session;
CREATE TABLE t_ds_session (
id varchar(64) NOT NULL,
user_id int(11) DEFAULT NULL,
ip varchar(45) DEFAULT NULL,
last_login_time datetime DEFAULT NULL,
PRIMARY KEY (id)
);
-- ----------------------------
-- Records of t_ds_session
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_task_instance
-- ----------------------------
DROP TABLE IF EXISTS t_ds_task_instance;
CREATE TABLE t_ds_task_instance (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(255) DEFAULT NULL,
task_type varchar(50) NOT NULL,
task_code bigint(20) NOT NULL,
task_definition_version int(11) DEFAULT NULL,
process_instance_id int(11) DEFAULT NULL,
state tinyint(4) DEFAULT NULL,
submit_time datetime DEFAULT NULL,
start_time datetime DEFAULT NULL,
end_time datetime DEFAULT NULL,
host varchar(135) DEFAULT NULL,
execute_path varchar(200) DEFAULT NULL,
log_path varchar(200) DEFAULT NULL,
alert_flag tinyint(4) DEFAULT NULL,
retry_times int(4) DEFAULT '0',
pid int(4) DEFAULT NULL,
app_link text,
task_params text,
flag tinyint(4) DEFAULT '1',
retry_interval int(4) DEFAULT NULL,
max_retry_times int(2) DEFAULT NULL,
task_instance_priority int(11) DEFAULT NULL,
worker_group varchar(64) DEFAULT NULL,
executor_id int(11) DEFAULT NULL,
first_submit_time datetime DEFAULT NULL,
delay_time int(4) DEFAULT '0',
var_pool longtext,
PRIMARY KEY (id),
FOREIGN KEY (process_instance_id) REFERENCES t_ds_process_instance (id) ON DELETE CASCADE
) ;
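-- Note: unlike most tables in this script, t_ds_task_instance declares a
-- real foreign key, so deleting a process instance cascades to its task
-- instances. Illustrative only, hypothetical id:
-- DELETE FROM t_ds_process_instance WHERE id = 42;
-- (every t_ds_task_instance row with process_instance_id = 42 goes too)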
-- ----------------------------
-- Records of t_ds_task_instance
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_tenant
-- ----------------------------
DROP TABLE IF EXISTS t_ds_tenant;
CREATE TABLE t_ds_tenant (
id int(11) NOT NULL AUTO_INCREMENT,
tenant_code varchar(64) DEFAULT NULL,
description varchar(255) DEFAULT NULL,
queue_id int(11) DEFAULT NULL,
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
PRIMARY KEY (id)
) ;
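-- Example (illustrative only): queue_id points at t_ds_queue.id by
-- convention (no foreign key is declared), so tenants resolve their
-- queue with a join:
-- SELECT t.tenant_code, q.queue_name
-- FROM t_ds_tenant t
-- LEFT JOIN t_ds_queue q ON q.id = t.queue_id;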
-- ----------------------------
-- Records of t_ds_tenant
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_udfs
-- ----------------------------
DROP TABLE IF EXISTS t_ds_udfs;
CREATE TABLE t_ds_udfs (
id int(11) NOT NULL AUTO_INCREMENT,
user_id int(11) NOT NULL,
func_name varchar(100) NOT NULL,
class_name varchar(255) NOT NULL,
type tinyint(4) NOT NULL,
arg_types varchar(255) DEFAULT NULL,
database varchar(255) DEFAULT NULL,
description varchar(255) DEFAULT NULL,
resource_id int(11) NOT NULL,
resource_name varchar(255) NOT NULL,
create_time datetime NOT NULL,
update_time datetime NOT NULL,
PRIMARY KEY (id)
) ;
-- ----------------------------
-- Records of t_ds_udfs
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_user
-- ----------------------------
DROP TABLE IF EXISTS t_ds_user;
CREATE TABLE t_ds_user (
id int(11) NOT NULL AUTO_INCREMENT,
user_name varchar(64) DEFAULT NULL,
user_password varchar(64) DEFAULT NULL,
user_type tinyint(4) DEFAULT NULL,
email varchar(64) DEFAULT NULL,
phone varchar(11) DEFAULT NULL,
tenant_id int(11) DEFAULT NULL,
create_time datetime DEFAULT NULL,
update_time datetime DEFAULT NULL,
queue varchar(64) DEFAULT NULL,
state int(1) DEFAULT 1,
PRIMARY KEY (id),
UNIQUE KEY user_name_unique (user_name)
) ;
-- ----------------------------
-- Records of t_ds_user
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_worker_group
-- ----------------------------
DROP TABLE IF EXISTS t_ds_worker_group;
CREATE TABLE t_ds_worker_group (
id bigint(11) NOT NULL AUTO_INCREMENT,
name varchar(255) NOT NULL,
addr_list text NULL DEFAULT NULL,
create_time datetime NULL DEFAULT NULL,
update_time datetime NULL DEFAULT NULL,
PRIMARY KEY (id),
UNIQUE KEY name_unique (name)
) ;
-- ----------------------------
-- Records of t_ds_worker_group
-- ----------------------------
-- ----------------------------
-- Table structure for t_ds_version
-- ----------------------------
DROP TABLE IF EXISTS t_ds_version;
CREATE TABLE t_ds_version (
id int(11) NOT NULL AUTO_INCREMENT,
version varchar(200) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY version_UNIQUE (version)
) ;
-- ----------------------------
-- Records of t_ds_version
-- ----------------------------
INSERT INTO t_ds_version VALUES ('1', '1.4.0');
-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------
INSERT INTO t_ds_alertgroup(alert_instance_ids, create_user_id, group_name, description, create_time, update_time)
VALUES ('1,2', 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39');
-- ----------------------------
-- Records of t_ds_user
-- ----------------------------
INSERT INTO t_ds_user
VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1);
-- ----------------------------
-- Table structure for t_ds_plugin_define
-- ----------------------------
DROP TABLE IF EXISTS t_ds_plugin_define;
CREATE TABLE t_ds_plugin_define (
id int NOT NULL AUTO_INCREMENT,
plugin_name varchar(100) NOT NULL,
plugin_type varchar(100) NOT NULL,
plugin_params text,
create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (id),
UNIQUE KEY t_ds_plugin_define_UN (plugin_name,plugin_type)
);
-- ----------------------------
-- Table structure for t_ds_alert_plugin_instance
-- ----------------------------
DROP TABLE IF EXISTS t_ds_alert_plugin_instance;
CREATE TABLE t_ds_alert_plugin_instance (
id int NOT NULL AUTO_INCREMENT,
plugin_define_id int NOT NULL,
plugin_instance_params text,
create_time timestamp NULL DEFAULT CURRENT_TIMESTAMP,
update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
instance_name varchar(200) DEFAULT NULL,
PRIMARY KEY (id)
);
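-- Example (illustrative only): plugin_define_id ties an alert instance to
-- its plugin definition (again by convention, with no foreign key):
-- SELECT i.instance_name, d.plugin_name, d.plugin_type
-- FROM t_ds_alert_plugin_instance i
-- JOIN t_ds_plugin_define d ON d.id = i.plugin_define_id;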

style/checkstyle-suppressions.xml

@@ -1,24 +0,0 @@
-<?xml version="1.0"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements. See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License. You may obtain a copy of the License at
-http://www.apache.org/licenses/LICENSE-2.0
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-<!DOCTYPE suppressions PUBLIC
-"-//Puppy Crawl//DTD Suppressions 1.0//EN"
-"http://www.puppycrawl.com/dtds/suppressions_1_0.dtd">
-<suppressions>
-</suppressions>

style/checkstyle.xml

@@ -29,9 +29,9 @@
     <property name="eachLine" value="true"/>
   </module>
-  <module name="SuppressionFilter">
-    <property name="file" value="${checkstyle.suppressions.file}" default="checkstyle-suppressions.xml"/>
-    <property name="optional" value="true"/>
+  <module name="LineLength">
+    <property name="max" value="200"/>
+    <property name="ignorePattern" value="^ *\* *[^ ]+$"/>
   </module>
   <module name="LineLength">

tools/dependencies/known-dependencies.txt

@@ -49,6 +49,7 @@ cron-utils-5.0.5.jar
 curator-client-4.3.0.jar
 curator-framework-4.3.0.jar
 curator-recipes-4.3.0.jar
+curator-test-2.12.0.jar
 curvesapi-1.06.jar
 datanucleus-api-jdo-4.2.1.jar
 datanucleus-core-4.1.6.jar
