* impv: Correct the DataX and Python execute script names
We use PYTHON_LAUNCHER to execute Python scripts and
DATAX_LAUNCHER as the DataX script name (a sketch of resolving these follows this list).
* Add pr number
* fix ut
* style
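A minimal sketch of how a task could resolve these launchers from the environment, assuming they are exported in `dolphinscheduler_env.sh`; the class, method names, and fallback defaults below are illustrative, not the actual task plugin code:

```java
import java.util.List;

public class LauncherCommand {

    // PYTHON_LAUNCHER points at the Python interpreter used to execute python task scripts.
    public static List<String> pythonCommand(String scriptPath) {
        String python = System.getenv().getOrDefault("PYTHON_LAUNCHER", "python3");
        return List.of(python, scriptPath);
    }

    // DATAX_LAUNCHER points at the DataX entry script (e.g. datax.py) that runs the job json.
    public static List<String> dataxCommand(String jobJsonPath) {
        String python = System.getenv().getOrDefault("PYTHON_LAUNCHER", "python3");
        String datax = System.getenv().getOrDefault("DATAX_LAUNCHER", "datax.py");
        return List.of(python, datax, jobJsonPath);
    }
}
```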
---------
Co-authored-by: xiangzihao <460888207@qq.com>
* add initial api tests for process definition controller
* Update
* Add test cases for the project page (a sketch follows this list)
* Remove unrelated stuff
* Remove useless imports
* Add project api test case to github workflow matrix
* try fix api test error
* add WorkerGroupApiTest to CI and fix log output
* fix log output
* skip remote shell UT
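A hedged sketch of what a project page API test could look like against a locally running standalone server; the endpoint path, port, and session cookie handling are assumptions for illustration, not the exact api-test harness:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProjectPageApiSmokeTest {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Query the first page of projects; the sessionId cookie would come from a prior login call.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:12345/dolphinscheduler/projects?pageNo=1&pageSize=10&searchVal="))
                .header("Cookie", "sessionId=<token-from-login>")
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // A passing test expects HTTP 200 and a success code in the JSON body.
        if (response.statusCode() != 200) {
            throw new AssertionError("Expected HTTP 200 but got " + response.statusCode());
        }
        System.out.println(response.body());
    }
}
```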
---------
Co-authored-by: SbloodyS <460888207@qq.com>
We found that some docs formatting issues can fail our CI
because we accidentally merged some incorrect PRs, such as
https://github.com/apache/dolphinscheduler/pull/12940
This patch also runs the Spotless check in the docs-only check
to avoid such regressions.
Currently, our Python API code is a module in the apache/dolphinscheduler codebase,
so each time users change Python API code, they need to run all required CI checks
for both dolphinscheduler and the Python API. But if a user only changes Python
code, the PR could be merged once the Python API CI passes, without depending on the other CI checks.
Besides, we release the Python API with the same version as dolphinscheduler, which
makes it easy for users to match the Python API version. But when the Python API does not
change any code and dolphinscheduler releases a bugfix version, the Python API still has to
release a new version just to match dolphinscheduler. This happened when we
released Python API 2.0.6 and 2.0.7: both are bugfix versions, and the
Python API did not change any code, so the PyPI packages are identical.
Separating the Python API also makes our code cleaner: we will have more
clearly separated code between dolphinscheduler and the new Python API repository,
and a separate issue tracker and changelog to inform users.
ref PR in other repository: apache/dolphinscheduler-sdk-python#1
see more details in the mail thread: https://lists.apache.org/thread/4z7l5l54c4d81smjlk1n8nq380p9f0oo
* Provide an AOP-based way as an optional approach to collect a yarn job's applicationId, and introduce a new module `dolphinscheduler-aop` to hold the AOP code.
* Add the user property `appId.collect` so users can decide how to collect the applicationId.
* Add new environment configuration for each type of yarn task to support AOP in `dolphinscheduler_env.sh`
* Update docs to describe how to use the AOP way.
* Update `LogUtils` to support fetching the applicationId in different ways based on the user property (a sketch follows this list).
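A minimal sketch of how the collection strategy could be switched on the `appId.collect` property; the property values and file layout below are assumptions for illustration, not the exact `LogUtils` implementation:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class AppIdCollector {

    private static final Pattern APPLICATION_ID_PATTERN = Pattern.compile("application_\\d+_\\d+");

    public static List<String> collectApplicationIds(Path taskLogPath, Path aopAppInfoPath) throws Exception {
        // With the AOP way, the aspect records each submitted applicationId into a dedicated appInfo file.
        if ("aop".equalsIgnoreCase(System.getProperty("appId.collect", "log"))) {
            return Files.readAllLines(aopAppInfoPath);
        }
        // Default way: scan the task log and extract yarn application ids with a regex.
        return Files.readAllLines(taskLogPath).stream()
                .map(APPLICATION_ID_PATTERN::matcher)
                .filter(Matcher::find)
                .map(Matcher::group)
                .distinct()
                .collect(Collectors.toList());
    }
}
```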
Co-authored-by: gabrywu <gabrywu@apache.com>