Commit message:

* add quick start document
* update test
* add backend deployment document
* add frontend deployment document
* add system manual
* supplementary translation of previously untranslated passages
* 1.0.1-release.md: add 1.0.1 release document
* 1.0.2-release.md: add 1.0.2 release document
* 1.0.3-release.md: add 1.0.3 release document
* 1.1.0-release.md: add 1.1.0 release document
* EasyScheduler-FAQ.md: add FAQ document
* Backend development documentation.md: add backend development documentation
* Upgrade documentation.md: add upgrade documentation
* Frontend development documentation.md: add frontend development documentation
Easy Scheduler Release 1.0.1
===
Easy Scheduler 1.0.1 is the second release in the 1.x series. The updates are as follows:

- 1. Outlook TLS email support
- 2. Resolved the servlet and protobuf jar conflict
- 3. Creating a tenant now also creates the corresponding Linux user
- 4. Fixed the re-run time being negative
- 5. Both standalone and cluster modes can be deployed with one click via install.sh
- 6. Added a queue management interface
- 7. Added create_time and update_time fields to escheduler.t_escheduler_queue
Easy Scheduler Release 1.0.2
===
Easy Scheduler 1.0.2 is the third release in the 1.x series. This version adds open scheduling interfaces, worker grouping (specifying the group of machines on which tasks run), task flow and service monitoring, and support for Oracle, ClickHouse, and more. Details:

New features:
===
- [[EasyScheduler-79](https://github.com/analysys/EasyScheduler/issues/79)] Open scheduling interfaces with token-based authentication, so scheduling can be operated through the API
- [[EasyScheduler-138](https://github.com/analysys/EasyScheduler/issues/138)] Tasks can specify the machine (group) on which they run
- [[EasyScheduler-139](https://github.com/analysys/EasyScheduler/issues/139)] Task process monitoring, plus operation status monitoring for Master, Worker, and Zookeeper
- [[EasyScheduler-140](https://github.com/analysys/EasyScheduler/issues/140)] Workflow definition: added process timeout alarm
- [[EasyScheduler-134](https://github.com/analysys/EasyScheduler/issues/134)] Task types now support Oracle, ClickHouse, SQL Server, and Impala
- [[EasyScheduler-136](https://github.com/analysys/EasyScheduler/issues/136)] SQL task nodes can independently select CC mail recipients
- [[EasyScheduler-141](https://github.com/analysys/EasyScheduler/issues/141)] User management: users can bind queues. The user queue takes precedence over the tenant queue; if the user queue is empty, the tenant queue is used

Enhanced:
===
- [[EasyScheduler-154](https://github.com/analysys/EasyScheduler/issues/154)] Tenant codes may be pure numbers or contain underscores

Repair:
===
- [[EasyScheduler-135](https://github.com/analysys/EasyScheduler/issues/135)] Python tasks can specify the Python version
- [[EasyScheduler-125](https://github.com/analysys/EasyScheduler/issues/125)] Phone-number validation in user accounts did not recognize China Unicom's new 166 prefix
- [[EasyScheduler-178](https://github.com/analysys/EasyScheduler/issues/178)] Fixed subtle spelling mistakes in ProcessDao
- [[EasyScheduler-129](https://github.com/analysys/EasyScheduler/issues/129)] Tenant codes containing underscores and other special characters failed validation

Thanks:
===
Last but not least, this release would not have been born without the contributions of the following partners:

Baoqi , chubbyjiang , coreychen , chgxtony, cmdares , datuzi , dingchao, fanguanqun , 风清扬, gaojun416 , googlechorme, hyperknob , hujiang75277381 , huanzui , kinssun, ivivi727 ,jimmy, jiangzhx , kevin5210 , lidongdai , lshmouse , lenboo, lyf198972 , lgcareer , lzy305 , moranrr , millionfor , mazhong8808, programlief, qiaozhanwei , roy110 , swxchappy , sherlock111 , samz406 , swxchappy, qq389401879 , lzy305, vkingnew, William-GuoWei , woniulinux, yyl861, zhangxin1988, yangjiajun2014, yangqinlong, yangjiajun2014, zhzhenqin, zhangluck, zhanghaicheng1, zhuyizhizhi

And many enthusiastic partners in the WeChat group! Thank you very much!
Easy Scheduler Release 1.0.3
===
Easy Scheduler 1.0.3 is the fourth release in the 1.x series.

Enhanced:
===
- [[EasyScheduler-482](https://github.com/analysys/EasyScheduler/issues/482)] SQL task mail subjects now support custom variables
- [[EasyScheduler-483](https://github.com/analysys/EasyScheduler/issues/483)] If a SQL task fails to send mail, the SQL task is marked as failed
- [[EasyScheduler-484](https://github.com/analysys/EasyScheduler/issues/484)] Revised the custom-variable replacement rules in SQL tasks; replacement now works across multiple single and double quotes
- [[EasyScheduler-485](https://github.com/analysys/EasyScheduler/issues/485)] When creating a resource file, verify whether the file already exists on HDFS

Repair:
===
- [[EasyScheduler-198](https://github.com/analysys/EasyScheduler/issues/198)] The process definition list is now sorted by scheduling status and update time
- [[EasyScheduler-419](https://github.com/analysys/EasyScheduler/issues/419)] Fixed online file creation returning success even though the HDFS file was not created
- [[EasyScheduler-481](https://github.com/analysys/EasyScheduler/issues/481)] Fixed the intermittent "job does not exist" problem
- [[EasyScheduler-425](https://github.com/analysys/EasyScheduler/issues/425)] When killing a task, its child processes are killed as well
- [[EasyScheduler-422](https://github.com/analysys/EasyScheduler/issues/422)] Fixed an issue where the update time and size were not updated when updating resource files
- [[EasyScheduler-431](https://github.com/analysys/EasyScheduler/issues/431)] Fixed an issue where deleting a tenant failed if HDFS was not started
- [[EasyScheduler-486](https://github.com/analysys/EasyScheduler/issues/486)] When the shell process exits, wait for the YARN state to become final before judging the task state

Thanks:
===
Last but not least, this release would not have been born without the contributions of the following partners:

Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879,
feloxx, coding-now, hymzcn, nysyxxg, chgxtony

And many enthusiastic partners in the WeChat group! Thank you very much!
Easy Scheduler Release 1.1.0
===
Easy Scheduler 1.1.0 is the first release in the 1.1.x series.

New features:
===
- [[EasyScheduler-391](https://github.com/analysys/EasyScheduler/issues/391)] Run a process as a specified tenant user
- [[EasyScheduler-288](https://github.com/analysys/EasyScheduler/issues/288)] Enterprise WeChat alerts (feature/qiye_weixin)
- [[EasyScheduler-189](https://github.com/analysys/EasyScheduler/issues/189)] Security support, such as Kerberos
- [[EasyScheduler-398](https://github.com/analysys/EasyScheduler/issues/398)] An administrator with a tenant (install.sh sets a default tenant) can create resources, projects, and data sources (limited to one administrator)
- [[EasyScheduler-293](https://github.com/analysys/EasyScheduler/issues/293)] Parameters selected when running a process can now be viewed and are saved
- [[EasyScheduler-401](https://github.com/analysys/EasyScheduler/issues/401)] Schedules could too easily fire every second; after a schedule is created, the page now shows its next trigger times
- [[EasyScheduler-493](https://github.com/analysys/EasyScheduler/pull/493)] Added Kerberos authentication for data sources, FAQ updates, and S3 resource upload

Enhanced:
===
- [[EasyScheduler-227](https://github.com/analysys/EasyScheduler/issues/227)] Upgraded Spring Boot to 2.1.x and Spring to 5.x
- [[EasyScheduler-434](https://github.com/analysys/EasyScheduler/issues/434)] Fixed the worker node count being inconsistent between ZooKeeper and MySQL
- [[EasyScheduler-435](https://github.com/analysys/EasyScheduler/issues/435)] Validation of the email address format
- [[EasyScheduler-441](https://github.com/analysys/EasyScheduler/issues/441)] Running nodes are excluded from completed-node detection
- [[EasyScheduler-400](https://github.com/analysys/EasyScheduler/issues/400)] Home page: queue statistics were inconsistent and command statistics had no data
- [[EasyScheduler-395](https://github.com/analysys/EasyScheduler/issues/395)] For fault-tolerant recovery, the process status must not be "running"
- [[EasyScheduler-529](https://github.com/analysys/EasyScheduler/issues/529)] Optimized polling tasks from ZooKeeper
- [[EasyScheduler-242](https://github.com/analysys/EasyScheduler/issues/242)] Fixed a performance problem when worker-server nodes fetch tasks
- [[EasyScheduler-352](https://github.com/analysys/EasyScheduler/issues/352)] Fixed a queue consumption problem with worker grouping
- [[EasyScheduler-461](https://github.com/analysys/EasyScheduler/issues/461)] Account and password information is encrypted when viewing data source parameters
- [[EasyScheduler-396](https://github.com/analysys/EasyScheduler/issues/396)] Dockerfile optimization, and linked the Dockerfile with GitHub for automated image builds
- [[EasyScheduler-389](https://github.com/analysys/EasyScheduler/issues/389)] Fixed the service monitor not detecting master/worker changes
- [[EasyScheduler-511](https://github.com/analysys/EasyScheduler/issues/511)] Support recovering processes from stopped/killed nodes
- [[EasyScheduler-399](https://github.com/analysys/EasyScheduler/issues/399)] HadoopUtils acts as the specified user instead of the deployment user

Repair:
===
- [[EasyScheduler-394](https://github.com/analysys/EasyScheduler/issues/394)] When master and worker are deployed on the same machine and their services are restarted, previously scheduled tasks could no longer be scheduled
- [[EasyScheduler-469](https://github.com/analysys/EasyScheduler/issues/469)] Fixed naming errors on the monitor page
- [[EasyScheduler-392](https://github.com/analysys/EasyScheduler/issues/392)] Fixed the email regex check
- [[EasyScheduler-405](https://github.com/analysys/EasyScheduler/issues/405)] On the schedule add/edit page, the start time and end time can no longer be identical
- [[EasyScheduler-517](https://github.com/analysys/EasyScheduler/issues/517)] Fixed the time parameter for complement (backfill) runs of sub-workflows
- [[EasyScheduler-532](https://github.com/analysys/EasyScheduler/issues/532)] Fixed Python nodes not executing
- [[EasyScheduler-543](https://github.com/analysys/EasyScheduler/issues/543)] Improved the safety of data source connection parameters
- [[EasyScheduler-569](https://github.com/analysys/EasyScheduler/issues/569)] Fixed scheduled tasks not truly stopping
- [[EasyScheduler-463](https://github.com/analysys/EasyScheduler/issues/463)] Fixed email validation rejecting addresses with uncommon suffixes

Thanks:
===
Last but not least, this release would not have been born without the contributions of the following partners:

Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, chgxtony, Stanfan, lfyee, thisnew, hujiang75277381, sunnyingit, lgbo-ustc, ivivi, lzy305, JackIllkid, telltime, lipengbo2018, wuchunfu, telltime

And many enthusiastic partners in the WeChat group! Thank you very much!
# Backend Deployment Document

There are two deployment modes for the backend:

- 1. Automated deployment
- 2. Compiling the source code and then deploying

## 1、Preparations

Download the latest version of the installation package from [gitee download](https://gitee.com/easyscheduler/EasyScheduler/attach_files/): escheduler-backend-x.x.x.tar.gz (the back end, referred to as escheduler-backend) and escheduler-ui-x.x.x.tar.gz (the front end, referred to as escheduler-ui).

#### Preparations 1: Installation of basic software (install the required items yourself)

* [Mysql](http://geek.analysys.cn/topic/124) (5.5+) : Mandatory
* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Mandatory
* [ZooKeeper](https://www.jianshu.com/p/de90172ea680) (3.4.6+) : Mandatory
* [Hadoop](https://blog.csdn.net/Evankaka/article/details/51612437) (2.6+) : Optional; required for the resource upload function and MapReduce task submission (uploaded resource files are currently stored on HDFS)
* [Hive](https://staroon.pro/2017/12/09/HiveInstall/) (1.2.1) : Optional; required for Hive task submission
* Spark (1.x, 2.x) : Optional; required for Spark task submission
* PostgreSQL (8.2.15+) : Optional; required for PostgreSQL stored procedure tasks

```
Note: Easy Scheduler itself does not rely on Hadoop, Hive, Spark, or PostgreSQL; it only calls their clients to run the corresponding tasks.
```
#### Preparations 2: Create deployment users

- Create a deployment user on every machine that the scheduler will be deployed to. Because the worker service executes jobs via `sudo -u {linux-user}`, the deployment user needs passwordless sudo privileges.

```
vi /etc/sudoers

# For example, if the deployment user is the escheduler account
escheduler  ALL=(ALL)  NOPASSWD: ALL

# You also need to comment out the Defaults requiretty line
#Defaults    requiretty
```
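For reference, creating the deployment user before editing sudoers usually looks like the following (standard Linux administration commands; the user name is just an example):

```
# create the escheduler deployment user and set its password
useradd escheduler
passwd escheduler

# then grant it passwordless sudo as shown above
visudo
```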
#### Preparations 3: SSH passwordless configuration

Configure passwordless SSH login from the deployment machine to all other installation machines. If you install EasyScheduler on the deployment machine itself, you also need passwordless login to localhost.

- [Connect the host and other machines via SSH](http://geek.analysys.cn/topic/113)
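A typical way to set this up with standard OpenSSH commands (host addresses are placeholders):

```
# on the deployment machine, generate a key pair if one does not exist
ssh-keygen -t rsa

# copy the public key to every installation machine, including localhost
ssh-copy-id escheduler@192.168.xx.xx

# verify that login no longer prompts for a password
ssh escheduler@192.168.xx.xx
```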
#### Preparations 4: Database initialization

* Create databases and accounts

Enter the MySQL command line with the following command:

> mysql -h {host} -u {user} -p{password}

Then execute the following statements to create the database and account:

```sql
CREATE DATABASE escheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
flush privileges;
```

* For versions 1.0.0 and 1.0.1, create tables and import basic data

Scripts: escheduler-backend/sql/escheduler.sql and quartz.sql

```
mysql -h {host} -u {user} -p{password} -D {db} < escheduler.sql

mysql -h {host} -u {user} -p{password} -D {db} < quartz.sql
```

* For version 1.0.2 and later (including 1.0.2), create tables and import basic data

Modify the following properties in conf/dao/data_source.properties

```
spring.datasource.url
spring.datasource.username
spring.datasource.password
```

Then execute the script that creates the tables and imports the basic data

```
sh ./script/create_escheduler.sh
```
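To confirm that the import succeeded, you can list the tables with the standard mysql client (using the database created above):

```
mysql -h {host} -u {user} -p{password} -D escheduler -e "show tables;"
```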
#### Preparations 5: Modify the deployment directory permissions and operation parameters

Let's first get a general idea of the role of the files (folders) in the escheduler-backend directory after decompression:

```
bin : basic service startup scripts
conf : project configuration files
lib : the jar packages the project depends on, including the individual module jars and third-party jars
script : cluster start/stop and service-monitor start/stop scripts
sql : the SQL files the project depends on
install.sh : one-click deployment script
```

- Modify permissions (change deployUser to the actual deployment user) so that the deployment user has operational privileges on the escheduler-backend directory

`sudo chown -R deployUser:deployUser escheduler-backend`

- Modify the `.escheduler_env.sh` environment file in the conf/env/ directory (see the sketch after this list)

- Modify the deployment parameters (according to your servers and business needs):

  - Modify the parameters in **install.sh**, replacing them with the values your business requires
  - MonitorServerState, a switch variable added in version 1.0.3, controls whether to start the self-restart script (which monitors master and worker status and restarts them automatically if they go offline). The default value "false" means the self-restart script is not started; change it to "true" to enable it.
  - The hdfsStartupSate switch variable controls whether HDFS is used.
    The default value "false" means HDFS is not used.
    If you need HDFS, change it to "true"; you must also create the HDFS root path yourself, i.e. hdfsPath in install.sh.

- If you use HDFS-related functions, you need to copy **hdfs-site.xml** and **core-site.xml** to the conf directory
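For illustration, `.escheduler_env.sh` typically exports the client paths that tasks rely on. The paths below are assumptions; adjust them to where the software is actually installed on your machines:

```
export JAVA_HOME=/usr/java/jdk1.8.0_181
export HADOOP_HOME=/opt/hadoop
export SPARK_HOME=/opt/spark
export PYTHON_HOME=/usr/bin/python
export HIVE_HOME=/opt/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$HIVE_HOME/bin:$PATH
```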
## 2、Deployment

Automated deployment is recommended; experienced users can also deploy from source.

### 2.1 Automated Deployment

- Install the ZooKeeper client library kazoo

`pip install kazoo`
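Optionally, you can sanity-check that kazoo can reach your ZooKeeper before deploying (a sketch; the address is a placeholder):

```python
from kazoo.client import KazooClient

# connect to your ZooKeeper ensemble (placeholder address)
zk = KazooClient(hosts='192.168.xx.xx:2181')
zk.start()                    # raises an exception if ZooKeeper is unreachable
print(zk.get_children('/'))   # list the root znodes
zk.stop()
```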
- Switch to the deployment user and deploy with one click

`sh install.sh`

- Use the jps command to see whether the services have started (jps ships with the Java JDK)

```
MasterServer         ----- Master service
WorkerServer         ----- Worker service
LoggerServer         ----- Logger service
ApiApplicationServer ----- API service
AlertServer          ----- Alert service
```

If these five services are all running, the automated deployment succeeded.

After successful deployment, the logs are stored in the logs folder:

```
logs/
├── escheduler-alert-server.log
├── escheduler-master-server.log
├── escheduler-worker-server.log
├── escheduler-api-server.log
└── escheduler-logger-server.log
```
### 2.2 Compile source code to deploy

After downloading the release version of the source package, unzip it and enter its root directory

* Execute the compilation command:

```
mvn -U clean package assembly:assembly -Dmaven.test.skip=true
```

* View the directory

After a normal compilation, target/escheduler-{version}/ is generated in the current directory

### 2.3 Starting and stopping the system's services (for what each service does, please refer to System Architecture Design)

* Stop all services in the cluster with one click

` sh ./bin/stop_all.sh`

* Start all services in the cluster with one click

` sh ./bin/start_all.sh`

* Start and stop the Master

```
sh ./bin/escheduler-daemon.sh start master-server
sh ./bin/escheduler-daemon.sh stop master-server
```

* Start and stop the Worker

```
sh ./bin/escheduler-daemon.sh start worker-server
sh ./bin/escheduler-daemon.sh stop worker-server
```

* Start and stop the Api

```
sh ./bin/escheduler-daemon.sh start api-server
sh ./bin/escheduler-daemon.sh stop api-server
```

* Start and stop the Logger

```
sh ./bin/escheduler-daemon.sh start logger-server
sh ./bin/escheduler-daemon.sh stop logger-server
```

* Start and stop the Alert

```
sh ./bin/escheduler-daemon.sh start alert-server
sh ./bin/escheduler-daemon.sh stop alert-server
```

## 3、Database Upgrade

Database upgrade is a function added in version 1.0.2. The database can be upgraded automatically by executing the following command

```
sh ./script/upgrade_escheduler.sh
```
# Backend development documentation

## Environmental requirements

* [Mysql](http://geek.analysys.cn/topic/124) (5.5+) : Must be installed
* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Must be installed
* [ZooKeeper](https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper) (3.4.6+) : Must be installed
* [Maven](http://maven.apache.org/download.cgi) (3.3+) : Must be installed

Because the escheduler-rpc module in EasyScheduler uses Grpc, you need to use Maven to compile the generated classes.

For those not familiar with Maven, please refer to: [maven in five minutes](http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html)

Maven installation: http://maven.apache.org/install.html

## Project compilation

After importing the EasyScheduler source code into a development tool such as IDEA, first convert it to a Maven project (right click and select "Add Framework Support")

* Execute the compile command:

```
mvn -U clean package assembly:assembly -Dmaven.test.skip=true
```
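If you only need to regenerate the Grpc classes of escheduler-rpc (for example after changing a .proto file), a module-scoped build may be enough — a sketch using standard Maven reactor flags, assuming the default module layout:

```
mvn -U clean compile -pl escheduler-rpc -am -Dmaven.test.skip=true
```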
* View the directory

After a normal compilation, target/escheduler-{version}/ is generated in the current directory.

```
bin
conf
lib
script
sql
install.sh
```

- Description

```
bin : basic service startup scripts
conf : project configuration files
lib : the jar packages the project depends on, including the individual module jars and third-party jars
script : cluster start/stop and service-monitor start/stop scripts
sql : the SQL files the project depends on
install.sh : one-click deployment script
```
# Front End Deployment Document

The front end has three deployment modes: automated deployment, manual deployment, and compiled source deployment.

## 1、Preparations

#### Download the installation package

Please download the latest version of the installation package from [gitee](https://gitee.com/easyscheduler/EasyScheduler/attach_files/)

After downloading escheduler-ui-x.x.x.tar.gz, decompress it with `tar -zxvf escheduler-ui-x.x.x.tar.gz ./` and enter the `escheduler-ui` directory

## 2、Deployment

Of the following two ways, automated deployment is recommended

### 2.1 Automated Deployment

Edit the installation file `vi install-escheduler-ui.sh` in the `escheduler-ui` directory

Change the front-end access port and the back-end proxy interface address

```
# Configure the front-end access port
esc_proxy="8888"

# Configure the proxied back-end interface
esc_proxy_port="http://192.168.xx.xx:12345"
```

> Front-end automated deployment relies on the Linux `yum` tool; please install and update `yum` before deploying

In this directory, execute `./install-escheduler-ui.sh`
### 2.2 Manual Deployment

Install the epel source: `yum install epel-release -y`

Install Nginx: `yum install nginx -y`

> #### Nginx configuration file address

```
/etc/nginx/conf.d/default.conf
```

> #### Configuration information (modify for your environment)

```
server {
    listen       8888; # access port
    server_name  localhost;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        root   /xx/dist; # the dist directory decompressed from the front-end package above (modify for your environment)
        index  index.html index.htm;
    }
    location /escheduler {
        proxy_pass http://192.168.xx.xx:12345; # back-end interface address (modify for your environment)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header x_real_ipP $remote_addr;
        proxy_set_header remote_addr $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_connect_timeout 4s;
        proxy_read_timeout 30s;
        proxy_send_timeout 12s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```

> #### Restart the Nginx service

```
systemctl restart nginx
```
#### Nginx commands

- enable: `systemctl enable nginx`

- restart: `systemctl restart nginx`

- status: `systemctl status nginx`

## Front-end Frequently Asked Questions

#### 1. Upload file size limit

Edit the configuration file `vi /etc/nginx/nginx.conf`

```
# change the upload size
client_max_body_size 1024m;
```
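After editing nginx.conf, validate the configuration and reload it (standard nginx administration commands):

```
nginx -t                  # check the edited configuration for syntax errors
systemctl reload nginx    # apply the change without dropping connections
```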
# Front-end development documentation

### Technical selection

```
Vue      mvvm framework
Es6      ECMAScript 6.0
Ans-ui   Analysys-ui
D3       visualization chart library
Jsplumb  connection plugin library
Lodash   high-performance JavaScript utility library
```
### Development environment

- #### Node installation

Node package download (note version 8.9.4): `https://nodejs.org/download/release/v8.9.4/`

- #### Front-end project build

From the command line, `cd` into the `escheduler-ui` project directory and execute `npm install` to pull the project's dependency packages.

> If `npm install` is very slow

> you can install cnpm from the Taobao mirror: `npm install -g cnpm --registry=https://registry.npm.taobao.org`

> then run `cnpm install` instead

- Create a new `.env` file to configure the interface that interacts with the backend

Create a new `.env` file in the `escheduler-ui` directory, and add the IP address and port of the backend service to it; it is used to interact with the backend. The contents of the `.env` file are as follows:

```
# Proxy interface address (modify for your environment)
API_BASE = http://192.168.xx.xx:12345

# If you need to access the project by IP, you can remove the "#" (example)
#DEV_HOST = 192.168.xx.xx
```

> ##### !!! Special attention here: if the project reports a "node-sass error" while pulling the dependency packages, execute the following command and then run the install again.

```
npm install node-sass --unsafe-perm  # install the node-sass dependency separately
```
- #### Development environment operation

- `npm start` runs the project's development environment (after startup, the address is http://localhost:8888/#/)

#### Front-end project release

- `npm run build` packages the project (after packaging, a folder called dist is created in the root directory for publishing to Nginx)

Run the `npm run build` command to generate the package folder (dist)

Copy it to the corresponding directory on the server (the directory where the front-end service's static pages are stored)

Visit the address `http://localhost:8888/#/`

#### Starting with node and a daemon under Linux

Install pm2: `npm install -g pm2`

Execute `pm2 start npm -- run dev` in the project's `escheduler-ui` root directory to start the project

#### Commands

- start: `pm2 start npm -- run dev`

- stop: `pm2 stop npm`

- delete: `pm2 delete npm`

- status: `pm2 list`

```
[root@localhost escheduler-ui]# pm2 start npm -- run dev
[PM2] Applying action restartProcessId on app [npm](ids: 0)
[PM2] [npm](0) ✓
[PM2] Process successfully started
┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem      │ user │ watching │
├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
│ npm      │ 0  │ N/A     │ fork │ 6168 │ online │ 31      │ 0s     │ 0%  │ 5.6 MB   │ root │ disabled │
└──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
 Use `pm2 show <id|name>` to get more details about an app
```
### Project directory structure

`build` webpack configurations for packaging and for the development environment

`node_modules` development-environment node dependency packages

`src` files required by the project

`src => combo` third-party resource localization for the project; run `npm run combo` — see `build/combo.js` for details

`src => font` font icon library; icons can be added at https://www.iconfont.cn — note: the font library is maintained through our own secondary development, so re-import your own library in `src/sass/common/_font.scss`

`src => images` public image storage

`src => js` js/vue

`src => lib` the company's internal components (the company component library can be deleted after open-sourcing)

`src => sass` sass files; one page corresponds to one sass file

`src => view` page files; one page corresponds to one html file

```
> Projects are developed as a vue single-page application (SPA)
- All page entry files are `src/js/conf/${corresponding page filename => home}/index.js` entry files
- The corresponding sass file is `src/sass/conf/${corresponding page filename => home}/index.scss`
- The corresponding html file is `src/view/${corresponding page filename => home}/index.html`
```

Public modules and util: `src/js/module`

`components` => internal common project components

`download` => download component

`echarts` => chart component

`filter` => filters and vue pipes

`i18n` => internationalization

`io` => io request encapsulation based on axios

`mixin` => vue mixin public part, used for the disabled-state operation

`permissions` => permission operations

`util` => tools
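As an illustration of the kind of wrapper the `io` module provides, a minimal axios-based helper might look like this (a sketch, not the project's actual code; names are assumptions):

```
// sketch of an axios-based io helper
import axios from 'axios'

const instance = axios.create({
  baseURL: process.env.API_BASE, // the backend address configured in .env
  timeout: 30000
})

export default {
  get (url, params) {
    return instance.get(url, { params }).then(res => res.data)
  },
  post (url, data) {
    return instance.post(url, data).then(res => res.data)
  }
}
```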
### System function modules

Home => `http://localhost:8888/#/home`

Project Management => `http://localhost:8888/#/projects/list`
```
| Project Home
| Workflow
  - Workflow definition
  - Workflow instance
  - Task instance
```

Resource Management => `http://localhost:8888/#/resource/file`
```
| File Management
| UDF Management
  - Resource Management
  - Function Management
```

Data Source Management => `http://localhost:8888/#/datasource/list`

Security Center => `http://localhost:8888/#/security/tenant`
```
| Tenant Management
| User Management
| Alarm Group Management
  - master
  - worker
```

User Center => `http://localhost:8888/#/user/account`
## Routing and state management

The project `src/js/conf/home` is divided into

`pages` => routed page directory
```
The page files corresponding to the routing addresses
```

`router` => route management
```
vue-router; the entry file index.js in each page is registered here. Specific operations: https://router.vuejs.org/zh/
```

`store` => state management
```
The page corresponding to each route has its own state management file, divided into:

actions => mapActions => details: https://vuex.vuejs.org/zh/guide/actions.html

getters => mapGetters => details: https://vuex.vuejs.org/zh/guide/getters.html

index => entry

mutations => mapMutations => details: https://vuex.vuejs.org/zh/guide/mutations.html

state => mapState => details: https://vuex.vuejs.org/zh/guide/state.html

Specific operations: https://vuex.vuejs.org/zh/
```
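Put together, the entry file of one of these per-page store modules has roughly the following shape (a minimal sketch, not the project's actual code):

```
// src/js/conf/home/store/${page}/index.js — illustrative module shape
import actions from './actions'
import getters from './getters'
import mutations from './mutations'
import state from './state'

export default {
  actions,
  getters,
  mutations,
  state
}
```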
## Specification

## Vue specification

##### 1. Component names

Components are named with multiple words connected with hyphens (-), to avoid conflicts with HTML tags and to give a clearer structure.

```
// positive example
export default {
  name: 'page-article-item'
}
```

##### 2. Component files

The project's internal common components live in `src/js/module/components`; the folder is named the same as the file. Subcomponents and util tools split out inside a common component are placed in the component's internal `_source` folder.

```
└── components
    ├── header
        ├── header.vue
        └── _source
            └── nav.vue
            └── util.js
    ├── conditions
        ├── conditions.vue
        └── _source
            └── search.vue
            └── util.js
```
##### 3. Prop

When defining a prop, always name it in camelCase and use the hyphenated form (-) when assigning values in the parent component. This follows the characteristics of each language: HTML tag attributes are case-insensitive, where hyphens are friendlier, while in JavaScript camelCase is more natural.

```
// Vue
props: {
  articleStatus: Boolean
}
// HTML
<article-item :article-status="true"></article-item>
```

The definition of a prop should specify its type, default, and validation as much as possible.

Example:

```
props: {
  attrM: Number,
  attrA: {
    type: String,
    required: true
  },
  attrZ: {
    type: Object,
    // The default value of an array/object should be returned by a factory function
    default: function () {
      return {
        msg: 'achieve you and me'
      }
    }
  },
  attrE: {
    type: String,
    validator: function (v) {
      return !(['success', 'fail'].indexOf(v) === -1)
    }
  }
}
```
##### 4. v-for

When traversing with v-for, always provide a key value so that DOM updates render more efficiently.

```
<ul>
  <li v-for="item in list" :key="item.id">
    {{ item.title }}
  </li>
</ul>
```

Avoid using v-for on the same element as v-if (`for example: <li>`), because v-for has a higher priority than v-if. To avoid invalid calculation and rendering, try to put the v-if on the container's parent element instead.

```
<ul v-if="showList">
  <li v-for="item in list" :key="item.id">
    {{ item.title }}
  </li>
</ul>
```
##### 5. v-if / v-else-if / v-else

If the elements controlled by the same set of v-if logic are logically identical, Vue reuses the same parts for more efficient element switching, `for example: value`. To avoid unreasonable effects from that reuse, add a key to identical elements as identification.

```
<div v-if="hasData" key="mazey-data">
  <span>{{ mazeyData }}</span>
</div>
<div v-else key="mazey-none">
  <span>no data</span>
</div>
```

##### 6. Directive abbreviations

For a unified codebase, directive abbreviations are always used. Writing `v-bind` and `v-on` in full is not wrong; this is only a unifying convention.

```
<input :value="mazeyUser" @click="verifyUser">
```
##### 7. Top-level element order of single-file components

Styles are packaged into one file; all styles defined in individual vue files also take effect on same-named classes in other files, so every component has a top-level class name before it is created.

Note: the sass plugin has been added to the project, so sass syntax can be written directly in a single vue file.

For uniformity and ease of reading, place the blocks in the order `<template>`, `<script>`, `<style>`.

```
<template>
  <div class="test-model">
    test
  </div>
</template>
<script>
  export default {
    name: "test",
    data () {
      return {}
    },
    props: {},
    methods: {},
    watch: {},
    beforeCreate () {
    },
    created () {
    },
    beforeMount () {
    },
    mounted () {
    },
    beforeUpdate () {
    },
    updated () {
    },
    beforeDestroy () {
    },
    destroyed () {
    },
    computed: {},
    components: {}
  }
</script>

<style lang="scss" rel="stylesheet/scss">
  .test-model {
  }
</style>
```
## JavaScript specification

##### 1. var / let / const

It is recommended to stop using var and use let / const instead, preferring const. Every variable must be declared before use, except functions defined with function, which can be placed anywhere.

##### 2. Quotes

```
const foo = 'after division'
const bar = `${foo}, front-end engineer`
```

##### 3. Functions

Anonymous functions uniformly use arrow functions. When there are multiple parameters/return values, prefer object destructuring assignment.

```
function getPersonInfo ({name, sex}) {
  // ...
  return {name, sex}
}
```

Function names are uniformly camelCase; names beginning with a capital letter are constructors, names beginning with a lowercase letter are ordinary functions, and the new operator should not be used to invoke ordinary functions.
##### 4. Objects

```
const foo = {a: 0, b: 1}
const bar = JSON.parse(JSON.stringify(foo))

const foo = {a: 0, b: 1}
const bar = {...foo, c: 2}

const foo = {a: 3}
Object.assign(foo, {b: 4})

const myMap = new Map([])
for (let [key, value] of myMap.entries()) {
  // ...
}
```

##### 5. Modules

Project modules are uniformly managed with import / export.

```
// lib.js
export default {}

// app.js
import app from './lib'
```

Imports are placed at the top of the file.

If the module has only one output value, use `export default`; otherwise don't.
## HTML / CSS

##### 1. Tags

Do not write the type attribute when referencing external CSS or JavaScript. In HTML5 the defaults are text/css and text/javascript, so there is no need to specify them.

```
<link rel="stylesheet" href="//www.test.com/css/test.css">
<script src="//www.test.com/js/test.js"></script>
```

##### 2. Naming

Class and ID names should be semantic: you can see what they do just by looking at the name; multiple words are connected with hyphens.

```
// positive example
.test-header {
  font-size: 20px;
}
```

##### 3. Attribute abbreviations

Use CSS attribute abbreviations as much as possible to improve the efficiency and readability of the code.

```
// counter example
border-width: 1px;
border-style: solid;
border-color: #ccc;

// positive example
border: 1px solid #ccc;
```

##### 4. Document type

The HTML5 standard should always be used.

```
<!DOCTYPE html>
```

##### 5. Notes

A block comment should be written at the top of a module file.

```
/**
* @module mazey/api
* @author Mazey <mazey@mazey.net>
* @description test.
* */
```
## Interface

##### All interfaces are returned as Promises

Note that a non-zero code indicates an error and is routed to catch

```
const test = () => {
  return new Promise((resolve, reject) => {
    resolve({
      a: 1
    })
  })
}

// call
test().then(res => {
  console.log(res)
  // {a: 1}
})
```

Normal return

```
{
  code: 0,
  data: {},
  msg: 'success'
}
```

Error return

```
{
  code: 10000,
  data: {},
  msg: 'failed'
}
```

##### Related interface paths

dag related interfaces: `src/js/conf/home/store/dag/actions.js`

Data Source Center related interfaces: `src/js/conf/home/store/datasource/actions.js`

Project Management related interfaces: `src/js/conf/home/store/projects/actions.js`

Resource Center related interfaces: `src/js/conf/home/store/resource/actions.js`

Security Center related interfaces: `src/js/conf/home/store/security/actions.js`

User Center related interfaces: `src/js/conf/home/store/user/actions.js`
## Extended development

##### 1. Adding a node

(1) First place the node's icon in the `src/js/conf/home/pages/dag/img` folder, named `toolbar_${the English name of the node type defined in the backend, e.g. SHELL}.png`

(2) Find the `tasksType` object in `src/js/conf/home/pages/dag/_source/config.js` and add the new type to it.

```
'DEPENDENT': {  // the node type's English name defined in the backend, used as the key
  desc: 'DEPENDENT',  // tooltip desc
  color: '#2FBFD8'  // the representative color, mainly used for the tree and gantt views
}
```

(3) Add a `${node type (lowercase)}.vue` file in `src/js/conf/home/pages/dag/_source/formModel/tasks`. The contents of the components related to the new node are written here. Every node component must have a `_verification ()` function; after verification succeeds, it emits the current component's data to the parent component.

```
/**
 * Verification
 */
_verification () {
  // datasource subcomponent verification
  if (!this.$refs.refDs._verifDatasource()) {
    return false
  }

  // method verification
  if (!this.method) {
    this.$message.warning(`${i18n.$t('Please enter method')}`)
    return false
  }

  // localParams subcomponent validation
  if (!this.$refs.refLocalParams._verifProp()) {
    return false
  }
  // store
  this.$emit('on-params', {
    type: this.type,
    datasource: this.datasource,
    method: this.method,
    localParams: this.localParams
  })
  return true
}
```

(4) Common components used inside node components live under `_source`, and `commcon.js` is used to configure public data.
##### 2. Adding a status type

(1) Find the `tasksState` object in `src/js/conf/home/pages/dag/_source/config.js` and add the new state to it.

```
'WAITTING_DEPEND': {  // the state type defined in the backend, used as the key on the front end
  id: 11,  // front-end id, used for sorting
  desc: `${i18n.$t('waiting for dependency')}`,  // tooltip desc
  color: '#5101be',  // the representative color, mainly used for the tree and gantt views
  icoUnicode: '',  // font icon
  isSpin: false  // whether to rotate (requires code judgment)
}
```

##### 3. Adding an action bar tool

(1) Find the `toolOper` object in `src/js/conf/home/pages/dag/_source/config.js` and add the new tool to it.

```
{
  code: 'pointer',  // tool identifier
  icon: '',  // tool icon
  disable: disable,  // disable
  desc: `${i18n.$t('Drag node and selected item')}`  // tooltip desc
}
```

(2) Tool classes are returned as constructors in `src/js/conf/home/pages/dag/_source/plugIn`

`downChart.js` => dag image download handling

`dragZoom.js` => mouse zoom effect handling

`jsPlumbHandle.js` => drag-and-drop line handling

`util.js` => tools belonging to `plugIn`

The operations are handled in the `toolbarEvent` event in `src/js/conf/home/pages/dag/_source/dag.js`.
##### 4. Adding a routing page

(1) First add a routing address in the route management `src/js/conf/home/router/index.js`

```
{
  path: '/test',  // routing address
  name: 'test',  // alias
  component: resolve => require(['../pages/test/index'], resolve),  // the route's component entry file
  meta: {
    title: `${i18n.$t('test')} - EasyScheduler`  // title display
  }
},
```

(2) Create a `test` folder in `src/js/conf/home/pages` and create an `index.vue` entry file in the folder.

This gives you direct access to `http://localhost:8888/#/test`
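A minimal `index.vue` entry file for this illustrative test page could look like the following (a sketch, following the single-file component order described above):

```
<template>
  <div class="test-model">test page</div>
</template>

<script>
  export default {
    name: 'test'
  }
</script>

<style lang="scss" rel="stylesheet/scss">
  .test-model {
  }
</style>
```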
##### 5. Adding preset mailboxes

Find `src/lib/localData/email.js`; these addresses are auto-completed in the start and schedule email address inputs.

```
export default ["test@analysys.com.cn", "test1@analysys.com.cn", "test3@analysys.com.cn"]
```

##### 6. Authority management and disabled-state handling

Permissions use the userType field (`"ADMIN_USER/GENERAL_USER"`) returned by the backend's `getUserInfo` interface to control whether page operation buttons are `disabled`.

Specific operations: `src/js/module/permissions/index.js`

Disabled handling: `src/js/module/mixin/disabledState.js`
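For illustration, a check of the kind the permissions module performs might look like this (a sketch under the assumption that `getUserInfo` returns an object with a `userType` field; not the project's actual code):

```
// illustrative permission helper
const isAdmin = (userInfo) => userInfo.userType === 'ADMIN_USER'

// e.g. bind a button's disabled state for non-admin users
// <button :disabled="!isAdmin(userInfo)">Delete</button>
```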
# Quick Start

* Log in as the administrator user
> Address: 192.168.xx.xx:8888  Username and password: admin/escheduler123

<p align="center">
   <img src="https://user-images.githubusercontent.com/48329107/61701549-ee738000-ad70-11e9-8d75-87ce04a0152f.png" width="60%" />
 </p>

* Create a queue

<p align="center">
   <img src="https://user-images.githubusercontent.com/48329107/61701943-896c5a00-ad71-11e9-99b8-a279762f1bc8.png" width="60%" />
 </p>

* Create a tenant
<p align="center">
   <img src="https://user-images.githubusercontent.com/48329107/61702051-bb7dbc00-ad71-11e9-86e1-1c328cafe916.png" width="60%" />
 </p>

* Create an ordinary user
<p align="center">
   <img src="https://user-images.githubusercontent.com/53217792/61704402-3517a900-ad76-11e9-865a-6325041d97e2.png" width="60%" />
 </p>

* Create an alarm group

<p align="center">
   <img src="https://user-images.githubusercontent.com/53217792/61704553-845dd980-ad76-11e9-85f1-05f33111409e.png" width="60%" />
 </p>

* Log in as the ordinary user
> Click the user name in the upper right corner, choose "Sign Out", and log back in as the ordinary user.

* Project Management -> Create Project -> Click the project name
<p align="center">
   <img src="https://user-images.githubusercontent.com/53217792/61704688-dd2d7200-ad76-11e9-82ee-0833b16bd88f.png" width="60%" />
 </p>

* Click Workflow Definition -> Create Workflow Definition -> Bring the process definition online

<p align="center">
   <img src="https://user-images.githubusercontent.com/53217792/61705638-c425c080-ad78-11e9-8619-6c21b61a24c9.png" width="60%" />
 </p>

* Run the process definition -> Click Workflow Instance -> Click the process instance name -> Double-click the task node -> View the task execution log

<p align="center">
   <img src="https://user-images.githubusercontent.com/53217792/61705356-34801200-ad78-11e9-8d60-9b7494231028.png" width="60%" />
 </p>
@ -0,0 +1,715 @@ |
|||||||
|
# System Use Manual |
||||||
|
|
||||||
|
|
||||||
|
## Quick Start |
||||||
|
|
||||||
|
> Refer to[ Quick Start ]( Quick-Start.md) |
||||||
|
|
||||||
|
## Operational Guidelines |
||||||
|
|
||||||
|
- Administrator accounts can only be managed in terms of authority, do not participate in specific business, can not create projects, and can not perform related operations on process definition. |
||||||
|
- The following operations can only be performed by using ordinary user login system. |
||||||
|
|
||||||
|
### Create a project |
||||||
|
|
||||||
|
- Click "Project - > Create Project", enter project name, description, and click "Submit" to create a new project. |
||||||
|
- Click on the project name to enter the project home page. |
||||||
|
<p align="center"> |
||||||
|
<img src="https://user-images.githubusercontent.com/53217792/61776719-2ee50380-ae2e-11e9-9d11-41de8907efb5.png" width="60%" /> |
||||||
|
</p> |
||||||
|
|
||||||
|
> Project Home Page contains task status statistics, process status statistics, process definition statistics, queue statistics, command statistics. |

- Task Status Statistics: the number of tasks to be run, failed, running, completed and succeeded within a given time frame.
- Process Status Statistics: the number of waiting, failed, running, completed and succeeded process instances within a specified time range.
- Process Definition Statistics: counts the process definitions created by the user and those granted to the user by the administrator.
- Queue Statistics: statistics of the worker execution queues, i.e. the number of tasks waiting to be executed and the number of tasks waiting to be killed.
- Command Status Statistics: statistics of the number of commands executed.

### Create a process definition

- Go to the project home page and click "Process definitions" to enter the process definition list page.
- Click "Create process" to create a new process definition.
- Drag the "SHELL" node onto the canvas to add a shell task.
- Fill in the Node Name, Description and Script fields.
- Select a "task priority": tasks with a higher priority are executed first in the execution queue; tasks with the same priority are executed in first-in-first-out order.
- Timeout alarm: fill in the "Overtime Time"; when the task execution time exceeds it, the task can raise an alarm and fail due to timeout.
- Fill in "Custom Parameters"; refer to [Custom Parameters](#用户自定义参数)

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61778402-42459e00-ae31-11e9-96c6-8fd7fed8fed2.png" width="60%" />
</p>

- Add execution ordering between nodes: click "line connection". As shown, task 1 and task 3 start in parallel; when task 1 finishes, task 2 and task 3 are executed simultaneously.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61778247-f98de500-ae30-11e9-8f11-cce0530c3ff2.png" width="60%" />
</p>

- Delete a dependency: click the arrow icon to switch to "drag nodes and select items" mode, select the connection line, and click the delete icon to remove the dependency between nodes.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61778800-052ddb80-ae32-11e9-8ac0-4f13466d3515.png" width="60%" />
</p>

- Click "Save", enter the name and description of the process definition, and set the global parameters.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61778891-3c03f180-ae32-11e9-812a-9d9f6c151301.png" width="60%" />
</p>

- For other types of nodes, refer to [task node types and parameter settings](#task node types and parameter settings)

### Execute a process definition

- **A process definition in the offline state can be edited but not run**, so bringing the workflow online is the first step.

> On the process definition list page, click the "online" icon to bring the process definition online.

> Before a process definition can be taken offline successfully, its timers must first be taken offline in Timing Management.

- Click "Run" to execute the process. Description of operation parameters: |
||||||
|
* Failure strategy:**When a task node fails to execute, other parallel task nodes need to execute the strategy**。”Continue "Representation: Other task nodes perform normally" and "End" Representation: Terminate all ongoing tasks and terminate the entire process. |
||||||
|
* Notification strategy:When the process is over, send process execution information notification mail according to the process status. |
||||||
|
* Process priority: The priority of process running is divided into five levels:the highest , the high , the medium , the low , and the lowest . High-level processes are executed first in the execution queue, and processes with the same priority are executed first in first out order. |
||||||
|
* Worker group This process can only be executed in a specified machine group. Default, by default, can be executed on any worker. |
||||||
|
* Notification group: When the process ends or fault tolerance occurs, process information is sent to all members of the notification group by mail. |
||||||
|
* Recipient: Enter the mailbox and press Enter key to save. When the process ends and fault tolerance occurs, an alert message is sent to the recipient list. |
||||||
|
* Cc: Enter the mailbox and press Enter key to save. When the process is over and fault-tolerant occurs, alarm messages are copied to the copier list. |
||||||
|
<p align="center"> |
||||||
|
<img src="https://user-images.githubusercontent.com/53217792/61779865-0829cb80-ae34-11e9-901f-00cb3bf80e36.png" width="60%" /> |
||||||
|
</p> |

* Complement: to run the workflow definition for a specified date range, select the complement time range (currently only continuous days are supported), for example the data from May 1 to May 10, as shown in the figure:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61780083-6a82cc00-ae34-11e9-9839-fda9153f693b.png" width="60%" />
</p>

> Complement execution mode includes serial execution and parallel execution. In serial mode, the complement is executed sequentially from May 1 to May 10; in parallel mode, the tasks from May 1 to May 10 are executed simultaneously.

### Timing Process Definition

- Create a timer: "Process Definition -> Timing".
- Choose the start-stop time. Within the start-stop range the timer works normally; beyond that range, no more timed workflow instances are generated.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61781565-28a75500-ae37-11e9-9ca5-85f211f341b2.png" width="60%" />
</p>

- Add a timer to be executed once a day at 5:00 a.m., as shown below:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61781968-d9adef80-ae37-11e9-9e90-3d9f0b3eb998.png" width="60%" />
</p>
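
> The timing dialog takes a crontab expression; a once-a-day 5:00 a.m. trigger would look like the following (an illustrative sketch, assuming the Quartz-style seconds-first syntax used by the scheduler):

```
0 0 5 * * ? *
```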

- Bring the timer online: **a newly created timer is offline; you need to click "Timing Management -> online" for it to take effect.**

### View process instances

> Click "Process Instances" to view the list of process instances.

> Click the process name to see the status of task execution.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61855837-6ff31b80-aef3-11e9-8464-2fb5773709df.png" width="60%" />
</p>

> Click a task node, then click "View Log" to view the task execution log.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783070-bdab4d80-ae39-11e9-9ada-355614fbb7f7.png" width="60%" />
</p>

> Click a task instance node and click **View History** to view the list of task instances run by the process instance.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783240-05ca7000-ae3a-11e9-8c10-591a7635834a.png" width="60%" />
</p>

> Operations on workflow instances:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783291-21357b00-ae3a-11e9-837c-fc3d85404410.png" width="60%" />
</p>

* Edit: a terminated process can be edited; when saving after editing, you can choose whether or not to update the process definition.
* Rerun: a terminated process can be re-executed.
* Recover failure: for a failed process, a recover-failure operation can be performed, starting from the failed node.
* Stop: stop a running process; the background first sends `kill` to the worker process, then performs a `kill -9` operation.
* Pause: a running process can be **paused**; its state becomes **waiting to be executed**, the tasks currently being executed are allowed to finish, and the next task to be executed is suspended.
* Recover pause: a **paused process** can be recovered and run directly from the paused node.
* Delete: delete the process instance and the task instances under it.
* Gantt chart: the vertical axis of the Gantt chart is the topological ordering of the task instances under a process instance, and the horizontal axis is their running time, as shown in the figure:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783596-aa4cb200-ae3a-11e9-9798-e795f80dae96.png" width="60%" />
</p>

### View task instances

> Click "Task Instance" to enter the task list page and query task execution.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783544-91dc9780-ae3a-11e9-9dca-dfd901f1fe83.png" width="60%" />
</p>

> Click "View Log" in the operation column to view the log of the task execution.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783441-60fc6280-ae3a-11e9-8631-963dcf78467b.png" width="60%" />
</p>

### Create a data source

> The Data Source Center supports MySQL, POSTGRESQL, HIVE and Spark data sources.

#### Create and edit a MySQL data source

- Click "Datasource -> Create Datasource" to create data sources of different types as required.
- Datasource: select MYSQL
- Datasource Name: name of the data source
- Description: description of the data source
- IP: IP address for connecting to MySQL
- Port: port for connecting to MySQL
- User name: user name for connecting to MySQL
- Password: password for connecting to MySQL
- Database name: name of the MySQL database to connect to
- Jdbc connection parameters: parameter settings for the MySQL connection, filled in as JSON

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783812-129b9380-ae3b-11e9-9b9c-77870371c5f3.png" width="60%" />
</p>
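
> For example, the JDBC connection parameters could be filled in like this (an illustrative sketch; the exact keys depend on your JDBC driver):

```
{"useUnicode":"true","characterEncoding":"UTF-8"}
```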

> Click "Test Connect" to test whether the data source can be connected successfully.

#### Create and edit a POSTGRESQL data source

- Datasource: select POSTGRESQL
- Datasource Name: name of the data source
- Description: description of the data source
- IP: IP address for connecting to POSTGRESQL
- Port: port for connecting to POSTGRESQL
- Username: user name for connecting to POSTGRESQL
- Password: password for connecting to POSTGRESQL
- Database name: name of the POSTGRESQL database to connect to
- Jdbc connection parameters: parameter settings for the POSTGRESQL connection, filled in as JSON

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61783968-60180080-ae3b-11e9-91b7-36d49246a205.png" width="60%" />
</p>

#### Create and edit a HIVE data source

1. Connect with HiveServer2

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61784129-b9802f80-ae3b-11e9-8a27-7be23e0953be.png" width="60%" />
</p>

- Datasource: select HIVE
- Datasource Name: name of the data source
- Description: description of the data source
- IP: IP address for connecting to HIVE
- Port: port for connecting to HIVE
- Username: user name for connecting to HIVE
- Password: password for connecting to HIVE
- Database Name: name of the HIVE database to connect to
- Jdbc connection parameters: parameter settings for the HIVE connection, filled in as JSON

2. Connect using HiveServer2 HA ZooKeeper mode

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61784420-3dd2b280-ae3c-11e9-894a-5b896863d37a.png" width="60%" />
</p>

Note: if **Kerberos** is turned on, you need to fill in **Principal**

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61784847-0adcee80-ae3d-11e9-8ac7-ba8a13aef90c.png" width="60%" />
</p>

#### Create and edit a Spark data source

<p align="center">
<img src="https://user-images.githubusercontent.com/48329107/61853431-7af77d00-aeee-11e9-8e2e-95ba6cea43c8.png" width="60%" />
</p>

- Datasource: select Spark
- Datasource Name: name of the data source
- Description: description of the data source
- IP: IP address for connecting to Spark
- Port: port for connecting to Spark
- Username: user name for connecting to Spark
- Password: password for connecting to Spark
- Database name: name of the Spark database to connect to
- Jdbc Connection Parameters: parameter settings for the Spark connection, filled in as JSON

Note: if **Kerberos** is turned on, you need to fill in **Principal**

<p align="center">
<img src="https://user-images.githubusercontent.com/48329107/61853668-0709a480-aeef-11e9-8960-92107dd1a9ca.png" width="60%" />
</p>

### Upload Resources

- Upload resource files and UDF functions. All uploaded files and resources are stored on HDFS, so the following configuration items are required:

```
conf/common/common.properties
-- hdfs.startup.state=true
conf/common/hadoop.properties
-- fs.defaultFS=hdfs://xxxx:8020
-- yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
-- yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
```

#### File Manage

> Management of various resource files, including creating basic txt/log/sh/conf files, uploading jar packages and other types of files, as well as editing, downloading, deleting and other operations.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61785274-ed5c5480-ae3d-11e9-8461-2178f49b228d.png" width="60%" />
</p>

* Create a file

> Supported file formats: txt, log, sh, conf, cfg, py, java, sql, xml, hql

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61841049-f133b980-aec5-11e9-8ac8-db97cdccc599.png" width="60%" />
</p>

* Upload files

> Click the upload button or drag the file to the upload area; the file name field is automatically filled with the name of the uploaded file.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61841179-73bc7900-aec6-11e9-8780-28756e684754.png" width="60%" />
</p>

* File view

> For viewable file types, click the file name to view the file details.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61841247-9cdd0980-aec6-11e9-9f6f-0a7dd145f865.png" width="60%" />
</p>

* Download files

> You can download a file by clicking the download button in the top right corner of the file details, or via the download button in the operation column of the file list.

* File rename

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61841322-f47b7500-aec6-11e9-93b1-b00328e7b69e.png" width="60%" />
</p>

#### Delete

> File list -> click the delete button to delete the specified file.

#### Resource management

> Resource management is similar to file management; the difference is that resource management is for uploading UDF functions, while file management is for uploading user programs, scripts and configuration files.

* Upload UDF resources

> The same as uploading files.

#### Function management

* Create a UDF function

> Click "Create UDF Function", enter the parameters of the UDF function, select the UDF resource, and click "Submit" to create the UDF function.

> Currently only temporary UDF functions for HIVE are supported.

> - UDF function name: the name entered when the UDF function is used
> - Package Name: the full class path of the UDF function
> - Parameter: input parameters used to annotate the function
> - Database Name: reserved field for creating permanent UDF functions
> - UDF Resources: the resource file corresponding to the created UDF function

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61841562-c6e2fb80-aec7-11e9-9481-4202d63dab6f.png" width="60%" />
</p>
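
> Once created, the function can be referenced in a HIVE SQL task; a minimal sketch, assuming a UDF named str_upper and a hypothetical table user_login_log:

```
select str_upper(user_name) from user_login_log;
```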

## Security (Privilege System)

- The security module provides queue management, tenant management, user management, alarm group management, worker group management, token management and other functions, and can also authorize resources, data sources, projects, etc.
- Administrator login, default user name and password: admin/escheduler123

### Create queues

- Queues are used when executing spark, mapreduce and other programs that require a "queue" parameter.
- Security -> Queue Manage -> Create Queue

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61841945-078f4480-aec9-11e9-92fb-05b6f42f07d6.png" width="60%" />
</p>

### Create tenants

- A tenant corresponds to a Linux user, which the worker uses to submit jobs. If the Linux user does not exist, the worker creates it when executing the script.
- Tenant Code: **the tenant code is the Linux user name and must be unique.**

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61842372-8042d080-aeca-11e9-8c54-e3dee583eeff.png" width="60%" />
</p>

### Create ordinary users

- Users are divided into **administrator users** and **ordinary users**.

* Administrators only have **authorization and user management** privileges, and no privileges to **create projects or operate on process definitions**.
* Ordinary users can **create projects and create, edit and execute process definitions**.
* Note: **if a user switches tenants, all resources under the old tenant are copied to the new tenant.**

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61842461-da439600-aeca-11e9-98e3-f8327dbafa60.png" width="60%" />
</p>

### Create an alarm group

* The alarm group is a parameter set at start-up. After the process finishes, the status of the process and other information are sent to the alarm group by mail.
* Create and edit an alarm group:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61842553-34445b80-aecb-11e9-84a8-3cc66b6c6135.png" width="60%" />
</p>

### Create a worker group

- Worker grouping provides a mechanism for tasks to run on specified workers. Administrators set up worker groups, and each task node can set the worker group it runs on. If the task's group is deleted or no group is specified, the task runs on the workers specified by the process instance.
- Multiple IP addresses can be entered within a worker group (**aliases cannot be used**), separated by **English commas**.
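
> For example (illustrative addresses): `192.168.xx.10,192.168.xx.11`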

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61842630-6b1a7180-aecb-11e9-8988-b4444de16b36.png" width="60%" />
</p>

### Token manage

- Because the back-end interfaces have login checks and token management, tokens provide a way to operate the system by calling the interfaces directly.
- Call example (the wrapper class name is illustrative; the rest follows the original snippet):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

public class TokenCallExample {

    /**
     * test token
     */
    public void doPOSTParam() throws Exception {
        // create HttpClient
        CloseableHttpClient httpclient = HttpClients.createDefault();

        // create http post request
        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/escheduler/projects/create");
        // pass the token created in "Token manage" via the request header
        httpPost.setHeader("token", "123");
        // set form parameters
        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
        parameters.add(new BasicNameValuePair("projectName", "qzw"));
        parameters.add(new BasicNameValuePair("desc", "qzw"));
        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
        httpPost.setEntity(formEntity);

        CloseableHttpResponse response = null;
        try {
            // execute the request
            response = httpclient.execute(httpPost);
            // response status code 200 means success
            if (response.getStatusLine().getStatusCode() == 200) {
                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
                System.out.println(content);
            }
        } finally {
            if (response != null) {
                response.close();
            }
            httpclient.close();
        }
    }
}
```

### Grant authority

- Granting permissions covers project permissions, resource permissions, data source permissions and UDF function permissions.

> Administrators can authorize projects, resources, data sources and UDF functions not created by an ordinary user. Since projects, resources, data sources and UDF functions are all authorized in the same way, project authorization is introduced as an example.

> Note: the user has all permissions for projects created by himself, so such projects do not appear in the project list or the selected project list.

- 1. Click the authorization button of the designated user, as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843204-71a9e880-aecd-11e9-83ad-365d7bf99375.png" width="60%" />
</p>

- 2. Select the project button to authorize the project

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61842992-af5a4180-aecc-11e9-9553-43e836aee78b.png" width="60%" />
</p>

### Monitor center

- Service management mainly monitors and displays the health status and basic information of each service in the system.

#### Master monitor

- Mainly information about the masters.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843245-8edeb700-aecd-11e9-9916-ea50080e7d08.png" width="60%" />
</p>

#### Worker monitor

- Mainly information about the workers.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843277-ae75df80-aecd-11e9-9667-b9f1615b6f3b.png" width="60%" />
</p>

#### Zookeeper monitor

- Mainly the configuration information of each worker and master in ZooKeeper.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843323-c64d6380-aecd-11e9-8392-1ca9b84cd794.png" width="60%" />
</p>

#### Mysql monitor

- Mainly the health status of MySQL.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843358-e11fd800-aecd-11e9-86d1-9490e48dc955.png" width="60%" />
</p>

## Task Node Type and Parameter Setting

### Shell

- For a shell node, the worker generates a temporary shell script at execution time, which is executed by a Linux user with the same name as the tenant.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SHELL.png) task node in the toolbar onto the palette and double-click the task node as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843728-6788e980-aecf-11e9-8006-241a7ec5024b.png" width="60%" />
</p>

- Node name: the node name must be unique within a process definition
- Run flag: identifies whether the node can be scheduled normally; if it does not need to be executed, turn on the "forbidden execution" switch
- Description: describes the function of the node
- Number of failed retries: the number of times a failed task is resubmitted; supports drop-down selection and manual entry
- Failure retry interval: the interval between resubmissions of a failed task; supports drop-down selection and manual entry
- Script: the SHELL program developed by the user (see the sketch below)
- Resources: the list of resource files that the script needs to invoke
- Custom parameters: user-defined local parameters of the SHELL task that replace ${variables} in the script
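
> A minimal sketch of such a script, assuming a custom parameter named bizdate has been defined on the node:

```
#!/bin/sh
# ${bizdate} is replaced with the custom parameter's value before execution
echo "processing data for ${bizdate}"
```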

### SUB_PROCESS

- A sub-process node executes an external workflow definition as its own task node.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node in the toolbar onto the palette and double-click the task node as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843799-adde4880-aecf-11e9-846e-f1696107029f.png" width="60%" />
</p>

- Node name: the node name must be unique within a process definition
- Run flag: identifies whether the node is scheduled normally
- Description: describes the function of the node
- Sub-node: select the process definition of the sub-process; you can jump to the selected process definition via "enter the sub-node" in the upper right corner

### DEPENDENT

- Dependent nodes are **dependency-checking nodes**. For example, process A depends on the successful execution of process B yesterday; the dependent node checks whether process B had a successful execution instance yesterday.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node in the toolbar onto the palette and double-click the task node as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61844369-be8fbe00-aed1-11e9-965d-ddb9aeeba9db.png" width="60%" />
</p>

> Dependent nodes provide logical judgment functions, such as checking whether yesterday's B process succeeded, or whether the C process executed successfully.

<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/depend-node.png" width="80%" />
</p>

> For example, process A is a weekly task and processes B and C are daily tasks. Task A requires tasks B and C to have executed successfully on every day of the last week, as shown in the figure:

<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/depend-node2.png" width="80%" />
</p>

> If weekly task A also needs to have executed successfully last Tuesday:

<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/depend-node3.png" width="80%" />
</p>

### PROCEDURE

- The stored procedure is executed against the selected data source.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PROCEDURE.png) task node in the toolbar onto the palette and double-click the task node as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61844464-1af2dd80-aed2-11e9-9486-6cf1b8585aa5.png" width="60%" />
</p>

- Datasource: the data source type of the stored procedure supports MySQL and POSTGRESQL; choose the corresponding data source
- Method: the method name of the stored procedure
- Custom parameters: custom parameters of stored procedures support the IN and OUT types, and nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP and BOOLEAN

### SQL

- Executes non-query SQL.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61850397-d7569e80-aee6-11e9-9da0-c4d96deaa8a1.png" width="60%" />
</p>

- Executes query SQL; you can choose to send the result by mail, as a table or an attachment, to the designated recipients.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SQL.png) task node in the toolbar onto the palette and double-click the task node as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61850594-4d5b0580-aee7-11e9-9c9e-1934c91962b9.png" width="60%" />
</p>

- Datasource: select the corresponding data source
- sql type: supports query and non-query. A query is a select-type statement that returns a result set; its mail notification can use a table, an attachment, or a table-plus-attachment template. A non-query returns no result set and covers update, delete and insert operations.
- sql parameter: the input parameter format is key1=value1;key2=value2...
- sql statement: the SQL statement (see the sketch below)
- UDF function: for HIVE-type data sources, you can refer to UDF functions created in the resource center; other data source types do not support UDF functions for the time being
- Custom parameters: as with the stored procedure task type, custom parameters set values for the statement in order, and the parameter and data types are the same; the difference is that the custom parameters of the SQL task type replace ${variables} in the SQL statement
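
> A minimal sketch, assuming a custom parameter named bizdate and a hypothetical table user_login_log:

```
select name, login_time from user_login_log where dt = '${bizdate}';
```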

### SPARK

- Through a SPARK node, a SPARK program can be executed directly; for a spark node, the worker submits the task using `spark-submit`.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png) task node in the toolbar onto the palette and double-click the task node as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/48329107/61852935-3d462480-aeed-11e9-8241-415314bfc2e5.png" width="60%" />
</p>

- Program Type: supports JAVA, Scala and Python
- Class of the main function: the full path of the Main Class, the entry point of the Spark program
- Main jar package: the Spark jar package
- Deployment: supports three modes: yarn-cluster, yarn-client and local
- Driver core number: the number of Driver cores and the Driver memory can be set
- Executor number: the number of Executors, Executor memory and Executor cores can be set
- Command line parameters: the input parameters of the Spark program; supports substitution of custom parameter variables
- Other parameters: supports the --jars, --files, --archives and --conf options
- Resource: if a resource file is referenced in other parameters, the corresponding resource must be selected
- Custom parameters: user-defined local parameters of the SPARK task that replace ${variables} in the script

Note: JAVA and Scala are only used for identification; there is no difference between them. For a Spark program developed in Python there is no class of the main function, and everything else is the same.
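
> An illustrative "Other parameters" value (the options are standard spark-submit flags; the values are placeholders):

```
--conf spark.default.parallelism=100 --files config.properties
```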

### MapReduce(MR)

- Using an MR node, an MR program can be executed directly; for an MR node, the worker submits the task using `hadoop jar`.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png) task node in the toolbar onto the palette and double-click the task node as follows:

1. JAVA program

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61851102-91023f00-aee8-11e9-9ac0-dbe588d860c2.png" width="60%" />
</p>

- Class of the main function: the full path of the MR program's entry Main Class
- Program Type: select the JAVA language
- Main jar package: the MR jar package
- Command line parameters: the input parameters of the MR program; supports substitution of custom parameter variables
- Other parameters: supports the -D, -files, -libjars and -archives options
- Resource: if a resource file is referenced in other parameters, the corresponding resource must be selected
- Custom parameters: user-defined local parameters of the MR task that replace ${variables} in the script
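
> An illustrative "Other parameters" value (-D with a standard Hadoop property; the queue name is a placeholder):

```
-D mapreduce.job.queuename=default
```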

2. Python program

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61851224-f3f3d600-aee8-11e9-8862-435220bbda93.png" width="60%" />
</p>

- Program Type: select the Python language
- Main jar package: the Python jar package for running MR
- Other parameters: supports the -D, -mapper, -reducer, -input and -output options, where user-defined parameters can be set, such as:
  - `-mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}`
  - Here `"mapper.py 1"` after -mapper is two arguments: the first is mapper.py, the second is 1
- Resource: if a resource file is referenced in other parameters, the corresponding resource must be selected
- Custom parameters: user-defined local parameters of the MR task that replace ${variables} in the script

### Python

- With a Python node, a Python script can be executed directly; for a Python node, the worker submits the task using `python`.

> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png) task node in the toolbar onto the palette and double-click the task node as follows:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61851959-daec2480-aeea-11e9-83fd-3e00a030cb84.png" width="60%" />
</p>

- Script: the Python program developed by the user (see the sketch below)
- Resource: the list of resource files that the script needs to invoke
- Custom parameters: user-defined local parameters of the Python task that replace ${variables} in the script
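
> A minimal sketch of such a script, assuming a custom parameter named bizdate has been defined on the node:

```
# ${bizdate} is replaced with the custom parameter's value before execution
print("processing data for ${bizdate}")
```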

### System parameter

<table>
<tr><th>variable</th><th>meaning</th></tr>
<tr>
<td>${system.biz.date}</td>
<td>The day before the scheduled time of the routine scheduling instance, in yyyyMMdd format; when complementing data, this date + 1</td>
</tr>
<tr>
<td>${system.biz.curdate}</td>
<td>The scheduled time of the routine scheduling instance, in yyyyMMdd format; when complementing data, this date + 1</td>
</tr>
<tr>
<td>${system.datetime}</td>
<td>The scheduled time of the routine scheduling instance, in yyyyMMddHHmmss format; when complementing data, this date + 1</td>
</tr>
</table>

### Time Customization Parameters

> Supports custom variable names in code, declared as ${variable name}. A custom variable can refer to a "system parameter" or specify a "constant".

> When a time benchmark variable is defined in the form $[...], [yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.

> The following forms are also possible:

- N years later: $[add_months(yyyyMMdd,12*N)]
- N years earlier: $[add_months(yyyyMMdd,-12*N)]
- N months later: $[add_months(yyyyMMdd,N)]
- N months earlier: $[add_months(yyyyMMdd,-N)]
- N weeks later: $[yyyyMMdd+7*N]
- N weeks earlier: $[yyyyMMdd-7*N]
- N days later: $[yyyyMMdd+N]
- N days earlier: $[yyyyMMdd-N]
- N hours later: $[HHmmss+N/24]
- N hours earlier: $[HHmmss-N/24]
- N minutes later: $[HHmmss+N/24/60]
- N minutes earlier: $[HHmmss-N/24/60]
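
> For example, if the scheduled time is 2019-07-22 05:00:00 (an illustrative date), $[yyyyMMdd-1] evaluates to 20190721 and $[HHmmss-1/24] to 040000.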

### User-defined parameters

> User-defined parameters are divided into global parameters and local parameters. Global parameters are passed when the process definition or process instance is saved, and can be referenced by the local parameters of any task node in the whole process.

> For example:

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61864229-a0db4c80-af03-11e9-962c-044ab12991c7.png" width="60%" />
</p>

> global_bizdate is a global parameter referring to a system parameter.

<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61857313-78992100-aef6-11e9-9ba3-521c6ca33ce3.png" width="60%" />
</p>

> In a task, local_param_bizdate refers to the global parameter via ${global_bizdate}. In scripts, the value of local_param_bizdate can be referenced via ${local_param_bizdate}, or its value can be set directly via JDBC.
@ -0,0 +1,38 @@

# EasyScheduler upgrade documentation

## 1. Back up the files and database of the previous version

## 2. Stop all services of escheduler

`sh ./script/stop_all.sh`

## 3. Download the new version of the installation package

- [gitee](https://gitee.com/easyscheduler/EasyScheduler/attach_files): download the latest front-end and back-end installation packages (the back end is referred to as escheduler-backend, the front end as escheduler-ui)
- The following upgrade operations need to be performed in the new version's directory

## 4. Database upgrade

- Modify the following properties in conf/dao/data_source.properties

```
spring.datasource.url
spring.datasource.username
spring.datasource.password
```
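
> For example (illustrative values; host, database name and credentials are placeholders):

```
spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/escheduler?characterEncoding=UTF-8
spring.datasource.username=xx
spring.datasource.password=xx
```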

- Execute the database upgrade script

`sh ./script/upgrade_escheduler.sh`

## 5. Backend service upgrade

- Modify the install.sh configuration and execute the upgrade script

`sh install.sh`

## 6. Frontend service upgrade

- Overwrite the previous version's dist directory
- Restart the nginx service

`systemctl restart nginx`