
merge from 1.1.0 upstream

pull/2/head
lidongdai · 5 years ago
commit c2e6352e80
  1.  55  docs/zh_CN/1.1.0-release.md
  2.   1  docs/zh_CN/SUMMARY.md
  3.   2  docs/zh_CN/book.json
  4. BIN  docs/zh_CN/images/hive_kerberos.png
  5. BIN  docs/zh_CN/images/sparksql_kerberos.png
  6.  16  docs/zh_CN/系统使用手册.md
  7.   1  escheduler-api/src/main/java/cn/escheduler/api/enums/Status.java
  8.  31  escheduler-api/src/main/java/cn/escheduler/api/service/ExecutorService.java
  9.   2  escheduler-api/src/main/java/cn/escheduler/api/service/ProcessDefinitionService.java
 10.   8  escheduler-api/src/main/java/cn/escheduler/api/service/SchedulerService.java
 11.  11  escheduler-common/src/main/java/cn/escheduler/common/queue/TaskQueueZkImpl.java
 12.   3  escheduler-server/src/main/java/cn/escheduler/server/utils/LoggerUtils.java
 13.   5  escheduler-server/src/main/java/cn/escheduler/server/worker/runner/FetchTaskThread.java
 14.   2  escheduler-ui/src/js/conf/home/pages/dag/_source/dag.vue

55  docs/zh_CN/1.1.0-release.md

@@ -0,0 +1,55 @@
Easy Scheduler Release 1.1.0
===
Easy Scheduler 1.1.0 is the fifth release in the 1.x series.
New features:
===
- [[EasyScheduler-391](https://github.com/analysys/EasyScheduler/issues/391)] run a process under a specified tenant user
- [[EasyScheduler-288](https://github.com/analysys/EasyScheduler/issues/288)] Feature/qiye_weixin
- [[EasyScheduler-189](https://github.com/analysys/EasyScheduler/issues/189)] security support such as Kerberos
- [[EasyScheduler-398](https://github.com/analysys/EasyScheduler/issues/398)] an administrator with a tenant (install.sh sets a default tenant) can create resources, projects and data sources (restricted to a single administrator)
- [[EasyScheduler-293](https://github.com/analysys/EasyScheduler/issues/293)] the parameters selected when running a process could not be viewed anywhere and were not saved
- [[EasyScheduler-401](https://github.com/analysys/EasyScheduler/issues/401)] it is too easy to accidentally schedule a run every second; after a schedule is created, show its next trigger time on the page
- [[EasyScheduler-493](https://github.com/analysys/EasyScheduler/pull/493)] add datasource kerberos auth, FAQ modifications, and resource upload to S3
Enhancements:
===
- [[EasyScheduler-227](https://github.com/analysys/EasyScheduler/issues/227)] upgrade spring-boot to 2.1.x and spring to 5.x
- [[EasyScheduler-434](https://github.com/analysys/EasyScheduler/issues/434)] the number of worker nodes differs between zk and mysql
- [[EasyScheduler-435](https://github.com/analysys/EasyScheduler/issues/435)] validation of email format
- [[EasyScheduler-441](https://github.com/analysys/EasyScheduler/issues/441)] exclude forbidden-to-run nodes from completed-node detection
- [[EasyScheduler-400](https://github.com/analysys/EasyScheduler/issues/400)] on the home page, queue statistics are inconsistent and command statistics show no data
- [[EasyScheduler-395](https://github.com/analysys/EasyScheduler/issues/395)] for processes recovered by fault tolerance, the state must not be **Running**
- [[EasyScheduler-529](https://github.com/analysys/EasyScheduler/issues/529)] optimize polling tasks from zookeeper
- [[EasyScheduler-242](https://github.com/analysys/EasyScheduler/issues/242)] performance of task fetching on the worker-server node
- [[EasyScheduler-352](https://github.com/analysys/EasyScheduler/issues/352)] worker grouping and queue consumption
- [[EasyScheduler-461](https://github.com/analysys/EasyScheduler/issues/461)] encrypt account and password information when viewing data source parameters
- [[EasyScheduler-396](https://github.com/analysys/EasyScheduler/issues/396)] optimize the Dockerfile and link it with GitHub to build images automatically
- [[EasyScheduler-389](https://github.com/analysys/EasyScheduler/issues/389)] service monitor cannot detect changes of master/worker
- [[EasyScheduler-511](https://github.com/analysys/EasyScheduler/issues/511)] support recovering a process from stopped/killed nodes
- [[EasyScheduler-399](https://github.com/analysys/EasyScheduler/issues/399)] HadoopUtils should operate as a specified user rather than the **deploy user**
Fixes:
===
- [[EasyScheduler-394](https://github.com/analysys/EasyScheduler/issues/394)] when master & worker are deployed on the same machine, restarting the master & worker services prevents previously scheduled tasks from being scheduled again
- [[EasyScheduler-469](https://github.com/analysys/EasyScheduler/issues/469)] fix naming errors on the monitor page
- [[EasyScheduler-392](https://github.com/analysys/EasyScheduler/issues/392)] feature request: fix email regex check
- [[EasyScheduler-405](https://github.com/analysys/EasyScheduler/issues/405)] on the schedule add/edit page, the start time and end time must not be the same
- [[EasyScheduler-517](https://github.com/analysys/EasyScheduler/issues/517)] complement (backfill) - sub-workflow - time parameters
- [[EasyScheduler-532](https://github.com/analysys/EasyScheduler/issues/532)] python nodes do not execute
- [[EasyScheduler-543](https://github.com/analysys/EasyScheduler/issues/543)] optimize the safety of datasource connection params
- [[EasyScheduler-569](https://github.com/analysys/EasyScheduler/issues/569)] scheduled tasks cannot actually be stopped
- [[EasyScheduler-463](https://github.com/analysys/EasyScheduler/issues/463)] email validation does not support uncommon email suffixes
Thanks:
===
Last but most important: there would be no new release without the contributions of the following partners:
Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, chgxtony, Stanfan, lfyee, thisnew, hujiang75277381, sunnyingit, lgbo-ustc, ivivi, lzy305, JackIllkid, telltime, lipengbo2018, wuchunfu, telltime
And to the many enthusiastic partners in the WeChat group: thank you very much!

1  docs/zh_CN/SUMMARY.md

@@ -35,6 +35,7 @@
 * System version upgrade documentation
 * [Version upgrade](升级文档.md)
 * Release notes of previous versions
+* [1.1.0 release](1.1.0-release.md)
 * [1.0.3 release](1.0.3-release.md)
 * [1.0.2 release](1.0.2-release.md)
 * [1.0.1 release](1.0.1-release.md)

2  docs/zh_CN/book.json

@@ -1,6 +1,6 @@
 {
     "title": "调度系统-EasyScheduler",
-    "author": "YIGUAN",
+    "author": "",
     "description": "调度系统",
     "language": "zh-hans",
     "gitbook": "3.2.3",

BIN  docs/zh_CN/images/hive_kerberos.png

Binary file not shown (new file; 36 KiB).

BIN  docs/zh_CN/images/sparksql_kerberos.png

Binary file not shown (new file; 36 KiB).

16  docs/zh_CN/系统使用手册.md

@@ -213,6 +213,14 @@
 </p>
+Note: if **kerberos** is enabled, **Principal** must be filled in
+<p align="center">
+    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/hive_kerberos.png" width="60%" />
+</p>

 #### Create and edit a Spark data source
 <p align="center">
@@ -229,6 +237,14 @@
 - Database name: enter the name of the database to connect to with Spark
 - Jdbc connection parameters: parameter settings for the Spark connection, filled in as JSON
+Note: if **kerberos** is enabled, **Principal** must be filled in
+<p align="center">
+    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/sparksql_kerberos.png" width="60%" />
+</p>

 ### Upload resources
 - Upload resource files and udf functions; all uploaded files and resources are stored on hdfs, so the following configuration items are required:

1  escheduler-api/src/main/java/cn/escheduler/api/enums/Status.java

@@ -212,6 +212,7 @@ public enum Status {
     DELETE_SCHEDULE_CRON_BY_ID_ERROR(50024,"delete schedule by id error"),
     BATCH_DELETE_PROCESS_DEFINE_ERROR(50025,"batch delete process definition error"),
     BATCH_DELETE_PROCESS_DEFINE_BY_IDS_ERROR(50026,"batch delete process definition by ids {0} error"),
+    TENANT_NOT_SUITABLE(50027,"there is not any tenant suitable, please choose a tenant available."),
     HDFS_NOT_STARTUP(60001,"hdfs not startup"),
     HDFS_TERANT_RESOURCES_FILE_EXISTS(60002,"resource file exists,please delete resource first"),
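The new `TENANT_NOT_SUITABLE` constant follows the code-plus-message enum pattern shown above. A minimal standalone sketch of that pattern (the accessor names are assumptions, since the diff does not show the enum's fields):

```java
// Sketch of the Status enum pattern: each constant carries a numeric
// error code and a human-readable message.
public class StatusSketch {

    public enum Status {
        // The constant added by this commit, reproduced verbatim.
        TENANT_NOT_SUITABLE(50027, "there is not any tenant suitable, please choose a tenant available."),
        HDFS_NOT_STARTUP(60001, "hdfs not startup");

        private final int code;
        private final String msg;

        Status(int code, String msg) {
            this.code = code;
            this.msg = msg;
        }

        public int getCode() { return code; }
        public String getMsg() { return msg; }
    }

    public static void main(String[] args) {
        System.out.println(Status.TENANT_NOT_SUITABLE.getCode()); // 50027
    }
}
```

Keeping codes in one enum gives every API response a stable, greppable error number.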

31  escheduler-api/src/main/java/cn/escheduler/api/service/ExecutorService.java

@@ -110,6 +110,13 @@ public class ExecutorService extends BaseService{
             return result;
         }

+        if (!checkTenantSuitable(processDefinition)){
+            logger.error("there is not any vaild tenant for the process definition: id:{},name:{}, ",
+                    processDefinition.getId(), processDefinition.getName());
+            putMsg(result, Status.TENANT_NOT_SUITABLE);
+            return result;
+        }
+
         /**
          * create command
          */
@@ -190,15 +197,10 @@ public class ExecutorService extends BaseService{
         if (status != Status.SUCCESS) {
             return checkResult;
         }

-        // checkTenantExists();
-        Tenant tenant = processDao.getTenantForProcess(processDefinition.getTenantId(),
-                processDefinition.getUserId());
-        if(tenant == null){
+        if (!checkTenantSuitable(processDefinition)){
             logger.error("there is not any vaild tenant for the process definition: id:{},name:{}, ",
                     processDefinition.getId(), processDefinition.getName());
-            putMsg(result, Status.PROCESS_INSTANCE_NOT_EXIST, processInstanceId);
+            putMsg(result, Status.TENANT_NOT_SUITABLE);
             return result;
         }

         switch (executeType) {
@@ -240,6 +242,21 @@ public class ExecutorService extends BaseService{
         return result;
     }

+    /**
+     * check tenant suitable
+     * @param processDefinition
+     * @return
+     */
+    private boolean checkTenantSuitable(ProcessDefinition processDefinition) {
+        // checkTenantExists();
+        Tenant tenant = processDao.getTenantForProcess(processDefinition.getTenantId(),
+                processDefinition.getUserId());
+        if(tenant == null){
+            return false;
+        }
+        return true;
+    }
+
     /**
      * Check the state of process instance and the type of operation match
     *
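The refactor above extracts the tenant lookup into a reusable `checkTenantSuitable` guard, so both call sites fail fast with `TENANT_NOT_SUITABLE` instead of the misleading `PROCESS_INSTANCE_NOT_EXIST`. A standalone sketch of the same guard pattern (the map stands in for `processDao.getTenantForProcess(...)`, which is not shown in the diff, and the two-key lookup is simplified to one key):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the extracted tenant guard, with a map as a hypothetical
// stand-in for the DAO lookup used in the real service.
public class TenantGuardSketch {

    private static final Map<Integer, String> TENANTS = new HashMap<>();
    static {
        TENANTS.put(1, "default"); // hypothetical sample tenant
    }

    // Mirrors checkTenantSuitable: true only when a tenant resolves.
    static boolean checkTenantSuitable(int tenantId) {
        return TENANTS.get(tenantId) != null;
    }

    public static void main(String[] args) {
        System.out.println(checkTenantSuitable(1)); // true
        System.out.println(checkTenantSuitable(9)); // false
    }
}
```

Centralizing the check keeps the error status consistent wherever a definition is executed.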

2  escheduler-api/src/main/java/cn/escheduler/api/service/ProcessDefinitionService.java

@@ -490,7 +490,7 @@ public class ProcessDefinitionService extends BaseDAGService {
                     // set status
                     schedule.setReleaseState(ReleaseState.OFFLINE);
                     scheduleMapper.update(schedule);
-                    deleteSchedule(project.getId(), id);
+                    deleteSchedule(project.getId(), schedule.getId());
                 }
                 break;
             }

8  escheduler-api/src/main/java/cn/escheduler/api/service/SchedulerService.java

@@ -456,14 +456,14 @@ public class SchedulerService extends BaseService {
     /**
      * delete schedule
      */
-    public static void deleteSchedule(int projectId, int processId) throws RuntimeException{
-        logger.info("delete schedules of project id:{}, flow id:{}", projectId, processId);
+    public static void deleteSchedule(int projectId, int scheduleId) throws RuntimeException{
+        logger.info("delete schedules of project id:{}, schedule id:{}", projectId, scheduleId);

-        String jobName = QuartzExecutors.buildJobName(processId);
+        String jobName = QuartzExecutors.buildJobName(scheduleId);
         String jobGroupName = QuartzExecutors.buildJobGroupName(projectId);

         if(!QuartzExecutors.getInstance().deleteJob(jobName, jobGroupName)){
-            logger.warn("set offline failure:projectId:{},processId:{}",projectId,processId);
+            logger.warn("set offline failure:projectId:{},scheduleId:{}",projectId,scheduleId);
             throw new RuntimeException(String.format("set offline failure"));
         }
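The rename matters because the Quartz job name is derived from the id passed in: handing `deleteSchedule` a process id (the old behavior) would address a different job than the one the scheduler registered. A sketch of why, using hypothetical name formats (`job_`/`jobgroup_` are assumptions; EasyScheduler's `QuartzExecutors` may use different prefixes):

```java
// Sketch: the Quartz job identity is a (name, group) pair derived from
// ids, so passing the wrong id silently targets a nonexistent job.
public class JobNameSketch {

    static String buildJobName(int scheduleId) {
        return "job_" + scheduleId;    // assumed format
    }

    static String buildJobGroupName(int projectId) {
        return "jobgroup_" + projectId; // assumed format
    }

    public static void main(String[] args) {
        // With the fix, the schedule id (42) names the job, so deleteJob
        // removes the job that was actually scheduled.
        System.out.println(buildJobName(42) + " / " + buildJobGroupName(7));
    }
}
```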

11  escheduler-common/src/main/java/cn/escheduler/common/queue/TaskQueueZkImpl.java

@@ -228,11 +228,11 @@ public class TaskQueueZkImpl extends AbstractZKClient implements ITaskQueue {
                 int j = 0;
                 List<String> taskslist = new ArrayList<>(tasksNum);
                 while(iterator.hasNext()){
-                    if(j++ < tasksNum){
-                        String task = iterator.next();
-                        taskslist.add(getOriginTaskFormat(task));
+                    if(j++ >= tasksNum){
+                        break;
                     }
+                    String task = iterator.next();
+                    taskslist.add(getOriginTaskFormat(task));
                 }
                 return taskslist;
             }
@@ -245,6 +245,9 @@ public class TaskQueueZkImpl extends AbstractZKClient implements ITaskQueue {
      */
     private String getOriginTaskFormat(String formatTask){
         String[] taskArray = formatTask.split(Constants.UNDERLINE);
+        if(taskArray.length< 4){
+            return formatTask;
+        }
         int processInstanceId = Integer.parseInt(taskArray[1]);
         int taskId = Integer.parseInt(taskArray[3]);
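The two fixes above are (1) breaking out of the loop once `tasksNum` tasks are collected instead of draining the whole iterator, and (2) guarding the parse against entries with fewer than four `_`-separated fields. A standalone sketch of both (the "reconstructed id" return value is a simplification; the real method rebuilds the full original task string):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the corrected bounded poll plus the new length guard.
public class TaskPollSketch {

    // Take at most tasksNum items, breaking early (the old code kept
    // iterating to the end of the queue after the quota was reached).
    static List<String> pollTasks(Iterable<String> queue, int tasksNum) {
        int j = 0;
        List<String> tasks = new ArrayList<>(tasksNum);
        Iterator<String> it = queue.iterator();
        while (it.hasNext()) {
            if (j++ >= tasksNum) {
                break; // enough tasks collected
            }
            tasks.add(getOriginTaskFormat(it.next()));
        }
        return tasks;
    }

    // Malformed entries pass through unchanged instead of throwing
    // ArrayIndexOutOfBoundsException on taskArray[1]/taskArray[3].
    static String getOriginTaskFormat(String formatTask) {
        String[] parts = formatTask.split("_"); // Constants.UNDERLINE is "_"
        if (parts.length < 4) {
            return formatTask;
        }
        int processInstanceId = Integer.parseInt(parts[1]);
        int taskId = Integer.parseInt(parts[3]);
        return processInstanceId + "_" + taskId;
    }

    public static void main(String[] args) {
        // Only the first two entries are consumed; "bad" is never parsed.
        System.out.println(pollTasks(List.of("0_10_1_20", "0_11_1_21", "bad"), 2));
    }
}
```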

3  escheduler-server/src/main/java/cn/escheduler/server/utils/LoggerUtils.java

@@ -16,6 +16,7 @@
  */
 package cn.escheduler.server.utils;

+import cn.escheduler.common.Constants;
 import org.slf4j.Logger;

 import java.util.ArrayList;
@@ -31,7 +32,7 @@ public class LoggerUtils {
     /**
      * rules for extracting application ID
      */
-    private static final Pattern APPLICATION_REGEX = Pattern.compile("\\d+_\\d+");
+    private static final Pattern APPLICATION_REGEX = Pattern.compile(Constants.APPLICATION_REGEX);

     /**
      * build job id
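The change moves the inline pattern literal into `Constants.APPLICATION_REGEX`. A sketch of the extraction this pattern drives, reusing the old inline literal `\d+_\d+` (the value now kept in `Constants` may differ, e.g. it could include an `application_` prefix):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of application-ID extraction from a log line.
public class AppIdSketch {

    // Same pattern the inline literal used before the refactor.
    private static final Pattern APPLICATION_REGEX = Pattern.compile("\\d+_\\d+");

    static String extractAppId(String logLine) {
        Matcher m = APPLICATION_REGEX.matcher(logLine);
        return m.find() ? m.group() : null; // first id-like token, or null
    }

    public static void main(String[] args) {
        System.out.println(extractAppId("submitted application_1558_0012 to yarn")); // 1558_0012
    }
}
```

Compiling the pattern once into a `static final Pattern` avoids re-parsing the regex on every log line, and keeping the literal in `Constants` lets other classes reuse it.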

5  escheduler-server/src/main/java/cn/escheduler/server/worker/runner/FetchTaskThread.java

@@ -210,6 +210,11 @@ public class FetchTaskThread implements Runnable{
                         Tenant tenant = processDao.getTenantForProcess(processInstance.getTenantId(),
                                 processDefine.getUserId());
+                        if(tenant == null){
+                            logger.error("cannot find suitable tenant for the task:{}, process instance tenant:{}, process definition tenant:{}",
+                                    taskInstance.getName(),processInstance.getTenantId(), processDefine.getTenantId());
+                            continue;
+                        }

                         // check and create Linux users
                         FileUtils.createWorkDirAndUserIfAbsent(execLocalPath,

2  escheduler-ui/src/js/conf/home/pages/dag/_source/dag.vue

@@ -459,8 +459,6 @@
       'tasks': {
         deep: true,
         handler (o) {
-          console.log('+++++ save dag params +++++')
-          console.log(o)
           // Edit state does not allow deletion of node a...
           this.setIsEditDag(true)
