Deployment Guide

Software Requirements

  • MySQL (5.5+) : required
  • JDK (1.8+) : required
  • ZooKeeper (3.4.6) : required
  • Hadoop (2.7+) : optional; required if you use EasyScheduler's resource upload or online MapReduce task submission (uploaded resource files are currently stored on HDFS)
  • Hive (1.2.1+) : optional; required to run Hive tasks
  • Redis (2.7.0+) : optional; required when Redis is chosen as the task queue
  • Spark (1.x, 2.x) : optional; required for Spark task submission
  • PostgreSQL (8.2.15+) : optional; required for PostgreSQL stored procedures

    Note: EasyScheduler itself does not depend on Hadoop, Hive, Spark, PostgreSQL, or Redis; it only uses their client jars to run the corresponding tasks.
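A quick sanity check of the required software saves debugging time later. A minimal sketch, assuming each client is on the PATH of the machine being checked; adjust for your environment:

# verify required software versions
mysql --version          # expect 5.5 or later
java -version            # expect 1.8 or later
zkServer.sh status       # run on a ZooKeeper host; expect Mode: standalone/leader/follower
# optional components, only if the corresponding task types are used
hadoop version           # 2.7+ for resource upload / MapReduce submission
hive --version           # 1.2.1+ for Hive tasks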

Building the Project

  • Run the build command:

mvn -U clean package assembly:assembly -Dmaven.test.skip=true

  • Inspect the output directory

After a successful build, target/escheduler-{version}-SNAPSHOT/ is generated in the current directory, containing:

    bin
    conf
    lib
    script
    sql
  • Description

    bin  : service startup scripts
    conf : configuration files
    lib  : dependency jars, including the module jars and third-party jars
    script : automated deployment and startup scripts
    sql  : SQL files the project depends on
    

Database Initialization

  • Create the database and account
mysql -h {host} -u {user} -p{password}
mysql> CREATE DATABASE escheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
mysql> GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
mysql> GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
mysql> flush privileges;
  • Create the tables
Note: there are two table-creation SQL files under target/escheduler-{version}-SNAPSHOT/sql/: escheduler.sql and quartz.sql
Run:
mysql -h {host} -u {user} -p{password} -D {db} < escheduler.sql
mysql -h {host} -u {user} -p{password} -D {db} < quartz.sql
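Both steps can also be scripted non-interactively, and the import can be verified afterwards. A minimal sketch, assuming the mysql client is on PATH and the {host}/{user}/{password}/{db} placeholders are replaced with real values:

# create the database and grants in one shot (run as a privileged MySQL user)
mysql -h {host} -u root -p -e "
CREATE DATABASE escheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
FLUSH PRIVILEGES;"

# after importing escheduler.sql and quartz.sql, confirm the tables exist
mysql -h {host} -u {user} -p{password} -D {db} -e "SHOW TABLES;"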

Creating the Deployment User

Because the EasyScheduler workers execute jobs via sudo -u {linux-user}, the deployment user must have sudo privileges, and passwordless ones at that.

Deployment account
vi /etc/sudoers

# the deployment user is the escheduler account
escheduler  ALL=(ALL)       NOPASSWD: ALL

# also comment out the "Defaults requiretty" line
#Defaults requiretty
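To verify that passwordless sudo is in effect, the -n flag makes sudo fail immediately instead of prompting. Run this as the deployment user:

# should print the target user's name without asking for a password
sudo -n -u {linux-user} whoami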

Configuration Files

Note: the configuration files are located under target/escheduler-{version}-SNAPSHOT/conf

escheduler-alert

Configure email alert settings

  • alert.properties
#QQ mail is used as an example; for other mail providers, change the corresponding settings
#alert type is EMAIL/SMS
alert.type=EMAIL

# mail server configuration
mail.protocol=SMTP
mail.server.host=smtp.exmail.qq.com
mail.server.port=25
mail.sender=xxxxxx@qq.com
mail.passwd=xxxxxxx

# xls file path; create it manually before first use if it does not exist
xls.file.path=/opt/xls
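Mail problems are easier to diagnose before the alert server starts. A quick reachability check of the configured SMTP endpoint, using the host and port from alert.properties above:

# confirm the SMTP server answers on the configured port
curl -v telnet://smtp.exmail.qq.com:25 --max-time 10
# create the xls output directory if it does not exist yet
mkdir -p /opt/xls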

Configure the alert data source

  • alert/data_source.properties
#Note: replace the content in ${xxx}

# common configuration
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://${ip}:3306/escheduler?characterEncoding=UTF-8
spring.datasource.username=${username}
spring.datasource.password=${password}

# supplement configuration
spring.datasource.initialSize=5
# min connection number
spring.datasource.minIdle=5
# max connection number
spring.datasource.maxActive=20

# max wait time for getting a connection
spring.datasource.maxWait=60000

# interval for closing idle connections, in milliseconds
spring.datasource.timeBetweenEvictionRunsMillis=60000

# minimum connection survival time, in milliseconds
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1
spring.datasource.validationQueryTimeout=3
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=true
spring.datasource.testOnReturn=false
spring.datasource.defaultAutoCommit=true

# enable PSCache, set the PSCache size
spring.datasource.poolPreparedStatements=false
spring.datasource.maxPoolPreparedStatementPerConnectionSize=20

Log configuration file

  • alert_logback.xml
<!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
<configuration scan="true" scanPeriod="120 seconds"> <!--debug="true" -->
    <property name="log.base" value="logs" />
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>
                [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
            </pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="ALERTLOGFILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.base}/escheduler-alert.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${log.base}/escheduler-alert.%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
            <maxHistory>20</maxHistory>
            <maxFileSize>64MB</maxFileSize>
        </rollingPolicy>
        <encoder>
            <pattern>
                [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
            </pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="ALERTLOGFILE"/>
    </root>
</configuration>

escheduler-common

Common configuration: task queue selection and addresses, and common file directory settings.

  • common/common.properties
#task queue implementation, can choose "redis" or "zookeeper", default "zookeeper"
escheduler.queue.impl=zookeeper

#if escheduler.queue.impl=redis, configure the relevant redis settings below. redis configuration start
spring.redis.host=${redis_ip}
spring.redis.port=6379
spring.redis.maxIdle=1000
spring.redis.maxTotal=10000
#redis configuration end

# user data directory path; configure it yourself and make sure the directory exists and has read/write permissions
data.basedir.path=/xxx/xxx

# directory path for user data downloads; configure it yourself and make sure the directory exists and has read/write permissions
data.download.basedir.path=/xxx/xxx

# process execution directory; configure it yourself and make sure the directory exists and has read/write permissions
process.exec.basepath=/xxx/xxx

# base dir on hadoop hdfs where resource files are stored; make sure the directory exists on hdfs and has read/write permissions. "/escheduler" is recommended
data.store2hdfs.basepath=/escheduler

# system env path; configure it yourself and make sure the directory and file exist and have read/write/execute permissions
escheduler.env.path=/xxx/xxx/.escheduler_env.sh
escheduler.env.py=/xxx/xxx/escheduler_env.py

#resource.view.suffixs
resource.view.suffixs=txt,log,sh,conf,cfg,py,java,sql,hql,xml

# is development state? default "false"
development.state=false
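Several of the paths above must exist with the right ownership before any service starts. A minimal sketch, assuming the /xxx/xxx placeholders are replaced with real paths and escheduler is the deployment user:

# create the local working directories referenced above
mkdir -p /xxx/xxx
chown -R escheduler:escheduler /xxx/xxx
# create the HDFS base path if resource upload is enabled (run as an HDFS-privileged user)
hdfs dfs -mkdir -p /escheduler
hdfs dfs -chown escheduler /escheduler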

Environment variables for SHELL tasks

.escheduler_env.sh

#configure these paths yourself; make sure each directory exists and has read/write permissions
export HADOOP_HOME=/opt/soft/hadoop
export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/opt/soft/spark2
export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/soft/java
export HIVE_HOME=/opt/soft/hive

export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
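A quick way to validate the file is to source it in a fresh shell and confirm the expected clients resolve on PATH:

source /xxx/xxx/.escheduler_env.sh
which hadoop && which java && which hive   # each should print a path under /opt/soft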

Environment variables for Python tasks

escheduler_env.py

#configure these paths yourself; make sure each directory exists and has read/write/execute permissions

import os

HADOOP_HOME="/opt/soft/hadoop"
PYTHON_HOME="/opt/soft/python"
JAVA_HOME="/opt/soft/java"
PATH=os.environ['PATH']
PATH="%s/bin:%s/bin:%s/bin:%s"%(HADOOP_HOME,JAVA_HOME,PYTHON_HOME,PATH)

os.putenv('PATH','%s'%PATH)

Hadoop configuration file

  • common/hadoop/hadoop.properties
#please replace the content in ${xxx}
# ha or single namenode
fs.defaultFS=hdfs://${cluster_ipOrName}:8020

#resourcemanager HA; note this needs IPs, e.g. 192.168.220.188,192.168.220.189
yarn.resourcemanager.ha.rm.ids=${ip1},${ip2}

# resourcemanager path
yarn.application.status.address=http://${ip1}:8088/ws/v1/cluster/apps/%s
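Both endpoints can be verified from the machines that will run workers; a sketch using the placeholders from above:

# the namenode address from fs.defaultFS should be listable
hdfs dfs -ls hdfs://${cluster_ipOrName}:8020/
# the resourcemanager REST endpoint should return cluster info as JSON
curl -s http://${ip1}:8088/ws/v1/cluster/info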

Quartz scheduler configuration file

  • quartz.properties
#please replace the content in ${xxx}
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = EasyScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.makeSchedulerThreadDaemon = true
org.quartz.jobStore.useProperties = false

#============================================================================
# Configure ThreadPool
#============================================================================

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.makeThreadsDaemons = true
org.quartz.threadPool.threadCount = 25
org.quartz.threadPool.threadPriority = 5

#============================================================================
# Configure JobStore
#============================================================================

org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.clusterCheckinInterval = 5000
org.quartz.jobStore.dataSource = myDs

#============================================================================
# Configure Datasources  
#============================================================================

org.quartz.dataSource.myDs.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.myDs.URL = jdbc:mysql://${ip}:3306/escheduler?characterEncoding=utf8&useSSL=false
org.quartz.dataSource.myDs.user = ${username}
org.quartz.dataSource.myDs.password = ${password}
org.quartz.dataSource.myDs.maxConnections = 10
org.quartz.dataSource.myDs.validationQuery = select 1
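Quartz expects the tables created earlier by quartz.sql to match the configured tablePrefix; a quick consistency check:

# the QRTZ prefix here must match org.quartz.jobStore.tablePrefix
mysql -h ${ip} -u ${username} -p${password} -D escheduler -e "SHOW TABLES LIKE 'QRTZ%';"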

ZooKeeper configuration file

  • zookeeper.properties
#zookeeper cluster. eg. 192.168.220.188:2181,192.168.220.189:2181,192.168.220.190:2181
zookeeper.quorum=${ip1}:2181,${ip2}:2181,${ip3}:2181

#zookeeper server directory
zookeeper.escheduler.master=/escheduler/masters
zookeeper.escheduler.worker=/escheduler/workers

#zookeeper lock directory
zookeeper.escheduler.lock.master=/escheduler/lock/master
zookeeper.escheduler.lock.worker=/escheduler/lock/worker

#escheduler root directory
zookeeper.escheduler.root=/escheduler

#escheduler failover directory
zookeeper.escheduler.lock.master.failover=/escheduler/lock/failover/master
zookeeper.escheduler.lock.worker.failover=/escheduler/lock/failover/worker

#zookeeper session/connection timeouts and retry settings
zookeeper.session.timeout=300
zookeeper.connection.timeout=300
zookeeper.retry.sleep=1000
zookeeper.retry.maxtime=5
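ZooKeeper connectivity can be checked with the ruok four-letter command before starting any escheduler service:

# each quorum member should answer "imok"
echo ruok | nc ${ip1} 2181
echo ruok | nc ${ip2} 2181
echo ruok | nc ${ip3} 2181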

escheduler-dao

DAO data source configuration

  • dao/data_source.properties
#please replace the content in ${xxx}

# base spring data source configuration
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://${ip}:3306/escheduler?characterEncoding=UTF-8
spring.datasource.username=${username}
spring.datasource.password=${password}

# connection configuration
spring.datasource.initialSize=5
spring.datasource.minIdle=5
spring.datasource.maxActive=20

# max wait time for getting a connection, in milliseconds
spring.datasource.maxWait=60000

# interval in milliseconds between checks that close idle connections
spring.datasource.timeBetweenEvictionRunsMillis=60000

# minimum connection survival time, in milliseconds
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1
spring.datasource.validationQueryTimeout=3
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=true
spring.datasource.testOnReturn=false
spring.datasource.defaultAutoCommit=true

# enable PSCache, specify the PSCache size for every connection
spring.datasource.poolPreparedStatements=true
spring.datasource.maxPoolPreparedStatementPerConnectionSize=20


# data quality analysis is not currently in use. please ignore the following configuration
# task record flag
task.record.flag=false
task.record.datasource.url=jdbc:mysql://${ip}:3306/etl?characterEncoding=UTF-8
task.record.datasource.username=etl
task.record.datasource.password=xxxxx

escheduler-server

Master configuration file

  • master.properties
# master execute thread num
master.exec.threads=100

# master execute task number in parallel
master.exec.task.number=20

# master heartbeat interval
master.heartbeat.interval=8

# master commit task retry times
master.task.commit.retryTimes=5

# master commit task interval
master.task.commit.interval=100


# the master can only take on work when the cpu avg load is below this value. default value: the number of cpu cores * 2
master.max.cpuload.avg=10

# the master can only take on work when available memory is above this reserved amount. default value: physical memory * 1/10, unit is G.
master.reserved.memory=1
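These two thresholds gate whether the master picks up new work, so check the actual load and free memory on the host when choosing values; for example:

uptime    # 1/5/15-minute load averages; master.max.cpuload.avg should exceed the normal load
free -g   # free memory in GB; master.reserved.memory must stay below the typical free amount
nproc     # number of cpu cores, for the "cores * 2" default mentioned above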

Master log configuration

Note: MASTERLOGFILE uses a custom MasterLogFilter

  • master_logback.xml
<!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
<configuration scan="true" scanPeriod="120 seconds"> <!--debug="true" -->
   <property name="log.base" value="logs" />
   <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
      <encoder>
         <pattern>
            [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
         </pattern>
         <charset>UTF-8</charset>
      </encoder>
   </appender>

   <appender name="MASTERLOGFILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
      <file>${log.base}/escheduler-master.log</file>
      <filter class="cn.escheduler.server.master.log.MasterLogFilter">
         <level>INFO</level>
      </filter>
      <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
         <fileNamePattern>${log.base}/escheduler-master.%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
         <maxHistory>20</maxHistory>
         <maxFileSize>200MB</maxFileSize>
      </rollingPolicy>
      <encoder>
         <pattern>
            [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
         </pattern>
         <charset>UTF-8</charset>
      </encoder>
   </appender>

   <root level="INFO">
      <appender-ref ref="MASTERLOGFILE"/>
   </root>
</configuration>

Worker configuration file

  • worker.properties
# worker execute thread num
worker.exec.threads=100

# worker heartbeat interval
worker.heartbeat.interval=8

# number of tasks to fetch at a time
worker.fetch.task.num = 10

# the worker can only take on work when the cpu avg load is below this value. default value: the number of cpu cores * 2
worker.max.cpuload.avg=10

# the worker can only take on work when available memory is above this reserved amount. default value: physical memory * 1/6, unit is G.
worker.reserved.memory=1

Worker log configuration

Note: WORKERLOGFILE uses a custom WorkerLogFilter

TASKLOGFILE uses a custom TaskLogAppender and TaskLogFilter

  • worker_logback.xml
<!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
<configuration scan="true" scanPeriod="120 seconds">
    <property name="log.base" value="logs"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>
                [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
            </pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="TASKLOGFILE" class="cn.escheduler.server.worker.log.TaskLogAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <filter class="cn.escheduler.server.worker.log.TaskLogFilter"></filter>
        <file>${log.base}/{processDefinitionId}/{processInstanceId}/{taskInstanceId}.log</file>
        <encoder>
            <pattern>
                [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
            </pattern>
            <charset>UTF-8</charset>
        </encoder>
        <append>true</append>
    </appender>

    <appender name="WORKERLOGFILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.base}/escheduler-worker.log</file>
        <filter class="cn.escheduler.server.worker.log.WorkerLogFilter">
            <level>INFO</level>
        </filter>

        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${log.base}/escheduler-worker.%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
            <maxHistory>20</maxHistory>
            <maxFileSize>200MB</maxFileSize>
        </rollingPolicy>
             
        <encoder>
            <pattern>
                [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
            </pattern>
            <charset>UTF-8</charset>
        </encoder>
          
    </appender>


    <root level="INFO">
        <appender-ref ref="TASKLOGFILE"/>
        <appender-ref ref="WORKERLOGFILE"/>
    </root>
</configuration>

escheduler-web

Web configuration file

  • application.properties
# server port
server.port=12345

# session config
server.session.timeout=7200


server.context-path=/escheduler/

# file size limit for upload
spring.http.multipart.max-file-size=1024MB
spring.http.multipart.max-request-size=1024MB

# max http post content size
server.max-http-post-size=5000000
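Once the web server is started (see the start/stop commands below), the port and context path configured above can be verified with a simple request:

# should return an HTTP response from the escheduler web server
curl -i http://localhost:12345/escheduler/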

Web log configuration

  • webserver_logback.xml
    <!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
    <configuration scan="true" scanPeriod="120 seconds">
       <logger name="org.apache.zookeeper" level="WARN"/>
       <logger name="org.apache.hbase" level="WARN"/>
       <logger name="org.apache.hadoop" level="WARN"/>

       <property name="log.base" value="logs" />

       <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <encoder>
             <pattern>
                [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
             </pattern>
             <charset>UTF-8</charset>
          </encoder>
       </appender>

       <appender name="WEBSERVERLOGFILE"  class="ch.qos.logback.core.rolling.RollingFileAppender">
          <!-- Log level filter -->
          <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
             <level>INFO</level>
          </filter>
            <file>${log.base}/escheduler-web-server.log</file>
          <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
             <fileNamePattern>${log.base}/escheduler-web-server.%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
             <maxHistory>20</maxHistory>
             <maxFileSize>64MB</maxFileSize>
          </rollingPolicy>

          <encoder>
             <pattern>
                [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n
             </pattern>
             <charset>UTF-8</charset>
          </encoder>

       </appender>

       <root level="INFO">
          <appender-ref ref="WEBSERVERLOGFILE" />
       </root>
    </configuration>

Start and Stop Commands

  • Start/stop the Master
sh ./bin/escheduler-daemon.sh start master-server
sh ./bin/escheduler-daemon.sh stop master-server
  • Start/stop the Worker
sh ./bin/escheduler-daemon.sh start worker-server
sh ./bin/escheduler-daemon.sh stop worker-server
  • Start/stop the Web server
sh ./bin/escheduler-daemon.sh start web-server
sh ./bin/escheduler-daemon.sh stop web-server
  • Start/stop the Logger
sh ./bin/escheduler-daemon.sh start logger-server
sh ./bin/escheduler-daemon.sh stop logger-server
  • Start/stop Alert
sh ./bin/escheduler-daemon.sh start alert-server
sh ./bin/escheduler-daemon.sh stop alert-server
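Each service runs as its own JVM, so jps is a quick way to confirm what is up on a machine after starting, and the per-service log (file name as configured in the logback files above) shows startup errors:

jps                                   # one entry per started escheduler service
tail -f logs/escheduler-master.log    # file name per master_logback.xml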
    

One-Click Start/Stop Scripts

  • Deployment user setup

    1. Create the deployment user

      target/escheduler-{version}-SNAPSHOT/script/init_deploy_user.sh

    2. Configure sudo

      Because the escheduler workers execute jobs via sudo -u {linux-user}, the deployment user must have passwordless sudo privileges

    vi /etc/sudoers

    # the deployment user is the escheduler account
    escheduler  ALL=(ALL)       NOPASSWD: ALL

    # also comment out the "Defaults requiretty" line
    #Defaults requiretty
  • Initialize HDFS

    target/escheduler-{version}-SNAPSHOT/script/init_hdfs.sh

  • Installation configuration file install_config
    # base directory where the project is installed
    BASE_PATH=/opt/soft/program
    # machines to deploy to
    IPS=ark0,ark1,ark2,ark3,ark4
  • Runtime configuration file run_config

     # machines running the master service, at least 1
     MASTERS=ark0,ark1
     # machines running the worker service, at least 1
     WORKERS=ark2,ark3,ark4
     # machine running the alert service, exactly 1
     ALERTS=ark3
     # machine running the web service, exactly 1
     WEBSERVER=ark1
    
  • Initialize the installation directory

    target/escheduler-{version}-SNAPSHOT/script/init_install_path.sh

  • Copy the configured conf folder and the built escheduler-{version}-SNAPSHOT.tar.gz from target/escheduler-{version}-SNAPSHOT to the BASE_PATH directory on the main machine

    Note: the main machine must be able to ssh to the other machines without a password

  • Start all services

sh ./deploy/start_all.sh
  • Stop all services
sh ./deploy/stop_all.sh

Monitoring Service

The monitor_server.py script watches the master and worker services and restarts them if they go down.

Note: start it only after all services have been started.

nohup python -u monitor_server.py > nohup.out 2>&1 &
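To confirm the monitor itself is running, and to stop it later:

# check that the monitor process is alive (the [m] trick excludes the grep itself)
ps -ef | grep "[m]onitor_server.py"
# stop the monitor when needed
pkill -f monitor_server.py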

Viewing Logs

Logs are stored together under the logs folder:

 logs/
    ├── escheduler-alert-server.log
    ├── escheduler-master-server.log
    ├── escheduler-worker-server.log
    ├── escheduler-web-server.log
    └── escheduler-logger-server.log
