# Backend Deployment Document

There are two deployment modes for the backend:

- automatic deployment
- compile the source code and then deploy

## Preparations

Download the latest version of the installation package from [gitee download](https://gitee.com/easyscheduler/EasyScheduler/attach_files/) or [github download](https://github.com/analysys/EasyScheduler/releases): escheduler-backend-x.x.x.tar.gz (the back end, referred to as escheduler-backend) and escheduler-ui-x.x.x.tar.gz (the front end, referred to as escheduler-ui).

#### Preparations 1: Installation of basic software (install the required items yourself)

* [Mysql](http://geek.analysys.cn/topic/124) (5.5+): mandatory
* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+): mandatory
* [ZooKeeper](https://www.jianshu.com/p/de90172ea680) (3.4.6+): mandatory
* [Hadoop](https://blog.csdn.net/Evankaka/article/details/51612437) (2.6+): optional; needed if you use the resource upload function or submit MapReduce tasks (uploaded resource files are currently stored on HDFS)
* [Hive](https://staroon.pro/2017/12/09/HiveInstall/) (1.2.1): optional; needed to submit Hive tasks
* Spark (1.x, 2.x): optional; needed to submit Spark tasks
* PostgreSQL (8.2.15+): optional; needed to run PostgreSQL stored procedures

```
Note: Easy Scheduler itself does not rely on Hadoop, Hive, Spark, or PostgreSQL; it only calls their clients to run the corresponding tasks.
```
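
Since several of the items above are optional, a quick PATH check before deploying can save a failed run. The sketch below is a hypothetical convenience (the helper `check_prereqs` is not part of EasyScheduler); pass it the client commands your installation will actually use:

```shell
# Report any of the given commands that cannot be found on PATH.
check_prereqs() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || echo "not found: $cmd"
  done
}

# Example: check_prereqs java mysql hadoop hive spark-submit
```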

#### Preparations 2: Create deployment users

- Deployment users must be created on every machine that requires deployment scheduling, because the worker service executes jobs with `sudo -u {linux-user}`, so the deployment user needs passwordless sudo privileges.

```
vi /etc/sudoers

# For example, the deployment user is the escheduler account
escheduler  ALL=(ALL)  NOPASSWD: ALL

# And you need to comment out the Defaults requiretty line
#Defaults requiretty
```

#### Preparations 3: SSH Password-Free Configuration

Configure SSH password-free login between the deployment machine and the other installation machines. If you also want to install easyscheduler on the deployment machine itself, configure password-free login to localhost as well.

- [Configure SSH between the host and the other machines](http://geek.analysys.cn/topic/113)

#### Preparations 4: Database initialization

* Create databases and accounts

Execute the following commands to create the database and account:

```sql
CREATE DATABASE escheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
flush privileges;
```

* Create tables and import basic data

Modify the following attributes in ./conf/dao/data_source.properties:

```
spring.datasource.url
spring.datasource.username
spring.datasource.password
```
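
Before running the table-creation script, it can help to confirm that all three properties were actually filled in. The helper below is a hypothetical convenience (not shipped with EasyScheduler) that reports any key that is missing or empty:

```shell
# Report datasource keys that are absent or have an empty value in a
# properties file passed as the first argument.
check_datasource() {
  f="$1"
  for key in spring.datasource.url spring.datasource.username spring.datasource.password; do
    grep -q "^${key}=." "$f" || echo "unset or empty: $key"
  done
}

# Usage: check_datasource ./conf/dao/data_source.properties
```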

Execute the script for creating tables and importing basic data:

```shell
sh ./script/create_escheduler.sh
```

#### Preparations 5: Modify the deployment directory permissions and operation parameters

Structure of the escheduler-backend directory:

```
bin        : basic service startup scripts
conf       : project configuration files
lib        : project dependency jars, including individual module jars and third-party jars
script     : cluster start/stop and service-monitor start/stop scripts
sql        : SQL files the project relies on
install.sh : one-click deployment script
```

- Modify permissions so that the deployment user has operational privileges on the escheduler-backend directory (replace `deployUser` with the actual deployment user):

  `sudo chown -R deployUser:deployUser escheduler-backend`

- Modify the `.escheduler_env.sh` environment variable file in the conf/env/ directory

- Modify deployment parameters (depending on your server and business situation):

    - Modify the parameters in **install.sh**, replacing them with the values required by your business
    - The `MonitorServerState` switch variable, added in version 1.0.3, controls whether to start the self-monitoring script (it monitors master and worker status and restarts them automatically if they go offline). The default value "false" means the self-monitoring script is not started; change it to "true" if you need it.
    - The `hdfsStartupSate` switch variable controls whether HDFS is enabled. The default value "false" means HDFS is not enabled; change it to "true" if you want to use HDFS. You also need to create the HDFS root path yourself, i.e. the `hdfsPath` setting in install.sh.

- If you use HDFS-related functions, you need to copy **hdfs-site.xml** and **core-site.xml** to the conf directory
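
As an illustration of the `.escheduler_env.sh` file mentioned above, it typically exports the install locations of the clients from Preparations 1 so that tasks can find them. All paths below are placeholders; point them at your own installations:

```shell
# Hypothetical paths -- replace with the actual install locations on your hosts
export JAVA_HOME=/usr/java/jdk1.8.0_191
export HADOOP_HOME=/opt/soft/hadoop
export SPARK_HOME=/opt/soft/spark
export HIVE_HOME=/opt/soft/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$HIVE_HOME/bin:$PATH
```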

## Deployment

Automated deployment is recommended; experienced users can also deploy from source.

### Automated Deployment

- Install the zookeeper client tool:

  `pip install kazoo`

- Switch to the deployment user and run the one-click deployment script:

  `sh install.sh`

- Use the `jps` command to check whether the services have started (`jps` ships with the Java JDK)

```
MasterServer         ----- Master service
WorkerServer         ----- Worker service
LoggerServer         ----- Logger service
ApiApplicationServer ----- API service
AlertServer          ----- Alert service
```

If all services are running normally, the automatic deployment was successful.

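
The `jps` check can also be scripted. The function below is a hypothetical helper (not part of the distribution) that reads `jps`-style output on stdin and reports which of the five services are missing:

```shell
# Read `jps` output on stdin; print "all services running" or the missing names.
check_services() {
  expected="MasterServer WorkerServer LoggerServer ApiApplicationServer AlertServer"
  input=$(cat)
  missing=""
  for svc in $expected; do
    echo "$input" | grep -q "$svc" || missing="$missing $svc"
  done
  if [ -z "$missing" ]; then
    echo "all services running"
  else
    echo "missing:$missing"
  fi
}

# Usage on a deployed host: jps | check_services
```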
After a successful deployment, logs are stored in the logs folder:

```
logs/
├── escheduler-alert-server.log
├── escheduler-master-server.log
├── escheduler-worker-server.log
├── escheduler-api-server.log
└── escheduler-logger-server.log
```

### Compile source code to deploy

After downloading the source package of the release version, unzip it and change into its root directory

* Execute the compilation command:

```shell
mvn -U clean package assembly:assembly -Dmaven.test.skip=true
```

* View the output directory

After a successful compilation, ./target/escheduler-{version}/ is generated in the current directory

### Commonly used commands for starting and stopping services (for what each service does, please refer to System Architecture Design)

* Stop all services in the cluster

  `sh ./bin/stop_all.sh`

* Start all services in the cluster

  `sh ./bin/start_all.sh`

* Start and stop one master server

```shell
sh ./bin/escheduler-daemon.sh start master-server
sh ./bin/escheduler-daemon.sh stop master-server
```

* Start and stop one worker server

```shell
sh ./bin/escheduler-daemon.sh start worker-server
sh ./bin/escheduler-daemon.sh stop worker-server
```

* Start and stop the api server

```shell
sh ./bin/escheduler-daemon.sh start api-server
sh ./bin/escheduler-daemon.sh stop api-server
```

* Start and stop the logger server

```shell
sh ./bin/escheduler-daemon.sh start logger-server
sh ./bin/escheduler-daemon.sh stop logger-server
```

* Start and stop the alert server

```shell
sh ./bin/escheduler-daemon.sh start alert-server
sh ./bin/escheduler-daemon.sh stop alert-server
```
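
All five start/stop pairs above follow the same `escheduler-daemon.sh` pattern, so restarting every service on the current host can be sketched as a loop over the service names (a convenience sketch, not an official script):

```shell
# Restart each service on this host in turn, using the daemon script above.
restart_all() {
  for svc in master-server worker-server api-server logger-server alert-server; do
    sh ./bin/escheduler-daemon.sh stop "$svc"
    sh ./bin/escheduler-daemon.sh start "$svc"
  done
}

# Usage, from the escheduler-backend directory: restart_all
```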

## Database Upgrade

Database upgrade is a function added in version 1.0.2. The database can be upgraded automatically by executing the following command:

```shell
sh ./script/upgrade_escheduler.sh
```