The purpose of the pseudo-cluster deployment is to deploy the DolphinScheduler service on a single machine. In this mode, DolphinScheduler's master, worker, and API server all run on the same machine.
If you are new to DolphinScheduler and want to try out its functions, we recommend you follow the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend you follow the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend you follow the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
## Preparation
## Login DolphinScheduler
Access `http://localhost:12345/dolphinscheduler/ui` and log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
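Before opening a browser, you can optionally verify that the service is reachable with a simple HTTP request against the documented UI address (a quick sanity check, not part of the official procedure):

```shell
# Expect an HTTP response once the API server has fully started
curl -I http://localhost:12345/dolphinscheduler/ui
```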
Standalone is only for a quick experience of DolphinScheduler.
If you are new to DolphinScheduler and want to try out its functions, we recommend you follow the [Standalone deployment](standalone.md). If you want to experience more complete functions and schedule massive tasks, we recommend you follow the [pseudo-cluster deployment](pseudo-cluster.md). If you want to deploy DolphinScheduler in production, we recommend you follow the [cluster deployment](cluster.md) or [Kubernetes deployment](kubernetes.md).
> **_Note:_** Standalone is recommended only for fewer than 20 workflows, because it uses an in-memory H2 database and a ZooKeeper testing server by default; too many tasks may cause instability.
> When Standalone stops or restarts, the in-memory H2 database is cleared. To use Standalone with an external database like MySQL or PostgreSQL, please see [`Database Configuration`](#database-configuration).
### Login DolphinScheduler
Access `http://localhost:12345/dolphinscheduler/ui` and log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
### Start or Stop Server
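Start and stop the standalone server with the daemon script:

```shell
# Start the standalone server
sh ./bin/dolphinscheduler-daemon.sh start standalone-server

# Stop the standalone server
sh ./bin/dolphinscheduler-daemon.sh stop standalone-server
```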
The standalone server uses an H2 database as its metadata store by default, so users do not need to start a database before setting up the server. But if you want to store metadata in another database like MySQL or PostgreSQL, you have to change some configuration. Here we use MySQL as an example to illustrate how to configure an external database:
* First of all, follow the instructions in the [pseudo-cluster deployment](pseudo-cluster.md) `Initialize the Database` section to create and initialize the database.
* Set the following environment variables in your terminal, with your database username and password in place of `{user}` and `{password}` (see the sketch after this list).
* Add the mysql-connector-java driver to `./standalone-server/libs/standalone-server/`; see the [pseudo-cluster deployment](pseudo-cluster.md) `Initialize the Database` section for where to download it.
* Start the standalone server. You are now using MySQL as the database, and your data will not be cleared when you stop or restart the standalone server.
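A minimal sketch of the environment variables from the step above, assuming the Spring datasource variable names used by recent DolphinScheduler releases (verify the exact names against your version):

```shell
# Assumed variable names; replace {user} and {password} with your credentials
export DATABASE=mysql
export SPRING_DATASOURCE_URL="jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8"
export SPRING_DATASOURCE_USERNAME={user}
export SPRING_DATASOURCE_PASSWORD={password}
```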
To prevent data loss from misoperation, it is recommended to back up your data before upgrading. How to back up depends on your environment.
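For example, if your metadata is stored in MySQL, a plain dump can serve as the backup (a hypothetical example; adjust host, credentials, and database name to your environment):

```shell
# Hypothetical backup of the DolphinScheduler metadata database
mysqldump -h127.0.0.1 -u{user} -p dolphinscheduler > dolphinscheduler_backup.sql
```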
### Download the Latest Version Installation Package

Download the latest binary distribution package from [download](/en-us/download/download.html) and put it in a directory different from the one where the current services are running. All the upgrade commands below are run in this new directory.
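For instance (a hypothetical file name; take the real one from the download page):

```shell
# Unpack the new release into its own directory, separate from the running installation
mkdir -p /opt/dolphinscheduler-upgrade
tar -xvzf apache-dolphinscheduler-<version>-bin.tar.gz -C /opt/dolphinscheduler-upgrade
cd /opt/dolphinscheduler-upgrade/apache-dolphinscheduler-<version>-bin
```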
## Upgrade
### Stop All Services of DolphinScheduler
Stop all services of DolphinScheduler according to your deployment method. If you deployed DolphinScheduler following the [cluster deployment](./installation/cluster.md), you can stop all services with the command `sh ./script/stop-all.sh`.
### Upgrade Database
Change the configuration in `./bin/env/dolphinscheduler_env.sh` (replace `{user}` and `{password}` with your database username and password), and then run the upgrade script.

Using MySQL as an example (change the values if you use another database): manually download the [mysql-connector-java driver jar](https://downloads.mysql.com/archives/c-j/) (for example `mysql-connector-java-8.0.16.jar`) and add it to the `./tools/libs` directory, then edit the `./bin/env/dolphinscheduler_env.sh` file. Otherwise, PostgreSQL is the default database.
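A sketch of the relevant lines in `./bin/env/dolphinscheduler_env.sh`, assuming the Spring datasource variable names used by recent releases; the schema upgrade script path shown here is also an assumption, so confirm it for your version:

```shell
# Assumed variable names; replace {user} and {password} with your credentials
export DATABASE=mysql
export SPRING_DATASOURCE_URL="jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8"
export SPRING_DATASOURCE_USERNAME={user}
export SPRING_DATASOURCE_PASSWORD={password}

# Run the database upgrade script (path may differ between versions)
sh ./tools/bin/upgrade-schema.sh
```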
### Modify Configuration

- If you deploy with Standalone deployment, change it according to [Standalone](./installation/standalone.md).
- If you deploy with Pseudo-Cluster deployment, change it according to the [Pseudo-Cluster](./installation/pseudo-cluster.md) section "Modify Configuration".
- If you deploy with Cluster deployment, change it according to the [Cluster](./installation/cluster.md) section "Modify Configuration".

And then run the command `sh ./bin/start-all.sh` to start all services.

## Notice
### Differences of Worker Group (Before and After Version 1.3.1 of DolphinScheduler)

The design of the worker group differs between versions before 1.3.1 and versions from 1.3.1 up to 2.0.0:
- Before version 1.3.1, worker groups can be created through the UI.
- From version 1.3.1 up to (but not including) version 2.0.0, worker groups are created by modifying the worker configuration.
#### How to Keep the Worker Group Configuration Consistent When Upgrading from a Version Before 1.3.1 to a Version Before 2.0.0
1. Check the backup database: query the `t_ds_worker_group` table, focusing on three columns: `id`, `name`, and `ip_list`.
| id | name | ip_list |
| :--- | :---: | ---: |
| 1 | service1 | 192.168.xx.10 |
| 2 | service2 | 192.168.xx.11,192.168.xx.12 |
2. Modify the worker-related configuration in `bin/env/install_config.conf`.
Assume the worker services are to be deployed on the machines below.
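For illustration, assume host names `ds1`, `ds2`, and `ds3` map to the three IPs from the table above (an assumed mapping); keeping the previous grouping, the worker configuration would be:

```shell
# Assumed mapping: ds1 -> 192.168.xx.10, ds2 -> 192.168.xx.11, ds3 -> 192.168.xx.12
workers="ds1:service1,ds2:service2,ds3:service2"
```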
#### The Worker Group has Been Enhanced in Version 1.3.2
Workers in version 1.3.1 can belong to only one worker group; from version 1.3.2 up to version 2.0.0, a worker can belong to more than one worker group. So a configuration like the following is not supported in 1.3.1 but is supported in 1.3.2:
```shell
workers="ds1:service1,ds1:service2"
```

### Execute Deploy Script

```shell
sh install.sh
```
#### Creating Worker Groups from the UI Is Restored Since Version 2.0.0

Since version 2.0.0 (inclusive), the function of creating worker groups from the web UI has been restored.