Use relative paths for images in our docs. Previously we could not, because the website needed absolute paths from the root directory; after merging apache/dolphinscheduler-website#789 we have a convert function to handle that. Closes: #9426
3.1.0-release
Jiajie Zhong authored 3 years ago, committed by GitHub
115 changed files with 618 additions and 618 deletions
# Quick Start

* Watch the Apache DolphinScheduler Quick Start tutorial here:

[![image](../../../../img/video_cover/quick-use.png)](https://www.youtube.com/watch?v=nrF20hpCkug)

* Administrator user login

> Address: http://localhost:12345/dolphinscheduler/ui. Username and password: `admin/dolphinscheduler123`

![login](../../../../img/new_ui/dev/quick-start/login.png)
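Besides the web UI, the same credentials can be used to log in through the REST API, which is handy for scripting. A minimal sketch, assuming the standard `/dolphinscheduler/login` endpoint with `userName`/`userPassword` form parameters (response fields and cookie names can differ between versions):

```bash
# Log in and capture the session cookie; the service address matches the UI address above.
curl -s -c cookies.txt -X POST \
  -d "userName=admin&userPassword=dolphinscheduler123" \
  "http://localhost:12345/dolphinscheduler/login"
```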
* Create a queue

![create-queue](../../../../img/new_ui/dev/quick-start/create-queue.png)

* Create a tenant

![create-tenant](../../../../img/new_ui/dev/quick-start/create-tenant.png)

* Create an ordinary user

![create-user](../../../../img/new_ui/dev/quick-start/create-user.png)

* Create an alarm instance

![create-alarmInstance](../../../../img/new_ui/dev/quick-start/create-alarmInstance.png)

* Create an alarm group

![create-alarmGroup](../../../../img/new_ui/dev/quick-start/create-alarmGroup.png)

* Create a worker group

![create-workerGroup](../../../../img/new_ui/dev/quick-start/create-workerGroup.png)

* Create an environment

![create-environment](../../../../img/new_ui/dev/quick-start/create-environment.png)

* Create a token

![create-token](../../../../img/new_ui/dev/quick-start/create-token.png)
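The generated token can be used to call the DolphinScheduler REST API without logging in through the UI. A minimal sketch, assuming the token is passed in the `token` request header and that projects are listed at `/dolphinscheduler/projects` (endpoint paths and parameters vary between releases, so check the API documentation for your version):

```bash
# TOKEN is a placeholder for the value generated in the token management page.
TOKEN="your-token-here"

# Query the project list with the token header instead of a session cookie.
curl -s -H "token: ${TOKEN}" \
  "http://localhost:12345/dolphinscheduler/projects?pageNo=1&pageSize=10"
```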
* Log in as an ordinary user

> Click the user name in the upper right corner, choose "Exit", and then log back in with the ordinary user account.

* `Project Management -> Create Project -> Click on Project Name`

![project](../../../../img/new_ui/dev/quick-start/project.png)

* `Click Workflow Definition -> Create Workflow Definition -> Online Process Definition`

<p align="center">
   <img src="../../../../img/process_definition_en.png" width="60%" />
</p>

* `Run Process Definition -> Click Workflow Instance -> Click Process Instance Name -> Double-click Task Node -> View Task Execution Log`

<p align="center">
   <img src="../../../../img/log_en.png" width="60%" />
</p>
# DataX

## Overview

The DataX task type is used to execute DataX programs. For DataX nodes, the worker executes `${DATAX_HOME}/bin/datax.py` to parse the input json file.
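That json file follows DataX's standard job format: a `setting` section plus `content` entries that pair a reader with a writer. A minimal sketch of the shape (the plugin names, credentials, and tables below are placeholders; use the reader and writer that match your actual data sources):

```json
{
  "job": {
    "setting": {
      "speed": { "channel": 1 }
    },
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "root",
            "password": "******",
            "connection": [
              {
                "querySql": ["select id, name from src_table"],
                "jdbcUrl": ["jdbc:mysql://127.0.0.1:3306/src_db"]
              }
            ]
          }
        },
        "writer": {
          "name": "mysqlwriter",
          "parameter": {
            "username": "root",
            "password": "******",
            "column": ["id", "name"],
            "connection": [
              {
                "jdbcUrl": "jdbc:mysql://127.0.0.1:3306/dst_db",
                "table": ["dst_table"]
              }
            ]
          }
        }
      }
    ]
  }
}
```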
## Create Task

- Click Project Management -> Project Name -> Workflow Definition, then click the "Create Workflow" button to enter the DAG editing page.
- Drag the <img src="../../../../img/tasks/icons/datax.png" width="15"/> icon from the toolbar onto the canvas.

## Task Parameters

- **Node name**: The node name within a workflow definition must be unique.
- **Run flag**: Indicates whether this node can be scheduled normally. If it does not need to be executed, turn on the prohibition switch.
- **Description**: Describes the function of the node.
- **Task priority**: When the number of worker threads is insufficient, tasks run in order from high to low priority; tasks with the same priority run first-in, first-out.
- **Worker group**: The task is assigned to a machine in the selected worker group; selecting Default picks a worker machine at random.
- **Environment name**: The environment in which the script runs.
- **Number of failed retries**: The number of times a failed task is resubmitted.
- **Failed retry interval**: The interval, in minutes, between resubmissions of a failed task.
- **Delayed execution time**: The time, in minutes, by which task execution is delayed.
- **Timeout alarm**: Check timeout alarm and timeout failure. When the task exceeds the "timeout period", an alarm email is sent and the task execution fails.
- **Custom template**: Customize the content of the DataX node's json configuration file when the default data sources do not meet your requirements.
- **json**: The json configuration file for DataX synchronization.
- **Custom parameters**: Works like the SQL task type, while the stored procedure task sets values for the method by parameter position. Custom parameter types and data types are the same as in the stored procedure task type. The difference is that custom parameters of the SQL task type replace the `${variable}` placeholders in the SQL statement (see the example after this list).
- **Data source**: Select the data source from which the data will be extracted.
- **SQL statement**: The SQL statement used to extract data from the selected data source. The query column names are parsed automatically when the node runs and mapped to the synchronization columns of the target table; when source and target column names differ, they can be converted with column aliases.
- **Target library**: Select the target library for data synchronization.
- **Pre-sql**: SQL executed on the target library before the main SQL statement.
- **Post-sql**: SQL executed on the target library after the main SQL statement.
- **Stream limit (number of bytes)**: Limits the number of bytes in the query.
- **Stream limit (number of records)**: Limits the number of records in the query.
- **Running memory**: The minimum and maximum memory can be configured to suit the actual production environment.
- **Predecessor task**: Selecting a predecessor task sets it as upstream of the current task.
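For instance, a hedged illustration of the `${variable}` substitution mentioned above: if the task defines a custom parameter named `dt` with the value `2022-01-01` (the name, value, and table are made up for this example), a statement written as

```sql
-- ${dt} is resolved from the custom parameter before the query is sent to the reader
select id, name from ods_user where ds = '${dt}'
```

is submitted with `${dt}` already replaced, i.e. `... where ds = '2022-01-01'`.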
## Task Example

This example demonstrates importing data from Hive into MySQL.

### Configuring the DataX environment in DolphinScheduler

If you use the DataX task type in a production environment, you need to configure the required environment first. The configuration file is `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
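A minimal sketch of the entries that matter for DataX in `dolphinscheduler_env.sh`; the installation paths below are assumptions and should be replaced with the locations used on your worker machines:

```shell
# Assumed install locations for DataX and Python on the worker hosts.
export DATAX_HOME=/opt/soft/datax
export PYTHON_HOME=/opt/soft/python

# Make both available on the PATH used during task execution.
export PATH=$PYTHON_HOME/bin:$DATAX_HOME/bin:$PATH
```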
![datax_task01](../../../../img/tasks/demo/datax_task01.png)

After the environment has been configured, DolphinScheduler needs to be restarted.

### Configuring the DataX Task Node

As the default data sources do not include reading data from Hive, a custom json is required; refer to [HDFS Writer](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md). Note that partition directories exist on the HDFS path, so when importing data in real-world situations it is recommended to pass the partition as a custom parameter.
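A sketch of what such a custom json might look like, reading the Hive table's files from HDFS with `hdfsreader` and writing to MySQL with `mysqlwriter`. The file paths, field delimiter, columns, connection details, and the `${dt}` partition parameter are all assumptions for illustration:

```json
{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [
      {
        "reader": {
          "name": "hdfsreader",
          "parameter": {
            "defaultFS": "hdfs://localhost:8020",
            "path": "/user/hive/warehouse/demo.db/orders/dt=${dt}/*",
            "fileType": "text",
            "fieldDelimiter": "\u0001",
            "column": [
              { "index": 0, "type": "long" },
              { "index": 1, "type": "string" }
            ]
          }
        },
        "writer": {
          "name": "mysqlwriter",
          "parameter": {
            "writeMode": "insert",
            "username": "root",
            "password": "******",
            "column": ["id", "name"],
            "connection": [
              {
                "jdbcUrl": "jdbc:mysql://localhost:3306/demo",
                "table": ["orders"]
              }
            ]
          }
        }
      }
    ]
  }
}
```

With `dt` defined as a custom parameter on the task, the `${dt}` placeholder in the HDFS path is filled in at run time, which is the parameterized-partition approach recommended above.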
After writing the required json file, you can configure the node by following the steps in the screenshot below.

![datax_task02](../../../../img/tasks/demo/datax_task02.png)

### View Run Results

![datax_task03](../../../../img/tasks/demo/datax_task03.png)

### Notice

If the default data sources do not meet your needs, you can configure DataX's writer and reader for your actual environment in the custom template option; see https://github.com/alibaba/DataX.
# DataX Node

## Overview

The DataX task type is used to execute DataX programs. For DataX nodes, the worker parses the incoming json file by executing `${DATAX_HOME}/bin/datax.py`.

## Create Task

- Click Project Management -> Project Name -> Workflow Definition, then click the "Create Workflow" button to enter the DAG editing page.
- Drag the <img src="../../../../img/tasks/icons/datax.png" width="15"/> task node from the toolbar onto the canvas.

## Task Parameters

- Node name: Sets the name of the task node. Node names within a workflow definition must be unique.
- Run flag: Indicates whether this node can be scheduled normally. If it does not need to be executed, turn on the prohibition switch.
- Description: Describes the function of the node.
- Task priority: When the number of worker threads is insufficient, tasks run in order from high to low priority; tasks with the same priority run first-in, first-out.
- Worker group: The task is assigned to a machine in the selected worker group; selecting Default picks a worker machine at random.
- Environment name: The environment in which the script runs.
- Number of failed retries: The number of times a failed task is resubmitted.
- Failed retry interval: The interval, in minutes, between resubmissions of a failed task.
- Delayed execution time: The time, in minutes, by which task execution is delayed.
- Timeout alarm: Check timeout alarm and timeout failure; when the task exceeds the "timeout period", an alarm email is sent and the task execution fails.
- Custom template: Customize the content of the DataX node's json configuration file when the default data sources do not meet your requirements.
- json: The json configuration file for DataX synchronization.
- Custom parameters: Works like the SQL task type, while the stored procedure task sets values for the method by parameter position. Custom parameter types and data types are the same as in the stored procedure task type. The difference is that custom parameters of the SQL task type replace the `${variable}` placeholders in the SQL statement.
- Data source: Select the data source from which to extract data.
- SQL statement: The SQL statement used to extract the data. The query column names are parsed automatically when the node runs and mapped to the synchronization columns of the target table; when source and target column names differ, they can be converted with column aliases (as).
- Target library: Select the target library for data synchronization.
- Pre-sql (target library): SQL executed on the target library before the main SQL statement.
- Post-sql (target library): SQL executed on the target library after the main SQL statement.
- Stream limit (number of bytes): Limits the number of bytes in the query.
- Stream limit (number of records): Limits the number of records in the query.
- Running memory: The minimum and maximum memory can be configured to suit the actual production environment.
- Predecessor task: Selecting a predecessor task sets it as upstream of the current task.

## Task Example

This example demonstrates importing data from Hive into MySQL.

### Configuring the DataX environment in DolphinScheduler

If you use the DataX task type in a production environment, you need to configure the required environment first. The configuration file is `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.

![datax_task01](../../../../img/tasks/demo/datax_task01.png)

After the environment has been configured, DolphinScheduler needs to be restarted.
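Once `DATAX_HOME` and the Python environment are configured, you can also sanity-check the DataX installation on a worker by running a job file by hand; `/tmp/job.json` here is a placeholder path for any valid DataX job file:

```shell
# Run a DataX job directly to confirm the environment is usable by the worker.
python ${DATAX_HOME}/bin/datax.py /tmp/job.json
```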
### Configuring the DataX Task Node

As the default data sources do not include reading data from Hive, a custom json is required; refer to [HDFS Writer](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md). Note that partition directories exist on the HDFS path, so when importing data in real situations it is recommended to pass the partition as a parameter, i.e. to use custom parameters.

After writing the required json, you can configure the node by following the steps in the screenshot below.

![datax_task02](../../../../img/tasks/demo/datax_task02.png)

### View Run Results

![datax_task03](../../../../img/tasks/demo/datax_task03.png)

## Notes

If the default data sources do not meet your needs, you can configure DataX's writer and reader for your actual environment in the custom template option; see https://github.com/alibaba/DataX.
Some files are not shown because too many files changed in this diff.