diff --git a/docs/docs/en/development/api-standard.md b/docs/docs/en/development/api-standard.md
new file mode 100644
index 0000000000..7a6421cd73
--- /dev/null
+++ b/docs/docs/en/development/api-standard.md
@@ -0,0 +1,100 @@
+# API design standard
+A standardized and unified API is the cornerstone of project design. The DolphinScheduler API follows the RESTful standard. REST is currently the most popular architectural style for Internet software: it has a clear structure, conforms to standards, and is easy to understand and extend.
+
+This article uses the DolphinScheduler API as an example to explain how to construct a RESTful API.
+
+## 1. URI design
+REST stands for "Representational State Transfer". The design of a RESTful URI is based on resources. A resource corresponds to an entity on the network, for example: a piece of text, a picture, or a service, and each resource corresponds to a URI.
+
++ One kind of resource: expressed in the plural, such as `task-instances`, `groups`;
++ A single resource: expressed in the singular, or identified by an ID, such as `group`, `groups/{groupId}`;
++ Sub-resources: resources under a certain resource, such as `/instances/{instanceId}/tasks`;
++ A single sub-resource: `/instances/{instanceId}/tasks/{taskId}`;
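+
+Putting the four forms together, an illustrative resource layout looks like this (the paths are examples for illustration, not a complete list of DolphinScheduler endpoints):
+
+```
+/alert-groups                              # a kind of resource (plural)
+/alert-groups/{groupId}                    # a single resource
+/instances/{instanceId}/tasks              # sub-resources
+/instances/{instanceId}/tasks/{taskId}     # a single sub-resource
+```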
+
+## 2. Method design
+We need to locate a certain resource by URI, and then use Method or declare actions in the path suffix to reflect the operation of the resource.
+
+### ① Query - GET
+Use URI to locate the resource, and use GET to indicate query.
+
++ When the URI is a kind of resource, it means querying that kind of resource. For example, the following example indicates a paged query of `alert-groups`.
+```
+Method: GET
+/api/dolphinscheduler/alert-groups
+```
+
++ When the URI is a single resource, it means querying this resource. For example, the following example means querying the specified `alert-group`.
+```
+Method: GET
+/api/dolphinscheduler/alert-groups/{id}
+```
+
++ In addition, we can also query sub-resources based on the URI, as follows:
+```
+Method: GET
+/api/dolphinscheduler/projects/{projectId}/tasks
+```
+
+**The above examples all represent paged queries. If we need to query all data, we add `/list` after the URI to distinguish the two. Do not mix paged queries and full queries in the same API.**
+```
+Method: GET
+/api/dolphinscheduler/alert-groups/list
+```
+
+### ② Create - POST
+Use the URI to locate the resource, use POST to indicate creation, and return the created id to the requester.
+
++ create an `alert-group`:
+
+```
+Method: POST
+/api/dolphinscheduler/alert-groups
+```
+
++ creating sub-resources works the same way:
+```
+Method: POST
+/api/dolphinscheduler/alert-groups/{alertGroupId}/tasks
+```
+
+### ③ Modify - PUT
+Use URI to locate the resource, use PUT to indicate modify.
++ modify an `alert-group`
+```
+Method: PUT
+/api/dolphinscheduler/alert-groups/{alertGroupId}
+```
+
+### ④ Delete - DELETE
+Use URI to locate the resource, use DELETE to indicate delete.
+
++ delete an `alert-group`
+```
+Method: DELETE
+/api/dolphinscheduler/alert-groups/{alertGroupId}
+```
+
++ batch deletion: to delete a batch of ids, we should use POST. **(Do not use the DELETE method: the body of a DELETE request has no semantic meaning, and some gateways, proxies, and firewalls may strip the request body from a DELETE request.)**
+```
+Method: POST
+/api/dolphinscheduler/alert-groups/batch-delete
+```
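+
+An illustrative request body for the batch deletion above (the field name `ids` is an assumption for illustration, not a documented contract):
+
+```
+Method: POST
+/api/dolphinscheduler/alert-groups/batch-delete
+Body: {"ids": [1, 2, 3]}
+```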
+
+### ⑤ Others
+In addition to creating, deleting, modifying and querying, we can also locate a resource through the URI and then append an operation to it after the path, such as:
+```
+/api/dolphinscheduler/alert-groups/verify-name
+/api/dolphinscheduler/projects/{projectCode}/process-instances/{code}/view-gantt
+```
+
+## 3. Parameter design
+There are two types of parameters: request parameters and path parameters. Parameter names must use lower camel case.
+
+In the case of paging, if the page number entered by the user is less than 1, the front end should automatically reset it to 1, requesting the first page; when the backend finds that the page number entered by the user is greater than the total number of pages, it should directly return the last page.
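+
+The clamping rule above can be sketched as follows (a hypothetical helper for illustration, not actual DolphinScheduler code):
+
+```java
+public class PageClamp {
+
+    /**
+     * Clamp a requested page number into the valid range [1, totalPages]:
+     * the front end resets values below 1 to the first page, and the back
+     * end maps values beyond the total page count to the last page.
+     */
+    public static int clamp(int requestedPage, int totalPages) {
+        if (requestedPage < 1) {
+            return 1;            // front end: turn to the first page
+        }
+        if (requestedPage > totalPages) {
+            return totalPages;   // back end: return the last page
+        }
+        return requestedPage;
+    }
+}
+```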
+
+## 4. Other designs
+### base URL
+The URI of the project needs to use `/api/` as the base path, so as to identify that these APIs are under this project.
+```
+/api/dolphinscheduler
+```
\ No newline at end of file
diff --git a/docs/docs/en/development/architecture-design.md b/docs/docs/en/development/architecture-design.md
new file mode 100644
index 0000000000..09f932b90e
--- /dev/null
+++ b/docs/docs/en/development/architecture-design.md
@@ -0,0 +1,315 @@
+## Architecture Design
+Before explaining the architecture of the scheduling system, let us first understand the common terms used in it.
+
+### 1. Terminology
+
+**DAG:** Directed Acyclic Graph. Tasks in the workflow are assembled in the form of a directed acyclic graph, which is traversed topologically from the nodes with zero in-degree until there are no successor nodes. For example, the following picture:
+
+
+
+
+ dag example
+
+
+
+**Process definition**: A visualized **DAG** formed by dragging task nodes and establishing associations between them
+
+**Process instance**: A process instance is an instantiation of a process definition, which can be generated by manual start or by scheduling. Each run of a process definition generates a new process instance
+
+**Task instance**: A task instance is the instantiation of a specific task node when a process instance runs, which indicates the specific task execution status
+
+**Task type**: Currently supports SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, and DEPENDENT (dependency), and plans to support dynamic plug-in extension. Note: a **SUB_PROCESS** is itself a separate process definition that can be launched on its own
+
+**Schedule mode**: The system supports timed scheduling based on cron expressions as well as manual scheduling. Supported command types: start workflow, start execution from current node, resume fault-tolerant workflow, resume paused process, start execution from failed node, complement data, schedule, rerun, pause, stop, and resume waiting thread. Of these, **resume fault-tolerant workflow** and **resume waiting thread** are used internally by the scheduler and cannot be called externally
+
+**Timed schedule**: The system uses the **quartz** distributed scheduler and supports visual generation of cron expressions
+
+**Dependency**: The system not only supports simple predecessor/successor dependencies within a **DAG**, but also provides **task dependency** nodes, supporting **custom task dependencies between processes**
+
+**Priority**: Supports priorities for process instances and task instances. If no priority is set, the default is first-in, first-out.
+
+**Mail alert**: Supports sending **SQL task** query results by email, as well as email alerts for process instance run results and fault-tolerance alert notifications
+
+**Failure policy**: For tasks running in parallel, if a task fails, two failure policies are provided. **Continue** means the other parallel tasks keep running until the process ends; **End** means that once a failed task is found, the running parallel tasks are killed and the process ends.
+
+**Complement**: Backfills historical data, supporting two complement modes: **interval-parallel and serial**
+
+
+
+### 2.System architecture
+
+#### 2.1 System Architecture Diagram
+
+
+
+ System Architecture Diagram
+
+
+
+
+
+#### 2.2 Architectural description
+
+* **MasterServer**
+
+    MasterServer adopts a distributed, non-central design concept. MasterServer is mainly responsible for DAG task splitting, task submission monitoring, and monitoring the health status of other MasterServers and WorkerServers.
+    When the MasterServer service starts, it registers a temporary node with ZooKeeper and listens for ZooKeeper temporary node state changes for fault-tolerance processing.
+
+
+
+ ##### The service mainly contains:
+
+ - **Distributed Quartz** distributed scheduling component, mainly responsible for the start and stop operation of the scheduled task. When the quartz picks up the task, the master internally has a thread pool to be responsible for the subsequent operations of the task.
+
+ - **MasterSchedulerThread** is a scan thread that periodically scans the **command** table in the database for different business operations based on different **command types**
+
+ - **MasterExecThread** is mainly responsible for DAG task segmentation, task submission monitoring, logic processing of various command types
+
+ - **MasterTaskExecThread** is mainly responsible for task persistence
+
+
+
+* **WorkerServer**
+
+ - WorkerServer also adopts a distributed, non-central design concept. WorkerServer is mainly responsible for task execution and providing log services. When the WorkerServer service starts, it registers the temporary node with Zookeeper and maintains the heartbeat.
+
+ ##### This service contains:
+
+  - **FetchTaskThread** is mainly responsible for continuously receiving tasks from the **Task Queue** and calling the **TaskScheduleThread** executor corresponding to each task type.
+
+ - **ZooKeeper**
+
+    The MasterServer and WorkerServer nodes in the system all use the ZooKeeper service for cluster management and fault tolerance. In addition, the system performs event monitoring and distributed locking based on ZooKeeper.
+    We also implemented queues based on Redis, but we hope DolphinScheduler depends on as few components as possible, so we finally removed the Redis implementation.
+
+ - **Task Queue**
+
+    Provides the task queue operation. Currently, the queue is also implemented based on ZooKeeper. Since there is little information stored in the queue, there is no need to worry about too much data in it. In fact, we have stress-tested the queue with millions of entries, which had no effect on system stability or performance.
+
+ - **Alert**
+
+    Provides alert-related interfaces, which mainly cover the storage, query, and notification functions for alert data. The notification function has two types: **mail notification** and **SNMP (not yet implemented)**.
+
+ - **API**
+
+    The API interface layer is mainly responsible for processing requests from the front-end UI layer. The service provides RESTful APIs externally.
+    Interfaces include workflow creation, definition, query, modification, release, offline, manual start, stop, pause, resume, start execution from a given node, and more.
+
+ - **UI**
+
+ The front-end page of the system provides various visual operation interfaces of the system. For details, see the [quick start](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/quick-start.html) section.
+
+
+
+#### 2.3 Architectural Design Ideas
+
+##### I. Decentralization vs. centralization
+
+###### Centralized thinking
+
+The centralized design concept is relatively simple. The nodes in a distributed cluster are divided into two roles according to their responsibilities:
+
+
+
+
+
+- The Master's role is mainly responsible for task distribution and supervising the health status of the Slaves, and it can dynamically balance tasks across Slaves, so that no Slave node is always "busy" while others sit "idle".
+- The Worker's role is mainly responsible for task execution and maintaining the heartbeat with the Master, so that the Master can assign tasks to it.
+
+Problems with the centralized design:
+
+- Once the Master has a problem, the cluster is leaderless and will crash entirely. To solve this problem, most Master/Slave architectures adopt a primary/standby Master design, which can be hot or cold standby, with automatic or manual switching, and more and more new systems can automatically elect and switch Masters to improve availability.
+- Another problem is that if the Scheduler is on the Master, although it can support different tasks of one DAG running on different machines, it will overload the Master. If the Scheduler is on the Slave, all tasks of a DAG can only be submitted on one machine; when there are many parallel tasks, the pressure on that Slave may be high.
+
+###### Decentralization
+
+
+
+
+- In a decentralized design, there is usually no Master/Slave concept: all roles are the same and have equal status. The global Internet is a typical decentralized distributed system, where any node going down affects only a small range of features.
+- The core of decentralized design is that there is no "manager" distinct from the other nodes in the distributed system, so there is no single point of failure. However, since there is no "manager" node, each node needs to communicate with other nodes to get the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the above functions.
+- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging. Under this architecture, the managers in the cluster are dynamically elected rather than preset, and when the cluster fails, the nodes spontaneously hold "meetings" to elect new "managers" to preside over the work. The most typical cases are ZooKeeper and Etcd, which is implemented in Go.
+
+- The decentralization of DolphinScheduler is achieved by registering Masters/Workers in ZooKeeper. The Master cluster and Worker cluster have no center, and a ZooKeeper distributed lock is used to elect one Master or Worker as the "manager" to perform tasks.
+
+##### II. Distributed lock practice
+
+DolphinScheduler uses ZooKeeper distributed locks to ensure that only one Master executes the Scheduler at a time, and that only one Worker performs task submission at a time.
+
+1. The core process algorithm for obtaining distributed locks is as follows
+
+
+
+
+
+2. Scheduler thread distributed lock implementation flow chart in DolphinScheduler:
+
+
+
+
+
+##### III. Insufficient-thread loop-waiting problem
+
+- If there is no sub-process in a DAG, and the number of Commands exceeds the threshold set by the thread pool, the process waits or fails directly.
+- If many sub-processes are nested in a large DAG, the situation in the following figure produces a "deadlocked" state:
+
+
+
+
+
+In the figure above, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread in the thread pool. The entire DAG process can therefore never end, and the threads cannot be released. This forms a state of parent-child process loop waiting. At this point, the scheduling cluster is no longer available unless a new Master is started to add threads and break the deadlock.
+
+Starting a new Master just to break the deadlock seems unsatisfactory, so we proposed the following three options to reduce this risk:
+
+1. Calculate the sum of the threads of all Masters, and then calculate the number of threads required by each DAG, that is, pre-calculate before the DAG process is executed. Because the pool spans multiple Masters, the total number of threads is unlikely to be obtained in real time.
+2. Judge by the single-Master thread pool: if the pool is full, let the thread fail directly.
+3. Add a Command type for insufficient resources: if the thread pool is insufficient, the main process is suspended; when the pool later has a free thread, the process suspended for insufficient resources is woken up again.
+
+Note: The Master Scheduler thread obtains Commands in FIFO order.
+
+So we chose the third way to solve the problem of insufficient threads.
+
+##### IV. Fault Tolerant Design
+
+Fault tolerance is divided into service fault tolerance and task retry. Service fault tolerance is divided into two types: Master Fault Tolerance and Worker Fault Tolerance.
+
+###### 1. Downtime fault tolerance
+
+Service fault tolerance design relies on ZooKeeper's Watcher mechanism. The implementation principle is as follows:
+
+
+
+
+
+The Master monitors the directories of other Masters and Workers. If a remove event is detected, fault tolerance is performed for the process instance or the task instance according to the specific business logic.
+
+
+
+- Master fault tolerance flow chart:
+
+
+
+
+
+After Master fault-tolerance handling via ZooKeeper, the process instance is rescheduled by the Scheduler thread in DolphinScheduler. It traverses the DAG to find the "Running" and "Submitted Successfully" tasks, monitors the status of the task instances for "Running" tasks, and for "Submitted Successfully" tasks determines whether the task already exists in the Task Queue: if it exists, the status of the task instance is monitored; if not, the task instance is resubmitted.
+
+
+
+- Worker fault tolerance flow chart:
+
+
+
+
+
+Once the Master Scheduler thread finds a task instance marked "needs fault tolerance", it takes over the task and resubmits it.
+
+ Note: Because "network jitter" may cause a node to lose its ZooKeeper heartbeat for a short time, a remove event for the node may occur. In this case, we use the simplest approach: once a node's connection with ZooKeeper times out, its Master or Worker service is stopped directly.
+
+###### 2. Task failure retry
+
+Here we must first distinguish between the concepts of task failure retry, process failure recovery, and process failure rerun:
+
+- Task failure retry is at the task level and is performed automatically by the scheduling system. For example, if a shell task sets the number of retries to 3, the shell task will try to run up to 3 more times after failing
+- Process failure recovery is at the process level and is done manually; recovery can only be performed **from the failed node** or **from the current node**
+- Process failure rerun is also at the process level and is done manually; a rerun starts from the start node
+
+
+
+Next, back to the main topic: we divide the task nodes in the workflow into two types.
+
+- One is a business node, which corresponds to an actual script or processing statement, such as a Shell node, an MR node, a Spark node, or a dependent node.
+- The other is a logical node, which does not perform actual script or statement processing but handles the logic of the process flow, such as sub-process nodes.
+
+Each **business node** can be configured with a number of failed retries. When the task node fails, it is automatically retried until it succeeds or the configured number of retries is exceeded. A **logical node** does not support failed retries, but the tasks inside a logical node do.
+
+If a task in the workflow fails and reaches its maximum number of retries, the workflow fails and stops; the failed workflow can then be rerun manually or recovered from the failure.
+
+
+
+##### V. Task priority design
+
+In the early scheduling design, there was no priority design and only fair scheduling, so a task submitted first might complete at the same time as a task submitted later, and the priority of a process or task could not be set. We have redesigned this, and the current design is as follows:
+
+- Task processing is performed from high priority to low: **the priority of different process instances** takes precedence over **the priority within the same process instance**, which takes precedence over **the task priority within the same process**, which takes precedence over **the task submission order within the same process**.
+
+  - The specific implementation is to resolve the priority from the task instance's JSON, and then save the **process instance priority_process instance id_task priority_task id** information in the ZooKeeper task queue. When fetching from the task queue, a string comparison yields the task that needs to be executed first.
+
+  - The priority of a process definition means that some processes need to be processed before others. This can be configured when the process is started or scheduled to start. There are 5 levels: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
+
+
+
+
+
+  - The priority of a task is also divided into 5 levels: HIGHEST, HIGH, MEDIUM, LOW, and LOWEST. As shown below
+
+
+
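+The string-comparison ordering described above can be sketched as follows. This is illustrative code, not the real implementation: it assumes priorities are encoded as ordinals 0–4 with HIGHEST = 0, and that the id segments have equal width so that lexicographic order matches numeric order.
+
+```java
+import java.util.PriorityQueue;
+
+public class TaskQueueKeySketch {
+
+    /** Build a sortable key: processInstancePriority_processInstanceId_taskPriority_taskId. */
+    public static String key(int processPriority, long processInstanceId, int taskPriority, long taskId) {
+        return processPriority + "_" + processInstanceId + "_" + taskPriority + "_" + taskId;
+    }
+
+    public static void main(String[] args) {
+        PriorityQueue<String> queue = new PriorityQueue<>();  // natural string ordering
+        queue.add(key(1, 2, 0, 7));  // HIGH process, HIGHEST task
+        queue.add(key(0, 1, 2, 5));  // HIGHEST process wins regardless of task priority
+        queue.add(key(1, 2, 3, 9));  // HIGH process, LOW task
+        System.out.println(queue.poll());  // 0_1_2_5
+    }
+}
+```
+
+Note that plain string comparison only agrees with numeric order when the segments are of equal width, which is why the exact key layout matters in the real queue.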
+
+##### VI. Logback and gRPC implement log access
+
+- Since the Web (UI) and Worker are not necessarily on the same machine, viewing logs is not like querying local files. There are two options:
+ - Put the logs on the ES search engine
+ - Obtain remote log information through gRPC communication
+- Considering the lightweight nature of DolphinScheduler as much as possible, gRPC was chosen to implement remote log access.
+
+
+
+
+
+- We use a custom Logback FileAppender and Filter function to generate a log file for each task instance.
+- The main implementation of FileAppender is as follows:
+
+```java
+import ch.qos.logback.classic.spi.ILoggingEvent;
+import ch.qos.logback.core.FileAppender;
+
+/**
+ * task log appender
+ */
+public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+
+    ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null) {
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+```
+
+A log file is generated in the form `/process definition id/process instance id/task instance id.log`.
+
+- The Filter matches thread names starting with TaskLogInfo:
+- TaskLogFilter is implemented as follows:
+
+```java
+import ch.qos.logback.classic.spi.ILoggingEvent;
+import ch.qos.logback.core.filter.Filter;
+import ch.qos.logback.core.spi.FilterReply;
+
+/**
+ * task log filter
+ */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")) {
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+```
+
+
+
+### Summary
+
+Starting from scheduling, this article introduces the architecture principles and implementation ideas of DolphinScheduler, a distributed workflow scheduling system for big data. To be continued
diff --git a/docs/docs/en/development/backend/mechanism/global-parameter.md b/docs/docs/en/development/backend/mechanism/global-parameter.md
new file mode 100644
index 0000000000..53b73747d8
--- /dev/null
+++ b/docs/docs/en/development/backend/mechanism/global-parameter.md
@@ -0,0 +1,61 @@
+# Global Parameter development document
+
+After the user defines a parameter whose direction is OUT, it is saved in the localParam of the task.
+
+## Usage of parameters
+
+Get the direct predecessor nodes `preTasks` of the current `taskInstance` to be created from the DAG, get the `varPool` of each of the `preTasks`, and merge these varPools (`List<Property>`) into one `varPool`. During the merge, if parameters with the same name are found, they are handled according to the following logic:
+
+* If all the values are null, the merged value is null
+* If one and only one value is non-null, the merged value is that non-null value
+* If all the values are non-null, the value of the taskInstance with the earliest end time is taken
+
+The direction of all the merged properties is updated to IN during the merge process.
+
+The result of the merge is saved in taskInstance.varPool.
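+
+The merge rules above can be sketched as follows (`Property` here is a simplified stand-in for DolphinScheduler's parameter class, not the real implementation):
+
+```java
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+public class VarPoolMergeSketch {
+
+    /** Simplified stand-in for the real Property class. */
+    public static class Property {
+        public String prop;    // parameter name
+        public String value;   // parameter value, may be null
+        public long endTime;   // end time of the taskInstance that produced it
+
+        public Property(String prop, String value, long endTime) {
+            this.prop = prop;
+            this.value = value;
+            this.endTime = endTime;
+        }
+    }
+
+    /** Merge the varPools of all predecessor tasks into one. */
+    public static List<Property> merge(List<List<Property>> pools) {
+        Map<String, Property> merged = new LinkedHashMap<>();
+        for (List<Property> pool : pools) {
+            for (Property p : pool) {
+                Property existing = merged.get(p.prop);
+                if (existing == null || existing.value == null) {
+                    merged.put(p.prop, p);   // first value seen, or replacing a null
+                } else if (p.value != null && p.endTime < existing.endTime) {
+                    merged.put(p.prop, p);   // both non-null: earliest end time wins
+                }
+            }
+        }
+        return new ArrayList<>(merged.values());
+    }
+}
+```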
+
+The worker receives the varPool and parses it into a `Map<String, Property>`, where the key of the map is `property.prop`, i.e. the parameter name.
+
+When the processor processes the parameters, it merges the varPool, localParam, and globalParam parameters. If there are parameters with duplicate names during the merge, they are resolved according to the following priorities, the higher priority being retained and the lower replaced:
+
+* globalParam: high
+* varPool: middle
+* localParam: low
+
+Before the node content is executed, the parameters are replaced with their corresponding values using a regular expression that matches `${parameter name}`.
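+
+This substitution can be sketched as follows (an illustrative helper, not the actual DolphinScheduler implementation):
+
+```java
+import java.util.Map;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class ParamReplaceSketch {
+
+    // matches ${parameter name}
+    private static final Pattern PARAM = Pattern.compile("\\$\\{([^}]+)}");
+
+    public static String replace(String content, Map<String, String> params) {
+        Matcher m = PARAM.matcher(content);
+        StringBuffer sb = new StringBuffer();
+        while (m.find()) {
+            String value = params.get(m.group(1));
+            // leave the placeholder untouched when no parameter matches
+            m.appendReplacement(sb, Matcher.quoteReplacement(value != null ? value : m.group(0)));
+        }
+        m.appendTail(sb);
+        return sb.toString();
+    }
+}
+```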
+
+## Parameter setting
+
+Currently, only SQL and SHELL nodes support getting parameters.
+
+Get the parameters whose direction is OUT from localParam, and handle them as follows depending on the node type.
+
+### SQL node
+
+The structure returned by the SQL query is `List<Map<String, String>>`, where the elements of the List are the rows of data, the key of the Map is the column name, and the value is the corresponding column value.
+
+* If the SQL statement returns one row of data, the column names are matched against the OUT parameter names defined by the user when defining the task; a column is discarded if there is no match.
+* If the SQL statement returns multiple rows of data, the column names are matched against the OUT parameter names of type LIST defined by the user when defining the task. All rows of the corresponding column are converted to a `List<String>` as the value of this parameter; columns with no match are discarded.
+
+### SHELL node
+
+The result of the processor execution is returned as a `Map<String, String>`.
+
+The user needs to define `${setValue(key=value)}` in the output when defining the shell script.
+
+When processing the parameters, remove the `${setValue()}` wrapper and split the content by "=", with the 0th part as the key and the 1st part as the value.
+
+Similarly, match the key against the OUT parameter names defined by the user when defining the task, and use the value as the value of that parameter.
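+
+The `${setValue(key=value)}` parsing described above can be sketched as follows (an illustrative helper, not the actual implementation):
+
+```java
+public class SetValueSketch {
+
+    /** Parse one line of shell output; returns {key, value}, or null if it is not a setValue line. */
+    public static String[] parse(String line) {
+        String prefix = "${setValue(";
+        String suffix = ")}";
+        int start = line.indexOf(prefix);
+        if (start < 0 || !line.endsWith(suffix)) {
+            return null;
+        }
+        // remove the ${setValue()} wrapper, then split by "=":
+        // the 0th part is the key and the 1st part is the value
+        String body = line.substring(start + prefix.length(), line.length() - suffix.length());
+        String[] kv = body.split("=", 2);
+        return kv.length == 2 ? kv : null;
+    }
+}
+```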
+
+Return parameter processing
+
+* The result acquired from the Processor is a String.
+* Determine whether the processor result is empty, and exit if it is.
+* Determine whether the localParam is empty, and exit if it is.
+* Get the OUT parameters of the localParam, and exit if there are none.
+* Format the String as per the format described above (`List<Map<String, String>>` for SQL, `Map<String, String>` for SHELL).
+
+Assign the parameters with matched values to the varPool (a `List<Property>`, which also contains the original IN parameters)
+
+* Format the varPool as JSON and pass it to the master.
+* After the master receives the varPool, the OUT parameters are written into the localParam.
diff --git a/docs/docs/en/development/backend/mechanism/overview.md b/docs/docs/en/development/backend/mechanism/overview.md
new file mode 100644
index 0000000000..4f0d592c46
--- /dev/null
+++ b/docs/docs/en/development/backend/mechanism/overview.md
@@ -0,0 +1,6 @@
+# Overview
+
+
+
+* [Global Parameter](global-parameter.md)
+* [Switch Task type](task/switch.md)
diff --git a/docs/docs/en/development/backend/mechanism/task/switch.md b/docs/docs/en/development/backend/mechanism/task/switch.md
new file mode 100644
index 0000000000..490510405e
--- /dev/null
+++ b/docs/docs/en/development/backend/mechanism/task/switch.md
@@ -0,0 +1,8 @@
+# SWITCH Task development
+
+The switch task workflow steps are as follows:
+
+* User-defined expressions and branch information are stored in `taskParams` in the `taskDefinition`. When the switch is executed, it is formatted as `SwitchParameters`
+* `SwitchTaskExecThread` processes the expressions defined in the `switch` node from top to bottom, obtains the values of variables from `varPool`, and evaluates each expression with `javascript`. If an expression returns true, it stops checking and records the index of that expression, here called `resultConditionLocation`. The work of `SwitchTaskExecThread` is then done
+* After the `switch` task runs, if there is no error (the most common errors are a user-defined expression that is out of specification or a problem with a parameter name), `MasterExecThread.submitPostNode` obtains the downstream nodes of the `DAG` to continue execution.
+* If `DagHelper.parsePostNodes` finds that the current node (the node that has just finished its work) is a `switch` node, `resultConditionLocation` is obtained, and all branches in the SwitchParameters except `resultConditionLocation` are skipped. In this way, only the branches that need to be executed are left
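+
+The branch selection can be sketched as follows (a `BooleanSupplier` stands in for the JavaScript evaluation of each user-defined expression; this is illustrative, not the real `SwitchTaskExecThread` code):
+
+```java
+import java.util.List;
+import java.util.function.BooleanSupplier;
+
+public class SwitchBranchSketch {
+
+    /** Check conditions top to bottom; the index of the first true one is resultConditionLocation. */
+    public static int resultConditionLocation(List<BooleanSupplier> conditions) {
+        for (int i = 0; i < conditions.size(); i++) {
+            if (conditions.get(i).getAsBoolean()) {
+                return i;    // stop checking at the first expression that returns true
+            }
+        }
+        return -1;           // no expression matched
+    }
+}
+```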
diff --git a/docs/docs/en/development/backend/spi/alert.md b/docs/docs/en/development/backend/spi/alert.md
new file mode 100644
index 0000000000..d5e94bcafa
--- /dev/null
+++ b/docs/docs/en/development/backend/spi/alert.md
@@ -0,0 +1,75 @@
+### DolphinScheduler Alert SPI main design
+
+#### DolphinScheduler SPI Design
+
+DolphinScheduler is undergoing a change to a microkernel + plug-in architecture. All core capabilities, such as tasks, resource storage, and registration centers, will be designed as extension points. We hope to use SPI to improve DolphinScheduler's flexibility and extensibility.
+
+For alert-related code, please refer to the `dolphinscheduler-alert-api` module, which defines the extension interface of the alert plug-in and some basic code. When you need to implement a plug-in for related functionality, it is recommended to read the code of this module first. Of course, reading the documentation will save a lot of time, but documentation has a certain degree of lag; when documentation is missing, it is recommended to take the source code as the standard (if you are interested, we also welcome you to submit related documentation). In addition, we will hardly ever change the extension interfaces (excluding new additions) unless there is a major structural adjustment with an incompatible upgrade version, so the existing documentation should generally suffice.
+
+We use the native Java SPI. When you need to extend, you only need to pay attention to implementing the `org.apache.dolphinscheduler.alert.api.AlertChannelFactory` interface; the underlying logic, such as plug-in loading, has already been implemented by the kernel, which makes our development more focused and simple.
+
+By the way, we have adopted an excellent front-end component library, form-create, which supports generating front-end UI components based on JSON. If plug-in development involves the front end, we use JSON to generate the related front-end UI components. The parameters of the plug-in are encapsulated in `org.apache.dolphinscheduler.spi.params`, which converts all the relevant parameters into the corresponding JSON. This means you can complete the drawing of the front-end components purely in Java code (mainly forms here; we only care about the data exchanged between the front and back ends).
+
+This article mainly focuses on the design and development of Alert.
+
+#### Main Modules
+
+If you don't care about its internal design and simply want to know how to develop your own alert plug-in, you can skip this section.
+
+* dolphinscheduler-alert-api
+
+  This module is the core module of the Alert SPI. It defines the extension interface of the alert plug-in and some basic code. An extension plug-in must implement the interface defined by this module: `org.apache.dolphinscheduler.alert.api.AlertChannelFactory`
+
+* dolphinscheduler-alert-plugins
+
+  This module contains the plug-ins we currently provide, such as Email, DingTalk, Script, etc.
+
+
+#### Alert SPI main class information
+AlertChannelFactory
+The alert plug-in factory interface. All alert plug-ins need to implement this interface. It defines the name of the alert plug-in and the parameters it requires, and its create method is used to create a concrete alert plug-in instance.
+
+AlertChannel
+The interface of the alert plug-in. An alert plug-in needs to implement this interface. There is only one method, process, in this interface; the upper-level alert system calls this method and obtains the alert's return information through the AlertResult it returns.
+
+AlertData
+Alarm content information, including id, title, content and log.
+
+AlertInfo
+Alarm-related information. When the upper-level system calls an alarm plug-in instance, an instance of this class is passed to the plug-in through the process method. It contains the alert content (AlertData) and the parameters filled in on the front end for the called alert plug-in instance.
+
+AlertResult
+The return information of an alert sent by the alarm plug-in.
+
+org.apache.dolphinscheduler.spi.params
+This package contains the plug-in parameter definitions. Our front end uses the form-create library (http://www.form-create.com), which can dynamically generate the front-end UI from the JSON parameter list returned by the plug-in definition, so we don't need to care about the front end when developing an SPI plug-in.
+
+Under this package, we currently only encapsulate RadioParam, TextParam and PasswordParam, which define radio, text and password type parameters respectively.
+
+AbsPluginParams is the base class of all parameters; classes such as RadioParam inherit from it. Each DolphinScheduler alert plug-in returns a list of AbsPluginParams in its AlertChannelFactory implementation.
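+
+To make the factory contract concrete, the following is a minimal, self-contained sketch of the pattern described above. The interfaces here are simplified stand-ins — the real definitions live in `dolphinscheduler-alert-api` and use richer types such as `PluginParams` — so treat the signatures as assumptions:
+
+```java
+import java.util.List;
+
+// Simplified stand-ins for the real alert SPI types (assumptions, not the actual API).
+interface AlertChannel {
+    String process(String title, String content);
+}
+
+interface AlertChannelFactory {
+    String name();             // plug-in name shown in the UI
+    List<String> params();     // parameter definitions (the real SPI returns PluginParams)
+    AlertChannel create();     // build a concrete alert channel instance
+}
+
+// A toy "console" plug-in: the factory declares its name and parameters,
+// and creates a channel whose process() produces the alert result.
+class ConsoleAlertChannelFactory implements AlertChannelFactory {
+    @Override
+    public String name() {
+        return "Console";
+    }
+
+    @Override
+    public List<String> params() {
+        return List.of("receiver");
+    }
+
+    @Override
+    public AlertChannel create() {
+        return (title, content) -> "ALERT " + title + ": " + content;
+    }
+}
+```
+
+The upper-level alert system only ever talks to the factory and channel interfaces, which is what makes plug-in loading generic.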
+
+The detailed design of the alert SPI can be found in the issue: [Alert Plugin Design](https://github.com/apache/incubator-dolphinscheduler/issues/3049)
+
+#### Alert SPI built-in implementation
+
+* Email
+
+ Email alert notification
+
+* DingTalk
+
+ Alert for DingTalk group chat bots
+
+* EnterpriseWeChat
+
+ EnterpriseWeChat alert notifications
+
+  For related parameter configuration, refer to the EnterpriseWeChat bot documentation.
+
+* Script
+
+  We have implemented a shell script for alerting: we pass the relevant alert parameters to the script, and you can implement your alert logic in the shell. This is a good way to integrate with internal alerting applications.
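+
+  As a sketch, a minimal alert script could look like the following. The argument layout is hypothetical — which parameters are passed, and how, depends on how you configure your Script plug-in instance:
+
+  ```shell
+  #!/bin/bash
+  # Hypothetical alert script: assume the title and content arrive as
+  # positional arguments (adjust to match your plug-in instance's configuration).
+  title="${1:-test-title}"
+  content="${2:-test-content}"
+
+  # Real logic would forward this to an internal alerting system;
+  # here we only format and print the message.
+  msg="ALERT [${title}] ${content}"
+  echo "$msg"
+  ```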
+
+* SMS
+
+ SMS alerts
diff --git a/docs/docs/en/development/backend/spi/datasource.md b/docs/docs/en/development/backend/spi/datasource.md
new file mode 100644
index 0000000000..5772b4357c
--- /dev/null
+++ b/docs/docs/en/development/backend/spi/datasource.md
@@ -0,0 +1,23 @@
+## DolphinScheduler Datasource SPI main design
+
+#### How do I use data sources?
+
+The data source center supports PostgreSQL, Hive/Impala, Spark, ClickHouse and SQL Server data sources by default.
+
+If you are using a MySQL or Oracle data source, you need to place the corresponding driver package in the `lib` directory.
+
+#### How to do Datasource plugin development?
+
+The interfaces involved are:
+
+* `org.apache.dolphinscheduler.spi.datasource.DataSourceChannel`
+* `org.apache.dolphinscheduler.spi.datasource.DataSourceChannelFactory`
+* `org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient`
+
+1. First, the data source plug-in implements the above interfaces and inherits the general client. For details, refer to the implementation of existing data source plug-ins such as sqlserver and mysql; all RDBMS plug-ins are added in the same way.
+
+2. Add the driver configuration in the data source plug-in pom.xml
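+
+As a sketch, step 2 might look like the following in the plug-in's `pom.xml`, using the MySQL driver as an example (align the version and scope with the project's dependency management):
+
+```xml
+<!-- Illustrative only: declare the JDBC driver the plug-in needs -->
+<dependency>
+    <groupId>mysql</groupId>
+    <artifactId>mysql-connector-java</artifactId>
+</dependency>
+```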
+
+We provide APIs for external access to all data sources in the `dolphinscheduler-datasource-api` module.
+
+#### **Future plan**
+
+Support data sources such as Kafka, HTTP, files, SparkSQL, FlinkSQL, etc.
\ No newline at end of file
diff --git a/docs/docs/en/development/backend/spi/registry.md b/docs/docs/en/development/backend/spi/registry.md
new file mode 100644
index 0000000000..0957ff3cdd
--- /dev/null
+++ b/docs/docs/en/development/backend/spi/registry.md
@@ -0,0 +1,27 @@
+### DolphinScheduler Registry SPI Extension
+
+#### How to use?
+
+Make the following configuration (taking ZooKeeper as an example):
+
+* Registry plug-in configuration, take Zookeeper as an example (registry.properties)
+ dolphinscheduler-service/src/main/resources/registry.properties
+ ```registry.properties
+ registry.plugin.name=zookeeper
+ registry.servers=127.0.0.1:2181
+ ```
+
+For specific configuration information, refer to the parameters provided by the specific plug-in; for ZooKeeper, see `org/apache/dolphinscheduler/plugin/registry/zookeeper/ZookeeperConfiguration.java`.
+All configuration keys need to be prefixed with `registry.`; for example, `base.sleep.time.ms` should be configured as `registry.base.sleep.time.ms=100`.
+
+#### How to extend
+
+`dolphinscheduler-registry-api` defines the standard for implementing plugins. When you need to extend plugins, you only need to implement `org.apache.dolphinscheduler.registry.api.RegistryFactory`.
+
+Under the `dolphinscheduler-registry-plugin` module is the registry plugin we currently provide.
+
+#### FAQ
+
+Q: Registry connection timeout
+
+A: You can increase the relevant timeout parameters.
diff --git a/docs/docs/en/development/backend/spi/task.md b/docs/docs/en/development/backend/spi/task.md
new file mode 100644
index 0000000000..70b01d48ff
--- /dev/null
+++ b/docs/docs/en/development/backend/spi/task.md
@@ -0,0 +1,15 @@
+## DolphinScheduler Task SPI extension
+
+#### How to develop task plugins?
+
+A task plug-in implements the `org.apache.dolphinscheduler.spi.task.TaskChannel` interface, which mainly covers creating tasks (task initialization, task running, etc.) and cancelling tasks. If it is a YARN task, it also needs to extend `org.apache.dolphinscheduler.plugin.task.api.AbstractYarnTask`.
+
+We provide APIs for external access to all tasks in the `dolphinscheduler-task-api` module, while the `dolphinscheduler-spi` module is the general SPI code library. It defines all the plug-in modules, such as the alert module and the registry module, and you can read it for details.
+
+*NOTICE*
+
+Since a task plug-in involves the front-end page, and the front-end SPI has not yet been implemented, you need to implement the plug-in's front-end page separately.
+
+If there is a class conflict in the task plugin, you can use [Shade-Relocating Classes](https://maven.apache.org/plugins/maven-shade-plugin/) to solve this problem.
\ No newline at end of file
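+
+As a sketch, a relocation in the task plug-in's `pom.xml` might look like the following (the `pattern` is an arbitrary example — relocate whichever packages actually conflict):
+
+```xml
+<plugin>
+    <groupId>org.apache.maven.plugins</groupId>
+    <artifactId>maven-shade-plugin</artifactId>
+    <configuration>
+        <relocations>
+            <relocation>
+                <pattern>com.google.common</pattern>
+                <shadedPattern>org.apache.dolphinscheduler.shaded.com.google.common</shadedPattern>
+            </relocation>
+        </relocations>
+    </configuration>
+    <executions>
+        <execution>
+            <phase>package</phase>
+            <goals>
+                <goal>shade</goal>
+            </goals>
+        </execution>
+    </executions>
+</plugin>
+```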
diff --git a/docs/docs/en/development/development-environment-setup.md b/docs/docs/en/development/development-environment-setup.md
new file mode 100644
index 0000000000..ad25b8577f
--- /dev/null
+++ b/docs/docs/en/development/development-environment-setup.md
@@ -0,0 +1,159 @@
+# DolphinScheduler development
+
+## Software Requirements
+
+Before setting up the DolphinScheduler development environment, please make sure you have installed the software as below:
+
+* [Git](https://git-scm.com/downloads): DolphinScheduler version control system
+* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html): DolphinScheduler backend language
+* [Maven](http://maven.apache.org/download.cgi): Java Package Management System
+* [Node](https://nodejs.org/en/download): DolphinScheduler frontend language
+
+### Clone Git Repository
+
+Download the git repository through your git management tool; here we use git-core as an example:
+
+```shell
+mkdir dolphinscheduler
+cd dolphinscheduler
+git clone git@github.com:apache/dolphinscheduler.git
+```
+### Compile source code
+
+i. If you use a MySQL database, modify `pom.xml` in the root project and change the scope of the `mysql-connector-java` dependency to `compile`.
+
+ii. Run `mvn clean install -Prelease -Dmaven.test.skip=true`
+
+
+## Notice
+
+There are two ways to configure the DolphinScheduler development environment: standalone mode and normal mode.
+
+* [Standalone mode](#dolphinscheduler-standalone-quick-start): **Recommended**. It is more convenient for building a development environment and covers most scenarios.
+* [Normal mode](#dolphinscheduler-normal-mode): Runs separate master, worker and api servers, which covers more test scenarios than standalone and is closer to a real production environment.
+
+## DolphinScheduler Standalone Quick Start
+
+> **_Note:_** The standalone server is only for development and debugging, because it uses an H2 database and a ZooKeeper testing server, which may not be stable in production.
+> Standalone is only supported in DolphinScheduler 1.3.9 and later versions.
+
+### Git Branch Selection
+
+Use different Git branches to develop different features:
+
+* If you want to develop based on a binary package, switch the git branch to the specific release branch; for example, to develop based on 1.3.9, choose the `1.3.9-release` branch.
+* If you want to develop the latest code, choose the `dev` branch.
+
+### Start backend server
+
+Find the class `org.apache.dolphinscheduler.server.StandaloneServer` in IntelliJ IDEA and click run on its `main` function to start it.
+
+### Start frontend server
+
+Install frontend dependencies and run it
+
+```shell
+cd dolphinscheduler-ui
+npm install
+npm run start
+```
+
+Open http://localhost:12345/dolphinscheduler in a browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
+
+## DolphinScheduler Normal Mode
+
+### Prepare
+
+#### ZooKeeper
+
+Download [ZooKeeper](https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3), and extract it.
+
+* Create the data and log directories, for example `/data/zookeeper/data` and `/data/zookeeper/datalog`
+* Go to the ZooKeeper installation directory, copy the configuration file `conf/zoo_sample.cfg` to `conf/zoo.cfg`, and set `dataDir` and `dataLogDir` in `conf/zoo.cfg` to those directories
+
+ ```shell
+ # We use path /data/zookeeper/data and /data/zookeeper/datalog here as example
+ dataDir=/data/zookeeper/data
+ dataLogDir=/data/zookeeper/datalog
+ ```
+
+* Start ZooKeeper in a terminal with the command `./bin/zkServer.sh start`.
+
+#### Database
+
+DolphinScheduler's metadata is stored in a relational database; currently MySQL and PostgreSQL are supported. We use MySQL as an example: start the database and create a new database named dolphinscheduler as the DolphinScheduler metadata database.
+
+After creating the new database, run the SQL file `dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_mysql.sql` directly in MySQL to complete the database initialization.
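+
+For example, from the command line the initialization could look like this (a sketch that assumes a running MySQL server and suitable credentials):
+
+```shell
+mysql -uroot -p -e "CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8mb4;"
+mysql -uroot -p dolphinscheduler < dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_mysql.sql
+```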
+
+#### Start Backend Server
+
+The following steps will guide you through starting the DolphinScheduler backend services.
+
+##### Backend Start Prepare
+
+* Open project: use an IDE to open the project; here we use IntelliJ IDEA as an example. After opening, it will take a while for IntelliJ IDEA to download the dependencies.
+* Plugin installation (**only required for 2.0 or later**)
+
+ * Registry plug-in configuration, take Zookeeper as an example (registry.properties)
+ dolphinscheduler-service/src/main/resources/registry.properties
+ ```registry.properties
+ registry.plugin.name=zookeeper
+ registry.servers=127.0.0.1:2181
+ ```
+* File change
+  * If you use MySQL as your metadata database, you need to modify `dolphinscheduler/pom.xml` and change the `scope` of the `mysql-connector-java` dependency to `compile`. This step is not necessary when using PostgreSQL.
+ * Modify database configuration, modify the database configuration in the `dolphinscheduler-dao/src/main/resources/application-mysql.yaml`
+
+
+  Here we use MySQL with a database named `dolphinscheduler`, username `ds_user` and password `dolphinscheduler` as an example:
+ ```application-mysql.yaml
+ spring:
+ datasource:
+ driver-class-name: com.mysql.jdbc.Driver
+ url: jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
+ username: ds_user
+ password: dolphinscheduler
+ ```
+
+* Log level: add `<appender-ref ref="STDOUT"/>` to the root logger in the following configuration files so that logs are displayed on the command line:
+
+ `dolphinscheduler-server/src/main/resources/logback-worker.xml`
+
+ `dolphinscheduler-server/src/main/resources/logback-master.xml`
+
+ `dolphinscheduler-api/src/main/resources/logback-api.xml`
+
+  The result after the modification is shown below:
+
+  ```diff
+  <root level="INFO">
+  +    <appender-ref ref="STDOUT"/>
+  </root>
+  ```
+
+> **_Note:_** Only DolphinScheduler 2.0 and later versions need to install the plugin before starting the server; it is not needed before version 2.0.
+
+##### Server start
+
+Three services need to be started: MasterServer, WorkerServer and ApiApplicationServer.
+
+* MasterServer: execute the `main` function of the class `org.apache.dolphinscheduler.server.master.MasterServer` in IntelliJ IDEA, with the *VM Options* `-Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false -Dspring.profiles.active=mysql`
+* WorkerServer: execute the `main` function of the class `org.apache.dolphinscheduler.server.worker.WorkerServer` in IntelliJ IDEA, with the *VM Options* `-Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false -Dspring.profiles.active=mysql`
+* ApiApplicationServer: execute the `main` function of the class `org.apache.dolphinscheduler.api.ApiApplicationServer` in IntelliJ IDEA, with the *VM Options* `-Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api,mysql`. After it starts, you can find the Open API documentation at http://localhost:12345/dolphinscheduler/doc.html
+
+> The `mysql` in the VM Options `-Dspring.profiles.active=mysql` specifies which Spring configuration profile to activate.
+
+### Start Frontend Server
+
+Install frontend dependencies and run it
+
+```shell
+cd dolphinscheduler-ui
+npm install
+npm run start
+```
+
+Open http://localhost:12345/dolphinscheduler in a browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**.
diff --git a/docs/docs/en/development/e2e-test.md b/docs/docs/en/development/e2e-test.md
new file mode 100644
index 0000000000..3f5e26af69
--- /dev/null
+++ b/docs/docs/en/development/e2e-test.md
@@ -0,0 +1,197 @@
+# DolphinScheduler E2E Automation Test
+
+## I. Preparatory knowledge
+
+### 1. The difference between E2E Test and Unit Test
+
+E2E, which stands for "End to End", can be translated as "end-to-end" testing. It imitates the user, starting from a certain entry point and progressively performing actions until a certain job is completed. Unit tests are different: they usually test parameters, types, parameter values, the number of arguments, return values, thrown errors and so on, with the purpose of ensuring that a specific function works stably and reliably in all cases. Unit testing assumes that if all functions work correctly, then the whole product will work.
+
+In contrast, E2E tests do not emphasize covering all usage scenarios; they focus on whether a complete chain of operations can be completed. For the web front end, they are also concerned with the layout of the interface and whether the content information meets expectations.
+
+For example, the E2E test of the login page checks whether the user is able to enter credentials and log in normally, and whether the error message is correctly displayed if the login fails. It is not a major concern whether illegal input is handled.
+
+### 2. Selenium test framework
+
+[Selenium](https://www.selenium.dev) is an open-source testing tool for executing automated tests in a web browser. The framework uses WebDriver to transform commands into browser-native calls through the browser's native components. In simple words, it simulates the browser and performs selection operations on the elements of the page.
+
+A WebDriver is an API and protocol that defines a language-neutral interface for controlling the behaviour of a web browser. Every browser has a specific WebDriver implementation, called a driver. The driver is the component responsible for delegating to the browser and handling the communication between Selenium and the browser.
+
+The Selenium framework links all these components together through a user-facing interface that allows transparent work with different browser backends, enabling cross-browser and cross-platform automation.
+
+## II. E2E Test
+
+### 1. E2E-Pages
+
+DolphinScheduler's E2E tests are deployed using docker-compose. The current tests run in standalone mode and are mainly used to check basic functions such as add, delete, update and query. For further cluster validation, such as collaboration or communication mechanisms between services, refer to `deploy/docker/docker-compose.yml` for configuration.
+
+For E2E test (the front-end part), the [page model](https://www.selenium.dev/documentation/guidelines/page_object_models/) form is used, mainly to create a corresponding model for each page. The following is an example of a login page.
+
+```java
+package org.apache.dolphinscheduler.e2e.pages;
+
+import org.apache.dolphinscheduler.e2e.pages.common.NavBarPage;
+import org.apache.dolphinscheduler.e2e.pages.security.TenantPage;
+
+import org.openqa.selenium.WebElement;
+import org.openqa.selenium.remote.RemoteWebDriver;
+import org.openqa.selenium.support.FindBy;
+import org.openqa.selenium.support.ui.ExpectedConditions;
+import org.openqa.selenium.support.ui.WebDriverWait;
+
+import lombok.Getter;
+import lombok.SneakyThrows;
+
+@Getter
+public final class LoginPage extends NavBarPage {
+ @FindBy(id = "inputUsername")
+ private WebElement inputUsername;
+
+ @FindBy(id = "inputPassword")
+ private WebElement inputPassword;
+
+ @FindBy(id = "btnLogin")
+ private WebElement buttonLogin;
+
+ public LoginPage(RemoteWebDriver driver) {
+ super(driver);
+ }
+
+ @SneakyThrows
+ public TenantPage login(String username, String password) {
+ inputUsername().sendKeys(username);
+ inputPassword().sendKeys(password);
+ buttonLogin().click();
+
+ new WebDriverWait(driver, 10)
+ .until(ExpectedConditions.urlContains("/#/security"));
+
+ return new TenantPage(driver);
+ }
+}
+```
+
+During the test process, we only test the elements we need to focus on, not all elements of the page, so on the login page only the username, password and login button elements are declared. The `@FindBy` annotation is provided by the Selenium test framework to locate elements by the corresponding id or class in a Vue file.
+
+In addition, elements are not manipulated directly during testing. The general choice is to package the corresponding methods for reuse. For example, to log in, the `public TenantPage login()` method enters the given username and password into the corresponding elements and clicks the login button; when the user finishes logging in, he or she jumps to the Security Centre (which goes to the Tenant Management page by default).
+
+The goToTab method is provided in SecurityPage to test the corresponding sidebar jumps, which include TenantPage, UserPage, WorkerGroupPage and QueuePage. These pages are implemented in the same way, mainly to test that the form's input, add and delete buttons return the corresponding pages.
+
+```java
+    public <T> T goToTab(Class<T> tab) {
+ if (tab == TenantPage.class) {
+ WebElement menuTenantManageElement = new WebDriverWait(driver, 60)
+ .until(ExpectedConditions.elementToBeClickable(menuTenantManage));
+ ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menuTenantManageElement);
+ return tab.cast(new TenantPage(driver));
+ }
+ if (tab == UserPage.class) {
+ WebElement menUserManageElement = new WebDriverWait(driver, 60)
+ .until(ExpectedConditions.elementToBeClickable(menUserManage));
+ ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menUserManageElement);
+ return tab.cast(new UserPage(driver));
+ }
+ if (tab == WorkerGroupPage.class) {
+ WebElement menWorkerGroupManageElement = new WebDriverWait(driver, 60)
+ .until(ExpectedConditions.elementToBeClickable(menWorkerGroupManage));
+ ((JavascriptExecutor)driver).executeScript("arguments[0].click();", menWorkerGroupManageElement);
+ return tab.cast(new WorkerGroupPage(driver));
+ }
+ if (tab == QueuePage.class) {
+ menuQueueManage().click();
+ return tab.cast(new QueuePage(driver));
+ }
+ throw new UnsupportedOperationException("Unknown tab: " + tab.getName());
+ }
+```
+
+![SecurityPage](/img/e2e-test/SecurityPage.png)
+
+For navigation bar options jumping, the goToNav method is provided in `org/apache/dolphinscheduler/e2e/pages/common/NavBarPage.java`. The currently supported pages are: ProjectPage, SecurityPage and ResourcePage.
+
+```java
+    public <T> T goToNav(Class<T> nav) {
+ if (nav == ProjectPage.class) {
+ WebElement projectTabElement = new WebDriverWait(driver, 60)
+ .until(ExpectedConditions.elementToBeClickable(projectTab));
+ ((JavascriptExecutor)driver).executeScript("arguments[0].click();", projectTabElement);
+ return nav.cast(new ProjectPage(driver));
+ }
+
+ if (nav == SecurityPage.class) {
+ WebElement securityTabElement = new WebDriverWait(driver, 60)
+ .until(ExpectedConditions.elementToBeClickable(securityTab));
+ ((JavascriptExecutor)driver).executeScript("arguments[0].click();", securityTabElement);
+ return nav.cast(new SecurityPage(driver));
+ }
+
+ if (nav == ResourcePage.class) {
+ WebElement resourceTabElement = new WebDriverWait(driver, 60)
+ .until(ExpectedConditions.elementToBeClickable(resourceTab));
+ ((JavascriptExecutor)driver).executeScript("arguments[0].click();", resourceTabElement);
+ return nav.cast(new ResourcePage(driver));
+ }
+
+ throw new UnsupportedOperationException("Unknown nav bar");
+ }
+```
+
+### 2. E2E-Cases
+
+Current E2E test cases supported include: File Management, Project Management, Queue Management, Tenant Management, User Management, Worker Group Management and Workflow Test.
+
+![E2E_Cases](/img/e2e-test/E2E_Cases.png)
+
+The following is an example of a tenant management test. As explained earlier, we use docker-compose for deployment, so for each test case, we need to import the corresponding file in the form of an annotation.
+
+The browser is loaded using the RemoteWebDriver provided by Selenium. Before each test case starts, there is some preparation work to be done, for example logging in the user and jumping to the corresponding page (depending on the specific test case).
+
+```java
+ @BeforeAll
+ public static void setup() {
+ new LoginPage(browser)
+ .login("admin", "dolphinscheduler123")
+ .goToNav(SecurityPage.class)
+ .goToTab(TenantPage.class)
+ ;
+ }
+```
+
+When the preparation is complete, it is time to write the formal test cases. We use the @Order() annotation for modularity and to confirm the order of the tests. After the tests have run, assertions are used to determine whether they were successful; if the assertion returns true, the tenant creation succeeded. The following code can be used as a reference:
+
+```java
+ @Test
+ @Order(10)
+ void testCreateTenant() {
+ final TenantPage page = new TenantPage(browser);
+ page.create(tenant);
+
+ await().untilAsserted(() -> assertThat(page.tenantList())
+ .as("Tenant list should contain newly-created tenant")
+ .extracting(WebElement::getText)
+ .anyMatch(it -> it.contains(tenant)));
+ }
+```
+
+The remaining cases are similar and can be understood by referring to the specific source code:
+
+https://github.com/apache/dolphinscheduler/tree/dev/dolphinscheduler-e2e/dolphinscheduler-e2e-case/src/test/java/org/apache/dolphinscheduler/e2e/cases
+
+## III. Supplements
+
+When running E2E tests locally, you first need to start the local services; you can refer to this page:
+[development-environment-setup](https://dolphinscheduler.apache.org/en-us/development/development-environment-setup.html)
+
+When running E2E tests locally, the `-Dlocal=true` parameter can be configured to connect locally and facilitate changes to the UI.
+
+When running E2E tests on a machine with an `M1` chip, you can use the `-Dm1_chip=true` parameter to select container images that support `ARM64`.
+
+![Dlocal](/img/e2e-test/Dlocal.png)
+
+If a connection timeout occurs during a local run, increase the load time; 30 seconds or more is recommended.
+
+![timeout](/img/e2e-test/timeout.png)
+
+A recording of the test run will be saved as an MP4 file.
+
+![MP4](/img/e2e-test/MP4.png)
diff --git a/docs/docs/en/development/frontend-development.md b/docs/docs/en/development/frontend-development.md
new file mode 100644
index 0000000000..297a7ccee0
--- /dev/null
+++ b/docs/docs/en/development/frontend-development.md
@@ -0,0 +1,639 @@
+# Front-end development documentation
+
+### Technical selection
+```
+Vue mvvm framework
+
+Es6 ECMAScript 6.0
+
+Ans-ui Analysys-ui
+
+D3 visualization chart library
+
+JsPlumb connection plugin library
+
+Lodash high performance JavaScript utility library
+```
+
+### Development environment
+
+- #### Node installation
+Node package download (note version v12.20.2) `https://nodejs.org/download/release/v12.20.2/`
+
+- #### Front-end project construction
+Use the command line to `cd` into the `dolphinscheduler-ui` project directory and execute `npm install` to pull the project dependencies.
+
+> If `npm install` is very slow, you can set the taobao mirror
+
+```
+npm config set registry http://registry.npm.taobao.org/
+```
+
+- Modify `API_BASE` in the file `dolphinscheduler-ui/.env` to interact with the backend:
+
+```
+# back end interface address
+API_BASE = http://127.0.0.1:12345
+```
+
+> ##### ! ! ! Special attention here. If the project reports a "node-sass error" while pulling the dependency package, execute the following command and then run `npm install` again.
+
+```bash
+npm install node-sass --unsafe-perm #Install node-sass dependency separately
+```
+
+- #### Development environment operation
+- `npm start` runs the project development environment (after startup, visit http://localhost:8888)
+
+#### Front-end project release
+
+- `npm run build` packages the project (after packaging, a folder called dist is created in the root directory for publishing to Nginx online)
+
+Run the `npm run build` command to generate the package folder (dist)
+
+Copy it to the corresponding directory of the server (the front-end service's static page directory)
+
+Visit the address `http://localhost:8888`
+
+#### Start with node and daemon under Linux
+
+Install pm2 `npm install -g pm2`
+
+Execute `pm2 start npm -- run dev` in the `dolphinscheduler-ui` root directory to start the project
+
+#### Commands
+
+- Start `pm2 start npm -- run dev`
+
+- Stop `pm2 stop npm`
+
+- Delete `pm2 delete npm`
+
+- Status `pm2 list`
+
+```
+
+[root@localhost dolphinscheduler-ui]# pm2 start npm -- run dev
+[PM2] Applying action restartProcessId on app [npm](ids: 0)
+[PM2] [npm](0) ✓
+[PM2] Process successfully started
+┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
+│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
+├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
+│ npm │ 0 │ N/A │ fork │ 6168 │ online │ 31 │ 0s │ 0% │ 5.6 MB │ root │ disabled │
+└──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
+ Use `pm2 show ` to get more details about an app
+
+```
+
+### Project directory structure
+
+`build` some webpack configurations for packaging and development environment projects
+
+`node_modules` development environment node dependency package
+
+`src` project source files
+
+`src => combo` project third-party resource localization `npm run combo` specific view `build/combo.js`
+
+`src => font` font icon library; icons can be added at https://www.iconfont.cn. Note: the font library is our own secondary development and is re-introduced in `src/sass/common/_font.scss`
+
+`src => images` public image storage
+
+`src => js` js/vue
+
+`src => lib` internal components of the company (company component library can be deleted after open source)
+
+`src => sass` sass files; one page corresponds to one sass file
+
+`src => view` page files; one page corresponds to one html file
+
+```
+> Projects are developed using vue single page application (SPA)
+- All page entry files are in the `src/js/conf/${ corresponding page filename => home} index.js` entry file
+- The corresponding sass file is in `src/sass/conf/${corresponding page filename => home}/index.scss`
+- The corresponding html file is in `src/view/${corresponding page filename => home}/index.html`
+```
+
+Public modules and utils: `src/js/module`
+
+`components` => internal project common components
+
+`download` => download component
+
+`echarts` => chart component
+
+`filter` => filter and vue pipeline
+
+`i18n` => internationalization
+
+`io` => io request encapsulation based on axios
+
+`mixin` => vue mixin public part for disabled operation
+
+`permissions` => permission operation
+
+`util` => tool
+
+### System function module
+
+Home => `http://localhost:8888/#/home`
+
+Project Management => `http://localhost:8888/#/projects/list`
+```
+| Project Home
+| Workflow
+ - Workflow definition
+ - Workflow instance
+ - Task instance
+```
+
+Resource Management => `http://localhost:8888/#/resource/file`
+```
+| File Management
+| UDF Management
+ - Resource Management
+ - Function management
+```
+
+Data Source Management => `http://localhost:8888/#/datasource/list`
+
+Security Center => `http://localhost:8888/#/security/tenant`
+```
+| Tenant Management
+| User Management
+| Alarm Group Management
+ - master
+ - worker
+```
+
+User Center => `http://localhost:8888/#/user/account`
+
+## Routing and state management
+
+The project `src/js/conf/home` is divided into
+
+`pages` => route to page directory
+```
+ The page file corresponding to the routing address
+```
+
+`router` => route management
+```
+vue router, the entry file index.js in each page will be registered. Specific operations: https://router.vuejs.org/zh/
+```
+
+`store` => status management
+```
+The page corresponding to each route has a state management file divided into:
+
+actions => mapActions => Details:https://vuex.vuejs.org/zh/guide/actions.html
+
+getters => mapGetters => Details:https://vuex.vuejs.org/zh/guide/getters.html
+
+index => entrance
+
+mutations => mapMutations => Details:https://vuex.vuejs.org/zh/guide/mutations.html
+
+state => mapState => Details:https://vuex.vuejs.org/zh/guide/state.html
+
+Specific action:https://vuex.vuejs.org/zh/
+```
+
+## Specification
+## Vue specification
+##### 1.Component name
+Components are named with multiple words connected by a hyphen (-), to avoid conflicts with HTML tags and for a clearer structure.
+```
+// positive example
+export default {
+ name: 'page-article-item'
+}
+```
+
+##### 2.Component files
+The project's internal common components live in `src/js/module/components`; the folder name is the same as the file name. Subcomponents and util tools split out inside a common component are placed in the component's internal `_source` folder.
+```
+└── components
+ ├── header
+ ├── header.vue
+ └── _source
+ └── nav.vue
+ └── util.js
+ ├── conditions
+ ├── conditions.vue
+ └── _source
+ └── search.vue
+ └── util.js
+```
+
+##### 3.Prop
+When you define a Prop, you should always name it in camelCase and use kebab-case (-) when assigning a value from the parent component.
+This follows the characteristics of each language: HTML attributes are case-insensitive, so hyphenated names are friendlier there, while camelCase is more natural in JavaScript.
+
+```
+// Vue
+props: {
+ articleStatus: Boolean
+}
+// HTML
+<article-item :article-status="true"></article-item>
+```
+
+The definition of a Prop should specify its type, default value and validation as much as possible.
+
+Example:
+
+```
+props: {
+ attrM: Number,
+ attrA: {
+ type: String,
+ required: true
+ },
+ attrZ: {
+ type: Object,
+ // The default value of the array/object should be returned by a factory function
+ default: function () {
+ return {
+ msg: 'achieve you and me'
+ }
+ }
+ },
+ attrE: {
+ type: String,
+ validator: function (v) {
+ return !(['success', 'fail'].indexOf(v) === -1)
+ }
+ }
+}
+```
+
+##### 4.v-for
+When performing v-for traversal, you should always bind a key value so that rendering is more efficient when the DOM is updated.
+```
+<li v-for="item in list" :key="item.id">{{ item.title }}</li>
+```
+
+v-for should not be placed on the same element as v-if (for example: `<li v-for="item in list" v-if="item.visible">`), because v-for has a higher priority than v-if. To avoid invalid computation and rendering, move the v-if onto the container's parent element.
+```
+<ul v-if="showList">
+  <li v-for="item in list" :key="item.id">
+    {{ item.title }}
+  </li>
+</ul>
+```
+
+##### 5.v-if / v-else-if / v-else
+If the elements controlled by the same set of v-if logic are identical, Vue reuses the element for more efficient switching. To avoid unintended reuse, add a key to otherwise-identical elements to identify them.
+```
+<div v-if="hasData" key="mazey-data">
+  {{ mazeyData }}
+</div>
+<div v-else key="mazey-none">
+  no data
+</div>
+```
+
+##### 6.Instruction abbreviation
+For a unified convention, always use the directive shorthands. Using the full `v-bind` and `v-on` is not wrong; this is only a matter of consistency.
+```
+<!-- v-bind shorthand -->
+<a :href="url"></a>
+<!-- v-on shorthand -->
+<a @click="doSomething"></a>
+```
+
+##### 7.Top-level element order of single file components
+Styles are bundled into one file, so a class name defined in one .vue file will also take effect on the same name in other files. Every component should therefore have a unique top-level class name before it is created.
+Note: the sass plugin has been added to the project, so sass syntax can be written directly in a single .vue file.
+For uniformity and ease of reading, the top-level elements should be placed in the order `<template>`, `<script>`, `<style>`.
+
+```
+<template>
+  <div class="test-model">
+  </div>
+</template>
+<script>
+  export default {
+    name: 'test-model'
+  }
+</script>
+<style lang="scss" rel="stylesheet/scss">
+  .test-model {
+  }
+</style>
+```
+
+## JavaScript specification
+
+##### 1.var / let / const
+It is recommended to no longer use var; use let / const instead, preferring const. Every variable must be declared before use; only functions defined with function declarations are hoisted and may be placed anywhere.
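+
+A minimal sketch of the rule (the names here are only for demonstration):
+
+```
+// positive example: prefer const; use let only when rebinding is required
+const MAX_RETRY = 3
+let attempt = 0
+while (attempt < MAX_RETRY) {
+  attempt += 1
+}
+// counter example: var leaks out of block scope and allows redeclaration
+```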
+
+##### 2.quotes
+```
+const foo = 'after division'
+const bar = `${foo}, front-end engineer`
+```
+
+##### 3.function
+Use arrow functions for anonymous functions. When there are multiple parameters or return values, prefer object destructuring:
+```
+function getPersonInfo ({name, gender}) {
+  // ...
+  return {name, gender}
+}
+```
+Function names are uniformly camelCase. A name beginning with an uppercase letter is a constructor; a name beginning with a lowercase letter is an ordinary function, and the new operator must not be used on ordinary functions.
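+
+For example (illustrative names):
+
+```
+// uppercase initial: a constructor, used with `new`
+function Person (name) {
+  this.name = name
+}
+// lowercase initial: an ordinary function, never called with `new`
+function formatName (person) {
+  return person.name.trim()
+}
+
+const p = new Person(' Mazey ')
+const formatted = formatName(p) // 'Mazey'
+```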
+
+##### 4.object
+```
+const foo = {a: 0, b: 1}
+const bar = JSON.parse(JSON.stringify(foo))
+
+const foo = {a: 0, b: 1}
+const bar = {...foo, c: 2}
+
+const foo = {a: 3}
+Object.assign(foo, {b: 4})
+
+const myMap = new Map([])
+for (let [key, value] of myMap.entries()) {
+ // ...
+}
+```
+
+##### 5.module
+Unified management of project modules using import / export.
+```
+// lib.js
+export default {}
+
+// app.js
+import app from './lib'
+```
+
+Import is placed at the top of the file.
+
+If the module has only one output value, use `export default`; otherwise, prefer named exports.
+
+## HTML / CSS
+
+##### 1.Label
+
+Do not write the type attribute when referencing external CSS or JavaScript files. HTML5 defaults to text/css and text/javascript, so there is no need to specify them.
+```
+<link rel="stylesheet" href="reset.css">
+<script src="index.js"></script>
+```
+
+##### 2.Naming
+The naming of class and id should be semantic, so that the purpose is clear from the name alone; multiple words are joined with hyphens.
+```
+// positive example
+.test-header{
+ font-size: 20px;
+}
+```
+
+##### 3.Attribute abbreviation
+CSS attributes use abbreviations as much as possible to improve the efficiency and ease of understanding of the code.
+
+```
+// counter example
+border-width: 1px;
+border-style: solid;
+border-color: #ccc;
+
+// positive example
+border: 1px solid #ccc;
+```
+
+##### 4.Document type
+The HTML5 standard should always be used.
+
+```
+<!DOCTYPE html>
+```
+
+##### 5.Notes
+Each module file should begin with a block comment.
+```
+/**
+* @module mazey/api
+* @author Mazey
+* @description test.
+* */
+```
+
+## interface
+
+##### All interfaces return a Promise
+Note that a non-zero code is an error and must be rejected so that it can be handled in catch
+
+```
+const test = () => {
+ return new Promise((resolve, reject) => {
+ resolve({
+ a:1
+ })
+ })
+}
+
+// invocation
+test().then(res => {
+ console.log(res)
+ // {a:1}
+})
+```
+
+Normal return
+```
+{
+  code: 0,
+  data: {},
+  msg: 'success'
+}
+```
+
+Error return
+```
+{
+  code: 10000,
+  data: {},
+  msg: 'failed'
+}
+```
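+
+A sketch of how the non-zero convention can be enforced in one place, so every caller handles failures in catch (`rawRequest` below is a stand-in for the project's io helper, not its real API):
+
+```
+// rawRequest fakes a raw response for illustration
+const rawRequest = () => Promise.resolve({ code: 10000, data: {}, msg: 'failed' })
+
+const request = () =>
+  rawRequest().then(res => {
+    if (res.code !== 0) {
+      // non-zero code means an error: reject so that catch() sees it
+      return Promise.reject(new Error(res.msg))
+    }
+    return res.data
+  })
+
+request().catch(e => {
+  console.log(e.message) // 'failed'
+})
+```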
+If the interface is a POST request, the Content-Type defaults to application/x-www-form-urlencoded; if the Content-Type is changed to application/json,
+the interface parameters need to be passed in the following way
+```
+io.post('url', payload, null, null, { emulateJSON: false }).then(res => {
+ resolve(res)
+}).catch(e => {
+ reject(e)
+})
+```
+
+##### Related interface path
+
+dag related interface `src/js/conf/home/store/dag/actions.js`
+
+Data Source Center Related Interfaces `src/js/conf/home/store/datasource/actions.js`
+
+Project Management Related Interfaces `src/js/conf/home/store/projects/actions.js`
+
+Resource Center Related Interfaces `src/js/conf/home/store/resource/actions.js`
+
+Security Center Related Interfaces `src/js/conf/home/store/security/actions.js`
+
+User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
+
+## Extended development
+
+##### 1.Add node
+
+(1) First place the node's icon in the `src/js/conf/home/pages/dag/img` folder, named `toolbar_${node type defined in the backend}.png`, for example: `toolbar_SHELL.png`.
+
+(2) Find the `tasksType` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
+```
+'DEPENDENT': { // The background definition node type English name is used as the key value
+ desc: 'DEPENDENT', // tooltip desc
+ color: '#2FBFD8' // The color represented is mainly used for tree and gantt
+}
+```
+
+(3) Add a `${node type (lowercase)}.vue` file in `src/js/conf/home/pages/dag/_source/formModel/tasks`. The content of the components related to the current node is written here. Every node component must have a `_verification ()` function; after verification succeeds, the relevant data of the current component is emitted to the parent component.
+```
+/**
+ * Verification
+*/
+ _verification () {
+ // datasource subcomponent verification
+ if (!this.$refs.refDs._verifDatasource()) {
+ return false
+ }
+
+ // verification function
+ if (!this.method) {
+ this.$message.warning(`${i18n.$t('Please enter method')}`)
+ return false
+ }
+
+ // localParams subcomponent validation
+ if (!this.$refs.refLocalParams._verifProp()) {
+ return false
+ }
+ // store
+ this.$emit('on-params', {
+ type: this.type,
+ datasource: this.datasource,
+ method: this.method,
+ localParams: this.localParams
+ })
+ return true
+ }
+```
+
+(4) Common components used inside the node component are placed under `_source`, and `commcon.js` is used to configure public data.
+
+##### 2.Increase the status type
+(1) Find the `tasksState` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
+
+```
+ 'WAITTING_DEPEND': { // The backend defines the state type; the frontend uses it as the key value
+ id: 11, // front-end definition id is used as a sort
+ desc: `${i18n.$t('waiting for dependency')}`, // tooltip desc
+ color: '#5101be', // The color represented is mainly used for tree and gantt
+ icoUnicode: '', // font icon
+ isSpin: false // whether to rotate (requires code judgment)
+}
+```
+
+##### 3.Add the action bar tool
+(1) Find the `toolOper` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
+```
+{
+ code: 'pointer', // tool identifier
+ icon: '', // tool icon
+ disable: disable, // disable
+ desc: `${i18n.$t('Drag node and selected item')}` // tooltip desc
+}
+```
+
+(2) Tool classes are exported as constructors under `src/js/conf/home/pages/dag/_source/plugIn`:
+
+`downChart.js` => dag image download processing
+
+`dragZoom.js` => mouse zoom effect processing
+
+`jsPlumbHandle.js` => drag and drop line processing
+
+`util.js` => belongs to the `plugIn` tool class
+
+
+The operation is handled in the `src/js/conf/home/pages/dag/_source/dag.js` => `toolbarEvent` event.
+
+
+##### 4.Add a routing page
+
+(1) First add a routing address in route management: `src/js/conf/home/router/index.js`
+```
+{
+ path: '/test', // routing address
+ name: 'test', // alias
+ component: resolve => require(['../pages/test/index'], resolve), // route corresponding component entry file
+ meta: {
+ title: `${i18n.$t('test')} - EasyScheduler` // title display
+ }
+},
+```
+
+(2) Create a `test` folder in `src/js/conf/home/pages` and create an `index.vue` entry file in the folder.
+
+ This will give you direct access to `http://localhost:8888/#/test`
+
+
+##### 5.Add preset mailboxes
+
+Find `src/lib/localData/email.js`; the startup and scheduled email address inputs automatically pull down and match against this list.
+```
+export default ["test@analysys.com.cn","test1@analysys.com.cn","test3@analysys.com.cn"]
+```
+
+##### 6.Permission management and disabled state processing
+
+Permissions use the `userType` field (`"ADMIN_USER" / "GENERAL_USER"`) returned by the backend `getUserInfo` interface to control whether page operation buttons are `disabled`.
+
+specific operation:`src/js/module/permissions/index.js`
+
+disabled processing:`src/js/module/mixin/disabledState.js`
+
diff --git a/docs/docs/en/development/have-questions.md b/docs/docs/en/development/have-questions.md
new file mode 100644
index 0000000000..2d84759982
--- /dev/null
+++ b/docs/docs/en/development/have-questions.md
@@ -0,0 +1,65 @@
+# Have Questions?
+
+## StackOverflow
+
+For usage questions, it is recommended you use the StackOverflow tag [apache-dolphinscheduler](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler) as it is an active forum for DolphinScheduler users’ questions and answers.
+
+Some quick tips when using StackOverflow:
+
+- Prior to submitting questions, please:
+ - Search StackOverflow’s [apache-dolphinscheduler](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler) tag to see if your question has already been answered
+- Please follow the StackOverflow [code of conduct](https://stackoverflow.com/help/how-to-ask)
+- Always use the apache-dolphinscheduler tag when asking questions
+- Please do not cross-post between [StackOverflow](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler) and [GitHub issues](https://github.com/apache/dolphinscheduler/issues/new/choose)
+
+Question template:
+
+> **Describe the question**
+>
+> A clear and concise description of what the question is.
+>
+> **Which version of DolphinScheduler:**
+>
+> -[1.3.0-preview]
+>
+> **Additional context**
+>
+> Add any other context about the problem here.
+>
+> **Requirement or improvement**
+>
+> \- Please describe your requirements or improvement suggestions.
+
+For broad or opinion-based questions, requests for external resources, debugging issues, bug reports, contributing to the project, and usage scenarios, it is recommended you use [GitHub issues](https://github.com/apache/dolphinscheduler/issues/new/choose) or the dev@dolphinscheduler.apache.org mailing list.
+
+## Mailing Lists
+
+- [dev@dolphinscheduler.apache.org](https://lists.apache.org/list.html?dev@dolphinscheduler.apache.org) is for people who want to contribute code to DolphinScheduler. [(subscribe)](mailto:dev-subscribe@dolphinscheduler.apache.org?subject=(send%20this%20email%20to%20subscribe)) [(unsubscribe)](mailto:dev-unsubscribe@dolphinscheduler.apache.org?subject=(send%20this%20email%20to%20unsubscribe)) [(archives)](http://lists.apache.org/list.html?dev@dolphinscheduler.apache.org)
+
+Some quick tips when using email:
+
+- Prior to submitting questions, please:
+ - Search StackOverflow at [apache-dolphinscheduler](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler) to see if your question has already been answered
+
+- Tagging the subject line of your email will help you get a faster response, e.g. `[api-server]: How to get the open API interface?`
+
+- Tags may help identify a topic by:
+  - Component: MasterServer, ApiServer, WorkerServer, AlertServer, etc.
+ - Level: Beginner, Intermediate, Advanced
+ - Scenario: Debug, How-to
+
+- For error logs or long code examples, please use [GitHub gist](https://gist.github.com/) and include only a few lines of the pertinent code / log within the email.
+
+## Chat Rooms
+
+Chat rooms are great for quick questions or discussions on specialized topics.
+
+The following chat rooms are officially part of Apache DolphinScheduler:
+
+The Slack workspace URL: http://asf-dolphinscheduler.slack.com/.
+
+You can join through the invitation url: https://s.apache.org/dolphinscheduler-slack.
+
+This chat room is used for questions and discussions related to using DolphinScheduler.
+
+
\ No newline at end of file
diff --git a/docs/docs/zh/development/api-standard.md b/docs/docs/zh/development/api-standard.md
new file mode 100644
index 0000000000..e3597608ab
--- /dev/null
+++ b/docs/docs/zh/development/api-standard.md
@@ -0,0 +1,102 @@
+# API design standard
+A standardized and unified API is the cornerstone of project design. The DolphinScheduler API follows the RESTful standard. RESTful is currently the most popular Internet software architecture: it has a clear structure, conforms to standards, and is easy to understand and extend.
+
+This article uses the DolphinScheduler API as an example to explain how to construct a RESTful API.
+
+## 1. URI design
+REST is short for Representational State Transfer.
+
+The "representation" refers to a "resource". A resource corresponds to an entity on the network, for example: a piece of text, a picture, or a service, and each resource corresponds to a specific URI.
+
+RESTful URI design is based on resources:
++ A kind of resource: expressed in the plural, such as `task-instances`, `groups`;
++ A single resource: expressed in the singular, or by the id of one resource of a kind, such as `group`, `groups/{groupId}`;
++ Sub resources: resources under a certain resource: `/instances/{instanceId}/tasks`;
++ A single sub resource: `/instances/{instanceId}/tasks/{taskId}`;
+
+## 2. Method design
+We locate a resource through its URI, and then use the HTTP method, or an action declared as a path suffix, to express the operation on the resource.
+
+### ① Query - GET
+Use the URI to locate the resource, and GET to indicate a query.
+
++ When the URI is a kind of resource, it means querying that kind of resource. For example, the following example indicates a paged query of `alert-groups`.
+```
+Method: GET
+/api/dolphinscheduler/alert-groups
+```
+
++ When the URI is a single resource, it means querying this resource. For example, the following example means querying the corresponding `alert-group`.
+```
+Method: GET
+/api/dolphinscheduler/alert-groups/{id}
+```
+
++ In addition, we can also query sub resources based on the URI, as follows:
+```
+Method: GET
+/api/dolphinscheduler/projects/{projectId}/tasks
+```
+
+**All the queries above are paged queries. If we need to query all the data, we add `/list` after the URI to distinguish the two. Do not mix paged queries and full queries in one API.**
+```
+Method: GET
+/api/dolphinscheduler/alert-groups/list
+```
+
+### ② Create - POST
+Use the URI to locate the kind of resource to create, POST to indicate the create action, and return the created `id` to the requester.
+
++ The following example shows creating an `alert-group`:
+
+```
+Method: POST
+/api/dolphinscheduler/alert-groups
+```
+
++ Creating a sub resource is a similar operation:
+```
+Method: POST
+/api/dolphinscheduler/alert-groups/{alertGroupId}/tasks
+```
+
+### ③ Modify - PUT
+Use the URI to locate a resource, and PUT to indicate modifying it.
+```
+Method: PUT
+/api/dolphinscheduler/alert-groups/{alertGroupId}
+```
+
+### ④ Delete - DELETE
+Use the URI to locate a resource, and DELETE to indicate deleting it.
+
++ The following example shows deleting the resource corresponding to `alertGroupId`:
+```
+Method: DELETE
+/api/dolphinscheduler/alert-groups/{alertGroupId}
+```
+
++ Batch deletion: batch delete the passed-in array of ids, using the POST method. **(Do not use the DELETE method here, because the body of a DELETE request has no semantic meaning, and some gateways, proxies, and firewalls may strip the body of a DELETE request.)**
+```
+Method: POST
+/api/dolphinscheduler/alert-groups/batch-delete
+```
+
+### ⑤ Other operations
+For operations other than CRUD, we also locate the resource through the `url`, and then append the operation to the path. For example:
+```
+/api/dolphinscheduler/alert-groups/verify-name
+/api/dolphinscheduler/projects/{projectCode}/process-instances/{code}/view-gantt
+```
+
+## 3. Parameter design
+There are two kinds of parameters: request parameters (Request Param or Request Body) and path parameters (Path Param).
+
+Parameter variables must be in lowerCamelCase. In paging scenarios, if the user-supplied parameter is less than 1, the frontend should send 1 to the backend to request the first page; when the backend finds the user-supplied parameter is greater than the total number of pages, it returns the last page directly.
+
+## 4. Other design
+### Base path
+The URIs of the whole project need to use `/api/` as the base path, to mark that these APIs belong to the project, i.e.:
+```
+/api/dolphinscheduler
+```
\ No newline at end of file
diff --git a/docs/docs/zh/development/architecture-design.md b/docs/docs/zh/development/architecture-design.md
new file mode 100644
index 0000000000..6e9c9250cb
--- /dev/null
+++ b/docs/docs/zh/development/architecture-design.md
@@ -0,0 +1,302 @@
+## System Architecture Design
+Before explaining the architecture of the scheduling system, let us first get familiar with the terms commonly used in scheduling systems
+
+### 1. Glossary
+**DAG:** The full name is Directed Acyclic Graph. Task nodes in a workflow are assembled in the form of a directed acyclic graph, which is traversed topologically from nodes with an in-degree of zero until there are no successor nodes. An example is shown below:
+
+*(figure: DAG example)*
+
+**Process definition**: The visual **DAG** formed by dragging task nodes and establishing the associations between them
+
+**Process instance**: A process instance is an instantiation of a process definition, generated by manual start or scheduled trigger. Each run of a process definition produces one process instance
+
+**Task instance**: A task instance is an instantiation of a task node in a process definition, identifying the concrete execution state of a task
+
+**Task type**: Currently SHELL, SQL, SUB_PROCESS (sub-process), PROCEDURE, MR, SPARK, PYTHON, and DEPENDENT are supported, and dynamic plugin extension is planned. Note: a **SUB_PROCESS** is itself a separate process definition that can be started and executed independently
+
+**Scheduling method:** The system supports scheduled scheduling based on cron expressions and manual scheduling. Supported command types: start workflow, execute from the current node, recover fault-tolerant workflow, recover suspended process, execute from a failed node, backfill, schedule, rerun, pause, stop, recover waiting thread. Among them, **recover fault-tolerant workflow** and **recover waiting thread** are used internally by the scheduler and cannot be called externally
+
+**Scheduled scheduling**: The system adopts the **quartz** distributed scheduler and also supports visual generation of cron expressions
+
+**Dependency**: Besides the simple predecessor/successor dependencies of the **DAG**, the system also provides a **task dependency** node, supporting **custom task dependencies across processes**
+
+**Priority**: Both process instances and task instances support priorities; if not set, the default is first-in-first-out
+
+**Email alert**: Supports sending **SQL task** query results by email, and email alerts and fault-tolerance notifications for process instance run results
+
+**Failure strategy**: For tasks running in parallel, if a task fails, two failure-strategy options are provided. **Continue** means that regardless of the state of the parallel tasks, the process runs until it ends in failure. **End** means that once a failed task is found, the running parallel tasks are killed at the same time and the process ends in failure
+
+**Backfill**: Backfilling historical data, supporting **parallel and serial** backfill over an interval
+
+### 2. System Architecture
+
+#### 2.1 System architecture diagram
+
+*(figure: system architecture diagram)*
+
+#### 2.2 Architecture description
+
+* **MasterServer**
+
+    MasterServer adopts a distributed, centerless design. It is mainly responsible for DAG task splitting and task submission monitoring, and at the same time monitors the health of other MasterServers and WorkerServers.
+    When the MasterServer service starts, it registers a temporary node with Zookeeper and performs fault tolerance by listening for changes to Zookeeper temporary nodes.
+
+    ##### The service mainly contains:
+
+    - **Distributed Quartz**, the distributed scheduling component, mainly responsible for starting and stopping scheduled tasks; after quartz triggers a task, a thread pool inside the Master handles the task's subsequent operations
+
+    - **MasterSchedulerThread**, a scanning thread that periodically scans the **command** table in the database and performs different business operations according to the **command type**
+
+    - **MasterExecThread**, mainly responsible for DAG task splitting, task submission monitoring, and the logical processing of the various command types
+
+    - **MasterTaskExecThread**, mainly responsible for task persistence
+
+* **WorkerServer**
+
+    WorkerServer also adopts a distributed, centerless design. It is mainly responsible for executing tasks and providing log services. When the WorkerServer service starts, it registers a temporary node with Zookeeper and maintains a heartbeat.
+    ##### The service contains:
+    - **FetchTaskThread**, mainly responsible for continuously fetching tasks from the **Task Queue** and calling the executor corresponding to **TaskScheduleThread** according to the task type.
+
+* **ZooKeeper**
+
+    The ZooKeeper service. The MasterServer and WorkerServer nodes in the system use ZooKeeper for cluster management and fault tolerance. The system also performs event listening and distributed locking based on ZooKeeper.
+    We once implemented the queue based on Redis, but we wanted DolphinScheduler to depend on as few components as possible, so the Redis implementation was eventually removed.
+
+* **Task Queue**
+
+    Provides task queue operations; the queue is currently also implemented on Zookeeper. Since the information stored in the queue is small, there is no need to worry about too much data in the queue; in fact, we have stress-tested storing millions of items in the queue, with no impact on system stability or performance.
+
+* **Alert**
+
+    Provides alert-related interfaces, mainly including storage, query, and notification for two types of alert data. Notification supports **email** and **SNMP (not yet implemented)**.
+
+* **API**
+
+    The API layer mainly handles requests from the frontend UI. The service provides RESTful APIs to the outside in a unified way.
+    The interfaces include workflow creation, definition, query, modification, publishing, taking offline, manual start, stop, pause, resume, execution from a given node, and so on.
+
+* **UI**
+
+    The frontend pages of the system, providing the various visual operation interfaces; see the [Quick Start](https://dolphinscheduler.apache.org/zh-cn/docs/latest/user_doc/guide/quick-start.html) section for details.
+
+#### 2.3 Architecture design ideas
+
+##### I. Decentralization vs centralization
+
+###### Centralized thinking
+
+The centralized design concept is relatively simple. Nodes in a distributed cluster are divided into two roles according to their duties:
+
+- The Master is mainly responsible for distributing tasks and supervising the health of the Slaves, and can dynamically balance tasks across the Slaves so that no Slave node is "overworked" or "idle".
+- The Worker is mainly responsible for executing tasks and maintaining a heartbeat with the Master so that the Master can assign tasks to it.
+
+Problems with the centralized design:
+
+- Once the Master fails, there is no leader and the whole cluster collapses. To solve this, most Master/Slave architectures adopt an active/standby Master design, which can be hot or cold standby, with automatic or manual switchover; more and more new systems have the ability to automatically elect and switch the Master to improve availability.
+- Another problem is that if the Scheduler is on the Master, although different tasks of one DAG can run on different machines, the Master can become overloaded. If the Scheduler is on the Slave, all tasks of a DAG can only be submitted on one machine, so when there are many parallel tasks, the pressure on that Slave may be high.
+
+###### Decentralization
+
+- In a decentralized design there is usually no concept of Master/Slave; all roles are the same and their status is equal. The global Internet is a typical decentralized distributed system: any node going down affects only a very small range of functionality.
+- The core of decentralized design is that there is no "manager" distinct from the other nodes, so there is no single point of failure. But since there is no "manager" node, each node needs to communicate with other nodes to obtain the necessary machine information, and the unreliability of distributed communication greatly increases the difficulty of implementing the above functions.
+- In fact, truly decentralized distributed systems are rare. Instead, dynamically centralized distributed systems keep emerging: the manager in the cluster is dynamically elected rather than preset, and when the cluster fails, the nodes spontaneously hold a "meeting" to elect a new "manager" to preside over the work. The most typical cases are ZooKeeper and Etcd, implemented in Go.
+
+- DolphinScheduler's decentralization means Masters/Workers register with Zookeeper, making the Master cluster and Worker cluster centerless, and a Zookeeper distributed lock is used to elect one Master or Worker as a "manager" to execute tasks.
+
+##### II. Distributed lock practice
+
+DolphinScheduler uses ZooKeeper distributed locks to ensure that only one Master executes the Scheduler at a time, or that only one Worker performs task submission at a time.
+1. The core flow of acquiring a distributed lock is as follows
+
+*(figure: distributed lock acquisition flow)*
+
+2. Flow chart of the distributed lock implementation of the Scheduler thread in DolphinScheduler:
+
+*(figure: Scheduler thread distributed lock flow chart)*
+
+##### III. Insufficient-thread circular waiting problem
+
+- If a DAG has no sub-processes and the number of rows in the Command table exceeds the threshold set for the thread pool, the process waits or fails directly.
+- If a large DAG nests many sub-processes, a "dead wait" state occurs, as shown below:
+
+*(figure: circular waiting between parent and child process threads)*
+
+In the figure above, MainFlowThread waits for SubFlowThread1 to end, SubFlowThread1 waits for SubFlowThread2 to end, SubFlowThread2 waits for SubFlowThread3 to end, and SubFlowThread3 waits for a new thread in the thread pool, so the whole DAG can never finish and its threads can never be released. This forms a circular wait between child and parent processes. Unless a new Master is started to add threads and break the "deadlock", the scheduling cluster can no longer be used.
+
+Starting a new Master to break the deadlock seems unsatisfactory, so we proposed the following three options to reduce the risk:
+
+1. Sum up the threads of all Masters, and for each DAG pre-compute the number of threads it needs, i.e., before the DAG process executes. Since it is a multi-Master thread pool, the total number of threads is unlikely to be obtained in real time.
+2. Judge the single-Master thread pool: if the pool is already full, fail the thread directly.
+3. Add a resource-insufficient Command type: if the thread pool is insufficient, suspend the main process. The thread pool then has a new thread available, so a process suspended for lack of resources can be woken up and executed again.
+
+Note: the Master Scheduler thread fetches Commands in FIFO order.
+
+So we chose the third way to solve the problem of insufficient threads.
+
+
+##### IV. Fault-tolerance design
+Fault tolerance is divided into service-down fault tolerance and task retry; service-down fault tolerance is further divided into Master fault tolerance and Worker fault tolerance
+
+###### 1. Downtime fault tolerance
+
+Service fault tolerance relies on ZooKeeper's Watcher mechanism. The principle is shown below:
+
+*(figure: fault-tolerance principle based on the ZooKeeper Watcher mechanism)*
+
+The Master monitors the directories of other Masters and Workers. If a remove event is received, process instance fault tolerance or task instance fault tolerance is performed according to the specific business logic.
+
+- Master fault-tolerance flow chart:
+
+*(figure: Master fault-tolerance flow chart)*
+
+After ZooKeeper Master fault tolerance completes, the DolphinScheduler Scheduler thread reschedules: it traverses the DAG to find the "running" and "submitted successfully" tasks, monitors the task instance state for "running" tasks, and for "submitted successfully" tasks checks whether the Task Queue already contains the task; if it does, the task instance state is likewise monitored; if not, the task instance is resubmitted.
+
+- Worker fault-tolerance flow chart:
+
+*(figure: Worker fault-tolerance flow chart)*
+
+Once the Master Scheduler thread finds a task instance in the "needs fault tolerance" state, it takes over the task and resubmits it.
+
+Note: due to "network jitter", a node may lose its ZooKeeper heartbeat for a short time, triggering a remove event for the node. For this case we use the simplest approach: once a node's connection to ZooKeeper times out, the Master or Worker service is stopped directly.
+
+###### 2. Task failure retry
+
+Here we first need to distinguish between task failure retry, process failure recovery, and process failure rerun:
+
+- Task failure retry is at the task level and is performed automatically by the scheduling system. For example, if a Shell task is set to retry 3 times, after the Shell task fails it will try to run again at most 3 times
+- Process failure recovery is at the process level and is performed manually; recovery can only start **from the failed node** or **from the current node**
+- Process failure rerun is also at the process level and is performed manually; a rerun starts from the beginning node
+
+Now to the main point: we divide the task nodes in a workflow into two types.
+
+- One is the business node, which corresponds to an actual script or processing statement, such as the Shell node, MR node, Spark node, or dependent node.
+
+- The other is the logical node, which does no actual script or statement processing but only handles the logic of the process flow, such as the sub-process node.
+
+Every **business node** can be configured with a number of failure retries; when the task node fails, it automatically retries until it succeeds or the configured retry count is exceeded. **Logical nodes** do not support failure retry, but the tasks inside a logical node do.
+
+If a task in the workflow fails and reaches the maximum number of retries, the workflow fails and stops; the failed workflow can then be rerun or recovered manually
+
+
+
+##### V. Task priority design
+In the early scheduling design, without priorities and with fair scheduling, a task submitted earlier could finish at the same time as one submitted later, and there was no way to set process or task priorities. We therefore redesigned this, and the current design is as follows:
+
+- Tasks are processed from high to low in the order: **priority of different process instances** over **priority of the same process instance** over **priority of tasks within the same process** over **submission order of tasks within the same process**.
+    - Concretely, the priority is parsed from the task instance's json, and the **process instance priority_process instance id_task priority_task id** string is saved in the ZooKeeper task queue; when fetching from the task queue, a string comparison yields the task that most needs to be executed first
+
+    - The process definition priority accounts for some processes needing to be handled before others; it can be configured at process start or scheduled start. There are 5 levels: HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below
+
+*(figure: process priority configuration)*
+
+    - Task priority also has 5 levels: HIGHEST, HIGH, MEDIUM, LOW, LOWEST. As shown below
+
+*(figure: task priority configuration)*
+
+
+
+
+
+##### VI. Log access with Logback and gRPC
+
+- Since the Web (UI) and Worker are not necessarily on the same machine, viewing logs is not like reading a local file. There are two options:
+    - Put the logs on the ES search engine
+    - Obtain remote log information through gRPC communication
+
+- Considering keeping DolphinScheduler as lightweight as possible, gRPC was chosen for remote log access.
+
+*(figure: remote log access via gRPC)*
+
+- We use a custom Logback FileAppender and Filter to generate one log file per task instance.
+- The main FileAppender implementation is as follows:
+
+ ```java
+ /**
+  * task log appender
+  */
+ public class TaskLogAppender extends FileAppender<ILoggingEvent> {
+
+     ...
+
+    @Override
+    protected void append(ILoggingEvent event) {
+
+        if (currentlyActiveFile == null){
+            currentlyActiveFile = getFile();
+        }
+        String activeFile = currentlyActiveFile;
+        // thread name: taskThreadName-processDefineId_processInstanceId_taskInstanceId
+        String threadName = event.getThreadName();
+        String[] threadNameArr = threadName.split("-");
+        // logId = processDefineId_processInstanceId_taskInstanceId
+        String logId = threadNameArr[1];
+        ...
+        super.subAppend(event);
+    }
+}
+ ```
+
+Logs are generated in the form /processDefineId/processInstanceId/taskInstanceId.log
+
+- Filter on thread names starting with TaskLogInfo:
+
+- TaskLogFilter is implemented as follows:
+
+ ```java
+ /**
+  * task log filter
+  */
+public class TaskLogFilter extends Filter<ILoggingEvent> {
+
+    @Override
+    public FilterReply decide(ILoggingEvent event) {
+        if (event.getThreadName().startsWith("TaskLogInfo-")){
+            return FilterReply.ACCEPT;
+        }
+        return FilterReply.DENY;
+    }
+}
+ ```
+
+### Summary
+Starting from scheduling, this article gives a preliminary introduction to the architecture principles and implementation ideas of DolphinScheduler, a big-data distributed workflow scheduling system. To be continued
+
+
diff --git a/docs/docs/zh/development/backend/mechanism/global-parameter.md b/docs/docs/zh/development/backend/mechanism/global-parameter.md
new file mode 100644
index 0000000000..7df22bc225
--- /dev/null
+++ b/docs/docs/zh/development/backend/mechanism/global-parameter.md
@@ -0,0 +1,61 @@
+# Global Parameter development document
+
+After a user defines a parameter with direction OUT, it is saved in the task's localParam.
+
+## Use of parameters
+
+Get the direct predecessor nodes (preTasks) of the taskInstance to be created from the DAG, get the varPool of each preTask, and merge these `varPool (List<Property>)` into one varPool. During the merge, if variables with the same variable name are found, they are handled with the following logic:
+
+* If all the values are null, the merged value is null
+* If one and only one value is non-null, the merged value is that non-null value
+* If none of the values is null, take the value from the varPool of the taskInstance with the earliest endtime
+
+During the merge, the direction of all merged Properties is updated to IN
+
+The merged result is saved in taskInstance.varPool.
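+
+The merge rule above can be sketched as follows (the entry shape `{ prop, value, endTime }` is simplified for illustration; this is not the actual Java implementation):
+
+```
+// each entry: { prop, value, endTime } — endTime is the end time of the
+// taskInstance the value came from
+function mergeVarPools (pools) {
+  const merged = new Map()
+  for (const entry of pools.flat()) {
+    const prev = merged.get(entry.prop)
+    if (!prev) {
+      merged.set(entry.prop, entry)
+    } else if (prev.value === null && entry.value !== null) {
+      // exactly one non-null value: keep the non-null one
+      merged.set(entry.prop, entry)
+    } else if (prev.value !== null && entry.value !== null &&
+               entry.endTime < prev.endTime) {
+      // all values non-null: keep the one with the earliest endTime
+      merged.set(entry.prop, entry)
+    }
+  }
+  // the direction of every merged Property becomes IN
+  return [...merged.values()].map(e => ({ ...e, direct: 'IN' }))
+}
+```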
+
+After the Worker receives it, it parses the varPool into Map format, where the map key is property.prop, i.e., the variable name.
+
+When the processor handles the parameters, it merges the varPool, localParam, and globalParam parameter pools. If parameters with duplicate names exist during the merge, they are replaced according to the following priority, with higher priority retained and lower priority replaced:
+
+* `globalParam`: high
+* `varPool`: medium
+* `localParam`: low
+
+Before the node content is executed, a regular expression matches `${variable name}` and replaces it with the corresponding value.
+
+## Setting of parameters
+
+Currently, only SQL and SHELL nodes support retrieving parameters.
+Get the parameters with direction OUT from localParam, and handle them as follows according to the node type.
+
+### SQL node
+
+The structure returned by the parameter is List<Map<String, String>>
+
+The elements of the List are each row of data; the key of the Map is the column name, and the value is the value of that column
+
+* If the SQL statement returns one row of data, match the OUT parameter names defined by the user when defining the task with the column names; discard parameters with no match.
+* If the SQL statement returns multiple rows, match the OUT parameter names of type LIST, defined by the user when defining the task, with the column names, and convert all rows of the corresponding column into a `List<String>` as the value of the parameter. Discard parameters with no match.
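+
+The column-matching rule can be sketched as follows (the data shapes are assumed for illustration; this is not the actual implementation):
+
+```
+// rows: [{ column: value, ... }]; outParams: [{ prop, type }]
+function matchSqlOutParams (rows, outParams) {
+  const result = {}
+  for (const p of outParams) {
+    if (rows.length === 1 && p.prop in rows[0]) {
+      result[p.prop] = rows[0][p.prop] // single row: scalar value
+    } else if (rows.length > 1 && p.type === 'LIST' && p.prop in rows[0]) {
+      result[p.prop] = rows.map(r => r[p.prop]) // multiple rows: the column as a list
+    }
+    // no matching column: the parameter is discarded
+  }
+  return result
+}
+```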
+
+### SHELL node
+
+The result returned by the processor execution is `Map<String, String>`
+
+When defining the shell script, the user needs to define `${setValue(key=value)}` in the output
+
+When processing the parameter, remove `${setValue()}` and split by "=": part 0 is the key and part 1 is the value.
+
+Likewise, match the OUT parameter names the user defined in the task with the keys, and use the value as the value of the parameter.
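+
+The `${setValue(key=value)}` extraction can be sketched as follows (illustrative only, not the actual implementation):
+
+```
+function parseSetValues (output) {
+  const result = {}
+  const re = /\$\{setValue\(([^)]*)\)\}/g
+  let m
+  while ((m = re.exec(output)) !== null) {
+    const idx = m[1].indexOf('=')
+    if (idx > 0) {
+      // part 0 is the key, the rest is the value
+      result[m[1].slice(0, idx)] = m[1].slice(idx + 1)
+    }
+  }
+  return result
+}
+```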
+
+Return-parameter processing
+
+* The processor result obtained is a String
+* If the processor result is empty, exit
+* If localParam is empty, exit
+* Get the OUT parameters in localParam; if there are none, exit
+* Format the String as described above (List<Map<String, String>> for SQL, Map<String, String> for shell)
+* Assign the matched parameter values to the varPool (List<Property>, which includes the original IN parameters)
+
+The varPool is formatted as json and passed to the master.
+After receiving the varPool, the Master writes the OUT parameters back into localParam.
diff --git a/docs/docs/zh/development/backend/mechanism/overview.md b/docs/docs/zh/development/backend/mechanism/overview.md
new file mode 100644
index 0000000000..22bed2737f
--- /dev/null
+++ b/docs/docs/zh/development/backend/mechanism/overview.md
@@ -0,0 +1,6 @@
+# Overview
+
+
+
+* [Global parameters](global-parameter.md)
+* [switch task type](task/switch.md)
diff --git a/docs/docs/zh/development/backend/mechanism/task/switch.md b/docs/docs/zh/development/backend/mechanism/task/switch.md
new file mode 100644
index 0000000000..27ed7f9cfa
--- /dev/null
+++ b/docs/docs/zh/development/backend/mechanism/task/switch.md
@@ -0,0 +1,8 @@
+# SWITCH task type development document
+
+The workflow of the Switch task type is as follows
+
+* The user-defined expressions and the branch-flow information are stored in taskParams in the taskdefinition. When the switch is executed, it is formatted as SwitchParameters.
+* SwitchTaskExecThread processes the expressions defined in the switch from top to bottom (the order of the expressions defined by the user on the page), fetches variable values from the varPool, and parses each expression with js. If an expression returns true, it stops checking and records the position of that expression, here recorded as resultConditionLocation. The task of SwitchTaskExecThread then ends.
+* After the switch node finishes running, if no error occurred (most commonly a non-conforming user-defined expression or a problem with a parameter name), MasterExecThread.submitPostNode fetches the downstream nodes of the DAG to continue execution.
+* If DagHelper.parsePostNodes finds that the current node (the one that just finished successfully) is a switch node, it fetches resultConditionLocation and skips all branches in SwitchParameters except resultConditionLocation. This leaves only the branch that needs to be executed.
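+
+The branch-selection step can be sketched as follows (each condition is a plain predicate here for illustration; in DolphinScheduler the expression string is evaluated by a js engine):
+
+```
+function resolveConditionLocation (conditions, varPool) {
+  for (let i = 0; i < conditions.length; i++) {
+    if (conditions[i](varPool)) {
+      return i // stop at the first expression that returns true
+    }
+  }
+  return -1 // no branch matched
+}
+```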
diff --git a/docs/docs/zh/development/backend/spi/alert.md b/docs/docs/zh/development/backend/spi/alert.md
new file mode 100644
index 0000000000..f4b18a5ed2
--- /dev/null
+++ b/docs/docs/zh/development/backend/spi/alert.md
@@ -0,0 +1,71 @@
+### DolphinScheduler Alert SPI main design
+
+#### DolphinScheduler SPI design
+
+DolphinScheduler is undergoing a micro-kernel + plugin architecture change. All core capabilities such as tasks, resource storage, and the registry will be designed as extension points. Through SPI we hope to improve the flexibility and friendliness (extensibility) of DolphinScheduler itself.
+
+For alert-related code, please refer to the `dolphinscheduler-alert-api` module. This module defines the interfaces for alert plugin extension and some basic code. When we need to implement a pluginized feature, it is recommended to read this code first. Of course, it is even better to read the documentation, which saves a lot of time; but documentation lags somewhat, and when it is missing, take the source code as the standard (if you are interested, we also welcome you to submit documentation). In addition, we will hardly change the extension interfaces (excluding additions) except for major architectural adjustments with incompatible upgrades, so the existing documentation is generally sufficient.
+
+We adopt native JAVA-SPI. When you need to extend, you actually only need to focus on extending the `org.apache.dolphinscheduler.alert.api.AlertChannelFactory` interface; the underlying logic, such as plugin loading, is already implemented by the kernel, which makes development more focused and simpler.
+
+By the way, we have adopted an excellent frontend component, form-create, which supports generating frontend UI components based on json. If plugin development involves the frontend, we generate the related frontend UI components via json. org.apache.dolphinscheduler.spi.params wraps the plugin parameters and converts them all into the corresponding json, which means you can complete the drawing of the frontend components entirely in Java (mainly forms; we only care about the data exchanged between frontend and backend).
+
+This article focuses on the design and development of Alert.
+
+#### Main modules
+
+If you don't care about its internal design and just want to know how to develop your own alert plugin, you can skip this section.
+
+* dolphinscheduler-alert-api
+
+    This module is the core module of the ALERT SPI. It defines the interfaces for alert plugin extension and some basic code. Extension plugins must implement the interface defined by this module: `org.apache.dolphinscheduler.alert.api.AlertChannelFactory`
+
+* dolphinscheduler-alert-plugins
+
+    This module contains the plugins currently provided by us, such as Email, DingTalk, Script, etc.
+
+
+#### Main class information of the Alert SPI:
+
+AlertChannelFactory
+The alert plugin factory interface. All alert plugins need to implement this interface. It defines the name of the alert plugin and the parameters it needs; the create method is used to create a concrete alert plugin instance.
+
+AlertChannel
+The interface of the alert plugin. Alert plugins need to implement this interface. It has only one method, process; the upper-layer alert system calls this method and obtains the alert return information through the AlertResult it returns.
+
+AlertData
+Alert content information, including id, title, content, and log.
+
+AlertInfo
+Alert-related information. When the upper-layer system calls an alert plugin instance, an instance of this class is passed into the concrete alert plugin through the process method. It contains the alert content AlertData and the parameter information filled in on the frontend for the called alert plugin instance.
+
+AlertResult
+The return information of the alert plugin after sending an alert.
+
+org.apache.dolphinscheduler.spi.params
+This package contains the pluginized parameter definitions. Our frontend uses the form-create library, which can dynamically generate the frontend UI based on the parameter-list json returned by the plugin definition, so we do not need to care about the frontend when doing SPI plugin development.
+
+In this package we currently only wrap RadioParam, TextParam, and PasswordParam, used to define text-type, radio, and password-type parameters respectively.
+
+AbsPluginParams is the base class of all parameters; classes like RadioParam all inherit from it. Each DS alert plugin returns a list of AbsPluginParams in its AlertChannelFactory implementation.
+
+The concrete design of alert_spi can be seen in the issue: [Alert Plugin Design](https://github.com/apache/incubator-dolphinscheduler/issues/3049)
+
+#### Built-in implementations of the Alert SPI
+
+* Email
+
+    Email alert notification
+
+* DingTalk
+
+    DingTalk group chat robot alert
+
+* EnterpriseWeChat
+
+    Enterprise WeChat alert notification
+
+    The related parameter configuration can refer to the Enterprise WeChat robot documentation.
+* Script
+
+    We have implemented shell-script alerting. We pass the related alert parameters through to the script, and you can implement your alert logic in the shell. This is a good way to interface with internal alerting applications.
diff --git a/docs/docs/zh/development/backend/spi/datasource.md b/docs/docs/zh/development/backend/spi/datasource.md
new file mode 100644
index 0000000000..1868c86d9e
--- /dev/null
+++ b/docs/docs/zh/development/backend/spi/datasource.md
@@ -0,0 +1,23 @@
+## DolphinScheduler Datasource SPI main design
+
+#### How to use data sources?
+
+The data source center supports POSTGRESQL, HIVE/IMPALA, SPARK, CLICKHOUSE, and SQLSERVER data sources by default.
+
+If you use a MySQL or ORACLE data source, you need to put the corresponding driver package into the lib directory
+
+#### How to develop a data source plugin?
+
+org.apache.dolphinscheduler.spi.datasource.DataSourceChannel
+org.apache.dolphinscheduler.spi.datasource.DataSourceChannelFactory
+org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient
+
+1. In the first step, the data source plugin implements the above interfaces and inherits the common client. For details, refer to the implementation of data source plugins such as sqlserver and mysql; all RDBMS plugins are added in the same way.
+2. Add the driver configuration in the data source plugin's pom.xml
+
+We provide APIs for external access to all data sources in the dolphinscheduler-datasource-api module
+
+#### **Future plans**
+
+Support data sources such as kafka, http, files, sparkSQL, and FlinkSQL
+
diff --git a/docs/docs/zh/development/backend/spi/registry.md b/docs/docs/zh/development/backend/spi/registry.md
new file mode 100644
index 0000000000..36c4d1f00f
--- /dev/null
+++ b/docs/docs/zh/development/backend/spi/registry.md
@@ -0,0 +1,26 @@
+### DolphinScheduler Registry SPI extension
+
+#### How to use?
+
+Make the following configuration (taking zookeeper as an example)
+
+* Registry plugin configuration, taking Zookeeper as an example (registry.properties)
+  dolphinscheduler-service/src/main/resources/registry.properties
+  ```registry.properties
+  registry.plugin.name=zookeeper
+  registry.servers=127.0.0.1:2181
+  ```
+
+For specific configuration information, please refer to the parameter information provided by the specific plugin, for example zk: `org/apache/dolphinscheduler/plugin/registry/zookeeper/ZookeeperConfiguration.java`
+All configuration keys need the registry prefix; for example, base.sleep.time.ms should be configured in the registry as: registry.base.sleep.time.ms=100
+
+#### How to extend
+
+`dolphinscheduler-registry-api` defines the standard for implementing plugins. When you need to extend a plugin, you only need to implement `org.apache.dolphinscheduler.registry.api.RegistryFactory`.
+
+The `dolphinscheduler-registry-plugin` module contains the registry plugins we currently provide.
+#### FAQ
+
+1: registry connect timeout
+
+Increase the related timeout parameters.
diff --git a/docs/docs/zh/development/backend/spi/task.md b/docs/docs/zh/development/backend/spi/task.md
new file mode 100644
index 0000000000..b2ee5242b5
--- /dev/null
+++ b/docs/docs/zh/development/backend/spi/task.md
@@ -0,0 +1,15 @@
+## DolphinScheduler Task SPI extension
+
+#### How to develop a task plugin?
+
+org.apache.dolphinscheduler.spi.task.TaskChannel
+
+A plugin only needs to implement the above interface. It mainly covers task creation (task initialization, task running, and other methods) and task cancellation. If it is a yarn task, you also need to implement org.apache.dolphinscheduler.plugin.task.api.AbstractYarnTask.
+
+We provide APIs for external access to all tasks in the dolphinscheduler-task-api module, while the dolphinscheduler-spi module is the common spi code library, defining all the plugin modules such as the alert module and the registry module; you can read it in detail.
+
+*NOTICE*
+
+Since task plugins involve frontend pages, and the frontend SPI has not yet been implemented, you need to implement the frontend pages corresponding to the plugin separately.
+
+If a task plugin has class conflicts, you can use [Shade-Relocating Classes](https://maven.apache.org/plugins/maven-shade-plugin/) to solve the problem.
diff --git a/docs/docs/zh/development/development-environment-setup.md b/docs/docs/zh/development/development-environment-setup.md
new file mode 100644
index 0000000000..fae9d73eab
--- /dev/null
+++ b/docs/docs/zh/development/development-environment-setup.md
@@ -0,0 +1,155 @@
+# DolphinScheduler 开发手册
+
+## 前置条件
+
+在搭建 DolphinScheduler 开发环境之前请确保你已经安装一下软件
+
+* [Git](https://git-scm.com/downloads): 版本控制系统
+* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html): 后端开发
+* [Maven](http://maven.apache.org/download.cgi): Java包管理系统
+* [Node](https://nodejs.org/en/download): 前端开发
+
+### 克隆代码库
+
+通过你 git 管理工具下载 git 代码,下面以 git-core 为例
+
+```shell
+mkdir dolphinscheduler
+cd dolphinscheduler
+git clone git@github.com:apache/dolphinscheduler.git
+```
+### 编译源码
+* 如果使用MySQL数据库,请注意修改pom.xml, 添加 ` mysql-connector-java ` 依赖。
+* 运行 `mvn clean install -Prelease -Dmaven.test.skip=true`
+
+
+
+## Developer Notes
+
+There are two ways to configure the DolphinScheduler development environment: standalone mode and normal mode.
+
+* [Standalone mode](#dolphinscheduler-standalone-quick-development-mode): **recommended, but only supported in 1.3.9 and later**; it makes environment setup quick and covers most development scenarios.
+* [Normal mode](#dolphinscheduler-normal-development-mode): master, worker, api, etc. are started separately, which simulates a real production environment more closely and covers more test scenarios.
+
+## DolphinScheduler Standalone Quick Development Mode
+
+> **_Note:_** for single-machine development and debugging only; it uses H2 Database and a Zookeeper Testing Server by default.
+> Standalone is only supported in DolphinScheduler 1.3.9 and later.
+
+### Branch selection
+
+Developing different code requires different branches:
+
+* To develop against a binary release, check out the code of the corresponding version; for 1.3.9 that is `1.3.9-release`.
+* To develop against the latest code, check out the `dev` branch.
+
+### Start the backend
+
+In IntelliJ IDEA, find and start the class `org.apache.dolphinscheduler.server.StandaloneServer` to launch the backend.
+
+### Start the frontend
+
+Install the frontend dependencies and run the frontend components:
+
+```shell
+cd dolphinscheduler-ui
+npm install
+npm run start
+```
+
+At this point the frontend and backend are both running. Visit [http://localhost:8888](http://localhost:8888) in your browser and log in with the default account and password: **admin/dolphinscheduler123**.
+
+## DolphinScheduler Normal Development Mode
+
+### Install the required software
+
+#### zookeeper
+
+Download [ZooKeeper](https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.3) and extract it.
+
+* Create `zkData` and `zkLog` folders in the ZooKeeper directory.
+* Copy the `zoo_sample.cfg` file in the `conf` directory, rename it to `zoo.cfg`, and modify the data and log settings in it, for example:
+
+  ```shell
+  dataDir=/data/zookeeper/data ## use an absolute path here
+  dataLogDir=/data/zookeeper/datalog
+  ```
+
+* Run `./bin/zkServer.sh`
+
+#### Database
+
+DolphinScheduler stores its metadata in a relational database; MySQL and PostgreSQL are currently supported. The following uses MySQL as an example: start the database and create a new database to serve as the DolphinScheduler metadata database, here named `dolphinscheduler`.
+
+After creating the new database, run the SQL file at `dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_mysql.sql` directly in MySQL to initialize the database.
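+
+The two steps above can be sketched as SQL before sourcing the `dolphinscheduler_mysql.sql` file; the database name, user, password, and character set here are assumptions, so adjust them to your environment:
+
+```sql
+-- assumed names/credentials; adjust to your environment
+CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
+CREATE USER 'dolphinscheduler'@'%' IDENTIFIED BY 'dolphinscheduler';
+GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%';
+FLUSH PRIVILEGES;
+-- then, from a shell:
+--   mysql -udolphinscheduler -p dolphinscheduler < dolphinscheduler/dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_mysql.sql
+```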
+
+#### Start the backend
+
+The following steps show how to start the DolphinScheduler backend services.
+
+##### Necessary preparation
+
+* Open the project: open the project with your IDE, IntelliJ IDEA in this example; after opening it, wait a while for IntelliJ IDEA to finish downloading the dependencies.
+
+* Plugin configuration (**only required for 2.0 and later**):
+
+  * Registry plugin configuration, using Zookeeper as an example (registry.properties):
+    dolphinscheduler-service/src/main/resources/registry.properties
+  ```registry.properties
+  registry.plugin.name=zookeeper
+  registry.servers=127.0.0.1:2181
+  ```
+* Necessary changes
+  * If you use MySQL as the metadata database, first modify `dolphinscheduler/pom.xml` and change the `scope` of the `mysql-connector-java` dependency to `compile`; this is not needed for PostgreSQL.
+  * Modify the database configuration in `dolphinscheduler-dao/src/main/resources/application-mysql.yaml`
+
+  This example uses MySQL, with the database named dolphinscheduler and both the username and password set to dolphinscheduler:
+  ```application-mysql.yaml
+  spring:
+    datasource:
+      driver-class-name: com.mysql.jdbc.Driver
+      url: jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
+      username: dolphinscheduler
+      password: dolphinscheduler
+  ```
+
+* Change the log level: add a line ` ` to the following configuration files so that logs show up on the command line
+
+  `dolphinscheduler-server/src/main/resources/logback-worker.xml`
+
+  `dolphinscheduler-server/src/main/resources/logback-master.xml`
+
+  `dolphinscheduler-api/src/main/resources/logback-api.xml`
+
+  The result after the change is as follows:
+
+  ```diff
+
+  +
+
+
+
+  ```
+
+##### Start the services
+
+We need to start three services: MasterServer, WorkerServer, and ApiApplicationServer.
+
+* MasterServer: in IntelliJ IDEA, run the `main` method of `org.apache.dolphinscheduler.server.master.MasterServer` with *VM Options* `-Dlogging.config=classpath:logback-master.xml -Ddruid.mysql.usePingMethod=false -Dspring.profiles.active=mysql`
+* WorkerServer: in IntelliJ IDEA, run the `main` method of `org.apache.dolphinscheduler.server.worker.WorkerServer` with *VM Options* `-Dlogging.config=classpath:logback-worker.xml -Ddruid.mysql.usePingMethod=false -Dspring.profiles.active=mysql`
+* ApiApplicationServer: in IntelliJ IDEA, run the `main` method of `org.apache.dolphinscheduler.api.ApiApplicationServer` with *VM Options* `-Dlogging.config=classpath:logback-api.xml -Dspring.profiles.active=api,mysql`. Once started, you can browse the Open API documentation at http://localhost:12345/dolphinscheduler/doc.html
+
+> In the VM Options, `mysql` in `-Dspring.profiles.active=mysql` refers to the specified profile
+
+### Start the frontend
+
+Install the frontend dependencies and run the frontend components:
+
+```shell
+cd dolphinscheduler-ui
+npm install
+npm run start
+```
+
+At this point the frontend and backend are both running. Visit [http://localhost:8888](http://localhost:8888) in your browser and log in with the default account and password: **admin/dolphinscheduler123**.
diff --git a/docs/docs/zh/development/e2e-test.md b/docs/docs/zh/development/e2e-test.md
new file mode 100644
index 0000000000..2aac1bb0e1
--- /dev/null
+++ b/docs/docs/zh/development/e2e-test.md
@@ -0,0 +1,194 @@
+# DolphinScheduler — E2E Automated Testing
+
+## I. Prerequisite Knowledge
+
+### 1. E2E testing vs. unit testing
+
+E2E is short for "End to End" testing. It imitates a user, starting from some entry point and performing operations step by step until a certain task is completed. Unit testing is different: it usually tests parameters, parameter types, parameter values, parameter counts, return values, thrown errors, and so on, with the goal of ensuring that a specific function works reliably under all circumstances. Unit testing assumes that if every function works correctly, the product as a whole works correctly.
+
+By contrast, E2E testing does not emphasize covering every usage scenario; it focuses on **whether a complete chain of operations can be completed**. For a web frontend, it also checks **whether the page layout and content information match expectations**.
+
+For example, an E2E test of the login page checks whether the user can type input normally and log in normally, and whether the error message is displayed correctly when login fails. Whether invalid input is handled is not the main concern.
+
+### 2. The Selenium test framework
+
+[Selenium](https://www.selenium.dev) is an open-source testing tool for running automated tests in web browsers. The framework uses WebDriver to translate Web Service commands into native browser calls through the browser's native components. In short, it simulates the browser and performs select-and-operate actions on page elements.
+
+WebDriver is an API and protocol that defines a language-neutral interface for controlling the behavior of web browsers. Every browser has a specific WebDriver implementation, called a driver. The driver is the component responsible for delegating to the browser, and it handles the communication between Selenium and the browser.
+
+The Selenium framework ties all of these pieces together through a user-facing interface that allows different browser backends to be used transparently, enabling cross-browser and cross-platform automation.
+
+## II. E2E Testing
+
+### 1. E2E-Pages
+
+DolphinScheduler's E2E tests are deployed with docker-compose. The current tests run in standalone mode and are mainly used to verify basic functionality such as "create, read, update, delete". If cluster validation is needed later, such as the collaboration between different services or the communication mechanisms between services, refer to `deploy/docker/docker-compose.yml` for configuration.
+
+For the E2E tests (on the frontend side), we use the [page object model](https://www.selenium.dev/documentation/guidelines/page_object_models/), which mainly builds a corresponding model for every page. The login page is used as an example below:
+
+```java
+package org.apache.dolphinscheduler.e2e.pages;
+
+import org.apache.dolphinscheduler.e2e.pages.common.NavBarPage;
+import org.apache.dolphinscheduler.e2e.pages.security.TenantPage;
+
+import org.openqa.selenium.WebElement;
+import org.openqa.selenium.remote.RemoteWebDriver;
+import org.openqa.selenium.support.FindBy;
+import org.openqa.selenium.support.ui.ExpectedConditions;
+import org.openqa.selenium.support.ui.WebDriverWait;
+
+import lombok.Getter;
+import lombok.SneakyThrows;
+
+@Getter
+public final class LoginPage extends NavBarPage {
+ @FindBy(id = "inputUsername")
+ private WebElement inputUsername;
+
+ @FindBy(id = "inputPassword")
+ private WebElement inputPassword;
+
+ @FindBy(id = "btnLogin")
+ private WebElement buttonLogin;
+
+ public LoginPage(RemoteWebDriver driver) {
+ super(driver);
+ }
+
+ @SneakyThrows
+ public TenantPage login(String username, String password) {
+ inputUsername().sendKeys(username);
+ inputPassword().sendKeys(password);
+ buttonLogin().click();
+
+ new WebDriverWait(driver, 10)
+ .until(ExpectedConditions.urlContains("/#/security"));
+
+ return new TenantPage(driver);
+ }
+}
+```
+
+During testing we only target the elements we care about, not every element on the page, so on the login page only the username, password, and login button elements are declared. The FindBy interface provided by the Selenium test framework is used to look up the corresponding id or class in the Vue files.
+
+In addition, we do not manipulate elements directly during testing; instead, we generally wrap them in corresponding methods so they can be reused. For example, to log in we pass in the username and password and let the `public TenantPage login()` method operate on the elements, achieving the login effect: once the user has logged in, they are redirected to the Security Center (which opens the tenant management page by default).
+
+The SecurityPage provides a goToTab method for testing navigation to the corresponding sidebar tabs, mainly: TenantPage, UserPage, WorkerGroupPage, and QueuePage. These pages are implemented in the same way, mainly testing whether the form inputs and the create and delete buttons return the corresponding pages.
+
+```java
+    public <T> T goToTab(Class<T> tab) {
+        if (tab == TenantPage.class) {
+            WebElement menuTenantManageElement = new WebDriverWait(driver, 60)
+                    .until(ExpectedConditions.elementToBeClickable(menuTenantManage));
+            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", menuTenantManageElement);
+            return tab.cast(new TenantPage(driver));
+        }
+        if (tab == UserPage.class) {
+            WebElement menUserManageElement = new WebDriverWait(driver, 60)
+                    .until(ExpectedConditions.elementToBeClickable(menUserManage));
+            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", menUserManageElement);
+            return tab.cast(new UserPage(driver));
+        }
+        if (tab == WorkerGroupPage.class) {
+            WebElement menWorkerGroupManageElement = new WebDriverWait(driver, 60)
+                    .until(ExpectedConditions.elementToBeClickable(menWorkerGroupManage));
+            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", menWorkerGroupManageElement);
+            return tab.cast(new WorkerGroupPage(driver));
+        }
+        if (tab == QueuePage.class) {
+            menuQueueManage().click();
+            return tab.cast(new QueuePage(driver));
+        }
+        throw new UnsupportedOperationException("Unknown tab: " + tab.getName());
+    }
+```
+
+![SecurityPage](/img/e2e-test/SecurityPage.png)
+
+For navigating between navigation bar options, `org/apache/dolphinscheduler/e2e/pages/common/NavBarPage.java` provides the goToNav method. The currently supported pages are: ProjectPage, SecurityPage, and ResourcePage.
+
+```java
+    public <T> T goToNav(Class<T> nav) {
+        if (nav == ProjectPage.class) {
+            WebElement projectTabElement = new WebDriverWait(driver, 60)
+                    .until(ExpectedConditions.elementToBeClickable(projectTab));
+            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", projectTabElement);
+            return nav.cast(new ProjectPage(driver));
+        }
+
+        if (nav == SecurityPage.class) {
+            WebElement securityTabElement = new WebDriverWait(driver, 60)
+                    .until(ExpectedConditions.elementToBeClickable(securityTab));
+            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", securityTabElement);
+            return nav.cast(new SecurityPage(driver));
+        }
+
+        if (nav == ResourcePage.class) {
+            WebElement resourceTabElement = new WebDriverWait(driver, 60)
+                    .until(ExpectedConditions.elementToBeClickable(resourceTab));
+            ((JavascriptExecutor) driver).executeScript("arguments[0].click();", resourceTabElement);
+            return nav.cast(new ResourcePage(driver));
+        }
+
+        throw new UnsupportedOperationException("Unknown nav bar");
+    }
+```
+
+### 2. E2E-Cases
+
+The currently supported E2E test cases mainly cover: file management, project management, queue management, tenant management, user management, worker group management, and workflow testing.
+
+![E2E_Cases](/img/e2e-test/E2E_Cases.png)
+
+The tenant management test is used as an example below. As explained earlier, we deploy with docker-compose, so each test case needs to import the corresponding file in the form of an annotation.
+
+The RemoteWebDriver provided by Selenium is used to load the browser. Before each test case starts, some preparation is required, such as logging in a user and navigating to the corresponding page (depending on the specific test case).
+
+```java
+    @BeforeAll
+    public static void setup() {
+        new LoginPage(browser)
+                .login("admin", "dolphinscheduler123") // log in, landing on the tenant page
+                .goToNav(SecurityPage.class)           // Security Center
+                .goToTab(TenantPage.class)
+        ;
+    }
+```
+
+After the preparation is done comes the actual test case. We use the @Order() annotation for modularization and to establish the test order. After a test runs, an assertion is used to judge whether it succeeded; if the assertion returns true, the tenant was created successfully. See the test code for creating a tenant:
+
+```java
+    @Test
+    @Order(10)
+    void testCreateTenant() {
+        final TenantPage page = new TenantPage(browser);
+        page.create(tenant);
+
+        await().untilAsserted(() -> assertThat(page.tenantList())
+                .as("Tenant list should contain newly-created tenant")
+                .extracting(WebElement::getText)
+                .anyMatch(it -> it.contains(tenant)));
+    }
+```
+
+The rest are similar cases; refer to the source code for details.
+
+https://github.com/apache/dolphinscheduler/tree/dev/dolphinscheduler-e2e/dolphinscheduler-e2e-case/src/test/java/org/apache/dolphinscheduler/e2e/cases
+
+## III. Notes
+
+When running locally, first start the corresponding local services; see [Development Environment Setup](https://dolphinscheduler.apache.org/zh-cn/development/development-environment-setup.html).
+
+When running the E2E tests locally, you can pass the `-Dlocal=true` parameter to connect to your local environment, which makes it easier to iterate on UI changes.
+
+On machines with an `M1` chip, you can use the `-Dm1_chip=true` parameter to use containers with `ARM64` support.
+
+![Dlocal](/img/e2e-test/Dlocal.png)
+
+If connection timeouts occur while running locally, increase the load time; 30 or above is recommended.
+
+![timeout](/img/e2e-test/timeout.png)
+
+The test run will be saved as an MP4 file.
+
+![MP4](/img/e2e-test/MP4.png)
diff --git a/docs/docs/zh/development/frontend-development.md b/docs/docs/zh/development/frontend-development.md
new file mode 100644
index 0000000000..bfb0e5cd30
--- /dev/null
+++ b/docs/docs/zh/development/frontend-development.md
@@ -0,0 +1,639 @@
+# Frontend Development Documentation
+
+### Technology stack
+```
+Vue mvvm framework
+
+Es6 ECMAScript 6.0
+
+Ans-ui Analysys-ui
+
+D3 visualization / chart library
+
+Jsplumb connection plugin library
+
+Lodash a high-performance JavaScript utility library
+```
+
+### Development environment setup
+
+- #### Install Node
+Download the Node package (note the version, v12.20.2): `https://nodejs.org/download/release/v12.20.2/`
+
+- #### Build the frontend project
+On the command line, `cd` into the `dolphinscheduler-ui` project directory and run `npm install` to pull the project dependency packages.
+
+> If `npm install` is very slow, you can set the taobao mirror:
+
+```
+npm config set registry http://registry.npm.taobao.org/
+```
+
+- Modify `API_BASE` in the `dolphinscheduler-ui/.env` file, which is used for interacting with the backend:
+
+```
+# proxied API address (change as needed)
+API_BASE = http://127.0.0.1:12345
+```
+
+> ##### !!!Note: if the project reports a "node-sass error" while pulling the dependency packages, run the following command again after the install finishes:
+
+```bash
+npm install node-sass --unsafe-perm # install the node-sass dependency separately
+```
+
+- #### Run the development environment
+- `npm start` runs the project development environment (after starting, visit http://localhost:8888)
+
+#### Release the frontend project
+
+- `npm run build` packages the project (after packaging, a folder named dist is created in the root directory, used for publishing to an online Nginx)
+
+Run the `npm run build` command to generate the packaged (dist) folder,
+
+then copy it to the corresponding directory on the server (the directory where the frontend serves static pages).
+
+Visit `http://localhost:8888`
+
+#### Start with node on Linux as a daemon
+
+Install pm2: `npm install -g pm2`
+
+In the root of the `dolphinscheduler-ui` project, run `pm2 start npm -- run dev` to start it.
+
+#### Commands
+
+- Start: `pm2 start npm -- run dev`
+
+- Stop: `pm2 stop npm`
+
+- Delete: `pm2 delete npm`
+
+- Status: `pm2 list`
+
+```
+
+[root@localhost dolphinscheduler-ui]# pm2 start npm -- run dev
+[PM2] Applying action restartProcessId on app [npm](ids: 0)
+[PM2] [npm](0) ✓
+[PM2] Process successfully started
+┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
+│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
+├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
+│ npm │ 0 │ N/A │ fork │ 6168 │ online │ 31 │ 0s │ 0% │ 5.6 MB │ root │ disabled │
+└──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
+ Use `pm2 show ` to get more details about an app
+
+```
+
+### Project directory structure
+
+`build` webpack configuration for packaging and the development environment
+
+`node_modules` node dependency packages for the development environment
+
+`src` files needed by the project
+
+`src => combo` localization of the project's third-party resources; `npm run combo`, see `build/combo.js` for details
+
+`src => font` font icon library, which can be extended at `https://www.iconfont.cn`. Note: the font library is our own; for secondary development you need to re-import your own library in `src/sass/common/_font.scss`
+
+`src => images` shared image storage
+
+`src => js` js/vue
+
+`src => lib` internal company components (can be removed once the company component library is open-sourced)
+
+`src => sass` sass files, one sass file per page
+
+`src => view` page files, one html file per page
+
+```
+> The project is developed as a Vue single-page application (SPA)
+- The entry file of every page is the `index.js` entry file in `src/js/conf/${corresponding page folder name => home}`
+- The corresponding sass file is `src/sass/conf/${corresponding page folder name => home}/index.scss`
+- The corresponding html file is `src/view/${corresponding page folder name => home}/index.html`
+```
+
+Shared modules and util: `src/js/module`
+
+`components` => internal shared project components
+
+`download` => download component
+
+`echarts` => chart component
+
+`filter` => filters and vue pipes
+
+`i18n` => internationalization
+
+`io` => io request wrapper, based on axios
+
+`mixin` => vue mixin shared parts, used for disabled handling
+
+`permissions` => permission handling
+
+`util` => utilities
+
+### System functional modules
+
+Home => `http://localhost:8888/#/home`
+
+Project management => `http://localhost:8888/#/projects/list`
+```
+| Project home
+| Workflow
+  - Workflow definition
+  - Workflow instance
+  - Task instance
+```
+
+Resource management => `http://localhost:8888/#/resource/file`
+```
+| File management
+| UDF management
+  - Resource management
+  - Function management
+```
+
+Datasource management => `http://localhost:8888/#/datasource/list`
+
+Security center => `http://localhost:8888/#/security/tenant`
+```
+| Tenant management
+| User management
+| Alert group management
+  - master
+  - worker
+```
+
+User center => `http://localhost:8888/#/user/account`
+
+## Routing and state management
+
+The project under `src/js/conf/home` is divided into
+
+`pages` => the page directory the routes point to
+```
+ The page files corresponding to the route addresses
+```
+
+`router` => route management
+```
+The vue router; it is registered in the index.js entry file of every page. Details: https://router.vuejs.org/zh/
+```
+
+`store` => state management
+```
+Every page corresponding to a route has a state management file, divided into:
+
+actions => mapActions => details: https://vuex.vuejs.org/zh/guide/actions.html
+
+getters => mapGetters => details: https://vuex.vuejs.org/zh/guide/getters.html
+
+index => entry
+
+mutations => mapMutations => details: https://vuex.vuejs.org/zh/guide/mutations.html
+
+state => mapState => details: https://vuex.vuejs.org/zh/guide/state.html
+
+Details: https://vuex.vuejs.org/zh/
+```
+
+## Conventions
+## Vue conventions
+##### 1. Component names
+A component name should consist of multiple words joined with hyphens (-), to avoid conflicts with HTML tags and to keep the structure clearer.
+```
+// good
+export default {
+  name: 'page-article-item'
+}
+```
+
+##### 2. Component files
+In `src/js/module/components`, an internal shared component's folder name matches its file name; the subcomponents and util tools the component is split into are placed in the component's own `_source` folder.
+```
+└── components
+ ├── header
+ ├── header.vue
+ └── _source
+ └── nav.vue
+ └── util.js
+ ├── conditions
+ ├── conditions.vue
+ └── _source
+ └── search.vue
+ └── util.js
+```
+
+##### 3. Prop
+When defining a Prop, always name it in camelCase, and use hyphens (-) when assigning it a value in the parent component.
+This follows each language's characteristics: HTML markup is case-insensitive, so hyphens are friendlier there, while camelCase is more natural in JavaScript.
+
+```
+// Vue
+props: {
+  articleStatus: Boolean
+}
+// HTML
+<article-item :article-status="true"></article-item>
+```
+
+The definition of a Prop should specify its type, default value, and validation in as much detail as possible.
+
+Example:
+
+```
+props: {
+  attrM: Number,
+  attrA: {
+    type: String,
+    required: true
+  },
+  attrZ: {
+    type: Object,
+    // the default value of an array/object should be returned by a factory function
+    default: function () {
+      return {
+        msg: '成就你我'
+      }
+    }
+  },
+  attrE: {
+    type: String,
+    validator: function (v) {
+      return !(['success', 'fail'].indexOf(v) === -1)
+    }
+  }
+}
+```
+
+##### 4. v-for
+When running a v-for loop, always provide a key so that DOM updates render more efficiently.
+```
+<li v-for="item in list" :key="item.id">{{ item.title }}</li>
+```
+
+v-for should not be used on the same element as v-if (for example a `<li>`), because v-for has a higher priority than v-if. To avoid wasted computation and rendering, v-if should be moved up onto the container's parent element.
+```
+<ul v-if="showList">
+  <li v-for="item in list" :key="item.id">{{ item.title }}</li>
+</ul>
+```
+
+##### 5. v-if / v-else-if / v-else
+If the elements in the same v-if logic group are logically identical, Vue reuses the identical parts for more efficient element switching, `e.g. value`. To avoid unreasonable effects of this reuse, add a key to elements of the same kind as a marker.
+```
+<div v-if="hasData" key="mazey-data">
+  {{ mazeyData }}
+</div>
+<div v-else key="mazey-none">
+  No data
+</div>
+```
+
+##### 6. Directive shorthands
+Always use directive shorthands for consistency. There is nothing wrong with `v-bind` and `v-on`; this is purely for a consistent style.
+```
+<input :value="mazeyUser" @click="verifyUser">
+```
+
+##### 7. Order of top-level elements in single-file components
+Styles are eventually bundled into one file, so styles defined in one vue file also take effect on elements with the same class name in other files; therefore every component starts with a top-level class name.
+Note: the sass plugin has been added to the project, so sass syntax can be written directly in a single vue file.
+For consistency and readability, the top-level tags should appear in the order `<template>`, `<script>`, `<style>`.
+```
+<template>...</template>
+<script>...</script>
+<style lang="scss">...</style>
+```
+
+## JavaScript conventions
+
+##### 1. var / let / const
+It is recommended to stop using var in favor of let / const, preferring const. Every variable must be declared before it is used, except functions defined with function, which can be placed anywhere.
+
+##### 2. Quotes
+```
+const foo = '后除'
+const bar = `${foo}, frontend engineer`
+```
+
+##### 3. Functions
+Use arrow functions for all anonymous functions; with multiple parameters/return values, prefer object destructuring.
+```
+function getPersonInfo ({name, sex}) {
+  // ...
+  return {name, gender}
+}
+```
+Function names use camelCase. Names declared with a leading uppercase letter are constructors; those with a leading lowercase letter are ordinary functions, and the new operator should not be used on ordinary functions.
+
+##### 4. Objects
+```
+const foo = {a: 0, b: 1}
+const bar = JSON.parse(JSON.stringify(foo))
+
+const foo = {a: 0, b: 1}
+const bar = {...foo, c: 2}
+
+const foo = {a: 3}
+Object.assign(foo, {b: 4})
+
+const myMap = new Map([])
+for (let [key, value] of myMap.entries()) {
+ // ...
+}
+```
+
+##### 5. Modules
+Manage the project's modules uniformly with import / export.
+```
+// lib.js
+export default {}
+
+// app.js
+import app from './lib'
+```
+
+Put import statements at the top of the file.
+
+If a module has only a single export, use `export default`; otherwise do not.
+
+
+## HTML / CSS
+
+###### 1. Tags
+Do not write the type attribute when referencing external CSS or JavaScript. HTML5 defaults type to `text/css` and `text/javascript`, so there is no need to specify it.
+```
+<link rel="stylesheet" href="main.css">
+<script src="main.js"></script>
+```
+
+##### 2. Naming
+Class and ID names should be semantic, so the name tells you what it is for; join multiple words with hyphens (-).
+```
+// good
+.test-header{
+  font-size: 20px;
+}
+```
+
+##### 3. Shorthand properties
+Use CSS shorthand properties where possible to make the code more efficient and easier to understand.
+
+```
+// bad
+border-width: 1px;
+border-style: solid;
+border-color: #ccc;
+
+// good
+border: 1px solid #ccc;
+```
+
+##### 4. Document type
+Always use the HTML5 standard.
+
+```
+<!DOCTYPE html>
+```
+
+##### 5. Comments
+Write a block comment for each module file.
+```
+/**
+* @module mazey/api
+* @author Mazey
+* @description test.
+* */
+```
+
+## APIs
+
+##### All APIs return a Promise
+Note: any non-zero code is an error and goes to catch
+
+```
+const test = () => {
+  return new Promise((resolve, reject) => {
+    resolve({
+      a: 1
+    })
+  })
+}
+
+// usage
+test().then(res => {
+  console.log(res)
+  // {a: 1}
+})
+```
+
+Normal response
+```
+{
+  code: 0,
+  data: {},
+  msg: 'success'
+}
+```
+
+Error response
+```
+{
+  code: 10000,
+  data: {},
+  msg: 'failed'
+}
+```
+For post requests, the Content-Type defaults to application/x-www-form-urlencoded; if the Content-Type is changed to application/json,
+the parameters need to be passed as follows:
+```
+io.post('url', payload, null, null, { emulateJSON: false }).then(res => {
+  resolve(res)
+}).catch(e => {
+  reject(e)
+})
+```
+
+##### Related API paths
+
+dag related APIs: `src/js/conf/home/store/dag/actions.js`
+
+datasource center related APIs: `src/js/conf/home/store/datasource/actions.js`
+
+project management related APIs: `src/js/conf/home/store/projects/actions.js`
+
+resource center related APIs: `src/js/conf/home/store/resource/actions.js`
+
+security center related APIs: `src/js/conf/home/store/security/actions.js`
+
+user center related APIs: `src/js/conf/home/store/user/actions.js`
+
+## Extension development
+
+##### 1. Add a node
+
+(1) First put the node's small icon into the `src/js/conf/home/pages/dag/img` folder; note the naming `toolbar_${the node's English name as defined by the backend, e.g. SHELL}.png`
+
+(2) Find the `tasksType` object in `src/js/conf/home/pages/dag/_source/config.js` and add to it:
+```
+'DEPENDENT': { // the node type's English name defined by the backend, used as the key
+  desc: 'DEPENDENT', // tooltip desc
+  color: '#2FBFD8' // the color it represents, mainly used in the tree and gantt charts
+}
+```
+
+(3) Add a `${node type (lowercase)}`.vue file in `src/js/conf/home/pages/dag/_source/formModel/tasks`; everything related to the node's components is written there. A node component must have a `_verification()` function that, after validation succeeds, emits the component's data to the parent component.
+```
+/**
+ * verification
+*/
+  _verification () {
+    // datasource subcomponent validation
+    if (!this.$refs.refDs._verifDatasource()) {
+      return false
+    }
+
+    // method validation
+    if (!this.method) {
+      this.$message.warning(`${i18n.$t('请输入方法')}`)
+      return false
+    }
+
+    // localParams subcomponent validation
+    if (!this.$refs.refLocalParams._verifProp()) {
+      return false
+    }
+    // store
+    this.$emit('on-params', {
+      type: this.type,
+      datasource: this.datasource,
+      method: this.method,
+      localParams: this.localParams
+    })
+    return true
+  }
+```
+
+(4) Shared components used inside node components live under `_source`, and `commcon.js` is used to configure shared data.
+
+##### 2. Add a status type
+
+(1) Find the `tasksState` object in `src/js/conf/home/pages/dag/_source/config.js` and add to it:
+```
+'WAITTING_DEPEND': { // status type defined by the backend, used by the frontend as the key
+  id: 11, // id defined by the frontend, used later for sorting
+  desc: `${i18n.$t('等待依赖')}`, // tooltip desc
+  color: '#5101be', // the color it represents, mainly used in the tree and gantt charts
+  icoUnicode: '', // font icon
+  isSpin: false // whether it spins (requires code-level judgment)
+}
+```
+
+##### 3. Add a toolbar tool
+(1) Find the `toolOper` object in `src/js/conf/home/pages/dag/_source/config.js` and add to it:
+```
+{
+  code: 'pointer', // tool identifier
+  icon: '', // tool icon
+  disable: disable, // whether it is disabled
+  desc: `${i18n.$t('拖动节点和选中项')}` // tooltip desc
+}
+```
+
+(2) Tool classes are all returned as constructor functions in `src/js/conf/home/pages/dag/_source/plugIn`
+
+`downChart.js` => dag image download handling
+
+`dragZoom.js` => mouse zoom effect handling
+
+`jsPlumbHandle.js` => drag-and-drop connection handling
+
+`util.js` => utilities belonging to `plugIn`
+
+Operations are handled in the `toolbarEvent` event in `src/js/conf/home/pages/dag/_source/dag.js`.
+
+##### 4. Add a routed page
+
+(1) First add a route address in the route management file `src/js/conf/home/router/index.js`:
+```
+{
+  path: '/test', // route address
+  name: 'test', // alias
+  component: resolve => require(['../pages/test/index'], resolve), // the component entry file for the route
+  meta: {
+    title: `${i18n.$t('test')} - DolphinScheduler` // title display
+  }
+},
+```
+
+(2) Create a `test` folder in `src/js/conf/home/pages` and an `index.vue` entry file inside it.
+
+   You can then visit `http://localhost:8888/#/test` directly.
+
+##### 5. Add preset email addresses
+
+In `src/lib/localData/email.js`, start and schedule email address inputs can then be matched automatically in a dropdown:
+```
+export default ["test@analysys.com.cn","test1@analysys.com.cn","test3@analysys.com.cn"]
+```
+
+##### 6. Permission management and disabled state handling
+
+Permissions are based on the `userType: "ADMIN_USER/GENERAL_USER"` returned by the backend `getUserInfo` API, which controls whether page action buttons are `disabled`.
+
+Implementation: `src/js/module/permissions/index.js`
+
+disabled handling: `src/js/module/mixin/disabledState.js`
+
diff --git a/docs/docs/zh/development/have-questions.md b/docs/docs/zh/development/have-questions.md
new file mode 100644
index 0000000000..7d9ad9cc94
--- /dev/null
+++ b/docs/docs/zh/development/have-questions.md
@@ -0,0 +1,65 @@
+# Have Questions?
+
+## StackOverflow
+
+For usage questions, it is recommended to use the StackOverflow tag [apache-dolphinscheduler](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler), an active forum for DolphinScheduler users' questions and answers.
+
+Quick tips when using StackOverflow:
+
+- Before submitting a question:
+  - Search under the StackOverflow [apache-dolphinscheduler](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler) tag to see whether your question has already been answered.
+
+- Please follow StackOverflow's [code of conduct](https://stackoverflow.com/help/how-to-ask).
+
+- Always use the apache-dolphinscheduler tag when asking a question.
+
+- Please do not cross-post between [StackOverflow](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler) and [GitHub issues](https://github.com/apache/dolphinscheduler/issues/new/choose).
+
+Question template:
+
+> **Describe the question**
+>
+> A clear and concise description of what the question is.
+>
+> **Which version of DolphinScheduler:**
+>
+> -[1.3.0-preview]
+>
+> **Additional context**
+>
+> Add any other context about the problem here.
+>
+> **Requirement or improvement**
+>
+> Describe your requirement or improvement suggestion here.
+
+If your question is broad, is an opinion or suggestion, requests external resources, concerns project debugging or bug reporting, or if you want to contribute to the project or discuss scenarios, it is recommended to open a [GitHub issue](https://github.com/apache/dolphinscheduler/issues/new/choose) or discuss it on the dev@dolphinscheduler.apache.org mailing list.
+
+## Mailing Lists
+
+- [dev@dolphinscheduler.apache.org](https://lists.apache.org/list.html?dev@dolphinscheduler.apache.org) is for people who want to contribute code to DolphinScheduler. [(subscribe)](mailto:dev-subscribe@dolphinscheduler.apache.org?subject=(send%20this%20email%20to%20subscribe)) [(unsubscribe)](mailto:dev-unsubscribe@dolphinscheduler.apache.org?subject=(send%20this%20email%20to%20unsubscribe)) [(archives)](http://lists.apache.org/list.html?dev@dolphinscheduler.apache.org)
+
+Some quick tips when using email:
+
+- Before asking a question:
+  - Search under the StackOverflow [apache-dolphinscheduler](https://stackoverflow.com/questions/tagged/apache-dolphinscheduler) tag to see whether your question has already been answered.
+- Tagging the subject line of your email will help you get a faster response, e.g. [ApiServer]: How to get the open api interface?
+- You can define your subject with the following tags:
+  - Component related: MasterServer, ApiServer, WorkerServer, AlertServer, etc.
+  - Level: Beginner, Intermediate, Advanced
+  - Scenario related: Debug, How-to
+- If the content includes error logs or long code, please use a [GitHub gist](https://gist.github.com/) and include only a few lines of the relevant code/log in the email.
+
+## Chat Rooms
+
+Chat rooms are a good place to ask quick questions or discuss specific topics.
+
+The following chat rooms are officially part of Apache DolphinScheduler:
+
+The Slack workspace URL: http://asf-dolphinscheduler.slack.com/
+
+You can join through this invitation link: https://s.apache.org/dolphinscheduler-slack
+
+This chat room is for discussions related to using DolphinScheduler.
+
+
\ No newline at end of file
diff --git a/docs/img/architecture-design/dag_examples.png b/docs/img/architecture-design/dag_examples.png
new file mode 100644
index 0000000000..15848da71a
Binary files /dev/null and b/docs/img/architecture-design/dag_examples.png differ
diff --git a/docs/img/architecture-design/distributed_lock.png b/docs/img/architecture-design/distributed_lock.png
new file mode 100644
index 0000000000..5c34fc429d
Binary files /dev/null and b/docs/img/architecture-design/distributed_lock.png differ
diff --git a/docs/img/architecture-design/distributed_lock_procss.png b/docs/img/architecture-design/distributed_lock_procss.png
new file mode 100644
index 0000000000..469128bc16
Binary files /dev/null and b/docs/img/architecture-design/distributed_lock_procss.png differ
diff --git a/docs/img/architecture-design/fault-tolerant.png b/docs/img/architecture-design/fault-tolerant.png
new file mode 100644
index 0000000000..45dadf76e0
Binary files /dev/null and b/docs/img/architecture-design/fault-tolerant.png differ
diff --git a/docs/img/architecture-design/fault-tolerant_master.png b/docs/img/architecture-design/fault-tolerant_master.png
new file mode 100644
index 0000000000..a9901ce733
Binary files /dev/null and b/docs/img/architecture-design/fault-tolerant_master.png differ
diff --git a/docs/img/architecture-design/fault-tolerant_worker.png b/docs/img/architecture-design/fault-tolerant_worker.png
new file mode 100644
index 0000000000..e7f379d1f1
Binary files /dev/null and b/docs/img/architecture-design/fault-tolerant_worker.png differ
diff --git a/docs/img/architecture-design/grpc.png b/docs/img/architecture-design/grpc.png
new file mode 100644
index 0000000000..633b837566
Binary files /dev/null and b/docs/img/architecture-design/grpc.png differ
diff --git a/docs/img/architecture-design/lack_thread.png b/docs/img/architecture-design/lack_thread.png
new file mode 100644
index 0000000000..0dc5a7b137
Binary files /dev/null and b/docs/img/architecture-design/lack_thread.png differ
diff --git a/docs/img/architecture-design/process_priority.png b/docs/img/architecture-design/process_priority.png
new file mode 100644
index 0000000000..c6cd6001a5
Binary files /dev/null and b/docs/img/architecture-design/process_priority.png differ
diff --git a/docs/img/architecture-design/task_priority.png b/docs/img/architecture-design/task_priority.png
new file mode 100644
index 0000000000..3470260712
Binary files /dev/null and b/docs/img/architecture-design/task_priority.png differ
diff --git a/docs/img/architecture.jpg b/docs/img/architecture.jpg
new file mode 100644
index 0000000000..cbefda24ae
Binary files /dev/null and b/docs/img/architecture.jpg differ
diff --git a/docs/img/e2e-test/Dlocal.png b/docs/img/e2e-test/Dlocal.png
new file mode 100644
index 0000000000..2ba9efbe00
Binary files /dev/null and b/docs/img/e2e-test/Dlocal.png differ
diff --git a/docs/img/e2e-test/E2E_Cases.png b/docs/img/e2e-test/E2E_Cases.png
new file mode 100644
index 0000000000..1da289145b
Binary files /dev/null and b/docs/img/e2e-test/E2E_Cases.png differ
diff --git a/docs/img/e2e-test/MP4.png b/docs/img/e2e-test/MP4.png
new file mode 100644
index 0000000000..fb194b4d4d
Binary files /dev/null and b/docs/img/e2e-test/MP4.png differ
diff --git a/docs/img/e2e-test/SecurityPage.png b/docs/img/e2e-test/SecurityPage.png
new file mode 100644
index 0000000000..07c0cbfc43
Binary files /dev/null and b/docs/img/e2e-test/SecurityPage.png differ
diff --git a/docs/img/e2e-test/timeout.png b/docs/img/e2e-test/timeout.png
new file mode 100644
index 0000000000..9a31a70f3d
Binary files /dev/null and b/docs/img/e2e-test/timeout.png differ
diff --git a/docs/img_utils.py b/docs/img_utils.py
index 98ae07dcca..aafceb7192 100644
--- a/docs/img_utils.py
+++ b/docs/img_utils.py
@@ -61,7 +61,7 @@ def get_paths_rel_path(paths: Set[Path], rel: Path) -> Set:
def get_docs_img_path(paths: Set[Path]) -> Set:
"""Get all img syntax from given :param:`paths` using the regexp from :param:`pattern`."""
res = set()
- pattern = re.compile(r"/img[\w./-]*")
+ pattern = re.compile(r"/img[\w./-]+")
for path in paths:
content = path.read_text()
find = pattern.findall(content)