From 2294160cdbe7bd0f4215a5e9950cc784aa0b5a7d Mon Sep 17 00:00:00 2001
From: Yiming Guo <49181899+GavinGYM@users.noreply.github.com>
Date: Mon, 30 May 2022 12:07:29 +0800
Subject: [PATCH] [docs] Added local file configuration guide for resource
 center (#10264)

* Added Local File Resource Configuration Guide to the document.

* Removed contents with windows features in the documents and improved expression.

* Specify `the user who deploy dolphinscheduler have read and write permissions` in en and zh docs.

Co-authored-by: xiangzihao <460888207@qq.com>
---
 docs/docs/en/guide/resource/configuration.md | 17 ++++++++++++++---
 docs/docs/zh/guide/resource/configuration.md | 11 +++++++++++
 2 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/docs/docs/en/guide/resource/configuration.md b/docs/docs/en/guide/resource/configuration.md
index 7506cc2a81..ffae6db97e 100644
--- a/docs/docs/en/guide/resource/configuration.md
+++ b/docs/docs/en/guide/resource/configuration.md
@@ -2,13 +2,24 @@
 
 The Resource Center is usually used for operations such as uploading files, UDF functions, and task group management. You can appoint the local file directory as the upload directory for a single machine (this operation does not need to deploy Hadoop). Or you can also upload to a Hadoop or MinIO cluster, at this time, you need to have Hadoop (2.6+) or MinIO or other related environments.
 
+## Local File Resource Configuration
+
+For a single machine, you can choose to use a local file directory as the upload directory (no need to deploy Hadoop) by making the following configuration.
+
+### Configuring the `common.properties`
+
+Configure the files at the following paths: `api-server/conf/common.properties` and `worker-server/conf/common.properties`.
+
+- Change `data.basedir.path` to the local directory path and make sure the user who deploys DolphinScheduler has read and write permissions on it, for example: `data.basedir.path=/tmp/dolphinscheduler`. The directory you configure will be created automatically if it does not exist.
+- Modify the following two parameters: `resource.storage.type=HDFS` and `fs.defaultFS=file:///`.
+
 ## HDFS Resource Configuration
 
 When it is necessary to use the Resource Center to create or upload relevant files, all files and resources will be stored on HDFS. Therefore the following configuration is required.
 
 ### Configuring the common.properties
 
-After version 3.0.0-alpha, if you want to upload resources using HDFS or S3 from the Resource Center, you will need to configure the following paths The following paths need to be configured: `api-server/conf/common.properties` and `worker-server/conf/common.properties`. This can be found as follows.
+After version 3.0.0-alpha, if you want to upload resources using HDFS or S3 from the Resource Center, the following paths need to be configured: `api-server/conf/common.properties` and `worker-server/conf/common.properties`. This can be found as follows.
 
 ```properties
 #
@@ -110,7 +121,7 @@ alert.rpc.port=50052
 ```
 
 > **_Note:_**
->
+> 
 > * If only the `api-server/conf/common.properties` file is configured, then resource uploading is enabled, but you can not use resources in task. If you want to use or execute the files in the workflow you need to configure `worker-server/conf/common.properties` too.
 > * If you want to use the resource upload function, the deployment user in [installation and deployment](../installation/standalone.md) must have relevant operation authority.
-> * If you using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `/opt/dolphinscheduler/conf`, otherwise skip this copy step.
+> * If you using a Hadoop cluster with HA, you need to enable HDFS resource upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under the Hadoop cluster to `/opt/dolphinscheduler/conf`, otherwise skip this copy step.
\ No newline at end of file

diff --git a/docs/docs/zh/guide/resource/configuration.md b/docs/docs/zh/guide/resource/configuration.md
index 1b02a02587..a12184a9db 100644
--- a/docs/docs/zh/guide/resource/configuration.md
+++ b/docs/docs/zh/guide/resource/configuration.md
@@ -2,6 +2,17 @@
 
 资源中心通常用于上传文件、 UDF 函数，以及任务组管理等操作。针对单机环境可以选择本地文件目录作为上传文件夹（此操作不需要部署 Hadoop）。当然也可以选择上传到 Hadoop or MinIO 集群上，此时则需要有 Hadoop（2.6+）或者 MinIOn 等相关环境。
 
+## 本地资源配置
+
+在单机环境下，可以选择使用本地文件目录作为上传文件夹（无需部署Hadoop），此时需要进行如下配置：
+
+### 配置 `common.properties` 文件
+
+对以下路径的文件进行配置：`api-server/conf/common.properties` 和 `worker-server/conf/common.properties`
+
+- 将 `data.basedir.path` 改为本地存储路径，请确保部署 DolphinScheduler 的用户拥有读写权限，例如：`data.basedir.path=/tmp/dolphinscheduler`。当路径不存在时会自动创建文件夹
+- 修改下列两个参数，分别是 `resource.storage.type=HDFS` 和 `fs.defaultFS=file:///`。
+
 ## HDFS 资源配置
 
 当需要使用资源中心进行相关文件的创建或者上传操作时，所有的文件和资源都会被存储在 HDFS 上。所以需要进行以下配置：
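
For readers applying this patch by hand, the local-file setup it documents comes down to three `common.properties` entries. The snippet below is a minimal sketch assembled only from the values quoted in the diff above: the `/tmp/dolphinscheduler` path is just the example used there, and the comments are editorial rather than copied from the shipped file. Set the same values in both `api-server/conf/common.properties` and `worker-server/conf/common.properties`.

```properties
# Base directory for uploaded resources. The user who deploys DolphinScheduler needs
# read and write permissions on this path; it is created automatically if it does not exist.
data.basedir.path=/tmp/dolphinscheduler

# As the guide above describes, keep HDFS as the storage type and point the default
# filesystem at the local filesystem instead of an HDFS namenode.
resource.storage.type=HDFS
fs.defaultFS=file:///
```

The apparent design is that keeping `resource.storage.type=HDFS` while pointing `fs.defaultFS` at `file:///` lets the same storage code path operate on the local filesystem, which is why this mode needs no Hadoop deployment.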