
[docs] Added local file configuration guide for resource center (#10264)

* Added Local File Resource Configuration Guide to the document.
* Removed Windows-specific content from the documents and improved the wording.
* Specify `the user who deploy dolphinscheduler have read and write permissions` in en and zh docs.

Co-authored-by: xiangzihao <460888207@qq.com>
3.1.0-release
Yiming Guo, 3 years ago, committed by GitHub
parent commit 2294160cdb
  1. docs/docs/en/guide/resource/configuration.md (13 changes)
  2. docs/docs/zh/guide/resource/configuration.md (11 changes)

13
docs/docs/en/guide/resource/configuration.md

@@ -2,13 +2,24 @@
The Resource Center is usually used for operations such as uploading files, UDF functions, and task group management. For a single machine, you can use a local file directory as the upload directory (this does not require deploying Hadoop). Alternatively, you can upload to a Hadoop or MinIO cluster, in which case you need Hadoop (2.6+), MinIO, or a similar environment.
## Local File Resource Configuration
For a single machine, you can choose to use local file directory as the upload directory (no need to deploy Hadoop) by making the following configuration.
### Configuring the `common.properties`
Configure the file in the following paths: `api-server/conf/common.properties` and `worker-server/conf/common.properties`.
- Change `data.basedir.path` to the local directory path. Please make sure the user who deploys DolphinScheduler has read and write permissions, for example: `data.basedir.path=/tmp/dolphinscheduler`. The configured directory will be created automatically if it does not exist.
- Modify the following two parameters, `resource.storage.type=HDFS` and `fs.defaultFS=file:///`.
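Taken together, the two steps above amount to a `common.properties` fragment like the following (a minimal sketch; `/tmp/dolphinscheduler` is just the example path from the text, not a required location):

```properties
# Local file mode: store uploaded resources on the local filesystem.
# The deploy user must have read/write permission on this directory;
# it is created automatically if it does not exist.
data.basedir.path=/tmp/dolphinscheduler

# Keep the storage type as HDFS, but point the default filesystem at
# the local disk so that no Hadoop deployment is needed.
resource.storage.type=HDFS
fs.defaultFS=file:///
```

Apply the same fragment to both `api-server/conf/common.properties` and `worker-server/conf/common.properties`, then restart the servers so the change takes effect.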
## HDFS Resource Configuration
When it is necessary to use the Resource Center to create or upload relevant files, all files and resources will be stored on HDFS. Therefore the following configuration is required.
### Configuring the common.properties
After version 3.0.0-alpha, if you want to upload resources using HDFS or S3 from the Resource Center, the following paths need to be configured: `api-server/conf/common.properties` and `worker-server/conf/common.properties`. This can be found as follows.
```properties
#

11
docs/docs/zh/guide/resource/configuration.md

@@ -2,6 +2,17 @@
The Resource Center is usually used for operations such as uploading files, UDF functions, and task group management. For a single-machine environment, you can choose a local file directory as the upload folder (this does not require deploying Hadoop). You can also upload to a Hadoop or MinIO cluster, in which case a Hadoop (2.6+), MinIO, or similar environment is required.
## Local Resource Configuration
In a single-machine environment, you can use a local file directory as the upload folder (no need to deploy Hadoop) by making the following configuration:
### Configure the `common.properties` file
Configure the files at the following paths: `api-server/conf/common.properties` and `worker-server/conf/common.properties`.
- Change `data.basedir.path` to the local storage path. Make sure the user who deploys DolphinScheduler has read and write permissions, for example: `data.basedir.path=/tmp/dolphinscheduler`. The directory will be created automatically if the path does not exist.
- Modify the following two parameters: `resource.storage.type=HDFS` and `fs.defaultFS=file:///`.
## HDFS Resource Configuration
When the Resource Center is used to create or upload files, all files and resources are stored on HDFS, so the following configuration is required:
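A quick way to verify the read/write requirement described above is a small shell check run as the deploy user (a sketch; `/tmp/dolphinscheduler` is the example path from the docs, not a fixed location):

```shell
#!/bin/sh
# Example path from the docs; substitute your own data.basedir.path value.
DATA_BASEDIR=/tmp/dolphinscheduler

# DolphinScheduler auto-creates the directory, so mimic that first.
mkdir -p "$DATA_BASEDIR"

# Confirm the current (deploy) user can both read and write the directory.
if [ -r "$DATA_BASEDIR" ] && [ -w "$DATA_BASEDIR" ]; then
    echo "ok: $DATA_BASEDIR is readable and writable"
else
    echo "error: missing read/write permission on $DATA_BASEDIR" >&2
    exit 1
fi
```

If the check fails, adjust ownership or mode (for example with `chown` or `chmod`) before starting the api-server and worker-server.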
