
[Improvement][DOC] Update resource S3 configuration docs (#13985)

* update resource S3 docs
3.2.0-release
JieguangZhou 2 years ago committed by GitHub
commit 5c1edd2912
1. docs/docs/en/guide/resource/configuration.md (28 changes)
2. docs/docs/zh/guide/resource/configuration.md (26 changes)

docs/docs/en/guide/resource/configuration.md (28 changes)

@@ -26,9 +26,35 @@ The configuration you may need to change:
> and `resource.hdfs.fs.defaultFS=file:///`. The separate `resource.storage.type=LOCAL` value is provided for user friendliness and enables
> the local resource center by default.
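For illustration, this is the explicit pair of settings that the `LOCAL` shorthand described in the note expands to (a sketch based on the note above, not additional required configuration):

```properties
# Explicit equivalent of resource.storage.type=LOCAL, per the note above
resource.storage.type=HDFS
resource.hdfs.fs.defaultFS=file:///
```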
## Connect AWS S3
If you want to upload resources to a `Resource Center` connected to `S3`, you need to configure `api-server/conf/common.properties` and `worker-server/conf/common.properties`. You can refer to the following example and configure these fields:
```properties
......
resource.storage.type=S3
......
# The AWS access key ID. This configuration is required if resource.storage.type=S3 or if the EMR task is used.
resource.aws.access.key.id=aws_access_key_id
# The AWS secret access key. This configuration is required if resource.storage.type=S3 or if the EMR task is used.
resource.aws.secret.access.key=aws_secret_access_key
# The AWS region to use. This configuration is required if resource.storage.type=S3 or if the EMR task is used.
resource.aws.region=us-west-2
# The name of the bucket. You need to create it yourself, otherwise the system cannot start. All buckets in Amazon S3 share a single namespace, so make sure the bucket name is unique.
resource.aws.s3.bucket.name=dolphinscheduler
# You need to set this parameter for a private cloud S3. If you use a public cloud S3, you only need to set resource.aws.region, or set this to the endpoint of the public cloud, such as s3.cn-north-1.amazonaws.com.cn.
resource.aws.s3.endpoint=
......
```
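As an illustration of the private-cloud case mentioned in the last comment, the endpoint can point at a self-hosted S3-compatible service; the host, port, credentials, and MinIO choice below are placeholder assumptions, not values from this change:

```properties
# Hypothetical self-hosted S3-compatible storage (for example MinIO); adjust to your deployment
resource.storage.type=S3
resource.aws.access.key.id=minioadmin
resource.aws.secret.access.key=minioadmin
# A placeholder region is usually accepted by S3-compatible services
resource.aws.region=us-east-1
resource.aws.s3.bucket.name=dolphinscheduler
resource.aws.s3.endpoint=http://localhost:9000
```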
## Use HDFS or Remote Object Storage
After version 3.0.0-alpha, if you want to upload resources to a `Resource Center` connected to `HDFS`, you need to configure `api-server/conf/common.properties` and `worker-server/conf/common.properties`.
```properties
#
```
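A minimal sketch of the HDFS case, using only fields that appear elsewhere on this page; the NameNode address is a hypothetical example:

```properties
# Minimal HDFS sketch; replace the defaultFS value with your own NameNode address
resource.storage.type=HDFS
resource.hdfs.fs.defaultFS=hdfs://localhost:8020
```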

docs/docs/zh/guide/resource/configuration.md (26 changes)

@@ -24,6 +24,32 @@ The DolphinScheduler Resource Center uses the local file system and is enabled by default; users do not need
> 3. When you configure `resource.storage.type=LOCAL`, you are actually setting two configuration items, `resource.storage.type=HDFS` and `resource.hdfs.fs.defaultFS=file:///`. The separate `resource.storage.type=LOCAL` value is provided
> for user convenience and enables the local resource center by default.
## Connect AWS S3
If you want to upload resources to a `Resource Center` connected to `S3`, you need to configure `api-server/conf/common.properties` and `worker-server/conf/common.properties`. You can refer to the following example and configure these fields:
```properties
......
resource.storage.type=S3
......
# The AWS access key ID. This configuration is required if resource.storage.type=S3 or if the EMR task is used.
resource.aws.access.key.id=aws_access_key_id
# The AWS secret access key. This configuration is required if resource.storage.type=S3 or if the EMR task is used.
resource.aws.secret.access.key=aws_secret_access_key
# The AWS region to use. This configuration is required if resource.storage.type=S3 or if the EMR task is used.
resource.aws.region=us-west-2
# The name of the bucket. You need to create it yourself, otherwise the system cannot start. All buckets in Amazon S3 share a single namespace, so make sure the bucket name is unique.
resource.aws.s3.bucket.name=dolphinscheduler
# You need to set this parameter for a private cloud S3. If you use a public cloud S3, you only need to set resource.aws.region, or set this to the endpoint of the public cloud, such as s3.cn-north-1.amazonaws.com.cn.
resource.aws.s3.endpoint=
......
```
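Conversely, for the public AWS cloud case described in the endpoint comment, a sketch can leave the endpoint empty and rely on the region alone; the bucket name here is an illustrative placeholder:

```properties
......
# Hypothetical public AWS S3 setup: the region selects the service endpoint, so the endpoint stays empty
resource.aws.region=us-west-2
resource.aws.s3.bucket.name=my-dolphinscheduler-bucket
resource.aws.s3.endpoint=
......
```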
## Connect to Distributed or Remote Object Storage
When you use the Resource Center to create or upload files, all files and resources are stored on a distributed file system such as `HDFS` or on remote object storage such as `S3`, so the following configuration is needed:
