Commit

[typo](doc)spark load add task timeout parameter apache#20115
caoliang-web authored May 27, 2023
1 parent 8c00012 commit 875e72b
Showing 2 changed files with 2 additions and 0 deletions.
@@ -186,6 +186,7 @@ REVOKE USAGE_PRIV ON RESOURCE resource_name FROM ROLE role_name
- `spark.master`: required; `yarn` and `spark://host:port` are supported at present.
- `spark.submit.deployMode`: the deployment mode of the Spark program; required, supports `cluster` and `client`.
- `spark.hadoop.fs.defaultFS`: required when master is yarn.
- `spark.submit.timeout`: Spark task timeout, defaults to 5 minutes (an example sketch follows this diff hunk).
- Other parameters are optional; refer to `http://spark.apache.org/docs/latest/configuration.html`
- YARN RM related parameters are as follows:
- If the YARN ResourceManager is a single node, you need to configure `spark.hadoop.yarn.resourcemanager.address`, the address of the single-point ResourceManager.
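For context, the new `spark.submit.timeout` property is set together with the other Spark resource properties when the resource is created. Below is a minimal sketch of a `CREATE EXTERNAL RESOURCE` statement assuming the standard Doris Spark resource syntax; the resource name, addresses, working directory, broker name, and the timeout value format (shown here in seconds, 300 s for the documented 5-minute default) are illustrative assumptions, not values from this commit.

```sql
-- Hypothetical example: create a Spark resource for Spark Load and set the
-- new task timeout. Names and addresses below are placeholders.
CREATE EXTERNAL RESOURCE "spark0"
PROPERTIES
(
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    "spark.hadoop.yarn.resourcemanager.address" = "rm-host:8032",
    "spark.hadoop.fs.defaultFS" = "hdfs://namenode-host:8020",
    -- assumed to be expressed in seconds; 300 s = 5 minutes (the documented default)
    "spark.submit.timeout" = "300",
    "working_dir" = "hdfs://namenode-host:8020/tmp/doris",
    "broker" = "broker0"
);
```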
@@ -159,6 +159,7 @@ REVOKE USAGE_PRIV ON RESOURCE resource_name FROM ROLE role_name
- `spark.master`: required; `yarn` and `spark://host:port` are supported at present.
- `spark.submit.deployMode`: the deployment mode of the Spark program; required, supports `cluster` and `client`.
- `spark.hadoop.fs.defaultFS`: required when master is yarn.
- `spark.submit.timeout`: Spark task timeout, defaults to 5 minutes.
- YARN RM related parameters are as follows:
  - If the YARN ResourceManager is a single node, you need to configure `spark.hadoop.yarn.resourcemanager.address`, the address of the single-point ResourceManager.
  - If the ResourceManager runs in HA mode (RM-HA), you need to configure the following (choose either hostname or address):
