[Spark-20812][Mesos] Add secrets support to the dispatcher #18837
Conversation
@skonto @susanxhuynh Please review.
resource-managers/mesos/pom.xml (Outdated)

```
@@ -29,7 +29,7 @@
   <name>Spark Project Mesos</name>
   <properties>
     <sbt.project.name>mesos</sbt.project.name>
-    <mesos.version>1.0.0</mesos.version>
+    <mesos.version>1.3.0-rc1</mesos.version>
```
1.3.0 is out I think right? Should we depend on a stable release? Does this change anything?
docs/running-on-mesos.md (Outdated)

```
     </td>
   </tr>
   <tr>
+    <td><code>spark.mesos.driver.secret.name</code></td>
```
Is this the fully qualified path to the secret in the store?
I think this refers to the `Secret` message here: https://github.com/apache/mesos/blob/master/include/mesos/mesos.proto#L2335. It says a secret can have one of two types: reference or value. Presumably the type here is `REFERENCE`, and this is the name of the reference. In practice, this could be the path in a secret store.
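For concreteness, here is a minimal Scala sketch of the two shapes through Mesos's generated protobuf API (assuming the `org.apache.mesos.Protos.Secret` message from Mesos 1.3+; the name and value here are made up):

```scala
import com.google.protobuf.ByteString
import org.apache.mesos.Protos.Secret

// A REFERENCE secret carries only a name; a secret-store module on the
// agent resolves the name to the actual secret at container launch.
val referenceSecret: Secret = Secret.newBuilder()
  .setType(Secret.Type.REFERENCE)
  .setReference(Secret.Reference.newBuilder().setName("/mysecret"))
  .build()

// A VALUE secret carries the secret bytes inline instead of a name.
val valueSecret: Secret = Secret.newBuilder()
  .setType(Secret.Type.VALUE)
  .setValue(Secret.Value.newBuilder().setData(ByteString.copyFromUtf8("topsecret")))
  .build()
```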
Actually, the `Secret` protobuf message is new in Mesos 1.3. What are the implications for compatibility with older (<1.3) Mesos clusters? Also, is there anything in Mesos 1.4 that this relies on?
@susanxhuynh @skonto This is an oversight on my part. We should allow the user to specify which secret `Type` they are going to use. Would the format `--conf spark.mesos.driver.secret.name=/path:TYPE` be acceptable?
@ArtRand What would the `path` mean for `TYPE=VALUE`? Would it be a path to a local file containing the value of the secret?
@susanxhuynh The type would only be set on the secret name, for example:

```
--conf spark.mesos.driver.secret.name=/mysecret:REFERENCE \
--conf spark.mesos.driver.secret.filename=asecret
```

Also, while we're on the topic, what about multiple secrets? Are comma-separated entries the typical way? E.g.:

```
--conf spark.mesos.driver.secret.name=/mysecret:REFERENCE,/super/secret:REFERENCE \
--conf spark.mesos.driver.secret.filename=asecret,topsecret
```
@ArtRand Looking at the protobuf definition (https://github.com/apache/mesos/blob/master/include/mesos/mesos.proto#L2335), I see that unlike a `Reference`, a `Value` message does not have a name: it has a field called `data`. So I was wondering how the user would pass in that data.
Ah, you're right, we'll need a separate system for value secrets. I would imagine that `VALUE` secrets are mainly used as environment-based secrets? Let me re-do this; thanks for bringing this up.
Hey @susanxhuynh, so the way it works now is that you can specify a secret as a `REFERENCE` or a `VALUE` type by using the `spark.mesos.driver.secret.name` or `spark.mesos.driver.secret.value` configs, respectively. These secrets are then made file-based and/or env-based depending on the contents of `spark.mesos.driver.secret.filename` and `spark.mesos.driver.secret.envkey`. I allow for multiple secrets (but not multiple types) as comma-separated lists (like Mesos URIs).
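If I'm reading the final design right, usage would look something like the following (secret paths, values, filenames, and env var names are all made up):

```
# Two file-based reference secrets:
--conf spark.mesos.driver.secret.name=/mysecret,/super/secret \
--conf spark.mesos.driver.secret.filename=asecret,topsecret

# Or a single value secret exposed through the driver's environment:
--conf spark.mesos.driver.secret.value=topsecret \
--conf spark.mesos.driver.secret.envkey=MY_SECRET
```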
```
-      case MesosTaskState.TASK_FAILED => TaskState.FAILED
+      case MesosTaskState.TASK_FAILED |
+           MesosTaskState.TASK_GONE |
+           MesosTaskState.TASK_GONE_BY_OPERATOR => TaskState.FAILED
```
These are new to 1.3?
Actually, they were introduced in 1.2: https://github.com/apache/mesos/blob/1.2.0/include/mesos/mesos.proto#L1672
Just so my thinking's clear: this still doesn't require the Mesos cluster to be 1.2+, because the wire protocol is still the same? There won't be older Mesos client libs lying around that would cause this to fail at runtime.
```
       TaskInfo.newBuilder()
         .setTaskId(taskId)
         .setName(s"Driver for ${appName}")
         .setSlaveId(offer.offer.getSlaveId)
         .setCommand(buildDriverCommand(desc))
         .setContainer(getContainerInfo(desc))
         .addAllResources(cpuResourcesToUse.asJava)
         .addAllResources(memResourcesToUse.asJava)
+        .setLabels(MesosProtoUtils.mesosLabels(desc.conf.get(config.DRIVER_LABELS).getOrElse("")))
```
Cleaner, I think, to do here:

```
val mesosLabels = MesosProtoUtils.mesosLabels(desc.conf.get(config.DRIVER_LABELS).getOrElse(""))
...
.setLabels(mesosLabels)
```

Btw, a lot of secrets are defined in the code base:
Thanks @ArtRand! A question about backwards compatibility and a few nits.
docs/running-on-mesos.md (Outdated)

```
     protobuf for more information.
   </td>
 </tr>
```
Document `spark.mesos.driver.secret.value` here?
```
@@ -29,7 +29,7 @@
   <name>Spark Project Mesos</name>
   <properties>
     <sbt.project.name>mesos</sbt.project.name>
-    <mesos.version>1.0.0</mesos.version>
+    <mesos.version>1.3.0</mesos.version>
     <mesos.classifier>shaded-protobuf</mesos.classifier>
```
https://spark.apache.org/docs/latest/running-on-mesos.html#installing-mesos mentions that Spark is designed for use with Mesos 1.0.0+. Is that still true? Do these changes affect users who have clusters that are < 1.3.0? Also, how much of this change requires 1.4?
I don't think anything requires 1.4; it's only that DC/OS with secrets runs 1.4. On a <1.3 Mesos cluster, all of the protobuf messages will still be valid. I added to the documentation that secrets require 1.3+.
So we don't think this will require users to use Mesos 1.3+?
```
@@ -58,12 +58,39 @@ package object config {

   private[spark] val DRIVER_LABELS =
     ConfigBuilder("spark.mesos.driver.labels")
-      .doc("Mesos labels to add to the driver. Labels are free-form key-value pairs. Key-value " +
+      .doc("Mesos labels to add to the driver. Labels are free-form key-value pairs. Key-value" +
```
Missing a space after "Key-value"
"pairs should be separated by a colon, and commas used to list more than one." + | ||
"Ex. key:value,key2:value2") | ||
.stringConf | ||
.createOptional | ||
|
||
private[spark] val SECRET_NAME = | ||
ConfigBuilder("spark.mesos.driver.secret.name") | ||
.doc("A comma-seperated list of secret references. Consult the Mesos Secret protobuf for " + |
Sp: "separated". Check for same typo below also.
docs/running-on-mesos.md (Outdated)

```
    If set, the contents of the secret referenced by
    spark.mesos.driver.secret.name will be written to the provided
    environment variable in the driver's process.
  </td>
```
Maybe update to mention this can be a comma-separated list of env vars. Same for filename below.
```
    val containerInfo = MesosSchedulerBackendUtil.containerInfo(desc.conf)

    getSecretVolume(desc).foreach { volume =>
      logInfo(s"Setting secret name=${volume.getSource.getSecret.getReference.getName} " +
```
Is this assuming the type is `Reference` when it could be either `Reference` or `Value`? Same question for the `logInfo()` for env vars above.
Good catch. I changed this to be conditioned on the secret being a `REFERENCE` type. (You probably don't want the content of your secret printed to the logs with a `VALUE` type.)
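A sketch of what that guard might look like (assuming the protobuf accessors on `Volume`/`Secret`; the merged code may differ in detail):

```scala
getSecretVolume(desc).foreach { volume =>
  // Only log the secret's name when it is a reference;
  // never log inline VALUE data.
  if (volume.getSource.getSecret.hasReference) {
    logInfo(s"Setting reference secret " +
      s"${volume.getSource.getSecret.getReference.getName} " +
      s"on file ${volume.getContainerPath}")
  } else {
    logInfo(s"Setting value secret on file ${volume.getContainerPath}")
  }
}
```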
LGTM
This needs a rebase. The form and style look OK. I can't really review the substance of the change, but I tend to trust you've got that as right as anyone can.
I'm going to run the tests now, to double-check whether any MiMa or deps checks fail.
Test build #3900 has finished for PR 18837 at commit.
Hello @srowen, thanks for taking a look at this. You're correct that this change does not require users to have a Mesos 1.3+ cluster: we do not change or omit any required records in the proto (https://github.com/apache/mesos/blob/1.3.0/include/mesos/mesos.proto#L2053). If a user tries to specify a secret on an older cluster, the additional (optional) records should be ignored.
Test build #3904 has finished for PR 18837 at commit.
```
  private[spark] val DRIVER_FAILOVER_TIMEOUT =

  private[spark] val SECRET_NAME =
    ConfigBuilder("spark.mesos.driver.secret.name")
      .doc("A comma-separated list of secret references. Consult the Mesos Secret protobuf for " +
```
If this is a list, it should be created with `.toSequence`; it returns the value properly parsed as a list to you. And it should probably be called `names`.
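Roughly, under that suggestion, the constant might become something like this (a sketch against Spark's internal `ConfigBuilder` DSL; the final key name and default are assumptions):

```scala
  private[spark] val SECRET_NAMES =
    ConfigBuilder("spark.mesos.driver.secret.names")
      .doc("A comma-separated list of secret references.")
      .stringConf
      .toSequence
      .createWithDefault(Nil)
```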
```
  private[spark] val SECRET_VALUE =
    ConfigBuilder("spark.mesos.driver.secret.value")
      .doc("A comma-separated list of secret values.")
```
Ditto.
```
  private[spark] val SECRET_ENVKEY =
    ConfigBuilder("spark.mesos.driver.secret.envkey")
      .doc("A comma-separated list of the environment variables to contain the secrets." +
```
Ditto.
```
  private[spark] val SECRET_FILENAME =
    ConfigBuilder("spark.mesos.driver.secret.filename")
      .doc("A comma-seperated list of file paths secret will be written to. Consult the Mesos " +
```
Ditto.
Hello @vanzin, thanks for the review. I added
```
  private def getSecretEnvVar(desc: MesosDriverDescription): List[Variable] = {
    val secrets = getSecrets(desc)
    val secretEnvKeys = {
      if (desc.conf.get(config.SECRET_ENVKEY).isDefined) {
```
This kind of pattern is not really necessary. You can either use `conf.get(BLAH).getOrElse(Nil)`, or declare `Nil` as the default when creating the config constant.
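For example, a sketch against the snippet above (assuming `config.SECRET_ENVKEY` is an optional sequence config):

```scala
    // Fall back to the empty list in a single expression:
    val secretEnvKeys: Seq[String] = desc.conf.get(config.SECRET_ENVKEY).getOrElse(Nil)

    // Or declare the entry with .createWithDefault(Nil), after which
    // desc.conf.get(config.SECRET_ENVKEY) returns a Seq[String] directly.
```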
"pairs should be separated by a colon, and commas used to list more than one." + | ||
"Ex. key:value,key2:value2") | ||
.stringConf | ||
.createOptional | ||
|
||
private[spark] val SECRET_NAME = | ||
ConfigBuilder("spark.mesos.driver.secret.name") |
Are you going to keep the config names in the singular? That's both odd since they're lists, and inconsistent with the config above ("spark.mesos.driver.labels").
docs/running-on-mesos.md (Outdated)

```
@@ -481,6 +483,43 @@ See the [configuration page](configuration.html) for information on Spark config
 </tr>

 <tr>
   <td><code>spark.mesos.driver.secret.envkey</code></td>
```
The documentation now doesn't match the code.
```
    }

    val referenceSecrets: Seq[Secret] = {
      if (desc.conf.get(config.SECRET_NAME).isDefined) {
```
Are you going to fix this kind of code according to my previous suggestion? `desc.conf.get(config.SECRET_NAME).getOrElse(Nil).map(...)`
Hello @vanzin, should be consistent now. Thanks again.
There are still some small issues (minor style nits, duplicating conf keys instead of using the constants). Merging to master.
## Background

In #18837, ArtRand added Mesos secrets support to the dispatcher. **This PR is to add the same secrets support to the drivers.** This means if the secret configs are set, the driver will launch executors that have access to either env or file-based secrets. One use case for this is to support TLS in the driver <=> executor communication.

## What changes were proposed in this pull request?

Most of the changes are a refactor of the dispatcher secrets support (#18837), moving it to a common place that can be used by both the dispatcher and drivers. The same goes for the unit tests.

## How was this patch tested?

There are four config combinations: [env or file-based] x [value or reference secret]. For each combination:
- Added a unit test.
- Tested in DC/OS.

Author: Susan X. Huynh <xhuynh@mesosphere.com>

Closes #19437 from susanxhuynh/sh-mesos-driver-secret.
What changes were proposed in this pull request?
Mesos has secrets primitives for environment- and file-based secrets; this PR adds that functionality to the Spark dispatcher, along with the appropriate configuration flags.
How was this patch tested?
Unit tested and manually tested against a DC/OS cluster with Mesos 1.4.
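For context, a hypothetical end-to-end submission through the dispatcher might look like this (the master URL, jar location, and secret names are all made up):

```
./bin/spark-submit \
  --master mesos://dispatcher.example.com:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.driver.secret.name=/mysecret \
  --conf spark.mesos.driver.secret.filename=asecret \
  --class org.apache.spark.examples.SparkPi \
  https://example.com/jars/spark-examples.jar
```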