[AutoPR] datafactory/resource-manager #5730

Merged 5 commits on Sep 11, 2019
Generated from b07009df21da758efcb13fbbd25ce9e450751586 (#5729)
[DataFactory] Update Databricks linked service swagger related to warm pools.
AutorestCI authored Sep 10, 2019
commit 8647a936af80a577ddd4ee02eca3a34bd10662be
28 changes: 21 additions & 7 deletions services/datafactory/mgmt/2018-06-01/datafactory/models.go
@@ -16296,25 +16296,27 @@ type AzureDatabricksLinkedServiceTypeProperties struct {
 	Domain interface{} `json:"domain,omitempty"`
 	// AccessToken - Access token for databricks REST API. Refer to https://docs.azuredatabricks.net/api/latest/authentication.html. Type: string (or Expression with resultType string).
 	AccessToken BasicSecretBase `json:"accessToken,omitempty"`
-	// ExistingClusterID - The id of an existing cluster that will be used for all runs of this job. Type: string (or Expression with resultType string).
+	// ExistingClusterID - The id of an existing interactive cluster that will be used for all runs of this activity. Type: string (or Expression with resultType string).
 	ExistingClusterID interface{} `json:"existingClusterId,omitempty"`
-	// NewClusterVersion - The Spark version of new cluster. Type: string (or Expression with resultType string).
+	// InstancePoolID - The id of an existing instance pool that will be used for all runs of this activity. Type: string (or Expression with resultType string).
+	InstancePoolID interface{} `json:"instancePoolId,omitempty"`
+	// NewClusterVersion - If not using an existing interactive cluster, this specifies the Spark version of a new job cluster or instance pool nodes created for each run of this activity. Required if instancePoolId is specified. Type: string (or Expression with resultType string).
 	NewClusterVersion interface{} `json:"newClusterVersion,omitempty"`
-	// NewClusterNumOfWorker - Number of worker nodes that new cluster should have. A string formatted Int32, like '1' means numOfWorker is 1 or '1:10' means auto-scale from 1 as min and 10 as max. Type: string (or Expression with resultType string).
+	// NewClusterNumOfWorker - If not using an existing interactive cluster, this specifies the number of worker nodes to use for the new job cluster or instance pool. For new job clusters, this is a string-formatted Int32, like '1' means numOfWorker is 1 or '1:10' means auto-scale from 1 (min) to 10 (max). For instance pools, this is a string-formatted Int32, and can only specify a fixed number of worker nodes, such as '2'. Required if newClusterVersion is specified. Type: string (or Expression with resultType string).
 	NewClusterNumOfWorker interface{} `json:"newClusterNumOfWorker,omitempty"`
-	// NewClusterNodeType - The node types of new cluster. Type: string (or Expression with resultType string).
+	// NewClusterNodeType - The node type of the new job cluster. This property is required if newClusterVersion is specified and instancePoolId is not specified. If instancePoolId is specified, this property is ignored. Type: string (or Expression with resultType string).
 	NewClusterNodeType interface{} `json:"newClusterNodeType,omitempty"`
 	// NewClusterSparkConf - A set of optional, user-specified Spark configuration key-value pairs.
 	NewClusterSparkConf map[string]interface{} `json:"newClusterSparkConf"`
 	// NewClusterSparkEnvVars - A set of optional, user-specified Spark environment variables key-value pairs.
 	NewClusterSparkEnvVars map[string]interface{} `json:"newClusterSparkEnvVars"`
-	// NewClusterCustomTags - Additional tags for cluster resources.
+	// NewClusterCustomTags - Additional tags for cluster resources. This property is ignored in instance pool configurations.
 	NewClusterCustomTags map[string]interface{} `json:"newClusterCustomTags"`
-	// NewClusterDriverNodeType - The driver node type for the new cluster. Type: string (or Expression with resultType string).
+	// NewClusterDriverNodeType - The driver node type for the new job cluster. This property is ignored in instance pool configurations. Type: string (or Expression with resultType string).
 	NewClusterDriverNodeType interface{} `json:"newClusterDriverNodeType,omitempty"`
 	// NewClusterInitScripts - User-defined initialization scripts for the new cluster. Type: array of strings (or Expression with resultType array of strings).
 	NewClusterInitScripts interface{} `json:"newClusterInitScripts,omitempty"`
-	// NewClusterEnableElasticDisk - Enable the elastic disk on the new cluster. Type: boolean (or Expression with resultType boolean).
+	// NewClusterEnableElasticDisk - Enable the elastic disk on the new cluster. This property is now ignored, and takes the default elastic disk behavior in Databricks (elastic disks are always enabled). Type: boolean (or Expression with resultType boolean).
 	NewClusterEnableElasticDisk interface{} `json:"newClusterEnableElasticDisk,omitempty"`
 	// EncryptedCredential - The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
 	EncryptedCredential interface{} `json:"encryptedCredential,omitempty"`
@@ -16330,6 +16332,9 @@ func (adlstp AzureDatabricksLinkedServiceTypeProperties) MarshalJSON() ([]byte,
 	if adlstp.ExistingClusterID != nil {
 		objectMap["existingClusterId"] = adlstp.ExistingClusterID
 	}
+	if adlstp.InstancePoolID != nil {
+		objectMap["instancePoolId"] = adlstp.InstancePoolID
+	}
 	if adlstp.NewClusterVersion != nil {
 		objectMap["newClusterVersion"] = adlstp.NewClusterVersion
 	}
@@ -16398,6 +16403,15 @@ func (adlstp *AzureDatabricksLinkedServiceTypeProperties) UnmarshalJSON(body []b
 			}
 			adlstp.ExistingClusterID = existingClusterID
 		}
+	case "instancePoolId":
+		if v != nil {
+			var instancePoolID interface{}
+			err = json.Unmarshal(*v, &instancePoolID)
+			if err != nil {
+				return err
+			}
+			adlstp.InstancePoolID = instancePoolID
+		}
 	case "newClusterVersion":
 		if v != nil {
 			var newClusterVersion interface{}
