Update cluster.md: add data_security_mode parameters `NONE` and `NO_ISOLATION` #3740
Merged
Conversation
mgyucht approved these changes on Jul 5, 2024
alexott reviewed on Jul 5, 2024
@@ -43,7 +43,7 @@ resource "databricks_cluster" "shared_autoscaling" {
  * `autotermination_minutes` - (Optional) Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. Defaults to `60`. *We highly recommend having this setting present for Interactive/BI clusters.*
  * `enable_elastic_disk` - (Optional) If you don’t want to allocate a fixed number of EBS volumes at cluster creation time, use autoscaling local storage. With autoscaling local storage, Databricks monitors the amount of free disk space available on your cluster’s Spark workers. If a worker begins to run too low on disk, Databricks automatically attaches a new EBS volume to the worker before it runs out of disk space. EBS volumes are attached up to a limit of 5 TB of total disk space per instance (including the instance’s local storage). To scale down EBS usage, make sure you have `autotermination_minutes` and `autoscale` attributes set. More documentation available at [cluster configuration page](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage-1).
  * `enable_local_disk_encryption` - (Optional) Some instance types you use to run clusters may have locally attached disks. Databricks may store shuffle data or temporary data on these locally attached disks. To ensure that all data at rest is encrypted for all storage types, including shuffle data stored temporarily on your cluster’s local disks, you can enable local disk encryption. When local disk encryption is enabled, Databricks generates an encryption key locally unique to each cluster node and uses it to encrypt all data stored on local disks. The scope of the key is local to each cluster node and is destroyed along with the cluster node itself. During its lifetime, the key resides in memory for encryption and decryption and is stored encrypted on the disk. *Your workloads may run more slowly because of the performance impact of reading and writing encrypted data to and from local volumes. This feature is not available for all Azure Databricks subscriptions. Contact your Microsoft or Databricks account representative to request access.*
- * `data_security_mode` - (Optional) Select the security features of the cluster. [Unity Catalog requires](https://docs.databricks.com/data-governance/unity-catalog/compute.html#create-clusters--sql-warehouses-with-unity-catalog-access) `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. If omitted, no security features are enabled. In the Databricks UI, this has been recently been renamed *Access Mode* and `USER_ISOLATION` has been renamed *Shared*, but use these terms here.
+ * `data_security_mode` - (Optional) Select the security features of the cluster. [Unity Catalog requires](https://docs.databricks.com/data-governance/unity-catalog/compute.html#create-clusters--sql-warehouses-with-unity-catalog-access) `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. If omitted, default security features are enabled. To disable security features use `NONE` or legacy mode `NO_ISOLATION`. In the Databricks UI, this has been recently been renamed *Access Mode* and `USER_ISOLATION` has been renamed *Shared*, but use these terms here.
Should we say "default security features are enabled if UC is enabled"?
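For context, a minimal Terraform sketch of how the documented attribute might be used with the new `NONE` value; the cluster name, Spark version, and node type below are illustrative placeholders, not taken from this PR:

```hcl
# Hypothetical cluster with security features explicitly disabled via NONE.
resource "databricks_cluster" "no_security_features" {
  cluster_name            = "example-no-security" # placeholder name
  spark_version           = "14.3.x-scala2.12"    # placeholder runtime version
  node_type_id            = "i3.xlarge"           # placeholder node type
  num_workers             = 1
  autotermination_minutes = 60

  # Per the updated docs: omit this attribute for default security features,
  # use NONE (or the legacy NO_ISOLATION) to disable them, and
  # SINGLE_USER / USER_ISOLATION for Unity Catalog clusters.
  data_security_mode = "NONE"
}
```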
tanmay-db added a commit that referenced this pull request on Jul 9, 2024
### Internal Changes
* Add Release tag ([#3748](#3748)).
* Improve Changelog by grouping changes ([#3747](#3747)).
* Upgrade Go SDK to v0.43.2 ([#3750](#3750)).

### Other Changes
* Add `databricks_schema` data source ([#3732](#3732)).
* Add new APIErrorBody struct and update deps ([#3745](#3745)).
* Added support for binding storage credentials and external locations to specific workspaces ([#3678](#3678)).
* Adds `databricks_volume` as data source ([#3211](#3211)).
* Change TF registry ownership ([#3736](#3736)).
* Exporter: Emit directories during the listing only if they are explicitly configured in `-listing` ([#3673](#3673)).
* Exporter: export libraries specified as `requirements.txt` ([#3649](#3649)).
* Exporter: fix generation of `run_as` blocks in `databricks_job` ([#3724](#3724)).
* Exporter: use Go SDK structs for `databricks_job` resource ([#3727](#3727)).
* Fix invalid priviledges in grants.md ([#3716](#3716)).
* Make the schedule.pause_status field read-only ([#3692](#3692)).
* Refactored `databricks_cluster(s)` data sources to Go SDK ([#3685](#3685)).
* Renamed `databricks_catalog_workspace_binding` to `databricks_workspace_binding` ([#3703](#3703)).
* Run goreleaser action in snapshot mode from merge queue ([#3646](#3646)).
* Update cluster.md: add data_security_mode parameters `NONE` and `NO_ISOLATION` ([#3740](#3740)).
* Upgrade databricks-sdk-go ([#3743](#3743)).
* remove references to basic auth ([#3720](#3720)).
github-merge-queue bot pushed a commit that referenced this pull request on Jul 19, 2024
## 1.49.0

### New Features and Improvements
* Added `databricks_dashboard` resource ([#3729](#3729)).
* Added `databricks_schema` data source ([#3732](#3732)).
* Added support for binding storage credentials and external locations to specific workspaces ([#3678](#3678)).
* Added `databricks_volume` as data source ([#3211](#3211)).
* Make the `schedule.pause_status` field read-only ([#3692](#3692)).
* Renamed `databricks_catalog_workspace_binding` to `databricks_workspace_binding` ([#3703](#3703)).
* Make `cluster_name_contains` optional in `databricks_clusters` data source ([#3760](#3760)).
* Tolerate OAuth errors in databricks_mws_workspaces when managing tokens ([#3761](#3761)).
* Permissions for `databricks_dashboard` resource ([#3762](#3762)).

### Exporter
* Emit directories during the listing only if they are explicitly configured in `-listing` ([#3673](#3673)).
* Export libraries specified as `requirements.txt` ([#3649](#3649)).
* Fix generation of `run_as` blocks in `databricks_job` ([#3724](#3724)).
* Use Go SDK structs for `databricks_job` resource ([#3727](#3727)).
* Clarify use of `-listing` and `-services` options ([#3755](#3755)).
* Improve code generation for SQL Endpoints ([#3764](#3764)).

### Documentation
* Fix invalid priviledges in grants.md ([#3716](#3716)).
* Update cluster.md: add data_security_mode parameters `NONE` and `NO_ISOLATION` ([#3740](#3740)).
* Remove references to basic auth ([#3720](#3720)).
* Update resources diagram ([#3765](#3765)).

### Internal Changes
* Add Release tag ([#3748](#3748)).
* Improve Changelog by grouping changes ([#3747](#3747)).
* Change TF registry ownership ([#3736](#3736)).
* Refactored `databricks_cluster(s)` data sources to Go SDK ([#3685](#3685)).
* Upgrade databricks-sdk-go ([#3743](#3743)).
* Run goreleaser action in snapshot mode from merge queue ([#3646](#3646)).
* Make `dashboard_name` random in integration tests for `databricks_dashboard` resource ([#3763](#3763)).
* Clear stale go.sum values ([#3768](#3768)).
* Add "Owner" tag to test cluster in acceptance test ([#3771](#3771)).
* Fix integration test for restrict workspace admins setting ([#3772](#3772)).
* Add "Owner" tag to test SQL endpoint in acceptance test ([#3774](#3774)).
* Move PR message validation to a separate workflow ([#3777](#3777)).
* Trigger the validate workflow in the merge queue ([#3782](#3782)).
* Update properties for managed SQL table on latest DBR ([#3784](#3784)).
* Add "Owner" tag to test SQL endpoint in acceptance test ([#3785](#3785)).
Changes
Recreation of legacy clusters in our environment caused assignment of `USER_ISOLATION` ("Shared" access mode in the Databricks UI). With `NO_ISOLATION`, the previous legacy access mode can be enforced ("Custom" access mode in the Databricks UI).
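A hedged sketch of what enforcing that legacy mode could look like in Terraform; everything except `data_security_mode` is an illustrative placeholder, not taken from this PR:

```hcl
# Hypothetical legacy cluster pinned to the pre-Unity-Catalog "Custom" access mode.
resource "databricks_cluster" "legacy_custom_access" {
  cluster_name            = "legacy-custom-access" # placeholder name
  spark_version           = "13.3.x-scala2.12"     # placeholder runtime version
  node_type_id            = "Standard_DS3_v2"      # placeholder node type
  num_workers             = 2
  autotermination_minutes = 60

  # Enforce the previous legacy behaviour ("Custom" in the UI) instead of
  # the USER_ISOLATION ("Shared") mode assigned on cluster recreation.
  data_security_mode = "NO_ISOLATION"
}
```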