From 1352541e97b5d249491c286ea15439728e92a440 Mon Sep 17 00:00:00 2001
From: Serge Smertin
Date: Tue, 18 Jul 2023 15:49:07 +0200
Subject: [PATCH] Release v0.2.0

* Add Issue Templates ([#208](https://github.com/databricks/databricks-sdk-py/pull/208)).
* Fixed notebook native auth for jobs ([#209](https://github.com/databricks/databricks-sdk-py/pull/209)).
* Replace `datatime.timedelta()` with `datetime.timedelta()` in codebase ([#207](https://github.com/databricks/databricks-sdk-py/pull/207)).
* Support dod in python sdk ([#212](https://github.com/databricks/databricks-sdk-py/pull/212)).
* [DECO-1115] Add local implementation for `dbutils.widgets` ([#93](https://github.com/databricks/databricks-sdk-py/pull/93)).
* Fix error message, ExportFormat -> ImportFormat ([#220](https://github.com/databricks/databricks-sdk-py/pull/220)).
* Regenerate Python SDK using recent OpenAPI Specification ([#229](https://github.com/databricks/databricks-sdk-py/pull/229)).
* Make workspace client also return runtime dbutils when in dbr ([#210](https://github.com/databricks/databricks-sdk-py/pull/210)).
* Use .ConstantName defining target enum states for waiters ([#230](https://github.com/databricks/databricks-sdk-py/pull/230)).
* Fix enum deserialization ([#234](https://github.com/databricks/databricks-sdk-py/pull/234)).
* Fix enum deserialization, take 2 ([#235](https://github.com/databricks/databricks-sdk-py/pull/235)).
* Added toolchain configuration to `.codegen.json` ([#236](https://github.com/databricks/databricks-sdk-py/pull/236)).
* Make OpenAPI spec location configurable ([#237](https://github.com/databricks/databricks-sdk-py/pull/237)).
* Rearrange imports in `databricks.sdk.runtime` to improve local editor experience ([#219](https://github.com/databricks/databricks-sdk-py/pull/219)).
* Updated account-level and workspace-level user management examples ([#241](https://github.com/databricks/databricks-sdk-py/pull/241)).

API Changes:

* Removed `maintenance()` method for [w.metastores](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/metastores.html) workspace-level service.
* Added `enable_optimization()` method for [w.metastores](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/metastores.html) workspace-level service.
* Added `update()` method for [w.tables](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/tables.html) workspace-level service.
* Added `force` field for `databricks.sdk.service.catalog.DeleteAccountMetastoreRequest`.
* Added `force` field for `databricks.sdk.service.catalog.DeleteAccountStorageCredentialRequest`.
* Removed `databricks.sdk.service.catalog.UpdateAutoMaintenance` dataclass.
* Removed `databricks.sdk.service.catalog.UpdateAutoMaintenanceResponse` dataclass.
* Added `databricks.sdk.service.catalog.UpdatePredictiveOptimization` dataclass.
* Added `databricks.sdk.service.catalog.UpdatePredictiveOptimizationResponse` dataclass.
* Added `databricks.sdk.service.catalog.UpdateTableRequest` dataclass.
* Added `schema` field for `databricks.sdk.service.iam.PartialUpdate`.
* Added `databricks.sdk.service.iam.PatchSchema` dataclass.
* Added `trigger_info` field for `databricks.sdk.service.jobs.BaseRun`.
* Added `health` field for `databricks.sdk.service.jobs.CreateJob`.
* Added `job_source` field for `databricks.sdk.service.jobs.GitSource`.
* Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.JobEmailNotifications`.
* Added `health` field for `databricks.sdk.service.jobs.JobSettings`.
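For orientation, here is a hedged sketch of how the new job-health surface might be used once this release is installed. The cluster ID, notebook path, and email address are placeholders, and this is only one valid task layout, not the canonical example:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

created = w.jobs.create(
    name='sdk-health-rules-demo',
    tasks=[
        jobs.Task(task_key='main',
                  existing_cluster_id='0123-456789-abcdefgh',  # placeholder cluster ID
                  notebook_task=jobs.NotebookTask(notebook_path='/Users/me@example.com/demo'))
    ],
    # New in 0.2.0: a health rule flagging runs that exceed 10 minutes...
    health=jobs.JobsHealthRules(rules=[
        jobs.JobsHealthRule(metric=jobs.JobsHealthMetric.RUN_DURATION_SECONDS,
                            op=jobs.JobsHealthOperator.GREATER_THAN,
                            value=600)
    ]),
    # ...and the matching notification list, also new in this release.
    email_notifications=jobs.JobEmailNotifications(
        on_duration_warning_threshold_exceeded=['ops@example.com']))
print(created.job_id)
```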
* Added `trigger_info` field for `databricks.sdk.service.jobs.Run`.
* Added `run_job_output` field for `databricks.sdk.service.jobs.RunOutput`.
* Added `run_job_task` field for `databricks.sdk.service.jobs.RunTask`.
* Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitRun`.
* Added `health` field for `databricks.sdk.service.jobs.SubmitRun`.
* Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitTask`.
* Added `health` field for `databricks.sdk.service.jobs.SubmitTask`.
* Added `notification_settings` field for `databricks.sdk.service.jobs.SubmitTask`.
* Added `health` field for `databricks.sdk.service.jobs.Task`.
* Added `run_job_task` field for `databricks.sdk.service.jobs.Task`.
* Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.TaskEmailNotifications`.
* Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.WebhookNotifications`.
* Added `databricks.sdk.service.jobs.JobSource` dataclass.
* Added `databricks.sdk.service.jobs.JobSourceDirtyState` dataclass.
* Added `databricks.sdk.service.jobs.JobsHealthMetric` dataclass.
* Added `databricks.sdk.service.jobs.JobsHealthOperator` dataclass.
* Added `databricks.sdk.service.jobs.JobsHealthRule` dataclass.
* Added `databricks.sdk.service.jobs.JobsHealthRules` dataclass.
* Added `databricks.sdk.service.jobs.RunJobOutput` dataclass.
* Added `databricks.sdk.service.jobs.RunJobTask` dataclass.
* Added `databricks.sdk.service.jobs.TriggerInfo` dataclass.
* Added `databricks.sdk.service.jobs.WebhookNotificationsOnDurationWarningThresholdExceededItem` dataclass.
* Removed `whl` field for `databricks.sdk.service.pipelines.PipelineLibrary`.
* Changed `delete_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order.
* Changed `read_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order.
* Changed `etag` field for `databricks.sdk.service.settings.DeletePersonalComputeSettingRequest` to be required.
* Changed `etag` field for `databricks.sdk.service.settings.ReadPersonalComputeSettingRequest` to be required.
* Added [w.clean_rooms](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/clean_rooms.html) workspace-level service.
* Added `databricks.sdk.service.sharing.CentralCleanRoomInfo` dataclass.
* Added `databricks.sdk.service.sharing.CleanRoomAssetInfo` dataclass.
* Added `databricks.sdk.service.sharing.CleanRoomCatalog` dataclass.
* Added `databricks.sdk.service.sharing.CleanRoomCatalogUpdate` dataclass.
* Added `databricks.sdk.service.sharing.CleanRoomCollaboratorInfo` dataclass.
* Added `databricks.sdk.service.sharing.CleanRoomInfo` dataclass.
* Added `databricks.sdk.service.sharing.CleanRoomNotebookInfo` dataclass.
* Added `databricks.sdk.service.sharing.CleanRoomTableInfo` dataclass.
* Added `databricks.sdk.service.sharing.ColumnInfo` dataclass.
* Added `databricks.sdk.service.sharing.ColumnMask` dataclass.
* Added `databricks.sdk.service.sharing.ColumnTypeName` dataclass.
* Added `databricks.sdk.service.sharing.CreateCleanRoom` dataclass.
* Added `databricks.sdk.service.sharing.DeleteCleanRoomRequest` dataclass.
* Added `databricks.sdk.service.sharing.GetCleanRoomRequest` dataclass.
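The new `w.clean_rooms` service is documented in `docs/workspace/clean_rooms.rst` further down in this patch. As a quick orientation, a hedged sketch of the read-only calls; the clean room name below is a placeholder:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Enumerate clean rooms visible to the caller; ordering is not guaranteed.
for room in w.clean_rooms.list():
    print(room.name, room.owner)

# Fetch a single clean room, including the central (remote) details.
room = w.clean_rooms.get(name_arg='my-clean-room', include_remote_details=True)
```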
* Added `databricks.sdk.service.sharing.ListCleanRoomsResponse` dataclass.
* Added `databricks.sdk.service.sharing.UpdateCleanRoom` dataclass.
* Changed `query` field for `databricks.sdk.service.sql.Alert` to `databricks.sdk.service.sql.AlertQuery` dataclass.
* Changed `value` field for `databricks.sdk.service.sql.AlertOptions` to `any` dataclass.
* Removed `is_db_admin` field for `databricks.sdk.service.sql.User`.
* Removed `profile_image_url` field for `databricks.sdk.service.sql.User`.
* Added `databricks.sdk.service.sql.AlertQuery` dataclass.

OpenAPI SHA: 0a1949ba96f71680dad30e06973eaae85b1307bb, Date: 2023-07-18
---
 .codegen/_openapi_sha                  | 1 +
 .gitattributes                         | 2 +-
 CHANGELOG.md                           | 89 +++++++++++
 databricks/sdk/service/catalog.py      | 141 ++++++++++--------
 databricks/sdk/service/jobs.py         | 116 +++++++++++++-
 databricks/sdk/service/ml.py           | 2 +-
 databricks/sdk/service/pipelines.py    | 5 +-
 databricks/sdk/service/sql.py          | 1 +
 databricks/sdk/version.py              | 2 +-
 docs/account/account-billing.rst       | 6 +-
 docs/account/account-catalog.rst       | 6 +-
 docs/account/account-iam.rst           | 6 +-
 docs/account/account-oauth2.rst        | 6 +-
 docs/account/account-provisioning.rst  | 6 +-
 docs/account/account-settings.rst      | 6 +-
 docs/account/groups.rst                | 12 +-
 docs/account/index.rst                 | 6 +-
 docs/account/metastores.rst            | 4 +-
 docs/account/service_principals.rst    | 4 +-
 docs/account/settings.rst              | 40 +++--
 docs/account/storage_credentials.rst   | 4 +-
 docs/account/users.rst                 | 49 ++++--
 docs/workspace/alerts.rst              | 6 +-
 docs/workspace/clean_rooms.rst         | 95 ++++++++++++
 docs/workspace/clusters.rst            | 2 +-
 docs/workspace/command_execution.rst   | 6 +-
 docs/workspace/dashboards.rst          | 3 +-
 docs/workspace/experiments.rst         | 2 +-
 docs/workspace/groups.rst              | 12 +-
 docs/workspace/index.rst               | 6 +-
 docs/workspace/instance_profiles.rst   | 18 +--
 docs/workspace/jobs.rst                | 30 +++-
 docs/workspace/metastores.rst          | 68 ++++-----
 docs/workspace/policy_families.rst     | 51 ++++++-
 docs/workspace/queries.rst             | 18 +--
 docs/workspace/service_principals.rst  | 4 +-
 docs/workspace/serving_endpoints.rst   | 14 +-
 docs/workspace/tables.rst              | 16 ++
 docs/workspace/users.rst               | 49 ++++--
 docs/workspace/workspace-catalog.rst   | 6 +-
 docs/workspace/workspace-compute.rst   | 6 +-
 docs/workspace/workspace-files.rst     | 6 +-
 docs/workspace/workspace-iam.rst       | 6 +-
 docs/workspace/workspace-jobs.rst      | 6 +-
 docs/workspace/workspace-ml.rst        | 6 +-
 docs/workspace/workspace-pipelines.rst | 6 +-
 docs/workspace/workspace-serving.rst   | 6 +-
 docs/workspace/workspace-settings.rst  | 6 +-
 docs/workspace/workspace-sharing.rst   | 7 +-
 docs/workspace/workspace-sql.rst       | 6 +-
 docs/workspace/workspace-workspace.rst | 6 +-
 .../enable_optimization_metastores.py  | 15 ++
 examples/users/patch_account_users.py  | 2 +-
 53 files changed, 750 insertions(+), 254 deletions(-)
 create mode 100644 .codegen/_openapi_sha
 create mode 100644 docs/workspace/clean_rooms.rst
 create mode 100755 examples/metastores/enable_optimization_metastores.py

diff --git a/.codegen/_openapi_sha b/.codegen/_openapi_sha
new file mode 100644
index 00000000..1079283d
--- /dev/null
+++ b/.codegen/_openapi_sha
@@ -0,0 +1 @@
+0a1949ba96f71680dad30e06973eaae85b1307bb
\ No newline at end of file
diff --git a/.gitattributes b/.gitattributes
index 76d12aa4..0dc018d9 100755
--- a/.gitattributes
+++ b/.gitattributes
@@ -134,9 +134,9 @@ examples/log_delivery/list_log_delivery.py linguist-generated=true
 examples/metastores/assign_metastores.py linguist-generated=true
 examples/metastores/create_metastores.py linguist-generated=true
examples/metastores/current_metastores.py linguist-generated=true +examples/metastores/enable_optimization_metastores.py linguist-generated=true examples/metastores/get_metastores.py linguist-generated=true examples/metastores/list_metastores.py linguist-generated=true -examples/metastores/maintenance_metastores.py linguist-generated=true examples/metastores/summary_metastores.py linguist-generated=true examples/metastores/unassign_metastores.py linguist-generated=true examples/metastores/update_metastores.py linguist-generated=true diff --git a/CHANGELOG.md b/CHANGELOG.md index e3665987..6c311d3a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,94 @@ # Version changelog +## 0.2.0 + +* Add Issue Templates ([#208](https://github.com/databricks/databricks-sdk-py/pull/208)). +* Fixed notebook native auth for jobs ([#209](https://github.com/databricks/databricks-sdk-py/pull/209)). +* Replace `datatime.timedelta()` with `datetime.timedelta()` in codebase ([#207](https://github.com/databricks/databricks-sdk-py/pull/207)). +* Support dod in python sdk ([#212](https://github.com/databricks/databricks-sdk-py/pull/212)). +* [DECO-1115] Add local implementation for `dbutils.widgets` ([#93](https://github.com/databricks/databricks-sdk-py/pull/93)). +* Fix error message, ExportFormat -> ImportFormat ([#220](https://github.com/databricks/databricks-sdk-py/pull/220)). +* Regenerate Python SDK using recent OpenAPI Specification ([#229](https://github.com/databricks/databricks-sdk-py/pull/229)). +* Make workspace client also return runtime dbutils when in dbr ([#210](https://github.com/databricks/databricks-sdk-py/pull/210)). +* Use .ConstantName defining target enum states for waiters ([#230](https://github.com/databricks/databricks-sdk-py/pull/230)). +* Fix enum deserialization ([#234](https://github.com/databricks/databricks-sdk-py/pull/234)). +* Fix enum deserialization, take 2 ([#235](https://github.com/databricks/databricks-sdk-py/pull/235)). +* Added toolchain configuration to `.codegen.json` ([#236](https://github.com/databricks/databricks-sdk-py/pull/236)). +* Make OpenAPI spec location configurable ([#237](https://github.com/databricks/databricks-sdk-py/pull/237)). +* Rearrange imports in `databricks.sdk.runtime` to improve local editor experience ([#219](https://github.com/databricks/databricks-sdk-py/pull/219)). +* Updated account-level and workspace-level user management examples ([#241](https://github.com/databricks/databricks-sdk-py/pull/241)). + +API Changes: + + * Removed `maintenance()` method for [w.metastores](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/metastores.html) workspace-level service. + * Added `enable_optimization()` method for [w.metastores](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/metastores.html) workspace-level service. + * Added `update()` method for [w.tables](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/tables.html) workspace-level service. + * Added `force` field for `databricks.sdk.service.catalog.DeleteAccountMetastoreRequest`. + * Added `force` field for `databricks.sdk.service.catalog.DeleteAccountStorageCredentialRequest`. + * Removed `databricks.sdk.service.catalog.UpdateAutoMaintenance` dataclass. + * Removed `databricks.sdk.service.catalog.UpdateAutoMaintenanceResponse` dataclass. + * Added `databricks.sdk.service.catalog.UpdatePredictiveOptimization` dataclass. + * Added `databricks.sdk.service.catalog.UpdatePredictiveOptimizationResponse` dataclass. 
+ * Added `databricks.sdk.service.catalog.UpdateTableRequest` dataclass. + * Added `schema` field for `databricks.sdk.service.iam.PartialUpdate`. + * Added `databricks.sdk.service.iam.PatchSchema` dataclass. + * Added `trigger_info` field for `databricks.sdk.service.jobs.BaseRun`. + * Added `health` field for `databricks.sdk.service.jobs.CreateJob`. + * Added `job_source` field for `databricks.sdk.service.jobs.GitSource`. + * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.JobEmailNotifications`. + * Added `health` field for `databricks.sdk.service.jobs.JobSettings`. + * Added `trigger_info` field for `databricks.sdk.service.jobs.Run`. + * Added `run_job_output` field for `databricks.sdk.service.jobs.RunOutput`. + * Added `run_job_task` field for `databricks.sdk.service.jobs.RunTask`. + * Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitRun`. + * Added `health` field for `databricks.sdk.service.jobs.SubmitRun`. + * Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitTask`. + * Added `health` field for `databricks.sdk.service.jobs.SubmitTask`. + * Added `notification_settings` field for `databricks.sdk.service.jobs.SubmitTask`. + * Added `health` field for `databricks.sdk.service.jobs.Task`. + * Added `run_job_task` field for `databricks.sdk.service.jobs.Task`. + * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.TaskEmailNotifications`. + * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.WebhookNotifications`. + * Added `databricks.sdk.service.jobs.JobSource` dataclass. + * Added `databricks.sdk.service.jobs.JobSourceDirtyState` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthMetric` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthOperator` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthRule` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthRules` dataclass. + * Added `databricks.sdk.service.jobs.RunJobOutput` dataclass. + * Added `databricks.sdk.service.jobs.RunJobTask` dataclass. + * Added `databricks.sdk.service.jobs.TriggerInfo` dataclass. + * Added `databricks.sdk.service.jobs.WebhookNotificationsOnDurationWarningThresholdExceededItem` dataclass. + * Removed `whl` field for `databricks.sdk.service.pipelines.PipelineLibrary`. + * Changed `delete_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order. + * Changed `read_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order. + * Changed `etag` field for `databricks.sdk.service.settings.DeletePersonalComputeSettingRequest` to be required. + * Changed `etag` field for `databricks.sdk.service.settings.ReadPersonalComputeSettingRequest` to be required. + * Added [w.clean_rooms](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/clean_rooms.html) workspace-level service. + * Added `databricks.sdk.service.sharing.CentralCleanRoomInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomAssetInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomCatalog` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomCatalogUpdate` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomCollaboratorInfo` dataclass. 
+ * Added `databricks.sdk.service.sharing.CleanRoomInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomNotebookInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomTableInfo` dataclass. + * Added `databricks.sdk.service.sharing.ColumnInfo` dataclass. + * Added `databricks.sdk.service.sharing.ColumnMask` dataclass. + * Added `databricks.sdk.service.sharing.ColumnTypeName` dataclass. + * Added `databricks.sdk.service.sharing.CreateCleanRoom` dataclass. + * Added `databricks.sdk.service.sharing.DeleteCleanRoomRequest` dataclass. + * Added `databricks.sdk.service.sharing.GetCleanRoomRequest` dataclass. + * Added `databricks.sdk.service.sharing.ListCleanRoomsResponse` dataclass. + * Added `databricks.sdk.service.sharing.UpdateCleanRoom` dataclass. + * Changed `query` field for `databricks.sdk.service.sql.Alert` to `databricks.sdk.service.sql.AlertQuery` dataclass. + * Changed `value` field for `databricks.sdk.service.sql.AlertOptions` to `any` dataclass. + * Removed `is_db_admin` field for `databricks.sdk.service.sql.User`. + * Removed `profile_image_url` field for `databricks.sdk.service.sql.User`. + * Added `databricks.sdk.service.sql.AlertQuery` dataclass. + +OpenAPI SHA: 0a1949ba96f71680dad30e06973eaae85b1307bb, Date: 2023-07-18 + ## 0.1.12 * Beta release ([#198](https://github.com/databricks/databricks-sdk-py/pull/198)). diff --git a/databricks/sdk/service/catalog.py b/databricks/sdk/service/catalog.py index 9183b2cd..ae6da3a0 100755 --- a/databricks/sdk/service/catalog.py +++ b/databricks/sdk/service/catalog.py @@ -877,6 +877,7 @@ class DeleteAccountMetastoreRequest: """Delete a metastore""" metastore_id: str + force: Optional[bool] = None @dataclass @@ -885,6 +886,7 @@ class DeleteAccountStorageCredentialRequest: metastore_id: str name: str + force: Optional[bool] = None @dataclass @@ -2414,42 +2416,6 @@ class UnassignRequest: metastore_id: str -@dataclass -class UpdateAutoMaintenance: - metastore_id: str - enable: bool - - def as_dict(self) -> dict: - body = {} - if self.enable is not None: body['enable'] = self.enable - if self.metastore_id is not None: body['metastore_id'] = self.metastore_id - return body - - @classmethod - def from_dict(cls, d: Dict[str, any]) -> 'UpdateAutoMaintenance': - return cls(enable=d.get('enable', None), metastore_id=d.get('metastore_id', None)) - - -@dataclass -class UpdateAutoMaintenanceResponse: - state: Optional[bool] = None - user_id: Optional[int] = None - username: Optional[str] = None - - def as_dict(self) -> dict: - body = {} - if self.state is not None: body['state'] = self.state - if self.user_id is not None: body['user_id'] = self.user_id - if self.username is not None: body['username'] = self.username - return body - - @classmethod - def from_dict(cls, d: Dict[str, any]) -> 'UpdateAutoMaintenanceResponse': - return cls(state=d.get('state', None), - user_id=d.get('user_id', None), - username=d.get('username', None)) - - @dataclass class UpdateCatalog: comment: Optional[str] = None @@ -2632,6 +2598,42 @@ def from_dict(cls, d: Dict[str, any]) -> 'UpdatePermissions': securable_type=_enum(d, 'securable_type', SecurableType)) +@dataclass +class UpdatePredictiveOptimization: + metastore_id: str + enable: bool + + def as_dict(self) -> dict: + body = {} + if self.enable is not None: body['enable'] = self.enable + if self.metastore_id is not None: body['metastore_id'] = self.metastore_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'UpdatePredictiveOptimization': + return 
cls(enable=d.get('enable', None), metastore_id=d.get('metastore_id', None)) + + +@dataclass +class UpdatePredictiveOptimizationResponse: + state: Optional[bool] = None + user_id: Optional[int] = None + username: Optional[str] = None + + def as_dict(self) -> dict: + body = {} + if self.state is not None: body['state'] = self.state + if self.user_id is not None: body['user_id'] = self.user_id + if self.username is not None: body['username'] = self.username + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'UpdatePredictiveOptimizationResponse': + return cls(state=d.get('state', None), + user_id=d.get('user_id', None), + username=d.get('username', None)) + + @dataclass class UpdateSchema: comment: Optional[str] = None @@ -3069,7 +3071,7 @@ def create(self, *, metastore_info: Optional[CreateMetastore] = None, **kwargs) json = self._api.do('POST', f'/api/2.0/accounts/{self._api.account_id}/metastores', body=body) return AccountsMetastoreInfo.from_dict(json) - def delete(self, metastore_id: str, **kwargs): + def delete(self, metastore_id: str, *, force: Optional[bool] = None, **kwargs): """Delete a metastore. Deletes a Unity Catalog metastore for an account, both specified by ID. Please add a header @@ -3077,14 +3079,21 @@ def delete(self, metastore_id: str, **kwargs): :param metastore_id: str Unity Catalog metastore ID + :param force: bool (optional) + Force deletion even if the metastore is not empty. Default is false. """ request = kwargs.get('request', None) if not request: # request is not given through keyed args - request = DeleteAccountMetastoreRequest(metastore_id=metastore_id) + request = DeleteAccountMetastoreRequest(force=force, metastore_id=metastore_id) - self._api.do('DELETE', f'/api/2.0/accounts/{self._api.account_id}/metastores/{request.metastore_id}') + query = {} + if force: query['force'] = request.force + + self._api.do('DELETE', + f'/api/2.0/accounts/{self._api.account_id}/metastores/{request.metastore_id}', + query=query) def get(self, metastore_id: str, **kwargs) -> AccountsMetastoreInfo: """Get a metastore. @@ -3183,7 +3192,7 @@ def create(self, body=body) return StorageCredentialInfo.from_dict(json) - def delete(self, metastore_id: str, name: str, **kwargs): + def delete(self, metastore_id: str, name: str, *, force: Optional[bool] = None, **kwargs): """Delete a storage credential. Deletes a storage credential from the metastore. The caller must be an owner of the storage @@ -3193,17 +3202,22 @@ def delete(self, metastore_id: str, name: str, **kwargs): Unity Catalog metastore ID :param name: str Name of the storage credential. + :param force: bool (optional) + Force deletion even if the Storage Credential is not empty. Default is false. """ request = kwargs.get('request', None) if not request: # request is not given through keyed args - request = DeleteAccountStorageCredentialRequest(metastore_id=metastore_id, name=name) + request = DeleteAccountStorageCredentialRequest(force=force, metastore_id=metastore_id, name=name) + + query = {} + if force: query['force'] = request.force self._api.do( 'DELETE', - f'/api/2.0/accounts/{self._api.account_id}/metastores/{request.metastore_id}/storage-credentials/' - ) + f'/api/2.0/accounts/{self._api.account_id}/metastores/{request.metastore_id}/storage-credentials/', + query=query) def get(self, metastore_id: str, name: str, **kwargs) -> StorageCredentialInfo: """Gets the named storage credential. 
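The two account-level `delete` hunks above add an optional `force` flag that is forwarded as a query parameter. A minimal sketch of invoking them; the metastore ID and credential name are placeholders:

```python
from databricks.sdk import AccountClient

a = AccountClient()

# New in 0.2.0: delete an account metastore even if it is not empty
# (the default remains false).
a.metastores.delete(metastore_id='<metastore-id>', force=True)

# The same flag now exists for account-level storage credentials.
a.storage_credentials.delete(metastore_id='<metastore-id>',
                             name='<credential-name>',
                             force=True)
```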
@@ -4148,6 +4162,27 @@ def delete(self, id: str, *, force: Optional[bool] = None, **kwargs): self._api.do('DELETE', f'/api/2.1/unity-catalog/metastores/{request.id}', query=query) + def enable_optimization(self, metastore_id: str, enable: bool, + **kwargs) -> UpdatePredictiveOptimizationResponse: + """Toggle predictive optimization on the metastore. + + Enables or disables predictive optimization on the metastore. + + :param metastore_id: str + Unique identifier of metastore. + :param enable: bool + Whether to enable predictive optimization on the metastore. + + :returns: :class:`UpdatePredictiveOptimizationResponse` + """ + request = kwargs.get('request', None) + if not request: # request is not given through keyed args + request = UpdatePredictiveOptimization(enable=enable, metastore_id=metastore_id) + body = request.as_dict() + + json = self._api.do('PATCH', '/api/2.0/predictive-optimization/service', body=body) + return UpdatePredictiveOptimizationResponse.from_dict(json) + def get(self, id: str, **kwargs) -> MetastoreInfo: """Get a metastore. @@ -4178,26 +4213,6 @@ def list(self) -> Iterator[MetastoreInfo]: json = self._api.do('GET', '/api/2.1/unity-catalog/metastores') return [MetastoreInfo.from_dict(v) for v in json.get('metastores', [])] - def maintenance(self, metastore_id: str, enable: bool, **kwargs) -> UpdateAutoMaintenanceResponse: - """Enables or disables auto maintenance on the metastore. - - Enables or disables auto maintenance on the metastore. - - :param metastore_id: str - Unique identifier of metastore. - :param enable: bool - Whether to enable auto maintenance on the metastore. - - :returns: :class:`UpdateAutoMaintenanceResponse` - """ - request = kwargs.get('request', None) - if not request: # request is not given through keyed args - request = UpdateAutoMaintenance(enable=enable, metastore_id=metastore_id) - body = request.as_dict() - - json = self._api.do('PATCH', '/api/2.0/auto-maintenance/service', body=body) - return UpdateAutoMaintenanceResponse.from_dict(json) - def summary(self) -> GetMetastoreSummaryResponse: """Get a metastore summary. 
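`enable_optimization()` is the replacement for the removed `maintenance()` method shown in the following hunk. A sketch of the new call, modeled loosely on the generated `examples/metastores/enable_optimization_metastores.py` (not reproduced verbatim) and assuming the summary response exposes `metastore_id`:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Resolve the current metastore, then toggle predictive optimization on.
summary = w.metastores.summary()
updated = w.metastores.enable_optimization(metastore_id=summary.metastore_id,
                                           enable=True)
print(updated.state)
```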
diff --git a/databricks/sdk/service/jobs.py b/databricks/sdk/service/jobs.py index eb4a9f01..deaff968 100755 --- a/databricks/sdk/service/jobs.py +++ b/databricks/sdk/service/jobs.py @@ -257,6 +257,7 @@ class CreateJob: email_notifications: Optional['JobEmailNotifications'] = None format: Optional['Format'] = None git_source: Optional['GitSource'] = None + health: Optional['JobsHealthRules'] = None job_clusters: Optional['List[JobCluster]'] = None max_concurrent_runs: Optional[int] = None name: Optional[str] = None @@ -279,6 +280,7 @@ def as_dict(self) -> dict: if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.format is not None: body['format'] = self.format.value if self.git_source: body['git_source'] = self.git_source.as_dict() + if self.health: body['health'] = self.health.as_dict() if self.job_clusters: body['job_clusters'] = [v.as_dict() for v in self.job_clusters] if self.max_concurrent_runs is not None: body['max_concurrent_runs'] = self.max_concurrent_runs if self.name is not None: body['name'] = self.name @@ -301,6 +303,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'CreateJob': email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), format=_enum(d, 'format', Format), git_source=_from_dict(d, 'git_source', GitSource), + health=_from_dict(d, 'health', JobsHealthRules), job_clusters=_repeated(d, 'job_clusters', JobCluster), max_concurrent_runs=d.get('max_concurrent_runs', None), name=d.get('name', None), @@ -625,6 +628,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'JobCompute': @dataclass class JobEmailNotifications: no_alert_for_skipped_runs: Optional[bool] = None + on_duration_warning_threshold_exceeded: Optional['List[str]'] = None on_failure: Optional['List[str]'] = None on_start: Optional['List[str]'] = None on_success: Optional['List[str]'] = None @@ -633,6 +637,10 @@ def as_dict(self) -> dict: body = {} if self.no_alert_for_skipped_runs is not None: body['no_alert_for_skipped_runs'] = self.no_alert_for_skipped_runs + if self.on_duration_warning_threshold_exceeded: + body['on_duration_warning_threshold_exceeded'] = [ + v for v in self.on_duration_warning_threshold_exceeded + ] if self.on_failure: body['on_failure'] = [v for v in self.on_failure] if self.on_start: body['on_start'] = [v for v in self.on_start] if self.on_success: body['on_success'] = [v for v in self.on_success] @@ -641,6 +649,8 @@ def as_dict(self) -> dict: @classmethod def from_dict(cls, d: Dict[str, any]) -> 'JobEmailNotifications': return cls(no_alert_for_skipped_runs=d.get('no_alert_for_skipped_runs', None), + on_duration_warning_threshold_exceeded=d.get('on_duration_warning_threshold_exceeded', + None), on_failure=d.get('on_failure', None), on_start=d.get('on_start', None), on_success=d.get('on_success', None)) @@ -731,6 +741,7 @@ class JobSettings: email_notifications: Optional['JobEmailNotifications'] = None format: Optional['Format'] = None git_source: Optional['GitSource'] = None + health: Optional['JobsHealthRules'] = None job_clusters: Optional['List[JobCluster]'] = None max_concurrent_runs: Optional[int] = None name: Optional[str] = None @@ -751,6 +762,7 @@ def as_dict(self) -> dict: if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.format is not None: body['format'] = self.format.value if self.git_source: body['git_source'] = self.git_source.as_dict() + if self.health: body['health'] = self.health.as_dict() if self.job_clusters: body['job_clusters'] = [v.as_dict() for v in 
self.job_clusters] if self.max_concurrent_runs is not None: body['max_concurrent_runs'] = self.max_concurrent_runs if self.name is not None: body['name'] = self.name @@ -772,6 +784,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'JobSettings': email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), format=_enum(d, 'format', Format), git_source=_from_dict(d, 'git_source', GitSource), + health=_from_dict(d, 'health', JobsHealthRules), job_clusters=_repeated(d, 'job_clusters', JobCluster), max_concurrent_runs=d.get('max_concurrent_runs', None), name=d.get('name', None), @@ -816,6 +829,54 @@ class JobSourceDirtyState(Enum): NOT_SYNCED = 'NOT_SYNCED' +class JobsHealthMetric(Enum): + """Specifies the health metric that is being evaluated for a particular health rule.""" + + RUN_DURATION_SECONDS = 'RUN_DURATION_SECONDS' + + +class JobsHealthOperator(Enum): + """Specifies the operator used to compare the health metric value with the specified threshold.""" + + GREATER_THAN = 'GREATER_THAN' + + +@dataclass +class JobsHealthRule: + metric: Optional['JobsHealthMetric'] = None + op: Optional['JobsHealthOperator'] = None + value: Optional[int] = None + + def as_dict(self) -> dict: + body = {} + if self.metric is not None: body['metric'] = self.metric.value + if self.op is not None: body['op'] = self.op.value + if self.value is not None: body['value'] = self.value + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'JobsHealthRule': + return cls(metric=_enum(d, 'metric', JobsHealthMetric), + op=_enum(d, 'op', JobsHealthOperator), + value=d.get('value', None)) + + +@dataclass +class JobsHealthRules: + """An optional set of health rules that can be defined for this job.""" + + rules: Optional['List[JobsHealthRule]'] = None + + def as_dict(self) -> dict: + body = {} + if self.rules: body['rules'] = [v.as_dict() for v in self.rules] + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'JobsHealthRules': + return cls(rules=_repeated(d, 'rules', JobsHealthRule)) + + @dataclass class ListJobsRequest: """List jobs""" @@ -2068,6 +2129,7 @@ class SubmitRun: access_control_list: Optional['List[iam.AccessControlRequest]'] = None email_notifications: Optional['JobEmailNotifications'] = None git_source: Optional['GitSource'] = None + health: Optional['JobsHealthRules'] = None idempotency_token: Optional[str] = None notification_settings: Optional['JobNotificationSettings'] = None run_name: Optional[str] = None @@ -2081,6 +2143,7 @@ def as_dict(self) -> dict: body['access_control_list'] = [v.as_dict() for v in self.access_control_list] if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.git_source: body['git_source'] = self.git_source.as_dict() + if self.health: body['health'] = self.health.as_dict() if self.idempotency_token is not None: body['idempotency_token'] = self.idempotency_token if self.notification_settings: body['notification_settings'] = self.notification_settings.as_dict() if self.run_name is not None: body['run_name'] = self.run_name @@ -2094,6 +2157,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'SubmitRun': return cls(access_control_list=_repeated(d, 'access_control_list', iam.AccessControlRequest), email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), git_source=_from_dict(d, 'git_source', GitSource), + health=_from_dict(d, 'health', JobsHealthRules), idempotency_token=d.get('idempotency_token', None), notification_settings=_from_dict(d, 
'notification_settings', JobNotificationSettings), run_name=d.get('run_name', None), @@ -2123,6 +2187,7 @@ class SubmitTask: depends_on: Optional['List[TaskDependency]'] = None email_notifications: Optional['JobEmailNotifications'] = None existing_cluster_id: Optional[str] = None + health: Optional['JobsHealthRules'] = None libraries: Optional['List[compute.Library]'] = None new_cluster: Optional['compute.ClusterSpec'] = None notebook_task: Optional['NotebookTask'] = None @@ -2141,6 +2206,7 @@ def as_dict(self) -> dict: if self.depends_on: body['depends_on'] = [v.as_dict() for v in self.depends_on] if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.existing_cluster_id is not None: body['existing_cluster_id'] = self.existing_cluster_id + if self.health: body['health'] = self.health.as_dict() if self.libraries: body['libraries'] = [v.as_dict() for v in self.libraries] if self.new_cluster: body['new_cluster'] = self.new_cluster.as_dict() if self.notebook_task: body['notebook_task'] = self.notebook_task.as_dict() @@ -2161,6 +2227,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'SubmitTask': depends_on=_repeated(d, 'depends_on', TaskDependency), email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), existing_cluster_id=d.get('existing_cluster_id', None), + health=_from_dict(d, 'health', JobsHealthRules), libraries=_repeated(d, 'libraries', compute.Library), new_cluster=_from_dict(d, 'new_cluster', compute.ClusterSpec), notebook_task=_from_dict(d, 'notebook_task', NotebookTask), @@ -2185,6 +2252,7 @@ class Task: description: Optional[str] = None email_notifications: Optional['TaskEmailNotifications'] = None existing_cluster_id: Optional[str] = None + health: Optional['JobsHealthRules'] = None job_cluster_key: Optional[str] = None libraries: Optional['List[compute.Library]'] = None max_retries: Optional[int] = None @@ -2212,6 +2280,7 @@ def as_dict(self) -> dict: if self.description is not None: body['description'] = self.description if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.existing_cluster_id is not None: body['existing_cluster_id'] = self.existing_cluster_id + if self.health: body['health'] = self.health.as_dict() if self.job_cluster_key is not None: body['job_cluster_key'] = self.job_cluster_key if self.libraries: body['libraries'] = [v.as_dict() for v in self.libraries] if self.max_retries is not None: body['max_retries'] = self.max_retries @@ -2242,6 +2311,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'Task': description=d.get('description', None), email_notifications=_from_dict(d, 'email_notifications', TaskEmailNotifications), existing_cluster_id=d.get('existing_cluster_id', None), + health=_from_dict(d, 'health', JobsHealthRules), job_cluster_key=d.get('job_cluster_key', None), libraries=_repeated(d, 'libraries', compute.Library), max_retries=d.get('max_retries', None), @@ -2280,12 +2350,17 @@ def from_dict(cls, d: Dict[str, any]) -> 'TaskDependency': @dataclass class TaskEmailNotifications: + on_duration_warning_threshold_exceeded: Optional['List[str]'] = None on_failure: Optional['List[str]'] = None on_start: Optional['List[str]'] = None on_success: Optional['List[str]'] = None def as_dict(self) -> dict: body = {} + if self.on_duration_warning_threshold_exceeded: + body['on_duration_warning_threshold_exceeded'] = [ + v for v in self.on_duration_warning_threshold_exceeded + ] if self.on_failure: body['on_failure'] = [v for v in self.on_failure] if 
self.on_start: body['on_start'] = [v for v in self.on_start] if self.on_success: body['on_success'] = [v for v in self.on_success] @@ -2293,7 +2368,9 @@ def as_dict(self) -> dict: @classmethod def from_dict(cls, d: Dict[str, any]) -> 'TaskEmailNotifications': - return cls(on_failure=d.get('on_failure', None), + return cls(on_duration_warning_threshold_exceeded=d.get('on_duration_warning_threshold_exceeded', + None), + on_failure=d.get('on_failure', None), on_start=d.get('on_start', None), on_success=d.get('on_success', None)) @@ -2470,12 +2547,18 @@ def from_dict(cls, d: Dict[str, any]) -> 'Webhook': @dataclass class WebhookNotifications: + on_duration_warning_threshold_exceeded: Optional[ + 'List[WebhookNotificationsOnDurationWarningThresholdExceededItem]'] = None on_failure: Optional['List[Webhook]'] = None on_start: Optional['List[Webhook]'] = None on_success: Optional['List[Webhook]'] = None def as_dict(self) -> dict: body = {} + if self.on_duration_warning_threshold_exceeded: + body['on_duration_warning_threshold_exceeded'] = [ + v.as_dict() for v in self.on_duration_warning_threshold_exceeded + ] if self.on_failure: body['on_failure'] = [v.as_dict() for v in self.on_failure] if self.on_start: body['on_start'] = [v.as_dict() for v in self.on_start] if self.on_success: body['on_success'] = [v.as_dict() for v in self.on_success] @@ -2483,11 +2566,28 @@ def as_dict(self) -> dict: @classmethod def from_dict(cls, d: Dict[str, any]) -> 'WebhookNotifications': - return cls(on_failure=_repeated(d, 'on_failure', Webhook), + return cls(on_duration_warning_threshold_exceeded=_repeated( + d, 'on_duration_warning_threshold_exceeded', + WebhookNotificationsOnDurationWarningThresholdExceededItem), + on_failure=_repeated(d, 'on_failure', Webhook), on_start=_repeated(d, 'on_start', Webhook), on_success=_repeated(d, 'on_success', Webhook)) +@dataclass +class WebhookNotificationsOnDurationWarningThresholdExceededItem: + id: Optional[str] = None + + def as_dict(self) -> dict: + body = {} + if self.id is not None: body['id'] = self.id + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'WebhookNotificationsOnDurationWarningThresholdExceededItem': + return cls(id=d.get('id', None)) + + class JobsAPI: """The Jobs API allows you to create, edit, and delete jobs. @@ -2588,6 +2688,7 @@ def create(self, email_notifications: Optional[JobEmailNotifications] = None, format: Optional[Format] = None, git_source: Optional[GitSource] = None, + health: Optional[JobsHealthRules] = None, job_clusters: Optional[List[JobCluster]] = None, max_concurrent_runs: Optional[int] = None, name: Optional[str] = None, @@ -2621,6 +2722,8 @@ def create(self, :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param job_clusters: List[:class:`JobCluster`] (optional) A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. @@ -2639,7 +2742,7 @@ def create(self, This value cannot exceed 1000\. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run. :param name: str (optional) - An optional name for the job. + An optional name for the job. The maximum length is 4096 bytes in UTF-8 encoding. 
:param notification_settings: :class:`JobNotificationSettings` (optional) Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this job. @@ -2681,6 +2784,7 @@ def create(self, email_notifications=email_notifications, format=format, git_source=git_source, + health=health, job_clusters=job_clusters, max_concurrent_runs=max_concurrent_runs, name=name, @@ -3299,6 +3403,7 @@ def submit(self, access_control_list: Optional[List[iam.AccessControlRequest]] = None, email_notifications: Optional[JobEmailNotifications] = None, git_source: Optional[GitSource] = None, + health: Optional[JobsHealthRules] = None, idempotency_token: Optional[str] = None, notification_settings: Optional[JobNotificationSettings] = None, run_name: Optional[str] = None, @@ -3320,6 +3425,8 @@ def submit(self, :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param idempotency_token: str (optional) An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the @@ -3354,6 +3461,7 @@ def submit(self, request = SubmitRun(access_control_list=access_control_list, email_notifications=email_notifications, git_source=git_source, + health=health, idempotency_token=idempotency_token, notification_settings=notification_settings, run_name=run_name, @@ -3372,6 +3480,7 @@ def submit_and_wait( access_control_list: Optional[List[iam.AccessControlRequest]] = None, email_notifications: Optional[JobEmailNotifications] = None, git_source: Optional[GitSource] = None, + health: Optional[JobsHealthRules] = None, idempotency_token: Optional[str] = None, notification_settings: Optional[JobNotificationSettings] = None, run_name: Optional[str] = None, @@ -3382,6 +3491,7 @@ def submit_and_wait( return self.submit(access_control_list=access_control_list, email_notifications=email_notifications, git_source=git_source, + health=health, idempotency_token=idempotency_token, notification_settings=notification_settings, run_name=run_name, diff --git a/databricks/sdk/service/ml.py b/databricks/sdk/service/ml.py index 45b8b4c2..1df63360 100755 --- a/databricks/sdk/service/ml.py +++ b/databricks/sdk/service/ml.py @@ -2592,7 +2592,7 @@ def log_batch(self, The following limits also apply to metric, param, and tag keys and values: - * Metric keyes, param keys, and tag keys can be up to 250 characters in length * Parameter and tag + * Metric keys, param keys, and tag keys can be up to 250 characters in length * Parameter and tag values can be up to 250 characters in length :param metrics: List[:class:`Metric`] (optional) diff --git a/databricks/sdk/service/pipelines.py b/databricks/sdk/service/pipelines.py index e232ddf3..465ec36b 100755 --- a/databricks/sdk/service/pipelines.py +++ b/databricks/sdk/service/pipelines.py @@ -624,7 +624,6 @@ class PipelineLibrary: jar: Optional[str] = None maven: Optional['compute.MavenLibrary'] = None notebook: Optional['NotebookLibrary'] = None - whl: Optional[str] = None def as_dict(self) -> dict: body = {} @@ -632,7 +631,6 @@ def as_dict(self) -> dict: if self.jar is not None: body['jar'] = self.jar if self.maven: body['maven'] = self.maven.as_dict() if self.notebook: body['notebook'] = 
self.notebook.as_dict() - if self.whl is not None: body['whl'] = self.whl return body @classmethod @@ -640,8 +638,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'PipelineLibrary': return cls(file=_from_dict(d, 'file', FileLibrary), jar=d.get('jar', None), maven=_from_dict(d, 'maven', compute.MavenLibrary), - notebook=_from_dict(d, 'notebook', NotebookLibrary), - whl=d.get('whl', None)) + notebook=_from_dict(d, 'notebook', NotebookLibrary)) @dataclass diff --git a/databricks/sdk/service/sql.py b/databricks/sdk/service/sql.py index 261a0822..464c9ad2 100755 --- a/databricks/sdk/service/sql.py +++ b/databricks/sdk/service/sql.py @@ -212,6 +212,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'ChannelInfo': class ChannelName(Enum): + """Name of the channel""" CHANNEL_NAME_CURRENT = 'CHANNEL_NAME_CURRENT' CHANNEL_NAME_CUSTOM = 'CHANNEL_NAME_CUSTOM' diff --git a/databricks/sdk/version.py b/databricks/sdk/version.py index e6d0c4f4..7fd229a3 100644 --- a/databricks/sdk/version.py +++ b/databricks/sdk/version.py @@ -1 +1 @@ -__version__ = '0.1.12' +__version__ = '0.2.0' diff --git a/docs/account/account-billing.rst b/docs/account/account-billing.rst index 6b369368..ea434aa2 100644 --- a/docs/account/account-billing.rst +++ b/docs/account/account-billing.rst @@ -1,12 +1,12 @@ Billing ======= - + Configure different aspects of Databricks billing and usage. - + .. toctree:: :maxdepth: 1 - + billable_usage budgets log_delivery \ No newline at end of file diff --git a/docs/account/account-catalog.rst b/docs/account/account-catalog.rst index 98ddf2f7..d235579a 100644 --- a/docs/account/account-catalog.rst +++ b/docs/account/account-catalog.rst @@ -1,12 +1,12 @@ Unity Catalog ============= - + Configure data governance with Unity Catalog for metastores, catalogs, schemas, tables, external locations, and storage credentials - + .. toctree:: :maxdepth: 1 - + metastore_assignments metastores storage_credentials \ No newline at end of file diff --git a/docs/account/account-iam.rst b/docs/account/account-iam.rst index 3cf39e0b..1c74cd15 100644 --- a/docs/account/account-iam.rst +++ b/docs/account/account-iam.rst @@ -1,12 +1,12 @@ Identity and Access Management ============================== - + Manage users, service principals, groups and their permissions in Accounts and Workspaces - + .. toctree:: :maxdepth: 1 - + access_control groups service_principals diff --git a/docs/account/account-oauth2.rst b/docs/account/account-oauth2.rst index f8fc02ff..f504ce4c 100644 --- a/docs/account/account-oauth2.rst +++ b/docs/account/account-oauth2.rst @@ -1,12 +1,12 @@ OAuth ===== - + Configure OAuth 2.0 application registrations for Databricks - + .. toctree:: :maxdepth: 1 - + custom_app_integration o_auth_enrollment published_app_integration diff --git a/docs/account/account-provisioning.rst b/docs/account/account-provisioning.rst index a9c3f4aa..5107ab3a 100644 --- a/docs/account/account-provisioning.rst +++ b/docs/account/account-provisioning.rst @@ -1,12 +1,12 @@ Provisioning ============ - + Resource management for secure Databricks Workspace deployment, cross-account IAM roles, storage, encryption, networking and private access. - + .. toctree:: :maxdepth: 1 - + credentials encryption_keys networks diff --git a/docs/account/account-settings.rst b/docs/account/account-settings.rst index e96f7c83..1feecca1 100644 --- a/docs/account/account-settings.rst +++ b/docs/account/account-settings.rst @@ -1,11 +1,11 @@ Settings ======== - + Manage security settings for Accounts and Workspaces - + .. 
toctree:: :maxdepth: 1 - + ip_access_lists settings \ No newline at end of file diff --git a/docs/account/groups.rst b/docs/account/groups.rst index 4e16cc26..4595ed45 100644 --- a/docs/account/groups.rst +++ b/docs/account/groups.rst @@ -9,7 +9,7 @@ Account Groups instead of to users individually. All Databricks account identities can be assigned as members of groups, and members inherit permissions that are assigned to their group. - .. py:method:: create( [, display_name, entitlements, external_id, groups, id, members, roles]) + .. py:method:: create( [, display_name, entitlements, external_id, groups, id, members, meta, roles]) Usage: @@ -38,6 +38,8 @@ Account Groups :param id: str (optional) Databricks group ID :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) :returns: :class:`Group` @@ -127,7 +129,7 @@ Account Groups :returns: Iterator over :class:`Group` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update group details. @@ -136,11 +138,13 @@ Account Groups :param id: str Unique ID for a group in the Databricks account. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. - .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, roles]) + .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, meta, roles]) Replace a group. @@ -154,6 +158,8 @@ Account Groups :param external_id: str (optional) :param groups: List[:class:`ComplexValue`] (optional) :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) diff --git a/docs/account/index.rst b/docs/account/index.rst index 82c2f6f6..8993d212 100644 --- a/docs/account/index.rst +++ b/docs/account/index.rst @@ -1,12 +1,12 @@ Account APIs ============ - + These APIs are available from AccountClient - + .. toctree:: :maxdepth: 1 - + account-iam account-catalog account-settings diff --git a/docs/account/metastores.rst b/docs/account/metastores.rst index 051be7b5..0b20dc9d 100644 --- a/docs/account/metastores.rst +++ b/docs/account/metastores.rst @@ -35,7 +35,7 @@ Account Metastores :returns: :class:`AccountsMetastoreInfo` - .. py:method:: delete(metastore_id) + .. py:method:: delete(metastore_id [, force]) Delete a metastore. @@ -44,6 +44,8 @@ Account Metastores :param metastore_id: str Unity Catalog metastore ID + :param force: bool (optional) + Force deletion even if the metastore is not empty. Default is false. diff --git a/docs/account/service_principals.rst b/docs/account/service_principals.rst index 667b7eda..497ec8db 100644 --- a/docs/account/service_principals.rst +++ b/docs/account/service_principals.rst @@ -130,7 +130,7 @@ Account Service Principals :returns: Iterator over :class:`ServicePrincipal` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update service principal details. @@ -139,6 +139,8 @@ Account Service Principals :param id: str Unique ID for a service principal in the Databricks account. 
:param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/account/settings.rst b/docs/account/settings.rst index 3aede411..08ec2dae 100644 --- a/docs/account/settings.rst +++ b/docs/account/settings.rst @@ -1,29 +1,43 @@ -Personal Compute setting -======================== +Personal Compute Enablement +=========================== .. py:class:: AccountSettingsAPI - TBD + The Personal Compute enablement setting lets you control which users can use the Personal Compute default + policy to create compute resources. By default all users in all workspaces have access (ON), but you can + change the setting to instead let individual workspaces configure access control (DELEGATE). + + There is only one instance of this setting per account. Since this setting has a default value, this + setting is present on all accounts even though it's never set on a given account. Deletion reverts the + value of the setting back to the default value. - .. py:method:: delete_personal_compute_setting( [, etag]) + .. py:method:: delete_personal_compute_setting(etag) Delete Personal Compute setting. - TBD + Reverts back the Personal Compute setting value to default (ON) - :param etag: str (optional) - TBD + :param etag: str + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. :returns: :class:`DeletePersonalComputeSettingResponse` - .. py:method:: read_personal_compute_setting( [, etag]) + .. py:method:: read_personal_compute_setting(etag) Get Personal Compute setting. - TBD + Gets the value of the Personal Compute setting. - :param etag: str (optional) - TBD + :param etag: str + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. :returns: :class:`PersonalComputeSetting` @@ -32,10 +46,10 @@ Personal Compute setting Update Personal Compute setting. - TBD + Updates the value of the Personal Compute setting. :param allow_missing: bool (optional) - TBD + This should always be set to true for Settings RPCs. Added for AIP compliance. :param setting: :class:`PersonalComputeSetting` (optional) :returns: :class:`PersonalComputeSetting` diff --git a/docs/account/storage_credentials.rst b/docs/account/storage_credentials.rst index 638e7e43..17f30768 100644 --- a/docs/account/storage_credentials.rst +++ b/docs/account/storage_credentials.rst @@ -42,7 +42,7 @@ Account Storage Credentials :returns: :class:`StorageCredentialInfo` - .. py:method:: delete(metastore_id, name) + .. py:method:: delete(metastore_id, name [, force]) Delete a storage credential. 
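A side note on the Personal Compute setting documented above: `etag` is now a required argument, and the docstring recommends a read -> delete pattern to avoid race conditions. A hedged sketch of that pattern; passing an empty etag on the initial read is an assumption mirroring other etag-versioned settings APIs:

```python
from databricks.sdk import AccountClient

a = AccountClient()

# Read first with an empty etag to obtain the current version...
setting = a.settings.read_personal_compute_setting(etag='')

# ...then pass that etag to delete, so concurrent writers are detected.
a.settings.delete_personal_compute_setting(etag=setting.etag)
```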
@@ -53,6 +53,8 @@ Account Storage Credentials Unity Catalog metastore ID :param name: str Name of the storage credential. + :param force: bool (optional) + Force deletion even if the Storage Credential is not empty. Default is false. diff --git a/docs/account/users.rst b/docs/account/users.rst index 39c35b2f..a87b6283 100644 --- a/docs/account/users.rst +++ b/docs/account/users.rst @@ -20,11 +20,14 @@ Account Users import time - from databricks.sdk import WorkspaceClient + from databricks.sdk import AccountClient - w = WorkspaceClient() + a = AccountClient() + + user = a.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') - user = w.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') + # cleanup + a.users.delete(delete=user.id) Create a new user. @@ -85,13 +88,16 @@ Account Users import time - from databricks.sdk import WorkspaceClient + from databricks.sdk import AccountClient - w = WorkspaceClient() + a = AccountClient() - user = w.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') + user = a.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') - fetch = w.users.get(get=user.id) + by_id = a.users.get(get=user.id) + + # cleanup + a.users.delete(delete=user.id) Get user details. @@ -116,7 +122,7 @@ Account Users all_users = w.users.list(attributes="id,userName", sort_by="userName", - sort_order=iam.ListSortOrder.descending) + sort_order=iam.ListSortOrder.DESCENDING) List users. @@ -146,7 +152,30 @@ Account Users :returns: Iterator over :class:`User` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) + + Usage: + + .. code-block:: + + import time + + from databricks.sdk import AccountClient + from databricks.sdk.service import iam + + a = AccountClient() + + user = a.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') + + a.users.patch(id=user.id, + schema=[iam.PatchSchema.URN_IETF_PARAMS_SCIM_API_MESSAGES20_PATCH_OP], + operations=[ + iam.Patch(op=iam.PatchOp.ADD, + value=iam.User(roles=[iam.ComplexValue(value="account_admin")])) + ]) + + # cleanup + a.users.delete(delete=user.id) Update user details. @@ -155,6 +184,8 @@ Account Users :param id: str Unique ID for a user in the Databricks account. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/workspace/alerts.rst b/docs/workspace/alerts.rst index 194eecdb..0778ee93 100644 --- a/docs/workspace/alerts.rst +++ b/docs/workspace/alerts.rst @@ -45,9 +45,9 @@ Alerts :param options: :class:`AlertOptions` Alert configuration options. :param query_id: str - ID of the query evaluated by the alert. + Query ID. :param parent: str (optional) - The identifier of the workspace folder containing the alert. The default is ther user's home folder. + The identifier of the workspace folder containing the object. :param rearm: int (optional) Number of seconds after being triggered before the alert rearms itself and can be triggered again. If `null`, alert will never be triggered again. @@ -167,7 +167,7 @@ Alerts :param options: :class:`AlertOptions` Alert configuration options. :param query_id: str - ID of the query evaluated by the alert. + Query ID. 
:param alert_id: str :param rearm: int (optional) Number of seconds after being triggered before the alert rearms itself and can be triggered again. diff --git a/docs/workspace/clean_rooms.rst b/docs/workspace/clean_rooms.rst new file mode 100644 index 00000000..f6a6d0a0 --- /dev/null +++ b/docs/workspace/clean_rooms.rst @@ -0,0 +1,95 @@ +Clean Rooms +=========== +.. py:class:: CleanRoomsAPI + + A clean room is a secure, privacy-protecting environment where two or more parties can share sensitive + enterprise data, including customer data, for measurements, insights, activation and other use cases. + + To create clean rooms, you must be a metastore admin or a user with the **CREATE_CLEAN_ROOM** privilege. + + .. py:method:: create(name, remote_detailed_info [, comment]) + + Create a clean room. + + Creates a new clean room with specified collaborators. The caller must be a metastore admin or have the + **CREATE_CLEAN_ROOM** privilege on the metastore. + + :param name: str + Name of the clean room. + :param remote_detailed_info: :class:`CentralCleanRoomInfo` + Central clean room details. + :param comment: str (optional) + User-provided free-form text description. + + :returns: :class:`CleanRoomInfo` + + + .. py:method:: delete(name_arg) + + Delete a clean room. + + Deletes a data object clean room from the metastore. The caller must be an owner of the clean room. + + :param name_arg: str + The name of the clean room. + + + + + .. py:method:: get(name_arg [, include_remote_details]) + + Get a clean room. + + Gets a data object clean room from the metastore. The caller must be a metastore admin or the owner of + the clean room. + + :param name_arg: str + The name of the clean room. + :param include_remote_details: bool (optional) + Whether to include remote details (central) on the clean room. + + :returns: :class:`CleanRoomInfo` + + + .. py:method:: list() + + List clean rooms. + + Gets an array of data object clean rooms from the metastore. The caller must be a metastore admin or + the owner of the clean room. There is no guarantee of a specific ordering of the elements in the + array. + + :returns: Iterator over :class:`CleanRoomInfo` + + + .. py:method:: update(name_arg [, catalog_updates, comment, name, owner]) + + Update a clean room. + + Updates the clean room with the changes and data objects in the request. The caller must be the owner + of the clean room or a metastore admin. + + When the caller is a metastore admin, only the __owner__ field can be updated. + + In the case that the clean room name is changed, **updateCleanRoom** requires that the caller is both + the clean room owner and a metastore admin. + + For each table that is added through this method, the clean room owner must also have **SELECT** + privilege on the table. The privilege must be maintained indefinitely for recipients to be able to + access the table. Typically, you should use a group as the clean room owner. + + Table removals through **update** do not require additional privileges. + + :param name_arg: str + The name of the clean room. + :param catalog_updates: List[:class:`CleanRoomCatalogUpdate`] (optional) + Array of shared data object updates. + :param comment: str (optional) + User-provided free-form text description. + :param name: str (optional) + Name of the clean room. + :param owner: str (optional) + Username of current owner of clean room.
+ + :returns: :class:`CleanRoomInfo` + \ No newline at end of file diff --git a/docs/workspace/clusters.rst b/docs/workspace/clusters.rst index 32da3f53..eb511eb5 100644 --- a/docs/workspace/clusters.rst +++ b/docs/workspace/clusters.rst @@ -421,7 +421,7 @@ Clusters cluster_id = os.environ["TEST_DEFAULT_CLUSTER_ID"] - context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.python).result() + context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.PYTHON).result() w.clusters.ensure_cluster_is_running(cluster_id) diff --git a/docs/workspace/command_execution.rst b/docs/workspace/command_execution.rst index 988a86a5..f2d15635 100644 --- a/docs/workspace/command_execution.rst +++ b/docs/workspace/command_execution.rst @@ -63,7 +63,7 @@ Command Execution cluster_id = os.environ["TEST_DEFAULT_CLUSTER_ID"] - context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.python).result() + context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.PYTHON).result() # cleanup w.command_execution.destroy(cluster_id=cluster_id, context_id=context.id) @@ -110,11 +110,11 @@ Command Execution cluster_id = os.environ["TEST_DEFAULT_CLUSTER_ID"] - context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.python).result() + context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.PYTHON).result() text_results = w.command_execution.execute(cluster_id=cluster_id, context_id=context.id, - language=compute.Language.python, + language=compute.Language.PYTHON, command="print(1)").result() # cleanup diff --git a/docs/workspace/dashboards.rst b/docs/workspace/dashboards.rst index cbfdad09..6983366c 100644 --- a/docs/workspace/dashboards.rst +++ b/docs/workspace/dashboards.rst @@ -33,8 +33,7 @@ Dashboards :param name: str (optional) The title of this dashboard that appears in list views and at the top of the dashboard page. :param parent: str (optional) - The identifier of the workspace folder containing the dashboard. The default is the user's home - folder. + The identifier of the workspace folder containing the object. :param tags: List[str] (optional) :returns: :class:`Dashboard` diff --git a/docs/workspace/experiments.rst b/docs/workspace/experiments.rst index 9dc1d388..bba64dbf 100644 --- a/docs/workspace/experiments.rst +++ b/docs/workspace/experiments.rst @@ -301,7 +301,7 @@ Experiments The following limits also apply to metric, param, and tag keys and values: - * Metric keyes, param keys, and tag keys can be up to 250 characters in length * Parameter and tag + * Metric keys, param keys, and tag keys can be up to 250 characters in length * Parameter and tag values can be up to 250 characters in length :param metrics: List[:class:`Metric`] (optional) diff --git a/docs/workspace/groups.rst b/docs/workspace/groups.rst index 3990e9c0..58a5c4b8 100644 --- a/docs/workspace/groups.rst +++ b/docs/workspace/groups.rst @@ -9,7 +9,7 @@ Groups instead of to users individually. All Databricks workspace identities can be assigned as members of groups, and members inherit permissions that are assigned to their group. - .. py:method:: create( [, display_name, entitlements, external_id, groups, id, members, roles]) + .. 
py:method:: create( [, display_name, entitlements, external_id, groups, id, members, meta, roles]) Usage: @@ -38,6 +38,8 @@ Groups :param id: str (optional) Databricks group ID :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) :returns: :class:`Group` @@ -127,7 +129,7 @@ Groups :returns: Iterator over :class:`Group` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update group details. @@ -136,11 +138,13 @@ Groups :param id: str Unique ID for a group in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. - .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, roles]) + .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, meta, roles]) Replace a group. @@ -154,6 +158,8 @@ Groups :param external_id: str (optional) :param groups: List[:class:`ComplexValue`] (optional) :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) diff --git a/docs/workspace/index.rst b/docs/workspace/index.rst index b19b3aab..caf3d2f8 100644 --- a/docs/workspace/index.rst +++ b/docs/workspace/index.rst @@ -1,12 +1,12 @@ Workspace APIs ============== - + These APIs are available from WorkspaceClient - + .. toctree:: :maxdepth: 1 - + workspace-workspace workspace-compute workspace-jobs diff --git a/docs/workspace/instance_profiles.rst b/docs/workspace/instance_profiles.rst index cb7bddc2..b67b63a6 100644 --- a/docs/workspace/instance_profiles.rst +++ b/docs/workspace/instance_profiles.rst @@ -40,11 +40,10 @@ Instance Profiles [Databricks SQL Serverless]: https://docs.databricks.com/sql/admin/serverless.html :param is_meta_instance_profile: bool (optional) - By default, Databricks validates that it has sufficient permissions to launch instances with the - instance profile. This validation uses AWS dry-run mode for the RunInstances API. If validation - fails with an error message that does not indicate an IAM related permission issue, (e.g. `Your - requested instance type is not supported in your requested availability zone`), you can pass this - flag to skip the validation and forcibly add the instance profile. + Boolean flag indicating whether the instance profile should only be used in credential passthrough + scenarios. If true, it means the instance profile contains a meta IAM role which could assume a + wide range of roles. Therefore it should always be used with authorization. This field is optional, + the default value is `false`. :param skip_validation: bool (optional) By default, Databricks validates that it has sufficient permissions to launch instances with the instance profile. This validation uses AWS dry-run mode for the RunInstances API. If validation @@ -95,11 +94,10 @@ Instance Profiles [Databricks SQL Serverless]: https://docs.databricks.com/sql/admin/serverless.html :param is_meta_instance_profile: bool (optional) - By default, Databricks validates that it has sufficient permissions to launch instances with the - instance profile.
This validation uses AWS dry-run mode for the RunInstances API. If validation - fails with an error message that does not indicate an IAM related permission issue, (e.g. `Your - requested instance type is not supported in your requested availability zone`), you can pass this - flag to skip the validation and forcibly add the instance profile. + Boolean flag indicating whether the instance profile should only be used in credential passthrough + scenarios. If true, it means the instance profile contains a meta IAM role which could assume a + wide range of roles. Therefore it should always be used with authorization. This field is optional, + the default value is `false`. diff --git a/docs/workspace/jobs.rst b/docs/workspace/jobs.rst index 92eefc53..f78f6184 100644 --- a/docs/workspace/jobs.rst +++ b/docs/workspace/jobs.rst @@ -110,7 +110,7 @@ Jobs See :method:wait_get_run_job_terminated_or_skipped for more details. - .. py:method:: create( [, access_control_list, compute, continuous, email_notifications, format, git_source, job_clusters, max_concurrent_runs, name, notification_settings, run_as, schedule, tags, tasks, timeout_seconds, trigger, webhook_notifications]) + .. py:method:: create( [, access_control_list, compute, continuous, email_notifications, format, git_source, health, job_clusters, max_concurrent_runs, name, notification_settings, parameters, run_as, schedule, tags, tasks, timeout_seconds, trigger, webhook_notifications]) Usage: @@ -161,6 +161,8 @@ Jobs :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param job_clusters: List[:class:`JobCluster`] (optional) A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. @@ -179,10 +181,12 @@ Jobs This value cannot exceed 1000\. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run. :param name: str (optional) - An optional name for the job. + An optional name for the job. The maximum length is 4096 bytes in UTF-8 encoding. :param notification_settings: :class:`JobNotificationSettings` (optional) Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this job. + :param parameters: List[:class:`JobParameterDefinition`] (optional) + Job-level parameter definitions :param run_as: :class:`JobRunAs` (optional) Write-only setting, available only in Create/Update/Reset and Submit calls. Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who created the @@ -434,8 +438,8 @@ Jobs :param expand_tasks: bool (optional) Whether to include task and cluster details in the response. :param limit: int (optional) - The number of jobs to return. This value must be greater than 0 and less or equal to 25. The default - value is 20. + The number of jobs to return. This value must be greater than 0 and less than or equal to 100. The + default value is 20. :param name: str (optional) A filter on the list based on the exact (case insensitive) job name. :param offset: int (optional) @@ -488,7 +492,7 @@ Jobs :returns: Iterator over :class:`BaseRun` - ..
py:method:: repair_run(run_id [, dbt_commands, jar_params, latest_repair_id, notebook_params, pipeline_params, python_named_params, python_params, rerun_all_failed_tasks, rerun_tasks, spark_submit_params, sql_params]) + .. py:method:: repair_run(run_id [, dbt_commands, jar_params, latest_repair_id, notebook_params, pipeline_params, python_named_params, python_params, rerun_all_failed_tasks, rerun_dependent_tasks, rerun_tasks, spark_submit_params, sql_params]) Usage: @@ -584,7 +588,10 @@ Jobs [Task parameter variables]: https://docs.databricks.com/jobs.html#parameter-variables :param rerun_all_failed_tasks: bool (optional) - If true, repair all failed tasks. Only one of rerun_tasks or rerun_all_failed_tasks can be used. + If true, repair all failed tasks. Only one of `rerun_tasks` or `rerun_all_failed_tasks` can be used. + :param rerun_dependent_tasks: bool (optional) + If true, repair all tasks that depend on the tasks in `rerun_tasks`, even if they were previously + successful. Can also be used in combination with `rerun_all_failed_tasks`. :param rerun_tasks: List[str] (optional) The task keys of the task runs to repair. :param spark_submit_params: List[str] (optional) @@ -665,7 +672,7 @@ Jobs - .. py:method:: run_now(job_id [, dbt_commands, idempotency_token, jar_params, notebook_params, pipeline_params, python_named_params, python_params, spark_submit_params, sql_params]) + .. py:method:: run_now(job_id [, dbt_commands, idempotency_token, jar_params, job_parameters, notebook_params, pipeline_params, python_named_params, python_params, spark_submit_params, sql_params]) Usage: @@ -729,6 +736,8 @@ Jobs Use [Task parameter variables](/jobs.html"#parameter-variables") to set parameters containing information about job runs. + :param job_parameters: List[Dict[str,str]] (optional) + Job-level parameters used in the run :param notebook_params: Dict[str,str] (optional) A map from keys to values for jobs with notebook task, for example `"notebook_params": {"name": "john doe", "age": "35"}`. The map is passed to the notebook and is accessible through the @@ -789,7 +798,7 @@ Jobs See :method:wait_get_run_job_terminated_or_skipped for more details. - .. py:method:: submit( [, access_control_list, git_source, idempotency_token, notification_settings, run_name, tasks, timeout_seconds, webhook_notifications]) + .. py:method:: submit( [, access_control_list, email_notifications, git_source, health, idempotency_token, notification_settings, run_name, tasks, timeout_seconds, webhook_notifications]) Usage: @@ -826,9 +835,14 @@ Jobs :param access_control_list: List[:class:`AccessControlRequest`] (optional) List of permissions to set on the job. + :param email_notifications: :class:`JobEmailNotifications` (optional) + An optional set of email addresses notified when the run begins or completes. The default behavior + is to not send any emails. :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param idempotency_token: str (optional) An optional token that can be used to guarantee the idempotency of job run requests.
If a run with the provided token already exists, the request does not create a new run but returns the ID of the diff --git a/docs/workspace/metastores.rst b/docs/workspace/metastores.rst index 870978b0..2cf34301 100644 --- a/docs/workspace/metastores.rst +++ b/docs/workspace/metastores.rst @@ -122,7 +122,7 @@ Metastores - .. py:method:: get(id) + .. py:method:: enable_optimization(metastore_id, enable) Usage: @@ -139,74 +139,74 @@ Metastores storage_root="s3://%s/%s" % (os.environ["TEST_BUCKET"], f'sdk-{time.time_ns()}')) - _ = w.metastores.get(get=created.metastore_id) + auto_maintenance = w.metastores.enable_optimization(enable=True, metastore_id=created.metastore_id) # cleanup w.metastores.delete(id=created.metastore_id, force=True) - Get a metastore. + Toggle predictive optimization on the metastore. - Gets a metastore that matches the supplied ID. The caller must be a metastore admin to retrieve this - info. + Enables or disables predictive optimization on the metastore. - :param id: str - Unique ID of the metastore. + :param metastore_id: str + Unique identifier of metastore. + :param enable: bool + Whether to enable predictive optimization on the metastore. - :returns: :class:`MetastoreInfo` + :returns: :class:`UpdatePredictiveOptimizationResponse` - .. py:method:: list() + .. py:method:: get(id) Usage: .. code-block:: + import os + import time + from databricks.sdk import WorkspaceClient w = WorkspaceClient() - all = w.metastores.list() + created = w.metastores.create(name=f'sdk-{time.time_ns()}', + storage_root="s3://%s/%s" % + (os.environ["TEST_BUCKET"], f'sdk-{time.time_ns()}')) + + _ = w.metastores.get(get=created.metastore_id) + + # cleanup + w.metastores.delete(id=created.metastore_id, force=True) - List metastores. + Get a metastore. - Gets an array of the available metastores (as __MetastoreInfo__ objects). The caller must be an admin - to retrieve this info. There is no guarantee of a specific ordering of the elements in the array. + Gets a metastore that matches the supplied ID. The caller must be a metastore admin to retrieve this + info. - :returns: Iterator over :class:`MetastoreInfo` + :param id: str + Unique ID of the metastore. + + :returns: :class:`MetastoreInfo` - .. py:method:: maintenance(metastore_id, enable) + .. py:method:: list() Usage: .. code-block:: - import os - import time - from databricks.sdk import WorkspaceClient w = WorkspaceClient() - created = w.metastores.create(name=f'sdk-{time.time_ns()}', - storage_root="s3://%s/%s" % - (os.environ["TEST_BUCKET"], f'sdk-{time.time_ns()}')) - - auto_maintenance = w.metastores.maintenance(enable=True, metastore_id=created.metastore_id) - - # cleanup - w.metastores.delete(id=created.metastore_id, force=True) + all = w.metastores.list() - Enables or disables auto maintenance on the metastore. - - Enables or disables auto maintenance on the metastore. + List metastores. - :param metastore_id: str - Unique identifier of metastore. - :param enable: bool - Whether to enable auto maintenance on the metastore. + Gets an array of the available metastores (as __MetastoreInfo__ objects). The caller must be an admin + to retrieve this info. There is no guarantee of a specific ordering of the elements in the array. - :returns: :class:`UpdateAutoMaintenanceResponse` + :returns: Iterator over :class:`MetastoreInfo` .. 
py:method:: summary() diff --git a/docs/workspace/policy_families.rst b/docs/workspace/policy_families.rst index 4f6481f3..32661273 100644 --- a/docs/workspace/policy_families.rst +++ b/docs/workspace/policy_families.rst @@ -10,4 +10,53 @@ Policy Families Policy families cannot be used directly to create clusters. Instead, you create cluster policies using a policy family. Cluster policies created using a policy family inherit the policy family's policy - definition. \ No newline at end of file + definition. + + .. py:method:: get(policy_family_id) + + Usage: + + .. code-block:: + + from databricks.sdk import WorkspaceClient + from databricks.sdk.service import compute + + w = WorkspaceClient() + + all = w.policy_families.list(compute.ListPolicyFamiliesRequest()) + + first_family = w.policy_families.get(policy_family_id=all[0].policy_family_id) + + Get policy family information. + + Retrieve the information for a policy family based on its identifier. + + :param policy_family_id: str + + :returns: :class:`PolicyFamily` + + + .. py:method:: list( [, max_results, page_token]) + + Usage: + + .. code-block:: + + from databricks.sdk import WorkspaceClient + from databricks.sdk.service import compute + + w = WorkspaceClient() + + all = w.policy_families.list(compute.ListPolicyFamiliesRequest()) + + List policy families. + + Retrieve a list of policy families. This API is paginated. + + :param max_results: int (optional) + The max number of policy families to return. + :param page_token: str (optional) + A token that can be used to get the next page of results. + + :returns: Iterator over :class:`PolicyFamily` + \ No newline at end of file diff --git a/docs/workspace/queries.rst b/docs/workspace/queries.rst index b74d53f0..9c015020 100644 --- a/docs/workspace/queries.rst +++ b/docs/workspace/queries.rst @@ -40,19 +40,19 @@ Queries / Results **Note**: You cannot add a visualization until you create the query. :param data_source_id: str (optional) - The ID of the data source / SQL warehouse where this query will run. + Data source ID. :param description: str (optional) - General description that can convey additional information about this query such as usage notes. + General description that conveys additional information about this query such as usage notes. :param name: str (optional) - The name or title of this query to display in list views. + The title of this query that appears in list views, widget headings, and on the query page. :param options: Any (optional) Exclusively used for storing a list parameter definitions. A parameter is an object with `title`, `name`, `type`, and `value` properties. The `value` field here is the default value. It can be overridden at runtime. :param parent: str (optional) - The identifier of the workspace folder containing the query. The default is the user's home folder. + The identifier of the workspace folder containing the object. :param query: str (optional) - The text of the query. + The text of the query to be run. :returns: :class:`Query` @@ -181,17 +181,17 @@ Queries / Results :param query_id: str :param data_source_id: str (optional) - The ID of the data source / SQL warehouse where this query will run. + Data source ID. :param description: str (optional) - General description that can convey additional information about this query such as usage notes. + General description that conveys additional information about this query such as usage notes. :param name: str (optional) - The name or title of this query to display in list views.
+ The title of this query that appears in list views, widget headings, and on the query page. :param options: Any (optional) Exclusively used for storing a list parameter definitions. A parameter is an object with `title`, `name`, `type`, and `value` properties. The `value` field here is the default value. It can be overridden at runtime. :param query: str (optional) - The text of the query. + The text of the query to be run. :returns: :class:`Query` \ No newline at end of file diff --git a/docs/workspace/service_principals.rst b/docs/workspace/service_principals.rst index ee0027e2..34cbb0c0 100644 --- a/docs/workspace/service_principals.rst +++ b/docs/workspace/service_principals.rst @@ -130,7 +130,7 @@ Service Principals :returns: Iterator over :class:`ServicePrincipal` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update service principal details. @@ -139,6 +139,8 @@ Service Principals :param id: str Unique ID for a service principal in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/workspace/serving_endpoints.rst b/docs/workspace/serving_endpoints.rst index cceb24d3..699d6be7 100644 --- a/docs/workspace/serving_endpoints.rst +++ b/docs/workspace/serving_endpoints.rst @@ -4,13 +4,13 @@ Serving endpoints The Serving Endpoints API allows you to create, update, and delete model serving endpoints. - You can use a serving endpoint to serve models from the Databricks Model Registry. Endpoints expose the - underlying models as scalable REST API endpoints using serverless compute. This means the endpoints and - associated compute resources are fully managed by Databricks and will not appear in your cloud account. A - serving endpoint can consist of one or more MLflow models from the Databricks Model Registry, called - served models. A serving endpoint can have at most ten served models. You can configure traffic settings - to define how requests should be routed to your served models behind an endpoint. Additionally, you can - configure the scale of resources that should be applied to each served model. + You can use a serving endpoint to serve models from the Databricks Model Registry or from Unity Catalog. + Endpoints expose the underlying models as scalable REST API endpoints using serverless compute. This means + the endpoints and associated compute resources are fully managed by Databricks and will not appear in your + cloud account. A serving endpoint can consist of one or more MLflow models from the Databricks Model + Registry, called served models. A serving endpoint can have at most ten served models. You can configure + traffic settings to define how requests should be routed to your served models behind an endpoint. + Additionally, you can configure the scale of resources that should be applied to each served model. .. py:method:: build_logs(name, served_model_name) diff --git a/docs/workspace/tables.rst b/docs/workspace/tables.rst index 8609cb03..7508cfcf 100644 --- a/docs/workspace/tables.rst +++ b/docs/workspace/tables.rst @@ -170,4 +170,20 @@ Tables A sql LIKE pattern (% and _) for table names. All tables will be returned if not set or empty. :returns: Iterator over :class:`TableSummary` + + + .. py:method:: update(full_name [, owner]) + + Update a table owner. + + Change the owner of the table. 
The caller must be the owner of the parent catalog, have the + **USE_CATALOG** privilege on the parent catalog and be the owner of the parent schema, or be the owner + of the table and have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** + privilege on the parent schema. + + :param full_name: str + Full name of the table. + :param owner: str (optional) + + \ No newline at end of file diff --git a/docs/workspace/users.rst b/docs/workspace/users.rst index 571df325..b6fda861 100644 --- a/docs/workspace/users.rst +++ b/docs/workspace/users.rst @@ -20,11 +20,14 @@ Users import time - from databricks.sdk import WorkspaceClient + from databricks.sdk import AccountClient - w = WorkspaceClient() + a = AccountClient() + + user = a.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') - user = w.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') + # cleanup + a.users.delete(delete=user.id) Create a new user. @@ -85,13 +88,16 @@ Users import time - from databricks.sdk import WorkspaceClient + from databricks.sdk import AccountClient - w = WorkspaceClient() + a = AccountClient() - user = w.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') + user = a.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') - fetch = w.users.get(get=user.id) + by_id = a.users.get(get=user.id) + + # cleanup + a.users.delete(delete=user.id) Get user details. @@ -116,7 +122,7 @@ Users all_users = w.users.list(attributes="id,userName", sort_by="userName", - sort_order=iam.ListSortOrder.descending) + sort_order=iam.ListSortOrder.DESCENDING) List users. @@ -146,7 +152,30 @@ Users :returns: Iterator over :class:`User` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) + + Usage: + + .. code-block:: + + import time + + from databricks.sdk import AccountClient + from databricks.sdk.service import iam + + a = AccountClient() + + user = a.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') + + a.users.patch(id=user.id, + schema=[iam.PatchSchema.URN_IETF_PARAMS_SCIM_API_MESSAGES20_PATCH_OP], + operations=[ + iam.Patch(op=iam.PatchOp.ADD, + value=iam.User(roles=[iam.ComplexValue(value="account_admin")])) + ]) + + # cleanup + a.users.delete(delete=user.id) Update user details. @@ -155,6 +184,8 @@ Users :param id: str Unique ID for a user in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/workspace/workspace-catalog.rst b/docs/workspace/workspace-catalog.rst index 15e4e8c4..3b0e859f 100644 --- a/docs/workspace/workspace-catalog.rst +++ b/docs/workspace/workspace-catalog.rst @@ -1,12 +1,12 @@ Unity Catalog ============= - + Configure data governance with Unity Catalog for metastores, catalogs, schemas, tables, external locations, and storage credentials - + .. toctree:: :maxdepth: 1 - + catalogs connections external_locations diff --git a/docs/workspace/workspace-compute.rst b/docs/workspace/workspace-compute.rst index 32b215b3..cbb4bb83 100644 --- a/docs/workspace/workspace-compute.rst +++ b/docs/workspace/workspace-compute.rst @@ -1,12 +1,12 @@ Compute ======= - + Use and configure compute for Databricks - + .. 
toctree:: :maxdepth: 1 - + cluster_policies clusters command_execution diff --git a/docs/workspace/workspace-files.rst b/docs/workspace/workspace-files.rst index 8c93a004..88530a6b 100644 --- a/docs/workspace/workspace-files.rst +++ b/docs/workspace/workspace-files.rst @@ -1,10 +1,10 @@ File Management =============== - + Manage files on Databricks in a filesystem-like interface - + .. toctree:: :maxdepth: 1 - + dbfs \ No newline at end of file diff --git a/docs/workspace/workspace-iam.rst b/docs/workspace/workspace-iam.rst index 4468aaaf..021ff539 100644 --- a/docs/workspace/workspace-iam.rst +++ b/docs/workspace/workspace-iam.rst @@ -1,12 +1,12 @@ Identity and Access Management ============================== - + Manage users, service principals, groups and their permissions in Accounts and Workspaces - + .. toctree:: :maxdepth: 1 - + account_access_control_proxy current_user groups diff --git a/docs/workspace/workspace-jobs.rst b/docs/workspace/workspace-jobs.rst index 0da2f655..a1a53a95 100644 --- a/docs/workspace/workspace-jobs.rst +++ b/docs/workspace/workspace-jobs.rst @@ -1,10 +1,10 @@ Jobs ==== - + Schedule automated jobs on Databricks Workspaces - + .. toctree:: :maxdepth: 1 - + jobs \ No newline at end of file diff --git a/docs/workspace/workspace-ml.rst b/docs/workspace/workspace-ml.rst index d1f926f9..e701cfd1 100644 --- a/docs/workspace/workspace-ml.rst +++ b/docs/workspace/workspace-ml.rst @@ -1,11 +1,11 @@ Machine Learning ================ - + Create and manage experiments, features, and other machine learning artifacts - + .. toctree:: :maxdepth: 1 - + experiments model_registry \ No newline at end of file diff --git a/docs/workspace/workspace-pipelines.rst b/docs/workspace/workspace-pipelines.rst index 4e4a8237..8213f87e 100644 --- a/docs/workspace/workspace-pipelines.rst +++ b/docs/workspace/workspace-pipelines.rst @@ -1,10 +1,10 @@ Delta Live Tables ================= - + Manage pipelines, runs, and other Delta Live Table resources - + .. toctree:: :maxdepth: 1 - + pipelines \ No newline at end of file diff --git a/docs/workspace/workspace-serving.rst b/docs/workspace/workspace-serving.rst index 0d20fba3..34c34751 100644 --- a/docs/workspace/workspace-serving.rst +++ b/docs/workspace/workspace-serving.rst @@ -1,10 +1,10 @@ Real-time Serving ================= - + Use real-time inference for machine learning - + .. toctree:: :maxdepth: 1 - + serving_endpoints \ No newline at end of file diff --git a/docs/workspace/workspace-settings.rst b/docs/workspace/workspace-settings.rst index 8174e8a7..71e66ac1 100644 --- a/docs/workspace/workspace-settings.rst +++ b/docs/workspace/workspace-settings.rst @@ -1,12 +1,12 @@ Settings ======== - + Manage security settings for Accounts and Workspaces - + .. toctree:: :maxdepth: 1 - + ip_access_lists token_management tokens diff --git a/docs/workspace/workspace-sharing.rst b/docs/workspace/workspace-sharing.rst index e4b3b7e7..5ba08d21 100644 --- a/docs/workspace/workspace-sharing.rst +++ b/docs/workspace/workspace-sharing.rst @@ -1,12 +1,13 @@ Delta Sharing ============= - + Configure data sharing with Unity Catalog for providers, recipients, and shares - + .. 
toctree:: :maxdepth: 1 - + + clean_rooms providers recipient_activation recipients diff --git a/docs/workspace/workspace-sql.rst b/docs/workspace/workspace-sql.rst index aa24ed62..bd49e65d 100644 --- a/docs/workspace/workspace-sql.rst +++ b/docs/workspace/workspace-sql.rst @@ -1,12 +1,12 @@ Databricks SQL ============== - + Manage Databricks SQL assets, including warehouses, dashboards, queries and query history, and alerts - + .. toctree:: :maxdepth: 1 - + alerts dashboards data_sources diff --git a/docs/workspace/workspace-workspace.rst b/docs/workspace/workspace-workspace.rst index 17348c9f..7845b778 100644 --- a/docs/workspace/workspace-workspace.rst +++ b/docs/workspace/workspace-workspace.rst @@ -1,12 +1,12 @@ Databricks Workspace ==================== - + Manage workspace-level entities that include notebooks, Git checkouts, and secrets - + .. toctree:: :maxdepth: 1 - + git_credentials repos secrets diff --git a/examples/metastores/enable_optimization_metastores.py b/examples/metastores/enable_optimization_metastores.py new file mode 100755 index 00000000..8d3d4cd0 --- /dev/null +++ b/examples/metastores/enable_optimization_metastores.py @@ -0,0 +1,15 @@ +import os +import time + +from databricks.sdk import WorkspaceClient + +w = WorkspaceClient() + +created = w.metastores.create(name=f'sdk-{time.time_ns()}', + storage_root="s3://%s/%s" % + (os.environ["TEST_BUCKET"], f'sdk-{time.time_ns()}')) + +auto_maintenance = w.metastores.enable_optimization(enable=True, metastore_id=created.metastore_id) + +# cleanup +w.metastores.delete(id=created.metastore_id, force=True) diff --git a/examples/users/patch_account_users.py b/examples/users/patch_account_users.py index 316d9e9d..e0cc8455 100755 --- a/examples/users/patch_account_users.py +++ b/examples/users/patch_account_users.py @@ -8,7 +8,7 @@ user = a.users.create(display_name=f'sdk-{time.time_ns()}', user_name=f'sdk-{time.time_ns()}@example.com') a.users.patch(id=user.id, - schema=[iam.PatchSchema.URN_IETF_PARAMS_SCIM_API_MESSAGES_2_0_PATCH_OP], + schema=[iam.PatchSchema.URN_IETF_PARAMS_SCIM_API_MESSAGES20_PATCH_OP], operations=[ iam.Patch(op=iam.PatchOp.ADD, value=iam.User(roles=[iam.ComplexValue(value="account_admin")]))
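The now-required `etag` arguments on `delete_personal_compute_setting()` and `read_personal_compute_setting()` follow the read -> delete pattern described in docs/account/settings.rst above. A minimal sketch, assuming the account client exposes the service as `settings` and that an empty etag is accepted on the first read:

.. code-block::

    from databricks.sdk import AccountClient

    a = AccountClient()

    # Assumption: an empty etag is accepted and returns the freshest value.
    setting = a.settings.read_personal_compute_setting(etag='')

    # Pass the etag from the GET to the DELETE so the revert to the default (ON)
    # only happens if the setting was not modified in between.
    a.settings.delete_personal_compute_setting(etag=setting.etag)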
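For the new clean rooms service in docs/workspace/clean_rooms.rst, a usage sketch of the read-side methods; `'<clean-room-name>'` is a placeholder for an existing clean room:

.. code-block::

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # List clean rooms in the metastore; ordering is not guaranteed.
    for room in w.clean_rooms.list():
        print(room.name)

    # Fetch one clean room together with its central (remote) details.
    room = w.clean_rooms.get(name_arg='<clean-room-name>', include_remote_details=True)

    # The owner (or a metastore admin) can delete the clean room by name.
    w.clean_rooms.delete(name_arg='<clean-room-name>')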
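The new job-level `health` rules and `parameters` in docs/workspace/jobs.rst compose as follows. This is a sketch, not a tested example: the cluster ID, notebook path, enum member names, and ten-minute threshold are illustrative, and `job_parameters` follows the List[Dict[str,str]] type given above:

.. code-block::

    import os
    import time

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()

    cluster_id = os.environ["TEST_DEFAULT_CLUSTER_ID"]
    notebook_path = '/Users/someone@example.com/my-notebook'  # placeholder

    created = w.jobs.create(
        name=f'sdk-{time.time_ns()}',
        # Illustrative health rule: flag runs longer than ten minutes.
        health=jobs.JobsHealthRules(rules=[
            jobs.JobsHealthRule(metric=jobs.JobsHealthMetric.RUN_DURATION_SECONDS,
                                op=jobs.JobsHealthOperator.GREATER_THAN,
                                value=600)
        ]),
        # Job-level parameter definitions with a default value.
        parameters=[jobs.JobParameterDefinition(name='env', default='dev')],
        tasks=[
            jobs.Task(task_key='main',
                      existing_cluster_id=cluster_id,
                      notebook_task=jobs.NotebookTask(notebook_path=notebook_path))
        ])

    # Override the job-level parameters for a single run.
    run = w.jobs.run_now(job_id=created.job_id, job_parameters=[{'env': 'prod'}]).result()

    # cleanup
    w.jobs.delete(job_id=created.job_id)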
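Finally, the new `update()` on tables in docs/workspace/tables.rst is a one-call ownership transfer; both names below are placeholders, and as with clean rooms a group usually makes a better owner than an individual user:

.. code-block::

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # Transfer ownership of a Unity Catalog table (placeholder names).
    w.tables.update(full_name='main.default.my_table', owner='data-owners')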