From de6292b7fe088a8ddc4761c79a280b0bd0a443b3 Mon Sep 17 00:00:00 2001
From: Hector Castejon Diaz
Date: Mon, 24 Jun 2024 15:02:36 +0200
Subject: [PATCH] Release v0.29.0

### Breaking Changes
* Create a method to generate OAuth tokens ([#644](https://github.com/databricks/databricks-sdk-py/pull/644))

NOTE: this change renames `@credentials_provider`/`CredentialsProvider` to `@credentials_strategy`/`CredentialsStrategy`. Users of custom credentials need to update their code to use the new name.

### Improvements and Bug Fixes
* Patch `dbutils.notebook.entry_point...` to return current local notebook path from env var ([#618](https://github.com/databricks/databricks-sdk-py/pull/618)).
* Add `serverless_compute_id` field to the config ([#685](https://github.com/databricks/databricks-sdk-py/pull/685)).
* Added `with_product(...)` and `with_user_agent_extra(...)` public functions to improve telemetry for mid-stream libraries ([#679](https://github.com/databricks/databricks-sdk-py/pull/679)).
* Fixed Interactive OAuth on Azure & updated documentation ([#669](https://github.com/databricks/databricks-sdk-py/pull/669)).

### Documentation
* Fix documentation examples ([#676](https://github.com/databricks/databricks-sdk-py/pull/676)).

### Internal Changes
* Ignore DataPlane Services during generation ([#663](https://github.com/databricks/databricks-sdk-py/pull/663)).
* Update OpenAPI spec ([#667](https://github.com/databricks/databricks-sdk-py/pull/667)).
* Retry failed integration tests ([#674](https://github.com/databricks/databricks-sdk-py/pull/674)).

### API Changes
* Changed `list()` method for [a.account_storage_credentials](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_storage_credentials.html) account-level service to return `databricks.sdk.service.catalog.ListAccountStorageCredentialsResponse` dataclass.
* Changed `isolation_mode` field for `databricks.sdk.service.catalog.CatalogInfo` to `databricks.sdk.service.catalog.CatalogIsolationMode` dataclass.
* Added `isolation_mode` field for `databricks.sdk.service.catalog.ExternalLocationInfo`.
* Added `max_results` and `page_token` fields for `databricks.sdk.service.catalog.ListCatalogsRequest`.
* Added `next_page_token` field for `databricks.sdk.service.catalog.ListCatalogsResponse`.
* Added `table_serving_url` field for `databricks.sdk.service.catalog.OnlineTable`.
* Added `isolation_mode` field for `databricks.sdk.service.catalog.StorageCredentialInfo`.
* Changed `isolation_mode` field for `databricks.sdk.service.catalog.UpdateCatalog` to `databricks.sdk.service.catalog.CatalogIsolationMode` dataclass.
* Added `isolation_mode` field for `databricks.sdk.service.catalog.UpdateExternalLocation`.
* Added `isolation_mode` field for `databricks.sdk.service.catalog.UpdateStorageCredential`.
* Added `databricks.sdk.service.catalog.CatalogIsolationMode` and `databricks.sdk.service.catalog.ListAccountStorageCredentialsResponse` dataclasses.
* Added `create_schedule()`, `create_subscription()`, `delete_schedule()`, `delete_subscription()`, `get_schedule()`, `get_subscription()`, `list()`, `list_schedules()`, `list_subscriptions()` and `update_schedule()` methods for [w.lakeview](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/lakeview.html) workspace-level service.
* Added `databricks.sdk.service.dashboards.CreateScheduleRequest`, `databricks.sdk.service.dashboards.CreateSubscriptionRequest`, `databricks.sdk.service.dashboards.CronSchedule`, `databricks.sdk.service.dashboards.DashboardView`, `databricks.sdk.service.dashboards.DeleteScheduleRequest`, `databricks.sdk.service.dashboards.DeleteSubscriptionRequest`, `databricks.sdk.service.dashboards.GetScheduleRequest`, `databricks.sdk.service.dashboards.GetSubscriptionRequest`, `databricks.sdk.service.dashboards.ListDashboardsRequest`, `databricks.sdk.service.dashboards.ListDashboardsResponse`, `databricks.sdk.service.dashboards.ListSchedulesRequest`, `databricks.sdk.service.dashboards.ListSchedulesResponse`, `databricks.sdk.service.dashboards.ListSubscriptionsRequest`, `databricks.sdk.service.dashboards.ListSubscriptionsResponse`, `databricks.sdk.service.dashboards.Schedule`, `databricks.sdk.service.dashboards.SchedulePauseStatus`, `databricks.sdk.service.dashboards.Subscriber`, `databricks.sdk.service.dashboards.Subscription`, `databricks.sdk.service.dashboards.SubscriptionSubscriberDestination`, `databricks.sdk.service.dashboards.SubscriptionSubscriberUser` and `databricks.sdk.service.dashboards.UpdateScheduleRequest` dataclasses.
* Added `termination_category` field for `databricks.sdk.service.jobs.ForEachTaskErrorMessageStats`.
* Added `on_streaming_backlog_exceeded` field for `databricks.sdk.service.jobs.JobEmailNotifications`.
* Added `environment_key` field for `databricks.sdk.service.jobs.RunTask`.
* Removed `condition_task`, `dbt_task`, `notebook_task`, `pipeline_task`, `python_wheel_task`, `run_job_task`, `spark_jar_task`, `spark_python_task`, `spark_submit_task` and `sql_task` fields for `databricks.sdk.service.jobs.SubmitRun`.
* Added `environments` field for `databricks.sdk.service.jobs.SubmitRun`.
* Added `dbt_task` field for `databricks.sdk.service.jobs.SubmitTask`.
* Added `environment_key` field for `databricks.sdk.service.jobs.SubmitTask`.
* Added `on_streaming_backlog_exceeded` field for `databricks.sdk.service.jobs.TaskEmailNotifications`.
* Added `periodic` field for `databricks.sdk.service.jobs.TriggerSettings`.
* Added `on_streaming_backlog_exceeded` field for `databricks.sdk.service.jobs.WebhookNotifications`.
* Added `databricks.sdk.service.jobs.PeriodicTriggerConfiguration` dataclass.
* Added `databricks.sdk.service.jobs.PeriodicTriggerConfigurationTimeUnit` dataclass.
* Added `batch_get()` method for [w.consumer_listings](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/consumer_listings.html) workspace-level service.
* Added `batch_get()` method for [w.consumer_providers](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/consumer_providers.html) workspace-level service.
* Added `provider_summary` field for `databricks.sdk.service.marketplace.Listing`.
* Added `databricks.sdk.service.marketplace.BatchGetListingsRequest`, `databricks.sdk.service.marketplace.BatchGetListingsResponse`, `databricks.sdk.service.marketplace.BatchGetProvidersRequest`, `databricks.sdk.service.marketplace.BatchGetProvidersResponse`, `databricks.sdk.service.marketplace.ProviderIconFile`, `databricks.sdk.service.marketplace.ProviderIconType`, `databricks.sdk.service.marketplace.ProviderListingSummaryInfo` and `databricks.sdk.service.oauth2.DataPlaneInfo` dataclasses.
* Removed `create_deployment()` method for [w.apps](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/apps.html) workspace-level service.
* Added `deploy()` and `start()` methods for [w.apps](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/apps.html) workspace-level service.
* Added [w.serving_endpoints_data_plane](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/serving_endpoints_data_plane.html) workspace-level service.
* Added `service_principal_id` and `service_principal_name` fields for `databricks.sdk.service.serving.App`.
* Added `mode` field for `databricks.sdk.service.serving.AppDeployment`.
* Added `mode` field for `databricks.sdk.service.serving.CreateAppDeploymentRequest`.
* Added `data_plane_info` field for `databricks.sdk.service.serving.ServingEndpointDetailed`.
* Added `databricks.sdk.service.serving.AppDeploymentMode`, `databricks.sdk.service.serving.ModelDataPlaneInfo` and `databricks.sdk.service.serving.StartAppRequest` dataclasses.
* Added `query_next_page()` method for [w.vector_search_indexes](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/vector_search_indexes.html) workspace-level service.
* Added `query_type` field for `databricks.sdk.service.vectorsearch.QueryVectorIndexRequest`.
* Added `next_page_token` field for `databricks.sdk.service.vectorsearch.QueryVectorIndexResponse`.

OpenAPI SHA: 7437dabb9dadee402c1fc060df4c1ce8cc5369f0, Date: 2024-06-24
---
 .codegen/_openapi_sha | 2 +-
 CHANGELOG.md | 72 ++
 databricks/sdk/service/catalog.py | 120 ++-
 databricks/sdk/service/compute.py | 9 +-
 databricks/sdk/service/dashboards.py | 709 +++++++++++++++++-
 databricks/sdk/service/jobs.py | 269 +++----
 databricks/sdk/service/marketplace.py | 64 ++
 databricks/sdk/service/pipelines.py | 2 +-
 databricks/sdk/service/serving.py | 42 +-
 databricks/sdk/service/settings.py | 1 +
 databricks/sdk/service/sharing.py | 1 -
 databricks/sdk/service/sql.py | 124 ++-
 databricks/sdk/service/vectorsearch.py | 75 ++
 databricks/sdk/version.py | 2 +-
 docs/dbdataclasses/catalog.rst | 27 +-
 docs/dbdataclasses/dashboards.rst | 72 ++
 docs/dbdataclasses/jobs.rst | 31 +
 docs/dbdataclasses/marketplace.rst | 19 +
 docs/dbdataclasses/serving.rst | 4 +
 docs/dbdataclasses/settings.rst | 3 +
 docs/dbdataclasses/sharing.rst | 3 -
 docs/dbdataclasses/vectorsearch.rst | 4 +
 docs/workspace/catalog/catalogs.rst | 16 +-
 docs/workspace/catalog/external_locations.rst | 4 +-
 docs/workspace/catalog/functions.rst | 2 +
 docs/workspace/catalog/metastores.rst | 5 +-
 .../workspace/catalog/storage_credentials.rst | 8 +-
 docs/workspace/dashboards/lakeview.rst | 160 ++++
 docs/workspace/jobs/jobs.rst | 43 +-
 docs/workspace/serving/apps.rst | 12 +
 docs/workspace/sql/alerts.rst | 26 +-
 docs/workspace/sql/dashboards.rst | 4 +-
 docs/workspace/sql/data_sources.rst | 8 +
 docs/workspace/sql/dbsql_permissions.rst | 16 +
 docs/workspace/sql/queries.rst | 36 +-
 docs/workspace/sql/statement_execution.rst | 5 +-
 .../vectorsearch/vector_search_indexes.rst | 21 +-
 .../update_catalog_workspace_bindings.py | 2 +-
 38 files changed, 1733 insertions(+), 290 deletions(-)

diff --git a/.codegen/_openapi_sha b/.codegen/_openapi_sha
index de0f45ab..c4b47ca1 100644
--- a/.codegen/_openapi_sha
+++ b/.codegen/_openapi_sha
@@ -1 +1 @@
-37b925eba37dfb3d7e05b6ba2d458454ce62d3a0
\ No newline at end of file
+7437dabb9dadee402c1fc060df4c1ce8cc5369f0
\ No newline at end of file
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a039d34d..cd0a18a4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,77 @@
 # Version changelog
+## 0.29.0
+
+### Breaking Changes
+* Create a method to generate OAuth tokens
([#644](https://github.com/databricks/databricks-sdk-py/pull/644))
+
+NOTE: this change renames `@credentials_provider`/`CredentialsProvider` to `@credentials_strategy`/`CredentialsStrategy`. Users
+of custom credentials need to update their code to use the new name.
+
+
+### Improvements and Bug Fixes
+
+* Patch `dbutils.notebook.entry_point...` to return current local notebook path from env var ([#618](https://github.com/databricks/databricks-sdk-py/pull/618)).
+* Add `serverless_compute_id` field to the config ([#685](https://github.com/databricks/databricks-sdk-py/pull/685)).
+* Added `with_product(...)` and `with_user_agent_extra(...)` public functions to improve telemetry for mid-stream libraries ([#679](https://github.com/databricks/databricks-sdk-py/pull/679)).
+* Fixed Interactive OAuth on Azure & updated documentation ([#669](https://github.com/databricks/databricks-sdk-py/pull/669)).
+
+
+### Documentation
+
+* Fix documentation examples ([#676](https://github.com/databricks/databricks-sdk-py/pull/676)).
+
+
+### Internal Changes
+
+* Ignore DataPlane Services during generation ([#663](https://github.com/databricks/databricks-sdk-py/pull/663)).
+* Update OpenAPI spec ([#667](https://github.com/databricks/databricks-sdk-py/pull/667)).
+* Retry failed integration tests ([#674](https://github.com/databricks/databricks-sdk-py/pull/674)).
+
+### API Changes
+
+ * Changed `list()` method for [a.account_storage_credentials](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_storage_credentials.html) account-level service to return `databricks.sdk.service.catalog.ListAccountStorageCredentialsResponse` dataclass.
+ * Changed `isolation_mode` field for `databricks.sdk.service.catalog.CatalogInfo` to `databricks.sdk.service.catalog.CatalogIsolationMode` dataclass.
+ * Added `isolation_mode` field for `databricks.sdk.service.catalog.ExternalLocationInfo`.
+ * Added `max_results` and `page_token` fields for `databricks.sdk.service.catalog.ListCatalogsRequest`.
+ * Added `next_page_token` field for `databricks.sdk.service.catalog.ListCatalogsResponse`.
+ * Added `table_serving_url` field for `databricks.sdk.service.catalog.OnlineTable`.
+ * Added `isolation_mode` field for `databricks.sdk.service.catalog.StorageCredentialInfo`.
+ * Changed `isolation_mode` field for `databricks.sdk.service.catalog.UpdateCatalog` to `databricks.sdk.service.catalog.CatalogIsolationMode` dataclass.
+ * Added `isolation_mode` field for `databricks.sdk.service.catalog.UpdateExternalLocation`.
+ * Added `isolation_mode` field for `databricks.sdk.service.catalog.UpdateStorageCredential`.
+ * Added `databricks.sdk.service.catalog.CatalogIsolationMode` and `databricks.sdk.service.catalog.ListAccountStorageCredentialsResponse` dataclasses.
+ * Added `create_schedule()`, `create_subscription()`, `delete_schedule()`, `delete_subscription()`, `get_schedule()`, `get_subscription()`, `list()`, `list_schedules()`, `list_subscriptions()` and `update_schedule()` methods for [w.lakeview](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/lakeview.html) workspace-level service.
+ * Added `databricks.sdk.service.dashboards.CreateScheduleRequest`, `databricks.sdk.service.dashboards.CreateSubscriptionRequest`, `databricks.sdk.service.dashboards.CronSchedule`, `databricks.sdk.service.dashboards.DashboardView`, `databricks.sdk.service.dashboards.DeleteScheduleRequest`, `databricks.sdk.service.dashboards.DeleteSubscriptionRequest`, `databricks.sdk.service.dashboards.GetScheduleRequest`, `databricks.sdk.service.dashboards.GetSubscriptionRequest`, `databricks.sdk.service.dashboards.ListDashboardsRequest`, `databricks.sdk.service.dashboards.ListDashboardsResponse`, `databricks.sdk.service.dashboards.ListSchedulesRequest`, `databricks.sdk.service.dashboards.ListSchedulesResponse`, `databricks.sdk.service.dashboards.ListSubscriptionsRequest`, `databricks.sdk.service.dashboards.ListSubscriptionsResponse`, `databricks.sdk.service.dashboards.Schedule`, `databricks.sdk.service.dashboards.SchedulePauseStatus`, `databricks.sdk.service.dashboards.Subscriber`, `databricks.sdk.service.dashboards.Subscription`, `databricks.sdk.service.dashboards.SubscriptionSubscriberDestination`, `databricks.sdk.service.dashboards.SubscriptionSubscriberUser` and `databricks.sdk.service.dashboards.UpdateScheduleRequest` dataclasses.
+ * Added `termination_category` field for `databricks.sdk.service.jobs.ForEachTaskErrorMessageStats`.
+ * Added `on_streaming_backlog_exceeded` field for `databricks.sdk.service.jobs.JobEmailNotifications`.
+ * Added `environment_key` field for `databricks.sdk.service.jobs.RunTask`.
+ * Removed `condition_task`, `dbt_task`, `notebook_task`, `pipeline_task`, `python_wheel_task`, `run_job_task`, `spark_jar_task`, `spark_python_task`, `spark_submit_task` and `sql_task` fields for `databricks.sdk.service.jobs.SubmitRun`.
+ * Added `environments` field for `databricks.sdk.service.jobs.SubmitRun`.
+ * Added `dbt_task` field for `databricks.sdk.service.jobs.SubmitTask`.
+ * Added `environment_key` field for `databricks.sdk.service.jobs.SubmitTask`.
+ * Added `on_streaming_backlog_exceeded` field for `databricks.sdk.service.jobs.TaskEmailNotifications`.
+ * Added `periodic` field for `databricks.sdk.service.jobs.TriggerSettings`.
+ * Added `on_streaming_backlog_exceeded` field for `databricks.sdk.service.jobs.WebhookNotifications`.
+ * Added `databricks.sdk.service.jobs.PeriodicTriggerConfiguration` dataclass.
+ * Added `databricks.sdk.service.jobs.PeriodicTriggerConfigurationTimeUnit` dataclass.
+ * Added `batch_get()` method for [w.consumer_listings](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/consumer_listings.html) workspace-level service.
+ * Added `batch_get()` method for [w.consumer_providers](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/consumer_providers.html) workspace-level service.
+ * Added `provider_summary` field for `databricks.sdk.service.marketplace.Listing`.
+ * Added `databricks.sdk.service.marketplace.BatchGetListingsRequest`, `databricks.sdk.service.marketplace.BatchGetListingsResponse`, `databricks.sdk.service.marketplace.BatchGetProvidersRequest`, `databricks.sdk.service.marketplace.BatchGetProvidersResponse`, `databricks.sdk.service.marketplace.ProviderIconFile`, `databricks.sdk.service.marketplace.ProviderIconType`, `databricks.sdk.service.marketplace.ProviderListingSummaryInfo` and `databricks.sdk.service.oauth2.DataPlaneInfo` dataclasses.
+ * Removed `create_deployment()` method for [w.apps](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/apps.html) workspace-level service.
+ * Added `deploy()` and `start()` methods for [w.apps](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/apps.html) workspace-level service.
+ * Added [w.serving_endpoints_data_plane](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/serving_endpoints_data_plane.html) workspace-level service.
+ * Added `service_principal_id` and `service_principal_name` fields for `databricks.sdk.service.serving.App`.
+ * Added `mode` field for `databricks.sdk.service.serving.AppDeployment`.
+ * Added `mode` field for `databricks.sdk.service.serving.CreateAppDeploymentRequest`.
+ * Added `data_plane_info` field for `databricks.sdk.service.serving.ServingEndpointDetailed`.
+ * Added `databricks.sdk.service.serving.AppDeploymentMode`, `databricks.sdk.service.serving.ModelDataPlaneInfo` and `databricks.sdk.service.serving.StartAppRequest` dataclasses.
+ * Added `query_next_page()` method for [w.vector_search_indexes](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/vector_search_indexes.html) workspace-level service.
+ * Added `query_type` field for `databricks.sdk.service.vectorsearch.QueryVectorIndexRequest`.
+ * Added `next_page_token` field for `databricks.sdk.service.vectorsearch.QueryVectorIndexResponse`.
+
+OpenAPI SHA: 7437dabb9dadee402c1fc060df4c1ce8cc5369f0, Date: 2024-06-24
 ## 0.28.0
 ### Improvements and new features
diff --git a/databricks/sdk/service/catalog.py b/databricks/sdk/service/catalog.py
index 9abdf478..e6456bc0 100755
--- a/databricks/sdk/service/catalog.py
+++ b/databricks/sdk/service/catalog.py
@@ -448,7 +448,7 @@ class CatalogInfo: full_name: Optional[str] = None """The full name of the catalog. Corresponds with the name field.""" - isolation_mode: Optional[IsolationMode] = None + isolation_mode: Optional[CatalogIsolationMode] = None """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" metastore_id: Optional[str] = None
@@ -541,7 +541,7 @@ def from_dict(cls, d: Dict[str, any]) -> CatalogInfo: enable_predictive_optimization=_enum(d, 'enable_predictive_optimization', EnablePredictiveOptimization), full_name=d.get('full_name', None), - isolation_mode=_enum(d, 'isolation_mode', IsolationMode), + isolation_mode=_enum(d, 'isolation_mode', CatalogIsolationMode), metastore_id=d.get('metastore_id', None), name=d.get('name', None), options=d.get('options', None),
@@ -571,13 +571,18 @@ class CatalogInfoSecurableKind(Enum): CATALOG_FOREIGN_SQLDW = 'CATALOG_FOREIGN_SQLDW' CATALOG_FOREIGN_SQLSERVER = 'CATALOG_FOREIGN_SQLSERVER' CATALOG_INTERNAL = 'CATALOG_INTERNAL' - CATALOG_ONLINE = 'CATALOG_ONLINE' - CATALOG_ONLINE_INDEX = 'CATALOG_ONLINE_INDEX' CATALOG_STANDARD = 'CATALOG_STANDARD' CATALOG_SYSTEM = 'CATALOG_SYSTEM' CATALOG_SYSTEM_DELTASHARING = 'CATALOG_SYSTEM_DELTASHARING' +class CatalogIsolationMode(Enum): + """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" + + ISOLATED = 'ISOLATED' + OPEN = 'OPEN' + + class CatalogType(Enum): """The type of the catalog."""
@@ -1222,8 +1227,9 @@ class CreateMetastore: """The user-specified name of the metastore.""" region: Optional[str] = None - """Cloud region which the metastore serves (e.g., `us-west-2`, `westus`). If this field is omitted, - the region of the workspace receiving the request will be used.""" + """Cloud region which the metastore serves (e.g., `us-west-2`, `westus`). The field can be omitted + in the __workspace-level__ __API__ but not in the __account-level__ __API__.
If this field is + omitted, the region of the workspace receiving the request will be used.""" storage_root: Optional[str] = None """The storage root URL for metastore""" @@ -1494,7 +1500,7 @@ class CreateStorageCredential: """Comment associated with the credential.""" databricks_gcp_service_account: Optional[DatabricksGcpServiceAccountRequest] = None - """The managed GCP service account configuration.""" + """The Databricks managed GCP service account configuration.""" read_only: Optional[bool] = None """Whether the storage credential is only usable for read operations.""" @@ -1968,6 +1974,9 @@ class ExternalLocationInfo: encryption_details: Optional[EncryptionDetails] = None """Encryption options that apply to clients connecting to cloud storage.""" + isolation_mode: Optional[IsolationMode] = None + """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" + metastore_id: Optional[str] = None """Unique identifier of metastore hosting the external location.""" @@ -2000,6 +2009,7 @@ def as_dict(self) -> dict: if self.credential_id is not None: body['credential_id'] = self.credential_id if self.credential_name is not None: body['credential_name'] = self.credential_name if self.encryption_details: body['encryption_details'] = self.encryption_details.as_dict() + if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode.value if self.metastore_id is not None: body['metastore_id'] = self.metastore_id if self.name is not None: body['name'] = self.name if self.owner is not None: body['owner'] = self.owner @@ -2020,6 +2030,7 @@ def from_dict(cls, d: Dict[str, any]) -> ExternalLocationInfo: credential_id=d.get('credential_id', None), credential_name=d.get('credential_name', None), encryption_details=_from_dict(d, 'encryption_details', EncryptionDetails), + isolation_mode=_enum(d, 'isolation_mode', IsolationMode), metastore_id=d.get('metastore_id', None), name=d.get('name', None), owner=d.get('owner', None), @@ -2529,8 +2540,8 @@ class GetMetastoreSummaryResponseDeltaSharingScope(Enum): class IsolationMode(Enum): """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" - ISOLATED = 'ISOLATED' - OPEN = 'OPEN' + ISOLATION_MODE_ISOLATED = 'ISOLATION_MODE_ISOLATED' + ISOLATION_MODE_OPEN = 'ISOLATION_MODE_OPEN' @dataclass @@ -2574,16 +2585,22 @@ class ListCatalogsResponse: catalogs: Optional[List[CatalogInfo]] = None """An array of catalog information objects.""" + next_page_token: Optional[str] = None + """Opaque token to retrieve the next page of results. Absent if there are no more pages. 
+ __page_token__ should be set to this value for the next request (for the next page of results).""" + def as_dict(self) -> dict: """Serializes the ListCatalogsResponse into a dictionary suitable for use as a JSON request body.""" body = {} if self.catalogs: body['catalogs'] = [v.as_dict() for v in self.catalogs] + if self.next_page_token is not None: body['next_page_token'] = self.next_page_token return body @classmethod def from_dict(cls, d: Dict[str, any]) -> ListCatalogsResponse: """Deserializes the ListCatalogsResponse from a dictionary.""" - return cls(catalogs=_repeated_dict(d, 'catalogs', CatalogInfo)) + return cls(catalogs=_repeated_dict(d, 'catalogs', CatalogInfo), + next_page_token=d.get('next_page_token', None)) @dataclass @@ -3610,12 +3627,16 @@ class OnlineTable: status: Optional[OnlineTableStatus] = None """Online Table status""" + table_serving_url: Optional[str] = None + """Data serving REST API URL for this table""" + def as_dict(self) -> dict: """Serializes the OnlineTable into a dictionary suitable for use as a JSON request body.""" body = {} if self.name is not None: body['name'] = self.name if self.spec: body['spec'] = self.spec.as_dict() if self.status: body['status'] = self.status.as_dict() + if self.table_serving_url is not None: body['table_serving_url'] = self.table_serving_url return body @classmethod @@ -3623,7 +3644,8 @@ def from_dict(cls, d: Dict[str, any]) -> OnlineTable: """Deserializes the OnlineTable from a dictionary.""" return cls(name=d.get('name', None), spec=_from_dict(d, 'spec', OnlineTableSpec), - status=_from_dict(d, 'status', OnlineTableStatus)) + status=_from_dict(d, 'status', OnlineTableStatus), + table_serving_url=d.get('table_serving_url', None)) @dataclass @@ -3921,7 +3943,6 @@ class Privilege(Enum): REFRESH = 'REFRESH' SELECT = 'SELECT' SET_SHARE_PERMISSION = 'SET_SHARE_PERMISSION' - SINGLE_USER_ACCESS = 'SINGLE_USER_ACCESS' USAGE = 'USAGE' USE_CATALOG = 'USE_CATALOG' USE_CONNECTION = 'USE_CONNECTION' @@ -4350,11 +4371,14 @@ class StorageCredentialInfo: """Username of credential creator.""" databricks_gcp_service_account: Optional[DatabricksGcpServiceAccountResponse] = None - """The managed GCP service account configuration.""" + """The Databricks managed GCP service account configuration.""" id: Optional[str] = None """The unique identifier of the credential.""" + isolation_mode: Optional[IsolationMode] = None + """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" + metastore_id: Optional[str] = None """Unique identifier of parent metastore.""" @@ -4390,6 +4414,7 @@ def as_dict(self) -> dict: if self.databricks_gcp_service_account: body['databricks_gcp_service_account'] = self.databricks_gcp_service_account.as_dict() if self.id is not None: body['id'] = self.id + if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode.value if self.metastore_id is not None: body['metastore_id'] = self.metastore_id if self.name is not None: body['name'] = self.name if self.owner is not None: body['owner'] = self.owner @@ -4414,6 +4439,7 @@ def from_dict(cls, d: Dict[str, any]) -> StorageCredentialInfo: databricks_gcp_service_account=_from_dict(d, 'databricks_gcp_service_account', DatabricksGcpServiceAccountResponse), id=d.get('id', None), + isolation_mode=_enum(d, 'isolation_mode', IsolationMode), metastore_id=d.get('metastore_id', None), name=d.get('name', None), owner=d.get('owner', None), @@ -4831,7 +4857,7 @@ class UpdateCatalog: enable_predictive_optimization: 
Optional[EnablePredictiveOptimization] = None """Whether predictive optimization should be enabled for this object and objects under it.""" - isolation_mode: Optional[IsolationMode] = None + isolation_mode: Optional[CatalogIsolationMode] = None """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" name: Optional[str] = None @@ -4865,7 +4891,7 @@ def from_dict(cls, d: Dict[str, any]) -> UpdateCatalog: return cls(comment=d.get('comment', None), enable_predictive_optimization=_enum(d, 'enable_predictive_optimization', EnablePredictiveOptimization), - isolation_mode=_enum(d, 'isolation_mode', IsolationMode), + isolation_mode=_enum(d, 'isolation_mode', CatalogIsolationMode), name=d.get('name', None), new_name=d.get('new_name', None), owner=d.get('owner', None), @@ -4921,6 +4947,9 @@ class UpdateExternalLocation: force: Optional[bool] = None """Force update even if changing url invalidates dependent external tables or mounts.""" + isolation_mode: Optional[IsolationMode] = None + """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" + name: Optional[str] = None """Name of the external location.""" @@ -4947,6 +4976,7 @@ def as_dict(self) -> dict: if self.credential_name is not None: body['credential_name'] = self.credential_name if self.encryption_details: body['encryption_details'] = self.encryption_details.as_dict() if self.force is not None: body['force'] = self.force + if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode.value if self.name is not None: body['name'] = self.name if self.new_name is not None: body['new_name'] = self.new_name if self.owner is not None: body['owner'] = self.owner @@ -4963,6 +4993,7 @@ def from_dict(cls, d: Dict[str, any]) -> UpdateExternalLocation: credential_name=d.get('credential_name', None), encryption_details=_from_dict(d, 'encryption_details', EncryptionDetails), force=d.get('force', None), + isolation_mode=_enum(d, 'isolation_mode', IsolationMode), name=d.get('name', None), new_name=d.get('new_name', None), owner=d.get('owner', None), @@ -5328,11 +5359,14 @@ class UpdateStorageCredential: """Comment associated with the credential.""" databricks_gcp_service_account: Optional[DatabricksGcpServiceAccountRequest] = None - """The managed GCP service account configuration.""" + """The Databricks managed GCP service account configuration.""" force: Optional[bool] = None """Force update even if there are dependent external locations or external tables.""" + isolation_mode: Optional[IsolationMode] = None + """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" + name: Optional[str] = None """Name of the storage credential.""" @@ -5360,6 +5394,7 @@ def as_dict(self) -> dict: if self.databricks_gcp_service_account: body['databricks_gcp_service_account'] = self.databricks_gcp_service_account.as_dict() if self.force is not None: body['force'] = self.force + if self.isolation_mode is not None: body['isolation_mode'] = self.isolation_mode.value if self.name is not None: body['name'] = self.name if self.new_name is not None: body['new_name'] = self.new_name if self.owner is not None: body['owner'] = self.owner @@ -5379,6 +5414,7 @@ def from_dict(cls, d: Dict[str, any]) -> UpdateStorageCredential: databricks_gcp_service_account=_from_dict(d, 'databricks_gcp_service_account', DatabricksGcpServiceAccountRequest), force=d.get('force', None), + isolation_mode=_enum(d, 'isolation_mode', IsolationMode), 
name=d.get('name', None), new_name=d.get('new_name', None), owner=d.get('owner', None), @@ -6268,7 +6304,11 @@ def get(self, name: str, *, include_browse: Optional[bool] = None) -> CatalogInf res = self._api.do('GET', f'/api/2.1/unity-catalog/catalogs/{name}', query=query, headers=headers) return CatalogInfo.from_dict(res) - def list(self, *, include_browse: Optional[bool] = None) -> Iterator[CatalogInfo]: + def list(self, + *, + include_browse: Optional[bool] = None, + max_results: Optional[int] = None, + page_token: Optional[str] = None) -> Iterator[CatalogInfo]: """List catalogs. Gets an array of catalogs in the metastore. If the caller is the metastore admin, all catalogs will be @@ -6279,24 +6319,41 @@ def list(self, *, include_browse: Optional[bool] = None) -> Iterator[CatalogInfo :param include_browse: bool (optional) Whether to include catalogs in the response for which the principal can only access selective metadata for + :param max_results: int (optional) + Maximum number of catalogs to return. - when set to 0, the page length is set to a server configured + value (recommended); - when set to a value greater than 0, the page length is the minimum of this + value and a server configured value; - when set to a value less than 0, an invalid parameter error + is returned; - If not set, all valid catalogs are returned (not recommended). - Note: The number of + returned catalogs might be less than the specified max_results size, even zero. The only definitive + indication that no further catalogs can be fetched is when the next_page_token is unset from the + response. + :param page_token: str (optional) + Opaque pagination token to go to next page based on previous query. :returns: Iterator over :class:`CatalogInfo` """ query = {} if include_browse is not None: query['include_browse'] = include_browse + if max_results is not None: query['max_results'] = max_results + if page_token is not None: query['page_token'] = page_token headers = {'Accept': 'application/json', } - json = self._api.do('GET', '/api/2.1/unity-catalog/catalogs', query=query, headers=headers) - parsed = ListCatalogsResponse.from_dict(json).catalogs - return parsed if parsed is not None else [] + while True: + json = self._api.do('GET', '/api/2.1/unity-catalog/catalogs', query=query, headers=headers) + if 'catalogs' in json: + for v in json['catalogs']: + yield CatalogInfo.from_dict(v) + if 'next_page_token' not in json or not json['next_page_token']: + return + query['page_token'] = json['next_page_token'] def update(self, name: str, *, comment: Optional[str] = None, enable_predictive_optimization: Optional[EnablePredictiveOptimization] = None, - isolation_mode: Optional[IsolationMode] = None, + isolation_mode: Optional[CatalogIsolationMode] = None, new_name: Optional[str] = None, owner: Optional[str] = None, properties: Optional[Dict[str, str]] = None) -> CatalogInfo: @@ -6311,7 +6368,7 @@ def update(self, User-provided free-form text description. :param enable_predictive_optimization: :class:`EnablePredictiveOptimization` (optional) Whether predictive optimization should be enabled for this object and objects under it. - :param isolation_mode: :class:`IsolationMode` (optional) + :param isolation_mode: :class:`CatalogIsolationMode` (optional) Whether the current securable is accessible from all workspaces or a specific set of workspaces. :param new_name: str (optional) New name for the catalog. 
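The hunk above rewires `CatalogsAPI.list()` to page through `/api/2.1/unity-catalog/catalogs` transparently, following `next_page_token` until the server stops returning one. A minimal usage sketch of how the new parameters surface to callers (the page size of 50 is only an illustration, not a value taken from this patch):

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# The iterator now fetches additional pages lazily; callers never have to
# handle next_page_token themselves.
for catalog in w.catalogs.list(max_results=50):
    print(catalog.full_name, catalog.isolation_mode)
```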
@@ -6649,6 +6706,7 @@ def update(self, credential_name: Optional[str] = None, encryption_details: Optional[EncryptionDetails] = None, force: Optional[bool] = None, + isolation_mode: Optional[IsolationMode] = None, new_name: Optional[str] = None, owner: Optional[str] = None, read_only: Optional[bool] = None, @@ -6672,6 +6730,8 @@ def update(self, Encryption options that apply to clients connecting to cloud storage. :param force: bool (optional) Force update even if changing url invalidates dependent external tables or mounts. + :param isolation_mode: :class:`IsolationMode` (optional) + Whether the current securable is accessible from all workspaces or a specific set of workspaces. :param new_name: str (optional) New name for the external location. :param owner: str (optional) @@ -6691,6 +6751,7 @@ def update(self, if credential_name is not None: body['credential_name'] = credential_name if encryption_details is not None: body['encryption_details'] = encryption_details.as_dict() if force is not None: body['force'] = force + if isolation_mode is not None: body['isolation_mode'] = isolation_mode.value if new_name is not None: body['new_name'] = new_name if owner is not None: body['owner'] = owner if read_only is not None: body['read_only'] = read_only @@ -6718,6 +6779,8 @@ def __init__(self, api_client): def create(self, function_info: CreateFunction) -> FunctionInfo: """Create a function. + **WARNING: This API is experimental and will change in future versions** + Creates a new function The user must have the following permissions in order for the function to be created: - @@ -7022,8 +7085,9 @@ def create(self, :param name: str The user-specified name of the metastore. :param region: str (optional) - Cloud region which the metastore serves (e.g., `us-west-2`, `westus`). If this field is omitted, the - region of the workspace receiving the request will be used. + Cloud region which the metastore serves (e.g., `us-west-2`, `westus`). The field can be omitted in + the __workspace-level__ __API__ but not in the __account-level__ __API__. If this field is omitted, + the region of the workspace receiving the request will be used. :param storage_root: str (optional) The storage root URL for metastore @@ -8277,7 +8341,7 @@ def create(self, :param comment: str (optional) Comment associated with the credential. :param databricks_gcp_service_account: :class:`DatabricksGcpServiceAccountRequest` (optional) - The managed GCP service account configuration. + The Databricks managed GCP service account configuration. :param read_only: bool (optional) Whether the storage credential is only usable for read operations. :param skip_validation: bool (optional) @@ -8393,6 +8457,7 @@ def update(self, comment: Optional[str] = None, databricks_gcp_service_account: Optional[DatabricksGcpServiceAccountRequest] = None, force: Optional[bool] = None, + isolation_mode: Optional[IsolationMode] = None, new_name: Optional[str] = None, owner: Optional[str] = None, read_only: Optional[bool] = None, @@ -8414,9 +8479,11 @@ def update(self, :param comment: str (optional) Comment associated with the credential. :param databricks_gcp_service_account: :class:`DatabricksGcpServiceAccountRequest` (optional) - The managed GCP service account configuration. + The Databricks managed GCP service account configuration. :param force: bool (optional) Force update even if there are dependent external locations or external tables. 
+ :param isolation_mode: :class:`IsolationMode` (optional) + Whether the current securable is accessible from all workspaces or a specific set of workspaces. :param new_name: str (optional) New name for the storage credential. :param owner: str (optional) @@ -8439,6 +8506,7 @@ def update(self, if databricks_gcp_service_account is not None: body['databricks_gcp_service_account'] = databricks_gcp_service_account.as_dict() if force is not None: body['force'] = force + if isolation_mode is not None: body['isolation_mode'] = isolation_mode.value if new_name is not None: body['new_name'] = new_name if owner is not None: body['owner'] = owner if read_only is not None: body['read_only'] = read_only diff --git a/databricks/sdk/service/compute.py b/databricks/sdk/service/compute.py index c8c2542f..4e6a0215 100755 --- a/databricks/sdk/service/compute.py +++ b/databricks/sdk/service/compute.py @@ -790,7 +790,7 @@ class ClusterDetails: driver: Optional[SparkNode] = None """Node on which the Spark driver resides. The driver node contains the Spark master and the - application that manages the per-notebook Spark REPLs.""" + Databricks application that manages the per-notebook Spark REPLs.""" driver_instance_pool_id: Optional[str] = None """The optional ID of the instance pool for the driver of the cluster belongs. The pool cluster @@ -2984,9 +2984,8 @@ def from_dict(cls, d: Dict[str, any]) -> EditResponse: @dataclass class Environment: - """The a environment entity used to preserve serverless environment side panel and jobs' - environment for non-notebook task. In this minimal environment spec, only pip dependencies are - supported. Next ID: 5""" + """The environment entity used to preserve serverless environment side panel and jobs' environment + for non-notebook task. In this minimal environment spec, only pip dependencies are supported.""" client: str """Client version used by the environment The client is the user-facing environment of the runtime. @@ -5076,7 +5075,7 @@ class Policy: """Additional human-readable description of the cluster policy.""" is_default: Optional[bool] = None - """If true, policy is a default policy created and managed by . Default policies cannot + """If true, policy is a default policy created and managed by Databricks. 
Default policies cannot be deleted, and their policy families cannot be changed.""" libraries: Optional[List[Library]] = None diff --git a/databricks/sdk/service/dashboards.py b/databricks/sdk/service/dashboards.py index 065b8b98..b24d0318 100755 --- a/databricks/sdk/service/dashboards.py +++ b/databricks/sdk/service/dashboards.py @@ -5,9 +5,9 @@ import logging from dataclasses import dataclass from enum import Enum -from typing import Dict, Optional +from typing import Dict, Iterator, List, Optional -from ._internal import _enum +from ._internal import _enum, _from_dict, _repeated_dict _LOG = logging.getLogger('databricks.sdk') @@ -47,6 +47,94 @@ def from_dict(cls, d: Dict[str, any]) -> CreateDashboardRequest: warehouse_id=d.get('warehouse_id', None)) +@dataclass +class CreateScheduleRequest: + cron_schedule: CronSchedule + """The cron expression describing the frequency of the periodic refresh for this schedule.""" + + dashboard_id: Optional[str] = None + """UUID identifying the dashboard to which the schedule belongs.""" + + display_name: Optional[str] = None + """The display name for schedule.""" + + pause_status: Optional[SchedulePauseStatus] = None + """The status indicates whether this schedule is paused or not.""" + + def as_dict(self) -> dict: + """Serializes the CreateScheduleRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.cron_schedule: body['cron_schedule'] = self.cron_schedule.as_dict() + if self.dashboard_id is not None: body['dashboard_id'] = self.dashboard_id + if self.display_name is not None: body['display_name'] = self.display_name + if self.pause_status is not None: body['pause_status'] = self.pause_status.value + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> CreateScheduleRequest: + """Deserializes the CreateScheduleRequest from a dictionary.""" + return cls(cron_schedule=_from_dict(d, 'cron_schedule', CronSchedule), + dashboard_id=d.get('dashboard_id', None), + display_name=d.get('display_name', None), + pause_status=_enum(d, 'pause_status', SchedulePauseStatus)) + + +@dataclass +class CreateSubscriptionRequest: + subscriber: Subscriber + """Subscriber details for users and destinations to be added as subscribers to the schedule.""" + + dashboard_id: Optional[str] = None + """UUID identifying the dashboard to which the subscription belongs.""" + + schedule_id: Optional[str] = None + """UUID identifying the schedule to which the subscription belongs.""" + + def as_dict(self) -> dict: + """Serializes the CreateSubscriptionRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.dashboard_id is not None: body['dashboard_id'] = self.dashboard_id + if self.schedule_id is not None: body['schedule_id'] = self.schedule_id + if self.subscriber: body['subscriber'] = self.subscriber.as_dict() + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> CreateSubscriptionRequest: + """Deserializes the CreateSubscriptionRequest from a dictionary.""" + return cls(dashboard_id=d.get('dashboard_id', None), + schedule_id=d.get('schedule_id', None), + subscriber=_from_dict(d, 'subscriber', Subscriber)) + + +@dataclass +class CronSchedule: + quartz_cron_expression: str + """A cron expression using quartz syntax. EX: `0 0 8 * * ?` represents everyday at 8am. See [Cron + Trigger] for details. + + [Cron Trigger]: http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html""" + + timezone_id: str + """A Java timezone id. 
The schedule will be resolved with respect to this timezone. See [Java + TimeZone] for details. + + [Java TimeZone]: https://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html""" + + def as_dict(self) -> dict: + """Serializes the CronSchedule into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.quartz_cron_expression is not None: + body['quartz_cron_expression'] = self.quartz_cron_expression + if self.timezone_id is not None: body['timezone_id'] = self.timezone_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> CronSchedule: + """Deserializes the CronSchedule from a dictionary.""" + return cls(quartz_cron_expression=d.get('quartz_cron_expression', None), + timezone_id=d.get('timezone_id', None)) + + @dataclass class Dashboard: create_time: Optional[str] = None @@ -111,12 +199,112 @@ def from_dict(cls, d: Dict[str, any]) -> Dashboard: warehouse_id=d.get('warehouse_id', None)) +class DashboardView(Enum): + + DASHBOARD_VIEW_BASIC = 'DASHBOARD_VIEW_BASIC' + DASHBOARD_VIEW_FULL = 'DASHBOARD_VIEW_FULL' + + +@dataclass +class DeleteScheduleResponse: + + def as_dict(self) -> dict: + """Serializes the DeleteScheduleResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> DeleteScheduleResponse: + """Deserializes the DeleteScheduleResponse from a dictionary.""" + return cls() + + +@dataclass +class DeleteSubscriptionResponse: + + def as_dict(self) -> dict: + """Serializes the DeleteSubscriptionResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> DeleteSubscriptionResponse: + """Deserializes the DeleteSubscriptionResponse from a dictionary.""" + return cls() + + class LifecycleState(Enum): ACTIVE = 'ACTIVE' TRASHED = 'TRASHED' +@dataclass +class ListDashboardsResponse: + dashboards: Optional[List[Dashboard]] = None + + next_page_token: Optional[str] = None + """A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, + there are no subsequent dashboards.""" + + def as_dict(self) -> dict: + """Serializes the ListDashboardsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.dashboards: body['dashboards'] = [v.as_dict() for v in self.dashboards] + if self.next_page_token is not None: body['next_page_token'] = self.next_page_token + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ListDashboardsResponse: + """Deserializes the ListDashboardsResponse from a dictionary.""" + return cls(dashboards=_repeated_dict(d, 'dashboards', Dashboard), + next_page_token=d.get('next_page_token', None)) + + +@dataclass +class ListSchedulesResponse: + next_page_token: Optional[str] = None + """A token that can be used as a `page_token` in subsequent requests to retrieve the next page of + results. 
If this field is omitted, there are no subsequent schedules.""" + + schedules: Optional[List[Schedule]] = None + + def as_dict(self) -> dict: + """Serializes the ListSchedulesResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.next_page_token is not None: body['next_page_token'] = self.next_page_token + if self.schedules: body['schedules'] = [v.as_dict() for v in self.schedules] + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ListSchedulesResponse: + """Deserializes the ListSchedulesResponse from a dictionary.""" + return cls(next_page_token=d.get('next_page_token', None), + schedules=_repeated_dict(d, 'schedules', Schedule)) + + +@dataclass +class ListSubscriptionsResponse: + next_page_token: Optional[str] = None + """A token that can be used as a `page_token` in subsequent requests to retrieve the next page of + results. If this field is omitted, there are no subsequent subscriptions.""" + + subscriptions: Optional[List[Subscription]] = None + + def as_dict(self) -> dict: + """Serializes the ListSubscriptionsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.next_page_token is not None: body['next_page_token'] = self.next_page_token + if self.subscriptions: body['subscriptions'] = [v.as_dict() for v in self.subscriptions] + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ListSubscriptionsResponse: + """Deserializes the ListSubscriptionsResponse from a dictionary.""" + return cls(next_page_token=d.get('next_page_token', None), + subscriptions=_repeated_dict(d, 'subscriptions', Subscription)) + + @dataclass class MigrateDashboardRequest: source_dashboard_id: str @@ -204,6 +392,179 @@ def from_dict(cls, d: Dict[str, any]) -> PublishedDashboard: warehouse_id=d.get('warehouse_id', None)) +@dataclass +class Schedule: + cron_schedule: CronSchedule + """The cron expression describing the frequency of the periodic refresh for this schedule.""" + + create_time: Optional[str] = None + """A timestamp indicating when the schedule was created.""" + + dashboard_id: Optional[str] = None + """UUID identifying the dashboard to which the schedule belongs.""" + + display_name: Optional[str] = None + """The display name for schedule.""" + + etag: Optional[str] = None + """The etag for the schedule. 
Must be left empty on create, must be provided on updates to ensure + that the schedule has not been modified since the last read, and can be optionally provided on + delete.""" + + pause_status: Optional[SchedulePauseStatus] = None + """The status indicates whether this schedule is paused or not.""" + + schedule_id: Optional[str] = None + """UUID identifying the schedule.""" + + update_time: Optional[str] = None + """A timestamp indicating when the schedule was last updated.""" + + def as_dict(self) -> dict: + """Serializes the Schedule into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.create_time is not None: body['create_time'] = self.create_time + if self.cron_schedule: body['cron_schedule'] = self.cron_schedule.as_dict() + if self.dashboard_id is not None: body['dashboard_id'] = self.dashboard_id + if self.display_name is not None: body['display_name'] = self.display_name + if self.etag is not None: body['etag'] = self.etag + if self.pause_status is not None: body['pause_status'] = self.pause_status.value + if self.schedule_id is not None: body['schedule_id'] = self.schedule_id + if self.update_time is not None: body['update_time'] = self.update_time + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> Schedule: + """Deserializes the Schedule from a dictionary.""" + return cls(create_time=d.get('create_time', None), + cron_schedule=_from_dict(d, 'cron_schedule', CronSchedule), + dashboard_id=d.get('dashboard_id', None), + display_name=d.get('display_name', None), + etag=d.get('etag', None), + pause_status=_enum(d, 'pause_status', SchedulePauseStatus), + schedule_id=d.get('schedule_id', None), + update_time=d.get('update_time', None)) + + +class SchedulePauseStatus(Enum): + + PAUSED = 'PAUSED' + UNPAUSED = 'UNPAUSED' + + +@dataclass +class Subscriber: + destination_subscriber: Optional[SubscriptionSubscriberDestination] = None + """The destination to receive the subscription email. This parameter is mutually exclusive with + `user_subscriber`.""" + + user_subscriber: Optional[SubscriptionSubscriberUser] = None + """The user to receive the subscription email. This parameter is mutually exclusive with + `destination_subscriber`.""" + + def as_dict(self) -> dict: + """Serializes the Subscriber into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.destination_subscriber: body['destination_subscriber'] = self.destination_subscriber.as_dict() + if self.user_subscriber: body['user_subscriber'] = self.user_subscriber.as_dict() + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> Subscriber: + """Deserializes the Subscriber from a dictionary.""" + return cls(destination_subscriber=_from_dict(d, 'destination_subscriber', + SubscriptionSubscriberDestination), + user_subscriber=_from_dict(d, 'user_subscriber', SubscriptionSubscriberUser)) + + +@dataclass +class Subscription: + subscriber: Subscriber + """Subscriber details for users and destinations to be added as subscribers to the schedule.""" + + create_time: Optional[str] = None + """A timestamp indicating when the subscription was created.""" + + created_by_user_id: Optional[int] = None + """UserId of the user who adds subscribers (users or notification destinations) to the dashboard's + schedule.""" + + dashboard_id: Optional[str] = None + """UUID identifying the dashboard to which the subscription belongs.""" + + etag: Optional[str] = None + """The etag for the subscription. 
Must be left empty on create, can be optionally provided on + delete to ensure that the subscription has not been deleted since the last read.""" + + schedule_id: Optional[str] = None + """UUID identifying the schedule to which the subscription belongs.""" + + subscription_id: Optional[str] = None + """UUID identifying the subscription.""" + + update_time: Optional[str] = None + """A timestamp indicating when the subscription was last updated.""" + + def as_dict(self) -> dict: + """Serializes the Subscription into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.create_time is not None: body['create_time'] = self.create_time + if self.created_by_user_id is not None: body['created_by_user_id'] = self.created_by_user_id + if self.dashboard_id is not None: body['dashboard_id'] = self.dashboard_id + if self.etag is not None: body['etag'] = self.etag + if self.schedule_id is not None: body['schedule_id'] = self.schedule_id + if self.subscriber: body['subscriber'] = self.subscriber.as_dict() + if self.subscription_id is not None: body['subscription_id'] = self.subscription_id + if self.update_time is not None: body['update_time'] = self.update_time + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> Subscription: + """Deserializes the Subscription from a dictionary.""" + return cls(create_time=d.get('create_time', None), + created_by_user_id=d.get('created_by_user_id', None), + dashboard_id=d.get('dashboard_id', None), + etag=d.get('etag', None), + schedule_id=d.get('schedule_id', None), + subscriber=_from_dict(d, 'subscriber', Subscriber), + subscription_id=d.get('subscription_id', None), + update_time=d.get('update_time', None)) + + +@dataclass +class SubscriptionSubscriberDestination: + destination_id: str + """The canonical identifier of the destination to receive email notification.""" + + def as_dict(self) -> dict: + """Serializes the SubscriptionSubscriberDestination into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.destination_id is not None: body['destination_id'] = self.destination_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> SubscriptionSubscriberDestination: + """Deserializes the SubscriptionSubscriberDestination from a dictionary.""" + return cls(destination_id=d.get('destination_id', None)) + + +@dataclass +class SubscriptionSubscriberUser: + user_id: int + """UserId of the subscriber.""" + + def as_dict(self) -> dict: + """Serializes the SubscriptionSubscriberUser into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.user_id is not None: body['user_id'] = self.user_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> SubscriptionSubscriberUser: + """Deserializes the SubscriptionSubscriberUser from a dictionary.""" + return cls(user_id=d.get('user_id', None)) + + @dataclass class TrashDashboardResponse: @@ -270,6 +631,50 @@ def from_dict(cls, d: Dict[str, any]) -> UpdateDashboardRequest: warehouse_id=d.get('warehouse_id', None)) +@dataclass +class UpdateScheduleRequest: + cron_schedule: CronSchedule + """The cron expression describing the frequency of the periodic refresh for this schedule.""" + + dashboard_id: Optional[str] = None + """UUID identifying the dashboard to which the schedule belongs.""" + + display_name: Optional[str] = None + """The display name for schedule.""" + + etag: Optional[str] = None + """The etag for the schedule. 
Must be left empty on create, must be provided on updates to ensure + that the schedule has not been modified since the last read, and can be optionally provided on + delete.""" + + pause_status: Optional[SchedulePauseStatus] = None + """The status indicates whether this schedule is paused or not.""" + + schedule_id: Optional[str] = None + """UUID identifying the schedule.""" + + def as_dict(self) -> dict: + """Serializes the UpdateScheduleRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.cron_schedule: body['cron_schedule'] = self.cron_schedule.as_dict() + if self.dashboard_id is not None: body['dashboard_id'] = self.dashboard_id + if self.display_name is not None: body['display_name'] = self.display_name + if self.etag is not None: body['etag'] = self.etag + if self.pause_status is not None: body['pause_status'] = self.pause_status.value + if self.schedule_id is not None: body['schedule_id'] = self.schedule_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> UpdateScheduleRequest: + """Deserializes the UpdateScheduleRequest from a dictionary.""" + return cls(cron_schedule=_from_dict(d, 'cron_schedule', CronSchedule), + dashboard_id=d.get('dashboard_id', None), + display_name=d.get('display_name', None), + etag=d.get('etag', None), + pause_status=_enum(d, 'pause_status', SchedulePauseStatus), + schedule_id=d.get('schedule_id', None)) + + class LakeviewAPI: """These APIs provide specific management operations for Lakeview dashboards. Generic resource management can be done with Workspace API (import, export, get-status, list, delete).""" @@ -309,6 +714,115 @@ def create(self, res = self._api.do('POST', '/api/2.0/lakeview/dashboards', body=body, headers=headers) return Dashboard.from_dict(res) + def create_schedule(self, + dashboard_id: str, + cron_schedule: CronSchedule, + *, + display_name: Optional[str] = None, + pause_status: Optional[SchedulePauseStatus] = None) -> Schedule: + """Create dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param cron_schedule: :class:`CronSchedule` + The cron expression describing the frequency of the periodic refresh for this schedule. + :param display_name: str (optional) + The display name for schedule. + :param pause_status: :class:`SchedulePauseStatus` (optional) + The status indicates whether this schedule is paused or not. + + :returns: :class:`Schedule` + """ + body = {} + if cron_schedule is not None: body['cron_schedule'] = cron_schedule.as_dict() + if display_name is not None: body['display_name'] = display_name + if pause_status is not None: body['pause_status'] = pause_status.value + headers = {'Accept': 'application/json', 'Content-Type': 'application/json', } + + res = self._api.do('POST', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules', + body=body, + headers=headers) + return Schedule.from_dict(res) + + def create_subscription(self, dashboard_id: str, schedule_id: str, + subscriber: Subscriber) -> Subscription: + """Create schedule subscription. + + :param dashboard_id: str + UUID identifying the dashboard to which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule to which the subscription belongs. + :param subscriber: :class:`Subscriber` + Subscriber details for users and destinations to be added as subscribers to the schedule. 
+ + :returns: :class:`Subscription` + """ + body = {} + if subscriber is not None: body['subscriber'] = subscriber.as_dict() + headers = {'Accept': 'application/json', 'Content-Type': 'application/json', } + + res = self._api.do( + 'POST', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules/{schedule_id}/subscriptions', + body=body, + headers=headers) + return Subscription.from_dict(res) + + def delete_schedule(self, dashboard_id: str, schedule_id: str, *, etag: Optional[str] = None): + """Delete dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param schedule_id: str + UUID identifying the schedule. + :param etag: str (optional) + The etag for the schedule. Optionally, it can be provided to verify that the schedule has not been + modified from its last retrieval. + + + """ + + query = {} + if etag is not None: query['etag'] = etag + headers = {'Accept': 'application/json', } + + self._api.do('DELETE', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules/{schedule_id}', + query=query, + headers=headers) + + def delete_subscription(self, + dashboard_id: str, + schedule_id: str, + subscription_id: str, + *, + etag: Optional[str] = None): + """Delete schedule subscription. + + :param dashboard_id: str + UUID identifying the dashboard which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule which the subscription belongs. + :param subscription_id: str + UUID identifying the subscription. + :param etag: str (optional) + The etag for the subscription. Can be optionally provided to ensure that the subscription has not + been modified since the last read. + + + """ + + query = {} + if etag is not None: query['etag'] = etag + headers = {'Accept': 'application/json', } + + self._api.do( + 'DELETE', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules/{schedule_id}/subscriptions/{subscription_id}', + query=query, + headers=headers) + def get(self, dashboard_id: str) -> Dashboard: """Get dashboard. @@ -341,6 +855,158 @@ def get_published(self, dashboard_id: str) -> PublishedDashboard: res = self._api.do('GET', f'/api/2.0/lakeview/dashboards/{dashboard_id}/published', headers=headers) return PublishedDashboard.from_dict(res) + def get_schedule(self, dashboard_id: str, schedule_id: str) -> Schedule: + """Get dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param schedule_id: str + UUID identifying the schedule. + + :returns: :class:`Schedule` + """ + + headers = {'Accept': 'application/json', } + + res = self._api.do('GET', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules/{schedule_id}', + headers=headers) + return Schedule.from_dict(res) + + def get_subscription(self, dashboard_id: str, schedule_id: str, subscription_id: str) -> Subscription: + """Get schedule subscription. + + :param dashboard_id: str + UUID identifying the dashboard which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule which the subscription belongs. + :param subscription_id: str + UUID identifying the subscription. 
+ + :returns: :class:`Subscription` + """ + + headers = {'Accept': 'application/json', } + + res = self._api.do( + 'GET', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules/{schedule_id}/subscriptions/{subscription_id}', + headers=headers) + return Subscription.from_dict(res) + + def list(self, + *, + page_size: Optional[int] = None, + page_token: Optional[str] = None, + show_trashed: Optional[bool] = None, + view: Optional[DashboardView] = None) -> Iterator[Dashboard]: + """List dashboards. + + :param page_size: int (optional) + The number of dashboards to return per page. + :param page_token: str (optional) + A page token, received from a previous `ListDashboards` call. This token can be used to retrieve the + subsequent page. + :param show_trashed: bool (optional) + The flag to include dashboards located in the trash. If unspecified, only active dashboards will be + returned. + :param view: :class:`DashboardView` (optional) + Indicates whether to include all metadata from the dashboard in the response. If unset, the response + defaults to `DASHBOARD_VIEW_BASIC` which only includes summary metadata from the dashboard. + + :returns: Iterator over :class:`Dashboard` + """ + + query = {} + if page_size is not None: query['page_size'] = page_size + if page_token is not None: query['page_token'] = page_token + if show_trashed is not None: query['show_trashed'] = show_trashed + if view is not None: query['view'] = view.value + headers = {'Accept': 'application/json', } + + while True: + json = self._api.do('GET', '/api/2.0/lakeview/dashboards', query=query, headers=headers) + if 'dashboards' in json: + for v in json['dashboards']: + yield Dashboard.from_dict(v) + if 'next_page_token' not in json or not json['next_page_token']: + return + query['page_token'] = json['next_page_token'] + + def list_schedules(self, + dashboard_id: str, + *, + page_size: Optional[int] = None, + page_token: Optional[str] = None) -> Iterator[Schedule]: + """List dashboard schedules. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param page_size: int (optional) + The number of schedules to return per page. + :param page_token: str (optional) + A page token, received from a previous `ListSchedules` call. Use this to retrieve the subsequent + page. + + :returns: Iterator over :class:`Schedule` + """ + + query = {} + if page_size is not None: query['page_size'] = page_size + if page_token is not None: query['page_token'] = page_token + headers = {'Accept': 'application/json', } + + while True: + json = self._api.do('GET', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules', + query=query, + headers=headers) + if 'schedules' in json: + for v in json['schedules']: + yield Schedule.from_dict(v) + if 'next_page_token' not in json or not json['next_page_token']: + return + query['page_token'] = json['next_page_token'] + + def list_subscriptions(self, + dashboard_id: str, + schedule_id: str, + *, + page_size: Optional[int] = None, + page_token: Optional[str] = None) -> Iterator[Subscription]: + """List schedule subscriptions. + + :param dashboard_id: str + UUID identifying the dashboard to which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule to which the subscription belongs. + :param page_size: int (optional) + The number of subscriptions to return per page. + :param page_token: str (optional) + A page token, received from a previous `ListSubscriptions` call. Use this to retrieve the subsequent + page. 
+ + :returns: Iterator over :class:`Subscription` + """ + + query = {} + if page_size is not None: query['page_size'] = page_size + if page_token is not None: query['page_token'] = page_token + headers = {'Accept': 'application/json', } + + while True: + json = self._api.do( + 'GET', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules/{schedule_id}/subscriptions', + query=query, + headers=headers) + if 'subscriptions' in json: + for v in json['subscriptions']: + yield Subscription.from_dict(v) + if 'next_page_token' not in json or not json['next_page_token']: + return + query['page_token'] = json['next_page_token'] + def migrate(self, source_dashboard_id: str, *, @@ -465,3 +1131,42 @@ def update(self, body=body, headers=headers) return Dashboard.from_dict(res) + + def update_schedule(self, + dashboard_id: str, + schedule_id: str, + cron_schedule: CronSchedule, + *, + display_name: Optional[str] = None, + etag: Optional[str] = None, + pause_status: Optional[SchedulePauseStatus] = None) -> Schedule: + """Update dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param schedule_id: str + UUID identifying the schedule. + :param cron_schedule: :class:`CronSchedule` + The cron expression describing the frequency of the periodic refresh for this schedule. + :param display_name: str (optional) + The display name for schedule. + :param etag: str (optional) + The etag for the schedule. Must be left empty on create, must be provided on updates to ensure that + the schedule has not been modified since the last read, and can be optionally provided on delete. + :param pause_status: :class:`SchedulePauseStatus` (optional) + The status indicates whether this schedule is paused or not. + + :returns: :class:`Schedule` + """ + body = {} + if cron_schedule is not None: body['cron_schedule'] = cron_schedule.as_dict() + if display_name is not None: body['display_name'] = display_name + if etag is not None: body['etag'] = etag + if pause_status is not None: body['pause_status'] = pause_status.value + headers = {'Accept': 'application/json', 'Content-Type': 'application/json', } + + res = self._api.do('PUT', + f'/api/2.0/lakeview/dashboards/{dashboard_id}/schedules/{schedule_id}', + body=body, + headers=headers) + return Schedule.from_dict(res) diff --git a/databricks/sdk/service/jobs.py b/databricks/sdk/service/jobs.py index fb700bb2..f96d7dd7 100755 --- a/databricks/sdk/service/jobs.py +++ b/databricks/sdk/service/jobs.py @@ -1321,6 +1321,13 @@ class JobEmailNotifications: """A list of email addresses to be notified when a run begins. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.""" + on_streaming_backlog_exceeded: Optional[List[str]] = None + """A list of email addresses to notify when any streaming backlog thresholds are exceeded for any + stream. Streaming backlog thresholds can be set in the `health` field using the following + metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or + `STREAMING_BACKLOG_FILES`. Alerting is based on the 10-minute average of these metrics. If the + issue persists, notifications are resent every 30 minutes.""" + on_success: Optional[List[str]] = None """A list of email addresses to be notified when a run successfully completes. 
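# --- Editor's illustrative sketch, not part of the generated patch: a minimal example of the
# new Lakeview schedule/subscription methods shown above. It assumes a configured
# WorkspaceClient; the dashboard UUID is a placeholder, and the CronSchedule/Subscriber field
# names used below (quartz_cron_expression, timezone_id, user_subscriber) are assumptions
# about those dashboards dataclasses.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.dashboards import (CronSchedule, SchedulePauseStatus, Subscriber,
                                               SubscriptionSubscriberUser)

w = WorkspaceClient()
dashboard_id = "<dashboard-uuid>"  # placeholder

# Create a daily 08:00 UTC refresh schedule for the dashboard.
schedule = w.lakeview.create_schedule(
    dashboard_id=dashboard_id,
    cron_schedule=CronSchedule(quartz_cron_expression="0 0 8 * * ?", timezone_id="UTC"),
    display_name="Daily refresh",
    pause_status=SchedulePauseStatus.UNPAUSED)

# Subscribe a workspace user to the schedule just created.
w.lakeview.create_subscription(
    dashboard_id=dashboard_id,
    schedule_id=schedule.schedule_id,
    subscriber=Subscriber(user_subscriber=SubscriptionSubscriberUser(user_id=1234567890)))

# Enumerate all schedules defined on the dashboard.
for s in w.lakeview.list_schedules(dashboard_id=dashboard_id):
    print(s.schedule_id, s.display_name)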
A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` @@ -1338,6 +1345,8 @@ def as_dict(self) -> dict: ] if self.on_failure: body['on_failure'] = [v for v in self.on_failure] if self.on_start: body['on_start'] = [v for v in self.on_start] + if self.on_streaming_backlog_exceeded: + body['on_streaming_backlog_exceeded'] = [v for v in self.on_streaming_backlog_exceeded] if self.on_success: body['on_success'] = [v for v in self.on_success] return body @@ -1349,6 +1358,7 @@ def from_dict(cls, d: Dict[str, any]) -> JobEmailNotifications: None), on_failure=d.get('on_failure', None), on_start=d.get('on_start', None), + on_streaming_backlog_exceeded=d.get('on_streaming_backlog_exceeded', None), on_success=d.get('on_success', None)) @@ -1358,9 +1368,8 @@ class JobEnvironment: """The key of an environment. It has to be unique within a job.""" spec: Optional[compute.Environment] = None - """The a environment entity used to preserve serverless environment side panel and jobs' - environment for non-notebook task. In this minimal environment spec, only pip dependencies are - supported. Next ID: 5""" + """The environment entity used to preserve serverless environment side panel and jobs' environment + for non-notebook task. In this minimal environment spec, only pip dependencies are supported.""" def as_dict(self) -> dict: """Serializes the JobEnvironment into a dictionary suitable for use as a JSON request body.""" @@ -1789,9 +1798,21 @@ class JobSourceDirtyState(Enum): class JobsHealthMetric(Enum): - """Specifies the health metric that is being evaluated for a particular health rule.""" + """Specifies the health metric that is being evaluated for a particular health rule. + + * `RUN_DURATION_SECONDS`: Expected total time for a run in seconds. * `STREAMING_BACKLOG_BYTES`: + An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric + is in Private Preview. * `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag + across all streams. This metric is in Private Preview. * `STREAMING_BACKLOG_SECONDS`: An + estimate of the maximum consumer delay across all streams. This metric is in Private Preview. * + `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all + streams. This metric is in Private Preview.""" RUN_DURATION_SECONDS = 'RUN_DURATION_SECONDS' + STREAMING_BACKLOG_BYTES = 'STREAMING_BACKLOG_BYTES' + STREAMING_BACKLOG_FILES = 'STREAMING_BACKLOG_FILES' + STREAMING_BACKLOG_RECORDS = 'STREAMING_BACKLOG_RECORDS' + STREAMING_BACKLOG_SECONDS = 'STREAMING_BACKLOG_SECONDS' class JobsHealthOperator(Enum): @@ -1803,7 +1824,15 @@ class JobsHealthOperator(Enum): @dataclass class JobsHealthRule: metric: JobsHealthMetric - """Specifies the health metric that is being evaluated for a particular health rule.""" + """Specifies the health metric that is being evaluated for a particular health rule. + + * `RUN_DURATION_SECONDS`: Expected total time for a run in seconds. * `STREAMING_BACKLOG_BYTES`: + An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric + is in Private Preview. * `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag + across all streams. This metric is in Private Preview. * `STREAMING_BACKLOG_SECONDS`: An + estimate of the maximum consumer delay across all streams. This metric is in Private Preview. 
* + `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all + streams. This metric is in Private Preview.""" op: JobsHealthOperator """Specifies the operator used to compare the health metric value with the specified threshold.""" @@ -2000,6 +2029,36 @@ class PauseStatus(Enum): UNPAUSED = 'UNPAUSED' +@dataclass +class PeriodicTriggerConfiguration: + interval: int + """The interval at which the trigger should run.""" + + unit: PeriodicTriggerConfigurationTimeUnit + """The unit of time for the interval.""" + + def as_dict(self) -> dict: + """Serializes the PeriodicTriggerConfiguration into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.interval is not None: body['interval'] = self.interval + if self.unit is not None: body['unit'] = self.unit.value + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> PeriodicTriggerConfiguration: + """Deserializes the PeriodicTriggerConfiguration from a dictionary.""" + return cls(interval=d.get('interval', None), + unit=_enum(d, 'unit', PeriodicTriggerConfigurationTimeUnit)) + + +class PeriodicTriggerConfigurationTimeUnit(Enum): + + DAYS = 'DAYS' + HOURS = 'HOURS' + TIME_UNIT_UNSPECIFIED = 'TIME_UNIT_UNSPECIFIED' + WEEKS = 'WEEKS' + + @dataclass class PipelineParams: full_refresh: Optional[bool] = None @@ -3396,6 +3455,10 @@ class RunTask: """The time at which this run ended in epoch milliseconds (milliseconds since 1/1/1970 UTC). This field is set to 0 if the job is still running.""" + environment_key: Optional[str] = None + """The key that references an environment spec in a job. This field is required for Python script, + Python wheel and dbt tasks when using serverless compute.""" + execution_duration: Optional[int] = None """The time in milliseconds it took to execute the commands in the JAR or notebook until they completed, failed, timed out, were cancelled, or encountered an unexpected error. The duration @@ -3527,6 +3590,7 @@ def as_dict(self) -> dict: if self.description is not None: body['description'] = self.description if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.end_time is not None: body['end_time'] = self.end_time + if self.environment_key is not None: body['environment_key'] = self.environment_key if self.execution_duration is not None: body['execution_duration'] = self.execution_duration if self.existing_cluster_id is not None: body['existing_cluster_id'] = self.existing_cluster_id if self.for_each_task: body['for_each_task'] = self.for_each_task.as_dict() @@ -3569,6 +3633,7 @@ def from_dict(cls, d: Dict[str, any]) -> RunTask: description=d.get('description', None), email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), end_time=d.get('end_time', None), + environment_key=d.get('environment_key', None), execution_duration=d.get('execution_duration', None), existing_cluster_id=d.get('existing_cluster_id', None), for_each_task=_from_dict(d, 'for_each_task', RunForEachTask), @@ -4126,18 +4191,12 @@ class SubmitRun: access_control_list: Optional[List[iam.AccessControlRequest]] = None """List of permissions to set on the job.""" - condition_task: Optional[ConditionTask] = None - """If condition_task, specifies a condition with an outcome that can be used to control the - execution of other tasks. 
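# --- Editor's illustrative sketch, not part of the generated patch: wiring the new
# STREAMING_BACKLOG_* health metrics and the periodic trigger added above into job settings.
# JobsHealthRules(rules=...), JobsHealthRule(value=...) and JobsHealthOperator.GREATER_THAN are
# assumptions about the existing jobs dataclasses; the thresholds are placeholders.
from databricks.sdk.service.jobs import (JobsHealthMetric, JobsHealthOperator, JobsHealthRule,
                                         JobsHealthRules, PeriodicTriggerConfiguration,
                                         PeriodicTriggerConfigurationTimeUnit, TriggerSettings)

# Alert when the estimated streaming consumer delay exceeds five minutes (10-minute average).
health = JobsHealthRules(rules=[
    JobsHealthRule(metric=JobsHealthMetric.STREAMING_BACKLOG_SECONDS,
                   op=JobsHealthOperator.GREATER_THAN,
                   value=300)
])

# Run the job every four hours with the new periodic trigger instead of a cron schedule.
trigger = TriggerSettings(
    periodic=PeriodicTriggerConfiguration(interval=4,
                                          unit=PeriodicTriggerConfigurationTimeUnit.HOURS))

# Both objects could then be passed as the `health` and `trigger` arguments of a job
# create/update call.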
Does not require a cluster to execute and does not support retries or - notifications.""" - - dbt_task: Optional[DbtTask] = None - """If dbt_task, indicates that this must execute a dbt task. It requires both Databricks SQL and - the ability to use a serverless or a pro SQL warehouse.""" - email_notifications: Optional[JobEmailNotifications] = None """An optional set of email addresses notified when the run begins or completes.""" + environments: Optional[List[JobEnvironment]] = None + """A list of task execution environment specifications that can be referenced by tasks of this run.""" + git_source: Optional[GitSource] = None """An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks. @@ -4165,20 +4224,10 @@ class SubmitRun: [How to ensure idempotency for jobs]: https://kb.databricks.com/jobs/jobs-idempotency.html""" - notebook_task: Optional[NotebookTask] = None - """If notebook_task, indicates that this task must run a notebook. This field may not be specified - in conjunction with spark_jar_task.""" - notification_settings: Optional[JobNotificationSettings] = None """Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this run.""" - pipeline_task: Optional[PipelineTask] = None - """If pipeline_task, indicates that this task must execute a Pipeline.""" - - python_wheel_task: Optional[PythonWheelTask] = None - """If python_wheel_task, indicates that this job must execute a PythonWheel.""" - queue: Optional[QueueSettings] = None """The queue settings of the one-time run.""" @@ -4186,38 +4235,9 @@ class SubmitRun: """Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who submits the request.""" - run_job_task: Optional[RunJobTask] = None - """If run_job_task, indicates that this task must execute another job.""" - run_name: Optional[str] = None """An optional name for the run. The default value is `Untitled`.""" - spark_jar_task: Optional[SparkJarTask] = None - """If spark_jar_task, indicates that this task must run a JAR.""" - - spark_python_task: Optional[SparkPythonTask] = None - """If spark_python_task, indicates that this task must run a Python file.""" - - spark_submit_task: Optional[SparkSubmitTask] = None - """If `spark_submit_task`, indicates that this task must be launched by the spark submit script. - This task can run only on new clusters. - - In the `new_cluster` specification, `libraries` and `spark_conf` are not supported. Instead, use - `--jars` and `--py-files` to add Java and Python libraries and `--conf` to set the Spark - configurations. - - `master`, `deploy-mode`, and `executor-cores` are automatically configured by Databricks; you - _cannot_ specify them in parameters. - - By default, the Spark submit job uses all available memory (excluding reserved memory for - Databricks services). You can set `--driver-memory`, and `--executor-memory` to a smaller value - to leave some room for off-heap usage. 
- - The `--jars`, `--py-files`, `--files` arguments support DBFS and S3 paths.""" - - sql_task: Optional[SqlTask] = None - """If sql_task, indicates that this job must execute a SQL task.""" - tasks: Optional[List[SubmitTask]] = None timeout_seconds: Optional[int] = None @@ -4231,24 +4251,15 @@ def as_dict(self) -> dict: body = {} if self.access_control_list: body['access_control_list'] = [v.as_dict() for v in self.access_control_list] - if self.condition_task: body['condition_task'] = self.condition_task.as_dict() - if self.dbt_task: body['dbt_task'] = self.dbt_task.as_dict() if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() + if self.environments: body['environments'] = [v.as_dict() for v in self.environments] if self.git_source: body['git_source'] = self.git_source.as_dict() if self.health: body['health'] = self.health.as_dict() if self.idempotency_token is not None: body['idempotency_token'] = self.idempotency_token - if self.notebook_task: body['notebook_task'] = self.notebook_task.as_dict() if self.notification_settings: body['notification_settings'] = self.notification_settings.as_dict() - if self.pipeline_task: body['pipeline_task'] = self.pipeline_task.as_dict() - if self.python_wheel_task: body['python_wheel_task'] = self.python_wheel_task.as_dict() if self.queue: body['queue'] = self.queue.as_dict() if self.run_as: body['run_as'] = self.run_as.as_dict() - if self.run_job_task: body['run_job_task'] = self.run_job_task.as_dict() if self.run_name is not None: body['run_name'] = self.run_name - if self.spark_jar_task: body['spark_jar_task'] = self.spark_jar_task.as_dict() - if self.spark_python_task: body['spark_python_task'] = self.spark_python_task.as_dict() - if self.spark_submit_task: body['spark_submit_task'] = self.spark_submit_task.as_dict() - if self.sql_task: body['sql_task'] = self.sql_task.as_dict() if self.tasks: body['tasks'] = [v.as_dict() for v in self.tasks] if self.timeout_seconds is not None: body['timeout_seconds'] = self.timeout_seconds if self.webhook_notifications: body['webhook_notifications'] = self.webhook_notifications.as_dict() @@ -4258,24 +4269,15 @@ def as_dict(self) -> dict: def from_dict(cls, d: Dict[str, any]) -> SubmitRun: """Deserializes the SubmitRun from a dictionary.""" return cls(access_control_list=_repeated_dict(d, 'access_control_list', iam.AccessControlRequest), - condition_task=_from_dict(d, 'condition_task', ConditionTask), - dbt_task=_from_dict(d, 'dbt_task', DbtTask), email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), + environments=_repeated_dict(d, 'environments', JobEnvironment), git_source=_from_dict(d, 'git_source', GitSource), health=_from_dict(d, 'health', JobsHealthRules), idempotency_token=d.get('idempotency_token', None), - notebook_task=_from_dict(d, 'notebook_task', NotebookTask), notification_settings=_from_dict(d, 'notification_settings', JobNotificationSettings), - pipeline_task=_from_dict(d, 'pipeline_task', PipelineTask), - python_wheel_task=_from_dict(d, 'python_wheel_task', PythonWheelTask), queue=_from_dict(d, 'queue', QueueSettings), run_as=_from_dict(d, 'run_as', JobRunAs), - run_job_task=_from_dict(d, 'run_job_task', RunJobTask), run_name=d.get('run_name', None), - spark_jar_task=_from_dict(d, 'spark_jar_task', SparkJarTask), - spark_python_task=_from_dict(d, 'spark_python_task', SparkPythonTask), - spark_submit_task=_from_dict(d, 'spark_submit_task', SparkSubmitTask), - sql_task=_from_dict(d, 'sql_task', SqlTask), tasks=_repeated_dict(d, 
'tasks', SubmitTask), timeout_seconds=d.get('timeout_seconds', None), webhook_notifications=_from_dict(d, 'webhook_notifications', WebhookNotifications)) @@ -4312,6 +4314,10 @@ class SubmitTask: execution of other tasks. Does not require a cluster to execute and does not support retries or notifications.""" + dbt_task: Optional[DbtTask] = None + """If dbt_task, indicates that this must execute a dbt task. It requires both Databricks SQL and + the ability to use a serverless or a pro SQL warehouse.""" + depends_on: Optional[List[TaskDependency]] = None """An optional array of objects specifying the dependency graph of the task. All tasks specified in this field must complete successfully before executing this task. The key is `task_key`, and the @@ -4324,6 +4330,10 @@ class SubmitTask: """An optional set of email addresses notified when the task run begins or completes. The default behavior is to not send any emails.""" + environment_key: Optional[str] = None + """The key that references an environment spec in a job. This field is required for Python script, + Python wheel and dbt tasks when using serverless compute.""" + existing_cluster_id: Optional[str] = None """If existing_cluster_id, the ID of an existing cluster that is used for all runs. When running jobs or tasks on an existing cluster, you may need to manually restart the cluster if it stops @@ -4402,9 +4412,11 @@ def as_dict(self) -> dict: """Serializes the SubmitTask into a dictionary suitable for use as a JSON request body.""" body = {} if self.condition_task: body['condition_task'] = self.condition_task.as_dict() + if self.dbt_task: body['dbt_task'] = self.dbt_task.as_dict() if self.depends_on: body['depends_on'] = [v.as_dict() for v in self.depends_on] if self.description is not None: body['description'] = self.description if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() + if self.environment_key is not None: body['environment_key'] = self.environment_key if self.existing_cluster_id is not None: body['existing_cluster_id'] = self.existing_cluster_id if self.for_each_task: body['for_each_task'] = self.for_each_task.as_dict() if self.health: body['health'] = self.health.as_dict() @@ -4429,9 +4441,11 @@ def as_dict(self) -> dict: def from_dict(cls, d: Dict[str, any]) -> SubmitTask: """Deserializes the SubmitTask from a dictionary.""" return cls(condition_task=_from_dict(d, 'condition_task', ConditionTask), + dbt_task=_from_dict(d, 'dbt_task', DbtTask), depends_on=_repeated_dict(d, 'depends_on', TaskDependency), description=d.get('description', None), email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), + environment_key=d.get('environment_key', None), existing_cluster_id=d.get('existing_cluster_id', None), for_each_task=_from_dict(d, 'for_each_task', ForEachTask), health=_from_dict(d, 'health', JobsHealthRules), @@ -4734,6 +4748,13 @@ class TaskEmailNotifications: """A list of email addresses to be notified when a run begins. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.""" + on_streaming_backlog_exceeded: Optional[List[str]] = None + """A list of email addresses to notify when any streaming backlog thresholds are exceeded for any + stream. Streaming backlog thresholds can be set in the `health` field using the following + metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or + `STREAMING_BACKLOG_FILES`. 
Alerting is based on the 10-minute average of these metrics. If the + issue persists, notifications are resent every 30 minutes.""" + on_success: Optional[List[str]] = None """A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` @@ -4751,6 +4772,8 @@ def as_dict(self) -> dict: ] if self.on_failure: body['on_failure'] = [v for v in self.on_failure] if self.on_start: body['on_start'] = [v for v in self.on_start] + if self.on_streaming_backlog_exceeded: + body['on_streaming_backlog_exceeded'] = [v for v in self.on_streaming_backlog_exceeded] if self.on_success: body['on_success'] = [v for v in self.on_success] return body @@ -4762,6 +4785,7 @@ def from_dict(cls, d: Dict[str, any]) -> TaskEmailNotifications: None), on_failure=d.get('on_failure', None), on_start=d.get('on_start', None), + on_streaming_backlog_exceeded=d.get('on_streaming_backlog_exceeded', None), on_success=d.get('on_success', None)) @@ -4825,6 +4849,9 @@ class TriggerSettings: pause_status: Optional[PauseStatus] = None """Whether this trigger is paused or not.""" + periodic: Optional[PeriodicTriggerConfiguration] = None + """Periodic trigger settings.""" + table: Optional[TableUpdateTriggerConfiguration] = None """Old table trigger settings name. Deprecated in favor of `table_update`.""" @@ -4835,6 +4862,7 @@ def as_dict(self) -> dict: body = {} if self.file_arrival: body['file_arrival'] = self.file_arrival.as_dict() if self.pause_status is not None: body['pause_status'] = self.pause_status.value + if self.periodic: body['periodic'] = self.periodic.as_dict() if self.table: body['table'] = self.table.as_dict() if self.table_update: body['table_update'] = self.table_update.as_dict() return body @@ -4844,6 +4872,7 @@ def from_dict(cls, d: Dict[str, any]) -> TriggerSettings: """Deserializes the TriggerSettings from a dictionary.""" return cls(file_arrival=_from_dict(d, 'file_arrival', FileArrivalTriggerConfiguration), pause_status=_enum(d, 'pause_status', PauseStatus), + periodic=_from_dict(d, 'periodic', PeriodicTriggerConfiguration), table=_from_dict(d, 'table', TableUpdateTriggerConfiguration), table_update=_from_dict(d, 'table_update', TableUpdateTriggerConfiguration)) @@ -4991,6 +5020,14 @@ class WebhookNotifications: """An optional list of system notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the `on_start` property.""" + on_streaming_backlog_exceeded: Optional[List[Webhook]] = None + """An optional list of system notification IDs to call when any streaming backlog thresholds are + exceeded for any stream. Streaming backlog thresholds can be set in the `health` field using the + following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, + `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`. Alerting is based on the 10-minute + average of these metrics. If the issue persists, notifications are resent every 30 minutes. A + maximum of 3 destinations can be specified for the `on_streaming_backlog_exceeded` property.""" + on_success: Optional[List[Webhook]] = None """An optional list of system notification IDs to call when the run completes successfully. 
A maximum of 3 destinations can be specified for the `on_success` property.""" @@ -5004,6 +5041,8 @@ def as_dict(self) -> dict: ] if self.on_failure: body['on_failure'] = [v.as_dict() for v in self.on_failure] if self.on_start: body['on_start'] = [v.as_dict() for v in self.on_start] + if self.on_streaming_backlog_exceeded: + body['on_streaming_backlog_exceeded'] = [v.as_dict() for v in self.on_streaming_backlog_exceeded] if self.on_success: body['on_success'] = [v.as_dict() for v in self.on_success] return body @@ -5014,6 +5053,7 @@ def from_dict(cls, d: Dict[str, any]) -> WebhookNotifications: d, 'on_duration_warning_threshold_exceeded', Webhook), on_failure=_repeated_dict(d, 'on_failure', Webhook), on_start=_repeated_dict(d, 'on_start', Webhook), + on_streaming_backlog_exceeded=_repeated_dict(d, 'on_streaming_backlog_exceeded', Webhook), on_success=_repeated_dict(d, 'on_success', Webhook)) @@ -5888,24 +5928,15 @@ def set_permissions( def submit(self, *, access_control_list: Optional[List[iam.AccessControlRequest]] = None, - condition_task: Optional[ConditionTask] = None, - dbt_task: Optional[DbtTask] = None, email_notifications: Optional[JobEmailNotifications] = None, + environments: Optional[List[JobEnvironment]] = None, git_source: Optional[GitSource] = None, health: Optional[JobsHealthRules] = None, idempotency_token: Optional[str] = None, - notebook_task: Optional[NotebookTask] = None, notification_settings: Optional[JobNotificationSettings] = None, - pipeline_task: Optional[PipelineTask] = None, - python_wheel_task: Optional[PythonWheelTask] = None, queue: Optional[QueueSettings] = None, run_as: Optional[JobRunAs] = None, - run_job_task: Optional[RunJobTask] = None, run_name: Optional[str] = None, - spark_jar_task: Optional[SparkJarTask] = None, - spark_python_task: Optional[SparkPythonTask] = None, - spark_submit_task: Optional[SparkSubmitTask] = None, - sql_task: Optional[SqlTask] = None, tasks: Optional[List[SubmitTask]] = None, timeout_seconds: Optional[int] = None, webhook_notifications: Optional[WebhookNotifications] = None) -> Wait[Run]: @@ -5917,14 +5948,10 @@ def submit(self, :param access_control_list: List[:class:`AccessControlRequest`] (optional) List of permissions to set on the job. - :param condition_task: :class:`ConditionTask` (optional) - If condition_task, specifies a condition with an outcome that can be used to control the execution - of other tasks. Does not require a cluster to execute and does not support retries or notifications. - :param dbt_task: :class:`DbtTask` (optional) - If dbt_task, indicates that this must execute a dbt task. It requires both Databricks SQL and the - ability to use a serverless or a pro SQL warehouse. :param email_notifications: :class:`JobEmailNotifications` (optional) An optional set of email addresses notified when the run begins or completes. + :param environments: List[:class:`JobEnvironment`] (optional) + A list of task execution environment specifications that can be referenced by tasks of this run. :param git_source: :class:`GitSource` (optional) An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks. @@ -5949,47 +5976,16 @@ def submit(self, For more information, see [How to ensure idempotency for jobs]. 
[How to ensure idempotency for jobs]: https://kb.databricks.com/jobs/jobs-idempotency.html - :param notebook_task: :class:`NotebookTask` (optional) - If notebook_task, indicates that this task must run a notebook. This field may not be specified in - conjunction with spark_jar_task. :param notification_settings: :class:`JobNotificationSettings` (optional) Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this run. - :param pipeline_task: :class:`PipelineTask` (optional) - If pipeline_task, indicates that this task must execute a Pipeline. - :param python_wheel_task: :class:`PythonWheelTask` (optional) - If python_wheel_task, indicates that this job must execute a PythonWheel. :param queue: :class:`QueueSettings` (optional) The queue settings of the one-time run. :param run_as: :class:`JobRunAs` (optional) Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who submits the request. - :param run_job_task: :class:`RunJobTask` (optional) - If run_job_task, indicates that this task must execute another job. :param run_name: str (optional) An optional name for the run. The default value is `Untitled`. - :param spark_jar_task: :class:`SparkJarTask` (optional) - If spark_jar_task, indicates that this task must run a JAR. - :param spark_python_task: :class:`SparkPythonTask` (optional) - If spark_python_task, indicates that this task must run a Python file. - :param spark_submit_task: :class:`SparkSubmitTask` (optional) - If `spark_submit_task`, indicates that this task must be launched by the spark submit script. This - task can run only on new clusters. - - In the `new_cluster` specification, `libraries` and `spark_conf` are not supported. Instead, use - `--jars` and `--py-files` to add Java and Python libraries and `--conf` to set the Spark - configurations. - - `master`, `deploy-mode`, and `executor-cores` are automatically configured by Databricks; you - _cannot_ specify them in parameters. - - By default, the Spark submit job uses all available memory (excluding reserved memory for Databricks - services). You can set `--driver-memory`, and `--executor-memory` to a smaller value to leave some - room for off-heap usage. - - The `--jars`, `--py-files`, `--files` arguments support DBFS and S3 paths. - :param sql_task: :class:`SqlTask` (optional) - If sql_task, indicates that this job must execute a SQL task. :param tasks: List[:class:`SubmitTask`] (optional) :param timeout_seconds: int (optional) An optional timeout applied to each run of this job. A value of `0` means no timeout. 
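# --- Editor's illustrative sketch, not part of the generated patch: a one-time run that uses
# the new `environments` list and per-task `environment_key` instead of the removed top-level
# task fields. The environment spec contents and the script path are placeholders;
# compute.Environment(client=..., dependencies=[...]) is an assumption about the compute
# dataclass.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import Environment
from databricks.sdk.service.jobs import JobEnvironment, SparkPythonTask, SubmitTask

w = WorkspaceClient()

run = w.jobs.submit_and_wait(
    run_name="one-time serverless run",
    environments=[
        JobEnvironment(environment_key="default",
                       spec=Environment(client="1", dependencies=["requests"]))
    ],
    tasks=[
        SubmitTask(task_key="ingest",
                   environment_key="default",  # references the environment defined above
                   spark_python_task=SparkPythonTask(python_file="/Workspace/Users/me/ingest.py"))
    ])
print(run.state.result_state if run.state else None)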
@@ -6003,24 +5999,15 @@ def submit(self, body = {} if access_control_list is not None: body['access_control_list'] = [v.as_dict() for v in access_control_list] - if condition_task is not None: body['condition_task'] = condition_task.as_dict() - if dbt_task is not None: body['dbt_task'] = dbt_task.as_dict() if email_notifications is not None: body['email_notifications'] = email_notifications.as_dict() + if environments is not None: body['environments'] = [v.as_dict() for v in environments] if git_source is not None: body['git_source'] = git_source.as_dict() if health is not None: body['health'] = health.as_dict() if idempotency_token is not None: body['idempotency_token'] = idempotency_token - if notebook_task is not None: body['notebook_task'] = notebook_task.as_dict() if notification_settings is not None: body['notification_settings'] = notification_settings.as_dict() - if pipeline_task is not None: body['pipeline_task'] = pipeline_task.as_dict() - if python_wheel_task is not None: body['python_wheel_task'] = python_wheel_task.as_dict() if queue is not None: body['queue'] = queue.as_dict() if run_as is not None: body['run_as'] = run_as.as_dict() - if run_job_task is not None: body['run_job_task'] = run_job_task.as_dict() if run_name is not None: body['run_name'] = run_name - if spark_jar_task is not None: body['spark_jar_task'] = spark_jar_task.as_dict() - if spark_python_task is not None: body['spark_python_task'] = spark_python_task.as_dict() - if spark_submit_task is not None: body['spark_submit_task'] = spark_submit_task.as_dict() - if sql_task is not None: body['sql_task'] = sql_task.as_dict() if tasks is not None: body['tasks'] = [v.as_dict() for v in tasks] if timeout_seconds is not None: body['timeout_seconds'] = timeout_seconds if webhook_notifications is not None: body['webhook_notifications'] = webhook_notifications.as_dict() @@ -6035,47 +6022,29 @@ def submit_and_wait( self, *, access_control_list: Optional[List[iam.AccessControlRequest]] = None, - condition_task: Optional[ConditionTask] = None, - dbt_task: Optional[DbtTask] = None, email_notifications: Optional[JobEmailNotifications] = None, + environments: Optional[List[JobEnvironment]] = None, git_source: Optional[GitSource] = None, health: Optional[JobsHealthRules] = None, idempotency_token: Optional[str] = None, - notebook_task: Optional[NotebookTask] = None, notification_settings: Optional[JobNotificationSettings] = None, - pipeline_task: Optional[PipelineTask] = None, - python_wheel_task: Optional[PythonWheelTask] = None, queue: Optional[QueueSettings] = None, run_as: Optional[JobRunAs] = None, - run_job_task: Optional[RunJobTask] = None, run_name: Optional[str] = None, - spark_jar_task: Optional[SparkJarTask] = None, - spark_python_task: Optional[SparkPythonTask] = None, - spark_submit_task: Optional[SparkSubmitTask] = None, - sql_task: Optional[SqlTask] = None, tasks: Optional[List[SubmitTask]] = None, timeout_seconds: Optional[int] = None, webhook_notifications: Optional[WebhookNotifications] = None, timeout=timedelta(minutes=20)) -> Run: return self.submit(access_control_list=access_control_list, - condition_task=condition_task, - dbt_task=dbt_task, email_notifications=email_notifications, + environments=environments, git_source=git_source, health=health, idempotency_token=idempotency_token, - notebook_task=notebook_task, notification_settings=notification_settings, - pipeline_task=pipeline_task, - python_wheel_task=python_wheel_task, queue=queue, run_as=run_as, - run_job_task=run_job_task, run_name=run_name, - 
spark_jar_task=spark_jar_task, - spark_python_task=spark_python_task, - spark_submit_task=spark_submit_task, - sql_task=sql_task, tasks=tasks, timeout_seconds=timeout_seconds, webhook_notifications=webhook_notifications).result(timeout=timeout) diff --git a/databricks/sdk/service/marketplace.py b/databricks/sdk/service/marketplace.py index e832903e..57cd4f38 100755 --- a/databricks/sdk/service/marketplace.py +++ b/databricks/sdk/service/marketplace.py @@ -1297,11 +1297,16 @@ class Listing: id: Optional[str] = None + provider_summary: Optional[ProviderListingSummaryInfo] = None + """we can not use just ProviderListingSummary since we already have same name on entity side of the + state""" + def as_dict(self) -> dict: """Serializes the Listing into a dictionary suitable for use as a JSON request body.""" body = {} if self.detail: body['detail'] = self.detail.as_dict() if self.id is not None: body['id'] = self.id + if self.provider_summary: body['provider_summary'] = self.provider_summary.as_dict() if self.summary: body['summary'] = self.summary.as_dict() return body @@ -1310,6 +1315,7 @@ def from_dict(cls, d: Dict[str, any]) -> Listing: """Deserializes the Listing from a dictionary.""" return cls(detail=_from_dict(d, 'detail', ListingDetail), id=d.get('id', None), + provider_summary=_from_dict(d, 'provider_summary', ProviderListingSummaryInfo), summary=_from_dict(d, 'summary', ListingSummary)) @@ -1727,6 +1733,37 @@ def from_dict(cls, d: Dict[str, any]) -> ProviderAnalyticsDashboard: return cls(id=d.get('id', None)) +@dataclass +class ProviderIconFile: + icon_file_id: Optional[str] = None + + icon_file_path: Optional[str] = None + + icon_type: Optional[ProviderIconType] = None + + def as_dict(self) -> dict: + """Serializes the ProviderIconFile into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.icon_file_id is not None: body['icon_file_id'] = self.icon_file_id + if self.icon_file_path is not None: body['icon_file_path'] = self.icon_file_path + if self.icon_type is not None: body['icon_type'] = self.icon_type.value + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ProviderIconFile: + """Deserializes the ProviderIconFile from a dictionary.""" + return cls(icon_file_id=d.get('icon_file_id', None), + icon_file_path=d.get('icon_file_path', None), + icon_type=_enum(d, 'icon_type', ProviderIconType)) + + +class ProviderIconType(Enum): + + DARK = 'DARK' + PRIMARY = 'PRIMARY' + PROVIDER_ICON_TYPE_UNSPECIFIED = 'PROVIDER_ICON_TYPE_UNSPECIFIED' + + @dataclass class ProviderInfo: name: str @@ -1800,6 +1837,33 @@ def from_dict(cls, d: Dict[str, any]) -> ProviderInfo: term_of_service_link=d.get('term_of_service_link', None)) +@dataclass +class ProviderListingSummaryInfo: + """we can not use just ProviderListingSummary since we already have same name on entity side of the + state""" + + description: Optional[str] = None + + icon_files: Optional[List[ProviderIconFile]] = None + + name: Optional[str] = None + + def as_dict(self) -> dict: + """Serializes the ProviderListingSummaryInfo into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.description is not None: body['description'] = self.description + if self.icon_files: body['icon_files'] = [v.as_dict() for v in self.icon_files] + if self.name is not None: body['name'] = self.name + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> ProviderListingSummaryInfo: + """Deserializes the ProviderListingSummaryInfo from a dictionary.""" + return 
cls(description=d.get('description', None), + icon_files=_repeated_dict(d, 'icon_files', ProviderIconFile), + name=d.get('name', None)) + + @dataclass class RegionInfo: cloud: Optional[str] = None diff --git a/databricks/sdk/service/pipelines.py b/databricks/sdk/service/pipelines.py index 83b2991a..bba59811 100755 --- a/databricks/sdk/service/pipelines.py +++ b/databricks/sdk/service/pipelines.py @@ -1242,7 +1242,7 @@ class PipelineLibrary: """Specification of a maven library to be installed.""" notebook: Optional[NotebookLibrary] = None - """The path to a notebook that defines a pipeline and is stored in the workspace.""" + """The path to a notebook that defines a pipeline and is stored in the Databricks workspace.""" def as_dict(self) -> dict: """Serializes the PipelineLibrary into a dictionary suitable for use as a JSON request body.""" diff --git a/databricks/sdk/service/serving.py b/databricks/sdk/service/serving.py index 89332793..6c39c598 100755 --- a/databricks/sdk/service/serving.py +++ b/databricks/sdk/service/serving.py @@ -120,6 +120,10 @@ class App: pending_deployment: Optional[AppDeployment] = None """The pending deployment of the app.""" + service_principal_id: Optional[int] = None + + service_principal_name: Optional[str] = None + status: Optional[AppStatus] = None update_time: Optional[str] = None @@ -140,6 +144,9 @@ def as_dict(self) -> dict: if self.description is not None: body['description'] = self.description if self.name is not None: body['name'] = self.name if self.pending_deployment: body['pending_deployment'] = self.pending_deployment.as_dict() + if self.service_principal_id is not None: body['service_principal_id'] = self.service_principal_id + if self.service_principal_name is not None: + body['service_principal_name'] = self.service_principal_name if self.status: body['status'] = self.status.as_dict() if self.update_time is not None: body['update_time'] = self.update_time if self.updater is not None: body['updater'] = self.updater @@ -155,6 +162,8 @@ def from_dict(cls, d: Dict[str, any]) -> App: description=d.get('description', None), name=d.get('name', None), pending_deployment=_from_dict(d, 'pending_deployment', AppDeployment), + service_principal_id=d.get('service_principal_id', None), + service_principal_name=d.get('service_principal_name', None), status=_from_dict(d, 'status', AppStatus), update_time=d.get('update_time', None), updater=d.get('updater', None), @@ -324,19 +333,18 @@ def from_dict(cls, d: Dict[str, any]) -> AppStatus: class AutoCaptureConfigInput: catalog_name: Optional[str] = None """The name of the catalog in Unity Catalog. NOTE: On update, you cannot change the catalog name if - it was already set.""" + the inference table is already enabled.""" enabled: Optional[bool] = None - """If inference tables are enabled or not. NOTE: If you have already disabled payload logging once, - you cannot enable again.""" + """Indicates whether the inference table is enabled.""" schema_name: Optional[str] = None """The name of the schema in Unity Catalog. NOTE: On update, you cannot change the schema name if - it was already set.""" + the inference table is already enabled.""" table_name_prefix: Optional[str] = None """The prefix of the table in Unity Catalog. 
NOTE: On update, you cannot change the prefix name if - it was already set.""" + the inference table is already enabled.""" def as_dict(self) -> dict: """Serializes the AutoCaptureConfigInput into a dictionary suitable for use as a JSON request body.""" @@ -362,7 +370,7 @@ class AutoCaptureConfigOutput: """The name of the catalog in Unity Catalog.""" enabled: Optional[bool] = None - """If inference tables are enabled or not.""" + """Indicates whether the inference table is enabled.""" schema_name: Optional[str] = None """The name of the schema in Unity Catalog.""" @@ -2396,6 +2404,12 @@ def from_dict(cls, d: Dict[str, any]) -> ServingEndpointPermissionsRequest: serving_endpoint_id=d.get('serving_endpoint_id', None)) +@dataclass +class StartAppRequest: + name: Optional[str] = None + """The name of the app.""" + + @dataclass class StopAppRequest: name: Optional[str] = None @@ -2767,6 +2781,22 @@ def list_deployments(self, return query['page_token'] = json['next_page_token'] + def start(self, name: str) -> AppDeployment: + """Start an app. + + Start the last active deployment of the app in the workspace. + + :param name: str + The name of the app. + + :returns: :class:`AppDeployment` + """ + + headers = {'Accept': 'application/json', 'Content-Type': 'application/json', } + + res = self._api.do('POST', f'/api/2.0/preview/apps/{name}/start', headers=headers) + return AppDeployment.from_dict(res) + def stop(self, name: str): """Stop an app. diff --git a/databricks/sdk/service/settings.py b/databricks/sdk/service/settings.py index 636f7544..b0232384 100755 --- a/databricks/sdk/service/settings.py +++ b/databricks/sdk/service/settings.py @@ -282,6 +282,7 @@ class ComplianceStandard(Enum): """Compliance stardard for SHIELD customers""" COMPLIANCE_STANDARD_UNSPECIFIED = 'COMPLIANCE_STANDARD_UNSPECIFIED' + CYBER_ESSENTIAL_PLUS = 'CYBER_ESSENTIAL_PLUS' FEDRAMP_HIGH = 'FEDRAMP_HIGH' FEDRAMP_IL5 = 'FEDRAMP_IL5' FEDRAMP_MODERATE = 'FEDRAMP_MODERATE' diff --git a/databricks/sdk/service/sharing.py b/databricks/sdk/service/sharing.py index fd01ea56..d716fad9 100755 --- a/databricks/sdk/service/sharing.py +++ b/databricks/sdk/service/sharing.py @@ -796,7 +796,6 @@ class Privilege(Enum): REFRESH = 'REFRESH' SELECT = 'SELECT' SET_SHARE_PERMISSION = 'SET_SHARE_PERMISSION' - SINGLE_USER_ACCESS = 'SINGLE_USER_ACCESS' USAGE = 'USAGE' USE_CATALOG = 'USE_CATALOG' USE_CONNECTION = 'USE_CONNECTION' diff --git a/databricks/sdk/service/sql.py b/databricks/sdk/service/sql.py index 889a0edc..fa7f93f6 100755 --- a/databricks/sdk/service/sql.py +++ b/databricks/sdk/service/sql.py @@ -182,7 +182,7 @@ class AlertQuery: data_source_id: Optional[str] = None """Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list""" @@ -856,7 +856,7 @@ class DataSource: id: Optional[str] = None """Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list""" @@ -1390,8 +1390,9 @@ class ExecuteStatementRequest: """The SQL statement to execute. The statement can optionally be parameterized, see `parameters`.""" warehouse_id: str - """Warehouse upon which to execute a statement. See also [What are SQL - warehouses?](/sql/admin/warehouse-type.html)""" + """Warehouse upon which to execute a statement. 
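# --- Editor's illustrative sketch, not part of the generated patch: starting and stopping an
# app with the new start() method added to the apps service above. The accessor name `w.apps`
# and the app name are assumptions.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Start the last active deployment of the app; the call returns an AppDeployment record.
deployment = w.apps.start(name="my-app")
print(deployment.deployment_id)

# Stop the running app when it is no longer needed.
w.apps.stop(name="my-app")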
See also [What are SQL warehouses?] + + [What are SQL warehouses?]: https://docs.databricks.com/sql/admin/warehouse-type.html""" byte_limit: Optional[int] = None """Applies the given byte limit to the statement's result size. Byte counts are based on internal @@ -2241,7 +2242,7 @@ class Query: data_source_id: Optional[str] = None """Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list""" @@ -2374,7 +2375,7 @@ def from_dict(cls, d: Dict[str, any]) -> Query: class QueryEditContent: data_source_id: Optional[str] = None """Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list""" @@ -2807,7 +2808,7 @@ def from_dict(cls, d: Dict[str, any]) -> QueryOptions: class QueryPostContent: data_source_id: Optional[str] = None """Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list""" @@ -3273,8 +3274,10 @@ class StatementParameterListItem: type: Optional[str] = None """The data type, given as a string. For example: `INT`, `STRING`, `DECIMAL(10,2)`. If no type is given the type is assumed to be `STRING`. Complex types, such as `ARRAY`, `MAP`, and `STRUCT` - are not supported. For valid types, refer to the section [Data - types](/sql/language-manual/functions/cast.html) of the SQL language reference.""" + are not supported. For valid types, refer to the section [Data types] of the SQL language + reference. + + [Data types]: https://docs.databricks.com/sql/language-manual/functions/cast.html""" value: Optional[str] = None """The value to substitute, represented as a string. If omitted, the value is interpreted as NULL.""" @@ -3959,7 +3962,11 @@ class AlertsAPI: """The alerts API can be used to perform CRUD operations on alerts. An alert is a Databricks SQL object that periodically runs a query, evaluates a condition of its result, and notifies one or more users and/or notification destinations if the condition was met. Alerts can be scheduled using the `sql_task` type of - the Jobs API, e.g. :method:jobs/create.""" + the Jobs API, e.g. :method:jobs/create. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources""" def __init__(self, api_client): self._api = api_client @@ -3976,6 +3983,10 @@ def create(self, Creates an alert. An alert is a Databricks SQL object that periodically runs a query, evaluates a condition of its result, and notifies users or notification destinations if the condition was met. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param name: str Name of the alert. :param options: :class:`AlertOptions` @@ -4004,9 +4015,13 @@ def create(self, def delete(self, alert_id: str): """Delete an alert. - Deletes an alert. Deleted alerts are no longer accessible and cannot be restored. **Note:** Unlike + Deletes an alert. 
Deleted alerts are no longer accessible and cannot be restored. **Note**: Unlike queries and dashboards, alerts cannot be moved to the trash. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param alert_id: str @@ -4021,6 +4036,10 @@ def get(self, alert_id: str) -> Alert: Gets an alert. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param alert_id: str :returns: :class:`Alert` @@ -4036,6 +4055,10 @@ def list(self) -> Iterator[Alert]: Gets a list of alerts. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :returns: Iterator over :class:`Alert` """ @@ -4055,6 +4078,10 @@ def update(self, Updates an alert. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param alert_id: str :param name: str Name of the alert. @@ -4256,8 +4283,8 @@ def list(self, Fetch a paginated list of dashboard objects. - ### **Warning: Calling this API concurrently 10 or more times could result in throttling, service - degradation, or a temporary ban.** + **Warning**: Calling this API concurrently 10 or more times could result in throttling, service + degradation, or a temporary ban. :param order: :class:`ListOrder` (optional) Name of dashboard attribute to order by. @@ -4351,7 +4378,11 @@ class DataSourcesAPI: This API does not support searches. It returns the full list of SQL warehouses in your workspace. We advise you to use any text editor, REST client, or `grep` to search the response from this API for the - name of your SQL warehouse as it appears in Databricks SQL.""" + name of your SQL warehouse as it appears in Databricks SQL. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources""" def __init__(self, api_client): self._api = api_client @@ -4363,6 +4394,10 @@ def list(self) -> Iterator[DataSource]: API response are enumerated for clarity. However, you need only a SQL warehouse's `id` to create new queries against it. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :returns: Iterator over :class:`DataSource` """ @@ -4383,7 +4418,11 @@ class DbsqlPermissionsAPI: - `CAN_RUN`: Allows read access and run access (superset of `CAN_VIEW`) - - `CAN_MANAGE`: Allows all actions: read, run, edit, delete, modify permissions (superset of `CAN_RUN`)""" + - `CAN_MANAGE`: Allows all actions: read, run, edit, delete, modify permissions (superset of `CAN_RUN`) + + **Note**: A new version of the Databricks SQL API will soon be available. 
[Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources""" def __init__(self, api_client): self._api = api_client @@ -4393,6 +4432,10 @@ def get(self, object_type: ObjectTypePlural, object_id: str) -> GetResponse: Gets a JSON representation of the access control list (ACL) for a specified object. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param object_type: :class:`ObjectTypePlural` The type of object permissions to check. :param object_id: str @@ -4418,6 +4461,10 @@ def set(self, Sets the access control list (ACL) for a specified object. This operation will complete rewrite the ACL. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param object_type: :class:`ObjectTypePlural` The type of object permission to set. :param object_id: str @@ -4446,6 +4493,10 @@ def transfer_ownership(self, Transfers ownership of a dashboard, query, or alert to an active user. Requires an admin API key. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param object_type: :class:`OwnableObjectType` The type of object on which to change ownership. :param object_id: :class:`TransferOwnershipObjectId` @@ -4469,7 +4520,11 @@ def transfer_ownership(self, class QueriesAPI: """These endpoints are used for CRUD operations on query definitions. Query definitions include the target SQL warehouse, query text, name, description, tags, parameters, and visualizations. Queries can be - scheduled using the `sql_task` type of the Jobs API, e.g. :method:jobs/create.""" + scheduled using the `sql_task` type of the Jobs API, e.g. :method:jobs/create. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources""" def __init__(self, api_client): self._api = api_client @@ -4495,9 +4550,13 @@ def create(self, **Note**: You cannot add a visualization until you create the query. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param data_source_id: str (optional) Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list :param description: str (optional) @@ -4539,6 +4598,10 @@ def delete(self, query_id: str): Moves a query to the trash. Trashed queries immediately disappear from searches and list views, and they cannot be used for alerts. The trash is deleted after 30 days. + **Note**: A new version of the Databricks SQL API will soon be available. 
[Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str @@ -4554,6 +4617,10 @@ def get(self, query_id: str) -> Query: Retrieve a query object definition along with contextual permissions information about the currently authenticated user. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str :returns: :class:`Query` @@ -4574,8 +4641,12 @@ def list(self, Gets a list of queries. Optionally, this list can be filtered by a search term. - ### **Warning: Calling this API concurrently 10 or more times could result in throttling, service - degradation, or a temporary ban.** + **Warning**: Calling this API concurrently 10 or more times could result in throttling, service + degradation, or a temporary ban. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources :param order: str (optional) Name of query attribute to order by. Default sort order is ascending. Append a dash (`-`) to order @@ -4630,6 +4701,10 @@ def restore(self, query_id: str): Restore a query that has been moved to the trash. A restored query appears in list views and searches. You can use restored queries for alerts. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str @@ -4655,10 +4730,14 @@ def update(self, **Note**: You cannot undo this operation. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str :param data_source_id: str (optional) Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list :param description: str (optional) @@ -4960,8 +5039,9 @@ def execute_statement(self, :param statement: str The SQL statement to execute. The statement can optionally be parameterized, see `parameters`. :param warehouse_id: str - Warehouse upon which to execute a statement. See also [What are SQL - warehouses?](/sql/admin/warehouse-type.html) + Warehouse upon which to execute a statement. See also [What are SQL warehouses?] + + [What are SQL warehouses?]: https://docs.databricks.com/sql/admin/warehouse-type.html :param byte_limit: int (optional) Applies the given byte limit to the statement's result size. Byte counts are based on internal data representations and might not match the final size in the requested `format`. 
If the result was diff --git a/databricks/sdk/service/vectorsearch.py b/databricks/sdk/service/vectorsearch.py index a43ae586..2f0ceaab 100755 --- a/databricks/sdk/service/vectorsearch.py +++ b/databricks/sdk/service/vectorsearch.py @@ -644,6 +644,35 @@ class PipelineType(Enum): TRIGGERED = 'TRIGGERED' +@dataclass +class QueryVectorIndexNextPageRequest: + """Request payload for getting next page of results.""" + + endpoint_name: Optional[str] = None + """Name of the endpoint.""" + + index_name: Optional[str] = None + """Name of the vector index to query.""" + + page_token: Optional[str] = None + """Page token returned from previous `QueryVectorIndex` or `QueryVectorIndexNextPage` API.""" + + def as_dict(self) -> dict: + """Serializes the QueryVectorIndexNextPageRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.endpoint_name is not None: body['endpoint_name'] = self.endpoint_name + if self.index_name is not None: body['index_name'] = self.index_name + if self.page_token is not None: body['page_token'] = self.page_token + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> QueryVectorIndexNextPageRequest: + """Deserializes the QueryVectorIndexNextPageRequest from a dictionary.""" + return cls(endpoint_name=d.get('endpoint_name', None), + index_name=d.get('index_name', None), + page_token=d.get('page_token', None)) + + @dataclass class QueryVectorIndexRequest: columns: List[str] @@ -665,6 +694,9 @@ class QueryVectorIndexRequest: query_text: Optional[str] = None """Query text. Required for Delta Sync Index using model endpoint.""" + query_type: Optional[str] = None + """The query type to use. Choices are `ANN` and `HYBRID`. Defaults to `ANN`.""" + query_vector: Optional[List[float]] = None """Query vector. Required for Direct Vector Access Index and Delta Sync Index using self-managed vectors.""" @@ -680,6 +712,7 @@ def as_dict(self) -> dict: if self.index_name is not None: body['index_name'] = self.index_name if self.num_results is not None: body['num_results'] = self.num_results if self.query_text is not None: body['query_text'] = self.query_text + if self.query_type is not None: body['query_type'] = self.query_type if self.query_vector: body['query_vector'] = [v for v in self.query_vector] if self.score_threshold is not None: body['score_threshold'] = self.score_threshold return body @@ -692,6 +725,7 @@ def from_dict(cls, d: Dict[str, any]) -> QueryVectorIndexRequest: index_name=d.get('index_name', None), num_results=d.get('num_results', None), query_text=d.get('query_text', None), + query_type=d.get('query_type', None), query_vector=d.get('query_vector', None), score_threshold=d.get('score_threshold', None)) @@ -701,6 +735,11 @@ class QueryVectorIndexResponse: manifest: Optional[ResultManifest] = None """Metadata about the result set.""" + next_page_token: Optional[str] = None + """[Optional] Token that can be used in `QueryVectorIndexNextPage` API to get next page of results. + If more than 1000 results satisfy the query, they are returned in groups of 1000. 
Empty value + means no more results.""" + result: Optional[ResultData] = None """Data returned in the query result.""" @@ -708,6 +747,7 @@ def as_dict(self) -> dict: """Serializes the QueryVectorIndexResponse into a dictionary suitable for use as a JSON request body.""" body = {} if self.manifest: body['manifest'] = self.manifest.as_dict() + if self.next_page_token is not None: body['next_page_token'] = self.next_page_token if self.result: body['result'] = self.result.as_dict() return body @@ -715,6 +755,7 @@ def as_dict(self) -> dict: def from_dict(cls, d: Dict[str, any]) -> QueryVectorIndexResponse: """Deserializes the QueryVectorIndexResponse from a dictionary.""" return cls(manifest=_from_dict(d, 'manifest', ResultManifest), + next_page_token=d.get('next_page_token', None), result=_from_dict(d, 'result', ResultData)) @@ -1330,6 +1371,7 @@ def query_index(self, filters_json: Optional[str] = None, num_results: Optional[int] = None, query_text: Optional[str] = None, + query_type: Optional[str] = None, query_vector: Optional[List[float]] = None, score_threshold: Optional[float] = None) -> QueryVectorIndexResponse: """Query an index. @@ -1350,6 +1392,8 @@ def query_index(self, Number of results to return. Defaults to 10. :param query_text: str (optional) Query text. Required for Delta Sync Index using model endpoint. + :param query_type: str (optional) + The query type to use. Choices are `ANN` and `HYBRID`. Defaults to `ANN`. :param query_vector: List[float] (optional) Query vector. Required for Direct Vector Access Index and Delta Sync Index using self-managed vectors. @@ -1363,6 +1407,7 @@ def query_index(self, if filters_json is not None: body['filters_json'] = filters_json if num_results is not None: body['num_results'] = num_results if query_text is not None: body['query_text'] = query_text + if query_type is not None: body['query_type'] = query_type if query_vector is not None: body['query_vector'] = [v for v in query_vector] if score_threshold is not None: body['score_threshold'] = score_threshold headers = {'Accept': 'application/json', 'Content-Type': 'application/json', } @@ -1373,6 +1418,36 @@ def query_index(self, headers=headers) return QueryVectorIndexResponse.from_dict(res) + def query_next_page(self, + index_name: str, + *, + endpoint_name: Optional[str] = None, + page_token: Optional[str] = None) -> QueryVectorIndexResponse: + """Query next page. + + Use `next_page_token` returned from previous `QueryVectorIndex` or `QueryVectorIndexNextPage` request + to fetch next page of results. + + :param index_name: str + Name of the vector index to query. + :param endpoint_name: str (optional) + Name of the endpoint. + :param page_token: str (optional) + Page token returned from previous `QueryVectorIndex` or `QueryVectorIndexNextPage` API. 
+ + :returns: :class:`QueryVectorIndexResponse` + """ + body = {} + if endpoint_name is not None: body['endpoint_name'] = endpoint_name + if page_token is not None: body['page_token'] = page_token + headers = {'Accept': 'application/json', 'Content-Type': 'application/json', } + + res = self._api.do('POST', + f'/api/2.0/vector-search/indexes/{index_name}/query-next-page', + body=body, + headers=headers) + return QueryVectorIndexResponse.from_dict(res) + def scan_index(self, index_name: str, *, diff --git a/databricks/sdk/version.py b/databricks/sdk/version.py index 1bf36757..9093e4e4 100644 --- a/databricks/sdk/version.py +++ b/databricks/sdk/version.py @@ -1 +1 @@ -__version__ = '0.28.0' +__version__ = '0.29.0' diff --git a/docs/dbdataclasses/catalog.rst b/docs/dbdataclasses/catalog.rst index bd12c70d..e2c120bc 100644 --- a/docs/dbdataclasses/catalog.rst +++ b/docs/dbdataclasses/catalog.rst @@ -127,12 +127,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: CATALOG_INTERNAL :value: "CATALOG_INTERNAL" - .. py:attribute:: CATALOG_ONLINE - :value: "CATALOG_ONLINE" - - .. py:attribute:: CATALOG_ONLINE_INDEX - :value: "CATALOG_ONLINE_INDEX" - .. py:attribute:: CATALOG_STANDARD :value: "CATALOG_STANDARD" @@ -142,6 +136,16 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: CATALOG_SYSTEM_DELTASHARING :value: "CATALOG_SYSTEM_DELTASHARING" +.. py:class:: CatalogIsolationMode + + Whether the current securable is accessible from all workspaces or a specific set of workspaces. + + .. py:attribute:: ISOLATED + :value: "ISOLATED" + + .. py:attribute:: OPEN + :value: "OPEN" + .. py:class:: CatalogType The type of the catalog. @@ -661,11 +665,11 @@ These dataclasses are used in the SDK to represent API requests and responses fo Whether the current securable is accessible from all workspaces or a specific set of workspaces. - .. py:attribute:: ISOLATED - :value: "ISOLATED" + .. py:attribute:: ISOLATION_MODE_ISOLATED + :value: "ISOLATION_MODE_ISOLATED" - .. py:attribute:: OPEN - :value: "OPEN" + .. py:attribute:: ISOLATION_MODE_OPEN + :value: "ISOLATION_MODE_OPEN" .. autoclass:: ListAccountMetastoreAssignmentsResponse :members: @@ -1072,9 +1076,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: SET_SHARE_PERMISSION :value: "SET_SHARE_PERMISSION" - .. py:attribute:: SINGLE_USER_ACCESS - :value: "SINGLE_USER_ACCESS" - .. py:attribute:: USAGE :value: "USAGE" diff --git a/docs/dbdataclasses/dashboards.rst b/docs/dbdataclasses/dashboards.rst index ce485028..dca31d64 100644 --- a/docs/dbdataclasses/dashboards.rst +++ b/docs/dbdataclasses/dashboards.rst @@ -8,10 +8,38 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: +.. autoclass:: CreateScheduleRequest + :members: + :undoc-members: + +.. autoclass:: CreateSubscriptionRequest + :members: + :undoc-members: + +.. autoclass:: CronSchedule + :members: + :undoc-members: + .. autoclass:: Dashboard :members: :undoc-members: +.. py:class:: DashboardView + + .. py:attribute:: DASHBOARD_VIEW_BASIC + :value: "DASHBOARD_VIEW_BASIC" + + .. py:attribute:: DASHBOARD_VIEW_FULL + :value: "DASHBOARD_VIEW_FULL" + +.. autoclass:: DeleteScheduleResponse + :members: + :undoc-members: + +.. autoclass:: DeleteSubscriptionResponse + :members: + :undoc-members: + .. py:class:: LifecycleState .. 
py:attribute:: ACTIVE @@ -20,6 +48,18 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: TRASHED :value: "TRASHED" +.. autoclass:: ListDashboardsResponse + :members: + :undoc-members: + +.. autoclass:: ListSchedulesResponse + :members: + :undoc-members: + +.. autoclass:: ListSubscriptionsResponse + :members: + :undoc-members: + .. autoclass:: MigrateDashboardRequest :members: :undoc-members: @@ -32,6 +72,34 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: +.. autoclass:: Schedule + :members: + :undoc-members: + +.. py:class:: SchedulePauseStatus + + .. py:attribute:: PAUSED + :value: "PAUSED" + + .. py:attribute:: UNPAUSED + :value: "UNPAUSED" + +.. autoclass:: Subscriber + :members: + :undoc-members: + +.. autoclass:: Subscription + :members: + :undoc-members: + +.. autoclass:: SubscriptionSubscriberDestination + :members: + :undoc-members: + +.. autoclass:: SubscriptionSubscriberUser + :members: + :undoc-members: + .. autoclass:: TrashDashboardResponse :members: :undoc-members: @@ -43,3 +111,7 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. autoclass:: UpdateDashboardRequest :members: :undoc-members: + +.. autoclass:: UpdateScheduleRequest + :members: + :undoc-members: diff --git a/docs/dbdataclasses/jobs.rst b/docs/dbdataclasses/jobs.rst index 6d585361..81d81020 100644 --- a/docs/dbdataclasses/jobs.rst +++ b/docs/dbdataclasses/jobs.rst @@ -297,10 +297,23 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:class:: JobsHealthMetric Specifies the health metric that is being evaluated for a particular health rule. + * `RUN_DURATION_SECONDS`: Expected total time for a run in seconds. * `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Private Preview. * `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Private Preview. * `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Private Preview. * `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Private Preview. .. py:attribute:: RUN_DURATION_SECONDS :value: "RUN_DURATION_SECONDS" + .. py:attribute:: STREAMING_BACKLOG_BYTES + :value: "STREAMING_BACKLOG_BYTES" + + .. py:attribute:: STREAMING_BACKLOG_FILES + :value: "STREAMING_BACKLOG_FILES" + + .. py:attribute:: STREAMING_BACKLOG_RECORDS + :value: "STREAMING_BACKLOG_RECORDS" + + .. py:attribute:: STREAMING_BACKLOG_SECONDS + :value: "STREAMING_BACKLOG_SECONDS" + .. py:class:: JobsHealthOperator Specifies the operator used to compare the health metric value with the specified threshold. @@ -340,6 +353,24 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: UNPAUSED :value: "UNPAUSED" +.. autoclass:: PeriodicTriggerConfiguration + :members: + :undoc-members: + +.. py:class:: PeriodicTriggerConfigurationTimeUnit + + .. py:attribute:: DAYS + :value: "DAYS" + + .. py:attribute:: HOURS + :value: "HOURS" + + .. py:attribute:: TIME_UNIT_UNSPECIFIED + :value: "TIME_UNIT_UNSPECIFIED" + + .. py:attribute:: WEEKS + :value: "WEEKS" + .. 
autoclass:: PipelineParams :members: :undoc-members: diff --git a/docs/dbdataclasses/marketplace.rst b/docs/dbdataclasses/marketplace.rst index 229bcf3e..5204dd1e 100644 --- a/docs/dbdataclasses/marketplace.rst +++ b/docs/dbdataclasses/marketplace.rst @@ -494,10 +494,29 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: +.. autoclass:: ProviderIconFile + :members: + :undoc-members: + +.. py:class:: ProviderIconType + + .. py:attribute:: DARK + :value: "DARK" + + .. py:attribute:: PRIMARY + :value: "PRIMARY" + + .. py:attribute:: PROVIDER_ICON_TYPE_UNSPECIFIED + :value: "PROVIDER_ICON_TYPE_UNSPECIFIED" + .. autoclass:: ProviderInfo :members: :undoc-members: +.. autoclass:: ProviderListingSummaryInfo + :members: + :undoc-members: + .. autoclass:: RegionInfo :members: :undoc-members: diff --git a/docs/dbdataclasses/serving.rst b/docs/dbdataclasses/serving.rst index a3d16a16..46cfe6a3 100644 --- a/docs/dbdataclasses/serving.rst +++ b/docs/dbdataclasses/serving.rst @@ -506,6 +506,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo :members: :undoc-members: +.. autoclass:: StartAppRequest + :members: + :undoc-members: + .. autoclass:: StopAppRequest :members: :undoc-members: diff --git a/docs/dbdataclasses/settings.rst b/docs/dbdataclasses/settings.rst index 54274999..cc142abf 100644 --- a/docs/dbdataclasses/settings.rst +++ b/docs/dbdataclasses/settings.rst @@ -95,6 +95,9 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: COMPLIANCE_STANDARD_UNSPECIFIED :value: "COMPLIANCE_STANDARD_UNSPECIFIED" + .. py:attribute:: CYBER_ESSENTIAL_PLUS + :value: "CYBER_ESSENTIAL_PLUS" + .. py:attribute:: FEDRAMP_HIGH :value: "FEDRAMP_HIGH" diff --git a/docs/dbdataclasses/sharing.rst b/docs/dbdataclasses/sharing.rst index ff48c977..f25f3f57 100644 --- a/docs/dbdataclasses/sharing.rst +++ b/docs/dbdataclasses/sharing.rst @@ -289,9 +289,6 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: SET_SHARE_PERMISSION :value: "SET_SHARE_PERMISSION" - .. py:attribute:: SINGLE_USER_ACCESS - :value: "SINGLE_USER_ACCESS" - .. py:attribute:: USAGE :value: "USAGE" diff --git a/docs/dbdataclasses/vectorsearch.rst b/docs/dbdataclasses/vectorsearch.rst index 1395ecb0..179c4b89 100644 --- a/docs/dbdataclasses/vectorsearch.rst +++ b/docs/dbdataclasses/vectorsearch.rst @@ -132,6 +132,10 @@ These dataclasses are used in the SDK to represent API requests and responses fo .. py:attribute:: TRIGGERED :value: "TRIGGERED" +.. autoclass:: QueryVectorIndexNextPageRequest + :members: + :undoc-members: + .. autoclass:: QueryVectorIndexRequest :members: :undoc-members: diff --git a/docs/workspace/catalog/catalogs.rst b/docs/workspace/catalog/catalogs.rst index ed7315a6..200168ee 100644 --- a/docs/workspace/catalog/catalogs.rst +++ b/docs/workspace/catalog/catalogs.rst @@ -105,7 +105,7 @@ :returns: :class:`CatalogInfo` - .. py:method:: list( [, include_browse: Optional[bool]]) -> Iterator[CatalogInfo] + .. py:method:: list( [, include_browse: Optional[bool], max_results: Optional[int], page_token: Optional[str]]) -> Iterator[CatalogInfo] Usage: @@ -129,11 +129,21 @@ :param include_browse: bool (optional) Whether to include catalogs in the response for which the principal can only access selective metadata for + :param max_results: int (optional) + Maximum number of catalogs to return. 
- when set to 0, the page length is set to a server configured + value (recommended); - when set to a value greater than 0, the page length is the minimum of this + value and a server configured value; - when set to a value less than 0, an invalid parameter error + is returned; - If not set, all valid catalogs are returned (not recommended). - Note: The number of + returned catalogs might be less than the specified max_results size, even zero. The only definitive + indication that no further catalogs can be fetched is when the next_page_token is unset from the + response. + :param page_token: str (optional) + Opaque pagination token to go to next page based on previous query. :returns: Iterator over :class:`CatalogInfo` - .. py:method:: update(name: str [, comment: Optional[str], enable_predictive_optimization: Optional[EnablePredictiveOptimization], isolation_mode: Optional[IsolationMode], new_name: Optional[str], owner: Optional[str], properties: Optional[Dict[str, str]]]) -> CatalogInfo + .. py:method:: update(name: str [, comment: Optional[str], enable_predictive_optimization: Optional[EnablePredictiveOptimization], isolation_mode: Optional[CatalogIsolationMode], new_name: Optional[str], owner: Optional[str], properties: Optional[Dict[str, str]]]) -> CatalogInfo Usage: @@ -164,7 +174,7 @@ User-provided free-form text description. :param enable_predictive_optimization: :class:`EnablePredictiveOptimization` (optional) Whether predictive optimization should be enabled for this object and objects under it. - :param isolation_mode: :class:`IsolationMode` (optional) + :param isolation_mode: :class:`CatalogIsolationMode` (optional) Whether the current securable is accessible from all workspaces or a specific set of workspaces. :param new_name: str (optional) New name for the catalog. diff --git a/docs/workspace/catalog/external_locations.rst b/docs/workspace/catalog/external_locations.rst index 34c0d672..3f6114f1 100644 --- a/docs/workspace/catalog/external_locations.rst +++ b/docs/workspace/catalog/external_locations.rst @@ -163,7 +163,7 @@ :returns: Iterator over :class:`ExternalLocationInfo` - .. py:method:: update(name: str [, access_point: Optional[str], comment: Optional[str], credential_name: Optional[str], encryption_details: Optional[EncryptionDetails], force: Optional[bool], new_name: Optional[str], owner: Optional[str], read_only: Optional[bool], skip_validation: Optional[bool], url: Optional[str]]) -> ExternalLocationInfo + .. py:method:: update(name: str [, access_point: Optional[str], comment: Optional[str], credential_name: Optional[str], encryption_details: Optional[EncryptionDetails], force: Optional[bool], isolation_mode: Optional[IsolationMode], new_name: Optional[str], owner: Optional[str], read_only: Optional[bool], skip_validation: Optional[bool], url: Optional[str]]) -> ExternalLocationInfo Usage: @@ -212,6 +212,8 @@ Encryption options that apply to clients connecting to cloud storage. :param force: bool (optional) Force update even if changing url invalidates dependent external tables or mounts. + :param isolation_mode: :class:`IsolationMode` (optional) + Whether the current securable is accessible from all workspaces or a specific set of workspaces. :param new_name: str (optional) New name for the external location. 
:param owner: str (optional) diff --git a/docs/workspace/catalog/functions.rst b/docs/workspace/catalog/functions.rst index 97398be8..64648807 100644 --- a/docs/workspace/catalog/functions.rst +++ b/docs/workspace/catalog/functions.rst @@ -14,6 +14,8 @@ Create a function. + **WARNING: This API is experimental and will change in future versions** + Creates a new function The user must have the following permissions in order for the function to be created: - diff --git a/docs/workspace/catalog/metastores.rst b/docs/workspace/catalog/metastores.rst index 6fb93989..f8a3c287 100644 --- a/docs/workspace/catalog/metastores.rst +++ b/docs/workspace/catalog/metastores.rst @@ -88,8 +88,9 @@ :param name: str The user-specified name of the metastore. :param region: str (optional) - Cloud region which the metastore serves (e.g., `us-west-2`, `westus`). If this field is omitted, the - region of the workspace receiving the request will be used. + Cloud region which the metastore serves (e.g., `us-west-2`, `westus`). The field can be omitted in + the __workspace-level__ __API__ but not in the __account-level__ __API__. If this field is omitted, + the region of the workspace receiving the request will be used. :param storage_root: str (optional) The storage root URL for metastore diff --git a/docs/workspace/catalog/storage_credentials.rst b/docs/workspace/catalog/storage_credentials.rst index e3a5ac33..30b04654 100644 --- a/docs/workspace/catalog/storage_credentials.rst +++ b/docs/workspace/catalog/storage_credentials.rst @@ -54,7 +54,7 @@ :param comment: str (optional) Comment associated with the credential. :param databricks_gcp_service_account: :class:`DatabricksGcpServiceAccountRequest` (optional) - The managed GCP service account configuration. + The Databricks managed GCP service account configuration. :param read_only: bool (optional) Whether the storage credential is only usable for read operations. :param skip_validation: bool (optional) @@ -145,7 +145,7 @@ :returns: Iterator over :class:`StorageCredentialInfo` - .. py:method:: update(name: str [, aws_iam_role: Optional[AwsIamRoleRequest], azure_managed_identity: Optional[AzureManagedIdentityResponse], azure_service_principal: Optional[AzureServicePrincipal], cloudflare_api_token: Optional[CloudflareApiToken], comment: Optional[str], databricks_gcp_service_account: Optional[DatabricksGcpServiceAccountRequest], force: Optional[bool], new_name: Optional[str], owner: Optional[str], read_only: Optional[bool], skip_validation: Optional[bool]]) -> StorageCredentialInfo + .. py:method:: update(name: str [, aws_iam_role: Optional[AwsIamRoleRequest], azure_managed_identity: Optional[AzureManagedIdentityResponse], azure_service_principal: Optional[AzureServicePrincipal], cloudflare_api_token: Optional[CloudflareApiToken], comment: Optional[str], databricks_gcp_service_account: Optional[DatabricksGcpServiceAccountRequest], force: Optional[bool], isolation_mode: Optional[IsolationMode], new_name: Optional[str], owner: Optional[str], read_only: Optional[bool], skip_validation: Optional[bool]]) -> StorageCredentialInfo Usage: @@ -189,9 +189,11 @@ :param comment: str (optional) Comment associated with the credential. :param databricks_gcp_service_account: :class:`DatabricksGcpServiceAccountRequest` (optional) - The managed GCP service account configuration. + The Databricks managed GCP service account configuration. :param force: bool (optional) Force update even if there are dependent external locations or external tables. 
+ :param isolation_mode: :class:`IsolationMode` (optional) + Whether the current securable is accessible from all workspaces or a specific set of workspaces. :param new_name: str (optional) New name for the storage credential. :param owner: str (optional) diff --git a/docs/workspace/dashboards/lakeview.rst b/docs/workspace/dashboards/lakeview.rst index cb953af5..17f82960 100644 --- a/docs/workspace/dashboards/lakeview.rst +++ b/docs/workspace/dashboards/lakeview.rst @@ -26,6 +26,68 @@ :returns: :class:`Dashboard` + .. py:method:: create_schedule(dashboard_id: str, cron_schedule: CronSchedule [, display_name: Optional[str], pause_status: Optional[SchedulePauseStatus]]) -> Schedule + + Create dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param cron_schedule: :class:`CronSchedule` + The cron expression describing the frequency of the periodic refresh for this schedule. + :param display_name: str (optional) + The display name for schedule. + :param pause_status: :class:`SchedulePauseStatus` (optional) + The status indicates whether this schedule is paused or not. + + :returns: :class:`Schedule` + + + .. py:method:: create_subscription(dashboard_id: str, schedule_id: str, subscriber: Subscriber) -> Subscription + + Create schedule subscription. + + :param dashboard_id: str + UUID identifying the dashboard to which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule to which the subscription belongs. + :param subscriber: :class:`Subscriber` + Subscriber details for users and destinations to be added as subscribers to the schedule. + + :returns: :class:`Subscription` + + + .. py:method:: delete_schedule(dashboard_id: str, schedule_id: str [, etag: Optional[str]]) + + Delete dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param schedule_id: str + UUID identifying the schedule. + :param etag: str (optional) + The etag for the schedule. Optionally, it can be provided to verify that the schedule has not been + modified from its last retrieval. + + + + + .. py:method:: delete_subscription(dashboard_id: str, schedule_id: str, subscription_id: str [, etag: Optional[str]]) + + Delete schedule subscription. + + :param dashboard_id: str + UUID identifying the dashboard which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule which the subscription belongs. + :param subscription_id: str + UUID identifying the subscription. + :param etag: str (optional) + The etag for the subscription. Can be optionally provided to ensure that the subscription has not + been modified since the last read. + + + + .. py:method:: get(dashboard_id: str) -> Dashboard Get dashboard. @@ -50,6 +112,83 @@ :returns: :class:`PublishedDashboard` + .. py:method:: get_schedule(dashboard_id: str, schedule_id: str) -> Schedule + + Get dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param schedule_id: str + UUID identifying the schedule. + + :returns: :class:`Schedule` + + + .. py:method:: get_subscription(dashboard_id: str, schedule_id: str, subscription_id: str) -> Subscription + + Get schedule subscription. + + :param dashboard_id: str + UUID identifying the dashboard which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule which the subscription belongs. + :param subscription_id: str + UUID identifying the subscription. 
+ + :returns: :class:`Subscription` + + + .. py:method:: list( [, page_size: Optional[int], page_token: Optional[str], show_trashed: Optional[bool], view: Optional[DashboardView]]) -> Iterator[Dashboard] + + List dashboards. + + :param page_size: int (optional) + The number of dashboards to return per page. + :param page_token: str (optional) + A page token, received from a previous `ListDashboards` call. This token can be used to retrieve the + subsequent page. + :param show_trashed: bool (optional) + The flag to include dashboards located in the trash. If unspecified, only active dashboards will be + returned. + :param view: :class:`DashboardView` (optional) + Indicates whether to include all metadata from the dashboard in the response. If unset, the response + defaults to `DASHBOARD_VIEW_BASIC` which only includes summary metadata from the dashboard. + + :returns: Iterator over :class:`Dashboard` + + + .. py:method:: list_schedules(dashboard_id: str [, page_size: Optional[int], page_token: Optional[str]]) -> Iterator[Schedule] + + List dashboard schedules. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param page_size: int (optional) + The number of schedules to return per page. + :param page_token: str (optional) + A page token, received from a previous `ListSchedules` call. Use this to retrieve the subsequent + page. + + :returns: Iterator over :class:`Schedule` + + + .. py:method:: list_subscriptions(dashboard_id: str, schedule_id: str [, page_size: Optional[int], page_token: Optional[str]]) -> Iterator[Subscription] + + List schedule subscriptions. + + :param dashboard_id: str + UUID identifying the dashboard to which the subscription belongs. + :param schedule_id: str + UUID identifying the schedule to which the subscription belongs. + :param page_size: int (optional) + The number of subscriptions to return per page. + :param page_token: str (optional) + A page token, received from a previous `ListSubscriptions` call. Use this to retrieve the subsequent + page. + + :returns: Iterator over :class:`Subscription` + + .. py:method:: migrate(source_dashboard_id: str [, display_name: Optional[str], parent_path: Optional[str]]) -> Dashboard Migrate dashboard. @@ -126,4 +265,25 @@ The warehouse ID used to run the dashboard. :returns: :class:`Dashboard` + + + .. py:method:: update_schedule(dashboard_id: str, schedule_id: str, cron_schedule: CronSchedule [, display_name: Optional[str], etag: Optional[str], pause_status: Optional[SchedulePauseStatus]]) -> Schedule + + Update dashboard schedule. + + :param dashboard_id: str + UUID identifying the dashboard to which the schedule belongs. + :param schedule_id: str + UUID identifying the schedule. + :param cron_schedule: :class:`CronSchedule` + The cron expression describing the frequency of the periodic refresh for this schedule. + :param display_name: str (optional) + The display name for schedule. + :param etag: str (optional) + The etag for the schedule. Must be left empty on create, must be provided on updates to ensure that + the schedule has not been modified since the last read, and can be optionally provided on delete. + :param pause_status: :class:`SchedulePauseStatus` (optional) + The status indicates whether this schedule is paused or not. 
+ + :returns: :class:`Schedule` \ No newline at end of file diff --git a/docs/workspace/jobs/jobs.rst b/docs/workspace/jobs/jobs.rst index 32cfd55c..773f6fb8 100644 --- a/docs/workspace/jobs/jobs.rst +++ b/docs/workspace/jobs/jobs.rst @@ -924,7 +924,7 @@ :returns: :class:`JobPermissions` - .. py:method:: submit( [, access_control_list: Optional[List[iam.AccessControlRequest]], condition_task: Optional[ConditionTask], dbt_task: Optional[DbtTask], email_notifications: Optional[JobEmailNotifications], git_source: Optional[GitSource], health: Optional[JobsHealthRules], idempotency_token: Optional[str], notebook_task: Optional[NotebookTask], notification_settings: Optional[JobNotificationSettings], pipeline_task: Optional[PipelineTask], python_wheel_task: Optional[PythonWheelTask], queue: Optional[QueueSettings], run_as: Optional[JobRunAs], run_job_task: Optional[RunJobTask], run_name: Optional[str], spark_jar_task: Optional[SparkJarTask], spark_python_task: Optional[SparkPythonTask], spark_submit_task: Optional[SparkSubmitTask], sql_task: Optional[SqlTask], tasks: Optional[List[SubmitTask]], timeout_seconds: Optional[int], webhook_notifications: Optional[WebhookNotifications]]) -> Wait[Run] + .. py:method:: submit( [, access_control_list: Optional[List[iam.AccessControlRequest]], email_notifications: Optional[JobEmailNotifications], environments: Optional[List[JobEnvironment]], git_source: Optional[GitSource], health: Optional[JobsHealthRules], idempotency_token: Optional[str], notification_settings: Optional[JobNotificationSettings], queue: Optional[QueueSettings], run_as: Optional[JobRunAs], run_name: Optional[str], tasks: Optional[List[SubmitTask]], timeout_seconds: Optional[int], webhook_notifications: Optional[WebhookNotifications]]) -> Wait[Run] Usage: @@ -962,14 +962,10 @@ :param access_control_list: List[:class:`AccessControlRequest`] (optional) List of permissions to set on the job. - :param condition_task: :class:`ConditionTask` (optional) - If condition_task, specifies a condition with an outcome that can be used to control the execution - of other tasks. Does not require a cluster to execute and does not support retries or notifications. - :param dbt_task: :class:`DbtTask` (optional) - If dbt_task, indicates that this must execute a dbt task. It requires both Databricks SQL and the - ability to use a serverless or a pro SQL warehouse. :param email_notifications: :class:`JobEmailNotifications` (optional) An optional set of email addresses notified when the run begins or completes. + :param environments: List[:class:`JobEnvironment`] (optional) + A list of task execution environment specifications that can be referenced by tasks of this run. :param git_source: :class:`GitSource` (optional) An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks. @@ -994,47 +990,16 @@ For more information, see [How to ensure idempotency for jobs]. [How to ensure idempotency for jobs]: https://kb.databricks.com/jobs/jobs-idempotency.html - :param notebook_task: :class:`NotebookTask` (optional) - If notebook_task, indicates that this task must run a notebook. This field may not be specified in - conjunction with spark_jar_task. :param notification_settings: :class:`JobNotificationSettings` (optional) Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this run. 
- :param pipeline_task: :class:`PipelineTask` (optional) - If pipeline_task, indicates that this task must execute a Pipeline. - :param python_wheel_task: :class:`PythonWheelTask` (optional) - If python_wheel_task, indicates that this job must execute a PythonWheel. :param queue: :class:`QueueSettings` (optional) The queue settings of the one-time run. :param run_as: :class:`JobRunAs` (optional) Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who submits the request. - :param run_job_task: :class:`RunJobTask` (optional) - If run_job_task, indicates that this task must execute another job. :param run_name: str (optional) An optional name for the run. The default value is `Untitled`. - :param spark_jar_task: :class:`SparkJarTask` (optional) - If spark_jar_task, indicates that this task must run a JAR. - :param spark_python_task: :class:`SparkPythonTask` (optional) - If spark_python_task, indicates that this task must run a Python file. - :param spark_submit_task: :class:`SparkSubmitTask` (optional) - If `spark_submit_task`, indicates that this task must be launched by the spark submit script. This - task can run only on new clusters. - - In the `new_cluster` specification, `libraries` and `spark_conf` are not supported. Instead, use - `--jars` and `--py-files` to add Java and Python libraries and `--conf` to set the Spark - configurations. - - `master`, `deploy-mode`, and `executor-cores` are automatically configured by Databricks; you - _cannot_ specify them in parameters. - - By default, the Spark submit job uses all available memory (excluding reserved memory for Databricks - services). You can set `--driver-memory`, and `--executor-memory` to a smaller value to leave some - room for off-heap usage. - - The `--jars`, `--py-files`, `--files` arguments support DBFS and S3 paths. - :param sql_task: :class:`SqlTask` (optional) - If sql_task, indicates that this job must execute a SQL task. :param tasks: List[:class:`SubmitTask`] (optional) :param timeout_seconds: int (optional) An optional timeout applied to each run of this job. A value of `0` means no timeout. @@ -1046,7 +1011,7 @@ See :method:wait_get_run_job_terminated_or_skipped for more details. - .. py:method:: submit_and_wait( [, access_control_list: Optional[List[iam.AccessControlRequest]], condition_task: Optional[ConditionTask], dbt_task: Optional[DbtTask], email_notifications: Optional[JobEmailNotifications], git_source: Optional[GitSource], health: Optional[JobsHealthRules], idempotency_token: Optional[str], notebook_task: Optional[NotebookTask], notification_settings: Optional[JobNotificationSettings], pipeline_task: Optional[PipelineTask], python_wheel_task: Optional[PythonWheelTask], queue: Optional[QueueSettings], run_as: Optional[JobRunAs], run_job_task: Optional[RunJobTask], run_name: Optional[str], spark_jar_task: Optional[SparkJarTask], spark_python_task: Optional[SparkPythonTask], spark_submit_task: Optional[SparkSubmitTask], sql_task: Optional[SqlTask], tasks: Optional[List[SubmitTask]], timeout_seconds: Optional[int], webhook_notifications: Optional[WebhookNotifications], timeout: datetime.timedelta = 0:20:00]) -> Run + .. 
py:method:: submit_and_wait( [, access_control_list: Optional[List[iam.AccessControlRequest]], email_notifications: Optional[JobEmailNotifications], environments: Optional[List[JobEnvironment]], git_source: Optional[GitSource], health: Optional[JobsHealthRules], idempotency_token: Optional[str], notification_settings: Optional[JobNotificationSettings], queue: Optional[QueueSettings], run_as: Optional[JobRunAs], run_name: Optional[str], tasks: Optional[List[SubmitTask]], timeout_seconds: Optional[int], webhook_notifications: Optional[WebhookNotifications], timeout: datetime.timedelta = 0:20:00]) -> Run .. py:method:: update(job_id: int [, fields_to_remove: Optional[List[str]], new_settings: Optional[JobSettings]]) diff --git a/docs/workspace/serving/apps.rst b/docs/workspace/serving/apps.rst index 9ce324a9..21b5f3f0 100644 --- a/docs/workspace/serving/apps.rst +++ b/docs/workspace/serving/apps.rst @@ -132,6 +132,18 @@ :returns: Iterator over :class:`AppDeployment` + .. py:method:: start(name: str) -> AppDeployment + + Start an app. + + Start the last active deployment of the app in the workspace. + + :param name: str + The name of the app. + + :returns: :class:`AppDeployment` + + .. py:method:: stop(name: str) Stop an app. diff --git a/docs/workspace/sql/alerts.rst b/docs/workspace/sql/alerts.rst index 49a518bd..26ae453a 100644 --- a/docs/workspace/sql/alerts.rst +++ b/docs/workspace/sql/alerts.rst @@ -8,6 +8,10 @@ periodically runs a query, evaluates a condition of its result, and notifies one or more users and/or notification destinations if the condition was met. Alerts can be scheduled using the `sql_task` type of the Jobs API, e.g. :method:jobs/create. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources .. py:method:: create(name: str, options: AlertOptions, query_id: str [, parent: Optional[str], rearm: Optional[int]]) -> Alert @@ -43,6 +47,10 @@ Creates an alert. An alert is a Databricks SQL object that periodically runs a query, evaluates a condition of its result, and notifies users or notification destinations if the condition was met. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param name: str Name of the alert. :param options: :class:`AlertOptions` @@ -62,9 +70,13 @@ Delete an alert. - Deletes an alert. Deleted alerts are no longer accessible and cannot be restored. **Note:** Unlike + Deletes an alert. Deleted alerts are no longer accessible and cannot be restored. **Note**: Unlike queries and dashboards, alerts cannot be moved to the trash. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param alert_id: str @@ -105,6 +117,10 @@ Gets an alert. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param alert_id: str :returns: :class:`Alert` @@ -127,6 +143,10 @@ Gets a list of alerts. 
+ **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :returns: Iterator over :class:`Alert` @@ -168,6 +188,10 @@ Updates an alert. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param alert_id: str :param name: str Name of the alert. diff --git a/docs/workspace/sql/dashboards.rst b/docs/workspace/sql/dashboards.rst index a59e625f..97ea1014 100644 --- a/docs/workspace/sql/dashboards.rst +++ b/docs/workspace/sql/dashboards.rst @@ -123,8 +123,8 @@ Fetch a paginated list of dashboard objects. - ### **Warning: Calling this API concurrently 10 or more times could result in throttling, service - degradation, or a temporary ban.** + **Warning**: Calling this API concurrently 10 or more times could result in throttling, service + degradation, or a temporary ban. :param order: :class:`ListOrder` (optional) Name of dashboard attribute to order by. diff --git a/docs/workspace/sql/data_sources.rst b/docs/workspace/sql/data_sources.rst index 5cf1ed52..dcab7506 100644 --- a/docs/workspace/sql/data_sources.rst +++ b/docs/workspace/sql/data_sources.rst @@ -11,6 +11,10 @@ This API does not support searches. It returns the full list of SQL warehouses in your workspace. We advise you to use any text editor, REST client, or `grep` to search the response from this API for the name of your SQL warehouse as it appears in Databricks SQL. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources .. py:method:: list() -> Iterator[DataSource] @@ -31,5 +35,9 @@ API response are enumerated for clarity. However, you need only a SQL warehouse's `id` to create new queries against it. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :returns: Iterator over :class:`DataSource` \ No newline at end of file diff --git a/docs/workspace/sql/dbsql_permissions.rst b/docs/workspace/sql/dbsql_permissions.rst index 07aa4f00..fbf1aac2 100644 --- a/docs/workspace/sql/dbsql_permissions.rst +++ b/docs/workspace/sql/dbsql_permissions.rst @@ -15,6 +15,10 @@ - `CAN_RUN`: Allows read access and run access (superset of `CAN_VIEW`) - `CAN_MANAGE`: Allows all actions: read, run, edit, delete, modify permissions (superset of `CAN_RUN`) + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources .. py:method:: get(object_type: ObjectTypePlural, object_id: str) -> GetResponse @@ -22,6 +26,10 @@ Gets a JSON representation of the access control list (ACL) for a specified object. + **Note**: A new version of the Databricks SQL API will soon be available. 
[Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param object_type: :class:`ObjectTypePlural` The type of object permissions to check. :param object_id: str @@ -37,6 +45,10 @@ Sets the access control list (ACL) for a specified object. This operation will complete rewrite the ACL. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param object_type: :class:`ObjectTypePlural` The type of object permission to set. :param object_id: str @@ -52,6 +64,10 @@ Transfers ownership of a dashboard, query, or alert to an active user. Requires an admin API key. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param object_type: :class:`OwnableObjectType` The type of object on which to change ownership. :param object_id: :class:`TransferOwnershipObjectId` diff --git a/docs/workspace/sql/queries.rst b/docs/workspace/sql/queries.rst index d15de54f..d26ff2ba 100644 --- a/docs/workspace/sql/queries.rst +++ b/docs/workspace/sql/queries.rst @@ -7,6 +7,10 @@ These endpoints are used for CRUD operations on query definitions. Query definitions include the target SQL warehouse, query text, name, description, tags, parameters, and visualizations. Queries can be scheduled using the `sql_task` type of the Jobs API, e.g. :method:jobs/create. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources .. py:method:: create( [, data_source_id: Optional[str], description: Optional[str], name: Optional[str], options: Optional[Any], parent: Optional[str], query: Optional[str], run_as_role: Optional[RunAsRole], tags: Optional[List[str]]]) -> Query @@ -42,9 +46,13 @@ **Note**: You cannot add a visualization until you create the query. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param data_source_id: str (optional) Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list :param description: str (optional) @@ -74,6 +82,10 @@ Moves a query to the trash. Trashed queries immediately disappear from searches and list views, and they cannot be used for alerts. The trash is deleted after 30 days. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str @@ -109,6 +121,10 @@ Retrieve a query object definition along with contextual permissions information about the currently authenticated user. + **Note**: A new version of the Databricks SQL API will soon be available. 
[Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str :returns: :class:`Query` @@ -120,8 +136,12 @@ Gets a list of queries. Optionally, this list can be filtered by a search term. - ### **Warning: Calling this API concurrently 10 or more times could result in throttling, service - degradation, or a temporary ban.** + **Warning**: Calling this API concurrently 10 or more times could result in throttling, service + degradation, or a temporary ban. + + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources :param order: str (optional) Name of query attribute to order by. Default sort order is ascending. Append a dash (`-`) to order @@ -154,6 +174,10 @@ Restore a query that has been moved to the trash. A restored query appears in list views and searches. You can use restored queries for alerts. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str @@ -194,10 +218,14 @@ **Note**: You cannot undo this operation. + **Note**: A new version of the Databricks SQL API will soon be available. [Learn more] + + [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources + :param query_id: str :param data_source_id: str (optional) Data source ID maps to the ID of the data source used by the resource and is distinct from the - warehouse ID. [Learn more]. + warehouse ID. [Learn more] [Learn more]: https://docs.databricks.com/api/workspace/datasources/list :param description: str (optional) diff --git a/docs/workspace/sql/statement_execution.rst b/docs/workspace/sql/statement_execution.rst index d5c47946..7914977c 100644 --- a/docs/workspace/sql/statement_execution.rst +++ b/docs/workspace/sql/statement_execution.rst @@ -108,8 +108,9 @@ :param statement: str The SQL statement to execute. The statement can optionally be parameterized, see `parameters`. :param warehouse_id: str - Warehouse upon which to execute a statement. See also [What are SQL - warehouses?](/sql/admin/warehouse-type.html) + Warehouse upon which to execute a statement. See also [What are SQL warehouses?] + + [What are SQL warehouses?]: https://docs.databricks.com/sql/admin/warehouse-type.html :param byte_limit: int (optional) Applies the given byte limit to the statement's result size. Byte counts are based on internal data representations and might not match the final size in the requested `format`. If the result was diff --git a/docs/workspace/vectorsearch/vector_search_indexes.rst b/docs/workspace/vectorsearch/vector_search_indexes.rst index 5c2f5f45..415e19d9 100644 --- a/docs/workspace/vectorsearch/vector_search_indexes.rst +++ b/docs/workspace/vectorsearch/vector_search_indexes.rst @@ -91,7 +91,7 @@ :returns: Iterator over :class:`MiniVectorIndex` - .. py:method:: query_index(index_name: str, columns: List[str] [, filters_json: Optional[str], num_results: Optional[int], query_text: Optional[str], query_vector: Optional[List[float]], score_threshold: Optional[float]]) -> QueryVectorIndexResponse + .. 
py:method:: query_index(index_name: str, columns: List[str] [, filters_json: Optional[str], num_results: Optional[int], query_text: Optional[str], query_type: Optional[str], query_vector: Optional[List[float]], score_threshold: Optional[float]]) -> QueryVectorIndexResponse Query an index. @@ -111,6 +111,8 @@ Number of results to return. Defaults to 10. :param query_text: str (optional) Query text. Required for Delta Sync Index using model endpoint. + :param query_type: str (optional) + The query type to use. Choices are `ANN` and `HYBRID`. Defaults to `ANN`. :param query_vector: List[float] (optional) Query vector. Required for Direct Vector Access Index and Delta Sync Index using self-managed vectors. @@ -120,6 +122,23 @@ :returns: :class:`QueryVectorIndexResponse` + .. py:method:: query_next_page(index_name: str [, endpoint_name: Optional[str], page_token: Optional[str]]) -> QueryVectorIndexResponse + + Query next page. + + Use `next_page_token` returned from previous `QueryVectorIndex` or `QueryVectorIndexNextPage` request + to fetch next page of results. + + :param index_name: str + Name of the vector index to query. + :param endpoint_name: str (optional) + Name of the endpoint. + :param page_token: str (optional) + Page token returned from previous `QueryVectorIndex` or `QueryVectorIndexNextPage` API. + + :returns: :class:`QueryVectorIndexResponse` + + .. py:method:: scan_index(index_name: str [, last_primary_key: Optional[str], num_results: Optional[int]]) -> ScanVectorIndexResponse Scan an index. diff --git a/examples/workspace/catalogs/update_catalog_workspace_bindings.py b/examples/workspace/catalogs/update_catalog_workspace_bindings.py index e7677573..09a97dee 100755 --- a/examples/workspace/catalogs/update_catalog_workspace_bindings.py +++ b/examples/workspace/catalogs/update_catalog_workspace_bindings.py @@ -7,7 +7,7 @@ created = w.catalogs.create(name=f'sdk-{time.time_ns()}') -_ = w.catalogs.update(name=created.name, isolation_mode=catalog.isolation_mode_isolation_mode_isolated) +_ = w.catalogs.update(name=created.name, isolation_mode=catalog.CatalogIsolationMode.ISOLATED) # cleanup w.catalogs.delete(name=created.name, force=True)
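For illustration only (not part of the patch): a minimal sketch of how the new `query_type` parameter and `query_next_page()` method added to `databricks/sdk/service/vectorsearch.py` in this release might be combined to page through query results. The index and endpoint names are placeholders, and the `result.data_array` access assumes the existing `ResultData` row payload; treat this as a hedged usage sketch, not a shipped example.

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# First page: the new query_type parameter selects `ANN` (the default) or `HYBRID` retrieval.
resp = w.vector_search_indexes.query_index(index_name='main.default.my_index',
                                           columns=['id', 'text'],
                                           query_text='how do I page through results?',
                                           query_type='HYBRID',
                                           num_results=10)

while True:
    # data_array is assumed to hold the rows of the current page.
    if resp.result and resp.result.data_array:
        for row in resp.result.data_array:
            print(row)
    # next_page_token is empty when no further results are available.
    if not resp.next_page_token:
        break
    # Fetch the next page with the new query_next_page() method.
    resp = w.vector_search_indexes.query_next_page(index_name='main.default.my_index',
                                                   endpoint_name='my-endpoint',
                                                   page_token=resp.next_page_token)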