Release synapse #14047

Merged 2 commits on Sep 27, 2020
8 changes: 8 additions & 0 deletions sdk/synapse/azure-mgmt-synapse/CHANGELOG.md
@@ -1,5 +1,13 @@
# Release History

## 0.4.0 (2020-09-25)

**Features**

- Model BigDataPoolResourceInfo has a new parameter is_compute_isolation_enabled
- Model Workspace has a new parameter extra_properties
- Model Sku has a new parameter capacity

## 0.3.0 (2020-06-17)

**Features**
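As a quick sanity check that the 0.4.0 additions listed in the entry above are available in an installed environment, the following minimal sketch touches only the new attributes (it assumes `azure-mgmt-synapse>=0.4.0`; per-model usage is sketched next to each class further down in the diff):

```python
# Quick check that the 0.4.0 model additions are present (assumes azure-mgmt-synapse>=0.4.0).
from azure.mgmt.synapse import models

assert hasattr(models.BigDataPoolResourceInfo(location="westus2"), "is_compute_isolation_enabled")
assert hasattr(models.Sku(), "capacity")
assert hasattr(models.Workspace(location="westus2"), "extra_properties")
print("0.4.0 model additions present")
```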
2 changes: 1 addition & 1 deletion sdk/synapse/azure-mgmt-synapse/README.md
@@ -2,7 +2,7 @@

This is the Microsoft Azure Synapse Management Client Library.
This package has been tested with Python 2.7, 3.5, 3.6, 3.7 and 3.8.
For a more complete view of Azure libraries, see the [azure sdk python release](https://aka.ms/azsdk/python/all)
For a more complete view of Azure libraries, see the [azure sdk python release](https://aka.ms/azsdk/python/all).


# Usage
@@ -290,6 +290,9 @@ class BigDataPoolResourceInfo(TrackedResource):
:type creation_date: datetime
:param auto_pause: Auto-pausing properties
:type auto_pause: ~azure.mgmt.synapse.models.AutoPauseProperties
:param is_compute_isolation_enabled: Whether compute isolation is required
or not.
:type is_compute_isolation_enabled: bool
:param spark_events_folder: The Spark events folder
:type spark_events_folder: str
:param node_count: The number of nodes in the Big Data pool.
@@ -302,7 +305,8 @@ class BigDataPoolResourceInfo(TrackedResource):
be written.
:type default_spark_log_folder: str
:param node_size: The level of compute power that each node in the Big
Data pool has. Possible values include: 'None', 'Small', 'Medium', 'Large'
Data pool has. Possible values include: 'None', 'Small', 'Medium',
'Large', 'XLarge', 'XXLarge'
:type node_size: str or ~azure.mgmt.synapse.models.NodeSize
:param node_size_family: The kind of nodes that the Big Data pool
provides. Possible values include: 'None', 'MemoryOptimized'
@@ -326,6 +330,7 @@ class BigDataPoolResourceInfo(TrackedResource):
'auto_scale': {'key': 'properties.autoScale', 'type': 'AutoScaleProperties'},
'creation_date': {'key': 'properties.creationDate', 'type': 'iso-8601'},
'auto_pause': {'key': 'properties.autoPause', 'type': 'AutoPauseProperties'},
'is_compute_isolation_enabled': {'key': 'properties.isComputeIsolationEnabled', 'type': 'bool'},
'spark_events_folder': {'key': 'properties.sparkEventsFolder', 'type': 'str'},
'node_count': {'key': 'properties.nodeCount', 'type': 'int'},
'library_requirements': {'key': 'properties.libraryRequirements', 'type': 'LibraryRequirements'},
@@ -341,6 +346,7 @@ def __init__(self, **kwargs):
self.auto_scale = kwargs.get('auto_scale', None)
self.creation_date = kwargs.get('creation_date', None)
self.auto_pause = kwargs.get('auto_pause', None)
self.is_compute_isolation_enabled = kwargs.get('is_compute_isolation_enabled', None)
self.spark_events_folder = kwargs.get('spark_events_folder', None)
self.node_count = kwargs.get('node_count', None)
self.library_requirements = kwargs.get('library_requirements', None)
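The hunk above adds the isolation flag alongside the existing auto-scale and auto-pause settings, and the `node_size` docstring now lists 'XLarge' and 'XXLarge'. A minimal sketch of building the pool model with these additions follows; the region, node counts, and Spark version are illustrative, and the `AutoScaleProperties`/`AutoPauseProperties` parameter names are assumed to match the generated models in this package:

```python
# Illustrative sketch of the kwargs-based model with the 0.4.0 additions.
from azure.mgmt.synapse.models import (
    AutoPauseProperties,
    AutoScaleProperties,
    BigDataPoolResourceInfo,
)

pool = BigDataPoolResourceInfo(
    location="westus2",
    node_size="XLarge",                 # 'XLarge' and 'XXLarge' are newly accepted sizes
    node_size_family="MemoryOptimized",
    is_compute_isolation_enabled=True,  # new in 0.4.0
    auto_scale=AutoScaleProperties(enabled=True, min_node_count=3, max_node_count=10),
    auto_pause=AutoPauseProperties(enabled=True, delay_in_minutes=15),
    spark_version="2.4",
)
```

The attribute map in this hunk shows where the flag lands on the wire: `properties.isComputeIsolationEnabled`.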
@@ -1293,19 +1299,21 @@ def __init__(self, **kwargs):
self.key_name = kwargs.get('key_name', None)


class SubResource(Model):
"""Azure Synapse nested resource, which belongs to a factory.
class SubResource(AzureEntityResource):
"""Azure Synapse nested resource, which belongs to a workspace.

Variables are only populated by the server, and will be ignored when
sending a request.

:ivar id: The resource identifier.
:ivar id: Fully qualified resource Id for the resource. Ex -
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
:vartype id: str
:ivar name: The resource name.
:ivar name: The name of the resource
:vartype name: str
:ivar type: The resource type.
:ivar type: The type of the resource. Ex-
Microsoft.Compute/virtualMachines or Microsoft.Storage/storageAccounts.
:vartype type: str
:ivar etag: Etag identifies change in the resource.
:ivar etag: Resource Etag.
:vartype etag: str
"""

@@ -1325,10 +1333,6 @@ class SubResource(Model):

def __init__(self, **kwargs):
super(SubResource, self).__init__(**kwargs)
self.id = None
self.name = None
self.type = None
self.etag = None


class IntegrationRuntimeResource(SubResource):
@@ -1339,13 +1343,15 @@ class IntegrationRuntimeResource(SubResource):

All required parameters must be populated in order to send to Azure.

:ivar id: The resource identifier.
:ivar id: Fully qualified resource Id for the resource. Ex -
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
:vartype id: str
:ivar name: The resource name.
:ivar name: The name of the resource
:vartype name: str
:ivar type: The resource type.
:ivar type: The type of the resource. Ex-
Microsoft.Compute/virtualMachines or Microsoft.Storage/storageAccounts.
:vartype type: str
:ivar etag: Etag identifies change in the resource.
:ivar etag: Resource Etag.
:vartype etag: str
:param properties: Required. Integration runtime properties.
:type properties: ~azure.mgmt.synapse.models.IntegrationRuntime
@@ -3365,17 +3371,23 @@ class Sku(Model):
:type tier: str
:param name: The SKU name
:type name: str
:param capacity: If the SKU supports scale out/in then the capacity
integer should be included. If scale out/in is not possible for the
resource this may be omitted.
:type capacity: int
"""

_attribute_map = {
'tier': {'key': 'tier', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'capacity': {'key': 'capacity', 'type': 'int'},
}

def __init__(self, **kwargs):
super(Sku, self).__init__(**kwargs)
self.tier = kwargs.get('tier', None)
self.name = kwargs.get('name', None)
self.capacity = kwargs.get('capacity', None)


class SqlPool(TrackedResource):
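With `capacity` added, a `Sku` can carry an explicit scale value when the target resource supports scale out/in. A minimal sketch follows; the SKU name and capacity are illustrative assumptions, not values taken from this diff:

```python
from azure.mgmt.synapse.models import Sku, SqlPool

# capacity is optional: include it only when the resource supports scale out/in.
pool_sku = Sku(name="DW100c", capacity=1)            # illustrative name and capacity
sql_pool = SqlPool(location="westus2", sku=pool_sku)
```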
@@ -4989,8 +5001,13 @@ class Workspace(TrackedResource):
~azure.mgmt.synapse.models.DataLakeStorageAccountDetails
:param sql_administrator_login_password: SQL administrator login password
:type sql_administrator_login_password: str
:ivar managed_resource_group_name: Workspace managed resource group
:vartype managed_resource_group_name: str
:param managed_resource_group_name: Workspace managed resource group. The
resource group name uniquely identifies the resource group within the user
subscriptionId. The resource group name must be no longer than 90
characters long, and must be alphanumeric characters
(Char.IsLetterOrDigit()) and '-', '_', '(', ')' and '.'. Note that the name
cannot end with '.'
:type managed_resource_group_name: str
:ivar provisioning_state: Resource provisioning state
:vartype provisioning_state: str
:param sql_administrator_login: Login for workspace SQL active directory
@@ -5009,6 +5026,8 @@
workspace
:type private_endpoint_connections:
list[~azure.mgmt.synapse.models.PrivateEndpointConnection]
:ivar extra_properties: Workspace level configs and feature flags
:vartype extra_properties: dict[str, object]
:param identity: Identity of the workspace
:type identity: ~azure.mgmt.synapse.models.ManagedIdentity
"""
@@ -5018,8 +5037,8 @@
'name': {'readonly': True},
'type': {'readonly': True},
'location': {'required': True},
'managed_resource_group_name': {'readonly': True},
'provisioning_state': {'readonly': True},
'extra_properties': {'readonly': True},
}

_attribute_map = {
@@ -5037,20 +5056,22 @@
'connectivity_endpoints': {'key': 'properties.connectivityEndpoints', 'type': '{str}'},
'managed_virtual_network': {'key': 'properties.managedVirtualNetwork', 'type': 'str'},
'private_endpoint_connections': {'key': 'properties.privateEndpointConnections', 'type': '[PrivateEndpointConnection]'},
'extra_properties': {'key': 'properties.extraProperties', 'type': '{object}'},
'identity': {'key': 'identity', 'type': 'ManagedIdentity'},
}

def __init__(self, **kwargs):
super(Workspace, self).__init__(**kwargs)
self.default_data_lake_storage = kwargs.get('default_data_lake_storage', None)
self.sql_administrator_login_password = kwargs.get('sql_administrator_login_password', None)
self.managed_resource_group_name = None
self.managed_resource_group_name = kwargs.get('managed_resource_group_name', None)
self.provisioning_state = None
self.sql_administrator_login = kwargs.get('sql_administrator_login', None)
self.virtual_network_profile = kwargs.get('virtual_network_profile', None)
self.connectivity_endpoints = kwargs.get('connectivity_endpoints', None)
self.managed_virtual_network = kwargs.get('managed_virtual_network', None)
self.private_endpoint_connections = kwargs.get('private_endpoint_connections', None)
self.extra_properties = None
self.identity = kwargs.get('identity', None)


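`managed_resource_group_name` changes direction here: it is no longer read-only, so callers can pick the managed resource group when building the workspace model, while the new `extra_properties` bag moves the other way and is populated only by the service. A minimal sketch with illustrative values (the storage account URL and names are assumptions):

```python
from azure.mgmt.synapse.models import DataLakeStorageAccountDetails, Workspace

workspace = Workspace(
    location="westus2",
    managed_resource_group_name="my-synapse-managed-rg",   # settable as of this change
    default_data_lake_storage=DataLakeStorageAccountDetails(
        account_url="https://mydatalake.dfs.core.windows.net",  # illustrative account
        filesystem="workspacefs",
    ),
    sql_administrator_login="sqladminuser",
    sql_administrator_login_password="<placeholder-password>",
)

# extra_properties is read-only: it stays None locally and is filled in by the service.
print(workspace.extra_properties)
```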
@@ -290,6 +290,9 @@ class BigDataPoolResourceInfo(TrackedResource):
:type creation_date: datetime
:param auto_pause: Auto-pausing properties
:type auto_pause: ~azure.mgmt.synapse.models.AutoPauseProperties
:param is_compute_isolation_enabled: Whether compute isolation is required
or not.
:type is_compute_isolation_enabled: bool
:param spark_events_folder: The Spark events folder
:type spark_events_folder: str
:param node_count: The number of nodes in the Big Data pool.
@@ -302,7 +305,8 @@ class BigDataPoolResourceInfo(TrackedResource):
be written.
:type default_spark_log_folder: str
:param node_size: The level of compute power that each node in the Big
Data pool has. Possible values include: 'None', 'Small', 'Medium', 'Large'
Data pool has. Possible values include: 'None', 'Small', 'Medium',
'Large', 'XLarge', 'XXLarge'
:type node_size: str or ~azure.mgmt.synapse.models.NodeSize
:param node_size_family: The kind of nodes that the Big Data pool
provides. Possible values include: 'None', 'MemoryOptimized'
@@ -326,6 +330,7 @@ class BigDataPoolResourceInfo(TrackedResource):
'auto_scale': {'key': 'properties.autoScale', 'type': 'AutoScaleProperties'},
'creation_date': {'key': 'properties.creationDate', 'type': 'iso-8601'},
'auto_pause': {'key': 'properties.autoPause', 'type': 'AutoPauseProperties'},
'is_compute_isolation_enabled': {'key': 'properties.isComputeIsolationEnabled', 'type': 'bool'},
'spark_events_folder': {'key': 'properties.sparkEventsFolder', 'type': 'str'},
'node_count': {'key': 'properties.nodeCount', 'type': 'int'},
'library_requirements': {'key': 'properties.libraryRequirements', 'type': 'LibraryRequirements'},
@@ -335,12 +340,13 @@ class BigDataPoolResourceInfo(TrackedResource):
'node_size_family': {'key': 'properties.nodeSizeFamily', 'type': 'str'},
}

def __init__(self, *, location: str, tags=None, provisioning_state: str=None, auto_scale=None, creation_date=None, auto_pause=None, spark_events_folder: str=None, node_count: int=None, library_requirements=None, spark_version: str=None, default_spark_log_folder: str=None, node_size=None, node_size_family=None, **kwargs) -> None:
def __init__(self, *, location: str, tags=None, provisioning_state: str=None, auto_scale=None, creation_date=None, auto_pause=None, is_compute_isolation_enabled: bool=None, spark_events_folder: str=None, node_count: int=None, library_requirements=None, spark_version: str=None, default_spark_log_folder: str=None, node_size=None, node_size_family=None, **kwargs) -> None:
super(BigDataPoolResourceInfo, self).__init__(tags=tags, location=location, **kwargs)
self.provisioning_state = provisioning_state
self.auto_scale = auto_scale
self.creation_date = creation_date
self.auto_pause = auto_pause
self.is_compute_isolation_enabled = is_compute_isolation_enabled
self.spark_events_folder = spark_events_folder
self.node_count = node_count
self.library_requirements = library_requirements
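The Python-3-only module exposes the same addition through a keyword-only, type-annotated constructor. The sketch below also uses the msrest `Serializer` that this generated SDK ships with to show the flag flattening to `properties.isComputeIsolationEnabled`, as declared in the attribute map above:

```python
from msrest import Serializer

import azure.mgmt.synapse.models as models

pool = models.BigDataPoolResourceInfo(
    location="westus2",
    node_size="XXLarge",
    is_compute_isolation_enabled=True,
)

# Build a serializer from the model classes, the same way the generated client does.
client_models = {k: v for k, v in models.__dict__.items() if isinstance(v, type)}
body = Serializer(client_models).body(pool, "BigDataPoolResourceInfo")
print(body["properties"]["isComputeIsolationEnabled"])  # True
```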
@@ -1293,19 +1299,21 @@ def __init__(self, *, key_name=None, **kwargs) -> None:
self.key_name = key_name


class SubResource(Model):
"""Azure Synapse nested resource, which belongs to a factory.
class SubResource(AzureEntityResource):
"""Azure Synapse nested resource, which belongs to a workspace.

Variables are only populated by the server, and will be ignored when
sending a request.

:ivar id: The resource identifier.
:ivar id: Fully qualified resource Id for the resource. Ex -
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
:vartype id: str
:ivar name: The resource name.
:ivar name: The name of the resource
:vartype name: str
:ivar type: The resource type.
:ivar type: The type of the resource. Ex-
Microsoft.Compute/virtualMachines or Microsoft.Storage/storageAccounts.
:vartype type: str
:ivar etag: Etag identifies change in the resource.
:ivar etag: Resource Etag.
:vartype etag: str
"""

@@ -1325,10 +1333,6 @@ class SubResource(Model):

def __init__(self, **kwargs) -> None:
super(SubResource, self).__init__(**kwargs)
self.id = None
self.name = None
self.type = None
self.etag = None


class IntegrationRuntimeResource(SubResource):
@@ -1339,13 +1343,15 @@ class IntegrationRuntimeResource(SubResource):

All required parameters must be populated in order to send to Azure.

:ivar id: The resource identifier.
:ivar id: Fully qualified resource Id for the resource. Ex -
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
:vartype id: str
:ivar name: The resource name.
:ivar name: The name of the resource
:vartype name: str
:ivar type: The resource type.
:ivar type: The type of the resource. Ex-
Microsoft.Compute/virtualMachines or Microsoft.Storage/storageAccounts.
:vartype type: str
:ivar etag: Etag identifies change in the resource.
:ivar etag: Resource Etag.
:vartype etag: str
:param properties: Required. Integration runtime properties.
:type properties: ~azure.mgmt.synapse.models.IntegrationRuntime
@@ -3365,17 +3371,23 @@ class Sku(Model):
:type tier: str
:param name: The SKU name
:type name: str
:param capacity: If the SKU supports scale out/in then the capacity
integer should be included. If scale out/in is not possible for the
resource this may be omitted.
:type capacity: int
"""

_attribute_map = {
'tier': {'key': 'tier', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'capacity': {'key': 'capacity', 'type': 'int'},
}

def __init__(self, *, tier: str=None, name: str=None, **kwargs) -> None:
def __init__(self, *, tier: str=None, name: str=None, capacity: int=None, **kwargs) -> None:
super(Sku, self).__init__(**kwargs)
self.tier = tier
self.name = name
self.capacity = capacity


class SqlPool(TrackedResource):
@@ -4989,8 +5001,13 @@ class Workspace(TrackedResource):
~azure.mgmt.synapse.models.DataLakeStorageAccountDetails
:param sql_administrator_login_password: SQL administrator login password
:type sql_administrator_login_password: str
:ivar managed_resource_group_name: Workspace managed resource group
:vartype managed_resource_group_name: str
:param managed_resource_group_name: Workspace managed resource group. The
resource group name uniquely identifies the resource group within the user
subscriptionId. The resource group name must be no longer than 90
characters long, and must be alphanumeric characters
(Char.IsLetterOrDigit()) and '-', '_', '(', ')' and '.'. Note that the name
cannot end with '.'
:type managed_resource_group_name: str
:ivar provisioning_state: Resource provisioning state
:vartype provisioning_state: str
:param sql_administrator_login: Login for workspace SQL active directory
@@ -5009,6 +5026,8 @@
workspace
:type private_endpoint_connections:
list[~azure.mgmt.synapse.models.PrivateEndpointConnection]
:ivar extra_properties: Workspace level configs and feature flags
:vartype extra_properties: dict[str, object]
:param identity: Identity of the workspace
:type identity: ~azure.mgmt.synapse.models.ManagedIdentity
"""
@@ -5018,8 +5037,8 @@
'name': {'readonly': True},
'type': {'readonly': True},
'location': {'required': True},
'managed_resource_group_name': {'readonly': True},
'provisioning_state': {'readonly': True},
'extra_properties': {'readonly': True},
}

_attribute_map = {
Expand All @@ -5037,20 +5056,22 @@ class Workspace(TrackedResource):
'connectivity_endpoints': {'key': 'properties.connectivityEndpoints', 'type': '{str}'},
'managed_virtual_network': {'key': 'properties.managedVirtualNetwork', 'type': 'str'},
'private_endpoint_connections': {'key': 'properties.privateEndpointConnections', 'type': '[PrivateEndpointConnection]'},
'extra_properties': {'key': 'properties.extraProperties', 'type': '{object}'},
'identity': {'key': 'identity', 'type': 'ManagedIdentity'},
}

def __init__(self, *, location: str, tags=None, default_data_lake_storage=None, sql_administrator_login_password: str=None, sql_administrator_login: str=None, virtual_network_profile=None, connectivity_endpoints=None, managed_virtual_network: str=None, private_endpoint_connections=None, identity=None, **kwargs) -> None:
def __init__(self, *, location: str, tags=None, default_data_lake_storage=None, sql_administrator_login_password: str=None, managed_resource_group_name: str=None, sql_administrator_login: str=None, virtual_network_profile=None, connectivity_endpoints=None, managed_virtual_network: str=None, private_endpoint_connections=None, identity=None, **kwargs) -> None:
super(Workspace, self).__init__(tags=tags, location=location, **kwargs)
self.default_data_lake_storage = default_data_lake_storage
self.sql_administrator_login_password = sql_administrator_login_password
self.managed_resource_group_name = None
self.managed_resource_group_name = managed_resource_group_name
self.provisioning_state = None
self.sql_administrator_login = sql_administrator_login
self.virtual_network_profile = virtual_network_profile
self.connectivity_endpoints = connectivity_endpoints
self.managed_virtual_network = managed_virtual_network
self.private_endpoint_connections = private_endpoint_connections
self.extra_properties = None
self.identity = identity


@@ -18,6 +18,8 @@ class NodeSize(str, Enum):
small = "Small"
medium = "Medium"
large = "Large"
xlarge = "XLarge"
xx_large = "XXLarge"


class NodeSizeFamily(str, Enum):
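Because `NodeSize` is a `str` enum, the two new members can be passed anywhere `node_size` is expected, either as the member or as its string value. A small sketch:

```python
from azure.mgmt.synapse.models import NodeSize

print(NodeSize.xlarge.value)    # "XLarge"
print(NodeSize.xx_large.value)  # "XXLarge"

# Either form works where node_size is expected, e.g. node_size=NodeSize.xx_large
# or node_size="XXLarge".
```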