
Fix v3 appservice ci #11844

Closed
wants to merge 25 commits

Changes from all commits
25 commits
5693dd4
Whitelist what pickle is serializing in context (#11688)
lmazuel Jun 3, 2020
6123e8f
[text analytics] param to ivar in model docstrings (#11788)
iscai-msft Jun 3, 2020
e49027c
Core 1.6.0 release notes (#11786)
lmazuel Jun 3, 2020
3094344
[form recognizer] Remove US receipt (#11764)
iscai-msft Jun 3, 2020
a9de6f0
[formrecognizer] adds AsyncLROPoller and continuation token support (…
kristapratico Jun 3, 2020
ad179ad
add more logging info for next time model_version comes back None (#1…
iscai-msft Jun 3, 2020
0e07b20
Add AzureCliCredential and VSCodeCredential to public API (#11790)
chlowell Jun 3, 2020
3761730
[HDInsight] Fix hdi test failure (#11806)
aim-for-better Jun 4, 2020
12e2e34
disable some by design bandit warnings (#11495)
xiangyan99 Jun 4, 2020
92d6389
Increment package version after release of azure_core (#11795)
azure-sdk Jun 4, 2020
48d6007
Use subject claim as home_account_id when no client_info (#11639)
chlowell Jun 4, 2020
d4633cf
Refactor ClientCertificateCredential to use AadClient (#11719)
chlowell Jun 4, 2020
0ec1601
Refactor ClientSecretCredential to use AadClient (#11718)
chlowell Jun 4, 2020
9f67111
[Cosmos] Fixed incorrect ID type error (#11798)
annatisch Jun 4, 2020
17161be
[text analytics] Update readme (#11796)
iscai-msft Jun 4, 2020
d4bd596
try something (#11797)
kristapratico Jun 4, 2020
aad9601
Search refactoring 3 (#11804)
xiangyan99 Jun 4, 2020
2a730c7
update targeted pat (#11826)
scbedd Jun 4, 2020
6345cb8
Update ChangeLog of customvision (#11827)
lmazuel Jun 4, 2020
d7c0972
docs: fix typos (#11742)
pgrimaud Jun 4, 2020
da470e0
Add missing dependency for azure-common (#11407)
ad-m Jun 4, 2020
bce84db
fix typing and docs for initial_response parameter of LROPoller (#11717)
iscai-msft Jun 4, 2020
717c419
AzureCliCredential is unavailable when no account is logged in (#11829)
chlowell Jun 5, 2020
5102b89
Fix CI config for appservice directory.
mitchdenny Jun 5, 2020
543a38b
Fix CI config for appservice directory. (#11842)
mitchdenny Jun 5, 2020
2 changes: 1 addition & 1 deletion common/smoketest/README.md
@@ -74,7 +74,7 @@ pip install -r requiriments.txt
pip install -r requiriments_async.txt
```

If a python version below 3.5 is being used, it is still possible to run the samples. When it gets to the async tests a message `'Async not suported'` will be displayed.
If a python version below 3.5 is being used, it is still possible to run the samples. When it gets to the async tests a message `'Async not supported'` will be displayed.

## Key concepts

2 changes: 1 addition & 1 deletion doc/dev/mgmt/generating-integration-test.md
@@ -127,7 +127,7 @@ now you can run live integration test:

## Fixing Test

It's obvious that when running test for the first time someting is not going to work.
It's obvious that when running test for the first time something is not going to work.

The best approach is to:
- fix the test manually
2 changes: 1 addition & 1 deletion doc/dev/mgmt/generation.md
@@ -8,7 +8,7 @@ IMPORTANT NOTE: All the commands prefixed by `python` in this page assumes you h

### Autorest versioning

A few notes on [Autorest for Python versionning](https://github.com/Azure/autorest.python/blob/master/ChangeLog.md):
A few notes on [Autorest for Python versioning](https://github.com/Azure/autorest.python/blob/master/ChangeLog.md):
- Autorest for Python v2.x is deprecated, and should not be used anymore for any generation under any circumstances.
- Autorest for Python v3.x is the most currently used one. Should not be used, but still ok if service team are still in v3.x and they want to avoid breaking changes for a given version (rare).
- Autorest for Python v4.x is the current recommendation. This generator can generates async code, but this should be disabled with --no-async. No package should be shipped with async based on v4
4 changes: 2 additions & 2 deletions doc/dev/mgmt/multiapi.md
@@ -14,7 +14,7 @@ Because there is different flavors of Azure that are not necessarly provided wit

### Why a multi-api package?

Indeed, a simple solution would be to write down explictly what version of SDK supports what API version. Example: 1.0 supports 2015-06-01, 2.0 supports 2017-07-01, etc. The story for customers then would be to pin the specific SDK version for the specific API version they need. However, this was considered unacceptable in an end-to-end scenario:
Indeed, a simple solution would be to write down explicitly what version of SDK supports what API version. Example: 1.0 supports 2015-06-01, 2.0 supports 2017-07-01, etc. The story for customers then would be to pin the specific SDK version for the specific API version they need. However, this was considered unacceptable in an end-to-end scenario:
- It means you cannot install in the same Python environment packages that would target different cloud (Python doesn't allow installation of different versions of the same package together). Azure CLI or Ansible supports for different clouds would then be extremely complicated.
- This forces customers to use old SDK, that might have been fixed on different axis than API version (security fixes, new SDK features like async, etc.)
- Customers rarely needs only one package, but a set of them (storage, compute, network, etc.) and having to keep track of the correct list of packages is challenging.
@@ -44,7 +44,7 @@ Network interfaces operations are defines in a [network interface file](https://

**Python multi-api packaging is based on the assumptions that it's true.** If it's not, it's usually ok but requires a little more subtle packaging (see final section here)

Being that a given Swagger defines only *one* fixed API version, doing multi-api version in one package implies shipping several Swagger files into one package. This is achived by the `batch` directive of Autorest. More details on how to write Readme for Swagger in the specific page for it [swagger_conf.md](./swagger_conf.md).
Being that a given Swagger defines only *one* fixed API version, doing multi-api version in one package implies shipping several Swagger files into one package. This is achieved by the `batch` directive of Autorest. More details on how to write Readme for Swagger in the specific page for it [swagger_conf.md](./swagger_conf.md).

Python SDK team is responsible to design the correct set of tags to set for the `batch` node. Each line of the batch directive should contains only *one* api version to match the folder name used. this might require adding new tags in the readme.md that are specific to only one API version. These tags are usually suffixed by "-only" ([example with compute](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/compute/resource-manager#tag-package-2019-03-01-only))

2 changes: 1 addition & 1 deletion doc/dev/mgmt/swagger_conf.md
@@ -14,7 +14,7 @@ In practical terms, we want to control the version of Autorest used, the output

## Writing the readme

Writing the readme is the responsability of the Python SDK team. There is currently two types of templates for Python readmes:
Writing the readme is the responsibility of the Python SDK team. There is currently two types of templates for Python readmes:
- Readme that handles only one API version, and generates packages that handle one API version only
- Readme that handles several API versions, and generates packages with multiples API and profile supports

2 changes: 1 addition & 1 deletion eng/pipelines/templates/jobs/update_pr.yml
@@ -26,4 +26,4 @@ steps:
- script: python3 -m packaging_tools.update_pr -v --pr-number $(System.PullRequest.PullRequestNumber) --repo $(Build.Repository.Name)
displayName: 'Update packaging of PR'
env:
GH_TOKEN: $(python-mgmt-update-pr-token)
GH_TOKEN: $(azuresdk-github-pat)
4 changes: 2 additions & 2 deletions sdk/appservice/ci.yml
@@ -20,7 +20,7 @@ trigger:
- restapi*
paths:
include:
- sdk/web/
- sdk/appservice/

pr:
branches:
@@ -32,7 +32,7 @@ pr:
- restapi*
paths:
include:
- sdk/web/
- sdk/appservice/


stages:
@@ -19,6 +19,10 @@
- Added operation CustomVisionPredictionClientOperationsMixin.detect_image
- Added operation group CustomVisionTrainingClientOperationsMixin

**Breaking changes**

- Credentials are no longer a simple string, but a `msrest.authentication.ApiKeyCredentials` instance instead
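
A hedged sketch of the new credential style for this breaking change — the endpoint placeholder, key placeholder, and the `Training-key` header name are assumptions, not taken from this PR:

```python
# Minimal sketch, assuming azure-cognitiveservices-vision-customvision >= 3.0 and that the
# training client expects its key in a "Training-key" header (check the service docs).
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
client = CustomVisionTrainingClient("<endpoint>", credentials)  # previously a bare key string
```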

**General Breaking changes**

This version uses a next-generation code generator that *might*
6 changes: 3 additions & 3 deletions sdk/core/azure-common/azure/common/client_factory.py
@@ -14,9 +14,6 @@
except ImportError:
from inspect import getargspec as get_arg_spec

import adal
from msrestazure.azure_active_directory import AdalAuthentication

from .credentials import get_azure_cli_credentials
from .cloud import get_cli_active_cloud

@@ -153,6 +150,9 @@ def get_client_from_json_dict(client_class, config_dict, **kwargs):
:param dict config_dict: A config dict.
:return: An instantiated client
"""
import adal
from msrestazure.azure_active_directory import AdalAuthentication

is_graphrbac = client_class.__name__ == 'GraphRbacManagementClient'
is_keyvault = client_class.__name__ == 'KeyVaultClient'
parameters = {
7 changes: 5 additions & 2 deletions sdk/core/azure-core/CHANGELOG.md
@@ -1,7 +1,10 @@

# Release History

## 1.6.0 (Unreleased)
## 1.6.1 (Unreleased)


## 1.6.0 (2020-06-03)

### Bug fixes

@@ -13,7 +16,7 @@
- Added support for changesets as part of multipart message support #10485
- Add AsyncLROPoller in azure.core.polling #10801
- Add get_continuation_token/from_continuation_token/polling_method methods in pollers (sync and async) #10801
- HttpResponse objects are now pickable #10801
- HttpResponse and PipelineContext objects are now pickable #10801

## 1.5.0 (2020-05-04)

2 changes: 1 addition & 1 deletion sdk/core/azure-core/azure/core/_version.py
@@ -9,4 +9,4 @@
# regenerated.
# --------------------------------------------------------------------------

VERSION = "1.6.0"
VERSION = "1.6.1"
18 changes: 18 additions & 0 deletions sdk/core/azure-core/azure/core/pipeline/__init__.py
@@ -63,6 +63,9 @@ class PipelineContext(dict):
:param transport: The HTTP transport type.
:param kwargs: Developer-defined keyword arguments.
"""
_PICKLE_CONTEXT = {
'deserialized_data'
}

def __init__(self, transport, **kwargs): # pylint: disable=super-init-not-called
self.transport = transport
@@ -75,6 +78,21 @@ def __getstate__(self):
del state['transport']
return state

def __reduce__(self):
reduced = super(PipelineContext, self).__reduce__()
saved_context = {}
for key, value in self.items():
if key in self._PICKLE_CONTEXT:
saved_context[key] = value
# 1 is for from __reduce__ spec of pickle (generic args for recreation)
# 2 is how dict is implementing __reduce__ (dict specific)
# tuple are read-only, we use a list in the meantime
reduced = list(reduced)
dict_reduced_result = list(reduced[1])
dict_reduced_result[2] = saved_context
reduced[1] = tuple(dict_reduced_result)
return tuple(reduced)

def __setstate__(self, state):
self.__dict__.update(state)
# Re-create the unpickable entries
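
A minimal usage sketch of the pickling whitelist implemented above, mirroring the test added later in this PR (assumes azure-core >= 1.6.0 is installed; the stored values are illustrative):

```python
import pickle

from azure.core.pipeline import PipelineContext

context = PipelineContext("transport", stream=True)
context["deserialized_data"] = "marvelous"  # whitelisted in _PICKLE_CONTEXT: survives pickling
context["foo"] = "bar"                      # not whitelisted: dropped on pickling

revived = pickle.loads(pickle.dumps(context))
assert revived.transport is None            # the transport is never pickled
assert "deserialized_data" in revived
assert "foo" not in revived
```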
5 changes: 2 additions & 3 deletions sdk/core/azure-core/azure/core/polling/_async_poller.py
@@ -84,7 +84,7 @@ async def async_poller(client, initial_response, deserialization_callback, polli
:param client: A pipeline service client.
:type client: ~azure.core.PipelineClient
:param initial_response: The initial call response
:type initial_response: ~azure.core.pipeline.transport.AsyncHttpResponse
:type initial_response: ~azure.core.pipeline.PipelineResponse
:param deserialization_callback: A callback that takes a Response and return a deserialized object.
If a subclass of Model is given, this passes "deserialize" as callback.
:type deserialization_callback: callable or msrest.serialization.Model
@@ -101,8 +101,7 @@ class AsyncLROPoller(Generic[PollingReturnType], Awaitable):
:param client: A pipeline service client
:type client: ~azure.core.PipelineClient
:param initial_response: The initial call response
:type initial_response:
~azure.core.pipeline.transport.HttpResponse or ~azure.core.pipeline.transport.AsyncHttpResponse
:type initial_response: ~azure.core.pipeline.PipelineResponse
:param deserialization_callback: A callback that takes a Response and return a deserialized object.
If a subclass of Model is given, this passes "deserialize" as callback.
:type deserialization_callback: callable or msrest.serialization.Model
6 changes: 2 additions & 4 deletions sdk/core/azure-core/azure/core/polling/_poller.py
@@ -32,7 +32,6 @@
from urllib.parse import urlparse

from typing import Any, Callable, Union, List, Optional, Tuple, TypeVar, Generic
from azure.core.pipeline.transport._base import HttpResponse
from azure.core.tracing.decorator import distributed_trace
from azure.core.tracing.common import with_current_context

@@ -140,8 +139,7 @@ class LROPoller(Generic[PollingReturnType]):
:param client: A pipeline service client
:type client: ~azure.core.PipelineClient
:param initial_response: The initial call response
:type initial_response:
~azure.core.pipeline.transport.HttpResponse or ~azure.core.pipeline.transport.AsyncHttpResponse
:type initial_response: ~azure.core.pipeline.PipelineResponse
:param deserialization_callback: A callback that takes a Response and return a deserialized object.
If a subclass of Model is given, this passes "deserialize" as callback.
:type deserialization_callback: callable or msrest.serialization.Model
@@ -150,7 +148,7 @@ class LROPoller(Generic[PollingReturnType]):
"""

def __init__(self, client, initial_response, deserialization_callback, polling_method):
# type: (Any, HttpResponse, Callable, PollingMethod[PollingReturnType]) -> None
# type: (Any, Any, Callable, PollingMethod[PollingReturnType]) -> None
self._callbacks = [] # type: List[Callable]
self._polling_method = polling_method

32 changes: 32 additions & 0 deletions sdk/core/azure-core/tests/test_universal_pipeline.py
@@ -25,6 +25,7 @@
#
#--------------------------------------------------------------------------
import logging
import pickle
try:
from unittest import mock
except ImportError:
@@ -56,6 +57,37 @@
HTTPPolicy,
)

def test_pipeline_context():
kwargs={
'stream':True,
'cont_token':"bla"
}
context = PipelineContext('transport', **kwargs)
context['foo'] = 'bar'
context['xyz'] = '123'
context['deserialized_data'] = 'marvelous'

assert context['foo'] == 'bar'
assert context.options == kwargs

with pytest.raises(TypeError):
context.clear()

with pytest.raises(TypeError):
context.update({})

assert context.pop('foo') == 'bar'
assert 'foo' not in context

serialized = pickle.dumps(context)

revived_context = pickle.loads(serialized)
assert revived_context.options == kwargs
assert revived_context.transport is None
assert 'deserialized_data' in revived_context
assert len(revived_context) == 1


def test_request_history():
class Non_deep_copiable(object):
def __deepcopy__(self, memodict={}):
5 changes: 5 additions & 0 deletions sdk/cosmos/azure-cosmos/CHANGELOG.md
@@ -1,3 +1,8 @@
## 4.0.1 (Unreleased)

- Fixed error raised when a non string ID is used in an item. It now raises TypeError rather than AttributeError. Issue 11793 - thank you @Rabbit994.
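
A hedged sketch of the new behavior — the account URL, key, database/container names, and the partition key field are placeholders, not taken from this PR:

```python
# Minimal sketch, assuming azure-cosmos >= 4.0.1 and an existing database/container.
from azure.cosmos import CosmosClient

client = CosmosClient("<account-url>", credential="<account-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# An item with a non-string id now fails fast with TypeError("Id type must be a string.")
# instead of the AttributeError raised by 4.0.0.
container.upsert_item(body={"id": 7, "partitionKey": "pk1"})
```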


## 4.0.0 (2020-05-20)

- Stable release.
@@ -31,6 +31,7 @@
from urllib3.util.retry import Retry
from azure.core.paging import ItemPaged # type: ignore
from azure.core import PipelineClient # type: ignore
from azure.core.exceptions import raise_with_traceback # type: ignore
from azure.core.pipeline.policies import ( # type: ignore
HTTPPolicy,
ContentDecodePolicy,
@@ -2480,11 +2481,14 @@ def __CheckAndUnifyQueryFormat(self, query_body):
def __ValidateResource(resource):
id_ = resource.get("id")
if id_:
if id_.find("/") != -1 or id_.find("\\") != -1 or id_.find("?") != -1 or id_.find("#") != -1:
raise ValueError("Id contains illegal chars.")

if id_[-1] == " ":
raise ValueError("Id ends with a space.")
try:
if id_.find("/") != -1 or id_.find("\\") != -1 or id_.find("?") != -1 or id_.find("#") != -1:
raise ValueError("Id contains illegal chars.")

if id_[-1] == " ":
raise ValueError("Id ends with a space.")
except AttributeError:
raise_with_traceback(TypeError, message="Id type must be a string.")

# Adds the partition key to options
def _AddPartitionKey(self, collection_link, document, options):
2 changes: 1 addition & 1 deletion sdk/cosmos/azure-cosmos/azure/cosmos/_version.py
@@ -19,4 +19,4 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

VERSION = "4.0.0"
VERSION = "4.0.1"
@@ -14,7 +14,7 @@
# https://docs.microsoft.com/azure/cosmos-db/create-sql-api-python#create-a-database-account
#
# 2. Microsoft Azure Cosmos
# pip install azure-cosmos==4.0.0
# pip install azure-cosmos>=4.0.0
# ----------------------------------------------------------------------------------------------------------
# Sample - how to get and use resource token that allows restricted access to data
# ----------------------------------------------------------------------------------------------------------
21 changes: 13 additions & 8 deletions sdk/cosmos/azure-cosmos/test/test_crud.py
@@ -207,7 +207,7 @@ def test_sql_query_crud(self):
self.assertEqual(0, len(databases), 'Unexpected number of query results.')

# query with a string.
databases = list(self.client.query_databases('SELECT * FROM root r WHERE r.id="' + db2.id + '"'))
databases = list(self.client.query_databases('SELECT * FROM root r WHERE r.id="' + db2.id + '"')) #nosec
self.assertEqual(1, len(databases), 'Unexpected number of query results.')
self.client.delete_database(db1.id)
self.client.delete_database(db2.id)
@@ -507,30 +507,30 @@ def test_partitioned_collection_document_crud_and_query(self):
# query document on the partition key specified in the predicate will pass even without setting enableCrossPartitionQuery or passing in the partitionKey value
documentlist = list(created_collection.query_items(
{
'query': 'SELECT * FROM root r WHERE r.id=\'' + replaced_document.get('id') + '\''
'query': 'SELECT * FROM root r WHERE r.id=\'' + replaced_document.get('id') + '\'' #nosec
}))
self.assertEqual(1, len(documentlist))

# query document on any property other than partitionKey will fail without setting enableCrossPartitionQuery or passing in the partitionKey value
try:
list(created_collection.query_items(
{
'query': 'SELECT * FROM root r WHERE r.key=\'' + replaced_document.get('key') + '\''
'query': 'SELECT * FROM root r WHERE r.key=\'' + replaced_document.get('key') + '\'' #nosec
}))
except Exception:
pass

# cross partition query
documentlist = list(created_collection.query_items(
query='SELECT * FROM root r WHERE r.key=\'' + replaced_document.get('key') + '\'',
query='SELECT * FROM root r WHERE r.key=\'' + replaced_document.get('key') + '\'', #nosec
enable_cross_partition_query=True
))

self.assertEqual(1, len(documentlist))

# query document by providing the partitionKey value
documentlist = list(created_collection.query_items(
query='SELECT * FROM root r WHERE r.key=\'' + replaced_document.get('key') + '\'',
query='SELECT * FROM root r WHERE r.key=\'' + replaced_document.get('key') + '\'', #nosec
partition_key=replaced_document.get('id')
))

@@ -746,14 +746,14 @@ def test_partitioned_collection_conflict_crud_and_query(self):
# query conflicts on any property other than partitionKey will fail without setting enableCrossPartitionQuery or passing in the partitionKey value
try:
list(created_collection.query_conflicts(
query='SELECT * FROM root r WHERE r.resourceType=\'' + conflict_definition.get(
query='SELECT * FROM root r WHERE r.resourceType=\'' + conflict_definition.get( #nosec
'resourceType') + '\''
))
except Exception:
pass

conflictlist = list(created_collection.query_conflicts(
query='SELECT * FROM root r WHERE r.resourceType=\'' + conflict_definition.get('resourceType') + '\'',
query='SELECT * FROM root r WHERE r.resourceType=\'' + conflict_definition.get('resourceType') + '\'', #nosec
enable_cross_partition_query=True
))

@@ -762,7 +762,7 @@ def test_partitioned_collection_conflict_crud_and_query(self):
# query conflicts by providing the partitionKey value
options = {'partitionKey': conflict_definition.get('id')}
conflictlist = list(created_collection.query_conflicts(
query='SELECT * FROM root r WHERE r.resourceType=\'' + conflict_definition.get('resourceType') + '\'',
query='SELECT * FROM root r WHERE r.resourceType=\'' + conflict_definition.get('resourceType') + '\'', #nosec
partition_key=conflict_definition['id']
))

@@ -939,6 +939,11 @@ def test_document_upsert(self):
self.assertEqual(created_document['id'],
document_definition['id'])

# test error for non-string id
with pytest.raises(TypeError):
document_definition['id'] = 7
created_collection.upsert_item(body=document_definition)

# read documents after creation and verify updated count
documents = list(created_collection.read_all_items())
self.assertEqual(