Explore going the route of using Clients? #861
Comments
I honestly don't see the benefit to bundling the project together with the connection: it only gets passed to a couple of constructors. If we don't bundle the project, then we could just focus on adopting a common pattern for connection passing / lookup everywhere (#825 and related PRs). |
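For illustration, here is a minimal sketch of the "connection passing / lookup" pattern being referenced; the helper names (`set_default_connection`, `delete_entities`) are invented for this sketch and are not the library's actual API. Each call accepts an explicit connection and otherwise falls back to a module-level default.

```python
_DEFAULT_CONNECTION = None  # configured once, used when a call omits connection=


def set_default_connection(connection):
    """Remember a connection for calls that do not pass one explicitly."""
    global _DEFAULT_CONNECTION
    _DEFAULT_CONNECTION = connection


def delete(keys, connection=None):
    """Delete entities using the explicit connection, else the module default."""
    connection = connection or _DEFAULT_CONNECTION
    if connection is None:
        raise ValueError('No connection passed and no default configured.')
    connection.delete_entities(keys)  # stand-in for the real RPC call
```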
It's probably worth it to create a list of all config that could be a candidate for a global (or for membership in a long-lived object). Right now we have
|
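As one way to visualize such a list, here is a hypothetical bundle of the configuration values discussed in this thread (project / dataset ID, credentials, connection). This is a sketch of the idea only, not a proposed class from the thread.

```python
class ClientConfig(object):
    """Hypothetical long-lived holder for the config a call would otherwise look up."""

    def __init__(self, project=None, credentials=None, connection=None):
        self.project = project          # project / dataset ID
        self.credentials = credentials  # auth credentials
        self.connection = connection    # transport built from the credentials
```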
While the back-end pubsub API has |
@tseaver Added this to my list |
Issues about the "always use a Client" notion we discussed in Monday's call:
|
@tseaver, hate to make more work for you, but is it crazy to toss in short exemplar code snippets illustrating the problem? Some questions I'm having trouble answering based on our conversations...
I get that this is an enormous number of questions, so if it's going to make GitHub explode and make everything unreadable, let's pick one of them for now and go with that... |
I think there are different cases:
|
At the moment, one can use a batch or a transaction as a context manager and not pass it everywhere, or call methods through it. E.g.:

```python
from gcloud import datastore

with datastore.Transaction():
    keys = [entity.key for entity in some_list]
    datastore.delete(keys)
```

This works because the
Per #825, all of the methods / functions that trigger remote API calls now take an optional connection argument.
It is a bit of a strawman, but having "always use a client" as the access pattern reduces the desirability of the various client-side objects: they become basically thin wrappers over the client. |
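To make the "thin wrappers" worry concrete, here is a hypothetical example (the names are invented for illustration): if every remote call must route through the client, the domain object keeps little besides its identity and a back-reference.

```python
class Topic(object):
    """Hypothetical client-side object reduced to a thin wrapper over the client."""

    def __init__(self, name, client):
        self.name = name
        self.client = client

    def delete(self):
        # No behavior of its own -- it simply forwards to the client.
        return self.client.delete_topic(self.name)
```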
I'm proposing that we make people be a bit more explicit for batches and transactions:

```python
from gcloud import datastore

client = datastore.Client()  # This guesses the credentials and project ID magically
# or
client = datastore.Client(project='project-id', key_path='/path/to/key.json')

keys = [ .... ]

with client.Transaction() as t:
    t.delete(keys)

# or

with client.Transaction():
    client.delete(keys)
```

Wouldn't this make the "require connection" stuff just go away altogether? We never guess, or look to the global scope, or anything like that... |
Wasn't the original desire for a client so that people would type less? So we sacrifice usability elsewhere for the sake of usability of a feature which no one has asked for (easy context switching). |
I don't see a huge difference in the amount of typing. What's the delta here?
Where are we sacrificing usability? This is where I'm so confused. I see the client pattern as far more usable than the module-level actions and module-level global-only configuration pattern we (and App Engine) have... |
RE: "Where are we sacrificing usability?", I was referring to
The client pattern is not "far more usable than the module-level actions", it is exactly identical (given our proposal). We are just swapping out
for
As for the actions, they are driven by the objects. |
Sure, the explicit case here is one extra line; however, I believe we could make that go away if we wanted to. Tres didn't seem to think that was a good idea.

```python
from gcloud import datastore

client = datastore.Client()
with client.Transaction():
    client.delete(keys)
```

versus

```python
from gcloud import datastore

with datastore.Transaction():
    datastore.delete(keys)
```
I don't think it's identical when you look at dealing with multiple projects or multiple sets of credentials -- is it? Assuming it's not in those cases, I'm having tons of trouble with the idea that we should design for usability for the one-and-only-one project and set of credentials, making the multi-project or multi-credential use cases less usable (IMO). If we did the client pattern, wouldn't we get both? Multiple clients with multiple projects with multiple sets of credentials are all really nice, and the one-and-only-one-everything is also really nice... No? |
But we already support multiple connections; it just requires more typing, which we would just shuffle from passing arguments to instantiating client objects. When #864 is in, creating multiple |
Doesn't it seem weird and repetitive to mix in the configuration details at the time when you're trying to accomplish the action? Shouldn't we split the "configure" stage apart from the "make an API call" stage? This is my understanding of what we've been aiming for since the beginning; sorry if it's wrong (or long).

The simple case: one-and-only-one everything. Cool.

```python
from gcloud import datastore

key = datastore.Key('Product', '0')
product = datastore.get([key])
```

The multi-credential, multi-project case. Repetitive.

```python
from gcloud import datastore

# Configure:
connectionA = datastore.Connection(...)
connectionB = datastore.Connection(...)

# Do stuff (and also configure):
keyA = datastore.Key('Product', 'a', dataset_id='dataset-a')
keyB = datastore.Key('Product', 'b', dataset_id='dataset-b')

productA = datastore.get([keyA], connection=connectionA, dataset_id='dataset-a')
productB = datastore.get([keyB], connection=connectionB, dataset_id='dataset-b')

productA.value = 1
datastore.put(productA, connection=connectionA, dataset_id='dataset-a')

datastore.delete([keyA], connection=connectionA, dataset_id='dataset-a')
datastore.delete([keyB], connection=connectionB, dataset_id='dataset-b')
```

Here's the direction I'm trying to push us...

The simple case: one-and-only-one everything (without any magic). Somewhat cool.

```python
from gcloud import datastore

client = datastore.Client()
key = client.Key('Product', '0')  # (or datastore.Key(), I'm not sure)
product = client.get(key)
```

The simple case: one-and-only-one everything (with the magic). Mostly cool.

```python
from gcloud.datastore import default_client as datastore

key = datastore.Key('Product', '0')
product = datastore.get(key)
```

The simple case: one-and-only-one everything (with different magic). Cool.

```python
from gcloud import datastore  # Module-level methods just proxy to default_client

key = datastore.Key('Product', '0')
product = datastore.get(key)
```

The multi-credential, multi-project case. Cool.

```python
from gcloud import datastore

# Configure:
clientA = datastore.Client(project='projectA')
clientB = datastore.Client(project='projectB')

datasetA = clientA.Dataset('dataset-a')
datasetB = clientB.Dataset('dataset-b')

# Do stuff:
keyA = datasetA.Key('Product', 'a')
keyB = datasetB.Key('Product', 'b')

productA = datasetA.get(keyA)
productB = datasetB.get(keyB)

productA.value = 1
datasetA.put(productA)

datasetA.delete(keyA)
datasetB.delete(keyB)
```
|
RE:

```python
productA = datastore.get([keyA], connection=connectionA, dataset_id='dataset-a')
productB = datastore.get([keyB], connection=connectionB, dataset_id='dataset-b')

productA.value = 1
datastore.put(productA, connection=connectionA, dataset_id='dataset-a')

datastore.delete([keyA], connection=connectionA, dataset_id='dataset-a')
datastore.delete([keyB], connection=connectionB, dataset_id='dataset-b')
```

We've been trying to point out that the connection is the only config object that gets passed on a regular basis. The dataset ID is encoded in the key and is not needed in any of those method calls. So in the above proposal we go from the concept of Noun+Verb (
and we consider
to be superior to
I don't see an improvement. (Also note that the project is as-of-yet un-needed for |
Wait -- sorry, missed "The dataset ID is encoded in the key and is not needed in any of those method calls.": OK, so then

And for the multiple method calls, we're comparing:

```python
client = datastore.Client(project='project-a')

client.delete(keyA)
if client.get(keyA2).whatever == 42:
    client.delete(keyA2)
```

to

```python
connection = datastore.Connection(project='project-a')

datastore.delete(keyA, connection=connection)
if datastore.get(keyA2, connection=connection).whatever == 42:
    datastore.delete(keyA2, connection=connection)
```
|
It's in the key. |
Sorry, updated my note -- misread. |
I still maintain that we are adding more concepts / classes to gain functionality which we already have. The only repetitive use is There is no way to save the repetitive typing of |
You're saying the first example with
I do think we could get rid of

```python
connection = datastore.Connection(project='project-a')

connection.delete(keyA)
if connection.get(keyB).whatever == 42:
    connection.delete(keyB)
```

... but then |
Functionality -- yes. It is certainly possible to do these things with what we have. Usability (aka, not repeating ourselves) -- I don't think so. It looks like we're typing connection far too often when we need to be specific about it, right? |
RE: "It looks like we're typing connection far too often": but we just exchange one kind of typing for another. Two simultaneous contexts require typing more to keep them distinct. That is something we can't escape. I'd prefer we just do nothing until we have users say "it's important for me to have multiple credentials / config active at once". And then we can hear their requirements and design based on real feedback. |
@dhermes I don't like implicitly changing behavior. I.e.:
Us being able to do the smart thing for users that don't need or want extra configuration is fine. Yes, idiomatic is important. But are you saying that passing in the connection everywhere is more idiomatic in Python? ISTM that if there isn't a clearly more idiomatic way to do things, being consistent is a better policy. @jgeewax fair point, maybe client is the better name (but then you could still have a Database/Dataset which contained a client that you could interact with directly). |
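A quick sketch of that "Dataset which contained a client" idea -- a hypothetical shape only, borrowing the Key usage shown elsewhere in this thread:

```python
from gcloud import datastore


class Dataset(object):
    """Hypothetical: binds a dataset ID and a client so callers repeat neither."""

    def __init__(self, dataset_id, client):
        self.dataset_id = dataset_id
        self.client = client

    def key(self, kind, id_or_name):
        # The dataset ID is baked into the key, so it never needs repeating.
        return datastore.Key(kind, id_or_name, dataset_id=self.dataset_id)

    def get(self, key):
        return self.client.get(key)

    def delete(self, key):
        return self.client.delete(key)
```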
@pcostell I wasn't trying to connect the idiomatic comment to the above discussion, just to point out that doing something to match the other runtimes isn't a good reason to do it. |
Interesting. My take on this is that simply focusing on the quantity of typing or the relative aesthetic elegance of the code is distracting from more important issues. To me, there are three: 1) consistency between gcloud API projections in different languages; 2) consistency between the gcloud-python API and other APIs that users might use; and 3) user expectations about what should be "remembered".

Consistency between gcloud API projections in different languages

Why does it matter? In a polyglot world, it's not unreasonable that developers have to move between languages, from client-side JavaScript to server-side front-end Python to server-side back-end Java (as just one example). If I've used the Datastore API in one language, it should feel familiar when using it in another language on another part of the stack. Further (and perhaps more importantly), when I search the web for answers to my Datastore usage questions, if I find an answer on StackOverflow that is in Ruby, I should be able to immediately adapt that answer to my code in Python. The examples from Ruby and Node that JJ listed at the top of this thread made sense to me, and as a user I would be surprised if Python worked differently. If the documentation on cloud.google.com had a "how to connect to Datastore" inset with tabs for each language, I would be annoyed if, while flipping back and forth between the various languages, the Python sample deviated from the norm established by the rest.

Consistency between gcloud-python and other APIs that users might use

Why does it matter? Our users do not have access to Datastore outside of GCP (with the exception of the Datastore fake when doing local App Engine development), and they will not have experience working with Datastore until they become GCP customers. But our users will have experience running MySQL clients, SQLite, CouchDB, MongoDB, and many others. And the bulk of the first-party Python APIs for those products employ the model of a client/connection object with operations hanging off that object. When I read the samples for the Datastore API that did NOT use the client/connection model, I was surprised as a user, because I could not relate them to my prior experience with these other APIs. That created a cognitive load, because I now needed to "figure out" how this API worked when all I knew upon first reading was that it didn't work like the others I was already familiar with. I have years of experience working with many different databases using different languages on different platforms. I want my APIs to tap into my knowledge, not set it aside.

User expectations about what should be "remembered"

Why does it matter? As a user I rely on the fact that computers have a better memory than I do. I put things into computers so that I don't have to remember them. I put things into computers because there is more than I could possibly remember. The point of this is that I hate it when I tell a computer something and it doesn't remember it when I expected it to. It's a bad user experience. When I read an API like:

```python
from gcloud.datastore import Connection, delete

connection = Connection(...)

delete(key0, connection=connection)
delete(key1, connection=connection)
```

it feels "forgetful" to me. I've told you my connection. Why do you keep forgetting it? Why do I have to keep reminding you of it? On the other hand, though functionally equivalent, this:

```python
from gcloud import datastore

client = datastore.Client(...)

client.delete(key1)
client.delete(key2)
```

does not create the same reaction for me. I've told the client about my connection, and without having to remind it of anything about my connection, I can simply tell the client what I want it to do. In a sense, I have delegated the responsibility of remembering those details to something, and now I want to give my instructions to that something and have it "just take care of it". The client idiom meets that expectation, while the parameterized idiom does not.

To be clear, I have read and understood the more technical points about "stackability" and the implementation considerations, and I also totally get that the "forgetfulness" argument is only at play when the user is working with multiple connections/datasets. That said, I am more concerned about the user experience of this API (in the context of the universe of gcloud and other APIs it lives in). There will no doubt be users for whom multiple connections/datasets is a requirement. Their user experience should not deviate from the mainstream user experience. Their user experience should not turn into "forgetful" mode just because their business problem got a little more complex. |
@marcja RE: "User expectations about what should be 'remembered'": We have made it very convenient for users that don't need to context switch to have the library detect automatically and remember their configuration details.

For a user running in GCE:

```python
from gcloud import datastore

key = datastore.Key('Kind', 1234)  # Dataset ID is inferred from GCE environ
datastore.delete(key)  # Connection credentials inferred from GCE service account
```

For a user running in a custom environment:

```
$ export GCLOUD_DATASET_ID="foo"
$ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials/file.json"
```

and then

```python
from gcloud import datastore

key = datastore.Key('Kind', 1234)  # Dataset ID is inferred from env. var.
datastore.delete(key)  # Connection credentials inferred from env. var.
```

The snippets

```python
delete(key0, connection=connection)
delete(key1, connection=connection)
```

were meant for a case where a user has two separate connections, in which case being explicit is important. |
@dhermes Yes, exactly as I said in my final paragraph above, I understand that it only applies to the multiple connections case, but in that case it feels "forgetful". |
Yea - I think we all agree that the module-level case is the most common and nice to have. The client pattern doesn't preclude us from doing that though... We can totally still provide that style while still making the multiple-connection case nice... |
Just a thought. Are we really sure that people want to use multiple credentials? That means one single application has multiple identities. I think that is maintenance-wise cumbersome. I would just attach one single identity to an application, and add the necessary permissions to that identity. For example, I would add a service account to the second project, instead of having multiple service accounts. I understand there is a strong need for multi-project access though. Maybe I'm missing something -- what's the use case for multi-credential access? |
One use case is moving and reconciling data between two apps: get data out of one dataset, move it into another dataset. Configuring the project1/project2 stuff with a client is a common pattern. Passing in arguments, to use @marcja 's term, seems "forgetful". Regardless of that, the fact that configuring is always done in the global scope is an issue for me. I'll let @Alfus chime in on this one as well. |
@jgeewax RE: "use case is moving and reconciling data between two apps" -- @tmatsuo's point is that you could just add a service account from project 1 as an admin for project 2 and use the same credential. As for global scope, that has bothered me in some ways too. However, if you don't do it within the package, then the "it just works" with no set-up (that I discussed above) is never possible. |
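For reference, a bare-bones sketch of the kind of in-package defaulting being described -- illustrative only, since the real lookup involves more (e.g. the GCE metadata inference mentioned above):

```python
import os

_DEFAULT_DATASET_ID = None  # computed lazily, then remembered


def _determine_default_dataset_id():
    """Look up the implicit dataset ID from the environment, as in the examples above."""
    global _DEFAULT_DATASET_ID
    if _DEFAULT_DATASET_ID is None:
        _DEFAULT_DATASET_ID = os.environ.get('GCLOUD_DATASET_ID')
    return _DEFAULT_DATASET_ID
```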
Cool.. so let's fix it... Here's a reason (or excuse) to do that... And the "it just works" stuff can still exist too. |
@dhermes The use cases I can come up with are the following two:
For the first case, I might store the users' credentials and re-use them. Anyways, I think I have to change credentials in a single application. For the second case, I would prefer the library handling thread/process safety for me. Re: the fact that configuring is always done in the global scope |
How can
work? We manage this by having the |
This is reasonable, but I encourage you all to write the help article first
|
@tmatsuo Right now the global is not thread-safe, but we do pay some attention to thread-safety in batching / transactions, so it wouldn't be hard to move over. The case you lay out above (user creds. + service account) seems like such a rare case and potentially something we should discourage users from building.
You mean this is something you would immediately experiment with? It's not obvious to me that anyone who wasn't very familiar with Google APIs would think it even possible to have more than one way to authenticate an application. |
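If the implicit global did need to become thread-safe, one conventional approach -- an assumption on my part, not something settled in this thread -- is to keep the defaults in thread-local storage so each thread can carry its own active connection:

```python
import threading

_STATE = threading.local()  # each thread sees its own default


def set_default_connection(connection):
    """Install the default connection for the current thread."""
    _STATE.connection = connection


def get_default_connection():
    """Return this thread's default connection, or None if unset."""
    return getattr(_STATE, 'connection', None)
```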
I think I'm failing to make my point. When I buy an AV receiver, I expect to leverage my prior knowledge and be able
To make the analogy clear, I know that, as an experienced developer, I have
As a senior person at Google, I have heard the customer feedback that a) |
The way I'd suggested before was...

Code I'd write

```python
from gcloud import datastore

datastore.get(datastore.Key('Product', 123))
```

or

```python
from gcloud.datastore import default_client as datastore

datastore.get(datastore.Key('Product', 123))
```

Code under the hood:

```python
# Maybe this is in api.py?
from gcloud import datastore

# Create a global default client that users may or may not use.
default_client = datastore.Client()

# Bring that default client's methods into global scope.
get = default_client.get
delete = default_client.delete
set_project = default_client.set_project
set_credentials = default_client.set_credentials
# ...
```

Hm, I want to change the settings on the globally shared default connection...

```python
datastore.set_project('...')
datastore.set_credentials('...')
# or
default_client.set_project('...')
default_client.set_credentials('...')
```

I want to be explicit now about using the default client...

```python
from gcloud import datastore

client = datastore.default_client
other_client = datastore.Client(credentials=other_credentials)
```
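One usage note on the sketch above -- my reading, not something decided in the thread: because `get` and `delete` are aliases of the default client's bound methods, `set_project` / `set_credentials` would have to mutate that one client in place rather than swap in a new default client, or the module-level aliases would keep pointing at the old object. For example:

```python
from gcloud import datastore

datastore.set_project('project-a')
datastore.get(datastore.Key('Product', 123))   # served against project-a

datastore.set_project('project-b')
datastore.get(datastore.Key('Product', 123))   # same default client, now project-b
```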
Not sure I understand what
Again, not 100% certain on what you mean by that method, but I'm going to take a guess. If you mean that we shouldn't re-compute the magic values every time (and we also shouldn't compute them for the first time until someone needs them), I agree. So a Client shouldn't try to figure out what the magic things are until methods are called on it. That is:

```python
# datastore client
class Client(object):

    def __init__(self, credentials=None, ...):
        self._credentials = credentials
        self._connection = None  # built lazily on first use

    def get(self, key, ...):
        self.connection._rpc('...')

    @property
    def connection(self):
        if self._connection is None:
            self._connection = Connection(credentials=self.credentials, ...)
        return self._connection

    @property
    def credentials(self):
        if self._credentials is None:
            self._credentials = discovery_module.magically_figure_out_my_creds()
        return self._credentials
```

So instantiating a
|
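Under that sketch (my reading of it, not an agreed behavior), constructing a Client is cheap and side-effect free; the credential and connection magic only runs when a method is first called:

```python
client = Client()   # construction: no credential lookup, no connection built

# The magic only runs on first use, via the lazy properties above:
# client.get(some_key)   # resolves credentials, builds the connection, then calls the API
```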
So... looks like this conversation has gone silent... but there's some talk on PR #886.. Here's the summary of what I see so far
Given that scenario, I really want to push that, unless someone shows why going the client route is a horrible idea, we should get started on moving in that direction (being a lot of work doesn't count). |
Seems to me that voting isn't quite what we need, especially without a more concrete thing to be considering. Rather than talking further past one another, can we work on exploring the idea more fully by writing up the "developer experience" documents which show all the ways clients will be used, similar to the one I wrote for pubsub? |
This whole thread has been about "what code a developer has to write" with many examples (I think at least), so I'm not sure what else we should explore. @dhermes mentioned that we're not stating any new facts, just debating "which is better". Given that, we've had several people chime in with feelings that the globally-configured way of doing things "seems forgetful" -- the majority preferring the client pattern. I've not heard anyone say "if we go the client pattern way, we can't still make the exact same code we have today continue to work just fine". So it seems to me that if we wanted to offer both, we could if we went the client route, and we couldn't if we went the globally configured route. Overall, I'm having trouble seeing what the argument is against going the route of the client pattern. Can anyone chime in and explain why it's a bad idea? I think the feedback has been "for multi-client multi-credential multi-project code, we prefer the client pattern", so I'm looking for other reasons why it's a bad choice. |
|
Like Danny, I've run out of steam to argue. The reason that I suggested starting a DX document is that I think it will either make the concerns I have obvious, or else make it clear that they can be addressed cleanly. At the moment, the only way I can see to cleanly implement clients without re-welding them into every class we just cleaned up is to have the clients work at a layer "above" the classes we have now, with proxies for the "real" objects that pass in the connection from the client to every call that takes one. Maybe we can come up with a cleaner implementation than that, if we have a specific outline of how the client objects get used. |
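One way to read that "proxies for the 'real' objects" idea -- purely a sketch of the shape described, not an agreed design -- is a generic forwarding wrapper owned by the client that injects the client's connection into forwarded calls:

```python
class _ConnectionProxy(object):
    """Hypothetical proxy: forwards attribute access to a wrapped object, injecting connection=."""

    def __init__(self, wrapped, connection):
        self._wrapped = wrapped
        self._connection = connection

    def __getattr__(self, name):
        attr = getattr(self._wrapped, name)
        if not callable(attr):
            return attr

        def call(*args, **kwargs):
            # Supply the client's connection unless the caller passed one explicitly.
            kwargs.setdefault('connection', self._connection)
            return attr(*args, **kwargs)

        return call
```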
After our last talk, there have been quite a few different ideas tossed around to make it clear and obvious which credentials and project IDs are in use during a particular API call. Some of those have been....
```python
with connection:
    # do something
```

We do (1), are talking about doing (2), while the others tend to do (3) -- and comparing the code, I think (3) is the nicest.
gcloud-node:
gcloud-ruby:
gcloud-python:
gcloud-python if we followed the client pattern:
Another option for gcloud-python using the client pattern:
/cc @dhermes @tseaver