feat: Provide AsyncIO support for generated code #365
Conversation
Force-pushed from 76fcc19 to 8d0e88c.
I know this is WIP, but can you also file an issue to track adding async client support to the ads templates?
gapic/schema/wrappers.py (outdated):

```python
@utils.cached_property
def client_output_async(self):
    """Return the output from the client layer.
```
It looks like there's a fair bit of duplication between `client_output` and `client_output_async`. Would it be worth combining and parameterizing them? It's not obvious to me just by eyeballing how they differ.
Good idea! I will create a function to hide the common logic. In the next pass, I will try to improve readability using a similar pattern.
Thanks. If it turns out to be tricky then no need to twist the logic too much.
I didn't find a way to unify this specific method without regression.
- `client_output` is cached, so it can't accept parameters;
- The output of `client_output` is frozen, so we can't modify its content (we could create a new one, but that is worse than what we have now).
Keep `client_output` and `client_output_async` as the cached layer, then move the guts of the logic into an internal method that takes a parameter. The internal method should not be cached and should not be a property.
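A minimal sketch of that shape (names are hypothetical, and the stdlib `functools.cached_property` stands in for `utils.cached_property`): the cached properties stay as thin wrappers, and the shared logic lives in an uncached, non-property helper that takes a flag.

```python
import functools


class Method:
    @functools.cached_property
    def client_output(self):
        return self._client_output(async_=False)

    @functools.cached_property
    def client_output_async(self):
        return self._client_output(async_=True)

    def _client_output(self, async_: bool) -> str:
        # Shared logic lives here; the flag selects the sync/async variant.
        # (Returning strings only to keep the sketch self-contained.)
        return "ClientOutputAsync" if async_ else "ClientOutput"
```

Since the helper is a plain method, it can take parameters freely, while each property is still computed at most once per instance.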
Created the common internal method in 4ecf34d.
Force-pushed from f1482f1 to 7123809.
This PR may need a smarter way to reuse existing logic. The newly generated class should not be a burden for incoming important features like #359.
Force-pushed from 5329fe7 to 659e0fa.
To improve the maintainability of the microgenerator project, we might want to reduce the duplicated logic. Otherwise, new features will have to apply the same changes in multiple places, which is quite error-prone. I want to list the options we have to compare:
Options 1, 2, 3 solve the maintainability issue, but have different drawbacks. I can change to either one, if you see fit. WDYT? @software-dov @busunkim96

For option 1, proxying methods from one class to another requires quite a lot of boilerplate. For example, the existing Client class has not only instance methods, but also class variables and class methods. The code looks like

For option 2, the issue with helper functions is that the duplicated logic usually involves modifying member variables. Passing

For option 3, inheritance increases code complexity and challenges the scope of the interface abstraction. For example, it is unclear whether the mTLS feature should apply to all transports or only the gRPC transport. It also makes it harder for users to understand the logic, since they need to read multiple classes instead of one.
My preference would be 2, then 1, then 3, then 4. @arithmetic1728 has confirmed that the mTLS logic should eventually apply to HTTP transports, and it presumably will also apply to asynchronous clients. I could be persuaded (and please do, if experiments indicate it!) that composition is a less gross hack than helper functions. I think, in light of the above, I'm leaning more towards composition as a general strategy.
@software-dov I have experimented with the composition pattern to unify logic between Client and AsyncClient; PTAL at commit 3cc730c. The composition pattern doesn't work cleanly with the Transport classes due to function signature differences. AsyncTransport stubs return
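For context, a rough sketch of the composition idea under discussion (hypothetical names, not the actual generated code): the async client holds a synchronous client internally and delegates the shared, non-I/O logic to it.

```python
class Client:
    DEFAULT_ENDPOINT = "example.googleapis.com"  # placeholder endpoint

    def _resolve_endpoint(self, api_endpoint=None):
        # Shared configuration logic, independent of sync/async I/O.
        return api_endpoint or self.DEFAULT_ENDPOINT


class AsyncClient:
    def __init__(self):
        # Compose rather than inherit: reuse Client's helpers directly.
        self._client = Client()

    def _resolve_endpoint(self, api_endpoint=None):
        return self._client._resolve_endpoint(api_endpoint)
```

This keeps one copy of the shared logic, at the cost of some delegation boilerplate; the friction described above appears where the two sides' signatures (e.g. transport stubs) diverge.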
@software-dov WDYT?
@lidizheng If I understand correctly, we want type compatibility between the transport classes.
For channels, if we loosen the type check, it is equivalent to option 3. An abstract Channel base class works. Another option is to type it with
Possibly a bad idea: is there any way we can rewrite the synchronous Transport in terms of the async and just force explicit

I'm a little lost reading through the proposed diffs and the existing
Yes, the composition pattern works except for the types. The complaint can be suppressed by
When I was writing gRPC AsyncIO, I found that even though AsyncIO is the official asynchronous library, people still appreciate the ability to use normal synchronous Python.
Force-pushed from 741fce3 to 14b1760.
Codecov Report

```
@@   Coverage Diff   @@
##    master    #365    +/- ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           26        26
  Lines         1453      1466    +13
  Branches       300       300
=========================================
+ Hits          1453      1466    +13
```

Continue to review the full report at Codecov.
Force-pushed from 887d2b3 to ef2267e.
```
@@ -789,6 +785,12 @@ def test_transport_instance():
    client = {{ service.client_name }}(transport=transport)
    assert client._transport is transport

def test_transport_instance_2():
```
What is this test verifying? That `None` credentials default to anonymous?
I wanted to test whether the transport can initialize with no arguments. I just learned that to use an insecure channel, we need anonymous credentials, so this case is removed.
```python
if scopes is None:
    scopes = cls.AUTH_SCOPES
```
Personal preference/style nit: how about `scopes = cls.AUTH_SCOPES if scopes is None else scopes`?
Updated to `scopes = scopes or cls.AUTH_SCOPES`, same below.
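Worth noting (a general Python point, not specific to this PR): the two spellings differ when the caller passes a falsy-but-not-None value such as an empty tuple. A small illustration with a made-up `AUTH_SCOPES`:

```python
AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",)


def scopes_with_or(scopes=None):
    # Falls back whenever `scopes` is falsy, including an empty tuple.
    return scopes or AUTH_SCOPES


def scopes_with_is_none(scopes=None):
    # Falls back only when `scopes` is literally None.
    return AUTH_SCOPES if scopes is None else scopes
```

With `scopes=()` the first form returns `AUTH_SCOPES` while the second returns `()`; for scope lists this is usually harmless, but it is a behavior difference.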
gapic/templates/%namespace/%name_%version/%sub/services/%service/transports/grpc_asyncio.py.j2
Force-pushed from 94c0a99 to bdb628f.
The original version of the mTLS setup:

```python
if isinstance(client_options, dict):
    client_options = ClientOptions.from_dict(client_options)
if client_options is None:
    client_options = ClientOptions.ClientOptions()

if transport is None and client_options.api_endpoint is None:
    use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS", "never")
    if use_mtls_env == "never":
        client_options.api_endpoint = self.DEFAULT_ENDPOINT
    elif use_mtls_env == "always":
        client_options.api_endpoint = self.DEFAULT_MTLS_ENDPOINT
    elif use_mtls_env == "auto":
        has_client_cert_source = (
            client_options.client_cert_source is not None
            or mtls.has_default_client_cert_source()
        )
        client_options.api_endpoint = (
            self.DEFAULT_MTLS_ENDPOINT if has_client_cert_source else self.DEFAULT_ENDPOINT
        )
    else:
        raise MutualTLSChannelError(
            "Unsupported GOOGLE_API_USE_MTLS value. Accepted values: Never, Auto, Always"
        )

# Save or instantiate the transport.
# Ordinarily, we provide the transport, but allowing a custom transport
# instance provides an extensibility point for unusual situations.
if isinstance(transport, {{ service.name }}Transport):
    # transport is a {{ service.name }}Transport instance.
    if credentials:
        raise ValueError('When providing a transport instance, '
                         'provide its credentials directly.')
    self._transport = transport
elif isinstance(transport, str):
    Transport = type(self).get_transport_class(transport)
    self._transport = Transport(
        credentials=credentials, host=self.DEFAULT_ENDPOINT
    )
else:
    self._transport = {{ service.name }}GrpcTransport(
        credentials=credentials,
        host=client_options.api_endpoint,
        api_mtls_endpoint=client_options.api_endpoint,
        client_cert_source=client_options.client_cert_source,
    )
```

The behavior of mTLS is kind of strange in some cases.
In this PR, the mTLS logic is slightly updated to resolve the above cases. Now, the magical difference between
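For reference, the endpoint-selection branch in the snippet above can be sketched as a standalone function (simplified; the real code sets `client_options.api_endpoint` on the client rather than returning a value):

```python
import os


def resolve_api_endpoint(default_endpoint, mtls_endpoint, has_cert_source):
    """Mirror the GOOGLE_API_USE_MTLS branches from the original setup."""
    use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS", "never")
    if use_mtls_env == "never":
        return default_endpoint
    if use_mtls_env == "always":
        return mtls_endpoint
    if use_mtls_env == "auto":
        # "auto" picks the mTLS endpoint only when a client cert is available.
        return mtls_endpoint if has_cert_source else default_endpoint
    raise ValueError(
        "Unsupported GOOGLE_API_USE_MTLS value: {!r}".format(use_mtls_env))
```

Extracting it this way makes the three environment-variable branches easy to unit test in isolation.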
@arithmetic1728 Could you take a look at the mTLS changes?
Yes, the behavior is supposed to be different when
I think the library assumes users could provide their own implementation of the Transport class, so
Which test is it?
@arithmetic1728 Thanks for the quick response. The test I mentioned is this.
This is the test case that expects
I can see benefits both ways, treating them differently or the same. By differentiating

If we treat both cases in the same way, the transport class will always receive the mTLS arguments. Then users can decide whether to implement the mTLS feature in their customized transport. If they want to ignore those arguments, they should be free to do so. WDYT about this approach?
The logic for mTLS is as follows:
- Transport side: mTLS is controlled by the
- Client side:

Please don't change the mTLS logic, since all languages implement the same logic. Please let me know if you have any questions regarding mTLS.
Yes, please see my previous comments. Let's schedule a meeting to discuss this; it might be easier that way.
@arithmetic1728 Thanks! See you offline (online).
Force-pushed from bdb628f to 236e6a6.
gapic/templates/%namespace/%name_%version/%sub/services/%service/client.py.j2
gapic/templates/%namespace/%name_%version/%sub/services/%service/async_client.py.j2
After discussing with Lidi, the mTLS change looks good to me. LGTM for the mTLS change.
Force-pushed from 94681bf to 0c8940f.
@software-dov @busunkim96 PTALAA.
Looks good, nothing stands out to me. I'll file an issue for cross-patching to alternative templates.

I'm taking it on faith that all the asynchronicity is correct; even after reading multiple blog posts and running local experiments, I still haven't fully internalized it.
Force-pushed from bc5bde8 to 833b567.
@software-dov Thanks for the review! I updated
LGTM as well. Some questions/nits on docstrings and tests. I think the `google-api-core` version needs to be bumped in the template `setup.py.j2`.
gapic/templates/%namespace/%name_%version/%sub/services/%service/transports/grpc_asyncio.py.j2
gapic/templates/%namespace/%name_%version/%sub/services/%service/transports/grpc.py.j2
Force-pushed from 76906da to 52336f8.
@busunkim96 Thanks for reviewing. I'm ashamed that I forgot to remove the debug logging. All comments addressed. Do I need to squash all commits?
@software-dov What merge style is preferred in this repo? EDIT:
Let's see how the CI reacts to the change. The tests might not pass until the changes in googleapis/python-api-core#22 are released.