Fix/#13070 defer annotations when future is active #13395
Conversation
CodSpeed Performance Report
Merging #13395 will degrade performance by 4.26%.
I just realised that the performance issue can probably be fixed by relying on the fact that …
This is fantastic, thank you!!
The semantic index builder is the right place to look for this, and the semantic index is the right place to put it. Any analysis that we can do without requiring a dependency on other files, we want to do in semantic indexing.
I suspect the performance regression on the incremental check is because tomllib (which we are using for our benchmark) does use `from __future__ import annotations`, and deferred types require an extra Salsa query and thus increase Salsa overhead on the incremental check.
So I suspect that requiring `from __future__ import annotations` to be the first statement in a file won't help with the performance.
And I'm not sure if we even want to require it. A misplaced `from __future__ import annotations` won't work at runtime (the module won't even import), but for our purposes I think we should assume the user intended for it to take effect, and we should still check the code assuming its use. Otherwise, accidentally adding a line before your `from __future__ import annotations` could suddenly result in a ton of bogus diagnostics on your forward-reference annotations, which isn't really useful.
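For example (illustrative, not code from the PR), any statement other than a docstring before the future import makes the module fail to import at runtime:

```py
import os  # any statement before the future import invalidates it

from __future__ import annotations
# SyntaxError: from __future__ imports must occur at the beginning of the file
```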
I think we should add a comment to this effect where we check for `from __future__ import annotations` in the semantic index builder.
And I think we should emit a diagnostic in type inference if we run across a misplaced `__future__` import. But this is pretty much a separate thing and could be its own PR. (EDIT: I actually think it probably shouldn't be done in type inference; misplaced `__future__` imports are a syntax-level error that isn't really part of type checking. So we'll want this diagnostic, but probably not in type inference, and definitely not in this PR.)
That's somewhat of a relief (if it's true), because the implementation I have so far is not ideal. I will still try to go through with it to make sure that it is not the source of the regression.
This is true, though the spot in the code you linked is for supporting access of attributes on function objects. To support method calls, I think what we are missing is understanding attribute access on instances of classes (as opposed to class objects):
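To illustrate the distinction (my own sketch, not code from the PR):

```py
class C:
    def method(self) -> int:
        return 1

C.method           # attribute access on the class object: a plain function
instance = C()
instance.method    # attribute access on an instance: a bound method
instance.method()  # a method call, which needs the instance-attribute lookup above
```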
If this is a lot of work, an easier way to check the performance impact would be to see how performance on the benchmark fares if you just consider …
While this is not related to the PR, I did waste some time on the difference between "expected `Unknown`" (i.e. resolution failed) vs. "not implemented `Unknown`" (i.e. this case should yield a value for the current AST, but we haven't coded it yet). Would it be worth considering a temporary value (until red-knot is feature-complete, or almost so) that would make this difference obvious? To be fair, I'll also be more careful next time now that I know 🙂
That's an interesting idea! I think we could have … This seems like a net positive to me! Not sure I would get to it soon, but if you're interested in doing it, I'd certainly consider a PR.
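A rough sketch of the idea in Python terms (red-knot itself is written in Rust, and all names here are hypothetical):

```py
from enum import Enum, auto

class UnknownKind(Enum):
    UNRESOLVED = auto()       # "expected Unknown": we looked and resolution failed
    NOT_IMPLEMENTED = auto()  # "not implemented Unknown": inference isn't coded yet
```

Tagging `Unknown` with something like this would make the difference show up directly in test failures and debugging output.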
Yes, see #12986 for previous discussion! I think the main concern with that PR was how to handle the inner enum in unions and intersections.
This is great. Thank you!
It's a bit of a problem that I can't acknowledge the CodSpeed regression because the URL doesn't work, haha. Okay, there's a way: navigate to the runs page and open the results from there: https://codspeed.io/astral-sh/ruff/runs/66ebdbcf6ba711013f9a14a5
Summary
This PR tries to provide support for deferred resolution of type hints in files with `from __future__ import annotations`. Fixes #13070.
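To illustrate the behavior (my own example, not a test from this PR): with the future import active, annotations are evaluated lazily, so forward references are valid:

```py
from __future__ import annotations

def make_foo() -> Foo:  # `Foo` is defined later; fine, the annotation is deferred
    return Foo()

class Foo: ...
```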
Implementation
Currently, the method `is_stub` is already used in multiple places to resolve deferred annotations. As all current usages of `is_stub` are dedicated to deferred type inference, I decided in this implementation to refactor `is_stub` && `has_future_annotations` together in `are_all_types_deferred`. We could also keep them separate and always test them together.
The main difficulty is how to know if that flag is active. I tried an implementation that checks this when building the semantic index. I don't know if it's the best choice (as `SemanticIndex` doesn't have any attribute similar to this), but building the semantic index already parses every statement in the file, which made it an ideal place to find this flag. Happy to get any feedback on a better spot for this if you can think of one.
Test Plan
There is currently one test addressing deferred annotation resolution, focused on builtins. I don't quite understand why builtins require deferred annotations, so I focused on test cases I encounter when coding in Python.
In my experience, the two most common occurrences requiring deferred annotations (in regular code) are referencing a symbol before its definition and method annotations that refer to the enclosing class.
Some testing led me to find that we don't currently support type resolution for methods, so I only tested the case of referencing a symbol before its definition.
I added 3 separate test cases, all with the same base code:
- in a regular module (`*.py`), `foo` must not resolve (`Unknown`)
- in a stub file (`*.pyi`), `foo` must resolve to `Foo`
- with `from __future__ import annotations`, `foo` must resolve to `Foo`
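A minimal sketch of what such a base case could look like (the actual test fixture in the PR may differ):

```py
foo: Foo  # forward reference: `Foo` is only defined below

class Foo: ...
```

Without deferred evaluation (a plain `*.py` module), the annotation is evaluated eagerly while `Foo` is not yet bound, so `foo` stays `Unknown`; in a stub, or under `from __future__ import annotations`, evaluation is deferred and `foo` resolves to `Foo`.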