forked from astral-sh/ruff
[pull] main from astral-sh:main #99
Open
pull wants to merge 80 commits into psy-repos-rust:main from astral-sh:main
Conversation
…to before 3.14 (#17660)

What it says on the tin 😄
We are currently representing type variables using a `KnownInstance` variant, which wraps a `TypeVarInstance` that contains the information about the typevar (name, bounds, constraints, default type). We were previously only constructing that type for PEP 695 typevars. This PR constructs that type for legacy typevars as well. It also detects functions that are generic because they use legacy typevars in their parameter list.

With the existing logic for inferring specializations of function calls (#17301), that means that we are correctly detecting that the definition of `reveal_type` in the typeshed is generic, and inferring the correct specialization of `_T` for each call site.

This does not yet handle legacy generic classes; that will come in a follow-on PR.
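For illustration, a minimal sketch of the two typevar styles and the call-site specialization described above (the function names here are made up for the example; `reveal_type` is used bare, as in the mdtests):

```py
from typing import TypeVar

_T = TypeVar("_T")  # legacy typevar: an explicit `TypeVar` assignment

def first_legacy(items: list[_T]) -> _T:  # generic because `_T` appears in its parameters
    return items[0]

def first_pep695[T](items: list[T]) -> T:  # PEP 695 typevar, already supported
    return items[0]

# With call-site specialization inference, both calls should specialize the
# typevar to `int`:
reveal_type(first_legacy([1, 2, 3]))  # revealed: int
reveal_type(first_pep695([1, 2, 3]))  # revealed: int
```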
## Summary

Subtyping was already modeled, but assignability also needs an explicit branch. Removes 921 ecosystem false positives.

## Test Plan

New Markdown tests.
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
…ed_knot_test` as well as in `red_knot_python_semantic` (#17718)
## Summary

Part of #15383, this PR adds `is_equivalent_to` support for overloaded callables. This is mainly done by delegating it to the subtyping check, in that two types A and B are considered equivalent if A is a subtype of B and B is a subtype of A.

## Test Plan

Add test cases for overloaded callables in `is_equivalent_to.md`
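For illustration of the mutual-subtyping check described above, a sketch of what such a test case could look like. The `knot_extensions` helpers `CallableTypeOf` and `is_equivalent_to` are assumed to behave as in the existing mdtest suite; the exact spelling may differ.

```py
from typing import overload
from knot_extensions import CallableTypeOf, is_equivalent_to, static_assert

@overload
def f(x: int) -> int: ...
@overload
def f(x: str) -> str: ...
def f(x: int | str) -> int | str:
    return x

@overload
def g(x: int) -> int: ...
@overload
def g(x: str) -> str: ...
def g(x: int | str) -> int | str:
    return x

# The overloaded callable type of `f` is a subtype of that of `g` and vice
# versa, so the two are considered equivalent.
static_assert(is_equivalent_to(CallableTypeOf[f], CallableTypeOf[g]))
```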
## Summary

We were previously recording wrong reachability constraints for negative branches. Instead of `[cond] AND (NOT [True])` below, we were recording `[cond] AND (NOT ([cond] AND [True]))`, i.e. we were negating not just the last predicate, but the `AND`-ed reachability constraint from the last clause. With this fix, we now record the correct constraints for the example from #17723:

```py
def _(cond: bool):
    if cond:  # reachability: [cond]
        if True:  # reachability: [cond] AND [True]
            pass
        else:  # reachability: [cond] AND (NOT [True])
            x
```

closes #17723

## Test Plan

* Regression test.
* Verified the ecosystem changes
## Summary

@sharkdp and I realised in our 1:1 this morning that our control flow for `assert` statements isn't quite accurate at the moment. Namely, for something like this:

```py
def _(x: int | None):
    assert x is None, reveal_type(x)
```

we currently reveal `None` for `x` here, but this is incorrect. In actual fact, the `msg` expression of an `assert` statement (the expression after the comma) will only be evaluated if the test (`x is None`) evaluates to `False`. As such, we should be adding a constraint of `~None` to `x` in the `msg` expression, which should simplify the inferred type of `x` to `int` in that context (`(int | None) & ~None` -> `int`).

## Test Plan

Mdtests added.

---------

Co-authored-by: David Peter <mail@david-peter.de>
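For illustration, a minimal mdtest-style sketch of the behaviour after this change (the exact `revealed:` output format is an assumption):

```py
def _(x: int | None):
    # The msg expression only runs when the test is false, so `x is None` has
    # already failed there and `x` narrows to `int`.
    assert x is None, reveal_type(x)  # revealed: int
```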
…#17732) Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
## Summary

Part of #15383, this PR adds the core infrastructure to check for invalid overloads and adds a diagnostic that is raised if there are < 2 overloads for a given definition.

### Design notes

The requirements for checking the overloads are:

* Requires `FunctionType`, which has the `to_overloaded` method
* The `FunctionType` **should** be the one for the function that is either the implementation or, if the implementation doesn't exist, the last overload
* Avoid checking any `FunctionType` that is part of an overload chain
* Consider visibility constraints

This required a couple of iterations to make sure all of the above requirements are fulfilled.

#### 1. Use a set to deduplicate

The logic would first collect all the `FunctionType`s that are part of the overload chain, except for the implementation (or the last overload if the implementation doesn't exist). Then, when iterating over all the function declarations within the scope, we'd avoid checking these functions. But this approach would fail to consider visibility constraints, as certain overloads _can_ be behind a version check. Those aren't part of the overload chain, but they aren't a separate overload chain either.

<details><summary>Implementation:</summary>
<p>

```rs
fn check_overloaded_functions(&mut self) {
    let function_definitions = || {
        self.types
            .declarations
            .iter()
            .filter_map(|(definition, ty)| {
                // Filter out function literals that result from anything other than a function
                // definition e.g., imports.
                if let DefinitionKind::Function(function) = definition.kind(self.db()) {
                    ty.inner_type()
                        .into_function_literal()
                        .map(|ty| (ty, definition.symbol(self.db()), function.node()))
                } else {
                    None
                }
            })
    };

    // A set of all the functions that are part of an overloaded function definition except for
    // the implementation function and the last overload in case the implementation doesn't
    // exist. This allows us to collect all the function definitions that need to be skipped
    // when checking for invalid overload usages.
    let mut overloads: HashSet<FunctionType<'db>> = HashSet::default();

    for (function, _, _) in function_definitions() {
        let Some(overloaded) = function.to_overloaded(self.db()) else {
            continue;
        };
        if overloaded.implementation.is_some() {
            overloads.extend(overloaded.overloads.iter().copied());
        } else if let Some((_, previous_overloads)) = overloaded.overloads.split_last() {
            overloads.extend(previous_overloads.iter().copied());
        }
    }

    for (function, _, function_node) in function_definitions() {
        let Some(overloaded) = function.to_overloaded(self.db()) else {
            continue;
        };
        if overloads.contains(&function) {
            continue;
        }

        // At this point, the `function` variable is either the implementation function or the
        // last overloaded function if the implementation doesn't exist.
        if overloaded.overloads.len() < 2 {
            if let Some(builder) = self
                .context
                .report_lint(&INVALID_OVERLOAD, &function_node.name)
            {
                let mut diagnostic = builder.into_diagnostic(format_args!(
                    "Function `{}` requires at least two overloads",
                    &function_node.name
                ));
                if let Some(first_overload) = overloaded.overloads.first() {
                    diagnostic.annotate(
                        self.context
                            .secondary(first_overload.focus_range(self.db()))
                            .message(format_args!("Only one overload defined here")),
                    );
                }
            }
        }
    }
}
```

</p>
</details>

#### 2. Define a `predecessor` query

The `predecessor` query would return the previous `FunctionType` for the given `FunctionType`, i.e., the current logic would be extracted into a query instead. This could then be used to make sure that we check the entire overload chain once.
The way this would've been implemented is to have a `to_overloaded` implementation which would take the root of the overload chain instead of the leaf. But this would require updates to the use-def map to somehow be able to return the _following_ functions for a given definition.

#### 3. Create a successor link

This is what Pyrefly uses: we'd create a forward link between two functions that are involved in an overload chain. This means that, for a given function, we can get the successor function. This could be used to find the _leaf_ of the overload chain, which can then be used with the `to_overloaded` method to get the entire overload chain. But this would also require updating the use-def map to be able to "see" the _following_ function.

### Implementation

This leads us to the final implementation, which this PR adopts. It considers the overloaded functions as follows:

* Collect all the **function symbols** that are defined **and** called within the same file; each of these could potentially be an overloaded function.
* Use the public bindings to get the leaf of the overload chain, use that to get the entire overload chain via `to_overloaded`, and perform the check.

One limitation is that if a function redefines an overload, the shadowed overload will not be checked. For example:

```py
from typing import overload

@overload
def f() -> None: ...
@overload
def f(x: int) -> int: ...

# The above overload will not be checked as the below function with the same name
# shadows it
def f(*args: int) -> int: ...
```

## Test Plan

Update existing mdtest and add snapshot diagnostics.
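For illustration, a minimal sketch of the kind of code the new diagnostic targets. The rule name (`INVALID_OVERLOAD`) and message follow the snippet above; the exact rendering and placement of the error comment are assumptions.

```py
from typing import overload

@overload
def to_str(x: int) -> str: ...
# error: `to_str` requires at least two overloads; only one `@overload`-decorated
# definition precedes the implementation
def to_str(x: int | str) -> str:
    return str(x)
```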
Re: #17526

## Summary

Adds tests to red knot and `linter.rs` for the semantic syntax. Specifically, add tests for `ReboundComprehensionVariable`, `DuplicateTypeParameter`, and `MultipleCaseAssignment`. Refactor `test_async_comprehension_in_sync_comprehension` → `test_semantic_error` to be more general for all semantic syntax test cases.

## Test Plan

This is a test.

## Question

I'm happy to contribute more tests in the coming days. Should that happen here, or should we merge this PR so that the `test_async_comprehension_in_sync_comprehension` → `test_semantic_error` refactor is available on main and others can chime in, too?
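For reference, minimal sketches of the constructs these three rules flag, based on my reading of the rule names; each snippet is rejected as a semantic syntax error:

```py
# ReboundComprehensionVariable: a walrus expression rebinds the comprehension's
# iteration variable (disallowed by PEP 572).
[x := 1 for x in range(3)]

# DuplicateTypeParameter: the same name appears twice in a type parameter list.
def f[T, T](x: T) -> T: ...

# MultipleCaseAssignment: the same name is bound more than once within a single
# match-case pattern.
match (1, 2):
    case (x, x):
        ...
```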
## Summary

As mentioned in the spec (https://typing.python.org/en/latest/spec/overload.html#invalid-overload-definitions), part of #15383:

> The `@overload`-decorated definitions must be followed by an overload implementation, which does not include an `@overload` decorator. Type checkers should report an error or warning if an implementation is missing. Overload definitions within stub files, protocols, and on abstract methods within abstract base classes are exempt from this check.

## Test Plan

Remove TODOs from the test; create one diagnostic snapshot.
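For illustration, a minimal sketch of the case the spec describes; the error comment is illustrative, not the checker's exact message:

```py
from typing import overload

@overload
def parse(x: int) -> int: ...
@overload
def parse(x: str) -> str: ...
# error: the overloads above are never followed by a non-`@overload` implementation
# (this would be exempt in a stub file, a protocol, or on an abstract method)
```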
## Summary

Part of #15383. As per the spec (https://typing.python.org/en/latest/spec/overload.html#invalid-overload-definitions):

For `@staticmethod` and `@classmethod`:

> If one overload signature is decorated with `@staticmethod` or `@classmethod`, all overload signatures must be similarly decorated. The implementation, if present, must also have a consistent decorator. Type checkers should report an error if these conditions are not met.

For `@final` and `@override`:

> If a `@final` or `@override` decorator is supplied for a function with overloads, the decorator should be applied only to the overload implementation if it is present. If an overload implementation isn’t present (for example, in a stub file), the `@final` or `@override` decorator should be applied only to the first overload. Type checkers should enforce these rules and generate an error when they are violated. If a `@final` or `@override` decorator follows these rules, a type checker should treat the decorator as if it is present on all overloads.

## Test Plan

Update existing tests; add snapshots.
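For illustration, a minimal sketch of the `@staticmethod` consistency rule quoted above; the class and error comment are made up for the example:

```py
from typing import overload

class Converter:
    @overload
    @staticmethod
    def convert(x: int) -> str: ...
    @overload
    def convert(x: str) -> str: ...  # error: not decorated with `@staticmethod` like the other overload
    @staticmethod
    def convert(x: int | str) -> str:
        return str(x)
```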
## Summary

Model the lookup of `__new__` without going through `Type::try_call_dunder`. The `__new__` method is only looked up on the constructed type itself, not on the meta-type.

This now removes ~930 false positives across the ecosystem (vs 255 for #17662). It introduces 30 new false positives related to the construction of enums via something like `Color = enum.Enum("Color", ["RED", "GREEN"])`. This is expected, because we don't handle custom metaclass `__call__` methods. The fact that we previously didn't emit diagnostics there was a coincidence (we incorrectly called `EnumMeta.__new__`, and since we don't fully understand its signature, that happened to work with `str`, `list` arguments).

closes #17462

## Test Plan

Regression test
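For illustration, a small sketch of the lookup rule and the enum trade-off described above (the error comments are illustrative, not the checker's exact output):

```py
import enum

class Point:
    # `__new__` is looked up on the constructed type (`Point`) itself, not on
    # its meta-type (`type`).
    def __new__(cls, x: int, y: int) -> "Point":
        return super().__new__(cls)

Point(1, 2)  # OK: matches `Point.__new__`
Point(1)     # error: checked against `Point.__new__`, which needs two arguments

# Functional enum construction goes through the metaclass `__call__`
# (`EnumMeta.__call__`), which is not modelled yet, so calls like this may now
# surface as false positives:
Color = enum.Enum("Color", ["RED", "GREEN"])
```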
…on-generic ones (#17741)
Co-authored-by: Brendan Cooley <brendanc@ladodgers.com>
## Summary

closes #17775

## Test Plan

Added corpus regression test
## Summary

Remove mutability in parameter types for a few functions such as `with_self` and `try_call`. I tried the `Rc`-approach with cheap cloning [suggested here](#17733 (comment)) first, but it turns out we need a whole stack of prepended arguments (there can be [both `self` *and* `cls`](https://github.com/astral-sh/ruff/blob/3cf44e401a64658c17652cd3a17c897dc50261eb/crates/red_knot_python_semantic/resources/mdtest/call/constructor.md?plain=1#L113)), and we would need the same construct not just for `CallArguments` but also for `CallArgumentTypes`. At that point we're cloning `VecDeque`s anyway, so the overhead of cloning the whole `VecDeque` with all arguments didn't seem to justify the additional code complexity.

## Benchmarks

Benchmarks on tomllib, black, jinja, isort seem neutral.
This adds support for legacy generic classes, which use a `typing.Generic` base class, or which inherit from another generic class that has been specialized with legacy typevars.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
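For illustration, a minimal sketch of the two kinds of legacy generic classes described above (class names are made up for the example):

```py
from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):  # legacy generic class via the `typing.Generic` base class
    def __init__(self, item: T) -> None:
        self.item = item

class IntBox(Box[int]):  # inherits from a generic class specialized with a legacy typevar
    pass

reveal_type(Box(1).item)  # with call-site specialization inference: revealed int
```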
…e to arbitrary protocol types (#17810)
## Summary

Currently this assertion fails on `main`, because we do not synthesize a `__call__` attribute for `Callable` types:

```py
from typing import Protocol, Callable
from knot_extensions import static_assert, is_assignable_to

class Foo(Protocol):
    def __call__(self, x: int, /) -> str: ...

static_assert(is_assignable_to(Callable[[int], str], Foo))
```

This PR fixes that. See previous discussion about this in #16493 (comment) and #17682 (comment)

## Test Plan

Existing mdtests updated; a couple of new ones added.
…g a protocol's interface (#17808)

## Summary

Currently red-knot does not understand `Foo` and `Bar` here as being equivalent:

```py
from typing import Protocol

class A: ...
class B: ...
class C: ...

class Foo(Protocol):
    x: A | B | C

class Bar(Protocol):
    x: B | A | C
```

Nor does it understand `A | B | Foo` as being equivalent to `Bar | B | A`. This PR fixes that.

## Test Plan

new mdtest assertions added that fail on `main`
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary

Fixes #17821.

## Test Plan

New CLI test. This might be overkill for such a simple fix, but it made me feel better to add a test.
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
See Commits and Changes for more details.
Created by pull[bot] (v2.0.0-alpha.1)