Improve logic that chooses co- vs. contra-variant inferences #57909
Conversation
This PR doesn't have any linked issues. Please open an issue that references this PR. From there we can discuss and prioritise.
@typescript-bot test top800
@typescript-bot perf test this faster
src/compiler/checker.ts (outdated)

```diff
@@ -26465,10 +26465,10 @@ export function createTypeChecker(host: TypeCheckerHost): TypeChecker {
     // and has inferences that would conflict. Otherwise, we prefer the contra-variant inference.
     const preferCovariantType = inferredCovariantType && (!inferredContravariantType ||
         !(inferredCovariantType.flags & TypeFlags.Never) &&
-        some(inference.contraCandidates, t => isTypeSubtypeOf(inferredCovariantType, t)) &&
+        some(inference.contraCandidates, t => isTypeAssignableTo(inferredCovariantType, t)) &&
```
If this is not the correct change, then at the very least a test case should be proposed that shows how `isTypeSubtypeOf` is better.
When this check was originally introduced by @ahejlsberg here, it was stated that:
> Furthermore, knowing that an error will result when the co-variant inference is not a subtype of the contra-variant inference, we now prefer the contra-variant inference because it is likely to have come from an explicit type annotation on a function. This improves our error reporting.
An alternative idea to fix this example would be to keep `coAndContraRelationCheck` on `inferenceContext`. From what I understand, the `subtypeRelation` in this context is mainly used when dealing with multiple overloads - a single-signature case uses `assignableRelation` with `chooseOverload`. So perhaps the relation in use should determine which one is used here.
Hey @jakebailey, the results of running the DT tests are ready. There were interesting changes. Branch-only errors:
Package: react
Package: styled-theming
@jakebailey Here are the results of running the user tests comparing. Something interesting changed - please have a look.
Details
@jakebailey Here they are:
tsc Comparison Report - baseline..pr
System info unknown
Hosts
Scenarios
Developer Information:
@typescript-bot pack this
Hey @jakebailey, I've packed this into an installable tgz. You can install it for testing by referencing it in your
and then running. There is also a playground for this build and an npm module you can use via
@jakebailey could you rerun the top800, DT and perf suites? A new playground would also be appreciated :)
@typescript-bot test top800
@typescript-bot perf test this faster
Hey @jakebailey, I've packed this into an installable tgz. You can install it for testing by referencing it in your
and then running. There is also a playground for this build and an npm module you can use via
Hey @jakebailey, the results of running the DT tests are ready. There were interesting changes. Branch-only errors:
Package: styled-theming
@jakebailey Here are the results of running the user tests comparing. Everything looks good!
@jakebailey Here they are:
tsc Comparison Report - baseline..pr
System info unknown
Hosts
Scenarios
Developer Information:
@jakebailey Here are the results of running the top 800 repos comparing. Everything looks good!
The only reported error here is actually desired! See the comment here. I didn't look into this one before because of that and because it just involves multiple libraries with complicated types. I plan to reduce it to a test case now and add it here.
EDIT: reduced from this initial repro (TS playground) to this TS playground. It's pretty lengthy, but I already have problems with removing just about anything from it.
Hey @jakebailey, the results of running the DT tests are ready. There were interesting changes. Branch-only errors:
Package: styled-theming
@jakebailey Here are the results of running the user tests comparing. Everything looks good!
Still just one error :p, and I have already commented on it here
@jakebailey Here they are:
tsc Comparison Report - baseline..pr
System info unknown
Hosts
Scenarios
Developer Information:
@jakebailey Here are the results of running the top 400 repos comparing. Everything looks good!
Indeed, just rechecking since this is old and we also have new perf benchmarks / stats.
Since the tests are good, I'm fine with this (nice cheeky PR title that copies the title given to two other PRs that have edited this), but I do want to take a moment to write about this function at a conceptual level. The comment in the code does a poor job of this - it literally restates what the code below does rather than expounding on why it does what it does.
We're trying to pick between the covariant-position inference result and the contravariant-position inference result. Strictly speaking, when the results differ, you can easily be in a situation where there simply is no correct inference. For example, if we're inferring from `(x: number) => string` to `(x: T) => T`, picking either `number` or `string` is basically just a choice of which error to produce at the argument site - there is no single `T` that the input type will be assignable to (except `any`). However, if you're a bit lucky, one of the two results will work in both positions and produce a type for which the overall argument is assignable. For example, if you have `(x: number) => 0`, `0` will work for `T` in both positions. So will `number`. Ideally, we'd just note all the possible inferences and try them all to see if any choice results in a valid assignment of the argument, then pick that one. Instead, we hem and haw a bit because we don't really want to do all that backtracking (it's costly), and try to find a local heuristic to pick one or the other for each type argument. Moreover, when multiple results are valid, a useful ranking of them for the "best" match is helpful, since users typically have an intent or expectation for how we bias our choices in situations like these.
As of this PR, that ranking is:
- Covariant result, if it is not `never` or `any`, is assignable to all contravariant candidates, the variable being inferred isn't referred to directly by another variable's constraint, and every covariant inference candidate is assignable to the chosen covariant inference result
- Contravariant result, if present
- Covariant result
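The three-step ranking above can be modeled as a standalone function; all of the names below are illustrative, not the checker's actual internals:

```typescript
// Illustrative model of an inference candidate, not a real compiler type.
interface InferenceResult {
  typeText: string;                 // e.g. "number", "0"
  isNeverOrAny: boolean;            // excluded from step 1 of the ranking
  passesCovariantChecks: boolean;   // assignable to all contra candidates, etc.
}

// Mirrors the ranking described above.
function pickInference(
  covariant: InferenceResult | undefined,
  contravariant: InferenceResult | undefined,
): InferenceResult | undefined {
  // 1. Covariant result, when it is not never/any and passes the extra checks.
  if (covariant && !covariant.isNeverOrAny && covariant.passesCovariantChecks) {
    return covariant;
  }
  // 2. Otherwise the contravariant result, if present.
  if (contravariant) return contravariant;
  // 3. Fall back to the covariant result.
  return covariant;
}

const chosen = pickInference(
  { typeText: "0", isNeverOrAny: false, passesCovariantChecks: true },
  { typeText: "number", isNeverOrAny: false, passesCovariantChecks: true },
);
console.log(chosen?.typeText); // "0" - the covariant result wins when its checks pass
```

This also makes the complaint below concrete: when both candidates are valid, step 1 hands the win to the covariant (often literal) result rather than the wider contravariant one.
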
Why is this the ranking? Uh... I dunno. Mostly just empirical testing of "this gives good results". I really wish I could point to more rigor here. Certainly, I cannot point to an algebra from which this algorithm arises. Heck, I'd argue it's kinda wrong and bad - in the `(x: number) => 0` example above, it picks `0`, but `number` would be a more reasonable pick. By what metric? Feels. Literals are constraining, and I feel you should only pick them as a last resort.
But at least over our current implementation, assignability over subtyping makes sense, since arguments are ultimately compared via assignability and not subtyping (though using the `compareTypes` on the inference context may be better still, since that would allow the subtype overload pass to use subtyping for its heuristic, as those arguments are compared via subtype, while the second pass can use assignability). Excepting `any` in the same way as `never` from the ranking also tracks, since they're basically the same on the source side of a relationship, which is how they are compared in the first step here.
Hm, baselines don't seem to be fully up to date.
@jakebailey fixed that :)
I see why you mention this, but I failed to create a reasonable test case showing this is better in practice. I could apply this change blindly, as it doesn't make a difference for the existing test suite. I also don't think it would surface anything in the extended test suite, since that would essentially move the needle closer to the state before this PR for some cases. So I'm hesitant to make this change, as I'd prefer to do it with a test case at hand. At the same time, I see a risk in introducing this change (and, funny enough, I failed to create a test case showing this too, even a contrived one), since contextual parameter types (cached!) would be assigned based on the proposed subtype check from the first overload pass. This could potentially yield worse results for them when the secondary pass (using
Should this PR be mentioned in the 5.6 Announcements under Notable Behavioral Changes? I realize the Beta and RC announcements have already gone out. |
To review this it might be helpful to see how this evolved over time:
#27028
#46392
#52123
#52180
#54072
The reason why this inference fails today is that `isTypeSubtypeOf` leads to `requireOptionalProperties === true`. This has such inferences:

So the covariant inference lacks the `body` property, and thus it fails the check and the contravariant inference gets chosen at the end.

fixes #57908
fixes #58468