inference: don't consider `Const(::Type)` argument as const-profitable (#51693)

Conversation
(force-pushed d8237a5 → b60da6f)
@nanosoldier
SGTM. Though I wonder if the other direction wouldn't often be more useful, if we could rearrange some things here to make that doable (eg only do constant prop/constant eval with the full Const and regular inference caching only on the signature intersected with plain Type, except when those are the same)
I'm not entirely sure I've grasped your idea, but removing the current constant propagation does sound tough, especially because there are many cases where we can optimize methods using extended lattice information (mostly …)
(force-pushed b6e7781 → e24b1c1)
I was a little worried about this, because the Const version does have a fair bit more information, but in most cases I tried, concrete evaluation saved it in the end, so this is probably fine. That said, it does feel a bit like a hack. I do wonder whether we can't keep track of whether constant information would have let us improve the result, and use that rather than continuing to hardcode heuristics like this.
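A hedged sketch of the "concrete evaluation saves it" point above: when every argument at a call site is a compile-time constant and the callee is effect-free, inference can simply execute the call and use the result, so dropping constant propagation for the `Const(::Type)` argument often costs nothing. The helper names `pick` and `caller` here are hypothetical, chosen to echo the examples later in this thread.

```julia
# Hypothetical sketch: `pick` compares a type object and returns a constant.
# The `===` comparison is foldable, so with all-constant arguments recent
# Julia versions can concretely evaluate the call during inference, even
# when const-prop of the type argument is skipped.
pick(T, x) = T === Base.RefValue ? x : nothing

caller(y) = pick(Base.RefValue, 42)  # all arguments are constants here

# At runtime the result is 42 regardless of what inference folded:
println(caller(0))
```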
Okay, so that kind of per-variable analysis may be possible as an extension of EA (or as a simplification of it, given EA's current latency problem). For determining const-prop profitability, we need a caller context to observe information about callee arguments derived by such a per-variable analysis, so the analysis doesn't necessarily need to be integrated within abstract interpretation; it can be implemented as a post-optimization analysis.
So this PR may introduce an inference regression in cases like:

```julia
function const_prop_target(T, x)
    if x isa Int
        return concrete_eval_target(T, 42)
    else
        return missing
    end
end
concrete_eval_target(T, x) = T === Base.RefValue ? x : nothing
```

where

```julia
julia> descend((Int,); optimize=false) do x
           const_prop_target(Base.RefValue, x)
       end
[ Info: Loading Cthulhu...
(::var"#10#11")(x) @ Main REPL[3]:2
Variables
  #self#::Core.Const(var"#10#11"())
  x::Int64
∘ ─ %0 = invoke #10(::Int64)::Union{Nothing, Int64} (+c,+e,+n,+t,+s,+m,+u)
    @ REPL[3]:2 within `#10`
1 ─ %1 = Base.RefValue::Core.Const(Base.RefValue) (+c,+e,+n,+t,+s,+m,+u)
│   %2 = Main.const_prop_target(%1, x)::Union{Nothing, Int64} (+c,+e,+n,+t,+s,+m,+u) [constprop] Disabled by argument and rettype heuristics
└── return %2
Select a call to descend into or ↩ to ascend. [q]uit. [b]ookmark.
Toggles: [w]arn, [h]ide type-stable statements, [t]ype annotations, [s]yntax highlight for Source/LLVM/Native, [j]ump to source always, [o]ptimize, [d]ebuginfo, [r]emarks, [e]ffects, [i]nlining costs.
Show: [S]ource code, [A]ST, [T]yped code, [L]LVM IR, [N]ative code
Actions: [E]dit source code, [R]evise and redisplay
Advanced: dump [P]arams cache.
 • %2 = const_prop_target(::Type{Base.RefValue},::Int64)::Union{Nothing, Int64} (+c,+e,+n,+t,+s,+m,+u)
   ↩
```
That seems like a rare and unimportant code pattern. Code shouldn't over-rely on constant prop anyway, since it only helps inlining, but not runtime execution. That said, I think we could help avoid the repeat inference here, without the concern of a regression in accuracy, by only doing constant-prop (instead of only using the normal cached typeinf_edge). It can still do inference on the compile-signature (e.g. …)
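A small sketch of the "widen to the plain `Type` signature" idea discussed above, using the internal `Core.Typeof` (which maps a type object to its most specific `Type`). This is only an illustration of why the widening loses nothing for singleton type objects, not the PR's implementation:

```julia
# For a singleton type object, the widened compile-signature already pins
# the identity: `Type{Base.RefValue}` has `Base.RefValue` as its only
# instance, so the lattice element `Const(Base.RefValue)` adds nothing.
@assert Core.Typeof(Base.RefValue) === Type{Base.RefValue}
@assert Core.Typeof(Int) === Type{Int}

# For non-type values, `Core.Typeof` is just `typeof`:
@assert Core.Typeof(1) === Int
```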
@nanosoldier
Your benchmark job has completed - no performance regressions were detected. A full report can be found here.
I've observed that we're performing essentially-duplicated inference when type-argument(s) are represented as `Const`. For instance, in the code:

```julia
descend((Int,)) do x
    Ref(x)
end
```

We first perform non-constant inference for `(::Type{Base.RefValue})(::Int)` and then constant inference for `(::Const(Base.RefValue))(::Int)`, although the latter "constant" inference is just wasteful. This happens because `Type{Base.RefValue}` propagates essentially the same information as `Const(Base.RefValue)`. To eliminate the wasteful constant propagation, this commit adjusts `is_const_prop_profitable_arg` so that it no longer considers `Const(x::Type)` as beneficial for constant propagation.
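The commit message's claim can be sketched with `Base.return_types`: inference over the non-constant signature already yields the precise constructor result, so the follow-up constant inference duplicates work. The `wrap` helper here is a hypothetical stand-in for the `descend` example:

```julia
# Inference over the plain signature `wrap(::Int)` — where the callee sees
# `Ref` only as `Type{Base.RefValue}`, not `Const(Base.RefValue)` — should
# already resolve the constructor to a concrete return type on recent
# Julia versions.
wrap(x) = Ref(x)

rt = only(Base.return_types(wrap, (Int,)))
println(rt)   # expected: Base.RefValue{Int64} (the type behind `Ref`)
```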
(force-pushed 6f8f505 → 8b6ca36)
@nanosoldier
Are you suggesting that we avoid the normal cached type inference automatically, without the need for programmers to explicitly use …?
Your job failed.
I had to kill the job because the REPL was failing to precompile during pkgimage.mk |
Superseded by #54043. |
@nanosoldier `runbenchmarks("inference", vs=":master")`