
inference: don't consider Const(::Type)-argument as const-profitable #51693

Closed · wants to merge 1 commit

Conversation

@aviatesk (Member)

I've observed that we're performing essentially duplicated inference when type arguments are represented as `Const`. For instance, in the code:

```julia
descend((Int,)) do x
    Ref(x)
end
```

we first perform non-constant inference for `(::Type{Base.RefValue})(::Int)` and then constant inference for `(::Const(Base.RefValue))(::Int)`, although the latter "constant" inference is simply wasteful: `Type{Base.RefValue}` conveys essentially the same information as `Const(Base.RefValue)`.

To eliminate the wasteful constant propagation, this commit adjusts `is_const_prop_profitable_arg` so that it no longer considers `Const(x::Type)` beneficial for constant propagation.
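For readers less familiar with the compiler internals, the change amounts to an extra early-exit in the argument-profitability check. The snippet below is only a rough, standalone sketch of the idea; the real `is_const_prop_profitable_arg` lives in `Core.Compiler` and has a different signature (it also takes the lattice, among other things):

```julia
# Rough sketch (not the actual diff): if the argument's lattice element is a
# `Const` wrapping a `Type`, the matching `Type{...}` signature already carries
# the same information, so flagging it as const-prop-profitable would only
# trigger a duplicated round of inference.
function is_const_prop_profitable_arg_sketch(@nospecialize arg)
    if isa(arg, Core.Const) && isa(arg.val, Type)
        return false  # `Type{T}` in the non-const signature is just as precise
    end
    return true       # otherwise, defer to the existing heuristics
end
```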

@nanosoldier runbenchmarks("inference", vs=":master")

@aviatesk requested a review from vtjnash October 13, 2023 15:06
@aviatesk force-pushed the avi/elim-wasterful-constprop branch 2 times, most recently from d8237a5 to b60da6f on October 13, 2023 15:09
@oscardssmith added the `performance` and `compiler:inference` labels Oct 13, 2023
@aviatesk (Member, Author)

@nanosoldier runbenchmarks("inference", vs=":master")

@vtjnash (Member) left a comment

SGTM. Though I wonder if the other direction wouldn't often be more useful, if we could rearrange some things here to make that doable (e.g. only do constant prop / constant eval with the full `Const`, and do regular inference caching only on the signature intersected with plain `Type`, except when those are the same).

@aviatesk (Member, Author)

I'm not entirely sure I've grasped your idea, but removing the current constant propagation does sound tough, especially because there are many cases where we can optimize methods using extended lattice information (mostly `Const`) that is known only for some of the call arguments.
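To illustrate the kind of case meant here, consider a hypothetical example (names made up) where only one argument is known as a `Const` at the call site, and that alone is enough for inference to fold a branch:

```julia
# Only `flag` is a compile-time constant below; `x` is a plain ::Int.
# Propagating `flag` as Const(true) lets inference prove the `nothing` branch
# dead and narrow the return type from Union{Nothing, Int} to Int.
pick(flag::Bool, x::Int) = flag ? x : nothing

caller(x::Int) = pick(true, x)  # `true` propagates as Const; `x` does not

# e.g. Base.return_types(caller, (Int,)) should report Int rather than
# Union{Nothing, Int}, thanks to const-prop on `flag` alone.
```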

@Keno (Member)

Keno commented Oct 15, 2023

I was a little worried about this, because the Const version does have a fair bit more information, but in most cases I tried, concrete evaluation saved it in the end, so this is probably fine. That said, it does feel a bit like a hack. I do wonder whether we can't keep track of whether constant information would have let us improve the result and use that rather than continuing to hardcode heuristics like this.

@aviatesk (Member, Author)

> because the Const version does have a fair bit more information, but in most cases I tried, concrete evaluation saved it in the end

Okay, so `Const(argx::Type)` is occasionally profitable for const-propagated abstract interpretation, because the `Const` information can enable concrete evaluation within the callee context: `argx::Type` isn't necessarily eligible for concrete evaluation, while `Const(argx::Type)` always is (e.g. `!isconstType(Type{Base.RefValue})`).
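For reference, a quick way to see the distinction (hedged: `isconstType` is an internal `Core.Compiler` helper, so its exact name and location may change between versions):

```julia
# `Base.RefValue` is a UnionAll (`RefValue{T} where T`), so the signature type
# `Type{Base.RefValue}` does not pin down a single constant object:
Core.Compiler.isconstType(Type{Base.RefValue})  # expected to return false

# A `Const` lattice element, by contrast, always carries the exact value, so a
# callee receiving Const(Base.RefValue) stays eligible for concrete evaluation.
const_arg = Core.Const(Base.RefValue)
```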

> I do wonder whether we can't keep track of whether constant information would have let us improve the result and use that rather than continuing to hardcode heuristics like this.

That kind of per-variable analysis may be possible as an extension of EA (or as a simplification of it, given EA's current latency problems). To determine const-prop profitability, the caller context needs to observe information about callee arguments derived by such a per-variable analysis, so the analysis doesn't necessarily need to be integrated into abstract interpretation; it could be implemented as a post-optimization analysis.

@aviatesk (Member, Author)

So this PR may introduce an inference regression in cases like:

```julia
function const_prop_target(T, x)
    if x isa Int
        return concrete_eval_target(T, 42)
    else
        return missing
    end
end

concrete_eval_target(T, x) = T === Base.RefValue ? x : nothing
```

where `concrete_eval_target` needs to see `T` as `Const(Base.RefValue)` to get the optimal inference result, but this PR prohibits that, because `const_prop_target` is no longer allowed to propagate `T` as `Const`:

```julia
julia> descend((Int,); optimize=false) do x
           const_prop_target(Base.RefValue, x)
       end
[ Info: Loading Cthulhu...
(::var"#10#11")(x) @ Main REPL[3]:2
Variables
  #self#::Core.Const(var"#10#11"())
  x::Int64

%0 = invoke #10(::Int64)::Union{Nothing, Int64} (+c,+e,+n,+t,+s,+m,+u)
    @ REPL[3]:2 within `#10`
1 ─ %1 = Base.RefValue::Core.Const(Base.RefValue) (+c,+e,+n,+t,+s,+m,+u)
│   %2 = Main.const_prop_target(%1, x)::Union{Nothing, Int64} (+c,+e,+n,+t,+s,+m,+u) [constprop] Disabled by argument and rettype heuristics
└──      return %2
Select a call to descend into or ↩ to ascend. [q]uit. [b]ookmark.
Toggles: [w]arn, [h]ide type-stable statements, [t]ype annotations, [s]yntax highlight for Source/LLVM/Native, [j]ump to source always, [o]ptimize, [d]ebuginfo, [r]emarks, [e]ffects, [i]nlining costs.
Show: [S]ource code, [A]ST, [T]yped code, [L]LVM IR, [N]ative code
Actions: [E]dit source code, [R]evise and redisplay
Advanced: dump [P]arams cache.
 • %2 = const_prop_target(::Type{Base.RefValue},::Int64)::Union{Nothing, Int64} (+c,+e,+n,+t,+s,+m,+u)
```

@aviatesk added the `DO NOT MERGE` label Oct 15, 2023
@vtjnash (Member)

vtjnash commented Oct 15, 2023

That seems like a rare and unimportant code pattern. Code shouldn't over-rely on constant prop anyway, since it only helps inlining, not runtime execution.

That said, I think we could avoid the repeated inference here, without risking a regression in accuracy, by doing only constant-prop (instead of only using the normal cached `typeinf_edge`). It can still do inference on the compile signature (e.g. `const_prop_target(::Type, ::Any)`) and use that to derive heuristics, but just avoid the inference on the fully-specialized version (e.g. `const_prop_target(::Type{Base.RefValue}, ::Int)`). We have been inferring this closer to three times, despite knowing that only the const-prop and compile-sig versions are likely to be the most useful.
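To make those two non-const forms concrete, one can ask for typed code at each signature, reusing the example functions from earlier (redefined here so the snippet is self-contained). This is only a rough illustration; the compiler's actual despecialization / compile-signature rules are more involved than simply widening to `Type`/`Any`:

```julia
concrete_eval_target(T, x) = T === Base.RefValue ? x : nothing
const_prop_target(T, x) = x isa Int ? concrete_eval_target(T, 42) : missing

# fully-specialized signature: the exact argument types seen at the call site
Base.code_typed(const_prop_target, (Type{Base.RefValue}, Int))

# approximate "compile signature" with the Type argument despecialized; the
# suggestion above is to keep this plus the const-prop result, and skip the
# fully-specialized round of inference in between
Base.code_typed(const_prop_target, (Type, Any))
```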

@aviatesk (Member, Author)

@nanosoldier runbenchmarks("inference", vs=":master")

@nanosoldier (Collaborator)

Your benchmark job has completed - no performance regressions were detected. A full report can be found here.

@aviatesk (Member, Author)

@nanosoldier runbenchmarks("inference", vs=":master")

@aviatesk (Member, Author)

aviatesk commented Oct 17, 2023

> That said, I think we could avoid the repeated inference here, without risking a regression in accuracy, by doing only constant-prop (instead of only using the normal cached `typeinf_edge`). It can still do inference on the compile signature (e.g. `const_prop_target(::Type, ::Any)`) and use that to derive heuristics, but just avoid the inference on the fully-specialized version (e.g. `const_prop_target(::Type{Base.RefValue}, ::Int)`). We have been inferring this closer to three times, despite knowing that only the const-prop and compile-sig versions are likely to be the most useful.

Are you suggesting that we avoid the normal cached type inference automatically, without requiring programmers to explicitly use `@nospecializeinfer`? That sounds akin to the first design option we explored in #41931, which was basically your effort at vtjnash@8ab7b6b. It might be worthwhile to reconsider that approach while ensuring it doesn't block const-propagation. I will give that direction a try.
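For context, the explicit opt-in mentioned above looks roughly like this today (just a usage illustration with a made-up method, not code from this PR):

```julia
# `Base.@nospecializeinfer` makes inference use the declared argument types
# (here `Type` and `Any`) rather than the caller's specialized types, which is
# roughly the "infer only the compile signature" behavior, opted into manually.
Base.@nospecializeinfer function takes_type(@nospecialize(T::Type), @nospecialize(x))
    return T === Base.RefValue ? x : nothing
end
```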

@nanosoldier (Collaborator)

Your job failed.

@vtjnash (Member)

vtjnash commented Oct 26, 2023

I had to kill the job because the REPL was failing to precompile during pkgimage.mk

@aviatesk (Member, Author)

Superseded by #54043.

@aviatesk closed this Apr 12, 2024
@aviatesk deleted the avi/elim-wasterful-constprop branch April 12, 2024 13:07