Description
The problem statement
We're starting to see a bit of a proliferation of uses of `@assume_effects`. This isn't really a problem in general,
but the semantics here are subtle and getting them wrong is undefined behavior. As such, it would behoove us to think
about tooling to verify these assumptions wherever possible. Of course, you might say, "well, if the compiler could
verify the effects, we wouldn't need to assume them". That is the correct position of course, but not all
`@assume_effects` annotations are due to limitations that we expect to ever lift. For example, the FMA consistency annotation:
Lines 49 to 55 in 1bf65b9
requires a fairly complete symbolic model of IEEE floating-point semantics, which is beyond the scope of what I'd expect the compiler to ever have in its default compilation pipeline - it is more in the domain of proof assistants and model checkers. Ideally, we'd be able to have these kinds of tools as packages that can then be used to either find or check proofs of our assumptions (presumably on CI, since these kinds of checks can be expensive).
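To make concrete what kind of claim such an annotation asserts, here is a minimal sketch - note this is *not* the actual Base definition, and `my_fma` is a made-up name for illustration:

```julia
# Hypothetical example of an effects assumption the compiler cannot verify:
# `:consistent` promises that egal inputs always produce egal outputs, i.e.
# that this emulation agrees with itself (and, in the real Base case, with
# the hardware fma instruction) on every input. The compiler simply takes
# this on faith - checking it requires reasoning about IEEE semantics.
Base.@assume_effects :consistent function my_fma(a::Float64, b::Float64, c::Float64)
    # Stand-in body for the sketch; the real emulation is a multi-word
    # floating-point computation.
    return muladd(a, b, c)
end
```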
Prior art / the research question
I've taken a look at something like https://github.com/xldenis/creusot, which (as far as I understand) compiles Rust code to Why3's mlcfg representation (which can then generate proof obligations for a whole host of other tooling). It seems relatively straightforward to do the same thing for Julia by taking advantage of the usual IR-agnostic compilation trick that we do, but I think that's only half of the story: it addresses the mechanism of how something like this might work, but not really the semantics, which is where this issue comes in.
I would like to pose the question to the larger Julia/PL community: what - if anything - should the semantics of proofs and proof obligations be in the core Julia system? I have some initial thoughts, which I will post below, but I'm assuming there are a lot of standard approaches and tricks here that I'm just not familiar with, so I'd like to leave this fairly broad for suggestions.
My own thoughts
I think it would be prudent to have some sort of primitive representation of proofs within the core system. This doesn't mean that this system would actually be used for proof checking, but it seems necessary to at least have some object that represents true statements. That way, if you can construct such an object (or prove that the function that constructs it is `:total` and thus would construct the object if it were run), you can assert the truth of something. For example, could we have something like:
struct NoThrowObligation <: ProofObligation
    mi::MethodInstance
    world::UInt
end

struct Proof
    what::ProofObligation

    # Proof construction using Julia compiler builtin
    function NoThrowProof(obl::NoThrowObligation)
        Base.infer_effects(obl.mi, obl.world).nothrow || error()
        new(obl)
    end

    # Proof construction from composition
    function NoThrowImplies(obl::NoThrowObligation)
        # Hypothetical compiler primitive that does `infer_effects`, but also generates
        # a `NoThrowObligation` for every call it couldn't prove `:nothrow` for
        obls = Base.infer_effect_obligations(obl.mi, obl.world).nothrow
        # Return the obligation plus a continuation that, given proofs of the
        # sub-obligations, discharges the whole thing
        obl, proofs -> begin
            all(((p, o),) -> p.what === o, zip(proofs, obls)) || error()
            new(obl)
        end
    end

    # Other constructors monkey-patched by external packages, e.g. the why3 construction.
end
These objects could then be exchanged for checking/assumptions, etc.
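To sketch what "exchanged" might mean in practice (everything here is hypothetical - `Why3.prove` in particular is an invented name for whatever an external bridge package would export):

```julia
# Hypothetical usage sketch, assuming the `Proof`/`NoThrowObligation` design above.
obl = NoThrowObligation(mi, Base.get_world_counter())

# Cheap path: ask the compiler builtin to discharge the obligation directly.
proof = NoThrowProof(obl)

# Expensive path (e.g. on CI): an external package - say, a Why3 bridge -
# discharges the same obligation and hands back the same kind of object:
#   proof = Why3.prove(obl)

# Either way, a consumer only checks that the proof matches the obligation
# it cares about, rather than taking a bare `@assume_effects :nothrow` on faith.
@assert proof.what === obl
```

The point of funneling both paths through one object type is that downstream code (and the compiler) never needs to know *how* an obligation was discharged, only *that* it was.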
This design is obviously bad, but that's because I don't really know what the good designs in this space are. I assume there are a bunch of standard tricks for lifting primitive notions of equality and composition into the proof domain, so I'd love to get some input on the core principles of doing this kind of thing, keeping in mind that this would be an entirely optional part of the language.