
Conversation

@alexcrichton
Member

@alexcrichton alexcrichton commented Sep 1, 2021

This commit is a relatively major change to the way that Wasmtime
generates code for Wasm modules and how functions call each other.
Prior to this commit all calls between functions, even ones defined in
the same module, were made indirectly through a register. To implement
this the backend would emit an absolute 8-byte relocation near each
call site, load that address into a register, and then call through the
register. While this technique is simple to implement and easy to get
right, it has two primary downsides:

  • Function calls are always indirect, which makes them harder for the
    hardware to predict and results in worse performance.

  • Generating a relocation per function call requires expensive
    relocation resolution at module-load time, which can be a large
    contributing factor to how long it takes to load a precompiled module
    (a sketch of that load-time work follows this list).
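
As a rough illustration of that second point, here is a minimal sketch, with made-up types rather than Wasmtime's actual ones, of what the old scheme's load-time work amounts to: every call site carries an absolute 8-byte relocation that must be patched before the module can run.

```rust
/// Hypothetical record of one absolute relocation from the old scheme:
/// "write the 8-byte address of function `target_func` at `offset`".
struct AbsReloc8 {
    offset: usize,      // byte offset of the 8-byte slot in the text section
    target_func: usize, // index of the callee whose address gets written
}

/// Patch every call site at module-load time. With one relocation per call
/// this is O(call sites) of work on every load, which is exactly the cost
/// the rest of this commit removes.
fn apply_abs_relocs(text: &mut [u8], relocs: &[AbsReloc8], func_addrs: &[usize]) {
    for reloc in relocs {
        let addr = func_addrs[reloc.target_func] as u64;
        text[reloc.offset..reloc.offset + 8].copy_from_slice(&addr.to_le_bytes());
    }
}
```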

To fix these issues, while also somewhat compromising the previously
simple implementation technique, this commit switches wasm calls within
a module to use the `colocated` flag, in Cranelift-speak, which
basically means that a relative call instruction is used with a
relocation that's resolved relative to the PC of the call instruction
itself.
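
For reference, this is roughly what enabling that flag looks like on the Cranelift side. This is a minimal sketch against the cranelift-codegen API of roughly this vintage; the helper, namespace, and index values are made up for illustration, and newer versions construct `ExternalName` differently.

```rust
use cranelift_codegen::ir::{ExtFuncData, ExternalName, FuncRef, Function, Signature};

/// Declare a callee that will live in the same final text section. Marking it
/// `colocated` lets Cranelift emit a PC-relative call (`bl` on AArch64,
/// `call rel32` on x86_64) instead of loading an absolute address first.
fn declare_colocated_callee(func: &mut Function, callee_index: u32, sig: Signature) -> FuncRef {
    let sigref = func.import_signature(sig);
    func.import_function(ExtFuncData {
        // Namespace 0 is arbitrary here; Wasmtime has its own convention.
        name: ExternalName::user(0, callee_index),
        signature: sigref,
        colocated: true,
    })
}
```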

With the `colocated` flag set to `true` this commit is also able to
move much of the relocation resolution from `wasmtime_jit::link` into
`wasmtime_cranelift::obj` at object-construction time. This frontloads
all relocation work, which means that there are actually no relocations
related to function calls in the final image, solving both of the
points above.
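
Resolving those call relocations while the object is being built then boils down to computing a PC-relative displacement once, as functions are laid out. Below is a rough sketch with made-up names and a simplified relocation shape, not the actual `wasmtime_cranelift::obj` code.

```rust
/// Resolve one intra-module call relocation while building the text section.
/// `reloc_offset` is where the 4-byte displacement lives, `from` is the
/// address the displacement is relative to (for x86_64 `call rel32`, the end
/// of the instruction), and `target_offset` is the callee's offset. Returns
/// false if the displacement doesn't fit, in which case a veneer is needed.
fn resolve_pc_rel32(
    text: &mut [u8],
    reloc_offset: usize,
    from: usize,
    target_offset: usize,
    addend: i64,
) -> bool {
    let disp = target_offset as i64 - from as i64 + addend;
    match i32::try_from(disp) {
        Ok(disp32) => {
            text[reloc_offset..reloc_offset + 4].copy_from_slice(&disp32.to_le_bytes());
            true
        }
        Err(_) => false,
    }
}
```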

The main gotcha in implementing this technique is that there are
hardware limitations on relative function calls which mean we can't
simply use them blindly. AArch64, for example, can only reach +/- 128 MB
from the `bl` instruction to its target, which means that if the
function we're calling is farther away than that we would fail to
resolve the relocation. On x86_64 the limit is +/- 2 GB, which is much
larger but still theoretically feasible to hit. Consequently the main
increase in implementation complexity lies in handling this issue.
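
Concretely, whether a direct call can be used is just a signed-distance check against the instruction's reach. This standalone sketch (not Cranelift's actual range-check code) shows the limits involved:

```rust
/// AArch64 `bl` encodes a signed 26-bit word offset (+/- 128 MiB); x86_64's
/// `call rel32` encodes a signed 32-bit byte offset (+/- 2 GiB).
fn direct_call_in_range(from: u64, to: u64, aarch64: bool) -> bool {
    let distance = to as i64 - from as i64;
    let limit: i64 = if aarch64 { 1 << 27 } else { 1 << 31 };
    (-limit..limit).contains(&distance)
}
```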

This issue is actually already present in Cranelift itself, and it is
internally one of the invariants handled by the `MachBuffer` type: when
generating a single function, relative jumps between basic blocks have
similar restrictions. This commit adds new methods to the `MachBackend`
trait and updates the implementation of `MachBuffer` to account for all
these new branches. Specifically the changes to `MachBuffer` are:

  • For AArch64 the `LabelUse::Branch26` value now supports veneers, and
    AArch64 calls use this to resolve relocations (see the sketch after
    this list).

  • The `emit_island` function has been rewritten internally to handle
    some cases which didn't come up before, such as:

    • When emitting an island the deadline is now recalculated, where
      previously it was always set infinitely far in the future. This was
      fine before because only `Branch19` supported veneers and, once
      promoted, a fixup supported no further veneers, so without multiple
      layers of promotion the lack of a new deadline didn't matter.

    • When emitting an island all pending fixups had veneers forced if
      their branch target wasn't known yet. This was generally ok for
      19-bit fixups since the only kind getting a veneer was a 19-bit
      fixup, but with mixed kinds it's a bit odd to force veneers for a
      26-bit fixup just because a nearby 19-bit fixup needed a veneer.
      Instead fixups are now re-enqueued unless they're known to be
      out-of-bounds. This may run the risk of generating more islands for
      19-bit branches but it should also reduce the number of islands for
      between-function calls.

    • Otherwise the internal logic was tweaked to ideally be a bit
      simpler, but that's a pretty subjective criterion in compilers...
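
To make the veneer machinery referenced in the first bullet above more concrete, here is a simplified model of fixup kinds and veneer promotion. This is an illustrative sketch only: the real trait in Cranelift is `MachInstLabelUse`, and the exact veneer kinds chosen here are assumptions, not the actual implementation.

```rust
/// Simplified model of branch fixup kinds (illustrative, not Cranelift's).
#[derive(Clone, Copy)]
enum FixupKind {
    /// AArch64 conditional branch: signed 19-bit word offset, +/- 1 MiB.
    Branch19,
    /// AArch64 `b`/`bl`: signed 26-bit word offset, +/- 128 MiB.
    Branch26,
    /// A 32-bit PC-relative word, +/- 2 GiB.
    PcRel32,
}

impl FixupKind {
    /// Maximum reach in bytes, in either direction.
    fn max_range(self) -> i64 {
        match self {
            FixupKind::Branch19 => 1 << 20,
            FixupKind::Branch26 => 1 << 27,
            FixupKind::PcRel32 => 1 << 31,
        }
    }

    /// If this fixup can be promoted to a veneer, the kind of fixup the
    /// veneer itself uses to reach the final target. Previously only the
    /// 19-bit kind was promotable; once the 26-bit kind is too, promotion
    /// can happen in multiple layers, which is why an island's deadline now
    /// has to be recalculated from the surviving fixups' ranges.
    fn veneer_kind(self) -> Option<FixupKind> {
        match self {
            FixupKind::Branch19 => Some(FixupKind::Branch26),
            FixupKind::Branch26 => Some(FixupKind::PcRel32),
            FixupKind::PcRel32 => None,
        }
    }
}
```

In this model, an out-of-range fixup of a promotable kind is redirected to a small trampoline emitted in an island, and the original instruction is patched to target that trampoline instead.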

I've added some simple testing of this for now. A synthetic compiler
option was created that simply adds zero padding between functions, and
the test cases exercise various forms of calls that at least need
veneers. A test is also included for x86_64, but it is unfortunately
pretty slow because it requires generating 2 GB of output. I'm hoping
that's not too bad for now, but we can disable the test if it's
prohibitive; otherwise the necessary portions are commented to make
sure the ignored test is run if these parts of the code change.

The end result of this commit is that for a large module I'm working
with the number of relocations dropped to zero, meaning that nothing
actually needs to be done to the text section when it's loaded into
memory (yay!). I haven't run final benchmarks yet, but this is the
last remaining source of significant slowdown when loading modules,
after I land a number of other PRs, both active ones and ones I only
have locally for now.

cc #3230

@alexcrichton
Member Author

This is a second take on #3254, this time with more MachBuffer

@github-actions github-actions bot added the cranelift, cranelift:area:machinst, cranelift:area:aarch64, cranelift:area:x64, and fuzzing labels Sep 1, 2021
@github-actions

github-actions bot commented Sep 1, 2021

Subscribe to Label Action

cc @fitzgen

This issue or pull request has been labeled: "cranelift", "cranelift:area:aarch64", "cranelift:area:machinst", "cranelift:area:x64", "fuzzing"

Thus the following users have been cc'd because of the following labels:

  • fitzgen: fuzzing

To subscribe or unsubscribe from this label, edit the .github/subscribe-to-label.json configuration file.


addend: Addend,
) {
let name = name.clone();
// FIXME: This should use `I::LabelUse::from_reloc` to optionally
Contributor


Crazy idea, but would it be possible for a single MachBuffer to somehow be used across all functions in a module? This would reduce the ability to compile functions in parallel though.

Contributor


Compiling functions in parallel is an important optimization we rely on in production

Member


Yeah, fundamentally it makes sense to have a serialized process only at the very end, which is what this separate "text builder" use of MachBuffer is. Conceptually it's like a linker; I think the stage separation is reasonable.
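
Sketching that staging, with types, names, and the relocation shape made up for illustration rather than Wasmtime's actual API: per-function compilation stays fully parallel, and only the final concatenation into one text section, where call displacements are resolved, runs serially.

```rust
use rayon::prelude::*;

/// Hypothetical output of compiling one function in isolation: machine code
/// plus the intra-module call relocations left for the final "link" stage.
struct CompiledFunc {
    body: Vec<u8>,
    call_relocs: Vec<(usize /* offset in body */, u32 /* callee index */)>,
}

fn build_text_section(
    inputs: &[Vec<u8>],
    compile_one: impl Fn(&[u8]) -> CompiledFunc + Sync,
) -> Vec<u8> {
    // Stage 1: compile every function in parallel; nothing here depends on
    // where other functions will eventually be placed.
    let compiled: Vec<CompiledFunc> = inputs.par_iter().map(|f| compile_one(f)).collect();

    // Stage 2: a single serial pass appends each body, records final
    // offsets, and then patches the relative call displacements. (The real
    // text builder also emits veneers when a target is out of range, which
    // is omitted here.)
    let mut text = Vec::new();
    let mut offsets = Vec::with_capacity(compiled.len());
    for func in &compiled {
        offsets.push(text.len());
        text.extend_from_slice(&func.body);
    }
    for (i, func) in compiled.iter().enumerate() {
        for &(offset, callee) in &func.call_relocs {
            let site = offsets[i] + offset;
            let disp = (offsets[callee as usize] as i64 - site as i64) as i32;
            text[site..site + 4].copy_from_slice(&disp.to_le_bytes());
        }
    }
    text
}
```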

Member

@cfallin cfallin left a comment


Congrats, you've written a linker! 🎉

This is great; I'm happy that we were able to consolidate functionality between intra- and inter-function cases, so thanks for the patience here! Just a few comments below but nothing significant.

}

// Finish up the text section now that we're done adding functions.
const CODE_SECTION_ALIGNMENT: u64 = 0x1000;
Member


Would it make sense to have some notion of page alignment per platform, or maybe just take the least-common-multiple of those for platforms we support? I'm thinking specifically of macOS/aarch64, where mmap'ing part of a file with only 4K alignment might be problematic with the 16K (?) pages by default.

Member Author


Indeed! This PR is just moving this logic around as opposed to adding it (it's actually been around for quite some time, although it's more important to get right now that we're directly mmap'ing cache artifacts), so I'll leave this as-is for now, but I'll open an issue. The issue with M1 macs was actually just fixed in #3274 as well, so it definitely needs fixing soon!
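
For reference, the per-platform alignment idea from this thread could look something like the following standalone sketch (not the code this PR touches). Since page sizes are powers of two, the least common multiple is just the maximum.

```rust
/// Pick a text-section alignment that satisfies every platform we expect to
/// mmap precompiled modules on.
fn code_section_alignment() -> u64 {
    const X86_64_PAGE: u64 = 0x1000;         // 4 KiB
    const MACOS_AARCH64_PAGE: u64 = 0x4000;  // 16 KiB on Apple Silicon
    const LINUX_AARCH64_PAGE: u64 = 0x10000; // 64 KiB kernels exist on aarch64
    X86_64_PAGE.max(MACOS_AARCH64_PAGE).max(LINUX_AARCH64_PAGE)
}
```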

@alexcrichton alexcrichton merged commit 1532516 into bytecodealliance:main Sep 1, 2021
@alexcrichton alexcrichton deleted the call-relative branch September 1, 2021 18:27
uweigand added a commit to uweigand/wasmtime that referenced this pull request Sep 11, 2021
- Add relocation handling needed after PR bytecodealliance#3275
- Fix incorrect handling of signed constants detected by PR bytecodealliance#3056 test
- Disable fuzzing tests that require pre-built v8 binaries
- Disable cranelift test that depends on i128
- Temporarily disable memory64 tests
@uweigand uweigand mentioned this pull request Sep 11, 2021
uweigand added a commit to uweigand/wasmtime that referenced this pull request Sep 13, 2021
- Add relocation handling needed after PR bytecodealliance#3275
- Fix incorrect handling of signed constants detected by PR bytecodealliance#3056 test
- Fix LabelUse max pos/neg ranges; fix overflow in buffers.rs
- Disable fuzzing tests that require pre-built v8 binaries
- Disable cranelift test that depends on i128
- Temporarily disable memory64 tests
alexcrichton pushed a commit that referenced this pull request Sep 20, 2021
- Add relocation handling needed after PR #3275
- Fix incorrect handling of signed constants detected by PR #3056 test
- Fix LabelUse max pos/neg ranges; fix overflow in buffers.rs
- Disable fuzzing tests that require pre-built v8 binaries
- Disable cranelift test that depends on i128
- Temporarily disable memory64 tests