
CHERI-valid pointer stuffing produces worse codegen than an implementation which does huge wrapping offsets #96152

Closed

Description

I came across this in tokio-rs/bytes#542, and I'm raising it more broadly because it seems like a portability hazard. (godbolt demo)

In a world where we can forget about provenance, one could set the lowest bit in a pointer with this:

pub fn old_style(a: *mut u8) -> *mut u8 {
    // Round-trips through usize, discarding any provenance information.
    (a as usize | 1) as *mut u8
}

But of course we want to have a provenance model, not least because we want to support architectures where pointer provenance is checked at runtime. So to stay compatible with CHERI, one might instead implement this function like so:

pub fn cheri_compat(a: *mut u8) -> *mut u8 {
    let old = a as usize;
    let new = old | 1;
    // Apply the change as a wrapping offset from `a`, so the result keeps
    // `a`'s provenance (and, on CHERI, its capability metadata).
    let diff = new.wrapping_sub(old);
    a.wrapping_add(diff)
}

But that version is slower: instead of just mov + or, it gets compiled to mov + not + and + and, which is very silly. We can get the original codegen back by writing it in a style that is almost certainly invalid on CHERI:

pub fn fast(a: *mut u8) -> *mut u8 {
    let old = a as usize;
    let new = old | 1;
    // Offsetting `a` all the way down to address 0 and back up is what a
    // CHERI capability almost certainly won't survive.
    a.wrapping_sub(old).wrapping_add(new)
}
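
For reference, here is a quick sanity check (not from the original report) showing that, assuming the three functions above are in scope, all of them compute the same address on an ordinary flat-memory target:

fn main() {
    let mut x: u8 = 0;
    let p = &mut x as *mut u8;
    // Each variant should return a pointer whose address is p's address
    // with the low bit set.
    let a = old_style(p) as usize;
    let b = cheri_compat(p) as usize;
    let c = fast(p) as usize;
    assert_eq!(a, b);
    assert_eq!(b, c);
    assert_eq!(a, (p as usize) | 1);
}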

It doesn't make sense to me that users should have to choose between staying compatible with CHERI (and avoiding ptr-int-ptr casts) and getting good codegen, all while keeping a careful eye out for regressions.
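
As an aside, the strict-provenance APIs (nightly-only under #![feature(strict_provenance)] at the time this issue was filed) let one express the intent directly, but map_addr is defined in terms of essentially the same wrapping-offset pattern as cheri_compat, so one would expect it to produce the same codegen:

pub fn strict_provenance_style(a: *mut u8) -> *mut u8 {
    // Replaces the address while keeping a's provenance; internally this
    // boils down to a wrapping byte offset of (new address - old address).
    a.map_addr(|addr| addr | 1)
}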


Metadata

Labels

A-LLVM (Area: Code generation parts specific to LLVM; both correctness bugs and optimization-related issues)
A-strict-provenance (Area: Strict provenance for raw pointers)
I-slow (Issue: Problems and improvements with respect to performance of generated code)
