
Stop forcing an alloc just because something looked at the discriminant #137503

@scottmcm

Description


The Rust code (https://rust.godbolt.org/z/5Tvx8Pfqj)

pub fn demo(x: Result<i32, u32>) -> isize {
    std::intrinsics::discriminant_value(&x)
}

gives MIR

fn demo(_1: Result<i32, u32>) -> isize {
    debug x => _1;
    let mut _0: isize;

    bb0: {
        _0 = discriminant(_1);
        return;
    }
}

for which the LLVM IR we emit looks like

define noundef i64 @demo(i32 noundef range(i32 0, 2) %0, i32 noundef %1) unnamed_addr {
start:
  %x = alloca [8 x i8], align 4
  store i32 %0, ptr %x, align 4
  %2 = getelementptr inbounds i8, ptr %x, i64 4
  store i32 %1, ptr %2, align 4
  %3 = load i32, ptr %x, align 4, !range !3, !noundef !4
  %_0 = zext i32 %3 to i64
  ret i64 %_0
}

Now, LLVM's optimizer is certainly capable of cleaning that up, but it'd be nice if it never needed to in the first place -- especially in debug builds, where the optimizer doesn't run. Since the tag is already in %0 as an SSA value, we could also save a bunch of work emitting all of that.
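For reference, the direct intrinsic call above is just the minimal reproduction: as far as I can tell, the same discriminant(_1) read shows up in MIR from ordinary safe code too, for example (function names here are purely illustrative):

// Both of these go through the same MIR discriminant read:
// std::mem::discriminant is a thin wrapper around the discriminant_value
// intrinsic, and matching on an enum reads the discriminant before SwitchInt.

pub fn via_mem(x: Result<i32, u32>) -> std::mem::Discriminant<Result<i32, u32>> {
    std::mem::discriminant(&x)
}

pub fn via_match(x: Result<i32, u32>) -> bool {
    matches!(x, Ok(_))
}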

I think this is caused by Rvalue::Discriminant reading its place as a NonMutatingUseContext::Inspect, which ends up here:

| PlaceContext::NonMutatingUse(
    NonMutatingUseContext::Inspect
    | NonMutatingUseContext::SharedBorrow
    | NonMutatingUseContext::FakeBorrow
    | NonMutatingUseContext::RawBorrow
    | NonMutatingUseContext::Projection,
) => {
    self.locals[local] = LocalKind::Memory;
}

And thus it keeps the variable from being SSA even if it otherwise could be.

I don't know what the right fix is. Maybe all the Inspects are fine with SSA, maybe discriminant should be a different kind of use, or maybe the analysis should special-case Rvalue::Discriminant.
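As a rough illustration of that last option (purely a sketch, not actual rustc code -- and whether Copy or some new dedicated context is the right choice is exactly the open question), the analyzer could override visit_rvalue and route the discriminant's place through a context that doesn't force Memory:

// Hypothetical sketch, imagined inside the existing local analyzer visitor in
// rustc_codegen_ssa::mir::analyze (untested).
fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
    if let mir::Rvalue::Discriminant(place) = rvalue {
        // Visit the place as a by-value read instead of an Inspect, so a
        // plain local can stay SSA-eligible.
        self.visit_place(
            place,
            PlaceContext::NonMutatingUse(NonMutatingUseContext::Copy),
            location,
        );
        return;
    }
    self.super_rvalue(rvalue, location);
}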

Metadata

Labels: A-codegen (Area: Code generation), T-compiler (Relevant to the compiler team, which will review and decide on the PR/issue)
