Possible debug info bug with matrix bitpieces #7044

@baldurk

Description

The debug info uses bitpieces to map from matrices to their elements, but it uses a column-major convention that conflicts with how the matrix template members are defined.

The first convention is on the members (_11, _12, etc.), which are laid out row-major: the first-row first-column element has bit offset 0, the first-row second-column element has bit offset 32, and so on up to the last element at bit offset 480. This is what I'd expect from HLSL conventions.

The bitpieces themselves, when mapping individual scalar SSA values, seem to use a column-major convention, so the first-column second-row element is mapped to bit offset 32.
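
To make the two conventions concrete, here's a minimal standalone sketch (plain C++, not DXC code; row/column indices are zero-based here, unlike the one-based _11 names) of the offset formula each side appears to use for a float4x4 with 32-bit elements:

  #include <cstdio>

  constexpr unsigned kRows = 4, kCols = 4, kElemBits = 32;

  // Row-major: consecutive offsets walk along a row (the member convention).
  constexpr unsigned RowMajorBitOffset(unsigned row, unsigned col) {
    return (row * kCols + col) * kElemBits;
  }

  // Column-major: consecutive offsets walk down a column (the bitpiece convention).
  constexpr unsigned ColMajorBitOffset(unsigned row, unsigned col) {
    return (col * kRows + row) * kElemBits;
  }

  int main() {
    // _12 (first row, second column) and _21 (second row, first column)
    // swap offsets 32 and 128 between the two conventions.
    std::printf("_12: row-major %u, column-major %u\n",
                RowMajorBitOffset(0, 1), ColMajorBitOffset(0, 1));
    std::printf("_21: row-major %u, column-major %u\n",
                RowMajorBitOffset(1, 0), ColMajorBitOffset(1, 0));
  }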

Steps to Reproduce
I initially saw this in the ModelViewer sample in DirectX-Graphics-Samples, if you want a real-world test case, but I've simplified it down to a synthetic test compiled with -Zi -Od: https://godbolt.org/z/K8jK8fj6j

The main function is simple: it just launders a vector through a matrix to test indexing:

float4 main(float3 test : TEST) : SV_Target0
{
    float4x4 mymat = float4x4(
        0, 4, 0, test.x,
        5, 0, 0, test.y,
        0, 0, 0, test.z,
        0, 0, 0, 1
        );

    float4 ret = 0.0f.xxxx;

    ret.x = mymat._14; // should be test.x
    ret.y = mymat._24; // should be test.y
    ret.z = mymat._34; // should be test.z

    return ret;
}

The actual executable part is what I'd expect (elided):

  %1 = call float @dx.op.loadInput.f32(i32 4, i32 0, i32 0, i8 0, i32 undef), !dbg !73 ; line:3 col:20  ; LoadInput(inputSigId,rowIndex,colIndex,gsVertexAxis)
  %2 = call float @dx.op.loadInput.f32(i32 4, i32 0, i32 0, i8 1, i32 undef), !dbg !73 ; line:3 col:20  ; LoadInput(inputSigId,rowIndex,colIndex,gsVertexAxis)
  %3 = call float @dx.op.loadInput.f32(i32 4, i32 0, i32 0, i8 2, i32 undef), !dbg !73 ; line:3 col:20  ; LoadInput(inputSigId,rowIndex,colIndex,gsVertexAxis)
  call void @dx.op.storeOutput.f32(i32 5, i32 0, i32 0, i8 0, float %1), !dbg !99 ; line:18 col:5  ; StoreOutput(outputSigId,rowIndex,colIndex,value)
  call void @dx.op.storeOutput.f32(i32 5, i32 0, i32 0, i8 1, float %2), !dbg !99 ; line:18 col:5  ; StoreOutput(outputSigId,rowIndex,colIndex,value)
  call void @dx.op.storeOutput.f32(i32 5, i32 0, i32 0, i8 2, float %3), !dbg !99 ; line:18 col:5  ; StoreOutput(outputSigId,rowIndex,colIndex,value)
  call void @dx.op.storeOutput.f32(i32 5, i32 0, i32 0, i8 3, float 0.000000e+00), !dbg !99 ; line:18 col:5  ; StoreOutput(outputSigId,rowIndex,colIndex,value)
  ret void, !dbg !99 ; line:18 col:5

So the _14, _24, _34 indices correspond to the first, second, and third rows of the fourth column.

Looking at the definition of the members, you can see that they are bit-packed row-major:

!11 = !DIDerivedType(tag: DW_TAG_member, name: "_14", scope: !5, file: !1, line: 5, baseType: !8, size: 32, align: 32, offset: 96, flags: DIFlagPublic)
!15 = !DIDerivedType(tag: DW_TAG_member, name: "_24", scope: !5, file: !1, line: 5, baseType: !8, size: 32, align: 32, offset: 224, flags: DIFlagPublic)
!19 = !DIDerivedType(tag: DW_TAG_member, name: "_34", scope: !5, file: !1, line: 5, baseType: !8, size: 32, align: 32, offset: 352, flags: DIFlagPublic)
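
Dividing by the 32-bit element size: offset 96 is element 3, which row-major decodes to row 0, column 3 (_14); offset 224 is element 7, row 1, column 3 (_24); and offset 352 is element 11, row 2, column 3 (_34). The member offsets are consistently row-major.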

However, look at the way the test elements are mapped by the bitpieces:

  call void @llvm.dbg.value(metadata float %1, i64 0, metadata !78, metadata !88), !dbg !80 ; var:"mymat" !DIExpression(DW_OP_bit_piece, 384, 32) func:"main"
  call void @llvm.dbg.value(metadata float %2, i64 0, metadata !78, metadata !89), !dbg !80 ; var:"mymat" !DIExpression(DW_OP_bit_piece, 416, 32) func:"main"
  call void @llvm.dbg.value(metadata float %3, i64 0, metadata !78, metadata !90), !dbg !80 ; var:"mymat" !DIExpression(DW_OP_bit_piece, 448, 32) func:"main"

These are contiguous, covering the next-to-last elements of the matrix; if the layout were row-major, that would make them the first three elements of the final row (_41, _42, _43) rather than _14, _24, _34.
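
Working through the arithmetic: bit offset 384 is element 12, which row-major decodes to row 3, column 0 (_41), but column-major decodes to column 3, row 0 (_14), matching the _14 store in the executable code above. The same holds for 416 (_42 vs _24) and 448 (_43 vs _34).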

I also added some constants so you can see the first-row second-column value of 4 and the second-row first-column value of 5, which are laid out column-wise:

  call void @llvm.dbg.value(metadata float 4.000000e+00, i64 0, metadata !78, metadata !79), !dbg !80 ; var:"mymat" !DIExpression(DW_OP_bit_piece, 128, 32) func:"main"
  call void @llvm.dbg.value(metadata float 5.000000e+00, i64 0, metadata !78, metadata !76), !dbg !80 ; var:"mymat" !DIExpression(DW_OP_bit_piece, 32, 32) func:"main"

Actual Behavior
The bitpieces identifying matrix elements are calculated column-major while the members are laid out row-major. Either convention would work on its own, but the two are currently inconsistent, and the bitpieces are not what I'd expect given HLSL conventions.

The result of this is that any program parsing the debug info, whether it hardcodes a row-major convention or does something smarter and infers the layout from the _11, _12, etc. member offsets, will end up transposing the matrix.
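
As an illustration of that failure mode, here's a small sketch (hypothetical consumer code; the float values are made-up placeholders standing in for test.x/y/z) that reads the three bitpieces above under the row-major layout the member offsets imply:

  #include <cstdio>

  struct DebugValue { unsigned bitOffset; float value; };

  int main() {
    // The three dbg.value bitpieces from the dump above, with placeholder values.
    DebugValue pieces[] = {{384, 1.0f}, {416, 2.0f}, {448, 3.0f}};
    float mat[4][4] = {};
    for (const DebugValue &p : pieces) {
      unsigned elem = p.bitOffset / 32;
      // Row-major interpretation, as the _11/_12 member offsets suggest:
      mat[elem / 4][elem % 4] = p.value;
    }
    // The values land in the fourth row (_41.._43) instead of the fourth
    // column (_14.._34): the reconstructed matrix is transposed.
    for (int r = 0; r < 4; ++r)
      std::printf("%g %g %g %g\n", mat[r][0], mat[r][1], mat[r][2], mat[r][3]);
  }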

Environment

  • DXC version: trunk on Compiler Explorer
  • Host Operating System: Compiler Explorer / Windows 10

Labels: bug, debug info, matrix-bug
Status: Triaged