minor fma cleanup #57041

Merged
merged 1 commit into from Jan 21, 2025
14 changes: 6 additions & 8 deletions base/floatfuncs.jl
@@ -276,6 +276,9 @@ significantly more expensive than `x*y+z`. `fma` is used to improve accuracy in
algorithms. See [`muladd`](@ref).
"""
function fma end
function fma_emulated(a::Float16, b::Float16, c::Float16)
    Float16(muladd(Float32(a), Float32(b), Float32(c))) #don't use fma if the hardware doesn't have it.
@giordano (Member), Jan 14, 2025:

I think this can be simplified to

Suggested change:
-    Float16(muladd(Float32(a), Float32(b), Float32(c))) #don't use fma if the hardware doesn't have it.
+    muladd(a, b, c) #don't use fma if the hardware doesn't have it.

LLVM would automatically do the demotion to float as necessary nowadays.

@giordano (Member), Jan 14, 2025:

Ironically, on aarch64 with the fp16 extension muladd is better than fma on Float16 because it doesn't force the Float16 -> Float32 -> Float16 dance:

julia> code_native(muladd, NTuple{3,Float16}; debuginfo=:none)
        .text
        .file   "muladd"
        .globl  julia_muladd_1256               // -- Begin function julia_muladd_1256
        .p2align        4
        .type   julia_muladd_1256,@function
julia_muladd_1256:                      // @julia_muladd_1256
; Function Signature: muladd(Float16, Float16, Float16)
// %bb.0:                               // %top
        //DEBUG_VALUE: muladd:x <- $h0
        //DEBUG_VALUE: muladd:x <- $h0
        //DEBUG_VALUE: muladd:y <- $h1
        //DEBUG_VALUE: muladd:y <- $h1
        //DEBUG_VALUE: muladd:z <- $h2
        //DEBUG_VALUE: muladd:z <- $h2
        stp     x29, x30, [sp, #-16]!           // 16-byte Folded Spill
        mov     x29, sp
        fmadd   h0, h0, h1, h2
        ldp     x29, x30, [sp], #16             // 16-byte Folded Reload
        ret
.Lfunc_end0:
        .size   julia_muladd_1256, .Lfunc_end0-julia_muladd_1256
                                        // -- End function
        .type   ".L+Core.Float16#1258",@object  // @"+Core.Float16#1258"
        .section        .rodata,"a",@progbits
        .p2align        3, 0x0
".L+Core.Float16#1258":
        .xword  ".L+Core.Float16#1258.jit"
        .size   ".L+Core.Float16#1258", 8

.set ".L+Core.Float16#1258.jit", 281472349230944
        .size   ".L+Core.Float16#1258.jit", 8
        .section        ".note.GNU-stack","",@progbits
julia> code_native(fma, NTuple{3,Float16}; debuginfo=:none)
        .text
        .file   "fma"
        .globl  julia_fma_1259                  // -- Begin function julia_fma_1259
        .p2align        4
        .type   julia_fma_1259,@function
julia_fma_1259:                         // @julia_fma_1259
; Function Signature: fma(Float16, Float16, Float16)
// %bb.0:                               // %top
        //DEBUG_VALUE: fma:a <- $h0
        //DEBUG_VALUE: fma:a <- $h0
        //DEBUG_VALUE: fma:b <- $h1
        //DEBUG_VALUE: fma:b <- $h1
        //DEBUG_VALUE: fma:c <- $h2
        //DEBUG_VALUE: fma:c <- $h2
        stp     x29, x30, [sp, #-16]!           // 16-byte Folded Spill
        fcvt    s0, h0
        fcvt    s1, h1
        mov     x29, sp
        fcvt    s2, h2
        fmadd   s0, s0, s1, s2
        fcvt    h0, s0
        ldp     x29, x30, [sp], #16             // 16-byte Folded Reload
        ret
.Lfunc_end0:
        .size   julia_fma_1259, .Lfunc_end0-julia_fma_1259
                                        // -- End function
        .type   ".L+Core.Float16#1261",@object  // @"+Core.Float16#1261"
        .section        .rodata,"a",@progbits
        .p2align        3, 0x0
".L+Core.Float16#1261":
        .xword  ".L+Core.Float16#1261.jit"
        .size   ".L+Core.Float16#1261", 8

.set ".L+Core.Float16#1261.jit", 281472349230944
        .size   ".L+Core.Float16#1261.jit", 8
        .section        ".note.GNU-stack","",@progbits

Member Author:

No. muladd doesn't guarantee the accuracy that fma requires.
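(A hedged aside, not part of the original thread: a minimal Float64 sketch of the contract difference. fma must round exactly once, while muladd may legally lower to either a fused or an unfused multiply-add, so only fma guarantees the extra low-order bits.)

julia> x = 1.0 + 2.0^-27;   # x*x carries low-order bits that a single rounding preserves

julia> fma(x, x, -1.0) == 2.0^-26 + 2.0^-54   # one rounding keeps the 2^-54 tail
true

julia> x*x - 1.0 == 2.0^-26                   # two roundings lose it
true

julia> # muladd(x, x, -1.0) may match either result, depending on whether the target fuses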

Member:

I will also point out that pure f16 fma is not super useful as an operation. Most accelerators do the fp16 multiply with an f32 accumulator (and then potentially round back to f16 at the end).
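(A hedged illustration, not from the thread: the mixed-precision pattern being described, written as plain Julia. The names a, b, and acc are just for the example.)

julia> a, b = Float16(0.5), Float16(3.0);

julia> acc = 0.0f0;                                  # Float32 accumulator

julia> acc = muladd(Float32(a), Float32(b), acc)     # fp16 inputs, fp32 multiply-accumulate
1.5f0

julia> Float16(acc)                                  # optionally round back to fp16 at the end
Float16(1.5)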

Member Author:

sure, but that's not what Base.fma does.

Member:

That’s not really true of fp16: on aarch64 it’s a true type which supports everything (with twice the throughput on SIMD). bf16 is that, though.

end
function fma_emulated(a::Float32, b::Float32, c::Float32)::Float32
    ab = Float64(a) * b
    res = ab+c
@@ -348,19 +351,14 @@ function fma_emulated(a::Float64, b::Float64,c::Float64)
    s = (abs(abhi) > abs(c)) ? (abhi-r+c+ablo) : (c-r+abhi+ablo)
    return r+s
end
fma_llvm(x::Float32, y::Float32, z::Float32) = fma_float(x, y, z)
fma_llvm(x::Float64, y::Float64, z::Float64) = fma_float(x, y, z)

# Disable LLVM's fma if it is incorrect, e.g. because LLVM falls back
# onto a broken system libm; if so, use a software emulated fma
@assume_effects :consistent fma(x::Float32, y::Float32, z::Float32) = Core.Intrinsics.have_fma(Float32) ? fma_llvm(x,y,z) : fma_emulated(x,y,z)
@assume_effects :consistent fma(x::Float64, y::Float64, z::Float64) = Core.Intrinsics.have_fma(Float64) ? fma_llvm(x,y,z) : fma_emulated(x,y,z)

function fma(a::Float16, b::Float16, c::Float16)
    Float16(muladd(Float32(a), Float32(b), Float32(c))) #don't use fma if the hardware doesn't have it.
@assume_effects :consistent function fma(x::T, y::T, z::T) where {T<:IEEEFloat}
    Core.Intrinsics.have_fma(T) ? fma_float(x,y,z) : fma_emulated(x,y,z)
Member:

My understanding is that Core.Intrinsics.have_fma(Float16) is always false at the moment because

static bool have_fma(Function &intr, Function &caller, const Triple &TT) JL_NOTSAFEPOINT {
    auto unconditional = always_have_fma(intr, TT);
    if (unconditional)
        return *unconditional;
    auto intr_name = intr.getName();
    auto typ = intr_name.substr(strlen("julia.cpu.have_fma."));
    Attribute FSAttr = caller.getFnAttribute("target-features");
    StringRef FS =
        FSAttr.isValid() ? FSAttr.getValueAsString() : jl_ExecutionEngine->getTargetFeatureString();
    SmallVector<StringRef, 128> Features;
    FS.split(Features, ',');
    for (StringRef Feature : Features)
        if (TT.isARM()) {
            if (Feature == "+vfp4")
                return typ == "f32" || typ == "f64";
            else if (Feature == "+vfp4sp")
                return typ == "f32";
        } else if (TT.isX86()) {
            if (Feature == "+fma" || Feature == "+fma4")
                return typ == "f32" || typ == "f64";
        }
    return false;
}
only checks Float32/Float64 extensions. @gbaraldi is that correct?
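(A hedged illustration, not from the thread: what that check implies when queried from the REPL. The output is an assumption consistent with the comment above, not a verified transcript from any particular machine.)

julia> Core.Intrinsics.have_fma(Float16)   # expected false, since only f32/f64 feature checks exist
false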

Member:

Actually, this may work on riscv64 with #57043, but I'm still not entirely sure about what's going on there.

Member Author:

Sure, at which point this is NFC (no functional change) for such architectures, but that can be fixed in a separate PR.

end

# This is necessary at least on 32-bit Intel Linux, since fma_llvm may
# This is necessary at least on 32-bit Intel Linux, since fma_float may
# have called glibc, and some broken glibc fma implementations don't
# properly restore the rounding mode
Rounding.setrounding_raw(Float32, Rounding.JL_FE_TONEAREST)
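(A hedged usage sketch, not part of the PR: after this change the single IEEEFloat method shown above also covers Float16, falling back to fma_emulated when have_fma reports false for the type. The values are chosen to be exact in Float16.)

julia> fma(Float16(1.5), Float16(2.0), Float16(0.25))   # dispatches to the unified IEEEFloat method
Float16(3.25)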