[IR][AsmParser] Revamp how floating-point literals work in LLVM IR. #121838

Open. Wants to merge 4 commits into main.

Conversation

jcranmer-intel (Contributor)

This adds support for the following kinds of formats:

  • Hexadecimal literals like 0x1.fp13
  • Special values +inf/-inf, +qnan/-qnan
  • NaN values with payloads like +nan(0x1)

Additionally, the floating-point hexadecimal format that records the exact bit pattern no longer requires the 0xL, 0xK, or similar prefix code identifying the floating-point type. That format is removed from the documentation but is still accepted by the parser as a legacy format.
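
For illustration, here is a minimal IR sketch of the literal forms described above. Only the literal syntax comes from this patch; the surrounding instructions and the specific constant values are invented for the example:

  ; hexadecimal significand with binary exponent: 1.9375 x 2^13 = 15872.0
  %a = fadd float 0x1.fp13, 1.0
  ; signed infinities and quiet NaNs
  %b = fadd double +inf, -inf
  %c = fadd double +qnan, +nan(0x1)   ; quiet NaN carrying payload 0x1
  ; exact bit-pattern form: f0x followed by the type's full bit pattern
  %d = fadd half f0x3C00, f0x0000     ; 1.0 and +0.0 as half bit patterns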

llvmbot (Member) commented Jan 6, 2025

@llvm/pr-subscribers-backend-directx
@llvm/pr-subscribers-llvm-transforms
@llvm/pr-subscribers-llvm-analysis
@llvm/pr-subscribers-llvm-globalisel
@llvm/pr-subscribers-backend-loongarch
@llvm/pr-subscribers-debuginfo

@llvm/pr-subscribers-backend-hexagon

Author: Joshua Cranmer (jcranmer-intel)

Changes: as summarized in the PR description above.


Patch is 1.28 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/121838.diff

532 Files Affected:

  • (modified) clang/test/C/C11/n1396.c (+20-20)
  • (modified) clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c (+10-10)
  • (modified) clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c (+5-5)
  • (modified) clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c (+12-12)
  • (modified) clang/test/CodeGen/PowerPC/ppc64-complex-parms.c (+4-4)
  • (modified) clang/test/CodeGen/RISCV/riscv64-vararg.c (+3-3)
  • (modified) clang/test/CodeGen/SystemZ/atomic_is_lock_free.c (+1-1)
  • (modified) clang/test/CodeGen/X86/Float16-arithmetic.c (+1-1)
  • (modified) clang/test/CodeGen/X86/Float16-complex.c (+29-29)
  • (modified) clang/test/CodeGen/X86/avx512fp16-builtins.c (+10-10)
  • (modified) clang/test/CodeGen/X86/avx512vlfp16-builtins.c (+11-11)
  • (modified) clang/test/CodeGen/X86/long-double-config-size.c (+2-2)
  • (modified) clang/test/CodeGen/X86/x86-atomic-long_double.c (+20-20)
  • (modified) clang/test/CodeGen/X86/x86_64-longdouble.c (+4-4)
  • (modified) clang/test/CodeGen/atomic.c (+2-2)
  • (modified) clang/test/CodeGen/builtin-complex.c (+2-2)
  • (modified) clang/test/CodeGen/builtin_Float16.c (+4-4)
  • (modified) clang/test/CodeGen/builtins-elementwise-math.c (+1-1)
  • (modified) clang/test/CodeGen/builtins-nvptx.c (+8-8)
  • (modified) clang/test/CodeGen/builtins.c (+9-9)
  • (modified) clang/test/CodeGen/catch-undef-behavior.c (+2-2)
  • (modified) clang/test/CodeGen/const-init.c (+1-1)
  • (modified) clang/test/CodeGen/fp16-ops-strictfp.c (+7-7)
  • (modified) clang/test/CodeGen/fp16-ops.c (+3-3)
  • (modified) clang/test/CodeGen/isfpclass.c (+1-1)
  • (modified) clang/test/CodeGen/math-builtins-long.c (+8-8)
  • (modified) clang/test/CodeGen/mingw-long-double.c (+4-4)
  • (modified) clang/test/CodeGen/spir-half-type.cpp (+20-20)
  • (modified) clang/test/CodeGenCUDA/types.cu (+1-1)
  • (modified) clang/test/CodeGenCXX/auto-var-init.cpp (+7-7)
  • (modified) clang/test/CodeGenCXX/cxx11-user-defined-literal.cpp (+1-1)
  • (modified) clang/test/CodeGenCXX/float128-declarations.cpp (+24-24)
  • (modified) clang/test/CodeGenCXX/float16-declarations.cpp (+16-16)
  • (modified) clang/test/CodeGenCXX/ibm128-declarations.cpp (+1-1)
  • (modified) clang/test/CodeGenHLSL/builtins/rcp.hlsl (+4-4)
  • (modified) clang/test/CodeGenOpenCL/amdgpu-alignment.cl (+4-4)
  • (modified) clang/test/CodeGenOpenCL/half.cl (+4-4)
  • (modified) clang/test/Frontend/fixed_point_conversions_half.c (+9-9)
  • (modified) clang/test/Headers/__clang_hip_math_deprecated.hip (+2-2)
  • (modified) clang/test/OpenMP/atomic_capture_codegen.cpp (+1-1)
  • (modified) clang/test/OpenMP/atomic_update_codegen.cpp (+1-1)
  • (modified) llvm/docs/LangRef.rst (+37-30)
  • (modified) llvm/include/llvm/AsmParser/LLLexer.h (+1)
  • (modified) llvm/include/llvm/AsmParser/LLToken.h (+2)
  • (modified) llvm/lib/AsmParser/LLLexer.cpp (+159-37)
  • (modified) llvm/lib/AsmParser/LLParser.cpp (+32-2)
  • (modified) llvm/lib/CodeGen/MIRParser/MILexer.cpp (+18)
  • (modified) llvm/lib/IR/AsmWriter.cpp (+4-9)
  • (modified) llvm/lib/Support/APFloat.cpp (+1-1)
  • (modified) llvm/test/Analysis/CostModel/AArch64/arith-fp.ll (+3-3)
  • (modified) llvm/test/Analysis/CostModel/AArch64/insert-extract.ll (+4-4)
  • (modified) llvm/test/Analysis/CostModel/AArch64/reduce-fadd.ll (+48-48)
  • (modified) llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll (+80-80)
  • (modified) llvm/test/Analysis/CostModel/ARM/divrem.ll (+40-40)
  • (modified) llvm/test/Analysis/CostModel/ARM/reduce-fp.ll (+48-48)
  • (modified) llvm/test/Analysis/CostModel/RISCV/phi-const.ll (+1-1)
  • (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll (+140-140)
  • (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll (+126-126)
  • (modified) llvm/test/Analysis/CostModel/RISCV/rvv-phi-const.ll (+3-3)
  • (modified) llvm/test/Analysis/Lint/scalable.ll (+1-1)
  • (modified) llvm/test/Assembler/bfloat.ll (+13-13)
  • (modified) llvm/test/Assembler/constant-splat.ll (+10-10)
  • (added) llvm/test/Assembler/float-literals.ll (+40)
  • (modified) llvm/test/Assembler/half-constprop.ll (+3-3)
  • (modified) llvm/test/Assembler/half-conv.ll (+1-1)
  • (modified) llvm/test/Assembler/invalid-fp80hex.ll (+1-1)
  • (modified) llvm/test/Assembler/short-hexpair.ll (+1-1)
  • (modified) llvm/test/Assembler/unnamed.ll (+1-1)
  • (modified) llvm/test/Bitcode/compatibility-3.8.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-3.9.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-4.0.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-5.0.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-6.0.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility.ll (+2-2)
  • (modified) llvm/test/Bitcode/constant-splat.ll (+10-10)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fabs.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-flog2.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fminimum-fmaximum.mir (+8-8)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fminnum-fmaxnum.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fneg.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fptrunc.mir (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fsqrt.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/fp128-legalize-crash-pr35690.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp128-fconstant.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp16-fconstant.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/prelegalizer-combiner-select-to-fminmax.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/select-fp16-fconstant.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-aapcs.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-build-vector.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-fp-imm-size.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-fp-imm.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/bf16-imm.ll (+8-8)
  • (modified) llvm/test/CodeGen/AArch64/bf16-instructions.ll (+3-3)
  • (modified) llvm/test/CodeGen/AArch64/bf16-v4-instructions.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/bf16.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/f16-imm.ll (+8-8)
  • (modified) llvm/test/CodeGen/AArch64/f16-instructions.ll (+3-3)
  • (modified) llvm/test/CodeGen/AArch64/fcopysign-noneon.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/fp16-v4-instructions.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/fp16-vector-nvcast.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/fp16_intrinsic_lane.ll (+15-15)
  • (modified) llvm/test/CodeGen/AArch64/fp16_intrinsic_scalar_1op.ll (+5-5)
  • (modified) llvm/test/CodeGen/AArch64/half.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/isinf.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/mattr-all.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-pred-selectop3.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-fmul-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/amdgpu-prelegalizer-combiner-crash.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fcanonicalize.mir (+12-12)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fdiv-sqrt-to-rsq.mir (+11-11)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-foldable-fneg.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fsub-fneg.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-rsq.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslate-bf16.ll (+6-6)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-call.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fconstant.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fcos.mir (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fdiv.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaxnum.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminnum.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsin.mir (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-round.mir (+36-36)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sitofp.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-uitofp.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.wqm.demote.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-fmed3-const.mir (+5-5)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-minmax-const.mir (+20-20)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-fmed3-minmax-const.mir (+20-20)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-default.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pow.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-rootn.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/br_cc.f16.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/build-vector-insert-elt-infloop.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/dagcombine-fmul-sel.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/extract-subvector-16bit.ll (+6-6)
  • (modified) llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll (+18-18)
  • (modified) llvm/test/CodeGen/AMDGPU/flat-offset-bug.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fma.f16.ll (+10-10)
  • (modified) llvm/test/CodeGen/AMDGPU/fmul-to-ldexp.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fold-imm-f16-f32.mir (+11-11)
  • (modified) llvm/test/CodeGen/AMDGPU/fold-int-pow2-with-fmul-or-fdiv.ll (+7-7)
  • (modified) llvm/test/CodeGen/AMDGPU/fp-classify.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/fract-match.ll (+30-30)
  • (modified) llvm/test/CodeGen/AMDGPU/imm16.ll (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/immv216.ll (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/inline-constraints.ll (+7-7)
  • (modified) llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2bf16.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wqm.demote.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/mad-mix.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/mai-inline.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/mixed-vmem-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/multi-divergent-exit-region.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/pack.v2f16.ll (+5-5)
  • (modified) llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll (+10-10)
  • (modified) llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/promote-alloca-vector-to-vector.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.f16.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.v2f16.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/select.f16.ll (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/arm-half-promote.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/armv8.2a-fp16-vector-intrinsics.ll (+4-4)
  • (modified) llvm/test/CodeGen/ARM/bf16-imm.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/const-load-align-thumb.mir (+2-2)
  • (modified) llvm/test/CodeGen/ARM/constant-island-SOImm-limit16.mir (+2-2)
  • (modified) llvm/test/CodeGen/ARM/fp16-bitcast.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/fp16-instructions.ll (+25-25)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool-arm.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool-thumb.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool2-arm.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool3-arm.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-no-condition.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/fp16-v3.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/pr47454.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/store_half.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-soft-float.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-soft-float.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/DirectX/all.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/any.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/atan2.ll (+8-8)
  • (modified) llvm/test/CodeGen/DirectX/degrees.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/exp.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/log.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/log10.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/radians.ll (+5-5)
  • (modified) llvm/test/CodeGen/DirectX/sign.ll (+2-2)
  • (modified) llvm/test/CodeGen/DirectX/step.ll (+4-4)
  • (modified) llvm/test/CodeGen/DirectX/vector_reduce_add.ll (+5-5)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/calling-conv.ll (+2-2)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/hfinsert.ll (+1-1)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/hfnosplat_cp.ll (+1-1)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/hfsplat.ll (+3-3)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/isel-mstore-fp16.ll (+1-1)
  • (modified) llvm/test/CodeGen/LoongArch/vararg.ll (+2-2)
  • (modified) llvm/test/CodeGen/MIR/Generic/bfloat-immediates.mir (+3-3)
  • (modified) llvm/test/CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir (+2-2)
  • (modified) llvm/test/CodeGen/Mips/msa/fexuprl.ll (+1-1)
  • (modified) llvm/test/CodeGen/NVPTX/bf16-instructions.ll (+1-1)
  • (modified) llvm/test/CodeGen/NVPTX/bf16.ll (+1-1)
  • (modified) llvm/test/CodeGen/NVPTX/half.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/2008-05-01-ppc_fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/2008-07-15-Fabs.ll (+7-7)
  • (modified) llvm/test/CodeGen/PowerPC/2008-07-17-Fneg.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/2008-10-28-UnprocessedNode.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/2008-10-28-f128-i32.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/2008-12-02-LegalizeTypeAssert.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/aix-complex.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/builtins-ppc-p9-f128.ll (+8-8)
  • (modified) llvm/test/CodeGen/PowerPC/bv-widen-undef.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/complex-return.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/constant-pool.ll (+8-8)
  • (modified) llvm/test/CodeGen/PowerPC/ctrloop-fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/disable-ctr-ppcf128.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-aggregates.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-arith.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/f128-compare.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-conv.ll (+6-6)
  • (modified) llvm/test/CodeGen/PowerPC/f128-fma.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/f128-passByValue.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-truncateNconv.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/float-asmprint.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/float-load-store-pair.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/fminnum.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/fp-classify.ll (+3-3)
  • (modified) llvm/test/CodeGen/PowerPC/fp128-bitcast-after-operation.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/global-address-non-got-indirect-access.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-4.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-endian.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128sf.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr15632.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr16556-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pr16573.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/rs-undef-use.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/std-unal-fi.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/half-zfa-fli.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/stack-store-check.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/tail-calls.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/vararg.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPARC/fp128-select.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPARC/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/half_extension.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll (+12-12)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-01.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-02.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-03.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/asm-10.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-17.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-19.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/call-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-01.ll (+4-4)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-vararg.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-04.ll (+1-1)
diff --git a/clang/test/C/C11/n1396.c b/clang/test/C/C11/n1396.c
index 6f76cfe9594961..264c69c733cb68 100644
--- a/clang/test/C/C11/n1396.c
+++ b/clang/test/C/C11/n1396.c
@@ -31,7 +31,7 @@
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -42,7 +42,7 @@
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -64,7 +64,7 @@
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -75,7 +75,7 @@
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -86,7 +86,7 @@
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -102,7 +102,7 @@ float extended_float_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -113,7 +113,7 @@ float extended_float_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -135,7 +135,7 @@ float extended_float_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -146,7 +146,7 @@ float extended_float_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -157,7 +157,7 @@ float extended_float_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -173,7 +173,7 @@ float extended_float_func_cast(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -184,7 +184,7 @@ float extended_float_func_cast(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -206,7 +206,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -217,7 +217,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -228,7 +228,7 @@ float extended_float_func_cast(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -244,7 +244,7 @@ float extended_double_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -255,7 +255,7 @@ float extended_double_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -277,7 +277,7 @@ float extended_double_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -299,7 +299,7 @@ float extended_double_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
index 9109626cea9ca2..2c87ce32b8811b 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
@@ -12,8 +12,8 @@
 #include <arm_fp16.h>
 
 // COMMON-LABEL: test_vceqzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half 0xH0000, metadata !"oeq", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half f0x0000, metadata !"oeq", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -21,8 +21,8 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"oge", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"oge", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -30,8 +30,8 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgtzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ogt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ogt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,8 +39,8 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vclezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ole", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ole", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -48,8 +48,8 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcltzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"olt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"olt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
index 90ee74e459ebd4..27d60de792b074 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
@@ -15,7 +15,7 @@ float16_t test_vabsh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vceqzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -23,7 +23,7 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -31,7 +31,7 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgtzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,7 +39,7 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vclezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -47,7 +47,7 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcltzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..b6bbff0c742f89 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -191,7 +191,7 @@ double test_double_pre_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
@@ -199,7 +199,7 @@ double test_double_pre_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_inc()
@@ -213,7 +213,7 @@ _Float16 test__Float16_post_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
@@ -221,7 +221,7 @@ _Float16 test__Float16_post_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_dc()
@@ -235,8 +235,8 @@ _Float16 test__Float16_post_dc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST...
[truncated]

llvmbot (Member) commented Jan 6, 2025

@llvm/pr-subscribers-clang

  • (modified) llvm/test/CodeGen/PowerPC/float-asmprint.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/float-load-store-pair.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/fminnum.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/fp-classify.ll (+3-3)
  • (modified) llvm/test/CodeGen/PowerPC/fp128-bitcast-after-operation.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/global-address-non-got-indirect-access.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-4.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-endian.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128sf.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr15632.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr16556-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pr16573.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/rs-undef-use.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/std-unal-fi.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/half-zfa-fli.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/stack-store-check.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/tail-calls.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/vararg.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPARC/fp128-select.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPARC/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/half_extension.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll (+12-12)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-01.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-02.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-03.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/asm-10.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-17.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-19.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/call-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-01.ll (+4-4)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-vararg.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-04.ll (+1-1)
diff --git a/clang/test/C/C11/n1396.c b/clang/test/C/C11/n1396.c
index 6f76cfe9594961..264c69c733cb68 100644
--- a/clang/test/C/C11/n1396.c
+++ b/clang/test/C/C11/n1396.c
@@ -31,7 +31,7 @@
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -42,7 +42,7 @@
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -64,7 +64,7 @@
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -75,7 +75,7 @@
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -86,7 +86,7 @@
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -102,7 +102,7 @@ float extended_float_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -113,7 +113,7 @@ float extended_float_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -135,7 +135,7 @@ float extended_float_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -146,7 +146,7 @@ float extended_float_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -157,7 +157,7 @@ float extended_float_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -173,7 +173,7 @@ float extended_float_func_cast(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -184,7 +184,7 @@ float extended_float_func_cast(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -206,7 +206,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -217,7 +217,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -228,7 +228,7 @@ float extended_float_func_cast(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -244,7 +244,7 @@ float extended_double_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -255,7 +255,7 @@ float extended_double_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -277,7 +277,7 @@ float extended_double_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -299,7 +299,7 @@ float extended_double_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
index 9109626cea9ca2..2c87ce32b8811b 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
@@ -12,8 +12,8 @@
 #include <arm_fp16.h>
 
 // COMMON-LABEL: test_vceqzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half 0xH0000, metadata !"oeq", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half f0x0000, metadata !"oeq", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -21,8 +21,8 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"oge", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"oge", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -30,8 +30,8 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgtzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ogt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ogt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,8 +39,8 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vclezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ole", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ole", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -48,8 +48,8 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcltzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"olt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"olt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
index 90ee74e459ebd4..27d60de792b074 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
@@ -15,7 +15,7 @@ float16_t test_vabsh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vceqzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -23,7 +23,7 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -31,7 +31,7 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgtzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,7 +39,7 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vclezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -47,7 +47,7 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcltzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..b6bbff0c742f89 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -191,7 +191,7 @@ double test_double_pre_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
@@ -199,7 +199,7 @@ double test_double_pre_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_inc()
@@ -213,7 +213,7 @@ _Float16 test__Float16_post_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
@@ -221,7 +221,7 @@ _Float16 test__Float16_post_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_dc()
@@ -235,8 +235,8 @@ _Float16 test__Float16_post_dc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST...
[truncated]
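
For readers skimming the truncated hunks above: every change is the same mechanical substitution. The legacy type-tagged bit-pattern prefixes (0xH for half, 0xK for x86_fp80, 0xL for fp128, 0xM for ppc_fp128) become a single f0x prefix whose width is inferred from the operand's type. Below is a rough sketch of the literal forms the patch summary describes; the function name and values are hypothetical, and the exact grammar is subject to review, so treat it as illustrative rather than authoritative.

; Hypothetical module exercising the new floating-point literal forms.
define void @fp_literal_examples() {
  ; C99-style hexadecimal literal: 0x1.f * 2^13 = 15872.0
  %hexlit  = fadd double 0x1.fp13, 0.000000e+00
  ; Signed infinities and quiet NaNs spelled as keywords.
  %inf     = fadd float +inf, -inf
  %qnan    = fadd float +qnan, 0.000000e+00
  ; Quiet NaN with an explicit payload.
  %payload = fadd double +nan(0x1), 0.000000e+00
  ; Exact bit pattern with no type-letter code; the width comes from the
  ; operand type (here half: 0x3C00 is 1.0, 0x0000 is +0.0).
  %bits    = fadd half f0x3C00, f0x0000
  ret void
}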

@llvmbot
Member

llvmbot commented Jan 6, 2025

@llvm/pr-subscribers-backend-amdgpu

 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -299,7 +299,7 @@ float extended_double_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
index 9109626cea9ca2..2c87ce32b8811b 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
@@ -12,8 +12,8 @@
 #include <arm_fp16.h>
 
 // COMMON-LABEL: test_vceqzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half 0xH0000, metadata !"oeq", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half f0x0000, metadata !"oeq", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -21,8 +21,8 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"oge", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"oge", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -30,8 +30,8 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgtzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ogt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ogt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,8 +39,8 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vclezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ole", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ole", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -48,8 +48,8 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcltzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"olt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"olt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
index 90ee74e459ebd4..27d60de792b074 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
@@ -15,7 +15,7 @@ float16_t test_vabsh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vceqzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -23,7 +23,7 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -31,7 +31,7 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgtzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,7 +39,7 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vclezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -47,7 +47,7 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcltzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..b6bbff0c742f89 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -191,7 +191,7 @@ double test_double_pre_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
@@ -199,7 +199,7 @@ double test_double_pre_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_inc()
@@ -213,7 +213,7 @@ _Float16 test__Float16_post_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
@@ -221,7 +221,7 @@ _Float16 test__Float16_post_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_dc()
@@ -235,8 +235,8 @@ _Float16 test__Float16_post_dc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST...
[truncated]
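
For reference, a small hand-written sketch (not part of the patch) of the literal kinds the description lists; the constants here are chosen purely for illustration:

; Hexadecimal-significand literal, i.e. 1.9375 * 2^13:
@a = global double 0x1.fp13
; Signed special values:
@b = global float +inf
@c = global float -qnan
; NaN with an explicit payload:
@d = global half +nan(0x1)
; Exact bit-pattern form, no 0xH/0xK/0xL/0xM type code needed
; (f0x3C00 is half 1.0, as in the test updates above):
@e = global half f0x3C00

Per the description, the legacy prefixed form should still be accepted by the parser, so x86_fp80 0xK3FFF8000000000000000 and x86_fp80 f0x3FFF8000000000000000 would denote the same constant.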

@llvmbot
Copy link
Member

llvmbot commented Jan 6, 2025

@llvm/pr-subscribers-backend-aarch64


Copy link

github-actions bot commented Jan 6, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@jcranmer-intel jcranmer-intel changed the title [AsmParser] Revamp how floating-point literals in LLVM IR. [IR][AsmParser] Revamp how floating-point literals in LLVM IR. Jan 7, 2025
     TokStart[1] == '0' && TokStart[2] == 'x' &&
     isxdigit(static_cast<unsigned char>(TokStart[3]))) {
-    int len = CurPtr-TokStart-3;
+    bool IsFloatConst = TokStart[0] == 'f';
+    int len = CurPtr - TokStart - 3;
Copy link
Member

I know it's from the old code, but since we're changing it anyway, could you make it Len? Also, why int rather than unsigned or size_t?

  }
  case lltok::FloatHexLiteral: {
    assert(ExpectedTy && "Need type to parse float values");
    auto &Semantics = ExpectedTy->getFltSemantics();
Copy link
Member

nit: const auto &
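
As context for the assert in the snippet: an f0x literal cannot be interpreted without an expected type, because the same value occupies a different bit pattern at each width. A hand-written sketch, reusing constants from the test updates above, of how 1.0 would be written at three widths:

@h = global half f0x3C00
@x = global x86_fp80 f0x3FFF8000000000000000
@q = global fp128 f0x3FFF0000000000000000000000000000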

Copy link
Contributor

@nikic nikic left a comment

As you still support the legacy format, could you please restrict this PR to only the parser changes, and leave the printer changes (and the mass test update they require) to a followup?

@jcranmer-intel
Copy link
Contributor Author

> As you still support the legacy format, could you please restrict this PR to only the parser changes, and leave the printer changes (and the mass test update they require) to a followup?

Sure, I can do it. I made them two separate commits partly for that reason.

@jayfoad
Copy link
Contributor

jayfoad commented Jan 7, 2025

> [IR][AsmParser] Revamp how floating-point literals in LLVM IR.

"how floating-point literals" doesn't read right to me - is there a word missing?

@jcranmer-intel jcranmer-intel changed the title [IR][AsmParser] Revamp how floating-point literals in LLVM IR. [IR][AsmParser] Revamp how floating-point literals work in LLVM IR. Jan 8, 2025