Conversation


@sarnold sarnold commented Feb 13, 2017

Fixes build on hardened PAX host with gcc-5 (linker error on relocs).
Completes no-PIE config by adding to ALL_* flags variables.
Borrowed from Gentoo gcc patches, tested on 2 hardened amd64 hosts.

Upstream-Status: Inappropriate [configuration patching artifact]

Committed by: Gentoo Toolchain Project <toolchain@gentoo.org>
Signed-off-by: Stephen Arnold <stephen.arnold42@gmail.com>
@kraj kraj merged this pull request into kraj:gcc-6-branch Feb 13, 2017
@sarnold sarnold deleted the gcc-6-branch-patch-1 branch February 15, 2017 03:44
kraj pushed a commit that referenced this pull request May 29, 2017
	* pt.c (most_specialized_instantiation): Cope with duplicate
	instantiations.

	PR c++/80891 (#1)
	* g++.dg/lookup/pr80891-1.C: New.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@248573 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request May 29, 2017
	* cp-tree.h (lookup_maybe_add): Add DEDUPING argument.
	* name-lookup.c (name_lookup): Add deduping field.
	(name_lookup::preserve_state, name_lookup::restore_state): Deal
	with deduping.
	(name_lookup::add_overload): New.
	(name_lookup::add_value, name_lookup::add_fns): Call add_overload.
	(name_lookup::search_adl): Set deduping.  Don't unmark here.
	* pt.c (most_specialized_instantiation): Revert previous change;
	assert not given duplicates.
	* tree.c (lookup_mark): Just mark the underlying decls.
	(lookup_maybe_add): Dedup using marked decls.

	PR c++/80891 (#5)
	* g++.dg/lookup/pr80891-5.C: New.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@248578 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Apr 19, 2018
When -fcf-protection -mcet is used, I got

FAIL: g++.dg/eh/sighandle.C

(gdb) bt
 #0  _Unwind_RaiseException (exc=exc@entry=0x416ed0)
    at /export/gnu/import/git/sources/gcc/libgcc/unwind.inc:140
 #1  0x00007ffff7d9936b in __cxxabiv1::__cxa_throw (obj=<optimized out>,
    tinfo=0x403dd0 <typeinfo for int@@CXXABI_1.3>, dest=0x0)
    at /export/gnu/import/git/sources/gcc/libstdc++-v3/libsupc++/eh_throw.cc:90
 #2  0x0000000000401255 in sighandler (signo=11, si=0x7fffffffd6f8,
    uc=0x7fffffffd5c0)
    at /export/gnu/import/git/sources/gcc/gcc/testsuite/g++.dg/eh/sighandle.C:9
 #3  <signal handler called> <<<< Signal frame which isn't on shadow stack
 #4  dosegv ()
    at /export/gnu/import/git/sources/gcc/gcc/testsuite/g++.dg/eh/sighandle.C:14
 #5  0x00000000004012e3 in main ()
    at /export/gnu/import/git/sources/gcc/gcc/testsuite/g++.dg/eh/sighandle.C:30
(gdb) p frames
$6 = 5
(gdb)

frame count should be 4, not 5.  This patch skips signal frames when
unwinding shadow stack.
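
For context, a minimal sketch of the test scenario, modeled on the
backtrace above (the exact dg-options of g++.dg/eh/sighandle.C are
assumed here, typically -fexceptions -fnon-call-exceptions):

#include <csignal>

static void sighandler (int)
{
  throw 0;                     // unwinding crosses the kernel signal frame
}

static void dosegv ()
{
  *(volatile int *) 0 = 0;     // deliberately fault
}

int main ()
{
  struct sigaction sa = {};
  sa.sa_handler = sighandler;
  sigaction (SIGSEGV, &sa, nullptr);
  try { dosegv (); }
  catch (int) { return 0; }    // reached only if the unwinder copes
  return 1;
}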

gcc/testsuite/

	PR libgcc/85334
	* g++.dg/torture/pr85334.C: New test.

libgcc/

	PR libgcc/85334
	* unwind-generic.h (_Unwind_Frames_Increment): New.
	* config/i386/shadow-stack-unwind.h (_Unwind_Frames_Increment):
	Likewise.
	* unwind.inc (_Unwind_RaiseException_Phase2): Increment frame
	count with _Unwind_Frames_Increment.
	(_Unwind_ForcedUnwind_Phase2): Likewise.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@259502 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request May 24, 2018
This fixes a long-standing quirk present in the layout information for record
types displayed by the -gnatR3 switch: when a component has a variable
(starting) position, its corresponding line in the output has an irregular and
awkward format.  After this change, the format is the same as in all the other
cases.

For the following record:

    type R (m : natural) is record
        s : string (1 .. m);
        r : natural;
        b : boolean;
    end record;
    for R'alignment use 4;
    pragma Pack (R);

the output of -gnatR3 used to be:

for R'Object_Size use 17179869248;
for R'Value_Size use ((#1 + 8) * 8);
for R'Alignment use 4;
for R use record
   m at  0 range  0 .. 30;
   s at  4 range  0 .. ((#1 * 8)) - 1;
   r at bit offset (((#1 + 4) * 8)) size in bits = 31
   b at bit offset ((((#1 + 7) * 8) + 7)) size in bits = 1
end record;

and is changed into:

for R'Object_Size use 17179869248;
for R'Value_Size use ((#1 + 8) * 8);
for R'Alignment use 4;
for R use record
   m at  0 range  0 .. 30;
   s at  4 range  0 .. ((#1 * 8)) - 1;
   r at (#1 + 4) range  0 .. 30;
   b at (#1 + 7) range  7 ..  7;
end record;

2018-05-24  Eric Botcazou  <ebotcazou@adacore.com>

gcc/ada/

	* fe.h (Set_Normalized_First_Bit): Declare.
	(Set_Normalized_Position): Likewise.
	* repinfo.adb (List_Record_Layout): Do not use irregular output for a
	variable position.  Fix minor spacing issue.
	* gcc-interface/decl.c (annotate_rep): If a field has a variable
	offset, compute the normalized position and annotate it in addition to
	the bit offset.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@260669 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Nov 14, 2018
This adds a 4th information level for the -gnatR output, where relevant
compiler-generated types are listed in addition to the information
already output by -gnatR3.

For the following package P:

package P is

  type Arr0 is array (Positive range <>) of Boolean;

    type Rec (D1 : Positive; D2 : Boolean) is record
       C1 : Integer;
       C2 : Arr0 (1 .. D1);

       case D2 is
          when False =>
             C3 : Character;
          when True =>
             C4 : String (1 .. 3);
             C5 : Float;
       end case;
    end record;

    type Arr1 is array (1 .. 8) of Rec (1, True);

end P;

the output generated by -gnatR4 must be:

Representation information for unit P (spec)
--------------------------------------------

for Arr0'Alignment use 1;
for Arr0'Component_Size use 8;

for Rec'Object_Size use 17179869344;
for Rec'Value_Size use (if (#2 != 0) then ((((#1 + 15) & -4) + 8) * 8)
else ((((#1 + 15) & -4) + 1) * 8) end);
for Rec'Alignment use 4;
for Rec use record
   D1 at  0 range  0 .. 31;
   D2 at  4 range  0 ..  7;
   C1 at  8 range  0 .. 31;
   C2 at 12 range  0 .. ((#1 * 8)) - 1;
   C3 at ((#1 + 15) & -4) range  0 ..  7;
   C4 at ((#1 + 15) & -4) range  0 .. 23;
   C5 at (((#1 + 15) & -4) + 4) range  0 .. 31;
end record;

for Arr1'Size use 1536;
for Arr1'Alignment use 4;
for Arr1'Component_Size use 192;

for Tarr1c'Size use 192;
for Tarr1c'Alignment use 4;
for Tarr1c use record
   D1 at  0 range  0 .. 31;
   D2 at  4 range  0 ..  7;
   C1 at  8 range  0 .. 31;
   C2 at 12 range  0 ..  7;
   C4 at 16 range  0 .. 23;
   C5 at 20 range  0 .. 31;
end record;

2018-11-14  Eric Botcazou  <ebotcazou@adacore.com>

gcc/ada/

	* doc/gnat_ugn/building_executable_programs_with_gnat.rst
	(-gnatR): Document new -gnatR4 level.
	* gnat_ugn.texi: Regenerate.
	* opt.ads (List_Representation_Info): Bump upper bound to 4.
	* repinfo.adb: Add with clause for GNAT.HTable.
	(Relevant_Entities_Size): New constant.
	(Entity_Header_Num): New type.
	(Entity_Hash): New function.
	(Relevant_Entities): New set implemented with GNAT.HTable.
	(List_Entities): Also list compiler-generated entities present
	in the Relevant_Entities set. Consider that the Component_Type
	of an array type is relevant.
	(List_Rep_Info): Reset Relevant_Entities for each unit.
	* switch-c.adb (Scan_Front_End_Switches): Add support for -gnatR4.
	* switch-m.adb (Normalize_Compiler_Switches): Likewise.
	* usage.adb (Usage): Likewise.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@266131 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Apr 12, 2019
kraj pushed a commit that referenced this pull request May 7, 2019
…D_EXPR)

using SVE.

Given this input code:

int
sum_abs (uint8_t *restrict x, uint8_t *restrict y, int n)
{
  int sum = 0;

  for (int i = 0; i < n; i++)
    {
      sum += __builtin_abs (x[i] - y[i]);
    }

  return sum;
}

The resulting SVE code is:

0000000000000000 <sum_abs>:
   0:	7100005f 	cmp	w2, #0x0
   4:	5400026d 	b.le	50 <sum_abs+0x50>
   8:	d2800003 	mov	x3, #0x0                   	// #0
   c:	93407c42 	sxtw	x2, w2
  10:	2538c002 	mov	z2.b, #0
  14:	25221fe0 	whilelo	p0.b, xzr, x2
  18:	2538c023 	mov	z3.b, #1
  1c:	2518e3e1 	ptrue	p1.b
  20:	a4034000 	ld1b	{z0.b}, p0/z, [x0, x3]
  24:	a4034021 	ld1b	{z1.b}, p0/z, [x1, x3]
  28:	0430e3e3 	incb	x3
  2c:	0520c021 	sel	z1.b, p0, z1.b, z0.b
  30:	25221c60 	whilelo	p0.b, x3, x2
  34:	040d0420 	uabd	z0.b, p1/m, z0.b, z1.b
  38:	44830402 	udot	z2.s, z0.b, z3.b
  3c:	54ffff21 	b.ne	20 <sum_abs+0x20>  // b.any
  40:	2598e3e0 	ptrue	p0.s
  44:	04812042 	uaddv	d2, p0, z2.s
  48:	1e260040 	fmov	w0, s2
  4c:	d65f03c0 	ret
  50:	1e2703e2 	fmov	s2, wzr
  54:	1e260040 	fmov	w0, s2
  58:	d65f03c0 	ret

Notice how udot is used inside a fully masked loop.


gcc/ChangeLog:

2019-05-07  Alejandro Martinez  <alejandro.martinezvicente@arm.com>

	* config/aarch64/aarch64-sve.md (<su>abd<mode>_3): New define_expand.
	(aarch64_<su>abd<mode>_3): Likewise.
	(*aarch64_<su>abd<mode>_3): New define_insn.
	(<sur>sad<vsi2qi>): New define_expand.
	* config/aarch64/iterators.md: Added MAX_OPP attribute.
	* tree-vect-loop.c (use_mask_by_cond_expr_p): Add SAD_EXPR.
	(build_vect_cond_expr): Likewise.

gcc/testsuite/ChangeLog:

2019-05-07  Alejandro Martinez  <alejandro.martinezvicente@arm.com>

	* gcc.target/aarch64/sve/sad_1.c: New test for sum of absolute
	differences.



git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@270975 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Aug 2, 2019
Introduce exception handler ABI #1 to ensure single release, no access
after release of reraised Machine_Occurrences, and no failure to
re-reraise a Machine_Occurrence.

Unlike Ada exceptions, foreign exceptions do not get a new
Machine_Occurrence upon reraise, but each handler would delete the
exception upon completion, normal or exceptional, save for the case of
a 'raise;' statement within the handler, that avoided the delete by
clearing the exception pointer that the cleanup would use to release
it.  The cleared exception pointer might then be used by a subsequent
reraise within the same handler.  Get_Current_Excep.all would also
expose the Machine_Occurrence to reuse by Reraise_Occurrence, even for
native exceptions.

Under ABI #1, Begin_Handler_v1 claims responsibility for releasing an
exception by saving its cleanup and setting it to Claimed_Cleanup.
End_Handler_v1 restores the cleanup and runs it, as long as it isn't
still Claimed_Cleanup (which indicates an enclosing handler has
already claimed responsibility for releasing it), and as long as the
same exception is not being propagated up (the next handler of the
propagating exception will then claim responsibility for releasing
it), so reraise no longer needs to clear the exception pointer, and it
can just propagate the exception, just like Reraise_Occurrence.

ABI #1 is fully interoperable with ABI #0, i.e., exception handlers
that call the #0 primitives can be linked together with ones that call
the #1 primitives, and they will not misbehave.  When a #1 handler
claims responsibility for releasing an exception, even #0 reraises
dynamically nested within it will refrain from releasing it.  However,
when a #0 handler is a handler of a foreign exception that would have
been responsible for releasing it with #1, a Reraise_Occurrence of
that foreign or other Machine_Occurrence-carrying exception may still
cause the exception to be released multiple times, and to be used
after it is first released, even if other handlers of the foreign
exception use #1.


for  gcc/ada/ChangeLog

	* libgnat/a-exexpr.adb (Begin_Handler_v1, End_Handler_v1): New.
	(Claimed_Cleanup): New.
	(Begin_Handler, End_Handler): Document.
	* gcc-interface/trans.c (gigi): Switch to exception handler
	ABI #1.
	(Exception_Handler_to_gnu_gcc): Save the original cleanup
	returned by begin handler, pass it to end handler, and use
	EH_ELSE_EXPR to pass a propagating exception to end handler.
	(gnat_to_gnu): Leave the exception pointer alone for reraise.
	(add_cleanup): Handle EH_ELSE_EXPR, require it by itself.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@274029 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Aug 9, 2019
…tions

The addsi3_compare_op[12] patterns currently only have constraints to
pick the 32-bit variants of the instructions.  Although the assembler
may sometimes opportunistically match a 16-bit t2 instruction, there's
no real control over that within the compiler.  Consequently we might
emit a 32-bit adds instruction when a 16-bit subs instruction would
serve equally well.  We do, of course, still have to be careful about
the small number of boundary cases by controlling the order quite
carefully.

This patch adds the constraints and templates to match the t2 16-bit
variants of these instructions.  Now, for example, we can generate

    subs r0, r0, #1 // 16-bit instruction

instead of 

    adds r0, r0, #1 // 32-bit instruction.
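
As a hedged illustration (the function below is assumed, not taken
from the patch), loop-control code that subtracts one and branches on
the flags is exactly where the combined subtract-and-compare pattern
can now pick the 16-bit encoding:

/* Built with e.g. -O2 -mthumb -march=armv7-m.  */
unsigned count_down (unsigned n)
{
  unsigned steps = 0;
  while (n--)                  // subtract 1, branch on the result flags
    ++steps;
  return steps;
}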

	* config/arm/arm.md (addsi3_compare_op1): Add 16-bit thumb-2 variants.
	(addsi3_compare_op2): Likewise.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@274237 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Aug 22, 2019
Like the logical operations, expand all shifts early rather than only
sometimes.  The Neon shift expansions are never emitted (not even with
-fneon-for-64bits), so they are not useful.  So all the late expansions
and Neon shift patterns can be removed, and shifts are more optimized
as a result.  Since some extend patterns use Neon DImode shifts, remove
the Neon extend variants and related splits.

A simple example now generates the same efficient code after this
patch with -mfpu=neon and -mfpu=vfp (previously just the fact of
having Neon enabled resulted in inefficient code for no reason).

unsigned long long f(unsigned long long x, unsigned long long y)
{ return x & (y >> 33); }

Before:
	strd    r4, r5, [sp, #-8]!
	lsr     r4, r3, #1
	mov     r5, #0
	and     r1, r1, r5
	and     r0, r0, r4
	ldrd    r4, r5, [sp]
	add     sp, sp, #8
	bx      lr

After:
	and     r0, r0, r3, lsr #1
	mov     r1, #0
	bx      lr

Bootstrap and regress OK on arm-none-linux-gnueabihf --with-cpu=cortex-a57

    gcc/
	* config/arm/iterators.md (qhs_extenddi_cstr): Update.
	(qhs_zextenddi_cstr): Likewise.
	* config/arm/arm.md (ashldi3): Always expand early.
	(ashlsi3): Likewise.
	(ashrsi3): Likewise.
	(zero_extend<mode>di2): Remove Neon variants.
	(extend<mode>di2): Likewise.
	* config/arm/neon.md (ashldi3_neon_noclobber): Remove.
	(signed_shift_di3_neon): Likewise.
	(unsigned_shift_di3_neon): Likewise.
	(ashrdi3_neon_imm_noclobber): Likewise.
	(lshrdi3_neon_imm_noclobber): Likewise.
	(<shift>di3_neon): Likewise.
	(split extend): Remove DI extend split patterns.

   gcc/testsuite/
	* gcc.target/arm/neon-extend-1.c: Remove test.
	* gcc.target/arm/neon-extend-2.c: Remove test.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@274824 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Oct 18, 2019
In almost all cases it is better to handle inequality comparisons against constants
by transforming comparisons of the form (reg <GE/LT/GEU/LTU> const) into
(reg <GT/LE/GTU/LEU> (const+1)).  However, there are many cases that we could
handle but currently failed to do so because we forced the constant into a
register too early in the pattern expansion.  To permit this to be done we need
to defer forcing the constant into a register until after we've had the chance
to do the transform - in some cases that may even mean that we no longer need
to force the constant into a register at all.  For example, on Arm, the case:

_Bool f8 (unsigned long long a) { return a > 0xffffffff; }

previously compiled to

        mov     r3, #0
        cmp     r1, r3
        mvn     r2, #0
        cmpeq   r0, r2
        movhi   r0, #1
        movls   r0, #0
        bx      lr

But now compiles to

        cmp     r1, #1
        cmpeq   r0, #0
        movcs   r0, #1
        movcc   r0, #0
        bx      lr

Which, although not yet completely optimal, is certainly better than
before.

	* config/arm/arm.md (cbranchdi4): Accept reg_or_int_operand for
	operand 2.
	(cstoredi4): Similarly, but for operand 3.
	* config/arm/arm.c (arm_canonicalize_comparison): Allow canonicalization
	of unsigned compares with a constant on Arm.  Prefer using const+1 and
	adjusting the comparison over swapping the operands whenever the
	original constant was not valid.
	(arm_gen_dicompare_reg): If Y is not a valid operand, force it to a
	register here.
	(arm_validize_comparison): Do not force invalid DImode operands to
	registers here.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@277178 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Oct 22, 2019
On Arm we have both carry and borrow operations, but borrow is
essentially '~carry'.  Of course, with boolean logic ~carry is also
1-carry.

GCC transforms

	(1 - X - LTU (cc, 0))

into

	(GEU (cc, 0) - X)

Now the former matches a real insn in Arm state, using the RSC
instruction with #1 as the immediate, but we currently do not
recognize the canonicalized form.  Nevertheless, given the above
logic, this turns out to be quite straightforward as the original
expression matches arm_borrow_operation and the revised form can be
used with arm_carry_operation.  Since we match this new pattern we
also update rtx_costs to handle it.
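
As a hedged example (not from the commit message), one way this RTL
shape can arise is the high word of a double-word subtraction from a
constant:

/* The high 32-bit word computes 1 - x_hi - borrow, i.e. the
   (1 - X - LTU (cc, 0)) form that GCC canonicalizes to
   (GEU (cc, 0) - X) and that the new pattern now matches.  */
long long f (long long x)
{
  return 0x100000000LL - x;
}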

	* config/arm/arm.md (rsbsi_carryin_reg): New pattern.
	* config/arm/arm.c (arm_rtx_costs_internal, case MINUS): Handle
	subtraction from a carry operation.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@277290 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Oct 31, 2019
…piling for Thumb2

Thumb2 code now uses the Arm implementation of legitimize_address.
That code has a case to handle addresses that are absolute CONST_INT
values, which is a common use case in deeply embedded targets (eg:
void *p = (void*)0x12345678).  Since thumb has very limited negative
offsets from a constant, we want to avoid forming a CSE base that will
then be used with a negative value.

This was reported upstream originally in
https://gcc.gnu.org/ml/gcc-help/2019-10/msg00122.html

For example,

void test1(void) {
  volatile uint32_t * const p = (uint32_t *) 0x43fe1800;

  p[3] = 1;
  p[4] = 2;
  p[1] = 3;
  p[7] = 4;
  p[0] = 6;
}

With the new code, instead of 

        ldr     r3, .L2
        subw    r2, r3, #2035
        movs    r1, #1
        str     r1, [r2]
        subw    r2, r3, #2031
        movs    r1, #2
        str     r1, [r2]
        subw    r2, r3, #2043
        movs    r1, #3
        str     r1, [r2]
        subw    r2, r3, #2019
        movs    r1, #4
        subw    r3, r3, #2047
        str     r1, [r2]
        movs    r2, #6
        str     r2, [r3]
        bx      lr


We now get

        ldr     r3, .L2
        movs    r2, #1
        str     r2, [r3, #2060]
        movs    r2, #2
        str     r2, [r3, #2064]
        movs    r2, #3
        str     r2, [r3, #2052]
        movs    r2, #4
        str     r2, [r3, #2076]
        movs    r2, #6
        str     r2, [r3, #2048]
        bx      lr


	* config/arm/arm.c (arm_legitimize_address): Don't form negative
	offsets from a CONST_INT address when TARGET_THUMB2.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@277677 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request Jun 17, 2020
Made apparent by recent commit dc70315
"openmp: Implement discovery of implicit declare target to clauses":

    +FAIL: libgomp.c/target-39.c (internal compiler error)
    +FAIL: libgomp.c/target-39.c (test for excess errors)
    +UNRESOLVED: libgomp.c/target-39.c compilation failed to produce executable

This is in a '--enable-offload-targets=[...],hsa' build, with '-foffload=hsa'
enabled (by default).

    during GIMPLE pass: hsagen
    source-gcc/libgomp/testsuite/libgomp.c/target-39.c: In function ‘main._omp_fn.0.hsa.0’:
    source-gcc/libgomp/testsuite/libgomp.c/target-39.c:23:11: internal compiler error: Segmentation fault
       23 |   #pragma omp target map(from:err)
          |           ^~~
    [...]

GDB:

    Program received signal SIGSEGV, Segmentation fault.
    fndecl_built_in_p (node=0x0, name=BUILT_IN_PREFETCH) at [...]/source-gcc/gcc/tree.h:6267
    6267      return (fndecl_built_in_p (node, BUILT_IN_NORMAL)
    (gdb) bt
    #0  fndecl_built_in_p (node=0x0, name=BUILT_IN_PREFETCH) at [...]/source-gcc/gcc/tree.h:6267
    #1  0x0000000000b19739 in gen_hsa_insns_for_call (stmt=stmt@entry=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at [...]/source-gcc/gcc/hsa-gen.c:5304
    #2  0x0000000000b1aca7 in gen_hsa_insns_for_gimple_stmt (stmt=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at [...]/source-gcc/gcc/hsa-gen.c:5770
    #3  0x0000000000b1bd21 in gen_body_from_gimple () at [...]/source-gcc/gcc/hsa-gen.c:5999
    #4  0x0000000000b1dbd2 in generate_hsa (kernel=<optimized out>) at [...]/source-gcc/gcc/hsa-gen.c:6596
    #5  0x0000000000b1de66 in (anonymous namespace)::pass_gen_hsail::execute (this=0x2a2aac0) at [...]/source-gcc/gcc/hsa-gen.c:6680
    #6  0x0000000000d06f90 in execute_one_pass (pass=pass@entry=0x2a2aac0) at [...]/source-gcc/gcc/passes.c:2502
    [...]
    (gdb) up
    #1  0x0000000000b19739 in gen_hsa_insns_for_call (stmt=stmt@entry=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at /home/thomas/tmp/source/gcc/build/track-slim-omp/source-gcc/gcc/hsa-gen.c:5304
    5304          if (fndecl_built_in_p (function_decl, BUILT_IN_PREFETCH))
    (gdb) print function_decl
    $1 = (tree) 0x0
    (gdb) list
    5299      if (!gimple_call_builtin_p (stmt, BUILT_IN_NORMAL))
    5300        {
    5301          tree function_decl = gimple_call_fndecl (stmt);
    5302          /* Prefetch pass can create type-mismatching prefetch builtin calls which
    5303             fail the gimple_call_builtin_p test above.  Handle them here.  */
    5304          if (fndecl_built_in_p (function_decl, BUILT_IN_PREFETCH))
    5305            return;
    5306
    5307          if (function_decl == NULL_TREE)
    5308            {

The problem is present already since 2016-11-23 commit
56b1c60 (r242761) "Merge from HSA branch to
trunk", and the fix obvious enough.

	gcc/
	* hsa-gen.c (gen_hsa_insns_for_call): Move 'function_decl ==
	NULL_TREE' check earlier.
	gcc/testsuite/
	* c-c++-common/gomp/hsa-indirect-call-1.c: New file.
kraj pushed a commit that referenced this pull request Aug 17, 2020
Since 21cfe72 there's a new
OMP_LIST_NONTEMPORAL value, but it was missing from the static array
of clause names defined at the beginning of resolve_omp_clauses:

./xgcc -B. /home/marxin/Programming/gcc/gcc/testsuite/gfortran.dg/gomp/nontemporal-1.f90 -fopenmp -c
../../gcc/fortran/openmp.c:4737:28: runtime error: index 21 out of bounds for type 'char *[21]'
    #0 0xbdb956 in resolve_omp_clauses ../../gcc/fortran/openmp.c:4737
    #1 0xbeb076 in resolve_omp_do ../../gcc/fortran/openmp.c:6139
    #2 0xbf029a in gfc_resolve_omp_directive(gfc_code*, gfc_namespace*) ../../gcc/fortran/openmp.c:6792
    #3 0xcb6363 in gfc_resolve_code(gfc_code*, gfc_namespace*) ../../gcc/fortran/resolve.c:12185
    #4 0xcef8cf in resolve_codes ../../gcc/fortran/resolve.c:17303
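
A minimal analog of the bug class (illustrative only, not the GCC
sources): an enum gains a value but a parallel name table does not, so
indexing with the new value reads past the end of the array.

#include <cstdio>

enum omp_list { OMP_LIST_FIRST, OMP_LIST_LAST, OMP_LIST_NONTEMPORAL };

static const char *clause_names[] = { "FIRST", "LAST" };  // entry missing

int main ()
{
  // Index 2 into a 2-element array: the out-of-bounds read UBSAN flags.
  std::printf ("%s\n", clause_names[OMP_LIST_NONTEMPORAL]);
}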

gcc/fortran/ChangeLog:

	* openmp.c (resolve_omp_clauses): Add NONTEMPORAL to clause
	names.
kraj pushed a commit that referenced this pull request Aug 22, 2020
PR analyzer/94851 reports various false "NULL dereference" diagnostics.
The first case (comment #1) affects GCC 10.2 but no longer affects
trunk; I believe it was fixed by the state rewrite of
r11-2694-g808f4dfeb3a95f50f15e71148e5c1067f90a126d.

The patch adds a regression test for this case.

The other cases (comment #3 and comment #4) still affect trunk.
In both cases, the && in a conditional is optimized to bitwise &
  _1 = p_4 != 0B;
  _2 = p_4 != q_6(D);
  _3 = _1 & _2;
and the analyzer fails to fold this for the case where one (or both) of
the conditionals is false, and thus erroneously considers the path where
"p" is non-NULL despite being passed a NULL value.

Fix this by implementing folding for this case.
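
A hedged sketch of the guard pattern involved (reconstructed from the
description above, not copied from the PR):

extern void use (void *p);

void test (void *p, void *q)
{
  /* Gimplified to _1 = p != 0B; _2 = p != q; _3 = _1 & _2; the
     analyzer must fold "_1 & 0" to 0 so it does not follow the
     then-branch on the path where p is NULL.  */
  if (p && p != q)
    use (p);
}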

gcc/analyzer/ChangeLog:
	PR analyzer/94851
	* region-model-manager.cc
	(region_model_manager::maybe_fold_binop): Fold bitwise "& 0" to 0.

gcc/testsuite/ChangeLog:
	PR analyzer/94851
	* gcc.dg/analyzer/pr94851-1.c: New test.
	* gcc.dg/analyzer/pr94851-3.c: New test.
	* gcc.dg/analyzer/pr94851-4.c: New test.
kraj pushed a commit that referenced this pull request Oct 1, 2020
This PR points out that we accept

  template<typename T> struct tuple { tuple(T); }; // #1
  template<typename T> explicit tuple(T t) -> tuple<T>; // gcc-mirror#2
  tuple t = { 1 };

despite the 'explicit' deduction guide in a copy-list-initialization
context.  That's because in deduction_guides_for we first find the
user-defined deduction guide (#2), and then ctor_deduction_guides_for
creates artificial deduction guides: one from the tuple(T) constructor and
a copy guide.  So we end up with these three guides:

  (1) template<class T> tuple(T) -> tuple<T> [DECL_NONCONVERTING_P]
  (2) template<class T> tuple(tuple<T>) -> tuple<T>
  (3) template<class T> tuple(T) -> tuple<T>

Then, in do_class_deduction, we prune this set, and get rid of (1).
Then overload resolution selects (3) and we succeed.

But [over.match.list]p1 says "In copy-list-initialization, if an explicit
constructor is chosen, the initialization is ill-formed."  It also goes
on to say that this differs from other situations where only converting
constructors are considered for copy-initialization.  Therefore for
list-initialization we consider explicit constructors and complain if one
is chosen.  E.g. convert_like_internal/ck_user can give an error.

So my logic runs that we should not prune the deduction_guides_for guides
in a copy-list-initialization context, and only complain if we actually
choose an explicit deduction guide.  This matches clang++/EDG/msvc++.
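
Concretely, with the declarations from the example above, the fix
gives (hedged sketch):

template<typename T> struct tuple { tuple(T); };
template<typename T> explicit tuple(T t) -> tuple<T>;

tuple t = { 1 };   // now an error: the explicit guide is chosen
tuple u{ 1 };      // still OK: direct-list-initialization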

gcc/cp/ChangeLog:

	PR c++/90210
	* pt.c (do_class_deduction): Don't prune explicit deduction guides
	in copy-list-initialization.  In copy-list-initialization, if an
	explicit deduction guide was selected, give an error.

gcc/testsuite/ChangeLog:

	PR c++/90210
	* g++.dg/cpp1z/class-deduction73.C: New test.
kraj pushed a commit that referenced this pull request Oct 7, 2020
This patch improves block-scope extern handling by always injecting a
hidden copy into the enclosing namespace (or using a match already
there).  This hidden copy will be revealed if the user explicitly
declares it later.  We can get from the DECL_LOCAL_DECL_P local extern
to the alias via DECL_LOCAL_DECL_ALIAS.  This fixes several bugs and
removes the kludgy per-function extern_decl_map.  We only do this
pushing for non-dependent local externs -- dependent ones will be
pushed during instantiation.

User code that expected to be able to handle incompatible local
externs in different block-scopes will no longer work.  That code is
ill-formed.  (always was, despite what 31775 claimed).  I had to
adjust a number of testcases that fell into this.
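
A hedged example of the newly rejected pattern:

void f () { extern int e; }   // injects a hidden ::e of type int
void g () { extern long e; }  // now diagnosed: conflicts with hidden ::e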

I tried using DECL_VALUE_EXPR, but that didn't work out.  Due to
constexpr requirements we have to do the replacement very late (it
happens in the gimplifier).   Consider:

extern int l[]; // #1
constexpr bool foo ()
{
   extern int l[3]; // this does not complete the type of decl #1
   constexpr int *p = &l[2]; // ok
   return !p;
}

This requirement, coupled with our use of the common folding machinery
makes pr97306 hard to fix, as we end up with an expression containing
the two different decls for 'l', and only the c++ FE knows how to
reconcile those.  I punted on this.

	gcc/cp/
	* cp-tree.h (struct language_function): Delete extern_decl_map.
	(DECL_LOCAL_DECL_ALIAS): New.
	* name-lookup.h (is_local_extern): Delete.
	* name-lookup.c (set_local_extern_decl_linkage): Replace with ...
	(push_local_extern_decl): ... this new function.
	(do_pushdecl): Call new function after pushing new decl.  Unhide
	hidden non-functions.
	(is_local_extern): Delete.
	* decl.c (layout_var_decl): Do not allow VLA local externs.
	* decl2.c (mark_used): Also mark DECL_LOCAL_DECL_ALIAS. Drop old
	local-extern treatment.
	* parser.c (cp_parser_oacc_declare): Deal with local extern aliases.
	* pt.c (tsubst_expr): Adjust local extern instantiation.
	* cp-gimplify.c (cp_genericize_r): Remap DECL_LOCAL_DECLs.
	gcc/testsuite/
	* g++.dg/cpp0x/lambda/lambda-sfinae1.C: Avoid ill-formed local extern
	* g++.dg/init/pr42844.C: Add expected error.
	* g++.dg/lookup/extern-redecl1.C: Likewise.
	* g++.dg/lookup/koenig15.C: Avoid ill-formed.
	* g++.dg/lto/pr95677.C: New.
	* g++.dg/other/nested-extern-1.C: Correct expected behaviour.
	* g++.dg/other/nested-extern-2.C: Likewise.
	* g++.dg/other/nested-extern.cc: Split ...
	* g++.dg/other/nested-extern-1.cc: ... here ...
	* g++.dg/other/nested-extern-2.cc: ... here.
	* g++.dg/template/scope5.C: Avoid ill-formed
	* g++.old-deja/g++.law/missed-error2.C: Allow extension.
	* g++.old-deja/g++.pt/crash3.C: Add expected error.
kraj pushed a commit that referenced this pull request Oct 12, 2020
Prevents the following UBSAN error:

./xgcc -B. /home/marxin/Programming/gcc/gcc/testsuite/g++.dg/torture/pr49770.C -O2 -c
/home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482:22: runtime error: load of value 2, which is not a valid value for type 'bool'
    #0 0x1fdb4d1 in modref_tree<int>::merge(modref_tree<int>*, vec<modref_parm_map, va_heap, vl_ptr>*) /home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482
    #1 0x1fcadaa in merge_call_side_effects(modref_summary*, gimple*, modref_summary*, bool) /home/marxin/Programming/gcc2/gcc/ipa-modref.c:511
    #2 0x1fcbadd in analyze_call /home/marxin/Programming/gcc2/gcc/ipa-modref.c:642
    #3 0x1fcc061 in analyze_stmt /home/marxin/Programming/gcc2/gcc/ipa-modref.c:732
    #4 0x1fccf31 in analyze_function /home/marxin/Programming/gcc2/gcc/ipa-modref.c:823
    #5 0x1fd17e5 in execute /home/marxin/Programming/gcc2/gcc/ipa-modref.c:1441
    #6 0x25cca6e in execute_one_pass(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2509
    #7 0x25cd39b in execute_pass_list_1 /home/marxin/Programming/gcc2/gcc/passes.c:2597
    #8 0x25cd450 in execute_pass_list_1 /home/marxin/Programming/gcc2/gcc/passes.c:2598
    #9 0x25cd4ee in execute_pass_list(function*, opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2608
    #10 0x25c7a5a in do_per_function_toporder(void (*)(function*, void*), void*) /home/marxin/Programming/gcc2/gcc/passes.c:1726
    #11 0x25cfa3f in execute_ipa_pass_list(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2941
    #12 0x173572d in ipa_passes /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2642
    #13 0x17364ee in symbol_table::compile() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2777
    #14 0x17372d9 in symbol_table::finalize_compilation_unit() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:3022
    #15 0x2a1f00a in compile_file /home/marxin/Programming/gcc2/gcc/toplev.c:485
    #16 0x2a27dc8 in do_compile /home/marxin/Programming/gcc2/gcc/toplev.c:2321
    #17 0x2a283cc in toplev::main(int, char**) /home/marxin/Programming/gcc2/gcc/toplev.c:2460
    #18 0x54f21cd in main /home/marxin/Programming/gcc2/gcc/main.c:39
    #19 0x7ffff6f0de09 in __libc_start_main ../csu/libc-start.c:314
    #20 0x9eac09 in _start (/home/marxin/Programming/gcc2/objdir/gcc/cc1plus+0x9eac09)

gcc/ChangeLog:

	* ipa-modref.c (merge_call_side_effects): Clear modref_parm_map
	fields in the vector.
kraj pushed a commit that referenced this pull request Oct 19, 2020
It fixes:

/home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482:22: runtime error: load of value 255, which is not a valid value for type 'bool'
    #0 0x18e5df3 in modref_tree<int>::merge(modref_tree<int>*, vec<modref_parm_map, va_heap, vl_ptr>*) /home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482
    #1 0x18dc180 in ipa_merge_modref_summary_after_inlining(cgraph_edge*) /home/marxin/Programming/gcc2/gcc/ipa-modref.c:1779
    #2 0x18c1c72 in inline_call(cgraph_edge*, bool, vec<cgraph_edge*, va_heap, vl_ptr>*, int*, bool, bool*) /home/marxin/Programming/gcc2/gcc/ipa-inline-transform.c:492
    #3 0x4a3589c in inline_small_functions /home/marxin/Programming/gcc2/gcc/ipa-inline.c:2216
    #4 0x4a3b230 in ipa_inline /home/marxin/Programming/gcc2/gcc/ipa-inline.c:2697
    #5 0x4a3d902 in execute /home/marxin/Programming/gcc2/gcc/ipa-inline.c:3096
    #6 0x1edf831 in execute_one_pass(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2509
    #7 0x1ee26af in execute_ipa_pass_list(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2936
    #8 0x103f31b in ipa_passes /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2700
    #9 0x103fb40 in symbol_table::compile() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2777
    #10 0x104092b in symbol_table::finalize_compilation_unit() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:3022
    #11 0x235723b in compile_file /home/marxin/Programming/gcc2/gcc/toplev.c:485
    #12 0x235fff9 in do_compile /home/marxin/Programming/gcc2/gcc/toplev.c:2321
    #13 0x23605fc in toplev::main(int, char**) /home/marxin/Programming/gcc2/gcc/toplev.c:2460
    #14 0x4e2b93b in main /home/marxin/Programming/gcc2/gcc/main.c:39
    #15 0x7ffff6f0ae09 in __libc_start_main ../csu/libc-start.c:314
    #16 0x9a0be9 in _start (/home/marxin/Programming/gcc2/objdir/gcc/cc1+0x9a0be9)

gcc/ChangeLog:

	* ipa-modref.c (compute_parm_map): Clear vector.
kraj pushed a commit that referenced this pull request Nov 2, 2020
Enable thumb1_gen_const_int to generate RTL or asm depending on the
context, so that we avoid duplicating code to handle constants in
Thumb-1 with -mpure-code.

Use a template so that the algorithm is effectively shared, and
rely on two classes to handle the actual emission as RTL or asm.

The generated sequence is improved to handle right-shiftable and small
values with less instructions. We now generate:

128:
        movs    r0, #128
264:
        movs    r3, #33
        lsls    r3, #3
510:
        movs    r3, #255
        lsls    r3, #1
512:
        movs    r3, #1
        lsls    r3, #9
764:
        movs    r3, #191
        lsls    r3, #2
65536:
        movs    r3, #1
        lsls    r3, #16
0x123456:
        movs    r3, #18 ;0x12
        lsls    r3, #8
        adds    r3, #52 ;0x34
        lsls    r3, #8
        adds    r3, #86 ;0x56
0x1123456:
        movs    r3, #137 ;0x89
        lsls    r3, #8
        adds    r3, #26 ;0x1a
        lsls    r3, #8
        adds    r3, #43 ;0x2b
        lsls    r3, #1
0x1000010:
        movs    r3, #16
        lsls    r3, #16
        adds    r3, #1
        lsls    r3, #4
0x1000011:
        movs    r3, #1
        lsls    r3, #24
        adds    r3, #17
-8192:
        movs    r3, #1
        lsls    r3, #13
        rsbs    r3, #0

The patch adds a testcase which does not fully exercise
thumb1_gen_const_int, as other existing patterns already catch small
constants.  These parts of thumb1_gen_const_int are used by
arm_thumb1_mi_thunk.

2020-11-02  Christophe Lyon  <christophe.lyon@linaro.org>

	gcc/
	* config/arm/arm.c (thumb1_const_rtl, thumb1_const_print): New
	classes.
	(thumb1_gen_const_int): Rename to ...
	(thumb1_gen_const_int_1): ... New helper function. Add capability
	to emit either RTL or asm, improve generated code.
	(thumb1_gen_const_int_rtl): New function.
	* config/arm/arm-protos.h (thumb1_gen_const_int): Rename to
	thumb1_gen_const_int_rtl.
	* config/arm/thumb1.md: Call thumb1_gen_const_int_rtl instead
	of thumb1_gen_const_int.

	gcc/testsuite/
	* gcc.target/arm/pure-code/no-literal-pool-m0.c: New.
kraj pushed a commit that referenced this pull request Nov 13, 2020
To access the "n - 100000"th element of "a" in this test, GCC will
generate the following code for msp430-elf with -mcpu=msp430x:

  RLAM.W  #1, R12
  MOV.W a-3392(R12), R12

Since there aren't actually 100,000 elements in a, the "a-3392"
offset calculated by the linker can overflow, as the address of
"a" can validly be less than 3392.

The relocations used for -mcpu=msp430 and -mlarge are not as strict and
the calculated value is allowed to wrap around the address space,
avoiding relocation overflows.
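
A hedged sketch of the access pattern (modeled on the commit text; the
array size is assumed):

int a[10];

int f (int n)
{
  /* With 2-byte ints, the element offset -200000 wraps to -3392 in 16
     bits, producing the "a-3392(R12)" operand shown above.  */
  return a[n - 100000];        // n is close to 100000 at run time
}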

gcc/testsuite/ChangeLog:

	* gcc.c-torture/execute/index-1.c: Skip for the default MSP430 430X ISA.
kraj pushed a commit that referenced this pull request Dec 5, 2020
…ddress

Fix an ICE with the handling of RTL expressions like:

(subreg:QI (mem/c:SI (plus:SI (plus:SI (mult:SI (reg/v:SI 0 %r0 [orig:67 i ] [67])
                    (const_int 4 [0x4]))
                (reg/v/f:SI 7 %r7 [orig:59 doacross ] [59]))
            (const_int 40 [0x28])) [1 MEM[(unsigned int *)doacross_63 + 40B + i_106 * 4]+0 S4 A32]) 0)

that causes the compilation of libgomp to fail:

during RTL pass: reload
.../libgomp/ordered.c: In function 'GOMP_doacross_wait':
.../libgomp/ordered.c:507:1: internal compiler error: in change_address_1, at emit-rtl.c:2275
  507 | }
      | ^
0x10a3462b change_address_1
	.../gcc/emit-rtl.c:2275
0x10a353a7 adjust_address_1(rtx_def*, machine_mode, poly_int<1u, long>, int, int, int, poly_int<1u, long>)
	.../gcc/emit-rtl.c:2409
0x10ae2993 alter_subreg(rtx_def**, bool)
	.../gcc/final.c:3368
0x10ae25cf cleanup_subreg_operands(rtx_insn*)
	.../gcc/final.c:3322
0x110922a3 reload(rtx_insn*, int)
	.../gcc/reload1.c:1232
0x10de2bf7 do_reload
	.../gcc/ira.c:5812
0x10de3377 execute
	.../gcc/ira.c:5986

in a `vax-netbsdelf' build, where an attempt is made to change the mode
of the contained memory reference to the mode of the containing SUBREG.
Such RTL expressions are produced by the VAX shift and rotate patterns
(`ashift', `ashiftrt', `rotate', `rotatert') where the count operand
always has the QI mode regardless of the mode, either SI or DI, of the
datum shifted or rotated.

Such a mode change cannot work where the memory reference uses the
indexed addressing mode, where a multiplier is implied that in the VAX
ISA depends on the width of the memory access requested and therefore
changing the machine mode would change the address calculation as well.

Avoid the attempt then by forcing the reload of any SUBREGs containing
a mode-dependent memory reference, also fixing these regressions:

FAIL: gcc.c-torture/compile/pr46883.c   -Os  (internal compiler error)
FAIL: gcc.c-torture/compile/pr46883.c   -Os  (test for excess errors)
FAIL: gcc.c-torture/execute/20120808-1.c   -O2  (internal compiler error)
FAIL: gcc.c-torture/execute/20120808-1.c   -O2  (test for excess errors)
FAIL: gcc.c-torture/execute/20120808-1.c   -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions  (internal compiler error)
FAIL: gcc.c-torture/execute/20120808-1.c   -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions  (test for excess errors)
FAIL: gcc.c-torture/execute/20120808-1.c   -O3 -g  (internal compiler error)
FAIL: gcc.c-torture/execute/20120808-1.c   -O3 -g  (test for excess errors)
FAIL: gcc.c-torture/execute/20120808-1.c   -O2 -flto -fno-use-linker-plugin -flto-partition=none  (internal compiler error)
FAIL: gcc.c-torture/execute/20120808-1.c   -O2 -flto -fno-use-linker-plugin -flto-partition=none  (test for excess errors)
FAIL: gcc.c-torture/execute/20120808-1.c   -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects  (internal compiler error)
FAIL: gcc.c-torture/execute/20120808-1.c   -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects  (test for excess errors)
FAIL: gcc.dg/20050629-1.c (internal compiler error)
FAIL: gcc.dg/20050629-1.c (test for excess errors)
FAIL: c-c++-common/torture/pr53505.c   -Os  (internal compiler error)
FAIL: c-c++-common/torture/pr53505.c   -Os  (test for excess errors)
FAIL: gfortran.dg/coarray_failed_images_1.f08   -Os  (internal compiler error)
FAIL: gfortran.dg/coarray_stopped_images_1.f08   -Os  (internal compiler error)

With test case #0 included it causes a reload with:

(insn 15 14 16 4 (set (reg:SI 31)
        (ashift:SI (const_int 1 [0x1])
            (subreg:QI (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ]) 0))) "pr58901-0.c":15:12 94 {ashlsi3}
     (expr_list:REG_DEAD (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ])
        (nil)))

as follows:

Reloads for insn # 15
Reload 0: reload_in (SI) = (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ])
	ALL_REGS, RELOAD_FOR_INPUT (opnum = 2)
	reload_in_reg: (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ])
	reload_reg_rtx: (reg:SI 5 %r5)

resulting in:

(insn 37 14 15 4 (set (reg:SI 5 %r5)
        (mem/c:SI (plus:SI (plus:SI (mult:SI (reg/v:SI 1 %r1 [orig:25 i ] [25])
                        (const_int 4 [0x4]))
                    (reg/v/f:SI 4 %r4 [orig:29 s ] [29]))
                (const_int 4 [0x4])) [1 MEM[(int *)s_8(D) + 4B + _5 * 4]+0 S4 A32])) "pr58901-0.c":15:12 12 {movsi_2}
     (nil))
(insn 15 37 16 4 (set (reg:SI 2 %r2 [31])
        (ashift:SI (const_int 1 [0x1])
            (reg:QI 5 %r5))) "pr58901-0.c":15:12 94 {ashlsi3}
     (nil))

and assembly like:

.L3:
	movl 4(%r4)[%r1],%r5
	ashl %r5,$1,%r2
	xorl2 %r2,%r0
	incl %r1
	cmpl %r1,%r3
	jneq .L3

produced for the loop, providing optimization has been enabled.

Likewise with test case #1 the reload of:

(insn 17 16 18 4 (set (reg:SI 34)
        (and:SI (subreg:SI (reg/v:DI 27 [ t ]) 4)
            (const_int 1 [0x1]))) "pr58901-1.c":18:20 77 {*andsi_const_int}
     (expr_list:REG_DEAD (reg/v:DI 27 [ t ])
        (nil)))

is as follows:

Reloads for insn # 17
Reload 0: reload_in (DI) = (reg/v:DI 27 [ t ])
	reload_out (SI) = (reg:SI 2 %r2 [34])
	ALL_REGS, RELOAD_OTHER (opnum = 0)
	reload_in_reg: (reg/v:DI 27 [ t ])
	reload_out_reg: (reg:SI 2 %r2 [34])
	reload_reg_rtx: (reg:DI 4 %r4)

resulting in:

(insn 40 16 17 4 (set (reg:DI 4 %r4)
        (mem/c:DI (plus:SI (mult:SI (reg/v:SI 1 %r1 [orig:26 i ] [26])
                    (const_int 8 [0x8]))
                (reg/v/f:SI 3 %r3 [orig:30 s ] [30])) [1 MEM[(const struct s *)s_13(D) + _7 * 8]+0 S8 A32])) "pr58901-1.c":18:20 11 {movdi}
     (nil))
(insn 17 40 41 4 (set (reg:SI 4 %r4)
        (and:SI (reg:SI 5 %r5 [+4 ])
            (const_int 1 [0x1]))) "pr58901-1.c":18:20 77 {*andsi_const_int}
     (nil))

and assembly like:

.L3:
	movq (%r3)[%r1],%r4
	bicl3 $-2,%r5,%r4
	addl2 %r4,%r0
	jaoblss %r0,%r1,.L3

First posted at: <https://gcc.gnu.org/ml/gcc/2014-06/msg00060.html>.

2020-12-05  Matt Thomas  <matt@3am-software.com>
	    Maciej W. Rozycki  <macro@linux-mips.org>

	gcc/
	PR target/58901
	* reload.c (push_reload): Also reload the inner expression of a
	SUBREG for pseudos associated with a mode-dependent memory
	reference.
	(find_reloads): Force a reload likewise.

2020-12-05  Maciej W. Rozycki  <macro@linux-mips.org>

	gcc/testsuite/
	PR target/58901
	* gcc.c-torture/compile/pr58901-0.c: New test.
	* gcc.c-torture/compile/pr58901-1.c: New test.
kraj pushed a commit that referenced this pull request Feb 23, 2021
/home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_value.cpp:77:25: runtime error: left shift of 0x0000000000000000fffffffffffffffb by 96 places cannot be represented in type '__int128'
    #0 0x7ffff754edfe in __ubsan::Value::getSIntValue() const /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_value.cpp:77
    #1 0x7ffff7548719 in __ubsan::Value::isNegative() const /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_value.h:190
    #2 0x7ffff7542a34 in handleShiftOutOfBoundsImpl /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_handlers.cpp:338
    #3 0x7ffff75431b7 in __ubsan_handle_shift_out_of_bounds /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_handlers.cpp:370
    #4 0x40067f in main (/home/marxin/Programming/testcases/a.out+0x40067f)
    #5 0x7ffff72c8b24 in __libc_start_main (/lib64/libc.so.6+0x27b24)
    #6 0x4005bd in _start (/home/marxin/Programming/testcases/a.out+0x4005bd)

Differential Revision: https://reviews.llvm.org/D97263

Cherry-pick from 16ede09.
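
A hedged reproducer consistent with the trace (the reported value
0x...fffffffffffffffb is -5 as a 128-bit integer; options assumed):

/* Build with -fsanitize=shift; reporting the out-of-range shift used
   to trigger UB inside libubsan itself while reconstructing the
   128-bit operand.  */
__int128 do_shift (__int128 v, int s)
{
  return v << s;
}

int main ()
{
  return (int) do_shift (-5, 130);
}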
kraj pushed a commit that referenced this pull request Dec 12, 2024
On Cortex-M4, the code generated is:
     cmp     r0, r1
     itte    ne
     lslne   r0, r0, r1
     asrne   r0, r0, #1
     moveq   r0, r1
     add     r0, r0, r1
     bx      lr

On Cortex-M7, the code generated is:
     cmp     r0, r1
     beq     .L3
     lsls    r0, r0, r1
     asrs    r0, r0, #1
     add     r0, r0, r1
     bx      lr
.L3:
     mov     r0, r1
     add     r0, r0, r1
     bx      lr

As Cortex-M7 only allows at most one conditional instruction, force
Cortex-M4 tuning to get a stable test case.

gcc/testsuite/ChangeLog:

	* gcc.target/arm/thumb-ifcvt.c: Use -mtune=cortex-m4.

Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
kraj pushed a commit that referenced this pull request Dec 17, 2024
This crash started with my r12-7803 but I believe the problem lies
elsewhere.

build_vec_init has cleanup_flags whose purpose is -- if I grok this
correctly -- to avoid destructing an object multiple times.  Let's
say we are initializing an array of A.  Then we might end up in
a scenario similar to initlist-eh1.C:

  try
    {
      call A::A in a loop
      // #0
      try
        {
	  call a fn using the array
	}
      finally
	{
	  // #1
	  call A::~A in a loop
	}
    }
  catch
    {
      // #2
      call A::~A in a loop
    }

cleanup_flags makes us emit a statement like

  D.3048 = 2;

at #0 to disable performing the cleanup at #2, since #1 will take
care of the destruction of the array.

But if we are not emitting the loop because we can use a constant
initializer (and use a single { a, b, ...}), we shouldn't generate
the statement resetting the iterator to its initial value.  Otherwise
we crash in gimplify_var_or_parm_decl because it gets the stray decl
D.3048.

	PR c++/117985

gcc/cp/ChangeLog:

	* init.cc (build_vec_init): Pop CLEANUP_FLAGS if we're not
	generating the loop.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/initlist-array23.C: New test.
	* g++.dg/cpp0x/initlist-array24.C: New test.
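
A hedged sketch of the failing shape (hypothetical, modeled on the description
above; the committed initlist-array tests may differ):

/* The constexpr ctor allows a constant initializer, so no initialization
   loop is emitted; the non-trivial dtor is what creates the cleanup flags
   in the first place, so no D.3048-style reset may be emitted either.  */
struct A
{
  int i;
  constexpr A (int i) : i (i) {}
  ~A ();
};

void
f ()
{
  A arr[2] = { A (1), A (2) };
}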
kraj pushed a commit that referenced this pull request Jan 7, 2025
This patch removes the AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS tunable and
use_new_vector_costs entry in aarch64-tuning-flags.def and makes the
AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS paths in the backend the
default. To that end, the function aarch64_use_new_vector_costs_p and its uses
were removed. To prevent costing vec_to_scalar operations with 0, as
described in
https://gcc.gnu.org/pipermail/gcc-patches/2024-October/665481.html,
we adjusted vectorizable_store such that the variable n_adjacent_stores
also covers vec_to_scalar operations. This way vec_to_scalar operations
are not costed individually, but as a group.
As suggested by Richard Sandiford, the "known_ne" in the multilane-check
was replaced by "maybe_ne" in order to treat nunits==1+1X as a vector
rather than a scalar.

Two tests were adjusted due to changes in codegen. In both cases, the
old code performed loop unrolling once, but the new code does not:
Example from gcc.target/aarch64/sve/strided_load_2.c (compiled with
-O2 -ftree-vectorize -march=armv8.2-a+sve -mtune=generic -moverride=tune=none):
f_int64_t_32:
        cbz     w3, .L92
        mov     x4, 0
        uxtw    x3, w3
+       cntd    x5
+       whilelo p7.d, xzr, x3
+       mov     z29.s, w5
        mov     z31.s, w2
-       whilelo p6.d, xzr, x3
-       mov     x2, x3
-       index   z30.s, #0, #1
-       uqdecd  x2
-       ptrue   p5.b, all
-       whilelo p7.d, xzr, x2
+       index   z30.d, #0, #1
+       ptrue   p6.b, all
        .p2align 3,,7
 .L94:
-       ld1d    z27.d, p7/z, [x0, #1, mul vl]
-       ld1d    z28.d, p6/z, [x0]
-       movprfx z29, z31
-       mul     z29.s, p5/m, z29.s, z30.s
-       incw    x4
-       uunpklo z0.d, z29.s
-       uunpkhi z29.d, z29.s
-       ld1d    z25.d, p6/z, [x1, z0.d, lsl 3]
-       ld1d    z26.d, p7/z, [x1, z29.d, lsl 3]
-       add     z25.d, z28.d, z25.d
+       ld1d    z27.d, p7/z, [x0, x4, lsl 3]
+       movprfx z28, z31
+       mul     z28.s, p6/m, z28.s, z30.s
+       ld1d    z26.d, p7/z, [x1, z28.d, uxtw 3]
        add     z26.d, z27.d, z26.d
-       st1d    z26.d, p7, [x0, #1, mul vl]
-       whilelo p7.d, x4, x2
-       st1d    z25.d, p6, [x0]
-       incw    z30.s
-       incb    x0, all, mul #2
-       whilelo p6.d, x4, x3
+       st1d    z26.d, p7, [x0, x4, lsl 3]
+       add     z30.s, z30.s, z29.s
+       incd    x4
+       whilelo p7.d, x4, x3
        b.any   .L94
 .L92:
        ret

Example from gcc.target/aarch64/sve/strided_store_2.c (compiled with
-O2 -ftree-vectorize -march=armv8.2-a+sve -mtune=generic -moverride=tune=none):
f_int64_t_32:
        cbz     w3, .L84
-       addvl   x5, x1, #1
        mov     x4, 0
        uxtw    x3, w3
-       mov     z31.s, w2
+       cntd    x5
        whilelo p7.d, xzr, x3
-       mov     x2, x3
-       index   z30.s, #0, #1
-       uqdecd  x2
-       ptrue   p5.b, all
-       whilelo p6.d, xzr, x2
+       mov     z29.s, w5
+       mov     z31.s, w2
+       index   z30.d, #0, #1
+       ptrue   p6.b, all
        .p2align 3,,7
 .L86:
-       ld1d    z28.d, p7/z, [x1, x4, lsl 3]
-       ld1d    z27.d, p6/z, [x5, x4, lsl 3]
-       movprfx z29, z30
-       mul     z29.s, p5/m, z29.s, z31.s
-       add     z28.d, z28.d, #1
-       uunpklo z26.d, z29.s
-       st1d    z28.d, p7, [x0, z26.d, lsl 3]
-       incw    x4
-       uunpkhi z29.d, z29.s
+       ld1d    z27.d, p7/z, [x1, x4, lsl 3]
+       movprfx z28, z30
+       mul     z28.s, p6/m, z28.s, z31.s
        add     z27.d, z27.d, #1
-       whilelo p6.d, x4, x2
-       st1d    z27.d, p7, [x0, z29.d, lsl 3]
-       incw    z30.s
+       st1d    z27.d, p7, [x0, z28.d, uxtw 3]
+       incd    x4
+       add     z30.s, z30.s, z29.s
        whilelo p7.d, x4, x3
        b.any   .L86
 .L84:
	ret

The patch was bootstrapped and tested on aarch64-linux-gnu, no
regression.
OK for mainline?

Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>

gcc/
	* tree-vect-stmts.cc (vectorizable_store): Extend the use of
	n_adjacent_stores to also cover vec_to_scalar operations.
	* config/aarch64/aarch64-tuning-flags.def: Remove
	use_new_vector_costs as tuning option.
	* config/aarch64/aarch64.cc (aarch64_use_new_vector_costs_p):
	Remove.
	(aarch64_vector_costs::add_stmt_cost): Remove use of
	aarch64_use_new_vector_costs_p.
	(aarch64_vector_costs::finish_cost): Remove use of
	aarch64_use_new_vector_costs_p.
	* config/aarch64/tuning_models/cortexx925.h: Remove
	AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS.
	* config/aarch64/tuning_models/fujitsu_monaka.h: Likewise.
	* config/aarch64/tuning_models/generic_armv8_a.h: Likewise.
	* config/aarch64/tuning_models/generic_armv9_a.h: Likewise.
	* config/aarch64/tuning_models/neoverse512tvb.h: Likewise.
	* config/aarch64/tuning_models/neoversen2.h: Likewise.
	* config/aarch64/tuning_models/neoversen3.h: Likewise.
	* config/aarch64/tuning_models/neoversev1.h: Likewise.
	* config/aarch64/tuning_models/neoversev2.h: Likewise.
	* config/aarch64/tuning_models/neoversev3.h: Likewise.
	* config/aarch64/tuning_models/neoversev3ae.h: Likewise.

gcc/testsuite/
	* gcc.target/aarch64/sve/strided_load_2.c: Adjust expected outcome.
	* gcc.target/aarch64/sve/strided_store_2.c: Likewise.
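
For context, a sketch of the kernel behind f_int64_t_32, reconstructed from
the assembly above (an assumption; the actual strided_load_2.c body may differ):

void
f_int64_t_32 (long long *dst, long long *src, int stride, int n)
{
  for (int i = 0; i < n; ++i)
    dst[i] += src[i * stride];   /* gather load with a runtime stride */
}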
kraj pushed a commit that referenced this pull request Jan 9, 2025
This crash started with my r12-7803 but I believe the problem lies
elsewhere.

build_vec_init has cleanup_flags whose purpose is -- if I grok this
correctly -- to avoid destructing an object multiple times.  Let's
say we are initializing an array of A.  Then we might end up in
a scenario similar to initlist-eh1.C:

  try
    {
      call A::A in a loop
      // #0
      try
        {
	  call a fn using the array
	}
      finally
	{
	  // #1
	  call A::~A in a loop
	}
    }
  catch
    {
      // #2
      call A::~A in a loop
    }

cleanup_flags makes us emit a statement like

  D.3048 = 2;

at #0 to disable performing the cleanup at #2, since #1 will take
care of the destruction of the array.

But if we are not emitting the loop because we can use a constant
initializer (and use a single { a, b, ...}), we shouldn't generate
the statement resetting the iterator to its initial value.  Otherwise
we crash in gimplify_var_or_parm_decl because it gets the stray decl
D.3048.

	PR c++/117985

gcc/cp/ChangeLog:

	* init.cc (build_vec_init): Pop CLEANUP_FLAGS if we're not
	generating the loop.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/initlist-array23.C: New test.
	* g++.dg/cpp0x/initlist-array24.C: New test.

(cherry picked from commit 40e5636)
kraj pushed a commit that referenced this pull request Jan 10, 2025
This code in cxx_eval_array_reference has been hard to get right.
In r12-2304 I added some code; in r13-5693 I removed some of it.

Here the problematic line is "S s = arr[0];" which causes a crash
on the assert in verify_ctor_sanity:

  gcc_assert (!ctx->object || !DECL_P (ctx->object)
              || ctx->global->get_value (ctx->object) == ctx->ctor);

ctx->object is the VAR_DECL 's', which is correct here.  The second
line points to the problem: we replaced ctx->ctor in
cxx_eval_array_reference:

  new_ctx.ctor = build_constructor (elem_type, NULL); // #1

which I think we shouldn't have; the CONSTRUCTOR we created in
cxx_eval_constant_expression/DECL_EXPR

  new_ctx.ctor = build_constructor (TREE_TYPE (r), NULL);

had the right type.

We still need #1 though.  E.g., in constexpr-96241.C, we never
set ctx.ctor/object before calling cxx_eval_array_reference, so
we have to build a CONSTRUCTOR there.  And in constexpr-101371-2.C
we have a ctx.ctor, but it has the wrong type, so we need a new one.

We can fix the problem by always clearing the object, and, as an
optimization, only create/free a new ctor when actually needed.

	PR c++/110382

gcc/cp/ChangeLog:

	* constexpr.cc (cxx_eval_array_reference): Create a new constructor
	only when we don't already have a matching one.  Clear the object
	when the type is non-scalar.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp1y/constexpr-110382.C: New test.

(cherry picked from commit 6e424fe)
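
A hedged sketch of the problematic pattern, built around the quoted line
"S s = arr[0];" (hypothetical; the committed constexpr-110382.C may differ):

struct S { int i; };
constexpr S arr[] = { { 1 }, { 2 } };

constexpr int
f ()
{
  S s = arr[0];   // evaluating arr[0] used to replace ctx->ctor and
                  // trip the verify_ctor_sanity assert
  return s.i;
}
static_assert (f () == 1, "");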
kraj pushed a commit that referenced this pull request Jan 10, 2025
We evaluate constexpr functions on the original, pre-genericization bodies.
That means that the function body we're evaluating will not have gone
through cp_genericize_r's "Map block scope extern declarations to visible
declarations with the same name and type in outer scopes if any".  Here:

  constexpr bool bar() { return true; } // #1
  constexpr bool foo() {
    constexpr bool bar(void); // #2
    return bar();
  }

it means that we:
1) register_constexpr_fundef (#1)
2) cp_genericize (#1)
   nothing interesting happens
3) register_constexpr_fundef (foo)
   does copy_fn, so we have two copies of the BIND_EXPR
4) cp_genericize (foo)
   this remaps #2 to #1, but only on one copy of the BIND_EXPR
5) retrieve_constexpr_fundef (foo)
   we find it, no problem
6) retrieve_constexpr_fundef (#2)
   and here #2 isn't found in constexpr_fundef_table, because
   we're working on the BIND_EXPR copy where #2 wasn't mapped to #1
   so we fail.  We've only registered #1.

It should work to use DECL_LOCAL_DECL_ALIAS (which used to be
extern_decl_map).  We evaluate constexpr functions on pre-cp_fold
bodies to avoid diagnostic problems, but the remapping I'm proposing
should not interfere with diagnostics.

This is not a problem for a global scope redeclaration; there we go
through duplicate_decls which keeps the DECL_UID:
  DECL_UID (olddecl) = olddecl_uid;
and DECL_UID is what constexpr_fundef_hasher::hash uses.

	PR c++/111132

gcc/cp/ChangeLog:

	* constexpr.cc (get_function_named_in_call): Use
	cp_get_fndecl_from_callee.
	* cvt.cc (cp_get_fndecl_from_callee): If there's a
	DECL_LOCAL_DECL_ALIAS, use it.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/constexpr-redeclaration3.C: New test.
	* g++.dg/cpp0x/constexpr-redeclaration4.C: New test.

(cherry picked from commit 8c90638)
kraj pushed a commit that referenced this pull request Feb 8, 2025
In a member-specification of a class, a noexcept-specifier is
a complete-class context.  Thus we delay parsing until the end of
the class via our DEFERRED_PARSE mechanism; see cp_parser_save_noexcept
and cp_parser_late_noexcept_specifier.

We also attempt to defer instantiation of noexcept-specifiers in order
to reduce the number of instantiations; this is done via DEFERRED_NOEXCEPT.

We can even have both, as in noexcept65.C: a DEFERRED_PARSE wrapped in
DEFERRED_NOEXCEPT, which uses the DEFPARSE_INSTANTIATIONS mechanism.
noexcept65.C works, because when we really need the noexcept, which is
when parsing the body of S::A::A(), the noexcept will have been parsed
already; noexcepts are parsed before the bodies of member functions.

But in this test we have:

  struct A {
      int x;
      template<class>
      void foo() noexcept(noexcept(x)) {}
      auto bar() -> decltype(foo<int>()) {} // #1
  };

and I think the decltype in #1 needs the unparsed noexcept before it
could have been parsed.  clang++ rejects the test and I suppose we
should reject it as well, rather than crashing on a DEFERRED_PARSE
in tsubst_expr.

	PR c++/117106
	PR c++/118190

gcc/cp/ChangeLog:

	* pt.cc (maybe_instantiate_noexcept): Give an error if the noexcept
	hasn't been parsed yet.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/noexcept89.C: New test.
	* g++.dg/cpp0x/noexcept90.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
kraj pushed a commit that referenced this pull request Mar 26, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I
did not go compile something that old, and identified this change via
git blame, so might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v; v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, that computes the address of
the COND_EXPR using build_address to build the representation of
  (true ? get (v) : get (v)).*(&Foo::x);
and gets something like
  &(true ? get (v) : get (v))  // #1
instead of
  (true ? &get (v) : &get (v)) // #2
and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to
cp_build_addr_expr, which gives #2, which is properly handled.

	PR c++/114525

gcc/cp/ChangeLog:

	* typeck2.cc (build_m_component_ref): Call cp_build_addr_expr
	instead of build_address.

gcc/testsuite/ChangeLog:

	* g++.dg/expr/cond18.C: New test.
kraj pushed a commit that referenced this pull request Mar 31, 2025
Here we instantiate the lambda three times in producing A<0>::f:
1) in tsubst_function_type, substituting the type of A<>::f
2) in tsubst_function_decl, substituting the parameters of A<>::f
3) in regenerate_decl_from_template when instantiating A<>::f

The first one gets thrown away by maybe_rebuild_function_decl_type.  Before
r15-7202, we happily built all of them and mangled the result wrongly as
lambda #3.  After r15-7202, we try to mangle #3 as #1, which breaks because
#1 is already mangled as #1.

This patch avoids building #3 by suppressing regenerate_decl_from_template
if the template signature includes a lambda, fixing the ICE.

We now mangle the lambda as #2, which is still wrong.  Addressing that
should involve not calling tsubst_function_type from tsubst_function_decl,
and building the type from the parms types in the first place rather than
fixing it up in maybe_rebuild_function_decl_type.

	PR c++/119401

gcc/cp/ChangeLog:

	* pt.cc (regenerate_decl_from_template): Don't regenerate if the
	signature involves a lambda.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/lambda-targ11.C: New test.
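
A purely illustrative sketch of the shape involved, with a lambda among the
template arguments of A (hypothetical; the committed lambda-targ11.C may differ):

// The closure type of L is part of A<0>::f's enclosing template arguments,
// so each substitution path described above sees the lambda again.
template <int N = 0, auto L = [] {}>
struct A
{
  static void f () {}
};

int
main ()
{
  A<0>::f ();
}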
kraj pushed a commit that referenced this pull request Apr 14, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I
did not go compile something that old, and identified this change via
git blame, so might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v; v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, that computes the address of
the COND_EXPR using build_address to build the representation of
  (true ? get (v) : get (v)).*(&Foo::x);
and gets something like
  &(true ? get (v) : get (v))  // #1
instead of
  (true ? &get (v) : &get (v)) // #2
and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to
cp_build_addr_expr, which gives #2, which is properly handled.

	PR c++/114525

gcc/cp/ChangeLog:

	* typeck2.cc (build_m_component_ref): Call cp_build_addr_expr
	instead of build_address.

gcc/testsuite/ChangeLog:

	* g++.dg/expr/cond18.C: New test.

(cherry picked from commit 35ce9af)
kraj pushed a commit that referenced this pull request May 10, 2025
This patch fixes some of the problems with costing in the scalar-to-vector pass.
In particular
 1) the pass uses optimize_insn_for_size which is intended to be used by
    expanders and splitters and requires the optimization pass to use
    set_rtl_profile (bb) for the currently processed bb.
    This is not done, so we get random stale info about the hotness of insns.
 2) register allocator move costs are all relative to the integer reg-reg move,
    which has a cost of 2, so it is (except for the size tables and i386)
    the latency of the instruction multiplied by 2.
    These costs have been duplicated and are now used in combination with
    rtx costs, which are all based on COSTS_N_INSNS, which multiplies latency
    by 4.
    Some of the vectorizer costing contains COSTS_N_INSNS (move_cost) / 2
    to compensate, but some new code does not.  This patch adds compensation.

    Perhaps we should update the cost tables to use COSTS_N_INSNS everywhere,
    but I think we want to first fix the inconsistencies.  Also the tables will
    get optically much longer, since we have many move costs and COSTS_N_INSNS
    is a lot of characters.
 3) the variable m, which decides how much to multiply the integer variant
    (to account for the fact that with -m32 all 64-bit computations need 2
    instructions), is declared unsigned, which makes the signed computation
    of the instruction gain happen in an unsigned type and breaks e.g. for
    division.
 4) I added integer_to_sse costs, which are currently all a duplication of
    sse_to_integer.  AMD chips are asymmetric and moving in one direction is
    faster than in the other.  I will change the costs incrementally once the
    vectorizer part is fixed up, too.

There are two failures: gcc.target/i386/minmax-6.c and gcc.target/i386/minmax-7.c.
Both test STV on Haswell, which no longer happens since SSE->INT and INT->SSE moves
are now more expensive.

There is only one instruction to convert:

Computing gain for chain #1...
  Instruction gain 8 for    11: {r110:SI=smax(r116:SI,0);clobber flags:CC;}
  Instruction conversion gain: 8
  Registers conversion cost: 8    <- this is integer_to_sse and sse_to_integer
  Total gain: 0

The total gain used to be 4, since the patch doubles the conversion costs.
According to Agner Fog's tables the cost should be 1 cycle, which is correct
here.

The final code generated is:

	vmovd	%esi, %xmm0         * latency 1
	cmpl	%edx, %esi
	je	.L2
	vpxor	%xmm1, %xmm1, %xmm1 * latency 1
	vpmaxsd	%xmm1, %xmm0, %xmm0 * latency 1
	vmovd	%xmm0, %eax         * latency 1
	imull	%edx, %eax
	cltq
	movzwl	(%rdi,%rax,2), %eax
	ret

	cmpl	%edx, %esi
	je	.L2
	xorl	%eax, %eax          * latency 1
	testl	%esi, %esi          * latency 1
	cmovs	%eax, %esi          * latency 2
	imull	%edx, %esi
	movslq	%esi, %rsi
	movzwl	(%rdi,%rsi,2), %eax
	ret

The instructions annotated with latency info are the ones that really differ.
So the unconverted code has a latency sum of 4 and a real latency of 3.
The converted code has a latency sum of 4 and a real latency of 3
(vmovd+vpmaxsd+vmovd).  So I do not quite see how it should be a win.

There is also a bug in costing MIN/MAX

	    case ABS:
	    case SMAX:
	    case SMIN:
	    case UMAX:
	    case UMIN:
	      /* We do not have any conditional move cost, estimate it as a
		 reg-reg move.  Comparisons are costed as adds.  */
	      igain += m * (COSTS_N_INSNS (2) + ix86_cost->add);
	      /* Integer SSE ops are all costed the same.  */
	      igain -= ix86_cost->sse_op;
	      break;

Now COSTS_N_INSNS (2) is not quite right, since a reg-reg move should be 1 or perhaps 0.
For Haswell, cmov really is 2 cycles, but I guess we want to have that in the cost
vectors like for all other instructions.

I am not sure if this is really a win in this case (the other minmax testcases seem to
make sense).  I have xfailed it for now and will check if that affects specs on LNT testers.

I will proceed with similar fixes on the vectorizer cost side.  Sadly those introduce
quite some differences in the testsuite (partly triggered by other costing problems,
such as the one for scatter/gather).

gcc/ChangeLog:

	* config/i386/i386-features.cc
	(general_scalar_chain::vector_const_cost): Add BB parameter; handle
	size costs; use COSTS_N_INSNS to compute move costs.
	(general_scalar_chain::compute_convert_gain): Use optimize_bb_for_size
	instead of optimize_insn_for size; use COSTS_N_INSNS to compute move costs;
	update calls of general_scalar_chain::vector_const_cost; use
	ix86_cost->integer_to_sse.
	(timode_immed_const_gain): Add bb parameter; use
	optimize_bb_for_size_p.
	(timode_scalar_chain::compute_convert_gain): Use optimize_bb_for_size_p.
	* config/i386/i386-features.h (class general_scalar_chain): Update
	prototype of vector_const_cost.
	* config/i386/i386.h (struct processor_costs): Add integer_to_sse.
	* config/i386/x86-tune-costs.h (struct processor_costs): Copy
	sse_to_integer to integer_to_sse everywhere.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/minmax-6.c: xfail test that pmax is used.
	* gcc.target/i386/minmax-7.c: xfail test that pmin is used.
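
For context, a sketch of the minmax-6.c kernel, reconstructed from the
assembly above (an assumption; the real test may differ):

/* The smax (x, 0) is the single STV candidate discussed above.  */
unsigned short
f (unsigned short *p, int x, int y)
{
  if (x != y)
    x = x > 0 ? x : 0;   /* smax (x, 0) */
  return p[x * y];
}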
kraj pushed a commit that referenced this pull request May 12, 2025
The test was designed to pass with thumb2, but code generation changed
with the introduction of Low Overhead Loops, so the test can fail if
one overrides the flags when running the testsuite.

In addition, useless subtract / extension instructions require -O2 to
remove them (-O is not sufficient), so replace -O with -O2 in
dg-options.

arm_thumb2_ok_no_arm_v8_1m_lob does not do what the test needs (it can
fail because some flags conflict, rather than because lob are
supported, and we do not need to check runtime support in this test
anyway), so the patch reverts back to arm_thumb2_ok.

Finally, replace the scan-assembler directives with
check-function-bodies, checking both types of code generation (with
and without LOL).  Depending on architecture version, the two insns
    and     r0, r1, r0, lsr #1
    ands    r3, r3, #255
can be swapped, so accept both orders.

gcc/testsuite/ChangeLog:

	PR target/116445
	* gcc.target/arm/unsigned-extend-2.c: Fix dg directives.
kraj pushed a commit that referenced this pull request Jun 9, 2025
This patch adds a new param vect-scalar-cost-multiplier to scale the scalar
costing during vectorization.  If the cost is set high enough, then when using
the dynamic cost model it effectively disables the costing vs. scalar and
assumes all vectorization to be profitable.

This is similar to using the unlimited cost model, but unlike unlimited it
does not fully disable the vector cost model.  That means that we still
perform comparisons between vector modes.  And it means it also still does
costing for alias analysis.

As an example, the following:

void
foo (char *restrict a, int *restrict b, int *restrict c,
     int *restrict d, int stride)
{
    if (stride <= 1)
        return;

    for (int i = 0; i < 3; i++)
        {
            int res = c[i];
            int t = b[i * stride];
            if (a[i] != 0)
                res = t * d[i];
            c[i] = res;
        }
}

compiled with -O3 -march=armv8-a+sve -fvect-cost-model=dynamic fails to
vectorize as it assumes scalar would be faster, and with
-fvect-cost-model=unlimited it picks a vector type that's so big that the large
sequence generated is working on mostly inactive lanes:

        ...
        and     p3.b, p3/z, p4.b, p4.b
        whilelo p0.s, wzr, w7
        ld1w    z23.s, p3/z, [x3, #3, mul vl]
        ld1w    z28.s, p0/z, [x5, z31.s, sxtw 2]
        add     x0, x5, x0
        punpklo p6.h, p6.b
        ld1w    z27.s, p4/z, [x0, z31.s, sxtw 2]
        and     p6.b, p6/z, p0.b, p0.b
        punpklo p4.h, p7.b
        ld1w    z24.s, p6/z, [x3, #2, mul vl]
        and     p4.b, p4/z, p2.b, p2.b
        uqdecw  w6
        ld1w    z26.s, p4/z, [x3]
        whilelo p1.s, wzr, w6
        mul     z27.s, p5/m, z27.s, z23.s
        ld1w    z29.s, p1/z, [x4, z31.s, sxtw 2]
        punpkhi p7.h, p7.b
        mul     z24.s, p5/m, z24.s, z28.s
        and     p7.b, p7/z, p1.b, p1.b
        mul     z26.s, p5/m, z26.s, z30.s
        ld1w    z25.s, p7/z, [x3, #1, mul vl]
        st1w    z27.s, p3, [x2, #3, mul vl]
        mul     z25.s, p5/m, z25.s, z29.s
        st1w    z24.s, p6, [x2, #2, mul vl]
        st1w    z25.s, p7, [x2, #1, mul vl]
        st1w    z26.s, p4, [x2]
        ...

With -fvect-cost-model=dynamic --param vect-scalar-cost-multiplier=200
you get more reasonable code:

foo:
        cmp     w4, 1
        ble     .L1
        ptrue   p7.s, vl3
        index   z0.s, #0, w4
        ld1b    z29.s, p7/z, [x0]
        ld1w    z30.s, p7/z, [x1, z0.s, sxtw 2]
	ptrue   p6.b, all
        cmpne   p7.b, p7/z, z29.b, #0
        ld1w    z31.s, p7/z, [x3]
	mul     z31.s, p6/m, z31.s, z30.s
        st1w    z31.s, p7, [x2]
.L1:
        ret

This model has been useful internally for performance exploration and cost-model
validation.  It allows us to force realistic vectorization overriding the cost
model, to be able to tell whether it's correct wrt profitability.

gcc/ChangeLog:

	* params.opt (vect-scalar-cost-multiplier): New.
	* tree-vect-loop.cc (vect_estimate_min_profitable_iters): Use it.
	* doc/invoke.texi (vect-scalar-cost-multiplier): Document it.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/sve/cost_model_16.c: New test.
kraj pushed a commit that referenced this pull request Jun 13, 2025
…o_debug_section [PR116614]

cat abc.C
  #define A(n) struct T##n {} t##n;
  #define B(n) A(n##0) A(n##1) A(n##2) A(n##3) A(n##4) A(n##5) A(n##6) A(n##7) A(n##8) A(n##9)
  #define C(n) B(n##0) B(n##1) B(n##2) B(n##3) B(n##4) B(n##5) B(n##6) B(n##7) B(n##8) B(n##9)
  #define D(n) C(n##0) C(n##1) C(n##2) C(n##3) C(n##4) C(n##5) C(n##6) C(n##7) C(n##8) C(n##9)
  #define E(n) D(n##0) D(n##1) D(n##2) D(n##3) D(n##4) D(n##5) D(n##6) D(n##7) D(n##8) D(n##9)
  E(1) E(2) E(3)
  int main () { return 0; }
./xg++ -B ./ -o abc{.o,.C} -flto -flto-partition=1to1 -O2 -g -fdebug-types-section -c
./xgcc -B ./ -o abc{,.o} -flto -flto-partition=1to1 -O2
(not included in testsuite as it takes a while to compile) FAILs with
lto-wrapper: fatal error: Too many copied sections: Operation not supported
compilation terminated.
/usr/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status

The following patch fixes that.  Most of the 64K+ section support for
reading and writing was already there years ago (and especially reading used
quite often already) and a further bug fixed in it in the PR104617 fix.

Yet, the fix isn't solely about removing the
  if (new_i - 1 >= SHN_LORESERVE)
    {
      *err = ENOTSUP;
      return "Too many copied sections";
    }
5 lines, the missing part was that the function only handled reading of
the .symtab_shndx section but not copying/updating of it.
If the result has less than 64K-epsilon sections, that actually wasn't
needed, but e.g. with -fdebug-types-section one can exceed that pretty
easily (reported to us on WebKitGtk build on ppc64le).
Updating the section is slightly more complicated, because it basically
needs to be done in lock step with updating the .symtab section: if one
doesn't need to use SHN_XINDEX in there, the section should (or should be
updated to) contain an SHN_UNDEF entry; otherwise it needs to hold whatever
would otherwise be stored but couldn't fit.  But repeating all the symtab
decisions about what to discard and how to rewrite it just for that would be ugly.

So, the patch instead emits the .symtab_shndx section (or sections) last
and prepares their content during the .symtab processing; a second pass
over just the .symtab_shndx sections then uses the saved content.
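
To make the SHN_XINDEX convention concrete, a minimal sketch (simplified from
the ELF gABI and glibc's <elf.h>; not the actual simple-object-elf.c code):

#include <elf.h>
#include <stdint.h>
#include <stddef.h>

/* .symtab_shndx entries are 32-bit and parallel to .symtab; an entry is
   0 (SHN_UNDEF) unless the real index doesn't fit into the 16-bit
   st_shndx field, in which case st_shndx holds SHN_XINDEX.  */
static unsigned int
sym_section_index (const Elf64_Sym *syms, const uint32_t *shndx_table,
                   size_t symndx)
{
  if (syms[symndx].st_shndx == SHN_XINDEX)
    return shndx_table[symndx];   /* escape hatch for 64K+ sections */
  return syms[symndx].st_shndx;
}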

2024-09-07  Jakub Jelinek  <jakub@redhat.com>

	PR lto/116614
	* simple-object-elf.c (SHN_COMMON): Align comment with neighbouring
	comments.
	(SHN_HIRESERVE): Use uppercase hex digits instead of lowercase for
	consistency.
	(simple_object_elf_find_sections): Formatting fixes.
	(simple_object_elf_fetch_attributes): Likewise.
	(simple_object_elf_attributes_merge): Likewise.
	(simple_object_elf_start_write): Likewise.
	(simple_object_elf_write_ehdr): Likewise.
	(simple_object_elf_write_shdr): Likewise.
	(simple_object_elf_write_to_file): Likewise.
	(simple_object_elf_copy_lto_debug_section): Likewise.  Don't fail for
	new_i - 1 >= SHN_LORESERVE, instead arrange in that case to copy
	over .symtab_shndx sections, though emit those last and compute their
	section content when processing associated .symtab sections.  Handle
	simple_object_internal_read failure even in the .symtab_shndx reading
	case.

(cherry picked from commit bb8dd09)
kraj pushed a commit that referenced this pull request Jul 9, 2025
When using SVE INDEX to load an Advanced SIMD vector, we need to
take account of the different element ordering for big-endian
targets.  For example, when big-endian targets store the V4SI
constant { 0, 1, 2, 3 } in registers, 0 becomes the most
significant element, whereas INDEX always operates from the
least significant element.  A big-endian target would therefore
load V4SI { 0, 1, 2, 3 } using:

    INDEX Z0.S, #3, #-1

rather than little-endian's:

    INDEX Z0.S, #0, #1

While there, I noticed that we would only check the first vector
in a multi-vector SVE constant, which would trigger an ICE if the
other vectors turned out to be invalid.  This is pretty difficult to
trigger at the moment, since we only allow single-register modes to be
used as frontend & middle-end vector modes, but it can be seen using
the RTL frontend.

gcc/
	* config/aarch64/aarch64.cc (aarch64_sve_index_series_p): New
	function, split out from...
	(aarch64_simd_valid_imm): ...here.  Account for the different
	SVE and Advanced SIMD element orders on big-endian targets.
	Check each vector in a structure mode.

gcc/testsuite/
	* gcc.dg/rtl/aarch64/vec-series-1.c: New test.
	* gcc.dg/rtl/aarch64/vec-series-2.c: Likewise.
	* gcc.target/aarch64/sve/acle/general/dupq_2.c: Fix expected
	output for this big-endian test.
	* gcc.target/aarch64/sve/acle/general/dupq_4.c: Likewise.
	* gcc.target/aarch64/sve/vec_init_3.c: Restrict to little-endian
	targets and add more tests.
	* gcc.target/aarch64/sve/vec_init_4.c: New big-endian version
	of vec_init_3.c.
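
A small model of the lane arithmetic (a sketch, assuming INDEX fills the least
significant lane first while big-endian layout puts element 0 in the most
significant lane; this is a hypothetical helper, not GCC's actual code):

/* Can ELTS[0..N-1] be loaded with INDEX base, step?  */
static int
valid_index_series (const int *elts, int n, int base, int step, int big_endian)
{
  for (int i = 0; i < n; ++i)
    {
      int lane = big_endian ? n - 1 - i : i;   /* lane holding element i */
      if (elts[i] != base + step * lane)
        return 0;
    }
  return 1;
}
/* For { 0, 1, 2, 3 } on big-endian, base = 3 and step = -1 match,
   reproducing the "INDEX Z0.S, #3, #-1" above.  */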
kraj pushed a commit that referenced this pull request Jul 15, 2025
This was failing for two reasons:

1) We were wrongly treating the basic_string constructor as
zero-initializing the object, which it doesn't.
2) Given that, when we went to look for a value for the anonymous union,
we concluded that it was value-initialized, and trying to evaluate that
broke because we weren't setting ctx->ctor for it.

This patch fixes both issues, #1 by setting CONSTRUCTOR_NO_CLEARING and #2
by inserting a new CONSTRUCTOR for the member rather than evaluate it out of
context, which is consistent with cxx_eval_store_expression.

	PR c++/120577

gcc/cp/ChangeLog:

	* constexpr.cc (cxx_eval_call_expression): Set
	CONSTRUCTOR_NO_CLEARING on initial value for ctor.
	(cxx_eval_component_reference): Make value-initialization
	of an aggregate member explicit.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/constexpr-union9.C: New test.
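
A hedged sketch of the anonymous-union shape involved (hypothetical; the
committed constexpr-union9.C is built around basic_string and may differ
substantially):

struct S
{
  union { int i; char c; };    // anonymous union member
  constexpr S () : i (42) {}   // the ctor does NOT zero the whole object,
                               // hence CONSTRUCTOR_NO_CLEARING on its value
};

constexpr int
f ()
{
  S s;
  return s.i;
}
static_assert (f () == 42, "");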
kraj pushed a commit that referenced this pull request Jul 21, 2025
When using SVE INDEX to load an Advanced SIMD vector, we need to
take account of the different element ordering for big-endian
targets.  For example, when big-endian targets store the V4SI
constant { 0, 1, 2, 3 } in registers, 0 becomes the most
significant element, whereas INDEX always operates from the
least significant element.  A big-endian target would therefore
load V4SI { 0, 1, 2, 3 } using:

    INDEX Z0.S, #3, #-1

rather than little-endian's:

    INDEX Z0.S, #0, #1

While there, I noticed that we would only check the first vector
in a multi-vector SVE constant, which would trigger an ICE if the
other vectors turned out to be invalid.  This is pretty difficult to
trigger at the moment, since we only allow single-register modes to be
used as frontend & middle-end vector modes, but it can be seen using
the RTL frontend.

gcc/
	* config/aarch64/aarch64.cc (aarch64_sve_index_series_p): New
	function, split out from...
	(aarch64_simd_valid_imm): ...here.  Account for the different
	SVE and Advanced SIMD element orders on big-endian targets.
	Check each vector in a structure mode.

gcc/testsuite/
	* gcc.dg/rtl/aarch64/vec-series-1.c: New test.
	* gcc.dg/rtl/aarch64/vec-series-2.c: Likewise.
	* gcc.target/aarch64/sve/acle/general/dupq_2.c: Fix expected
	output for this big-endian test.
	* gcc.target/aarch64/sve/acle/general/dupq_4.c: Likewise.
	* gcc.target/aarch64/sve/vec_init_3.c: Restrict to little-endian
	targets and add more tests.
	* gcc.target/aarch64/sve/vec_init_4.c: New big-endian version
	of vec_init_3.c.

(cherry picked from commit 41c4463)
kraj pushed a commit that referenced this pull request Jul 26, 2025
This was failing for two reasons:

1) We were wrongly treating the basic_string constructor as
zero-initializing the object, which it doesn't.
2) Given that, when we went to look for a value for the anonymous union,
we concluded that it was value-initialized, and trying to evaluate that
broke because we weren't setting ctx->ctor for it.

This patch fixes both issues, #1 by setting CONSTRUCTOR_NO_CLEARING and #2
by inserting a new CONSTRUCTOR for the member rather than evaluate it out of
context, which is consistent with cxx_eval_store_expression.

	PR c++/120577

gcc/cp/ChangeLog:

	* constexpr.cc (cxx_eval_call_expression): Set
	CONSTRUCTOR_NO_CLEARING on initial value for ctor.
	(cxx_eval_component_reference): Make value-initialization
	of an aggregate member explicit.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/constexpr-union9.C: New test.

(cherry picked from commit f23b5df)
kraj pushed a commit that referenced this pull request Aug 19, 2025
When comparing constraints during correspondence checking for a using
from a partial specialization, we need to substitute the partial
specialization arguments into the constraints rather than the primary
template arguments.  Otherwise we incorrectly reject e.g. the below
testcase as ambiguous since we substitute T=int* instead of T=int
into #1's constraints and don't notice the correspondence.

This patch corrects the recent r16-2771-gb9f1cc4e119da9 fix by using
outer_template_args instead of TI_ARGS of the DECL_CONTEXT, which
should always give the correct outer arguments for substitution.

	PR c++/121351

gcc/cp/ChangeLog:

	* class.cc (add_method): Use outer_template_args when
	substituting outer template arguments into constraints.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/concepts-using7.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
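
A purely illustrative sketch of the shape being fixed (hypothetical; the
committed concepts-using7.C may differ):

// Checking whether the using-declared f and the newly declared f correspond
// must substitute T=int (the partial specialization's argument for B<int>),
// not T=int* (the primary template's argument).
template <class T> concept C = true;

template <class T>
struct B
{
  void f () requires C<T>;   // #1
};

template <class T>
struct D;

template <class T>
struct D<T*> : B<T>
{
  using B<T>::f;
  void f () requires C<T>;   // corresponds to #1; must not be ambiguous
};

template struct D<int*>;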
kraj pushed a commit that referenced this pull request Aug 26, 2025
When comparing constraints during correspondence checking for a using
from a partial specialization, we need to substitute the partial
specialization arguments into the constraints rather than the primary
template arguments.  Otherwise we incorrectly reject e.g. the below
testcase as ambiguous since we substitute T=int* instead of T=int
into #1's constraints and don't notice the correspondence.

This patch corrects the recent r16-2771-gb9f1cc4e119da9 fix by using
outer_template_args instead of TI_ARGS of the DECL_CONTEXT, which
should always give the correct outer arguments for substitution.

	PR c++/121351

gcc/cp/ChangeLog:

	* class.cc (add_method): Use outer_template_args when
	substituting outer template arguments into constraints.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/concepts-using7.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
(cherry picked from commit 0ab1e31)
kraj pushed a commit that referenced this pull request Aug 26, 2025
…op is invariant [PR121290]

Consider the example:

void
f (int *restrict x, int *restrict y, int *restrict z, int n)
{
  for (int i = 0; i < 4; ++i)
    {
      int res = 0;
      for (int j = 0; j < 100; ++j)
        res += y[j] * z[i];
      x[i] = res;
    }
}

we currently vectorize as

f:
        movi    v30.4s, 0
        ldr     q31, [x2]
        add     x2, x1, 400
.L2:
        ld1r    {v29.4s}, [x1], 4
        mla     v30.4s, v29.4s, v31.4s
        cmp     x2, x1
        bne     .L2
        str     q30, [x0]
        ret

which is not useful because by doing outer-loop vectorization we're performing
less work per iteration than we would had we done inner-loop vectorization and
simply unrolled the inner loop.

This patch teaches the cost model that if all your leafs are invariant, it
should adjust the loop cost by multiplying it by VF, since every vector
iteration has at least one lane really just doing one scalar operation.

There are a couple of ways we could have solved this; one is to increase the
unroll factor to process more iterations of the inner loop.  This removes the
need for the broadcast, however we don't support unrolling the inner loop within
the outer loop.  We only support unrolling by increasing the VF, which would
affect the outer loop as well as the inner loop.

We also don't directly support costing inner-loop vs outer-loop vectorization,
and as such we're left trying to predict/steer the cost model ahead of time to
what we think should be profitable.  This patch attempts to do so using a
heuristic which penalizes the outer-loop vectorization.

We now cost the loop as

note:  Cost model analysis:
  Vector inside of loop cost: 2000
  Vector prologue cost: 4
  Vector epilogue cost: 0
  Scalar iteration cost: 300
  Scalar outside cost: 0
  Vector outside cost: 4
  prologue iterations: 0
  epilogue iterations: 0
missed:  cost model: the vector iteration cost = 2000 divided by the scalar iteration cost = 300 is greater or equal to the vectorization factor = 4.
missed:  not vectorized: vectorization not profitable.
missed:  not vectorized: vector version will never be profitable.
missed:  Loop costings may not be worthwhile.

And subsequently generate:

.L5:
        add     w4, w4, w7
        ld1w    z24.s, p6/z, [x0, #1, mul vl]
        ld1w    z23.s, p6/z, [x0, #2, mul vl]
        ld1w    z22.s, p6/z, [x0, #3, mul vl]
        ld1w    z29.s, p6/z, [x0]
        mla     z26.s, p6/m, z24.s, z30.s
        add     x0, x0, x8
        mla     z27.s, p6/m, z23.s, z30.s
        mla     z28.s, p6/m, z22.s, z30.s
        mla     z25.s, p6/m, z29.s, z30.s
        cmp     w4, w6
        bls     .L5

and avoids the load and replicate if it knows it has enough vector pipes to do
so.

gcc/ChangeLog:

	PR target/121290
	* config/aarch64/aarch64.cc
	(class aarch64_vector_costs ): Add m_loop_fully_scalar_dup.
	(aarch64_vector_costs::add_stmt_cost): Detect invariant inner loops.
	(adjust_body_cost): Adjust final costing if m_loop_fully_scalar_dup.

gcc/testsuite/ChangeLog:

	PR target/121290
	* gcc.target/aarch64/pr121290.c: New test.
kraj pushed a commit that referenced this pull request Aug 27, 2025
The test was designed to pass with thumb2, but code generation changed
with the introduction of Low Overhead Loops, so the test can fail if
one overrides the flags when running the testsuite.

In addition, useless subtract / extension instructions require -O2 to
remove them (-O is not sufficient), so replace -O with -O2 in
dg-options.

arm_thumb2_ok_no_arm_v8_1m_lob does not do what the test needs (it can
fail because some flags conflict, rather than because lob are
supported, and we do not need to check runtime support in this test
anyway), so the patch reverts back to arm_thumb2_ok.

Finally, replace the scan-assembler directives with
check-function-bodies, checking both types of code generation (with
and without LOL).  Depending on architecture version, the two insns
    and     r0, r1, r0, lsr #1
    ands    r3, r3, #255
can be swapped, so accept both orders.

gcc/testsuite/ChangeLog:

	PR target/116445
	* gcc.target/arm/unsigned-extend-2.c: Fix dg directives.

(cherry picked from commit 20c2591)
kraj pushed a commit that referenced this pull request Oct 15, 2025
The vadcq and vsbcq patterns had two problems:
- the adc / sbc part of the pattern did not mention the use of vfpcc
- the carry calculation part should use a different unspec code

In addition, the get_fpscr_nzcvqc and set_fpscr_nzcvqc patterns were
over-cautious in using unspec_volatile when unspec is really what they
need.  Making them unspec makes it possible to remove redundant accesses to
FPSCR_nzcvqc.

With unspec_volatile, we used to generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmov.i32	q0, #0x1  @ v4si
	push	{lr}
	sub	sp, sp, #12
	vmrs	r3, FPSCR_nzcvqc    ;; [1]
	bic	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc     ;; [2]
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	ldr	r0, .L8
	ubfx	r3, r3, #29, #1
	str	r3, [sp, #4]
	bl	print_uint32x4_t
	add	sp, sp, #12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

with unspec, we generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmrs	r3, FPSCR_nzcvqc     ;; [1]
	bic	r3, r3, #536870912   ;; [3]
	vmov.i32	q0, #0x1  @ v4si
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	push	{lr}
	ubfx	r3, r3, #29, #1
	sub	sp, sp, #12
	ldr	r0, .L8
	str	r3, [sp, #4]
	add	sp, sp, #12
	add	sp, sp, gcc-mirror#12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

That is, unspec in get_fpscr_nzcvqc makes it possible to:
- move [1] earlier
- delete the redundant [2]

and unspec in set_fpscr_nzcvqc makes it possible to move push {lr} and the
stack manipulation later.

gcc/ChangeLog:

	PR target/122189
	* config/arm/iterators.md (VxCIQ_carry, VxCIQ_M_carry, VxCQ_carry)
	(VxCQ_M_carry): New iterators.
	* config/arm/mve.md (get_fpscr_nzcvqc, set_fpscr_nzcvqc): Use
	unspec instead of unspec_volatile.
	(vadciq, vadciq_m, vadcq, vadcq_m): Use vfpcc in operation.  Use a
	different unspec code for carry calculation.
	* config/arm/unspecs.md (VADCQ_U_carry, VADCQ_M_U_carry)
	(VADCQ_S_carry, VADCQ_M_S_carry, VSBCIQ_U_carry, VSBCIQ_S_carry,
	VSBCIQ_M_U_carry, VSBCIQ_M_S_carry, VSBCQ_U_carry, VSBCQ_S_carry,
	VSBCQ_M_U_carry, VSBCQ_M_S_carry, VADCIQ_U_carry,
	VADCIQ_M_U_carry, VADCIQ_S_carry, VADCIQ_M_S_carry): New unspec
	codes.

gcc/testsuite/ChangeLog:

	PR target/122189
	* gcc.target/arm/mve/intrinsics/vadcq-check-carry.c: New test.
	* gcc.target/arm/mve/intrinsics/vadcq_m_s32.c: Adjust instructions
	order.
	* gcc.target/arm/mve/intrinsics/vadcq_m_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c: Likewise.
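
For readers decoding the constants above: 536870912 is 0x20000000, i.e.
1 << 29, the C (carry) flag of FPSCR_nzcvqc.  A small C model of what the
quoted sequences compute (a sketch, not the intrinsics' implementation):

/* Carry-in setup done with bic/orr before vadcq/vsbcq.  */
static unsigned int
set_carry_in (unsigned int fpscr, int carry)
{
  return carry ? (fpscr | (1u << 29))    /* orr r3, r3, #536870912 */
               : (fpscr & ~(1u << 29));  /* bic r3, r3, #536870912 */
}

/* Carry-out extraction done with ubfx after the operation.  */
static unsigned int
get_carry_out (unsigned int fpscr)
{
  return (fpscr >> 29) & 1u;             /* ubfx r3, r3, #29, #1 */
}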
kraj pushed a commit that referenced this pull request Oct 22, 2025
The vectorizer has learned how to do boolean reductions of masks to a C bool
for the operations OR, XOR and AND.

This implements the new optabs for Adv.SIMD.  Adv.SIMD today can already
vectorize such loops but does so through SHIFT-AND-INSERT to perform the
reductions step-wise and in order.  As an example, an OR reduction today does:

        movi    v3.4s, 0
        ext     v5.16b, v30.16b, v3.16b, #8
        orr     v5.16b, v5.16b, v30.16b
        ext     v29.16b, v5.16b, v3.16b, #4
        orr     v29.16b, v29.16b, v5.16b
        ext     v4.16b, v29.16b, v3.16b, #2
        orr     v4.16b, v4.16b, v29.16b
        ext     v3.16b, v4.16b, v3.16b, #1
        orr     v3.16b, v3.16b, v4.16b
        fmov    w1, s3
        and     w1, w1, 1

When reducing to a boolean, however, we don't need the stepwise reduction and can
just look at the bit patterns.  For OR, for example, we now generate:

        umaxp	v3.4s, v3.4s, v3.4s
        fmov	x1, d3
        cmp	x1, 0
        cset	w0, ne

For the remaining codegen see test vect-reduc-bool-9.c.

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (reduc_sbool_and_scal_<mode>,
	reduc_sbool_ior_scal_<mode>, reduc_sbool_xor_scal_<mode>): New.
	* config/aarch64/iterators.md (VALLI): New.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/vect-reduc-bool-1.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-2.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-3.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-4.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-5.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-6.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-7.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-8.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-9.c: New test.
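
A sketch of the kind of loop that now goes through the new optabs (an
assumption modeled on the vect-reduc-bool tests; the exact kernels may differ):

/* OR-reduction of a comparison mask to a C bool; with reduc_sbool_ior_scal
   this becomes umaxp + fmov + cmp + cset instead of the ext/orr ladder.  */
bool
any_nonzero (const int *a, int n)
{
  bool res = false;
  for (int i = 0; i < n; ++i)
    res |= a[i] != 0;
  return res;
}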
kraj pushed a commit that referenced this pull request Nov 8, 2025
…upper bits in a GPR

So pre-commit CI flagged an issue with the initial version of this patch.  In
particular the cmp-mem-const-{1,2} tests are failing.

I didn't see that in my internal testing, but that well could be an artifact of
having multiple patches touching in the same broad space that the tester is
evaluating.  If I apply just this patch I can trigger the cmp-mem-const{1,2}
failures.

The code we're getting now is actually better than we were getting before, but
the new patterns avoid the path through combine that emits the message about
narrowing the load down to a byte load, hence the failure.

Given we're getting better code now than before, I'm just skipping this test on
risc-v.    That's the only non-whitespace change since the original version of
this patch.

--

This addresses the first-level issues seen in generating better-performing code
for testcases derived from pr121136.  It likely regresses code size in some
cases, as it often selects code sequences that should be better performing,
though larger to encode.

Improving -Os code generation should remain the primary focus of pr121136.  Any
improvements in code size with this change are a nice side effect, but not the
primary goal.

--

Let's take this test (derived from the PR):

_Bool func1_0x1U (unsigned int x) { return x <= 0x1U; }

_Bool func2_0x1U (unsigned int x) { return ((x >> __builtin_ctz (0x1U + 1U)) == 0); }

_Bool func3_0x1U (unsigned int x) { return ((x / (0x1U + 1U)) == 0); }

Those should produce the same output.  We currently get these fragments for the
3 cases.  In particular note how the second variant is a two instruction
sequence.

        sltiu   a0,a0,2

        srliw   a0,a0,1
        seqz    a0,a0

        sltiu   a0,a0,2

This patch will adjust that second sequence to match the first and third,
which is optimal.
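
The underlying identity, spelled out: for unsigned x and any k,
(x >> k) == 0  <=>  x / (1u << k) == 0  <=>  x < (1u << k)  <=>  x <= (1u << k) - 1,
so for k = 1 the last form is exactly what "sltiu a0, a0, 2" computes.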

Let's take another case.  This is interesting as it's right at the simm12
border:

_Bool func1_0x7ffU (unsigned long x) { return x <= 0x7ffU; }

_Bool func2_0x7ffU (unsigned long x) { return ((x >> __builtin_ctzl (0x7ffU + 1UL)) == 0); }

_Bool func3_0x7ffU (unsigned long x) { return ((x / (0x7ffU + 1UL)) == 0); }

We get:

        li      a5,2047
        sltu    a0,a5,a0
        seqz    a0,a0

        srli    a0,a0,11
        seqz    a0,a0

        li      a5,2047
        sltu    a0,a5,a0
        seqz    a0,a0

In this case the second sequence is pretty good.  Not perfect, but clearly
better than the other two.  This patch will fix the code for case #1 and case #3.

So anyway, that's the basic motivation here.  So to be 100% clear, while the
bug is focused on code size, I'm focused on the performance of the resulting
code.

This has been tested on riscv32-elf and riscv64-elf.  It's also bootstrapped
and regression tested on the Pioneer.  The BPI won't have results for this
patch until late tomorrow.

--

	PR rtl-optimization/121136
gcc/
	* config/riscv/riscv.md: Add define_insn to test the
	upper bits of a register against zero using sltiu when
	the bits are extracted via zero_extract or logical right shift.
	Add 3->2 define_splits for gtu/leu cases testing upper bits
	against zero.

gcc/testsuite
	* gcc.target/riscv/pr121136.c: New test.
	* gcc.dg/cmp-mem-const-1.c: Skip for risc-v.
	* gcc.dg/cmp-mem-const-2.c: Likewise.
kraj pushed a commit that referenced this pull request Nov 12, 2025
The vadcq and vsbcq patterns had two problems:
- the adc / sbc part of the pattern did not mention the use of vfpcc
- the carry calculation part should use a different unspec code

In addition, the get_fpscr_nzcvqc and set_fpscr_nzcvqc patterns were
over-cautious in using unspec_volatile when unspec is really what they
need.  Making them unspec makes it possible to remove redundant accesses to
FPSCR_nzcvqc.

With unspec_volatile, we used to generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmov.i32	q0, #0x1  @ v4si
	push	{lr}
	sub	sp, sp, #12
	vmrs	r3, FPSCR_nzcvqc    ;; [1]
	bic	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc     ;; [2]
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	ldr	r0, .L8
	ubfx	r3, r3, #29, #1
	str	r3, [sp, #4]
	bl	print_uint32x4_t
	add	sp, sp, #12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

with unspec, we generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmrs	r3, FPSCR_nzcvqc     ;; [1]
	bic	r3, r3, #536870912   ;; [3]
	vmov.i32	q0, #0x1  @ v4si
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	push	{lr}
	ubfx	r3, r3, #29, #1
	sub	sp, sp, #12
	ldr	r0, .L8
	str	r3, [sp, #4]
	add	sp, sp, #12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

That is, unspec in get_fpscr_nzcvqc makes it possible to:
- move [1] earlier
- delete the redundant [2]

and unspec in set_fpscr_nzcvqc makes it possible to move push {lr} and the
stack manipulation later.

gcc/ChangeLog:

	PR target/122189
	* config/arm/iterators.md (VxCIQ_carry, VxCIQ_M_carry, VxCQ_carry)
	(VxCQ_M_carry): New iterators.
	* config/arm/mve.md (get_fpscr_nzcvqc, set_fpscr_nzcvqc): Use
	unspec instead of unspec_volatile.
	(vadciq, vadciq_m, vadcq, vadcq_m): Use vfpcc in operation.  Use a
	different unspec code for carry calcultation.
	* config/arm/unspecs.md (VADCQ_U_carry, VADCQ_M_U_carry)
	(VADCQ_S_carry, VADCQ_M_S_carry, VSBCIQ_U_carry ,VSBCIQ_S_carry
	,VSBCIQ_M_U_carry ,VSBCIQ_M_S_carry ,VSBCQ_U_carry ,VSBCQ_S_carry
	,VSBCQ_M_U_carry ,VSBCQ_M_S_carry ,VADCIQ_U_carry
	,VADCIQ_M_U_carry ,VADCIQ_S_carry ,VADCIQ_M_S_carry): New unspec
	codes.

gcc/testsuite/ChangeLog:

	PR target/122189
	* gcc.target/arm/mve/intrinsics/vadcq-check-carry.c: New test.
	* gcc.target/arm/mve/intrinsics/vadcq_m_s32.c: Adjust instructions
	order.
	* gcc.target/arm/mve/intrinsics/vadcq_m_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c: Likewise.

	(cherry picked from commits
	0272058 and
	697ccad)