
Conversation

@propixel-prc

No description provided.

@pbeeler

pbeeler commented Jan 15, 2015

why do you want to merge gcc4.8 into master?

@pathawks

I believe this is a read-only repo.

rguenth and others added 28 commits February 24, 2015 15:09
	Backport from mainline
	2014-11-19  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/63844
	* omp-low.c (fixup_child_record_type): Use a restrict qualified
	reference type for the receiver parameter.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@220941 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@220954 138bc75d-0d04-0410-961f-82ee72b054a4
	Backport from mainline
	2015-02-16  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/63593
	* tree-predcom.c (execute_pred_commoning_chain): Delay removing
	stmts and releasing SSA names until...
	(execute_pred_commoning): ... after processing all chains.

	* gcc.dg/pr63593.c: New testcase.

	2015-02-18  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/65063
	* tree-predcom.c (determine_unroll_factor): Return 1 if we
	have replaced looparound PHIs.

	* gcc.dg/pr65063.c: New testcase.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@220960 138bc75d-0d04-0410-961f-82ee72b054a4
	* config/avr/avr.c (avr_adjust_insn_length): Call recog_memoized
	only with NONDEBUG_INSN_P.



git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@220965 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@220993 138bc75d-0d04-0410-961f-82ee72b054a4
	Backport from mainline
	2014-11-27  Richard Biener  <rguenther@suse.de>

	PR tree-optimization/61634
	* tree-vect-slp.c: (vect_detect_hybrid_slp_stmts): Rewrite to
	propagate hybrid down the SLP tree for one scalar statement.
	(vect_detect_hybrid_slp_1): New walker function.
	(vect_detect_hybrid_slp_2): Likewise.
	(vect_detect_hybrid_slp): Properly handle pattern statements
	in a pre-scan over all loop stmts.

	* gcc.dg/vect/pr61634.c: New testcase.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221006 138bc75d-0d04-0410-961f-82ee72b054a4
	Backport from mainline
	2015-02-25  Adhemerval Zanella  <azanella@linux.vnet.ibm.com>

	* config/rs6000/htm.md (tcheck): Fix assembly encoding.

gcc/testsuite/
	Backport from mainline
	2015-02-25  Peter Bergner  <bergner@vnet.ibm.com>

	* gcc.target/powerpc/htm-builtin-1.c (dg-do): Change to assemble.
	(dg-options): Add -save-temps.
	(dg-final): Add cleanup-saved-temps.

	2015-02-25  Adhemerval Zanella  <azanella@linux.vnet.ibm.com>

	* gcc.target/powerpc/htm-builtin-1.c: Fix tcheck expect value.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221019 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221036 138bc75d-0d04-0410-961f-82ee72b054a4
	PR lto/65193
	Backport from mainline
	2014-07-24  Jan Hubicka  <hubicka@ucw.cz>
 
	* lto-streamer-out.c (tree_is_indexable): Consider IMPORTED_DECL
	as non-indexable.

	* g++.dg/lto/pr65193_0.C: New testcase.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221054 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221072 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221081 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221093 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221129 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221167 138bc75d-0d04-0410-961f-82ee72b054a4
    Backport from mainline
    2015-01-14  Thomas Preud'homme  <thomas.preudhomme@arm.com>

    gcc/
    PR target/64453
    * config/arm/arm.c (callee_saved_reg_p): Define.
    (arm_compute_save_reg0_reg12_mask): Use callee_saved_reg_p to check if
    register is callee saved instead of !call_used_regs[reg].
    (thumb1_compute_save_reg_mask): Likewise.

    gcc/testsuite/
    PR target/64453
    * gcc.target/arm/pr64453.c: New.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221170 138bc75d-0d04-0410-961f-82ee72b054a4
    Backport from mainline
    2014-11-27  Thomas Preud'homme  <thomas.preudhomme@arm.com>

    gcc/
    PR target/59593
    * config/arm/arm.c (dump_minipool): dispatch to consttable pattern
    based on mode size.
    * config/arm/arm.md (consttable_1): Make it TARGET_EITHER.
    (consttable_2): Make it TARGET_EITHER and move HFmode handling from
    consttable_4 to it.
    (consttable_4): Move HFmode handling to consttable_2 pattern.

    gcc/testsuite/
    PR target/59593
    * gcc.target/arm/constant-pool.c: New test.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221173 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221200 138bc75d-0d04-0410-961f-82ee72b054a4
	Backport from trunk
	2015-03-03  Michael Meissner  <meissner@linux.vnet.ibm.com>

	PR 65138/target
	* config/rs6000/rs6000-cpus.def (powerpc64le): Add new generic
	processor type for 64-bit little endian PowerPC.

	* config/rs6000/rs6000.c (rs6000_option_override_internal): If
	-mdebug=reg, print TARGET_DEFAULT.  Fix logic to use
	TARGET_DEFAULT if there is no default cpu.  Fix -mdebug=reg
	printing built-in mask so it does not pass NULL pointers.

	* config/rs6000/rs6000-tables.opt: Regenerate.

	* doc/invoke.texi (IBM RS/6000 and PowerPC options): Document
	-mcpu=powerpc64le.

	Backport from trunk
	2015-01-19  David Edelsohn  <dje.gcc@gmail.com>

	* config/rs6000/default64.h: Include rs6000-cpus.def.
	(TARGET_DEFAULT) [LITTLE_ENDIAN]: Use ISA 2.7 (POWER8).
	(TARGET_DEFAULT) [BIG_ENDIAN]: Use POWER4.
	* config/rs6000/driver-rs6000.c (detect_processor_aix): Add POWER7
	and POWER8.
	* config/rs6000/linux64.h (PROCESSOR_DEFAULT64): Always default to
	POWER8.
	* config/rs6000/rs6000.c (rs6000_file_start): Emit .machine
	pseudo-op to specify assembler dialect.



git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221225 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221227 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221251 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221259 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221272 138bc75d-0d04-0410-961f-82ee72b054a4
	* config.gcc (powerpc*-*-linux*): Arrange for powerpc64le-linux
	to be single-arch by default.  Set cpu_is_64bit for powerpc64
	given --with-cpu=native.
	* config/rs6000/t-fprules: Do not set default MULTILIB vars.
	* config/rs6000/t-linux (MULTIARCH_DIRNAME): Support powerpc64
	and powerpc64le.
	* config/rs6000/linux64.h (SUBSUBTARGET_OVERRIDE_OPTIONS): Test
	rs6000_isa_flags rather than TARGET_64BIT.



git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221290 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221294 138bc75d-0d04-0410-961f-82ee72b054a4
	PR target/53988
	* config/sh/sh.md (*tst<mode>_t_zero): Remove insns.

gcc/testsuite/
	PR target/53988
	* gcc.target/sh/pr53988.c: Mark tests as xfail.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221307 138bc75d-0d04-0410-961f-82ee72b054a4
	* config/rs6000/t-linux: For powerpc64* target set
	MULTILIB_OSDIRNAMES instead of MULTIARCH_DIRNAME.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221324 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221340 138bc75d-0d04-0410-961f-82ee72b054a4
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch@221367 138bc75d-0d04-0410-961f-82ee72b054a4
hubot pushed a commit that referenced this pull request Feb 7, 2025
In a member-specification of a class, a noexcept-specifier is
a complete-class context.  Thus we delay parsing until the end of
the class via our DEFERRED_PARSE mechanism; see cp_parser_save_noexcept
and cp_parser_late_noexcept_specifier.

We also attempt to defer instantiation of noexcept-specifiers in order
to reduce the number of instantiations; this is done via DEFERRED_NOEXCEPT.

We can even have both, as in noexcept65.C: a DEFERRED_PARSE wrapped in
DEFERRED_NOEXCEPT, which uses the DEFPARSE_INSTANTIATIONS mechanism.
noexcept65.C works, because when we really need the noexcept, which is
when parsing the body of S::A::A(), the noexcept will have been parsed
already; noexcepts are parsed before bodies of member functions.

But in this test we have:

  struct A {
      int x;
      template<class>
      void foo() noexcept(noexcept(x)) {}
      auto bar() -> decltype(foo<int>()) {} // #1
  };

and I think the decltype in #1 needs the unparsed noexcept before it
could have been parsed.  clang++ rejects the test and I suppose we
should reject it as well, rather than crashing on a DEFERRED_PARSE
in tsubst_expr.

	PR c++/117106
	PR c++/118190

gcc/cp/ChangeLog:

	* pt.cc (maybe_instantiate_noexcept): Give an error if the noexcept
	hasn't been parsed yet.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/noexcept89.C: New test.
	* g++.dg/cpp0x/noexcept90.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
hubot pushed a commit that referenced this pull request Mar 25, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I
did not go compile something that old, and identified this change via
git blame, so I might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v; v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, that computes the address of
the COND_EXPR using build_address to build the representation of
  (true ? get (v) : get (v)).*(&Foo::x);
and gets something like
  &(true ? get (v) : get (v))  // #1
instead of
  (true ? &get (v) : &get (v)) // #2
and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to
cp_build_addr_expr, which gives #2, that is properly handled.
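For reference, a self-contained check of the intended semantics can be built from the
reproducer above (the main function and the assert below are my own framing, not part of
the original reproducer or of cond18.C):

```c++
// Minimal sketch based on the reproducer above: the write through the
// pointer-to-member must land in v itself, so v.x must read back as 2.
#include <cassert>

struct Foo { int x; };
Foo& get (Foo &v) { return v; }

int main ()
{
  Foo v; v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  assert (v.x == 2);   // the miscompiled code left v.x == 1 here
  return 0;
}
```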

	PR c++/114525

gcc/cp/ChangeLog:

	* typeck2.cc (build_m_component_ref): Call cp_build_addr_expr
	instead of build_address.

gcc/testsuite/ChangeLog:

	* g++.dg/expr/cond18.C: New test.
hubot pushed a commit that referenced this pull request Mar 31, 2025
Here we instantiate the lambda three times in producing A<0>::f:
1) in tsubst_function_type, substituting the type of A<>::f
2) in tsubst_function_decl, substituting the parameters of A<>::f
3) in regenerate_decl_from_template when instantiating A<>::f

The first one gets thrown away by maybe_rebuild_function_decl_type.  Before
r15-7202, we happily built all of them and mangled the result wrongly as
lambda #3.  After r15-7202, we try to mangle #3 as #1, which breaks because
 #1 is already mangled as #1.

This patch avoids building #3 by suppressing regenerate_decl_from_template
if the template signature includes a lambda, fixing the ICE.

We now mangle the lambda as #2, which is still wrong.  Addressing that
should involve not calling tsubst_function_type from tsubst_function_decl,
and building the type from the parms types in the first place rather than
fixing it up in maybe_rebuild_function_decl_type.

	PR c++/119401

gcc/cp/ChangeLog:

	* pt.cc (regenerate_decl_from_template): Don't regenerate if the
	signature involves a lambda.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/lambda-targ11.C: New test.
Peter0x44 pushed a commit to Peter0x44/gcc that referenced this pull request Apr 6, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I
did not go compile something that old, and identified this change via
git blame, so I might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v; v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, that computes the address of
the COND_EXPR using build_address to build the representation of
  (true ? get (v) : get (v)).*(&Foo::x);
and gets something like
  &(true ? get (v) : get (v))  // #1
instead of
  (true ? &get (v) : &get (v)) // #2
and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to
cp_build_addr_expr, which gives #2, that is properly handled.

	PR c++/114525

gcc/cp/ChangeLog:

	* typeck2.cc (build_m_component_ref): Call cp_build_addr_expr
	instead of build_address.

gcc/testsuite/ChangeLog:

	* g++.dg/expr/cond18.C: New test.
Peter0x44 pushed a commit to Peter0x44/gcc that referenced this pull request Apr 6, 2025
Here we instantiate the lambda three times in producing A<0>::f:
1) in tsubst_function_type, substituting the type of A<>::f
2) in tsubst_function_decl, substituting the parameters of A<>::f
3) in regenerate_decl_from_template when instantiating A<>::f

The first one gets thrown away by maybe_rebuild_function_decl_type.  Before
r15-7202, we happily built all of them and mangled the result wrongly as
lambda #3.  After r15-7202, we try to mangle #3 as #1, which breaks because
 #1 is already mangled as #1.

This patch avoids building #3 by suppressing regenerate_decl_from_template
if the template signature includes a lambda, fixing the ICE.

We now mangle the lambda as #2, which is still wrong.  Addressing that
should involve not calling tsubst_function_type from tsubst_function_decl,
and building the type from the parms types in the first place rather than
fixing it up in maybe_rebuild_function_decl_type.

	PR c++/119401

gcc/cp/ChangeLog:

	* pt.cc (regenerate_decl_from_template): Don't regenerate if the
	signature involves a lambda.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/lambda-targ11.C: New test.
@IoIxD

IoIxD commented Apr 9, 2025

heart goes out to the guy who made this PR and was never told that this isn't the official repo

hubot pushed a commit that referenced this pull request Apr 14, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I
did not go compile something that old, and identified this change via
git blame, so I might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v; v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, that computes the address of
the COND_EXPR using build_address to build the representation of
  (true ? get (v) : get (v)).*(&Foo::x);
and gets something like
  &(true ? get (v) : get (v))  // #1
instead of
  (true ? &get (v) : &get (v)) // #2
and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to
cp_build_addr_expr, which gives #2, that is properly handled.

	PR c++/114525

gcc/cp/ChangeLog:

	* typeck2.cc (build_m_component_ref): Call cp_build_addr_expr
	instead of build_address.

gcc/testsuite/ChangeLog:

	* g++.dg/expr/cond18.C: New test.

(cherry picked from commit 35ce9af)
hubot pushed a commit that referenced this pull request May 10, 2025
This patch fixes some of the problems with costing in the scalar-to-vector pass.
In particular
 1) the pass uses optimize_insn_for_size which is intended to be used by
    expanders and splitters and requires the optimization pass to use
    set_rtl_profile (bb) for the currently processed bb.
    This is not done, so we get random stale info about the hotness of the insn.
 2) register allocator move costs are all relative to the integer reg-reg move
    which has a cost of 2, so it is (except for size tables and i386)
    the latency of the instruction multiplied by 2.
    These costs have been duplicated and are now used in combination with
    rtx costs, which are all based on COSTS_N_INSNS, which multiplies latency
    by 4.
    Some of the vectorizer costing contains COSTS_N_INSNS (move_cost) / 2
    to compensate, but some new code does not.  This patch adds compensation.

    Perhaps we should update the cost tables to use COSTS_N_INSNS everywhere
    but I think we want to first fix inconsistencies.  Also the tables will
    get optically much longer, since we have many move costs and COSTS_N_INSNS
    is a lot of characters.
 3) the variable m, which decides how much to multiply the integer variant (to
    account for the fact that with -m32 all 64-bit computations need 2
    instructions), is declared unsigned, which makes the signed computation of
    the instruction gain be done in an unsigned type and breaks e.g. for
    division.
 4) I added integer_to_sse costs, which are currently all a duplication of
    sse_to_integer.  AMD chips are asymmetric and moving in one direction is
    faster than in the other.  I will change costs incrementally once the
    vectorizer part is fixed up, too.

There are two failures, gcc.target/i386/minmax-6.c and gcc.target/i386/minmax-7.c.
Both test STV on Haswell, which no longer happens since SSE->INT and INT->SSE moves
are now more expensive.

There is only one instruction to convert:

Computing gain for chain #1...
  Instruction gain 8 for    11: {r110:SI=smax(r116:SI,0);clobber flags:CC;}
  Instruction conversion gain: 8
  Registers conversion cost: 8    <- this is integer_to_sse and sse_to_integer
  Total gain: 0

The total gain used to be 4, since the patch doubles the conversion costs.
According to Agner Fog's tables the cost should be 1 cycle, which is correct
here.

Final code generated is:

	vmovd	%esi, %xmm0         * latency 1
	cmpl	%edx, %esi
	je	.L2
	vpxor	%xmm1, %xmm1, %xmm1 * latency 1
	vpmaxsd	%xmm1, %xmm0, %xmm0 * latency 1
	vmovd	%xmm0, %eax         * latency 1
	imull	%edx, %eax
	cltq
	movzwl	(%rdi,%rax,2), %eax
	ret

	cmpl	%edx, %esi
	je	.L2
	xorl	%eax, %eax          * latency 1
	testl	%esi, %esi          * latency 1
	cmovs	%eax, %esi          * latency 2
	imull	%edx, %esi
	movslq	%esi, %rsi
	movzwl	(%rdi,%rsi,2), %eax
	ret

Instructions with latency info are those really different.
So the unconverted code has a sum of latencies of 4 and a real latency of 3.
The converted code has a sum of latencies of 4 and a real latency of 3 (vmovd+vpmaxsd+vmovd).
So I do not quite see that it should be a win.

There is also a bug in costing MIN/MAX

	    case ABS:
	    case SMAX:
	    case SMIN:
	    case UMAX:
	    case UMIN:
	      /* We do not have any conditional move cost, estimate it as a
		 reg-reg move.  Comparisons are costed as adds.  */
	      igain += m * (COSTS_N_INSNS (2) + ix86_cost->add);
	      /* Integer SSE ops are all costed the same.  */
	      igain -= ix86_cost->sse_op;
	      break;

Now COSTS_N_INSNS (2) is not quite right since reg-reg move should be 1 or perhaps 0.
For Haswell cmov really is 2 cycles, but I guess we want to have that in cost vectors
like all other instructions.

I am not sure if this is really a win in this case (other minmax testcases seem to make
sense).  I have xfailed it for now and will check if that affects specs on LNT testers.

I will proceed with similar fixes on the vectorizer cost side.  Sadly those introduce
quite some differences in the testsuite (partly triggered by other costing problems,
such as the one for scatter/gather).

gcc/ChangeLog:

	* config/i386/i386-features.cc
	(general_scalar_chain::vector_const_cost): Add BB parameter; handle
	size costs; use COSTS_N_INSNS to compute move costs.
	(general_scalar_chain::compute_convert_gain): Use optimize_bb_for_size
	instead of optimize_insn_for_size; use COSTS_N_INSNS to compute move costs;
	update calls of general_scalar_chain::vector_const_cost; use
	ix86_cost->integer_to_sse.
	(timode_immed_const_gain): Add bb parameter; use
	optimize_bb_for_size_p.
	(timode_scalar_chain::compute_convert_gain): Use optimize_bb_for_size_p.
	* config/i386/i386-features.h (class general_scalar_chain): Update
	prototype of vector_const_cost.
	* config/i386/i386.h (struct processor_costs): Add integer_to_sse.
	* config/i386/x86-tune-costs.h (struct processor_costs): Copy
	sse_to_integer to integer_to_sse everywhere.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/minmax-6.c: xfail test that pmax is used.
	* gcc.target/i386/minmax-7.c: xfail test that pmin is used.
keith-packard pushed a commit to keith-packard/gcc that referenced this pull request May 11, 2025
hubot pushed a commit that referenced this pull request May 12, 2025
The test was designed to pass with thumb2, but code generation changed
with the introduction of Low Overhead Loops, so the test can fail if
one overrides the flags when running the testsuite.

In addition, useless subtract / extension instructions require -O2 to
remove them (-O is not sufficient), so replace -O with -O2 in
dg-options.

arm_thumb2_ok_no_arm_v8_1m_lob does not do what the test needs (it can
fail because some flags conflict, rather than because lob are
supported, and we do not need to check runtime support in this test
anyway), so the patch reverts back to arm_thumb2_ok.

Finally, replace the scan-assembler directives with
check-function-bodies, checking both types of code generation (with
and without LOL).  Depending on architecture version, the two insns
    and     r0, r1, r0, lsr #1
    ands    r3, r3, #255
can be swapped, so accept both orders.

gcc/testsuite/ChangeLog:

	PR target/116445
	* gcc.target/arm/unsigned-extend-2.c: Fix dg directives.
hubot pushed a commit that referenced this pull request Jun 9, 2025
This patch adds a new param vect-scalar-cost-multiplier to scale the scalar
costing during vectorization.  If the cost is set high enough, then when using
the dynamic cost model it effectively disables the costing against scalar code
and assumes all vectorization to be profitable.

This is similar to using the unlimited cost model, but unlike unlimited it
does not fully disable the vector cost model.  That means that we still
perform comparisons between vector modes.  And it means it also still does
costing for alias analysis.

As an example, the following:

void
foo (char *restrict a, int *restrict b, int *restrict c,
     int *restrict d, int stride)
{
    if (stride <= 1)
        return;

    for (int i = 0; i < 3; i++)
        {
            int res = c[i];
            int t = b[i * stride];
            if (a[i] != 0)
                res = t * d[i];
            c[i] = res;
        }
}

compiled with -O3 -march=armv8-a+sve -fvect-cost-model=dynamic fails to
vectorize as it assumes scalar would be faster, and with
-fvect-cost-model=unlimited it picks a vector type that's so big that the large
sequence generated is working on mostly inactive lanes:

        ...
        and     p3.b, p3/z, p4.b, p4.b
        whilelo p0.s, wzr, w7
        ld1w    z23.s, p3/z, [x3, #3, mul vl]
        ld1w    z28.s, p0/z, [x5, z31.s, sxtw 2]
        add     x0, x5, x0
        punpklo p6.h, p6.b
        ld1w    z27.s, p4/z, [x0, z31.s, sxtw 2]
        and     p6.b, p6/z, p0.b, p0.b
        punpklo p4.h, p7.b
        ld1w    z24.s, p6/z, [x3, #2, mul vl]
        and     p4.b, p4/z, p2.b, p2.b
        uqdecw  w6
        ld1w    z26.s, p4/z, [x3]
        whilelo p1.s, wzr, w6
        mul     z27.s, p5/m, z27.s, z23.s
        ld1w    z29.s, p1/z, [x4, z31.s, sxtw 2]
        punpkhi p7.h, p7.b
        mul     z24.s, p5/m, z24.s, z28.s
        and     p7.b, p7/z, p1.b, p1.b
        mul     z26.s, p5/m, z26.s, z30.s
        ld1w    z25.s, p7/z, [x3, #1, mul vl]
        st1w    z27.s, p3, [x2, #3, mul vl]
        mul     z25.s, p5/m, z25.s, z29.s
        st1w    z24.s, p6, [x2, #2, mul vl]
        st1w    z25.s, p7, [x2, #1, mul vl]
        st1w    z26.s, p4, [x2]
        ...

With -fvect-cost-model=dynamic --param vect-scalar-cost-multiplier=200
you get more reasonable code:

foo:
        cmp     w4, 1
        ble     .L1
        ptrue   p7.s, vl3
        index   z0.s, #0, w4
        ld1b    z29.s, p7/z, [x0]
        ld1w    z30.s, p7/z, [x1, z0.s, sxtw 2]
	ptrue   p6.b, all
        cmpne   p7.b, p7/z, z29.b, #0
        ld1w    z31.s, p7/z, [x3]
	mul     z31.s, p6/m, z31.s, z30.s
        st1w    z31.s, p7, [x2]
.L1:
        ret

This model has been useful internally for performance exploration and cost-model
validation.  It allows us to force realistic vectorization, overriding the cost
model, to be able to tell whether it is correct with respect to profitability.

gcc/ChangeLog:

	* params.opt (vect-scalar-cost-multiplier): New.
	* tree-vect-loop.cc (vect_estimate_min_profitable_iters): Use it.
	* doc/invoke.texi (vect-scalar-cost-multiplier): Document it.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/sve/cost_model_16.c: New test.
hubot pushed a commit that referenced this pull request Jun 13, 2025
…o_debug_section [PR116614]

cat abc.C
  #define A(n) struct T##n {} t##n;
  #define B(n) A(n##0) A(n##1) A(n##2) A(n##3) A(n##4) A(n##5) A(n##6) A(n##7) A(n##8) A(n##9)
  #define C(n) B(n##0) B(n##1) B(n##2) B(n##3) B(n##4) B(n##5) B(n##6) B(n##7) B(n##8) B(n##9)
  #define D(n) C(n##0) C(n##1) C(n##2) C(n##3) C(n##4) C(n##5) C(n##6) C(n##7) C(n##8) C(n##9)
  #define E(n) D(n##0) D(n##1) D(n##2) D(n##3) D(n##4) D(n##5) D(n##6) D(n##7) D(n##8) D(n##9)
  E(1) E(2) E(3)
  int main () { return 0; }
./xg++ -B ./ -o abc{.o,.C} -flto -flto-partition=1to1 -O2 -g -fdebug-types-section -c
./xgcc -B ./ -o abc{,.o} -flto -flto-partition=1to1 -O2
(not included in testsuite as it takes a while to compile) FAILs with
lto-wrapper: fatal error: Too many copied sections: Operation not supported
compilation terminated.
/usr/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status

The following patch fixes that.  Most of the 64K+ section support for
reading and writing was already there years ago (and especially reading used
quite often already) and a further bug fixed in it in the PR104617 fix.

Yet, the fix isn't solely about removing the
  if (new_i - 1 >= SHN_LORESERVE)
    {
      *err = ENOTSUP;
      return "Too many copied sections";
    }
5 lines, the missing part was that the function only handled reading of
the .symtab_shndx section but not copying/updating of it.
If the result has less than 64K-epsilon sections, that actually wasn't
needed, but e.g. with -fdebug-types-section one can exceed that pretty
easily (reported to us on WebKitGtk build on ppc64le).
Updating the section is slightly more complicated, because it basically
needs to be done in lock step with updating the .symtab section, if one
doesn't need to use SHN_XINDEX in there, the section should (or should be
updated to) contain SHN_UNDEF entry, otherwise needs to have whatever would
be otherwise stored but couldn't fit.  But repeating due to that all the
symtab decisions what to discard and how to rewrite it would be ugly.

So, the patch instead emits the .symtab_shndx section (or sections) last
and prepares the content during the .symtab processing and in a second
pass when going just through .symtab_shndx sections just uses the saved
content.

2024-09-07  Jakub Jelinek  <jakub@redhat.com>

	PR lto/116614
	* simple-object-elf.c (SHN_COMMON): Align comment with neighbouring
	comments.
	(SHN_HIRESERVE): Use uppercase hex digits instead of lowercase for
	consistency.
	(simple_object_elf_find_sections): Formatting fixes.
	(simple_object_elf_fetch_attributes): Likewise.
	(simple_object_elf_attributes_merge): Likewise.
	(simple_object_elf_start_write): Likewise.
	(simple_object_elf_write_ehdr): Likewise.
	(simple_object_elf_write_shdr): Likewise.
	(simple_object_elf_write_to_file): Likewise.
	(simple_object_elf_copy_lto_debug_section): Likewise.  Don't fail for
	new_i - 1 >= SHN_LORESERVE, instead arrange in that case to copy
	over .symtab_shndx sections, though emit those last and compute their
	section content when processing associated .symtab sections.  Handle
	simple_object_internal_read failure even in the .symtab_shndx reading
	case.

(cherry picked from commit bb8dd09)
hubot pushed a commit that referenced this pull request Jul 9, 2025
When using SVE INDEX to load an Advanced SIMD vector, we need to
take account of the different element ordering for big-endian
targets.  For example, when big-endian targets store the V4SI
constant { 0, 1, 2, 3 } in registers, 0 becomes the most
significant element, whereas INDEX always operates from the
least significant element.  A big-endian target would therefore
load V4SI { 0, 1, 2, 3 } using:

    INDEX Z0.S, #3, #-1

rather than little-endian's:

    INDEX Z0.S, #0, #1
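
As a quick sanity check of the ordering (my own illustration, not part of the patch):
INDEX produces element[i] = base + i * step counting from the least significant element,
while on big-endian targets Advanced SIMD lane 0 is the most significant element.

```c
/* Hypothetical illustration of the element ordering described above.  */
#include <stdio.h>

int main (void)
{
  int lsb_first[4];
  int base = 3, step = -1;

  /* INDEX Z0.S, #3, #-1 fills elements from the LSB up: 3, 2, 1, 0.  */
  for (int i = 0; i < 4; i++)
    lsb_first[i] = base + i * step;

  /* On big-endian, Advanced SIMD lane 0 is the most significant element,
     so reading the register back MSB-first yields { 0, 1, 2, 3 }.  */
  for (int i = 3; i >= 0; i--)
    printf ("%d ", lsb_first[i]);
  printf ("\n");   /* prints: 0 1 2 3 */
  return 0;
}
```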

While there, I noticed that we would only check the first vector
in a multi-vector SVE constant, which would trigger an ICE if the
other vectors turned out to be invalid.  This is pretty difficult to
trigger at the moment, since we only allow single-register modes to be
used as frontend & middle-end vector modes, but it can be seen using
the RTL frontend.

gcc/
	* config/aarch64/aarch64.cc (aarch64_sve_index_series_p): New
	function, split out from...
	(aarch64_simd_valid_imm): ...here.  Account for the different
	SVE and Advanced SIMD element orders on big-endian targets.
	Check each vector in a structure mode.

gcc/testsuite/
	* gcc.dg/rtl/aarch64/vec-series-1.c: New test.
	* gcc.dg/rtl/aarch64/vec-series-2.c: Likewise.
	* gcc.target/aarch64/sve/acle/general/dupq_2.c: Fix expected
	output for this big-endian test.
	* gcc.target/aarch64/sve/acle/general/dupq_4.c: Likewise.
	* gcc.target/aarch64/sve/vec_init_3.c: Restrict to little-endian
	targets and add more tests.
	* gcc.target/aarch64/sve/vec_init_4.c: New big-endian version
	of vec_init_3.c.
hubot pushed a commit that referenced this pull request Jul 15, 2025
This was failing for two reasons:

1) We were wrongly treating the basic_string constructor as
zero-initializing the object, which it doesn't.
2) Given that, when we went to look for a value for the anonymous union,
we concluded that it was value-initialized, and trying to evaluate that
broke because we weren't setting ctx->ctor for it.

This patch fixes both issues, #1 by setting CONSTRUCTOR_NO_CLEARING and #2
by inserting a new CONSTRUCTOR for the member rather than evaluate it out of
context, which is consistent with cxx_eval_store_expression.
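
A rough sketch of my own (not the actual constexpr-union9.C test, and it assumes
C++20 constexpr std::string support) of the kind of constant evaluation involved:
libstdc++'s basic_string keeps its buffer/capacity in an anonymous union, and its
constructor does not zero-initialize the object, which is what the evaluator was
getting wrong.

```c++
// Requires -std=c++20; the short string stays in the SSO buffer, so the
// whole evaluation is a constant expression.
#include <string>

constexpr bool ok ()
{
  std::string s = "hello";   // ctor must not be treated as zero-init
  return s.size () == 5;
}

static_assert (ok ());
```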

	PR c++/120577

gcc/cp/ChangeLog:

	* constexpr.cc (cxx_eval_call_expression): Set
	CONSTRUCTOR_NO_CLEARING on initial value for ctor.
	(cxx_eval_component_reference): Make value-initialization
	of an aggregate member explicit.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/constexpr-union9.C: New test.
yfeldblum added a commit to yfeldblum/gcc that referenced this pull request Jul 18, 2025
When an exception is thrown and caught, destruction of the exception checks whether the exception was allocated in the `emergency_pool`, which is a global variable.

This global variable has a runtime constructor, which means access to it is valid only once the constructor has run during the module init phase.

But throwing and catching an exception is permitted at any time, not just during the lifetime of `main`. And this must be true whether libsupc++ is linked dynamically or statically.

LLVM Address Sanitizer aborts with `initialization-order-fiasco` when, in a binary which links libsupc++ statically, an exception is thrown and caught in some global constructor which happens to run prior to the global constructor of `emergency_pool`.
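
The triggering pattern, reduced to a sketch of my own (the function and variable names
below are hypothetical, not taken from the report):

```c++
// Throw and catch an exception while dynamic initializers run, i.e. before
// main() and possibly before emergency_pool's own constructor has run.
#include <stdexcept>

static int init ()
{
  try { throw std::runtime_error ("boom"); }
  catch (const std::exception &) { return 1; }  // __cxa_end_catch frees the exception here
  return 0;
}

static int global_flag = init ();  // runs during the module init phase

int main () { return global_flag == 1 ? 0 : 1; }
```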

```
ERROR: AddressSanitizer: initialization-order-fiasco ...
READ of size 8 at ... thread T0
SCARINESS: 14 (8-byte-read-initialization-order-fiasco)
    #0 ... in (anonymous namespace)::pool::in_pool(void*) gcc-11.x/libstdc++-v3/libsupc++/eh_alloc.cc:258
    #1 ... in __cxa_free_exception gcc-11.x/libstdc++-v3/libsupc++/eh_alloc.cc:302
    #2 ... in __gxx_exception_cleanup(_Unwind_Reason_Code, _Unwind_Exception*) gcc-11.x/libstdc++-v3/libsupc++/eh_throw.cc:51
    #3 ... in __cxa_end_catch gcc-11.x/libstdc++-v3/libsupc++/eh_catch.cc:125
    ...
    ... in __cxx_global_var_init ...
    ...
    ... in call_init.part.0 glibc-2.40/elf/dl-init.c:74:3
    ... in call_init glibc-2.40/elf/dl-init.c:120:14
    ... in _dl_init glibc-2.40/elf/dl-init.c:121:5
    ... in _dl_start_user glibc-2.40/elf/../sysdeps/aarch64/dl-start.S:46
... is located 56 bytes inside of global variable '(anonymous namespace)::emergency_pool' defined in 'gcc-11.x/libstdc++-v3/libsupc++/eh_alloc.cc' (...) of size 72
  registered at:
    #0 ... in __asan_register_globals.part.0 llvm-project/compiler-rt/lib/asan/asan_globals.cpp:393:3
    #1 ... in __asan_register_globals llvm-project/compiler-rt/lib/asan/asan_globals.cpp:392:3
    #2 ... in __asan_register_elf_globals llvm-project/compiler-rt/lib/asan/asan_globals.cpp:376:26
    #3 ... in call_init.part.0 glibc-2.40/elf/dl-init.c:74:3
    #4 ... in call_init glibc-2.40/elf/dl-init.c:120:14
    #5 ... in _dl_init glibc-2.40/elf/dl-init.c:121:5
    #6 ... in _dl_start_user glibc-2.40/elf/../sysdeps/aarch64/dl-start.S:46
```
hubot pushed a commit that referenced this pull request Jul 21, 2025
When using SVE INDEX to load an Advanced SIMD vector, we need to
take account of the different element ordering for big-endian
targets.  For example, when big-endian targets store the V4SI
constant { 0, 1, 2, 3 } in registers, 0 becomes the most
significant element, whereas INDEX always operates from the
least significant element.  A big-endian target would therefore
load V4SI { 0, 1, 2, 3 } using:

    INDEX Z0.S, #3, #-1

rather than little-endian's:

    INDEX Z0.S, #0, #1

While there, I noticed that we would only check the first vector
in a multi-vector SVE constant, which would trigger an ICE if the
other vectors turned out to be invalid.  This is pretty difficult to
trigger at the moment, since we only allow single-register modes to be
used as frontend & middle-end vector modes, but it can be seen using
the RTL frontend.

gcc/
	* config/aarch64/aarch64.cc (aarch64_sve_index_series_p): New
	function, split out from...
	(aarch64_simd_valid_imm): ...here.  Account for the different
	SVE and Advanced SIMD element orders on big-endian targets.
	Check each vector in a structure mode.

gcc/testsuite/
	* gcc.dg/rtl/aarch64/vec-series-1.c: New test.
	* gcc.dg/rtl/aarch64/vec-series-2.c: Likewise.
	* gcc.target/aarch64/sve/acle/general/dupq_2.c: Fix expected
	output for this big-endian test.
	* gcc.target/aarch64/sve/acle/general/dupq_4.c: Likewise.
	* gcc.target/aarch64/sve/vec_init_3.c: Restrict to little-endian
	targets and add more tests.
	* gcc.target/aarch64/sve/vec_init_4.c: New big-endian version
	of vec_init_3.c.

(cherry picked from commit 41c4463)
TelGome referenced this pull request in TelGome/gcc Jul 24, 2025
This patch would like to fix below format issue of trailing operator.

=== ERROR type #1: trailing operator (4 error(s)) ===
gcc/config/riscv/riscv-vector-builtins.cc:4641:39:  if ((exts &
RVV_REQUIRE_ELEN_FP_16) &&
gcc/config/riscv/riscv-vector-builtins.cc:4651:39:  if ((exts &
RVV_REQUIRE_ELEN_FP_32) &&
gcc/config/riscv/riscv-vector-builtins.cc:4661:39:  if ((exts &
RVV_REQUIRE_ELEN_FP_64) &&
gcc/config/riscv/riscv-vector-builtins.cc:4670:36:  if ((exts &
RVV_REQUIRE_ELEN_64) &&

Passed the ./contrib/check_GNU_style.sh for this patch,  and double
checked there is no other format issue of the original patch.

Committed as format change.

commit b6dc846
Author: Pan Li pan2.li@intel.com

gcc/ChangeLog:

	* config/riscv/riscv-vector-builtins.cc
	(validate_instance_type_required_extensions): Remove the
	operator from the trailing and put it to new line.

Signed-off-by: Pan Li <pan2.li@intel.com>
hubot pushed a commit that referenced this pull request Jul 25, 2025
This was failing for two reasons:

1) We were wrongly treating the basic_string constructor as
zero-initializing the object, which it doesn't.
2) Given that, when we went to look for a value for the anonymous union,
we concluded that it was value-initialized, and trying to evaluate that
broke because we weren't setting ctx->ctor for it.

This patch fixes both issues, #1 by setting CONSTRUCTOR_NO_CLEARING and #2
by inserting a new CONSTRUCTOR for the member rather than evaluate it out of
context, which is consistent with cxx_eval_store_expression.

	PR c++/120577

gcc/cp/ChangeLog:

	* constexpr.cc (cxx_eval_call_expression): Set
	CONSTRUCTOR_NO_CLEARING on initial value for ctor.
	(cxx_eval_component_reference): Make value-initialization
	of an aggregate member explicit.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/constexpr-union9.C: New test.

(cherry picked from commit f23b5df)
TelGome referenced this pull request in TelGome/gcc Jul 26, 2025
    This patch would like to fix below format issue of trailing operator.

    === ERROR type #1: trailing operator (4 error(s)) ===
    gcc/config/riscv/riscv-vector-builtins.cc:4641:39:  if ((exts &
    RVV_REQUIRE_ELEN_FP_16) &&
    gcc/config/riscv/riscv-vector-builtins.cc:4651:39:  if ((exts &
    RVV_REQUIRE_ELEN_FP_32) &&
    gcc/config/riscv/riscv-vector-builtins.cc:4661:39:  if ((exts &
    RVV_REQUIRE_ELEN_FP_64) &&
    gcc/config/riscv/riscv-vector-builtins.cc:4670:36:  if ((exts &
    RVV_REQUIRE_ELEN_64) &&

    Passed the ./contrib/check_GNU_style.sh for this patch,  and double
    checked there is no other format issue of the original patch.

    Committed as format change.

commit b6dc846
Author: Pan Li <pan2.li@intel.com>

    gcc/ChangeLog:

            * config/riscv/riscv-vector-builtins.cc
            (validate_instance_type_required_extensions): Remove the
            operator from the trailing and put it to new line.

    Signed-off-by: Pan Li <pan2.li@intel.com>
hubot pushed a commit that referenced this pull request Aug 19, 2025
When comparing constraints during correspondence checking for a using
from a partial specialization, we need to substitute the partial
specialization arguments into the constraints rather than the primary
template arguments.  Otherwise we incorrectly reject e.g. the below
testcase as ambiguous since we substitute T=int* instead of T=int
into #1's constraints and don't notice the correspondence.

This patch corrects the recent r16-2771-gb9f1cc4e119da9 fix by using
outer_template_args instead of TI_ARGS of the DECL_CONTEXT, which
should always give the correct outer arguments for substitution.

	PR c++/121351

gcc/cp/ChangeLog:

	* class.cc (add_method): Use outer_template_args when
	substituting outer template arguments into constraints.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/concepts-using7.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
hubot pushed a commit that referenced this pull request Aug 26, 2025
When comparing constraints during correspondence checking for a using
from a partial specialization, we need to substitute the partial
specialization arguments into the constraints rather than the primary
template arguments.  Otherwise we incorrectly reject e.g. the below
testcase as ambiguous since we substitute T=int* instead of T=int
into #1's constraints and don't notice the correspondence.

This patch corrects the recent r16-2771-gb9f1cc4e119da9 fix by using
outer_template_args instead of TI_ARGS of the DECL_CONTEXT, which
should always give the correct outer arguments for substitution.

	PR c++/121351

gcc/cp/ChangeLog:

	* class.cc (add_method): Use outer_template_args when
	substituting outer template arguments into constraints.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp2a/concepts-using7.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
(cherry picked from commit 0ab1e31)
hubot pushed a commit that referenced this pull request Aug 26, 2025
…op is invariant [PR121290]

Consider the example:

void
f (int *restrict x, int *restrict y, int *restrict z, int n)
{
  for (int i = 0; i < 4; ++i)
    {
      int res = 0;
      for (int j = 0; j < 100; ++j)
        res += y[j] * z[i];
      x[i] = res;
    }
}

we currently vectorize as

f:
        movi    v30.4s, 0
        ldr     q31, [x2]
        add     x2, x1, 400
.L2:
        ld1r    {v29.4s}, [x1], 4
        mla     v30.4s, v29.4s, v31.4s
        cmp     x2, x1
        bne     .L2
        str     q30, [x0]
        ret

which is not useful because by doing outer-loop vectorization we're performing
less work per iteration than we would had we done inner-loop vectorization and
simply unrolled the inner loop.

This patch teaches the cost model that if all your leaves are invariant, then
it should adjust the loop cost by * VF, since every vector iteration has at least
one lane really just doing 1 scalar.

There are a couple of ways we could have solved this, one is to increase the
unroll factor to process more iterations of the inner loop.  This removes the
need for the broadcast, however we don't support unrolling the inner loop within
the outer loop.  We only support unrolling by increasing the VF, which would
affect the outer loop as well as the inner loop.

We also don't directly support costing inner-loop vs outer-loop vectorization,
and as such we're left trying to predict/steer the cost model ahead of time to
what we think should be profitable.  This patch attempts to do so using a
heuristic which penalizes the outer-loop vectorization.

We now cost the loop as

note:  Cost model analysis:
  Vector inside of loop cost: 2000
  Vector prologue cost: 4
  Vector epilogue cost: 0
  Scalar iteration cost: 300
  Scalar outside cost: 0
  Vector outside cost: 4
  prologue iterations: 0
  epilogue iterations: 0
missed:  cost model: the vector iteration cost = 2000 divided by the scalar iteration cost = 300 is greater or equal to the vectorization factor = 4.
missed:  not vectorized: vectorization not profitable.
missed:  not vectorized: vector version will never be profitable.
missed:  Loop costings may not be worthwhile.

And subsequently generate:

.L5:
        add     w4, w4, w7
        ld1w    z24.s, p6/z, [x0, #1, mul vl]
        ld1w    z23.s, p6/z, [x0, #2, mul vl]
        ld1w    z22.s, p6/z, [x0, #3, mul vl]
        ld1w    z29.s, p6/z, [x0]
        mla     z26.s, p6/m, z24.s, z30.s
        add     x0, x0, x8
        mla     z27.s, p6/m, z23.s, z30.s
        mla     z28.s, p6/m, z22.s, z30.s
        mla     z25.s, p6/m, z29.s, z30.s
        cmp     w4, w6
        bls     .L5

and avoids the load and replicate if it knows it has enough vector pipes to do
so.

gcc/ChangeLog:

	PR target/121290
	* config/aarch64/aarch64.cc
	(class aarch64_vector_costs ): Add m_loop_fully_scalar_dup.
	(aarch64_vector_costs::add_stmt_cost): Detect invariant inner loops.
	(adjust_body_cost): Adjust final costing if m_loop_fully_scalar_dup.

gcc/testsuite/ChangeLog:

	PR target/121290
	* gcc.target/aarch64/pr121290.c: New test.
hubot pushed a commit that referenced this pull request Aug 26, 2025
The test was designed to pass with thumb2, but code generation changed
with the introduction of Low Overhead Loops, so the test can fail if
one overrides the flags when running the testsuite.

In addition, useless subtract / extension instructions require -O2 to
remove them (-O is not sufficient), so replace -O with -O2 in
dg-options.

arm_thumb2_ok_no_arm_v8_1m_lob does not do what the test needs (it can
fail because some flags conflict, rather than because lob are
supported, and we do not need to check runtime support in this test
anyway), so the patch reverts back to arm_thumb2_ok.

Finally, replace the scan-assembler directives with
check-function-bodies, checking both types of code generation (with
and without LOL).  Depending on architecture version, the two insns
    and     r0, r1, r0, lsr #1
    ands    r3, r3, #255
can be swapped, so accept both orders.

gcc/testsuite/ChangeLog:

	PR target/116445
	* gcc.target/arm/unsigned-extend-2.c: Fix dg directives.

(cherry picked from commit 20c2591)
hubot pushed a commit that referenced this pull request Oct 15, 2025
The vadcq and vsbcq patterns had two problems:
- the adc / sbc part of the pattern did not mention the use of vfpcc
- the carry calculation part should use a different unspec code

In addition, the get_fpscr_nzcvqc and set_fpscr_nzcvqc patterns were
over-cautious by using unspec_volatile when unspec is really what they
need.  Making them unspec makes it possible to remove redundant accesses to
FPSCR_nzcvqc.

With unspec_volatile, we used to generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmov.i32	q0, #0x1  @ v4si
	push	{lr}
	sub	sp, sp, #12
	vmrs	r3, FPSCR_nzcvqc    ;; [1]
	bic	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc     ;; [2]
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	ldr	r0, .L8
	ubfx	r3, r3, #29, #1
	str	r3, [sp, #4]
	bl	print_uint32x4_t
	add	sp, sp, #12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

with unspec, we generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmrs	r3, FPSCR_nzcvqc     ;; [1]
	bic	r3, r3, #536870912   ;; [3]
	vmov.i32	q0, #0x1  @ v4si
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	push	{lr}
	ubfx	r3, r3, #29, #1
	sub	sp, sp, #12
	ldr	r0, .L8
	str	r3, [sp, #4]
	bl	print_uint32x4_t
	add	sp, sp, #12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

That is, unspec in get_fpscr_nzcvqc makes it possible to:
- move [1] earlier
- delete redundant [2]

and unspec in set_fpscr_nzcvqc makes it possible to move push {lr} and the
stack manipulation later.
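
For context, a minimal sketch of the intrinsics involved (my own example, not the new
vadcq-check-carry.c test; the function name is hypothetical and it assumes an
MVE-enabled target, e.g. -march=armv8.1-m.main+mve):

```c
/* vadcq reads the incoming carry from, and writes the outgoing carry back
   to, the object its third argument points at; under the hood that is the
   carry bit kept in FPSCR_nzcvqc.  */
#include <arm_mve.h>

uint32x4_t add_with_carry (uint32x4_t a, uint32x4_t b, unsigned *carry)
{
  *carry = 0;                      /* clear the incoming carry */
  return vadcq_u32 (a, b, carry);  /* *carry holds the carry out */
}
```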

gcc/ChangeLog:

	PR target/122189
	* config/arm/iterators.md (VxCIQ_carry, VxCIQ_M_carry, VxCQ_carry)
	(VxCQ_M_carry): New iterators.
	* config/arm/mve.md (get_fpscr_nzcvqc, set_fpscr_nzcvqc): Use
	unspec instead of unspec_volatile.
	(vadciq, vadciq_m, vadcq, vadcq_m): Use vfpcc in operation.  Use a
	different unspec code for carry calculation.
	* config/arm/unspecs.md (VADCQ_U_carry, VADCQ_M_U_carry)
	(VADCQ_S_carry, VADCQ_M_S_carry, VSBCIQ_U_carry ,VSBCIQ_S_carry
	,VSBCIQ_M_U_carry ,VSBCIQ_M_S_carry ,VSBCQ_U_carry ,VSBCQ_S_carry
	,VSBCQ_M_U_carry ,VSBCQ_M_S_carry ,VADCIQ_U_carry
	,VADCIQ_M_U_carry ,VADCIQ_S_carry ,VADCIQ_M_S_carry): New unspec
	codes.

gcc/testsuite/ChangeLog:

	PR target/122189
	* gcc.target/arm/mve/intrinsics/vadcq-check-carry.c: New test.
	* gcc.target/arm/mve/intrinsics/vadcq_m_s32.c: Adjust instructions
	order.
	* gcc.target/arm/mve/intrinsics/vadcq_m_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c: Likewise.
hubot pushed a commit that referenced this pull request Oct 22, 2025
The vectorizer has learned how to do boolean reductions of masks to a C bool
for the operations OR, XOR and AND.

This implements the new optabs for Adv.SIMD.  Adv.SIMD today can already
vectorize such loops but does so through SHIFT-AND-INSERT to perform the
reductions step-wise and in order.  As an example, an OR reduction today does:

        movi    v3.4s, 0
        ext     v5.16b, v30.16b, v3.16b, #8
        orr     v5.16b, v5.16b, v30.16b
        ext     v29.16b, v5.16b, v3.16b, #4
        orr     v29.16b, v29.16b, v5.16b
        ext     v4.16b, v29.16b, v3.16b, #2
        orr     v4.16b, v4.16b, v29.16b
        ext     v3.16b, v4.16b, v3.16b, #1
        orr     v3.16b, v3.16b, v4.16b
        fmov    w1, s3
        and     w1, w1, 1

For reducing to a boolean however we don't need the stepwise reduction and can
just look at the bit patterns. For e.g. OR we now generate:

        umaxp	v3.4s, v3.4s, v3.4s
        fmov	x1, d3
        cmp	x1, 0
        cset	w0, ne

For the remaining codegen see test vect-reduc-bool-9.c.
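
To make the transformation concrete, here is a sketch of my own (not one of the new
vect-reduc-bool-*.c tests) of a loop whose mask reduction to a C bool the new optabs
cover, using the OR case:

```c
/* An OR reduction of a comparison mask into a C bool; with the new optabs
   the reduction can be done by inspecting the bit pattern rather than by
   the step-wise SHIFT-AND-INSERT sequence shown above.  */
#include <stdbool.h>

bool any_negative (const int *a, int n)
{
  bool r = false;
  for (int i = 0; i < n; i++)
    r |= a[i] < 0;
  return r;
}
```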

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (reduc_sbool_and_scal_<mode>,
	reduc_sbool_ior_scal_<mode>, reduc_sbool_xor_scal_<mode>): New.
	* config/aarch64/iterators.md (VALLI): New.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/vect-reduc-bool-1.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-2.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-3.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-4.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-5.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-6.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-7.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-8.c: New test.
	* gcc.target/aarch64/vect-reduc-bool-9.c: New test.
vathpela pushed a commit to vathpela/gcc that referenced this pull request Nov 6, 2025
…upper bits in a GPR

So pre-commit CI flagged an issue with the initial version of this patch.  In
particular the cmp-mem-const-{1,2} tests are failing.

I didn't see that in my internal testing, but that well could be an artifact of
having multiple patches touching in the same broad space that the tester is
evaluating.  If I apply just this patch I can trigger the cmp-mem-const{1,2}
failures.

The code we're getting now is actually better than we were getting before, but
the new patterns avoid the path through combine that emits the message about
narrowing the load down to a byte load, hence the failure.

Given we're getting better code now than before, I'm just skipping this test on
risc-v.    That's the only non-whitespace change since the original version of
this patch.

--

This addresses the first level issues seen in generating better performing code
for testcases derived from pr121136.  It likely regresses code size in some
cases as in many cases it selects code sequences that should be better
performing, though larger to encode.

Improving -Os code generation should remain the primary focus of pr121136.  Any
improvements in code size with this change are a nice side effect, but not the
primary goal.

--

Let's take this test (derived from the PR):

_Bool func1_0x1U (unsigned int x) { return x <= 0x1U; }

_Bool func2_0x1U (unsigned int x) { return ((x >> __builtin_ctz (0x1U + 1U)) == 0); }

_Bool func3_0x1U (unsigned int x) { return ((x / (0x1U + 1U)) == 0); }

Those should produce the same output.  We currently get these fragments for the
3 cases.  In particular note how the second variant is a two instruction
sequence.

        sltiu   a0,a0,2

        srliw   a0,a0,1
        seqz    a0,a0

        sltiu   a0,a0,2

This patch will adjust that second sequence to match the first and third and is
optimal.

Let's take another case.  This is interesting as it's right at the simm12
border:

_Bool func1_0x7ffU (unsigned long x) { return x <= 0x7ffU; }

_Bool func2_0x7ffU (unsigned long x) { return ((x >> __builtin_ctzl (0x7ffU + 1UL)) == 0); }

_Bool func3_0x7ffU (unsigned long x) { return ((x / (0x7ffU + 1UL)) == 0); }

We get:

        li      a5,2047
        sltu    a0,a5,a0
        seqz    a0,a0

        srli    a0,a0,11
        seqz    a0,a0

        li      a5,2047
        sltu    a0,a5,a0
        seqz    a0,a0

In this case the second sequence is pretty good.  Not perfect, but clearly
better than the other two.  This patch will fix the code for case #1 and case

So anyway, that's the basic motivation here.  So to be 100% clear, while the
bug is focused on code size, I'm focused on the performance of the resulting
code.

This has been tested on riscv32-elf and riscv64-elf.  It's also bootstrapped
and regression tested on the Pioneer.  The BPI won't have results for this
patch until late tomorrow.

--

	PR rtl-optimization/121136
gcc/
	* config/riscv/riscv.md: Add define_insn to test the
	upper bits of a register against zero using sltiu when
	the bits are extracted via zero_extract or logical right shift.
	Add 3->2 define_splits for gtu/leu cases testing upper bits
	against zero.

gcc/testsuite
	* gcc.target/riscv/pr121136.c: New test.
	* gcc.dg/cmp-mem-const-1.c: Skip for risc-v.
	* gcc.dg/cmp-mem-const-2.c: Likewise.
hubot pushed a commit that referenced this pull request Nov 12, 2025
The vadcq and vsbcq patterns had two problems:
- the adc / sbc part of the pattern did not mention the use of vfpcc
- the carry calculation part should use a different unspec code

In addition, the get_fpscr_nzcvqc and set_fpscr_nzcvqc patterns were
over-cautious by using unspec_volatile when unspec is really what they
need.  Making them unspec makes it possible to remove redundant accesses to
FPSCR_nzcvqc.

With unspec_volatile, we used to generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmov.i32	q0, #0x1  @ v4si
	push	{lr}
	sub	sp, sp, #12
	vmrs	r3, FPSCR_nzcvqc    ;; [1]
	bic	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc     ;; [2]
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	ldr	r0, .L8
	ubfx	r3, r3, #29, #1
	str	r3, [sp, #4]
	bl	print_uint32x4_t
	add	sp, sp, #12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

with unspec, we generate:
test_2:
	@ args = 0, pretend = 0, frame = 8
	@ frame_needed = 0, uses_anonymous_args = 0
	vmrs	r3, FPSCR_nzcvqc     ;; [1]
	bic	r3, r3, #536870912   ;; [3]
	vmov.i32	q0, #0x1  @ v4si
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q3, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	orr	r3, r3, #536870912
	vmsr	FPSCR_nzcvqc, r3
	vadc.i32	q0, q0, q0
	vmrs	r3, FPSCR_nzcvqc
	push	{lr}
	ubfx	r3, r3, #29, #1
	sub	sp, sp, #12
	ldr	r0, .L8
	str	r3, [sp, #4]
	bl	print_uint32x4_t
	add	sp, sp, #12
	@ sp needed
	pop	{pc}
.L9:
	.align	2
.L8:
	.word	.LC1

That is, unspec in get_fpscr_nzcvqc makes it possible to:
- move [1] earlier
- delete redundant [2]

and unspec in set_fpscr_nzcvqc makes it possible to move push {lr} and the
stack manipulation later.

gcc/ChangeLog:

	PR target/122189
	* config/arm/iterators.md (VxCIQ_carry, VxCIQ_M_carry, VxCQ_carry)
	(VxCQ_M_carry): New iterators.
	* config/arm/mve.md (get_fpscr_nzcvqc, set_fpscr_nzcvqc): Use
	unspec instead of unspec_volatile.
	(vadciq, vadciq_m, vadcq, vadcq_m): Use vfpcc in operation.  Use a
	different unspec code for carry calculation.
	* config/arm/unspecs.md (VADCQ_U_carry, VADCQ_M_U_carry)
	(VADCQ_S_carry, VADCQ_M_S_carry, VSBCIQ_U_carry ,VSBCIQ_S_carry
	,VSBCIQ_M_U_carry ,VSBCIQ_M_S_carry ,VSBCQ_U_carry ,VSBCQ_S_carry
	,VSBCQ_M_U_carry ,VSBCQ_M_S_carry ,VADCIQ_U_carry
	,VADCIQ_M_U_carry ,VADCIQ_S_carry ,VADCIQ_M_S_carry): New unspec
	codes.

gcc/testsuite/ChangeLog:

	PR target/122189
	* gcc.target/arm/mve/intrinsics/vadcq-check-carry.c: New test.
	* gcc.target/arm/mve/intrinsics/vadcq_m_s32.c: Adjust instructions
	order.
	* gcc.target/arm/mve/intrinsics/vadcq_m_u32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_s32.c: Likewise.
	* gcc.target/arm/mve/intrinsics/vsbcq_m_u32.c: Likewise.

	(cherry picked from commits
	0272058 and
	697ccad)