forked from gcc-mirror/gcc
gcc-6: Add fix for missing no-PIE flags #1
Merged
Fixes the build on a hardened PaX host with gcc-5 (linker errors on relocations). Completes the no-PIE configuration by adding the flags to the ALL_* flags variables. Borrowed from the Gentoo gcc patches; tested on two hardened amd64 hosts.

Upstream-Status: Inappropriate [configuration patching artifact]
Committed by: Gentoo Toolchain Project <toolchain@gentoo.org>
Signed-off-by: Stephen Arnold <stephen.arnold42@gmail.com>
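For context, a hedged illustration of the failure mode the patch targets; the file, commands, and diagnostics below are illustrative assumptions, not taken from the PR. On a hardened, PIE-by-default toolchain, an object built as non-PIC cannot be linked into a PIE, so every compile and link step of the bootstrap has to see the no-PIE flags, which is why they must reach all of the ALL_* flags variables rather than only some of them.

/* pie-demo.c -- a hypothetical example; with -fno-pic the address of a
   global is materialized with an absolute relocation, which a PIE link
   rejects.  */
int counter;

int *
counter_address (void)
{
  return &counter;
}

int
main (void)
{
  return *counter_address ();
}

/* Illustrative commands on a PIE-by-default host:
     gcc -fno-pic -c pie-demo.c      # non-PIC object
     gcc pie-demo.o                  # PIE link by default: relocation error
     gcc -no-pie pie-demo.o          # links cleanly
   The bootstrap fails the same way when only part of the build carries
   the no-PIE options.  */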
kraj pushed a commit that referenced this pull request on May 29, 2017
* pt.c (most_specialized_instantiation): Cope with duplicate instantiations. PR c++/80891 (#1) * g++.dg/lookup/pr80891-1.C: New. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@248573 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on May 29, 2017
* cp-tree.h (lookup_maybe_add): Add DEDUPING argument. * name-lookup.c (name_lookup): Add deduping field. (name_lookup::preserve_state, name_lookup::restore_state): Deal with deduping. (name_lookup::add_overload): New. (name_lookup::add_value, name_lookup::add_fns): Call add_overload. (name_lookup::search_adl): Set deduping. Don't unmark here. * pt.c (most_specialized_instantiation): Revert previous change, Assert not given duplicates. * tree.c (lookup_mark): Just mark the underlying decls. (lookup_maybe_add): Dedup using marked decls. PR c++/80891 (#5) * g++.dg/lookup/pr80891-5.C: New. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@248578 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Apr 19, 2018
When -fcf-protection -mcet is used, I got FAIL: g++.dg/eh/sighandle.C (gdb) bt #0 _Unwind_RaiseException (exc=exc@entry=0x416ed0) at /export/gnu/import/git/sources/gcc/libgcc/unwind.inc:140 #1 0x00007ffff7d9936b in __cxxabiv1::__cxa_throw (obj=<optimized out>, tinfo=0x403dd0 <typeinfo for int@@CXXABI_1.3>, dest=0x0) at /export/gnu/import/git/sources/gcc/libstdc++-v3/libsupc++/eh_throw.cc:90 gcc-mirror#2 0x0000000000401255 in sighandler (signo=11, si=0x7fffffffd6f8, uc=0x7fffffffd5c0) at /export/gnu/import/git/sources/gcc/gcc/testsuite/g++.dg/eh/sighandle.C:9 gcc-mirror#3 <signal handler called> <<<< Signal frame which isn't on shadow stack gcc-mirror#4 dosegv () at /export/gnu/import/git/sources/gcc/gcc/testsuite/g++.dg/eh/sighandle.C:14 gcc-mirror#5 0x00000000004012e3 in main () at /export/gnu/import/git/sources/gcc/gcc/testsuite/g++.dg/eh/sighandle.C:30 (gdb) p frames $6 = 5 (gdb) frame count should be 4, not 5. This patch skips signal frames when unwinding shadow stack. gcc/testsuite/ PR libgcc/85334 * g++.dg/torture/pr85334.C: New test. libgcc/ PR libgcc/85334 * unwind-generic.h (_Unwind_Frames_Increment): New. * config/i386/shadow-stack-unwind.h (_Unwind_Frames_Increment): Likewise. * unwind.inc (_Unwind_RaiseException_Phase2): Increment frame count with _Unwind_Frames_Increment. (_Unwind_ForcedUnwind_Phase2): Likewise. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@259502 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on May 24, 2018
This fixes a long-standing quirk present in the layout information for record types displayed by the -gnatR3 switch: when a component has a variable (starting) position, its corresponding line in the output has an irregular and awkward format. After this change, the format is the same as in all the other cases. For the following record: type R (m : natural) is record s : string (1 .. m); r : natural; b : boolean; end record; for R'alignment use 4; pragma Pack (R); the output of -gnatR3 used to be: for R'Object_Size use 17179869248; for R'Value_Size use ((#1 + 8) * 8); for R'Alignment use 4; for R use record m at 0 range 0 .. 30; s at 4 range 0 .. ((#1 * 8)) - 1; r at bit offset (((#1 + 4) * 8)) size in bits = 31 b at bit offset ((((#1 + 7) * 8) + 7)) size in bits = 1 end record; and is changed into: for R'Object_Size use 17179869248; for R'Value_Size use ((#1 + 8) * 8); for R'Alignment use 4; for R use record m at 0 range 0 .. 30; s at 4 range 0 .. ((#1 * 8)) - 1; r at (#1 + 4) range 0 .. 30; b at (#1 + 7) range 7 .. 7; end record; 2018-05-24 Eric Botcazou <ebotcazou@adacore.com> gcc/ada/ * fe.h (Set_Normalized_First_Bit): Declare. (Set_Normalized_Position): Likewise. * repinfo.adb (List_Record_Layout): Do not use irregular output for a variable position. Fix minor spacing issue. * gcc-interface/decl.c (annotate_rep): If a field has a variable offset, compute the normalized position and annotate it in addition to the bit offset. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@260669 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Nov 14, 2018
This adds a 4th information level for the -gnatR output, where relevant compiler-generated types are listed in addition to the information already output by -gnatR3. For the following package P: package P is type Arr0 is array (Positive range <>) of Boolean; type Rec (D1 : Positive; D2 : Boolean) is record C1 : Integer; C2 : Arr0 (1 .. D1); case D2 is when False => C3 : Character; when True => C4 : String (1 .. 3); C5 : Float; end case; end record; type Arr1 is array (1 .. 8) of Rec (1, True); end P; the output generated by -gnatR4 must be: Representation information for unit P (spec) -------------------------------------------- for Arr0'Alignment use 1; for Arr0'Component_Size use 8; for Rec'Object_Size use 17179869344; for Rec'Value_Size use (if (gcc-mirror#2 != 0) then ((((#1 + 15) & -4) + 8) * 8) else ((((#1 + 15) & -4) + 1) * 8) end); for Rec'Alignment use 4; for Rec use record D1 at 0 range 0 .. 31; D2 at 4 range 0 .. 7; C1 at 8 range 0 .. 31; C2 at 12 range 0 .. ((#1 * 8)) - 1; C3 at ((#1 + 15) & -4) range 0 .. 7; C4 at ((#1 + 15) & -4) range 0 .. 23; C5 at (((#1 + 15) & -4) + 4) range 0 .. 31; end record; for Arr1'Size use 1536; for Arr1'Alignment use 4; for Arr1'Component_Size use 192; for Tarr1c'Size use 192; for Tarr1c'Alignment use 4; for Tarr1c use record D1 at 0 range 0 .. 31; D2 at 4 range 0 .. 7; C1 at 8 range 0 .. 31; C2 at 12 range 0 .. 7; C4 at 16 range 0 .. 23; C5 at 20 range 0 .. 31; end record; 2018-11-14 Eric Botcazou <ebotcazou@adacore.com> gcc/ada/ * doc/gnat_ugn/building_executable_programs_with_gnat.rst (-gnatR): Document new -gnatR4 level. * gnat_ugn.texi: Regenerate. * opt.ads (List_Representation_Info): Bump upper bound to 4. * repinfo.adb: Add with clause for GNAT.HTable. (Relevant_Entities_Size): New constant. (Entity_Header_Num): New type. (Entity_Hash): New function. (Relevant_Entities): New set implemented with GNAT.HTable. (List_Entities): Also list compiled-generated entities present in the Relevant_Entities set. Consider that the Component_Type of an array type is relevant. (List_Rep_Info): Reset Relevant_Entities for each unit. * switch-c.adb (Scan_Front_End_Switches): Add support for -gnatR4. * switch-m.adb (Normalize_Compiler_Switches): Likewise * usage.adb (Usage): Likewise. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@266131 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Apr 12, 2019
kraj pushed a commit that referenced this pull request on May 7, 2019
…D_EXPR) using SVE. Given this input code: int sum_abs (uint8_t *restrict x, uint8_t *restrict y, int n) { int sum = 0; for (int i = 0; i < n; i++) { sum += __builtin_abs (x[i] - y[i]); } return sum; } The resulting SVE code is: 0000000000000000 <sum_abs>: 0: 7100005f cmp w2, #0x0 4: 5400026d b.le 50 <sum_abs+0x50> 8: d2800003 mov x3, #0x0 // #0 c: 93407c42 sxtw x2, w2 10: 2538c002 mov z2.b, #0 14: 25221fe0 whilelo p0.b, xzr, x2 18: 2538c023 mov z3.b, #1 1c: 2518e3e1 ptrue p1.b 20: a4034000 ld1b {z0.b}, p0/z, [x0, x3] 24: a4034021 ld1b {z1.b}, p0/z, [x1, x3] 28: 0430e3e3 incb x3 2c: 0520c021 sel z1.b, p0, z1.b, z0.b 30: 25221c60 whilelo p0.b, x3, x2 34: 040d0420 uabd z0.b, p1/m, z0.b, z1.b 38: 44830402 udot z2.s, z0.b, z3.b 3c: 54ffff21 b.ne 20 <sum_abs+0x20> // b.any 40: 2598e3e0 ptrue p0.s 44: 04812042 uaddv d2, p0, z2.s 48: 1e260040 fmov w0, s2 4c: d65f03c0 ret 50: 1e2703e2 fmov s2, wzr 54: 1e260040 fmov w0, s2 58: d65f03c0 ret Notice how udot is used inside a fully masked loop. gcc/Changelog: 2019-05-07 Alejandro Martinez <alejandro.martinezvicente@arm.com> * config/aarch64/aarch64-sve.md (<su>abd<mode>_3): New define_expand. (aarch64_<su>abd<mode>_3): Likewise. (*aarch64_<su>abd<mode>_3): New define_insn. (<sur>sad<vsi2qi>): New define_expand. * config/aarch64/iterators.md: Added MAX_OPP attribute. * tree-vect-loop.c (use_mask_by_cond_expr_p): Add SAD_EXPR. (build_vect_cond_expr): Likewise. gcc/testsuite/Changelog: 2019-05-07 Alejandro Martinez <alejandro.martinezvicente@arm.com> * gcc.target/aarch64/sve/sad_1.c: New test for sum of absolute differences. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@270975 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Aug 2, 2019
Introduce exception handler ABI #1 to ensure single release, no access after release of reraised Machine_Occurrences, and no failure to re-reraise a Machine_Occurrence. Unlike Ada exceptions, foreign exceptions do not get a new Machine_Occurrence upon reraise, but each handler would delete the exception upon completion, normal or exceptional, save for the case of a 'raise;' statement within the handler, that avoided the delete by clearing the exception pointer that the cleanup would use to release it. The cleared exception pointer might then be used by a subsequent reraise within the same handler. Get_Current_Excep.all would also expose the Machine_Occurrence to reuse by Reraise_Occurrence, even for native exceptions. Under ABI #1, Begin_Handler_v1 claims responsibility for releasing an exception by saving its cleanup and setting it to Claimed_Cleanup. End_Handler_v1 restores the cleanup and runs it, as long as it isn't still Claimed_Cleanup (which indicates an enclosing handler has already claimed responsibility for releasing it), and as long as the same exception is not being propagated up (the next handler of the propagating exception will then claim responsibility for releasing it), so reraise no longer needs to clear the exception pointer, and it can just propagate the exception, just like Reraise_Occurrence. ABI #1 is fully interoperable with ABI #0, i.e., exception handlers that call the #0 primitives can be linked together with ones that call the #1 primitives, and they will not misbehave. When a #1 handler claims responsibility for releasing an exception, even #0 reraises dynamically nested within it will refrain from releasing it. However, when a #0 handler is a handler of a foreign exception that would have been responsible for releasing it with #1, a Reraise_Occurrence of that foreign or other Machine_Occurrence-carrying exception may still cause the exception to be released multiple times, and to be used after it is first released, even if other handlers of the foreign exception use #1. for gcc/ada/ChangeLog * libgnat/a-exexpr.adb (Begin_Handler_v1, End_Handler_v1): New. (Claimed_Cleanup): New. (Begin_Handler, End_Handler): Document. * gcc-interface/trans.c (gigi): Switch to exception handler ABI #1. (Exception_Handler_to_gnu_gcc): Save the original cleanup returned by begin handler, pass it to end handler, and use EH_ELSE_EXPR to pass a propagating exception to end handler. (gnat_to_gnu): Leave the exception pointer alone for reraise. (add_cleanup): Handle EH_ELSE_EXPR, require it by itself. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@274029 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Aug 9, 2019
…tions The addsi3_compare_op[12] patterns currently only have constraints to pick the 32-bit variants of the instructions. Although the assembler may sometimes opportunistically match a 16-bit t2 instruction, there's no real control over that within the compiler. Consequently we might emit a 32-bit adds instruction with a 16-bit subs instruction would serve equally well. We do, of course still have to be careful about the small number of boundary cases by controlling the order quite carefully. This patch adds the constraints and templates to match the t2 16-bit variants of these instructions. Now, for example, we can generate subs r0, r0, #1 // 16-bit instruction instead of adds r0, r0, #1 // 32-bit instruction. *confit/arm/arm.md (addsi3_compare_op1): Add 16-bit thumb-2 variants. (addsi3_compare_op2): Likewise. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@274237 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Aug 22, 2019
Like the logical operations, expand all shifts early rather than only sometimes. The Neon shift expansions are never emitted (not even with -fneon-for-64bits), so they are not useful. So all the late expansions and Neon shift patterns can be removed, and shifts are more optimized as a result. Since some extend patterns use Neon DImode shifts, remove the Neon extend variants and related splits. A simple example now generates the same efficient code after this patch with -mfpu=neon and -mfpu=vfp (previously just the fact of having Neon enabled resulted inefficient code for no reason). unsigned long long f(unsigned long long x, unsigned long long y) { return x & (y >> 33); } Before: strd r4, r5, [sp, #-8]! lsr r4, r3, #1 mov r5, #0 and r1, r1, r5 and r0, r0, r4 ldrd r4, r5, [sp] add sp, sp, gcc-mirror#8 bx lr After: and r0, r0, r3, lsr #1 mov r1, #0 bx lr Bootstrap and regress OK on arm-none-linux-gnueabihf --with-cpu=cortex-a57 gcc/ * config/arm/iterators.md (qhs_extenddi_cstr): Update. (qhs_extenddi_cstr): Likewise. * config/arm/arm.md (ashldi3): Always expand early. (ashlsi3): Likewise. (ashrsi3): Likewise. (zero_extend<mode>di2): Remove Neon variants. (extend<mode>di2): Likewise. * config/arm/neon.md (ashldi3_neon_noclobber): Remove. (signed_shift_di3_neon): Likewise. (unsigned_shift_di3_neon): Likewise. (ashrdi3_neon_imm_noclobber): Likewise. (lshrdi3_neon_imm_noclobber): Likewise. (<shift>di3_neon): Likewise. (split extend): Remove DI extend split patterns. gcc/testsuite/ * gcc.target/arm/neon-extend-1.c: Remove test. * gcc.target/arm/neon-extend-2.c: Remove test. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@274824 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Oct 18, 2019
In almost all cases it is better to handle inequality handling against constants by transforming comparisons of the form (reg <GE/LT/GEU/LTU> const) into (reg <GT/LE/GTU/LEU> (const+1)). However, there are many cases that we could handle but currently failed to do so because we forced the constant into a register too early in the pattern expansion. To permit this to be done we need to defer forcing the constant into a register until after we've had the chance to do the transform - in some cases that may even mean that we no-longer need to force the constant into a register at all. For example, on Arm, the case: _Bool f8 (unsigned long long a) { return a > 0xffffffff; } previously compiled to mov r3, #0 cmp r1, r3 mvn r2, #0 cmpeq r0, r2 movhi r0, #1 movls r0, #0 bx lr But now compiles to cmp r1, #1 cmpeq r0, #0 movcs r0, #1 movcc r0, #0 bx lr Which although not yet completely optimal, is certainly better than previously. * config/arm/arm.md (cbranchdi4): Accept reg_or_int_operand for operand 2. (cstoredi4): Similarly, but for operand 3. * config/arm/arm.c (arm_canoncialize_comparison): Allow canonicalization of unsigned compares with a constant on Arm. Prefer using const+1 and adjusting the comparison over swapping the operands whenever the original constant was not valid. (arm_gen_dicompare_reg): If Y is not a valid operand, force it to a register here. (arm_validize_comparison): Do not force invalid DImode operands to registers here. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@277178 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Oct 22, 2019
On Arm we have both carry and borrow operations, but borrow is essentially '~carry'. Of course, with boolean logic ~carry is also 1-carry. GCC transforms (1 - X - LTU (cc, 0)) into (GEU (cc, 0) - X) Now the former matches a real insn in Arm state, using the RSC instruction with #1 as the immediate, but we currently do not recognize the canonicalized form. Nevertheless, given the above logic, this turns out to be quite straight forward as the original expression matches arm_borrow_operation and the revised form can be used with arm_carry_operation. Since we match this new pattern we also update rtx_costs to handle it. * config/arm/arm.md (rsbsi_carryin_reg): New pattern. * config/arm/arm.c (arm_rtx_costs_internal, case MINUS): Handle subtraction from a carry operation. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@277290 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Oct 31, 2019
…piling for Thumb2 Thumb2 code now uses the Arm implementation of legitimize_address. That code has a case to handle addresses that are absolute CONST_INT values, which is a common use case in deeply embedded targets (eg: void *p = (void*)0x12345678). Since thumb has very limited negative offsets from a constant, we want to avoid forming a CSE base that will then be used with a negative value. This was reported upstream originally in https://gcc.gnu.org/ml/gcc-help/2019-10/msg00122.html For example, void test1(void) { volatile uint32_t * const p = (uint32_t *) 0x43fe1800; p[3] = 1; p[4] = 2; p[1] = 3; p[7] = 4; p[0] = 6; } With the new code, instead of ldr r3, .L2 subw r2, r3, #2035 movs r1, #1 str r1, [r2] subw r2, r3, #2031 movs r1, gcc-mirror#2 str r1, [r2] subw r2, r3, #2043 movs r1, gcc-mirror#3 str r1, [r2] subw r2, r3, #2019 movs r1, gcc-mirror#4 subw r3, r3, #2047 str r1, [r2] movs r2, gcc-mirror#6 str r2, [r3] bx lr We now get ldr r3, .L2 movs r2, #1 str r2, [r3, #2060] movs r2, gcc-mirror#2 str r2, [r3, #2064] movs r2, gcc-mirror#3 str r2, [r3, #2052] movs r2, gcc-mirror#4 str r2, [r3, #2076] movs r2, gcc-mirror#6 str r2, [r3, #2048] bx lr * config/arm/arm.c (arm_legitimize_address): Don't form negative offsets from a CONST_INT address when TARGET_THUMB2. git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@277677 138bc75d-0d04-0410-961f-82ee72b054a4
kraj pushed a commit that referenced this pull request on Jun 17, 2020
Made apparent by recent commit dc70315 "openmp: Implement discovery of implicit declare target to clauses": +FAIL: libgomp.c/target-39.c (internal compiler error) +FAIL: libgomp.c/target-39.c (test for excess errors) +UNRESOLVED: libgomp.c/target-39.c compilation failed to produce executable This is in a '--enable-offload-targets=[...],hsa' build, with '-foffload=hsa' enabled (by default). during GIMPLE pass: hsagen source-gcc/libgomp/testsuite/libgomp.c/target-39.c: In function ‘main._omp_fn.0.hsa.0’: source-gcc/libgomp/testsuite/libgomp.c/target-39.c:23:11: internal compiler error: Segmentation fault 23 | #pragma omp target map(from:err) | ^~~ [...] GDB: Program received signal SIGSEGV, Segmentation fault. fndecl_built_in_p (node=0x0, name=BUILT_IN_PREFETCH) at [...]/source-gcc/gcc/tree.h:6267 6267 return (fndecl_built_in_p (node, BUILT_IN_NORMAL) (gdb) bt #0 fndecl_built_in_p (node=0x0, name=BUILT_IN_PREFETCH) at [...]/source-gcc/gcc/tree.h:6267 #1 0x0000000000b19739 in gen_hsa_insns_for_call (stmt=stmt@entry=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at [...]/source-gcc/gcc/hsa-gen.c:5304 gcc-mirror#2 0x0000000000b1aca7 in gen_hsa_insns_for_gimple_stmt (stmt=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at [...]/source-gcc/gcc/hsa-gen.c:5770 gcc-mirror#3 0x0000000000b1bd21 in gen_body_from_gimple () at [...]/source-gcc/gcc/hsa-gen.c:5999 gcc-mirror#4 0x0000000000b1dbd2 in generate_hsa (kernel=<optimized out>) at [...]/source-gcc/gcc/hsa-gen.c:6596 gcc-mirror#5 0x0000000000b1de66 in (anonymous namespace)::pass_gen_hsail::execute (this=0x2a2aac0) at [...]/source-gcc/gcc/hsa-gen.c:6680 gcc-mirror#6 0x0000000000d06f90 in execute_one_pass (pass=pass@entry=0x2a2aac0) at [...]/source-gcc/gcc/passes.c:2502 [...] (gdb) up #1 0x0000000000b19739 in gen_hsa_insns_for_call (stmt=stmt@entry=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at /home/thomas/tmp/source/gcc/build/track-slim-omp/source-gcc/gcc/hsa-gen.c:5304 5304 if (fndecl_built_in_p (function_decl, BUILT_IN_PREFETCH)) (gdb) print function_decl $1 = (tree) 0x0 (gdb) list 5299 if (!gimple_call_builtin_p (stmt, BUILT_IN_NORMAL)) 5300 { 5301 tree function_decl = gimple_call_fndecl (stmt); 5302 /* Prefetch pass can create type-mismatching prefetch builtin calls which 5303 fail the gimple_call_builtin_p test above. Handle them here. */ 5304 if (fndecl_built_in_p (function_decl, BUILT_IN_PREFETCH)) 5305 return; 5306 5307 if (function_decl == NULL_TREE) 5308 { The problem is present already since 2016-11-23 commit 56b1c60 (r242761) "Merge from HSA branch to trunk", and the fix obvious enough. gcc/ * hsa-gen.c (gen_hsa_insns_for_call): Move 'function_decl == NULL_TREE' check earlier. gcc/testsuite/ * c-c++-common/gomp/hsa-indirect-call-1.c: New file.
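A minimal sketch of the reordering the ChangeLog entry describes -- checking for a NULL declaration before asking whether it is a particular builtin -- written here as standalone C with made-up stand-in types, not the actual hsa-gen.c code:

#include <stdio.h>

struct decl { int builtin_code; };             /* stands in for 'tree' */
enum { NOT_BUILTIN = 0, BUILT_IN_PREFETCH = 1 };

static int
is_prefetch_builtin (const struct decl *d)
{
  return d->builtin_code == BUILT_IN_PREFETCH; /* would crash on NULL */
}

/* After the fix: bail out on a NULL declaration (an indirect call) before
   testing for the prefetch builtin, i.e. "Move 'function_decl == NULL_TREE'
   check earlier"; the pre-fix code did these two steps in the opposite
   order and dereferenced NULL.  */
static const char *
classify_call (const struct decl *function_decl)
{
  if (function_decl == NULL)
    return "indirect call";
  if (is_prefetch_builtin (function_decl))
    return "prefetch builtin (ignored)";
  return "ordinary call";
}

int
main (void)
{
  struct decl fn = { NOT_BUILTIN };
  printf ("%s\n", classify_call (&fn));
  printf ("%s\n", classify_call (NULL));       /* safe after the reordering */
  return 0;
}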
kraj pushed a commit that referenced this pull request on Jun 17, 2020
Made apparent by recent commit dc70315 "openmp: Implement discovery of implicit declare target to clauses": +FAIL: libgomp.c/target-39.c (internal compiler error) +FAIL: libgomp.c/target-39.c (test for excess errors) +UNRESOLVED: libgomp.c/target-39.c compilation failed to produce executable This is in a '--enable-offload-targets=[...],hsa' build, with '-foffload=hsa' enabled (by default). during GIMPLE pass: hsagen source-gcc/libgomp/testsuite/libgomp.c/target-39.c: In function ‘main._omp_fn.0.hsa.0’: source-gcc/libgomp/testsuite/libgomp.c/target-39.c:23:11: internal compiler error: Segmentation fault 23 | #pragma omp target map(from:err) | ^~~ [...] GDB: Program received signal SIGSEGV, Segmentation fault. fndecl_built_in_p (node=0x0, name=BUILT_IN_PREFETCH) at [...]/source-gcc/gcc/tree.h:6267 6267 return (fndecl_built_in_p (node, BUILT_IN_NORMAL) (gdb) bt #0 fndecl_built_in_p (node=0x0, name=BUILT_IN_PREFETCH) at [...]/source-gcc/gcc/tree.h:6267 #1 0x0000000000b19739 in gen_hsa_insns_for_call (stmt=stmt@entry=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at [...]/source-gcc/gcc/hsa-gen.c:5304 gcc-mirror#2 0x0000000000b1aca7 in gen_hsa_insns_for_gimple_stmt (stmt=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at [...]/source-gcc/gcc/hsa-gen.c:5770 gcc-mirror#3 0x0000000000b1bd21 in gen_body_from_gimple () at [...]/source-gcc/gcc/hsa-gen.c:5999 gcc-mirror#4 0x0000000000b1dbd2 in generate_hsa (kernel=<optimized out>) at [...]/source-gcc/gcc/hsa-gen.c:6596 gcc-mirror#5 0x0000000000b1de66 in (anonymous namespace)::pass_gen_hsail::execute (this=0x2a2aac0) at [...]/source-gcc/gcc/hsa-gen.c:6680 gcc-mirror#6 0x0000000000d06f90 in execute_one_pass (pass=pass@entry=0x2a2aac0) at [...]/source-gcc/gcc/passes.c:2502 [...] (gdb) up #1 0x0000000000b19739 in gen_hsa_insns_for_call (stmt=stmt@entry=0x7ffff693b200, hbb=hbb@entry=0x2b152c0) at /home/thomas/tmp/source/gcc/build/track-slim-omp/source-gcc/gcc/hsa-gen.c:5304 5304 if (fndecl_built_in_p (function_decl, BUILT_IN_PREFETCH)) (gdb) print function_decl $1 = (tree) 0x0 (gdb) list 5299 if (!gimple_call_builtin_p (stmt, BUILT_IN_NORMAL)) 5300 { 5301 tree function_decl = gimple_call_fndecl (stmt); 5302 /* Prefetch pass can create type-mismatching prefetch builtin calls which 5303 fail the gimple_call_builtin_p test above. Handle them here. */ 5304 if (fndecl_built_in_p (function_decl, BUILT_IN_PREFETCH)) 5305 return; 5306 5307 if (function_decl == NULL_TREE) 5308 { The problem is present already since 2016-11-23 commit 56b1c60 (r242761) "Merge from HSA branch to trunk", and the fix obvious enough. gcc/ * hsa-gen.c (gen_hsa_insns_for_call): Move 'function_decl == NULL_TREE' check earlier. gcc/testsuite/ * c-c++-common/gomp/hsa-indirect-call-1.c: New file. (cherry picked from commit 973bce0)
kraj pushed a commit that referenced this pull request on Aug 17, 2020
Since 21cfe72 there's a new OMP_LIST_NONTEMPORAL value, but it was missing in resolve_omp_clauses static array that is defined at the function beginning: ./xgcc -B. /home/marxin/Programming/gcc/gcc/testsuite/gfortran.dg/gomp/nontemporal-1.f90 -fopenmp -c ../../gcc/fortran/openmp.c:4737:28: runtime error: index 21 out of bounds for type 'char *[21]' #0 0xbdb956 in resolve_omp_clauses ../../gcc/fortran/openmp.c:4737 #1 0xbeb076 in resolve_omp_do ../../gcc/fortran/openmp.c:6139 #2 0xbf029a in gfc_resolve_omp_directive(gfc_code*, gfc_namespace*) ../../gcc/fortran/openmp.c:6792 #3 0xcb6363 in gfc_resolve_code(gfc_code*, gfc_namespace*) ../../gcc/fortran/resolve.c:12185 #4 0xcef8cf in resolve_codes ../../gcc/fortran/resolve.c:17303 gcc/fortran/ChangeLog: * openmp.c (resolve_omp_clauses): Add NONTEMPORAL to clause names.
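The bug class is easy to show in miniature: an enum gains a new member but a parallel name table keeps its old size, so indexing the table with the new value runs past the end. A hedged C illustration follows; the names are made up, not the gfortran code:

#include <stdio.h>

enum omp_list { LIST_PRIVATE, LIST_SHARED, LIST_NONTEMPORAL, LIST_NUM };

int
main (void)
{
  /* BUG: sized for the old enum; indexing it with LIST_NONTEMPORAL would
     go one past the end, the same shape as the "index 21 out of bounds
     for type 'char *[21]'" report above.  */
  static const char *clause_names_buggy[LIST_NUM - 1] = { "PRIVATE", "SHARED" };

  /* FIX: add the new name so the array covers every enum value.  */
  static const char *clause_names_fixed[LIST_NUM] =
    { "PRIVATE", "SHARED", "NONTEMPORAL" };

  (void) clause_names_buggy;
  printf ("%s\n", clause_names_fixed[LIST_NONTEMPORAL]);
  return 0;
}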
kraj pushed a commit that referenced this pull request on Aug 22, 2020
PR analyzer/94851 reports various false "NULL dereference" diagnostics. The first case (comment #1) affects GCC 10.2 but no longer affects trunk; I believe it was fixed by the state rewrite of r11-2694-g808f4dfeb3a95f50f15e71148e5c1067f90a126d. The patch adds a regression test for this case. The other cases (comment #3 and comment #4) still affect trunk. In both cases, the && in a conditional is optimized to bitwise & _1 = p_4 != 0B; _2 = p_4 != q_6(D); _3 = _1 & _2; and the analyzer fails to fold this for the case where one (or both) of the conditionals is false, and thus erroneously considers the path where "p" is non-NULL despite being passed a NULL value. Fix this by implementing folding for this case. gcc/analyzer/ChangeLog: PR analyzer/94851 * region-model-manager.cc (region_model_manager::maybe_fold_binop): Fold bitwise "& 0" to 0. gcc/testsuite/ChangeLog: PR analyzer/94851 * gcc.dg/analyzer/pr94851-1.c: New test. * gcc.dg/analyzer/pr94851-3.c: New test. * gcc.dg/analyzer/pr94851-4.c: New test.
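A hedged sketch of the source pattern being diagnosed (not the actual pr94851 test): once short-circuit && is lowered to a bitwise &, the analyzer must know that anything ANDed with a false (0) condition is 0 before it can rule out the NULL path:

#include <stddef.h>

int
guarded_store (int *p, int *q)
{
  /* At the gimple level this becomes
       _1 = p != 0;  _2 = p != q;  _3 = _1 & _2;
     When p is NULL, _1 is 0, so _3 must fold to 0 ("& 0" -> 0) and the
     store below is unreachable; without that fold the analyzer warned
     about a NULL dereference here.  */
  if (p != NULL && p != q)
    {
      *p = 1;
      return 1;
    }
  return 0;
}

int
main (void)
{
  return guarded_store (NULL, NULL);
}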
kraj pushed a commit that referenced this pull request on Oct 1, 2020
This PR points out that we accept template<typename T> struct tuple { tuple(T); }; // #1 template<typename T> explicit tuple(T t) -> tuple<T>; // gcc-mirror#2 tuple t = { 1 }; despite the 'explicit' deduction guide in a copy-list-initialization context. That's because in deduction_guides_for we first find the user-defined deduction guide (gcc-mirror#2), and then ctor_deduction_guides_for creates artificial deduction guides: one from the tuple(T) constructor and a copy guide. So we end up with these three guides: (1) template<class T> tuple(T) -> tuple<T> [DECL_NONCONVERTING_P] (2) template<class T> tuple(tuple<T>) -> tuple<T> (3) template<class T> tuple(T) -> tuple<T> Then, in do_class_deduction, we prune this set, and get rid of (1). Then overload resolution selects (3) and we succeed. But [over.match.list]p1 says "In copy-list-initialization, if an explicit constructor is chosen, the initialization is ill-formed." It also goes on to say that this differs from other situations where only converting constructors are considered for copy-initialization. Therefore for list-initialization we consider explicit constructors and complain if one is chosen. E.g. convert_like_internal/ck_user can give an error. So my logic runs that we should not prune the deduction_guides_for guides in a copy-list-initialization context, and only complain if we actually choose an explicit deduction guide. This matches clang++/EDG/msvc++. gcc/cp/ChangeLog: PR c++/90210 * pt.c (do_class_deduction): Don't prune explicit deduction guides in copy-list-initialization. In copy-list-initialization, if an explicit deduction guide was selected, give an error. gcc/testsuite/ChangeLog: PR c++/90210 * g++.dg/cpp1z/class-deduction73.C: New test.
kraj pushed a commit that referenced this pull request on Oct 7, 2020
This patch improves block-scope extern handling by always injecting a hidden copy into the enclosing namespace (or using a match already there). This hidden copy will be revealed if the user explicitly declares it later. We can get from the DECL_LOCAL_DECL_P local extern to the alias via DECL_LOCAL_DECL_ALIAS. This fixes several bugs and removes the kludgy per-function extern_decl_map. We only do this pushing for non-dependent local externs -- dependent ones will be pushed during instantiation. User code that expected to be able to handle incompatible local externs in different block-scopes will no longer work. That code is ill-formed. (always was, despite what 31775 claimed). I had to adjust a number of testcases that fell into this. I tried using DECL_VALUE_EXPR, but that didn't work out. Due to constexpr requirements we have to do the replacement very late (it happens in the gimplifier). Consider: extern int l[]; // #1 constexpr bool foo () { extern int l[3]; // this does not complete the type of decl #1 constexpr int *p = &l[2]; // ok return !p; } This requirement, coupled with our use of the common folding machinery makes pr97306 hard to fix, as we end up with an expression containing the two different decls for 'l', and only the c++ FE knows how to reconcile those. I punted on this. gcc/cp/ * cp-tree.h (struct language_function): Delete extern_decl_map. (DECL_LOCAL_DECL_ALIAS): New. * name-lookup.h (is_local_extern): Delete. * name-lookup.c (set_local_extern_decl_linkage): Replace with ... (push_local_extern_decl): ... this new function. (do_pushdecl): Call new function after pushing new decl. Unhide hidden non-functions. (is_local_extern): Delete. * decl.c (layout_var_decl): Do not allow VLA local externs. * decl2.c (mark_used): Also mark DECL_LOCAL_DECL_ALIAS. Drop old local-extern treatment. * parser.c (cp_parser_oacc_declare): Deal with local extern aliases. * pt.c (tsubst_expr): Adjust local extern instantiation. * cp-gimplify.c (cp_genericize_r): Remap DECL_LOCAL_DECLs. gcc/testsuite/ * g++.dg/cpp0x/lambda/lambda-sfinae1.C: Avoid ill-formed local extern * g++.dg/init/pr42844.C: Add expected error. * g++.dg/lookup/extern-redecl1.C: Likewise. * g++.dg/lookup/koenig15.C: Avoid ill-formed. * g++.dg/lto/pr95677.C: New. * g++.dg/other/nested-extern-1.C: Correct expected behabviour. * g++.dg/other/nested-extern-2.C: Likewise. * g++.dg/other/nested-extern.cc: Split ... * g++.dg/other/nested-extern-1.cc: ... here ... * g++.dg/other/nested-extern-2.cc: ... here. * g++.dg/template/scope5.C: Avoid ill-formed * g++.old-deja/g++.law/missed-error2.C: Allow extension. * g++.old-deja/g++.pt/crash3.C: Add expected error.
kraj pushed a commit that referenced this pull request on Oct 12, 2020
Prevents the following UBSAN error: ./xgcc -B. /home/marxin/Programming/gcc/gcc/testsuite/g++.dg/torture/pr49770.C -O2 -c /home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482:22: runtime error: load of value 2, which is not a valid value for type 'bool' #0 0x1fdb4d1 in modref_tree<int>::merge(modref_tree<int>*, vec<modref_parm_map, va_heap, vl_ptr>*) /home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482 #1 0x1fcadaa in merge_call_side_effects(modref_summary*, gimple*, modref_summary*, bool) /home/marxin/Programming/gcc2/gcc/ipa-modref.c:511 gcc-mirror#2 0x1fcbadd in analyze_call /home/marxin/Programming/gcc2/gcc/ipa-modref.c:642 gcc-mirror#3 0x1fcc061 in analyze_stmt /home/marxin/Programming/gcc2/gcc/ipa-modref.c:732 gcc-mirror#4 0x1fccf31 in analyze_function /home/marxin/Programming/gcc2/gcc/ipa-modref.c:823 gcc-mirror#5 0x1fd17e5 in execute /home/marxin/Programming/gcc2/gcc/ipa-modref.c:1441 gcc-mirror#6 0x25cca6e in execute_one_pass(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2509 gcc-mirror#7 0x25cd39b in execute_pass_list_1 /home/marxin/Programming/gcc2/gcc/passes.c:2597 gcc-mirror#8 0x25cd450 in execute_pass_list_1 /home/marxin/Programming/gcc2/gcc/passes.c:2598 gcc-mirror#9 0x25cd4ee in execute_pass_list(function*, opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2608 gcc-mirror#10 0x25c7a5a in do_per_function_toporder(void (*)(function*, void*), void*) /home/marxin/Programming/gcc2/gcc/passes.c:1726 gcc-mirror#11 0x25cfa3f in execute_ipa_pass_list(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2941 gcc-mirror#12 0x173572d in ipa_passes /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2642 gcc-mirror#13 0x17364ee in symbol_table::compile() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2777 gcc-mirror#14 0x17372d9 in symbol_table::finalize_compilation_unit() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:3022 gcc-mirror#15 0x2a1f00a in compile_file /home/marxin/Programming/gcc2/gcc/toplev.c:485 gcc-mirror#16 0x2a27dc8 in do_compile /home/marxin/Programming/gcc2/gcc/toplev.c:2321 gcc-mirror#17 0x2a283cc in toplev::main(int, char**) /home/marxin/Programming/gcc2/gcc/toplev.c:2460 gcc-mirror#18 0x54f21cd in main /home/marxin/Programming/gcc2/gcc/main.c:39 gcc-mirror#19 0x7ffff6f0de09 in __libc_start_main ../csu/libc-start.c:314 gcc-mirror#20 0x9eac09 in _start (/home/marxin/Programming/gcc2/objdir/gcc/cc1plus+0x9eac09) gcc/ChangeLog: * ipa-modref.c (merge_call_side_effects): Clear modref_parm_map fields in the vector.
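The underlying bug class -- using a partially initialized record whose bool field was never written -- can be shown with a short, self-contained C sketch; the field names are illustrative, not the real modref_parm_map layout:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct parm_map
{
  int  parm_index;
  bool parm_offset_known;
};

int
main (void)
{
  struct parm_map *map = malloc (sizeof *map);
  if (!map)
    return 1;

  /* The buggy pattern: only parm_index was written, so a later read of
     parm_offset_known loaded an indeterminate byte, which -fsanitize=bool
     can report as "not a valid value for type 'bool'".  The ChangeLog fix
     is the moral equivalent of clearing the element before use.  */
  memset (map, 0, sizeof *map);
  map->parm_index = 0;

  bool known = map->parm_offset_known;   /* well defined after the clear */
  free (map);
  return known ? 1 : 0;
}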
kraj pushed a commit that referenced this pull request on Oct 19, 2020
It fixes: /home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482:22: runtime error: load of value 255, which is not a valid value for type 'bool' #0 0x18e5df3 in modref_tree<int>::merge(modref_tree<int>*, vec<modref_parm_map, va_heap, vl_ptr>*) /home/marxin/Programming/gcc2/gcc/ipa-modref-tree.h:482 #1 0x18dc180 in ipa_merge_modref_summary_after_inlining(cgraph_edge*) /home/marxin/Programming/gcc2/gcc/ipa-modref.c:1779 gcc-mirror#2 0x18c1c72 in inline_call(cgraph_edge*, bool, vec<cgraph_edge*, va_heap, vl_ptr>*, int*, bool, bool*) /home/marxin/Programming/gcc2/gcc/ipa-inline-transform.c:492 gcc-mirror#3 0x4a3589c in inline_small_functions /home/marxin/Programming/gcc2/gcc/ipa-inline.c:2216 gcc-mirror#4 0x4a3b230 in ipa_inline /home/marxin/Programming/gcc2/gcc/ipa-inline.c:2697 gcc-mirror#5 0x4a3d902 in execute /home/marxin/Programming/gcc2/gcc/ipa-inline.c:3096 gcc-mirror#6 0x1edf831 in execute_one_pass(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2509 gcc-mirror#7 0x1ee26af in execute_ipa_pass_list(opt_pass*) /home/marxin/Programming/gcc2/gcc/passes.c:2936 gcc-mirror#8 0x103f31b in ipa_passes /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2700 gcc-mirror#9 0x103fb40 in symbol_table::compile() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:2777 gcc-mirror#10 0x104092b in symbol_table::finalize_compilation_unit() /home/marxin/Programming/gcc2/gcc/cgraphunit.c:3022 gcc-mirror#11 0x235723b in compile_file /home/marxin/Programming/gcc2/gcc/toplev.c:485 gcc-mirror#12 0x235fff9 in do_compile /home/marxin/Programming/gcc2/gcc/toplev.c:2321 gcc-mirror#13 0x23605fc in toplev::main(int, char**) /home/marxin/Programming/gcc2/gcc/toplev.c:2460 gcc-mirror#14 0x4e2b93b in main /home/marxin/Programming/gcc2/gcc/main.c:39 gcc-mirror#15 0x7ffff6f0ae09 in __libc_start_main ../csu/libc-start.c:314 gcc-mirror#16 0x9a0be9 in _start (/home/marxin/Programming/gcc2/objdir/gcc/cc1+0x9a0be9) gcc/ChangeLog: * ipa-modref.c (compute_parm_map): Clear vector.
kraj pushed a commit that referenced this pull request on Nov 2, 2020
Enable thumb1_gen_const_int to generate RTL or asm depending on the context, so that we avoid duplicating code to handle constants in Thumb-1 with -mpure-code. Use a template so that the algorithm is effectively shared, and rely on two classes to handle the actual emission as RTL or asm. The generated sequence is improved to handle right-shiftable and small values with less instructions. We now generate: 128: movs r0, r0, #128 264: movs r3, gcc-mirror#33 lsls r3, gcc-mirror#3 510: movs r3, #255 lsls r3, #1 512: movs r3, #1 lsls r3, gcc-mirror#9 764: movs r3, #191 lsls r3, gcc-mirror#2 65536: movs r3, #1 lsls r3, gcc-mirror#16 0x123456: movs r3, gcc-mirror#18 ;0x12 lsls r3, gcc-mirror#8 adds r3, gcc-mirror#52 ;0x34 lsls r3, gcc-mirror#8 adds r3, gcc-mirror#86 ;0x56 0x1123456: movs r3, #137 ;0x89 lsls r3, gcc-mirror#8 adds r3, gcc-mirror#26 ;0x1a lsls r3, gcc-mirror#8 adds r3, gcc-mirror#43 ;0x2b lsls r3, #1 0x1000010: movs r3, gcc-mirror#16 lsls r3, gcc-mirror#16 adds r3, #1 lsls r3, gcc-mirror#4 0x1000011: movs r3, #1 lsls r3, gcc-mirror#24 adds r3, gcc-mirror#17 -8192: movs r3, #1 lsls r3, gcc-mirror#13 rsbs r3, #0 The patch adds a testcase which does not fully exercise thumb1_gen_const_int, as other existing patterns already catch small constants. These parts of thumb1_gen_const_int are used by arm_thumb1_mi_thunk. 2020-11-02 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm.c (thumb1_const_rtl, thumb1_const_print): New classes. (thumb1_gen_const_int): Rename to ... (thumb1_gen_const_int_1): ... New helper function. Add capability to emit either RTL or asm, improve generated code. (thumb1_gen_const_int_rtl): New function. * config/arm/arm-protos.h (thumb1_gen_const_int): Rename to thumb1_gen_const_int_rtl. * config/arm/thumb1.md: Call thumb1_gen_const_int_rtl instead of thumb1_gen_const_int. gcc/testsuite/ * gcc.target/arm/pure-code/no-literal-pool-m0.c: New.
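A rough, self-contained C sketch of the byte-at-a-time idea behind the sequences above: emit the leading non-zero byte with movs, then shift left by 8 and add each following byte, with one trailing shift for low zero bits. This is only an approximation of thumb1_gen_const_int_1, whose real heuristics pick shorter sequences in some of the cases shown:

#include <stdio.h>

/* Print a movs/lsls/adds sequence that materializes VAL in r3.  */
static void
build_const (unsigned int val)
{
  printf ("; 0x%x\n", val);
  if (val == 0)
    {
      printf ("\tmovs\tr3, #0\n");
      return;
    }

  int trailing = __builtin_ctz (val);        /* low zero bits: one final shift */
  unsigned int v = val >> trailing;

  int started = 0;
  for (int b = 24; b >= 0; b -= 8)
    {
      unsigned int byte = (v >> b) & 0xff;
      if (!started)
        {
          if (byte == 0)
            continue;
          printf ("\tmovs\tr3, #%u\n", byte);
          started = 1;
        }
      else
        {
          printf ("\tlsls\tr3, #8\n");
          if (byte != 0)
            printf ("\tadds\tr3, #%u\n", byte);
        }
    }
  if (trailing != 0)
    printf ("\tlsls\tr3, #%d\n", trailing);
}

int
main (void)
{
  build_const (0x1123456);                   /* matches the sequence above */
  build_const (512);
  build_const (0x123456);                    /* one insn longer than above */
  return 0;
}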
kraj pushed a commit that referenced this pull request on Nov 13, 2020
To access the "n - 100000"th element of "a" in this test, GCC will generate the following code for msp430-elf with -mcpu=msp430x: RLAM.W #1, R12 MOV.W a-3392(R12), R12 Since there aren't actually 100,000 elements in a, this means that "a-3392" offset calculated by the linker can overflow, as the address of "a" can validly be less than 3392. The relocations used for -mcpu=msp430 and -mlarge are not as strict and the calculated value is allowed to wrap around the address space, avoiding relocation overflows. gcc/testsuite/ChangeLog: * gcc.c-torture/execute/index-1.c: Skip for the default MSP430 430X ISA.
kraj pushed a commit that referenced this pull request on Dec 5, 2020
…ddress Fix an ICE with the handling of RTL expressions like: (subreg:QI (mem/c:SI (plus:SI (plus:SI (mult:SI (reg/v:SI 0 %r0 [orig:67 i ] [67]) (const_int 4 [0x4])) (reg/v/f:SI 7 %r7 [orig:59 doacross ] [59])) (const_int 40 [0x28])) [1 MEM[(unsigned int *)doacross_63 + 40B + i_106 * 4]+0 S4 A32]) 0) that causes the compilation of libgomp to fail: during RTL pass: reload .../libgomp/ordered.c: In function 'GOMP_doacross_wait': .../libgomp/ordered.c:507:1: internal compiler error: in change_address_1, at emit-rtl.c:2275 507 | } | ^ 0x10a3462b change_address_1 .../gcc/emit-rtl.c:2275 0x10a353a7 adjust_address_1(rtx_def*, machine_mode, poly_int<1u, long>, int, int, int, poly_int<1u, long>) .../gcc/emit-rtl.c:2409 0x10ae2993 alter_subreg(rtx_def**, bool) .../gcc/final.c:3368 0x10ae25cf cleanup_subreg_operands(rtx_insn*) .../gcc/final.c:3322 0x110922a3 reload(rtx_insn*, int) .../gcc/reload1.c:1232 0x10de2bf7 do_reload .../gcc/ira.c:5812 0x10de3377 execute .../gcc/ira.c:5986 in a `vax-netbsdelf' build, where an attempt is made to change the mode of the contained memory reference to the mode of the containing SUBREG. Such RTL expressions are produced by the VAX shift and rotate patterns (`ashift', `ashiftrt', `rotate', `rotatert') where the count operand always has the QI mode regardless of the mode, either SI or DI, of the datum shifted or rotated. Such a mode change cannot work where the memory reference uses the indexed addressing mode, where a multiplier is implied that in the VAX ISA depends on the width of the memory access requested and therefore changing the machine mode would change the address calculation as well. Avoid the attempt then by forcing the reload of any SUBREGs containing a mode-dependent memory reference, also fixing these regressions: FAIL: gcc.c-torture/compile/pr46883.c -Os (internal compiler error) FAIL: gcc.c-torture/compile/pr46883.c -Os (test for excess errors) FAIL: gcc.c-torture/execute/20120808-1.c -O2 (internal compiler error) FAIL: gcc.c-torture/execute/20120808-1.c -O2 (test for excess errors) FAIL: gcc.c-torture/execute/20120808-1.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions (internal compiler error) FAIL: gcc.c-torture/execute/20120808-1.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions (test for excess errors) FAIL: gcc.c-torture/execute/20120808-1.c -O3 -g (internal compiler error) FAIL: gcc.c-torture/execute/20120808-1.c -O3 -g (test for excess errors) FAIL: gcc.c-torture/execute/20120808-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none (internal compiler error) FAIL: gcc.c-torture/execute/20120808-1.c -O2 -flto -fno-use-linker-plugin -flto-partition=none (test for excess errors) FAIL: gcc.c-torture/execute/20120808-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects (internal compiler error) FAIL: gcc.c-torture/execute/20120808-1.c -O2 -flto -fuse-linker-plugin -fno-fat-lto-objects (test for excess errors) FAIL: gcc.dg/20050629-1.c (internal compiler error) FAIL: gcc.dg/20050629-1.c (test for excess errors) FAIL: c-c++-common/torture/pr53505.c -Os (internal compiler error) FAIL: c-c++-common/torture/pr53505.c -Os (test for excess errors) FAIL: gfortran.dg/coarray_failed_images_1.f08 -Os (internal compiler error) FAIL: gfortran.dg/coarray_stopped_images_1.f08 -Os (internal compiler error) With test case #0 included it causes a reload with: (insn 15 14 16 4 (set (reg:SI 31) (ashift:SI (const_int 1 [0x1]) (subreg:QI (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ]) 0))) 
"pr58901-0.c":15:12 94 {ashlsi3} (expr_list:REG_DEAD (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ]) (nil))) as follows: Reloads for insn # 15 Reload 0: reload_in (SI) = (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ]) ALL_REGS, RELOAD_FOR_INPUT (opnum = 2) reload_in_reg: (reg:SI 30 [ MEM[(int *)s_8(D) + 4B + _5 * 4] ]) reload_reg_rtx: (reg:SI 5 %r5) resulting in: (insn 37 14 15 4 (set (reg:SI 5 %r5) (mem/c:SI (plus:SI (plus:SI (mult:SI (reg/v:SI 1 %r1 [orig:25 i ] [25]) (const_int 4 [0x4])) (reg/v/f:SI 4 %r4 [orig:29 s ] [29])) (const_int 4 [0x4])) [1 MEM[(int *)s_8(D) + 4B + _5 * 4]+0 S4 A32])) "pr58901-0.c":15:12 12 {movsi_2} (nil)) (insn 15 37 16 4 (set (reg:SI 2 %r2 [31]) (ashift:SI (const_int 1 [0x1]) (reg:QI 5 %r5))) "pr58901-0.c":15:12 94 {ashlsi3} (nil)) and assembly like: .L3: movl 4(%r4)[%r1],%r5 ashl %r5,$1,%r2 xorl2 %r2,%r0 incl %r1 cmpl %r1,%r3 jneq .L3 produced for the loop, providing optimization has been enabled. Likewise with test case #1 the reload of: (insn 17 16 18 4 (set (reg:SI 34) (and:SI (subreg:SI (reg/v:DI 27 [ t ]) 4) (const_int 1 [0x1]))) "pr58901-1.c":18:20 77 {*andsi_const_int} (expr_list:REG_DEAD (reg/v:DI 27 [ t ]) (nil))) is as follows: Reloads for insn # 17 Reload 0: reload_in (DI) = (reg/v:DI 27 [ t ]) reload_out (SI) = (reg:SI 2 %r2 [34]) ALL_REGS, RELOAD_OTHER (opnum = 0) reload_in_reg: (reg/v:DI 27 [ t ]) reload_out_reg: (reg:SI 2 %r2 [34]) reload_reg_rtx: (reg:DI 4 %r4) resulting in: (insn 40 16 17 4 (set (reg:DI 4 %r4) (mem/c:DI (plus:SI (mult:SI (reg/v:SI 1 %r1 [orig:26 i ] [26]) (const_int 8 [0x8])) (reg/v/f:SI 3 %r3 [orig:30 s ] [30])) [1 MEM[(const struct s *)s_13(D) + _7 * 8]+0 S8 A32])) "pr58901-1.c":18:20 11 {movdi} (nil)) (insn 17 40 41 4 (set (reg:SI 4 %r4) (and:SI (reg:SI 5 %r5 [+4 ]) (const_int 1 [0x1]))) "pr58901-1.c":18:20 77 {*andsi_const_int} (nil)) and assembly like: .L3: movq (%r3)[%r1],%r4 bicl3 $-2,%r5,%r4 addl2 %r4,%r0 jaoblss %r0,%r1,.L3 First posted at: <https://gcc.gnu.org/ml/gcc/2014-06/msg00060.html>. 2020-12-05 Matt Thomas <matt@3am-software.com> Maciej W. Rozycki <macro@linux-mips.org> gcc/ PR target/58901 * reload.c (push_reload): Also reload the inner expression of a SUBREG for pseudos associated with a mode-dependent memory reference. (find_reloads): Force a reload likewise. 2020-12-05 Maciej W. Rozycki <macro@linux-mips.org> gcc/testsuite/ PR target/58901 * gcc.c-torture/compile/pr58901-0.c: New test. * gcc.c-torture/compile/pr58901-1.c: New test.
kraj pushed a commit that referenced this pull request on Feb 23, 2021
/home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_value.cpp:77:25: runtime error: left shift of 0x0000000000000000fffffffffffffffb by 96 places cannot be represented in type '__int128' #0 0x7ffff754edfe in __ubsan::Value::getSIntValue() const /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_value.cpp:77 #1 0x7ffff7548719 in __ubsan::Value::isNegative() const /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_value.h:190 gcc-mirror#2 0x7ffff7542a34 in handleShiftOutOfBoundsImpl /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_handlers.cpp:338 gcc-mirror#3 0x7ffff75431b7 in __ubsan_handle_shift_out_of_bounds /home/marxin/Programming/gcc2/libsanitizer/ubsan/ubsan_handlers.cpp:370 gcc-mirror#4 0x40067f in main (/home/marxin/Programming/testcases/a.out+0x40067f) gcc-mirror#5 0x7ffff72c8b24 in __libc_start_main (/lib64/libc.so.6+0x27b24) gcc-mirror#6 0x4005bd in _start (/home/marxin/Programming/testcases/a.out+0x4005bd) Differential Revision: https://reviews.llvm.org/D97263 Cherry-pick from 16ede09.
kraj pushed a commit that referenced this pull request on Sep 16, 2024
…on [PR113328] SVE's INDEX instruction can be used to populate vectors by values starting from "base" and incremented by "step" for each subsequent value. We can take advantage of it to generate vector constants if TARGET_SVE is available and the base and step values are within [-16, 15]. For example, with the following function: typedef int v4si __attribute__ ((vector_size (16))); v4si f_v4si (void) { return (v4si){ 0, 1, 2, 3 }; } GCC currently generates: f_v4si: adrp x0, .LC4 ldr q0, [x0, #:lo12:.LC4] ret .LC4: .word 0 .word 1 .word 2 .word 3 With this patch, we generate an INDEX instruction instead if TARGET_SVE is available. f_v4si: index z0.s, #0, #1 ret PR target/113328 gcc/ChangeLog: * config/aarch64/aarch64.cc (aarch64_simd_valid_immediate): Improve handling of some ADVSIMD vectors by using SVE's INDEX if TARGET_SVE is available. (aarch64_output_simd_mov_immediate): Likewise. gcc/testsuite/ChangeLog: * gcc.target/aarch64/sve/acle/general/dupq_1.c: Update test to use SVE's INDEX instruction. * gcc.target/aarch64/sve/acle/general/dupq_2.c: Likewise. * gcc.target/aarch64/sve/acle/general/dupq_3.c: Likewise. * gcc.target/aarch64/sve/acle/general/dupq_4.c: Likewise. * gcc.target/aarch64/sve/vec_init_3.c: New test. Signed-off-by: Pengxuan Zheng <quic_pzheng@quicinc.com>
kraj pushed a commit that referenced this pull request on Oct 18, 2024
Implement vddup and vidup using the new MVE builtins framework. We generate better code because we take advantage of the two outputs produced by the v[id]dup instructions. For instance, before: ldr r3, [r0] sub r2, r3, gcc-mirror#8 str r2, [r0] mov r2, r3 vddup.u16 q3, r2, #1 now: ldr r2, [r0] vddup.u16 q3, r2, #1 str r2, [r0] 2024-08-21 Christophe Lyon <christophe.lyon@linaro.org> gcc/ * config/arm/arm-mve-builtins-base.cc (class viddup_impl): New. (vddup): New. (vidup): New. * config/arm/arm-mve-builtins-base.def (vddupq): New. (vidupq): New. * config/arm/arm-mve-builtins-base.h (vddupq): New. (vidupq): New. * config/arm/arm_mve.h (vddupq_m): Delete. (vddupq_u8): Delete. (vddupq_u32): Delete. (vddupq_u16): Delete. (vidupq_m): Delete. (vidupq_u8): Delete. (vidupq_u32): Delete. (vidupq_u16): Delete. (vddupq_x_u8): Delete. (vddupq_x_u16): Delete. (vddupq_x_u32): Delete. (vidupq_x_u8): Delete. (vidupq_x_u16): Delete. (vidupq_x_u32): Delete. (vddupq_m_n_u8): Delete. (vddupq_m_n_u32): Delete. (vddupq_m_n_u16): Delete. (vddupq_m_wb_u8): Delete. (vddupq_m_wb_u16): Delete. (vddupq_m_wb_u32): Delete. (vddupq_n_u8): Delete. (vddupq_n_u32): Delete. (vddupq_n_u16): Delete. (vddupq_wb_u8): Delete. (vddupq_wb_u16): Delete. (vddupq_wb_u32): Delete. (vidupq_m_n_u8): Delete. (vidupq_m_n_u32): Delete. (vidupq_m_n_u16): Delete. (vidupq_m_wb_u8): Delete. (vidupq_m_wb_u16): Delete. (vidupq_m_wb_u32): Delete. (vidupq_n_u8): Delete. (vidupq_n_u32): Delete. (vidupq_n_u16): Delete. (vidupq_wb_u8): Delete. (vidupq_wb_u16): Delete. (vidupq_wb_u32): Delete. (vddupq_x_n_u8): Delete. (vddupq_x_n_u16): Delete. (vddupq_x_n_u32): Delete. (vddupq_x_wb_u8): Delete. (vddupq_x_wb_u16): Delete. (vddupq_x_wb_u32): Delete. (vidupq_x_n_u8): Delete. (vidupq_x_n_u16): Delete. (vidupq_x_n_u32): Delete. (vidupq_x_wb_u8): Delete. (vidupq_x_wb_u16): Delete. (vidupq_x_wb_u32): Delete. (__arm_vddupq_m_n_u8): Delete. (__arm_vddupq_m_n_u32): Delete. (__arm_vddupq_m_n_u16): Delete. (__arm_vddupq_m_wb_u8): Delete. (__arm_vddupq_m_wb_u16): Delete. (__arm_vddupq_m_wb_u32): Delete. (__arm_vddupq_n_u8): Delete. (__arm_vddupq_n_u32): Delete. (__arm_vddupq_n_u16): Delete. (__arm_vidupq_m_n_u8): Delete. (__arm_vidupq_m_n_u32): Delete. (__arm_vidupq_m_n_u16): Delete. (__arm_vidupq_n_u8): Delete. (__arm_vidupq_m_wb_u8): Delete. (__arm_vidupq_m_wb_u16): Delete. (__arm_vidupq_m_wb_u32): Delete. (__arm_vidupq_n_u32): Delete. (__arm_vidupq_n_u16): Delete. (__arm_vidupq_wb_u8): Delete. (__arm_vidupq_wb_u16): Delete. (__arm_vidupq_wb_u32): Delete. (__arm_vddupq_wb_u8): Delete. (__arm_vddupq_wb_u16): Delete. (__arm_vddupq_wb_u32): Delete. (__arm_vddupq_x_n_u8): Delete. (__arm_vddupq_x_n_u16): Delete. (__arm_vddupq_x_n_u32): Delete. (__arm_vddupq_x_wb_u8): Delete. (__arm_vddupq_x_wb_u16): Delete. (__arm_vddupq_x_wb_u32): Delete. (__arm_vidupq_x_n_u8): Delete. (__arm_vidupq_x_n_u16): Delete. (__arm_vidupq_x_n_u32): Delete. (__arm_vidupq_x_wb_u8): Delete. (__arm_vidupq_x_wb_u16): Delete. (__arm_vidupq_x_wb_u32): Delete. (__arm_vddupq_m): Delete. (__arm_vddupq_u8): Delete. (__arm_vddupq_u32): Delete. (__arm_vddupq_u16): Delete. (__arm_vidupq_m): Delete. (__arm_vidupq_u8): Delete. (__arm_vidupq_u32): Delete. (__arm_vidupq_u16): Delete. (__arm_vddupq_x_u8): Delete. (__arm_vddupq_x_u16): Delete. (__arm_vddupq_x_u32): Delete. (__arm_vidupq_x_u8): Delete. (__arm_vidupq_x_u16): Delete. (__arm_vidupq_x_u32): Delete.
kraj pushed a commit that referenced this pull request on Oct 22, 2024
gcc.dg/torture/pr112305.c contains an inner loop that executes 0x8000_0014 times and an outer loop that executes 5 times, giving about 10 billion total executions of the inner loop body. At -O2 and above we are able to remove the inner loop, but at -O1 we keep a no-op loop: dls lr, r3 .L3: subs r3, r3, #1 le lr, .L3 and at -O0 we of course don't optimise. This can lead to long execution times on simulators, possibly triggering a timeout. gcc/testsuite * gcc.dg/torture/pr112305.c: Skip at -O0 and -O1 for simulators. (cherry picked from commit 4e80432)
kraj pushed a commit that referenced this pull request on Oct 22, 2024
gcc.dg/torture/pr112305.c contains an inner loop that executes 0x8000_0014 times and an outer loop that executes 5 times, giving about 10 billion total executions of the inner loop body. At -O2 and above we are able to remove the inner loop, but at -O1 we keep a no-op loop: dls lr, r3 .L3: subs r3, r3, #1 le lr, .L3 and at -O0 we of course don't optimise. This can lead to long execution times on simulators, possibly triggering a timeout. gcc/testsuite * gcc.dg/torture/pr112305.c: Skip at -O0 and -O1 for simulators.
kraj
pushed a commit
that referenced
this pull request
Nov 5, 2024
We currently crash upon the following invalid code (notice the "void void**" parameter)

=== cut here ===
using size_t = decltype(sizeof(int));
void *operator new(size_t, void void **p) noexcept { return p; }
int x;
void f() { int y; new (&y) int(x); }
=== cut here ===

The problem is that in this case, we end up with a NULL_TREE parameter list for the new operator because of the error, and (1) coerce_new_type wrongly complains about the first parameter type not being size_t, (2) std_placement_new_fn_p blindly accesses the parameter list, hence a crash.

This patch does NOT address #1 since we can't easily distinguish a new operator declaration without parameters from one with erroneous parameters (and it's not worth the risk to refactor and break things for an error recovery issue), hence a dg-bogus in new52.C, but it does address #2 and the ICE by simply checking the first parameter against NULL_TREE. It also adds a new testcase checking that we complain about new operators with no or invalid first parameters, since we did not have any.

PR c++/117101

gcc/cp/ChangeLog:
    * init.cc (std_placement_new_fn_p): Check first_arg against NULL_TREE.

gcc/testsuite/ChangeLog:
    * g++.dg/init/new52.C: New test.
    * g++.dg/init/new53.C: New test.
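For context, a well-formed allocation function takes size_t as its first parameter, followed by any extra placement parameters. The sketch below is illustrative only (it is not the committed new52.C/new53.C tests) and contrasts a valid declaration with the kind of ill-formed ones the new testcase exercises:

        using size_t = decltype(sizeof(int));

        // Well-formed: size_t first, then extra placement parameters.
        void *operator new (size_t, void *buf, int tag) noexcept { return buf; }

        // Ill-formed variants of the kind the new tests check are diagnosed:
        //   void *operator new ();                  // no parameters at all
        //   void *operator new (void *p) noexcept;  // first parameter is not size_t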
kraj
pushed a commit
that referenced
this pull request
Nov 19, 2024
Update test case for armv8.1-m.main that supports conditional arithmetic.

armv7-m:
        push    {r4, lr}
        ldr     r4, .L6
        ldr     r4, [r4]
        lsls    r4, r4, #29
        it      mi
        addmi   r2, r2, #1
        bl      bar
        movs    r0, #0
        pop     {r4, pc}

armv8.1-m.main:
        push    {r3, r4, r5, lr}
        ldr     r4, .L5
        ldr     r5, [r4]
        tst     r5, #4
        csinc   r2, r2, r2, eq
        bl      bar
        movs    r0, #0
        pop     {r3, r4, r5, pc}

gcc/testsuite/ChangeLog:
    * gcc.target/arm/epilog-1.c: Use check-function-bodies.

Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
(cherry picked from commit ec86e87)
kraj
pushed a commit
that referenced
this pull request
Nov 19, 2024
Update test case for armv8.1-m.main that supports conditional arithmetic.

armv7-m:
        push    {r4, lr}
        ldr     r4, .L6
        ldr     r4, [r4]
        lsls    r4, r4, #29
        it      mi
        addmi   r2, r2, #1
        bl      bar
        movs    r0, #0
        pop     {r4, pc}

armv8.1-m.main:
        push    {r3, r4, r5, lr}
        ldr     r4, .L5
        ldr     r5, [r4]
        tst     r5, #4
        csinc   r2, r2, r2, eq
        bl      bar
        movs    r0, #0
        pop     {r3, r4, r5, pc}

gcc/testsuite/ChangeLog:
    * gcc.target/arm/epilog-1.c: Use check-function-bodies.

Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
kraj
pushed a commit
that referenced
this pull request
Nov 19, 2024
The second source register of insn "*extzvsi-1bit_addsubx" cannot be the same as the destination register, because that register will be overwritten with an intermediate value after insn splitting.

/* example #1 */
int test1(int b, int a) {
  return ((a & 1024) ? 4 : 0) + b;
}

;; result #1 (incorrect)
test1:
        extui   a2, a3, 10, 1   ;; overwrites A2 before used
        addx4   a2, a2, a2
        ret.n

This patch fixes that.

;; result #1 (correct)
test1:
        extui   a3, a3, 10, 1   ;; uses A3 and then overwrites
        addx4   a2, a3, a2
        ret.n

However, it should be noted that the first source register can be the same as the destination without any problems.

/* example #2 */
int test2(int a, int b) {
  return ((a & 1024) ? 4 : 0) + b;
}

;; result (correct)
test2:
        extui   a2, a2, 10, 1   ;; uses A2 and then overwrites
        addx4   a2, a2, a3
        ret.n

gcc/ChangeLog:
    * config/xtensa/xtensa.md (*extzvsi-1bit_addsubx): Add '&' to the destination register constraint to indicate that it is 'earlyclobber', append '0' to the first source register constraint to indicate that it can be the same as the destination register, and change the split condition from 1 to reload_completed so that the insn will be split only after RA in order to obtain allocated registers that satisfy the above constraints.
kraj
pushed a commit
that referenced
this pull request
Nov 26, 2024
In r14.2.0-376-g724446556e5, I accidentally introduced a regression in the expected assembler as the csinc instruction was not used for armv8.1-m.main.

The generated assembler for armv8.1-m.main is:
        push    {r3, r4, r5, lr}
        ldr     r4, .L5
        ldr     r5, [r4]
        adds    r4, r2, #1
        tst     r5, #4
        it      ne
        movne   r2, r4
        bl      bar
        movs    r0, #0
        pop     {r3, r4, r5, pc}

gcc/testsuite/ChangeLog:
    * gcc.target/arm/epilog-1.c: Corrected armv8.1.m-main asm.

Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
kraj
pushed a commit
that referenced
this pull request
Nov 26, 2024
vec.h has this method:

  template<typename T, typename A>
  inline T *
  vec_safe_push (vec<T, A, vl_embed> *&v, const T &obj CXX_MEM_STAT_INFO)

where v is a reference to a pointer to vec. This matches the regex for VecPrinter, so gdbhooks.py attempts to print it but chokes on the reference. I see the following:

  #1  0x0000000002b84b7b in vec_safe_push<edge_def*, va_gc> (v=Traceback (most recent call last):
    File "$SRC/gcc/gcc/gdbhooks.py", line 486, in to_string
      return '0x%x' % intptr(self.gdbval)
    File "$SRC/gcc/gcc/gdbhooks.py", line 168, in intptr
      return long(gdbval) if sys.version_info.major == 2 else int(gdbval)
  gdb.error: Cannot convert value to long.

This patch makes VecPrinter handle such references by stripping them (dereferencing) at the top of the relevant functions.

gcc/ChangeLog:
    * gdbhooks.py (strip_ref): New. Use it ...
    (VecPrinter.to_string): ... here,
    (VecPrinter.children): ... and here.
kraj
pushed a commit
that referenced
this pull request
Dec 7, 2024
Brief: The bug appears in LRA after the rematerialization pass while creating live ranges.

File lra.cc:
*************************************************************
      /* Now we know what pseudos should be spilled.  Try to
         rematerialize them first.  */
      if (lra_remat ())
        {
          /* We need full live info -- see the comment above.  */
          lra_create_live_ranges (lra_reg_spill_p, true);
*************************************************************
The call `lra_create_live_ranges (lra_reg_spill_p, true)' is wrong; it has to be `lra_create_live_ranges (true, true)'.

The explanation:
**********************************
int main (void)
{
  if (a.u33 * a.u33 != 0)
------^^^^^^^^^^^^^
    goto abrt;
  if (a.u33 * a.u40 * a.u33 != 0)
**********************************
The bug appears here, in part of the expression `a.u33 * a.u33'.

Before LRA:
*************************************************************
(insn 13 11 15 2 (set (reg:QI 184 [ _1+3 ]) (mem/c:QI (const:HI (plus:HI (symbol_ref:HI ("a") [flags 0x2] <var_decl 0x7c866435d000 a>) (const_int 3 [0x3]))) [1 a+3 S1 A8])) "bf.c":11:8 86 {movqi_insn_split} (nil))
(insn 15 13 16 2 (set (reg:QI 64 [ a+4 ]) (mem/c:QI (const:HI (plus:HI (symbol_ref:HI ("a") [flags 0x2] <var_decl 0x7c866435d000 a>) (const_int 4 [0x4]))) [1 a+4 S1 A8])) "bf.c":11:8 86 {movqi_insn_split} (nil))
(insn 16 15 20 2 (set (reg:QI 185 [ _1+4 ]) (zero_extract:QI (reg:QI 64 [ a+4 ]) (const_int 1 [0x1]) (const_int 0 [0]))) "bf.c":11:8 985 {*extzvqi_split} (nil))
*************************************************************

After LRA:
*************************************************************
(insn 587 11 13 2 (set (reg:QI 24 r24 [368]) (mem/c:QI (const:HI (plus:HI (symbol_ref:HI ("a") [flags 0x2] <var_decl 0x7c866435d000 a>) (const_int 3 [0x3]))) [1 a+3 S1 A8])) "bf.c":11:8 86 {movqi_insn_split} (nil))
(insn 13 587 15 2 (set (mem/c:QI (plus:HI (reg/f:HI 28 r28) (const_int 1 [0x1])) [4 %sfp+1 S1 A8]) (reg:QI 24 r24 [368])) "bf.c":11:8 86 {movqi_insn_split} (nil))
(insn 15 13 16 2 (set (reg:QI 6 r6 [orig:64 a+4 ] [64]) (mem/c:QI (const:HI (plus:HI (symbol_ref:HI ("a") [flags 0x2] <var_decl 0x7c866435d000 a>) (const_int 4 [0x4]))) [1 a+4 S1 A8])) "bf.c":11:8 86 {movqi_insn_split} (nil))
(insn 16 15 572 2 (set (reg:QI 24 r24 [orig:185 _1+4 ] [185]) (zero_extract:QI (reg:QI 6 r6 [orig:64 a+4 ] [64]) (const_int 1 [0x1]) (const_int 0 [0]))) "bf.c":11:8 985 {*extzvqi_split} (nil))
(insn 572 16 20 2 (set (mem/c:QI (plus:HI (reg/f:HI 28 r28) (const_int 1 [0x1])) [4 %sfp+1 S1 A8]) (reg:QI 24 r24 [orig:185 _1+4 ] [185])) "bf.c":11:8 86 {movqi_insn_split} (nil))
*************************************************************

Insn 13 and insn 572 use sfp+1 as a spill slot, but in the IRA pass these were two different pseudos, r184 and r185. Insn 13 uses sfp+1 as a spill slot for r184; insn 572 uses the same slot for r185. That is wrong. Here we have a rematerialization.

Fragment from bf.c.317r.reload:
**************************************************************************************
******** Rematerialization #1: ********

df_worklist_dataflow_doublequeue: n_basic_blocks 14 n_edges 18 count 14 ( 1)
df_worklist_dataflow_doublequeue: n_basic_blocks 14 n_edges 18 count 14 ( 1)
Cands:
0 (nop=0, remat_regno=185, reload_regno=359):
(insn 16 15 572 2 (set (reg:QI 359 [orig:185 _1+4 ] [185]) (zero_extract:QI (reg:QI 64 [ a+4 ]) (const_int 1 [0x1]) (const_int 0 [0]))) "bf.c":11:8 985 {*extzvqi_split} (nil))
**************************************************************************************
[...]
**************************************************************************************
Ranges after the compression:
  r185: [0..1]
Frame pointer can not be eliminated anymore
Spilling non-eliminable hard regs: 28 29
Spilling r113(28)
Spilling r184(29)
Spilling r208(29)
Spilling r209(28)
Slot 0 regnos (width = 0): 185 209 208 184 113
**************************************************************************************

The bug is here: `r185: [0..1]' is a wrong live range after the compression. r185 and r184 can't have the same spill slot!

Rematerialization in bf.c.317r.reload looks like:
*************************************************************
24: r14:QI=r185:QI
    Inserting rematerialization insn before:
       581: r14:QI=zero_extract(r64:QI,0x1,0)

deleting insn with uid = 24.
Considering alt=0 of insn 16: (0) =r (1) rYil (2) n
          overall=0,losers=0,rld_nregs=0
32: r22:QI=r185:QI
    Inserting rematerialization insn before:
       582: r22:QI=zero_extract(r64:QI,0x1,0)

deleting insn with uid = 32.
*************************************************************

This happened because of the following fragment from lra.cc (lra):
*************************************************************************
      if (! live_p)
        {
          /* We need full live info for spilling pseudos into
             registers instead of memory.  */
          lra_create_live_ranges (lra_reg_spill_p, true);
          live_p = true;
        }
      /* We should check necessity for spilling here as the above live
         range pass can remove spilled pseudos.  */
      if (! lra_need_for_spills_p ())
        break;
      /* Now we know what pseudos should be spilled.  Try to
         rematerialize them first.  */
      if (lra_remat ())
        {
          /* We need full live info -- see the comment above.  */
          lra_create_live_ranges (lra_reg_spill_p, true);
----------------------------------^^^^^^^^^^^^^^^
          live_p = true;
*************************************************************************
The bug is here. Rematerialization can sometimes act like spilling pseudos into registers:
  582: r22:QI=zero_extract(r64:QI,0x1,0)
So, here we need live ranges for all pseudos.

PS: the patch will not affect any target with a usable definition of the TARGET_SPILL_CLASS hook.

PR target/116778
gcc/
    * lra-lives.cc (complete_info_p): Clarification of the comment.
    * lra.cc (lra): Create a full live info after rematerialization.
kraj
pushed a commit
that referenced
this pull request
Dec 9, 2024
This PR reports a missed optimization. When we have:

  Str str{"Test"};
  callback(str);

as in the test, we're able to evaluate the Str::Str() call at compile time. But when we have:

  callback(Str{"Test"});

we are not. With this patch (in fact, it's Patrick's patch with a little tweak), we turn

  callback (TARGET_EXPR <D.2890, <<< Unknown tree: aggr_init_expr 5 __ct_comp D.2890 (struct Str *) <<< Unknown tree: void_cst >>> (const char *) "Test" >>>>)

into

  callback (TARGET_EXPR <D.2890, {.str=(const char *) "Test", .length=4}>)

I explored the idea of calling maybe_constant_value for the whole TARGET_EXPR in cp_fold. That has three problems:
- we can't always elide a TARGET_EXPR, so we'd have to make sure the result is also a TARGET_EXPR;
- the resulting TARGET_EXPR must have the same flags, otherwise Bad Things happen;
- getting a new slot is also problematic. I've seen a test where we had "TARGET_EXPR<D.2680, ...>, D.2680", and folding the whole TARGET_EXPR would get us "TARGET_EXPR<D.2681, ...>", but since we don't see the outer D.2680, we can't replace it with D.2681, and things break.

With this patch, two tree-ssa tests regressed: pr78687.C and pr90883.C.

FAIL: g++.dg/tree-ssa/pr90883.C scan-tree-dump dse1 "Deleted redundant store: .*.a = {}"

is easy. Previously, we would call C::C, so .gimple has:

  D.2590 = {};
  C::C (&D.2590);
  D.2597 = D.2590;
  return D.2597;

Then .einline inlines the C::C call:

  D.2590 = {};
  D.2590.a = {}; // #1
  D.2590.b = 0; // #2
  D.2597 = D.2590;
  D.2590 ={v} {CLOBBER(eos)};
  return D.2597;

then #2 is removed in .fre1, and #1 is removed in .dse1. So the test passes. But with the patch, .gimple won't have that C::C call, so the IL is of course going to look different. The .optimized dump looks the same though so there's no problem.

pr78687.C is XFAILed because the test passes with r15-5746 but not with r15-5747 as well. I opened <https://gcc.gnu.org/PR117971>.

PR c++/116416

gcc/cp/ChangeLog:
    * cp-gimplify.cc (cp_fold_r) <case TARGET_EXPR>: Try to fold TARGET_EXPR_INITIAL and replace it with the folded result if it's TREE_CONSTANT.

gcc/testsuite/ChangeLog:
    * g++.dg/analyzer/pr97116.C: Adjust dg-message.
    * g++.dg/tree-ssa/pr78687.C: Add XFAIL.
    * g++.dg/tree-ssa/pr90883.C: Adjust dg-final.
    * g++.dg/cpp0x/constexpr-prvalue1.C: New test.
    * g++.dg/cpp1y/constexpr-prvalue1.C: New test.

Co-authored-by: Patrick Palka <ppalka@redhat.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
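To make the two forms concrete, here is a hedged sketch of what a Str-like type from the description could look like; the real test's class is not shown in the commit message, so the definition below is an assumption based on the .str/.length fields visible in the folded constructor:

        struct Str {
          const char *str;
          int length;
          constexpr Str (const char *s) : str (s), length (0)
          {
            while (s[length])   // assumed: length computed from the string literal
              ++length;
          }
        };

        void callback (const Str &);

        void f1 () { Str str{"Test"}; callback (str); }  // ctor already folded before the patch
        void f2 () { callback (Str{"Test"}); }           // prvalue argument now folded too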
kraj
pushed a commit
that referenced
this pull request
Dec 12, 2024
With the changes in r15-1579-g792f97b44ff, the code used as "padding" in the test case is optimized away. Prevent this optimization by forcing a read of the volatile memory. Also, validate that there is a far jump in the generated assembler.

Without this patch, the generated assembler is reduced to:

f3:
        cmp     r0, #0
        beq     .L1
        ldr     r4, .L6
.L1:
        bx      lr
.L7:
        .align  2
.L6:
        .word   g_0_1

With the patch, the generated assembler is:

f3:
        movs    r2, #1
        ldr     r3, .L6
        push    {lr}
        str     r2, [r3]
        cmp     r0, #0
        bne     .LCB10
        bl      .L1     @far jump
.LCB10:
        b       .L7
.L8:
        .align  2
.L6:
        .word   .LANCHOR0
.L7:
        str     r2, [r3]
        ...
        str     r2, [r3]
.L1:
        pop     {pc}

gcc/testsuite/ChangeLog:
    * gcc.target/arm/thumb1-far-jump-2.c: Write to volatile memory in macro to avoid optimization.

Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
kraj
pushed a commit
that referenced
this pull request
Dec 12, 2024
On Cortex-M4, the code generated is:
        cmp     r0, r1
        itte    ne
        lslne   r0, r0, r1
        asrne   r0, r0, #1
        moveq   r0, r1
        add     r0, r0, r1
        bx      lr

On Cortex-M7, the code generated is:
        cmp     r0, r1
        beq     .L3
        lsls    r0, r0, r1
        asrs    r0, r0, #1
        add     r0, r0, r1
        bx      lr
.L3:
        mov     r0, r1
        add     r0, r0, r1
        bx      lr

As Cortex-M7 only allows a maximum of one conditional instruction, force Cortex-M4 to have a stable test case.

gcc/testsuite/ChangeLog:
    * gcc.target/arm/thumb-ifcvt.c: Use -mtune=cortex-m4.

Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
kraj
pushed a commit
that referenced
this pull request
Dec 12, 2024
On Cortex-M4, the code generated is:
        cmp     r0, r1
        itte    ne
        lslne   r0, r0, r1
        asrne   r0, r0, #1
        moveq   r0, r1
        add     r0, r0, r1
        bx      lr

On Cortex-M7, the code generated is:
        cmp     r0, r1
        beq     .L3
        lsls    r0, r0, r1
        asrs    r0, r0, #1
        add     r0, r0, r1
        bx      lr
.L3:
        mov     r0, r1
        add     r0, r0, r1
        bx      lr

As Cortex-M7 only allows a maximum of one conditional instruction, force Cortex-M4 to have a stable test case.

gcc/testsuite/ChangeLog:
    * gcc.target/arm/thumb-ifcvt.c: Use -mtune=cortex-m4.

Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@foss.st.com>
(cherry picked from commit e7615f6)
kraj
pushed a commit
that referenced
this pull request
Dec 17, 2024
This crash started with my r12-7803 but I believe the problem lies elsewhere.

build_vec_init has cleanup_flags whose purpose is -- if I grok this correctly -- to avoid destructing an object multiple times. Let's say we are initializing an array of A. Then we might end up in a scenario similar to initlist-eh1.C:

  try
    {
      call A::A in a loop
      // #0
      try
        {
          call a fn using the array
        }
      finally
        {
          // #1
          call A::~A in a loop
        }
    }
  catch
    {
      // #2
      call A::~A in a loop
    }

cleanup_flags makes us emit a statement like

  D.3048 = 2;

at #0 to disable performing the cleanup at #2, since #1 will take care of the destruction of the array.

But if we are not emitting the loop because we can use a constant initializer (and use a single { a, b, ...}), we shouldn't generate the statement resetting the iterator to its initial value. Otherwise we crash in gimplify_var_or_parm_decl because it gets the stray decl D.3048.

PR c++/117985

gcc/cp/ChangeLog:
    * init.cc (build_vec_init): Pop CLEANUP_FLAGS if we're not generating the loop.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp0x/initlist-array23.C: New test.
    * g++.dg/cpp0x/initlist-array24.C: New test.
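A hypothetical sketch of the general shape described above (an array whose elements admit a constant initializer, of a class with a destructor so that cleanups are in play); this is not the committed initlist-array23.C test:

        struct A {
          int i;
          constexpr A (int i) : i (i) {}
          ~A () {}
        };

        struct B {
          A arr[2];
          B () : arr{A (1), A (2)} {}   // elements could be emitted as a constant initializer
        };

        B b;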
kraj
pushed a commit
that referenced
this pull request
Jan 7, 2025
This patch removes the AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS tunable and use_new_vector_costs entry in aarch64-tuning-flags.def and makes the AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS paths in the backend the default. To that end, the function aarch64_use_new_vector_costs_p and its uses were removed. To prevent costing vec_to_scalar operations with 0, as described in https://gcc.gnu.org/pipermail/gcc-patches/2024-October/665481.html, we adjusted vectorizable_store such that the variable n_adjacent_stores also covers vec_to_scalar operations. This way vec_to_scalar operations are not costed individually, but as a group.

As suggested by Richard Sandiford, the "known_ne" in the multilane-check was replaced by "maybe_ne" in order to treat nunits==1+1X as a vector rather than a scalar.

Two tests were adjusted due to changes in codegen. In both cases, the old code performed loop unrolling once, but the new code does not.

Example from gcc.target/aarch64/sve/strided_load_2.c (compiled with -O2 -ftree-vectorize -march=armv8.2-a+sve -mtune=generic -moverride=tune=none):

f_int64_t_32:
        cbz     w3, .L92
        mov     x4, 0
        uxtw    x3, w3
+       cntd    x5
+       whilelo p7.d, xzr, x3
+       mov     z29.s, w5
        mov     z31.s, w2
-       whilelo p6.d, xzr, x3
-       mov     x2, x3
-       index   z30.s, #0, #1
-       uqdecd  x2
-       ptrue   p5.b, all
-       whilelo p7.d, xzr, x2
+       index   z30.d, #0, #1
+       ptrue   p6.b, all
        .p2align 3,,7
.L94:
-       ld1d    z27.d, p7/z, [x0, #1, mul vl]
-       ld1d    z28.d, p6/z, [x0]
-       movprfx z29, z31
-       mul     z29.s, p5/m, z29.s, z30.s
-       incw    x4
-       uunpklo z0.d, z29.s
-       uunpkhi z29.d, z29.s
-       ld1d    z25.d, p6/z, [x1, z0.d, lsl 3]
-       ld1d    z26.d, p7/z, [x1, z29.d, lsl 3]
-       add     z25.d, z28.d, z25.d
+       ld1d    z27.d, p7/z, [x0, x4, lsl 3]
+       movprfx z28, z31
+       mul     z28.s, p6/m, z28.s, z30.s
+       ld1d    z26.d, p7/z, [x1, z28.d, uxtw 3]
        add     z26.d, z27.d, z26.d
-       st1d    z26.d, p7, [x0, #1, mul vl]
-       whilelo p7.d, x4, x2
-       st1d    z25.d, p6, [x0]
-       incw    z30.s
-       incb    x0, all, mul #2
-       whilelo p6.d, x4, x3
+       st1d    z26.d, p7, [x0, x4, lsl 3]
+       add     z30.s, z30.s, z29.s
+       incd    x4
+       whilelo p7.d, x4, x3
        b.any   .L94
.L92:
        ret

Example from gcc.target/aarch64/sve/strided_store_2.c (compiled with -O2 -ftree-vectorize -march=armv8.2-a+sve -mtune=generic -moverride=tune=none):

f_int64_t_32:
        cbz     w3, .L84
-       addvl   x5, x1, #1
        mov     x4, 0
        uxtw    x3, w3
-       mov     z31.s, w2
+       cntd    x5
        whilelo p7.d, xzr, x3
-       mov     x2, x3
-       index   z30.s, #0, #1
-       uqdecd  x2
-       ptrue   p5.b, all
-       whilelo p6.d, xzr, x2
+       mov     z29.s, w5
+       mov     z31.s, w2
+       index   z30.d, #0, #1
+       ptrue   p6.b, all
        .p2align 3,,7
.L86:
-       ld1d    z28.d, p7/z, [x1, x4, lsl 3]
-       ld1d    z27.d, p6/z, [x5, x4, lsl 3]
-       movprfx z29, z30
-       mul     z29.s, p5/m, z29.s, z31.s
-       add     z28.d, z28.d, #1
-       uunpklo z26.d, z29.s
-       st1d    z28.d, p7, [x0, z26.d, lsl 3]
-       incw    x4
-       uunpkhi z29.d, z29.s
+       ld1d    z27.d, p7/z, [x1, x4, lsl 3]
+       movprfx z28, z30
+       mul     z28.s, p6/m, z28.s, z31.s
        add     z27.d, z27.d, #1
-       whilelo p6.d, x4, x2
-       st1d    z27.d, p7, [x0, z29.d, lsl 3]
-       incw    z30.s
+       st1d    z27.d, p7, [x0, z28.d, uxtw 3]
+       incd    x4
+       add     z30.s, z30.s, z29.s
        whilelo p7.d, x4, x3
        b.any   .L86
.L84:
        ret

The patch was bootstrapped and tested on aarch64-linux-gnu, no regression. OK for mainline?

Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>

gcc/
    * tree-vect-stmts.cc (vectorizable_store): Extend the use of n_adjacent_stores to also cover vec_to_scalar operations.
    * config/aarch64/aarch64-tuning-flags.def: Remove use_new_vector_costs as tuning option.
    * config/aarch64/aarch64.cc (aarch64_use_new_vector_costs_p): Remove.
    (aarch64_vector_costs::add_stmt_cost): Remove use of aarch64_use_new_vector_costs_p.
    (aarch64_vector_costs::finish_cost): Remove use of aarch64_use_new_vector_costs_p.
    * config/aarch64/tuning_models/cortexx925.h: Remove AARCH64_EXTRA_TUNE_USE_NEW_VECTOR_COSTS.
    * config/aarch64/tuning_models/fujitsu_monaka.h: Likewise.
    * config/aarch64/tuning_models/generic_armv8_a.h: Likewise.
    * config/aarch64/tuning_models/generic_armv9_a.h: Likewise.
    * config/aarch64/tuning_models/neoverse512tvb.h: Likewise.
    * config/aarch64/tuning_models/neoversen2.h: Likewise.
    * config/aarch64/tuning_models/neoversen3.h: Likewise.
    * config/aarch64/tuning_models/neoversev1.h: Likewise.
    * config/aarch64/tuning_models/neoversev2.h: Likewise.
    * config/aarch64/tuning_models/neoversev3.h: Likewise.
    * config/aarch64/tuning_models/neoversev3ae.h: Likewise.

gcc/testsuite/
    * gcc.target/aarch64/sve/strided_load_2.c: Adjust expected outcome.
    * gcc.target/aarch64/sve/strided_store_2.c: Likewise.
kraj
pushed a commit
that referenced
this pull request
Jan 9, 2025
This crash started with my r12-7803 but I believe the problem lies elsewhere.

build_vec_init has cleanup_flags whose purpose is -- if I grok this correctly -- to avoid destructing an object multiple times. Let's say we are initializing an array of A. Then we might end up in a scenario similar to initlist-eh1.C:

  try
    {
      call A::A in a loop
      // #0
      try
        {
          call a fn using the array
        }
      finally
        {
          // #1
          call A::~A in a loop
        }
    }
  catch
    {
      // #2
      call A::~A in a loop
    }

cleanup_flags makes us emit a statement like

  D.3048 = 2;

at #0 to disable performing the cleanup at #2, since #1 will take care of the destruction of the array.

But if we are not emitting the loop because we can use a constant initializer (and use a single { a, b, ...}), we shouldn't generate the statement resetting the iterator to its initial value. Otherwise we crash in gimplify_var_or_parm_decl because it gets the stray decl D.3048.

PR c++/117985

gcc/cp/ChangeLog:
    * init.cc (build_vec_init): Pop CLEANUP_FLAGS if we're not generating the loop.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp0x/initlist-array23.C: New test.
    * g++.dg/cpp0x/initlist-array24.C: New test.

(cherry picked from commit 40e5636)
kraj
pushed a commit
that referenced
this pull request
Jan 10, 2025
This code in cxx_eval_array_reference has been hard to get right. In r12-2304 I added some code; in r13-5693 I removed some of it.

Here the problematic line is "S s = arr[0];" which causes a crash on the assert in verify_ctor_sanity:

  gcc_assert (!ctx->object || !DECL_P (ctx->object)
              || ctx->global->get_value (ctx->object) == ctx->ctor);

ctx->object is the VAR_DECL 's', which is correct here. The second line points to the problem: we replaced ctx->ctor in cxx_eval_array_reference:

  new_ctx.ctor = build_constructor (elem_type, NULL); // #1

which I think we shouldn't have; the CONSTRUCTOR we created in cxx_eval_constant_expression/DECL_EXPR

  new_ctx.ctor = build_constructor (TREE_TYPE (r), NULL);

had the right type.

We still need #1 though. E.g., in constexpr-96241.C, we never set ctx.ctor/object before calling cxx_eval_array_reference, so we have to build a CONSTRUCTOR there. And in constexpr-101371-2.C we have a ctx.ctor, but it has the wrong type, so we need a new one.

We can fix the problem by always clearing the object, and, as an optimization, only create/free a new ctor when actually needed.

PR c++/110382

gcc/cp/ChangeLog:
    * constexpr.cc (cxx_eval_array_reference): Create a new constructor only when we don't already have a matching one. Clear the object when the type is non-scalar.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp1y/constexpr-110382.C: New test.

(cherry picked from commit 6e424fe)
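As a rough, hypothetical illustration of the kind of pattern being discussed (copying an array element into a local object during constant evaluation); this is not the committed constexpr-110382.C test:

        struct S { int i; };
        constexpr S arr[] = { {1}, {2} };

        constexpr int f ()
        {
          S s = arr[0];   // element copy evaluated at compile time
          return s.i;
        }
        static_assert (f () == 1, "element copy should constant-evaluate");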
kraj
pushed a commit
that referenced
this pull request
Jan 10, 2025
We evaluate constexpr functions on the original, pre-genericization bodies. That means that the function body we're evaluating will not have gone through cp_genericize_r's "Map block scope extern declarations to visible declarations with the same name and type in outer scopes if any".

Here:

  constexpr bool bar() { return true; } // #1

  constexpr bool foo() {
    constexpr bool bar(void); // #2
    return bar();
  }

it means that we:
1) register_constexpr_fundef (#1)
2) cp_genericize (#1)
   nothing interesting happens
3) register_constexpr_fundef (foo)
   does copy_fn, so we have two copies of the BIND_EXPR
4) cp_genericize (foo)
   this remaps #2 to #1, but only on one copy of the BIND_EXPR
5) retrieve_constexpr_fundef (foo)
   we find it, no problem
6) retrieve_constexpr_fundef (#2)
   and here #2 isn't found in constexpr_fundef_table, because we're working on the BIND_EXPR copy where #2 wasn't mapped to #1, so we fail. We've only registered #1.

It should work to use DECL_LOCAL_DECL_ALIAS (which used to be extern_decl_map). We evaluate constexpr functions on pre-cp_fold bodies to avoid diagnostic problems, but the remapping I'm proposing should not interfere with diagnostics.

This is not a problem for a global scope redeclaration; there we go through duplicate_decls which keeps the DECL_UID:

  DECL_UID (olddecl) = olddecl_uid;

and DECL_UID is what constexpr_fundef_hasher::hash uses.

PR c++/111132

gcc/cp/ChangeLog:
    * constexpr.cc (get_function_named_in_call): Use cp_get_fndecl_from_callee.
    * cvt.cc (cp_get_fndecl_from_callee): If there's a DECL_LOCAL_DECL_ALIAS, use it.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp0x/constexpr-redeclaration3.C: New test.
    * g++.dg/cpp0x/constexpr-redeclaration4.C: New test.

(cherry picked from commit 8c90638)
kraj
pushed a commit
that referenced
this pull request
Jan 10, 2025
This crash started with my r12-7803 but I believe the problem lies elsewhere.

build_vec_init has cleanup_flags whose purpose is -- if I grok this correctly -- to avoid destructing an object multiple times. Let's say we are initializing an array of A. Then we might end up in a scenario similar to initlist-eh1.C:

  try
    {
      call A::A in a loop
      // #0
      try
        {
          call a fn using the array
        }
      finally
        {
          // #1
          call A::~A in a loop
        }
    }
  catch
    {
      // #2
      call A::~A in a loop
    }

cleanup_flags makes us emit a statement like

  D.3048 = 2;

at #0 to disable performing the cleanup at #2, since #1 will take care of the destruction of the array.

But if we are not emitting the loop because we can use a constant initializer (and use a single { a, b, ...}), we shouldn't generate the statement resetting the iterator to its initial value. Otherwise we crash in gimplify_var_or_parm_decl because it gets the stray decl D.3048.

PR c++/117985

gcc/cp/ChangeLog:
    * init.cc (build_vec_init): Pop CLEANUP_FLAGS if we're not generating the loop.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp0x/initlist-array23.C: New test.
    * g++.dg/cpp0x/initlist-array24.C: New test.

(cherry picked from commit 40e5636)
kraj
pushed a commit
that referenced
this pull request
Feb 8, 2025
In a member-specification of a class, a noexcept-specifier is a complete-class context. Thus we delay parsing until the end of the class via our DEFERRED_PARSE mechanism; see cp_parser_save_noexcept and cp_parser_late_noexcept_specifier. We also attempt to defer instantiation of noexcept-specifiers in order to reduce the number of instantiations; this is done via DEFERRED_NOEXCEPT. We can even have both, as in noexcept65.C: a DEFERRED_PARSE wrapped in DEFERRED_NOEXCEPT, which uses the DEFPARSE_INSTANTIATIONS mechanism.

noexcept65.C works, because when we really need the noexcept, which is when parsing the body of S::A::A(), the noexcept will have been parsed already; noexcepts are parsed before the bodies of member functions. But in this test we have:

  struct A {
    int x;
    template<class> void foo() noexcept(noexcept(x)) {}
    auto bar() -> decltype(foo<int>()) {} // #1
  };

and I think the decltype in #1 needs the unparsed noexcept before it could have been parsed. clang++ rejects the test and I suppose we should reject it as well, rather than crashing on a DEFERRED_PARSE in tsubst_expr.

PR c++/117106
PR c++/118190

gcc/cp/ChangeLog:
    * pt.cc (maybe_instantiate_noexcept): Give an error if the noexcept hasn't been parsed yet.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp0x/noexcept89.C: New test.
    * g++.dg/cpp0x/noexcept90.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
kraj
pushed a commit
that referenced
this pull request
Mar 26, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I did not go compile something that old, and identified this change via git blame, so might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v;
  v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, which computes the address of the COND_EXPR using build_address to build the representation of

  (true ? get (v) : get (v)).*(&Foo::x);

and gets something like

  &(true ? get (v) : get (v))    // #1

instead of

  (true ? &get (v) : &get (v))   // #2

and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to cp_build_addr_expr, which gives #2, that is properly handled.

PR c++/114525

gcc/cp/ChangeLog:
    * typeck2.cc (build_m_component_ref): Call cp_build_addr_expr instead of build_address.

gcc/testsuite/ChangeLog:
    * g++.dg/expr/cond18.C: New test.
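A runnable sketch of the semantics the fix restores, based on the reduced example above (it is not the committed cond18.C test): writing through a pointer-to-member applied to a conditional expression of lvalues must modify the selected object.

        struct Foo { int x; };
        Foo& get (Foo &v) { return v; }

        int main ()
        {
          Foo v;
          v.x = 1;
          (true ? get (v) : get (v)).*(&Foo::x) = 2;
          return v.x == 2 ? 0 : 1;   // expected to exit 0 once the write lands in v.x
        }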
kraj
pushed a commit
that referenced
this pull request
Mar 31, 2025
Here we instantiate the lambda three times in producing A<0>::f:
1) in tsubst_function_type, substituting the type of A<>::f
2) in tsubst_function_decl, substituting the parameters of A<>::f
3) in regenerate_decl_from_template when instantiating A<>::f

The first one gets thrown away by maybe_rebuild_function_decl_type. Before r15-7202, we happily built all of them and mangled the result wrongly as lambda #3. After r15-7202, we try to mangle #3 as #1, which breaks because #1 is already mangled as #1.

This patch avoids building #3 by suppressing regenerate_decl_from_template if the template signature includes a lambda, fixing the ICE. We now mangle the lambda as #2, which is still wrong. Addressing that should involve not calling tsubst_function_type from tsubst_function_decl, and building the type from the parms types in the first place rather than fixing it up in maybe_rebuild_function_decl_type.

PR c++/119401

gcc/cp/ChangeLog:
    * pt.cc (regenerate_decl_from_template): Don't regenerate if the signature involves a lambda.

gcc/testsuite/ChangeLog:
    * g++.dg/cpp2a/lambda-targ11.C: New test.
kraj
pushed a commit
that referenced
this pull request
Apr 14, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I did not go compile something that old, and identified this change via git blame, so might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v;
  v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, which computes the address of the COND_EXPR using build_address to build the representation of

  (true ? get (v) : get (v)).*(&Foo::x);

and gets something like

  &(true ? get (v) : get (v))    // #1

instead of

  (true ? &get (v) : &get (v))   // #2

and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to cp_build_addr_expr, which gives #2, that is properly handled.

PR c++/114525

gcc/cp/ChangeLog:
    * typeck2.cc (build_m_component_ref): Call cp_build_addr_expr instead of build_address.

gcc/testsuite/ChangeLog:
    * g++.dg/expr/cond18.C: New test.

(cherry picked from commit 35ce9af)
kraj
pushed a commit
that referenced
this pull request
Apr 14, 2025
We've been miscompiling the following since r0-51314-gd6b4ea8592e338 (I did not go compile something that old, and identified this change via git blame, so might be wrong)

=== cut here ===
struct Foo { int x; };
Foo& get (Foo &v) { return v; }
void bar () {
  Foo v;
  v.x = 1;
  (true ? get (v) : get (v)).*(&Foo::x) = 2;
  // v.x still equals 1 here...
}
=== cut here ===

The problem lies in build_m_component_ref, which computes the address of the COND_EXPR using build_address to build the representation of

  (true ? get (v) : get (v)).*(&Foo::x);

and gets something like

  &(true ? get (v) : get (v))    // #1

instead of

  (true ? &get (v) : &get (v))   // #2

and the write does not go where we want it to, hence the miscompile.

This patch replaces the call to build_address by a call to cp_build_addr_expr, which gives #2, that is properly handled.

PR c++/114525

gcc/cp/ChangeLog:
    * typeck2.cc (build_m_component_ref): Call cp_build_addr_expr instead of build_address.

gcc/testsuite/ChangeLog:
    * g++.dg/expr/cond18.C: New test.

(cherry picked from commit 35ce9af)
kraj
pushed a commit
that referenced
this pull request
May 10, 2025
This patch fixes some of the problems with costing in the scalar-to-vector pass. In particular:

1) The pass uses optimize_insn_for_size, which is intended to be used by expanders and splitters and requires the optimization pass to use set_rtl_profile (bb) for the currently processed bb. This is not done, so we get random stale info about the hotness of the insn.

2) Register allocator move costs are all relative to the integer reg-reg move, which has a cost of 2, so it is (except for size tables and i386) the latency of the instruction multiplied by 2. These costs have been duplicated and are now used in combination with rtx costs, which are all based on COSTS_N_INSNS, which multiplies latency by 4. Some of the vectorizer costing contains COSTS_N_INSNS (move_cost) / 2 to compensate, but some new code does not. This patch adds compensation. Perhaps we should update the cost tables to use COSTS_N_INSNS everywhere, but I think we want to first fix the inconsistencies. Also the tables will get optically much longer, since we have many move costs and COSTS_N_INSNS is a lot of characters.

3) The variable m, which decides how much to multiply the integer variant (to account for the fact that with -m32 all 64bit computations need 2 instructions), is declared unsigned, which makes the signed computation of the instruction gain be done in an unsigned type and breaks e.g. for division.

4) I added integer_to_sse costs, which are currently all a duplication of sse_to_integer. AMD chips are asymmetric and moving in one direction is faster than in the other. I will change costs incrementally once the vectorizer part is fixed up, too.

There are two failures, gcc.target/i386/minmax-6.c and gcc.target/i386/minmax-7.c. Both test stv on Haswell, which no longer happens since SSE->INT and INT->SSE moves are now more expensive. There is only one instruction to convert:

Computing gain for chain #1...
  Instruction gain 8 for    11: {r110:SI=smax(r116:SI,0);clobber flags:CC;}
  Instruction conversion gain: 8
  Registers conversion cost: 8    <- this is integer_to_sse and sse_to_integer
  Total gain: 0

The total gain used to be 4, since the patch doubles the conversion costs. According to Agner Fog's tables the costs should be 1 cycle, which is correct here. The final code generated is:

        vmovd   %esi, %xmm0             * latency 1
        cmpl    %edx, %esi
        je      .L2
        vpxor   %xmm1, %xmm1, %xmm1     * latency 1
        vpmaxsd %xmm1, %xmm0, %xmm0     * latency 1
        vmovd   %xmm0, %eax             * latency 1
        imull   %edx, %eax
        cltq
        movzwl  (%rdi,%rax,2), %eax
        ret

        cmpl    %edx, %esi
        je      .L2
        xorl    %eax, %eax              * latency 1
        testl   %esi, %esi              * latency 1
        cmovs   %eax, %esi              * latency 2
        imull   %edx, %esi
        movslq  %esi, %rsi
        movzwl  (%rdi,%rsi,2), %eax
        ret

Instructions with latency info are those that really differ. So the unconverted code has a sum of latencies of 4 and a real latency of 3. The converted code has a sum of latencies of 4 and a real latency of 3 (vmovd + vpmaxsd + vmovd). So I do not quite see why it should be a win.

There is also a bug in costing MIN/MAX:

    case ABS:
    case SMAX:
    case SMIN:
    case UMAX:
    case UMIN:
      /* We do not have any conditional move cost, estimate it as a
         reg-reg move.  Comparisons are costed as adds.  */
      igain += m * (COSTS_N_INSNS (2) + ix86_cost->add);
      /* Integer SSE ops are all costed the same.  */
      igain -= ix86_cost->sse_op;
      break;

Now COSTS_N_INSNS (2) is not quite right, since a reg-reg move should be 1 or perhaps 0. For Haswell cmov really is 2 cycles, but I guess we want to have that in the cost vectors like all other instructions. I am not sure if this is really a win in this case (the other minmax testcases seem to make sense). I have xfailed it for now and will check if that affects specs on the LNT testers.

I will proceed with similar fixes on the vectorizer cost side. Sadly those introduce quite some differences in the testsuite (partly triggered by other costing problems, such as the one for scatter/gather).

gcc/ChangeLog:
    * config/i386/i386-features.cc (general_scalar_chain::vector_const_cost): Add BB parameter; handle size costs; use COSTS_N_INSNS to compute move costs.
    (general_scalar_chain::compute_convert_gain): Use optimize_bb_for_size instead of optimize_insn_for_size; use COSTS_N_INSNS to compute move costs; update calls of general_scalar_chain::vector_const_cost; use ix86_cost->integer_to_sse.
    (timode_immed_const_gain): Add bb parameter; use optimize_bb_for_size_p.
    (timode_scalar_chain::compute_convert_gain): Use optimize_bb_for_size_p.
    * config/i386/i386-features.h (class general_scalar_chain): Update prototype of vector_const_cost.
    * config/i386/i386.h (struct processor_costs): Add integer_to_sse.
    * config/i386/x86-tune-costs.h (struct processor_costs): Copy sse_to_integer to integer_to_sse everywhere.

gcc/testsuite/ChangeLog:
    * gcc.target/i386/minmax-6.c: xfail test that pmax is used.
    * gcc.target/i386/minmax-7.c: xfail test that pmin is used.
kraj
pushed a commit
that referenced
this pull request
May 12, 2025
The test was designed to pass with thumb2, but code generation changed with the introduction of Low Overhead Loops, so the test can fail if one overrides the flags when running the testsuite.

In addition, useless subtract / extension instructions require -O2 to remove them (-O is not sufficient), so replace -O with -O2 in dg-options.

arm_thumb2_ok_no_arm_v8_1m_lob does not do what the test needs (it can fail because some flags conflict, rather than because lob are supported, and we do not need to check runtime support in this test anyway), so the patch reverts back to arm_thumb2_ok.

Finally, replace the scan-assembler directives with check-function-bodies, checking both types of code generation (with and without LOL).

Depending on architecture version, the two insns

        and     r0, r1, r0, lsr #1
        ands    r3, r3, #255

can be swapped, so accept both orders.

gcc/testsuite/ChangeLog:
    PR target/116445
    * gcc.target/arm/unsigned-extend-2.c: Fix dg directives.
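For reference, check-function-bodies can accept either ordering of the two instructions with an alternation group; the fragment below is only a hedged sketch of that idiom (the function name and surrounding lines are placeholders, not the committed unsigned-extend-2.c body):

        /*
        ** foo:
        ** (
        **	and	r0, r1, r0, lsr #1
        **	ands	r3, r3, #255
        ** |
        **	ands	r3, r3, #255
        **	and	r0, r1, r0, lsr #1
        ** )
        **	...
        */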
kraj
pushed a commit
that referenced
this pull request
Jun 9, 2025
This patch adds a new param vect-scalar-cost-multiplier to scale the scalar costing during vectorization. If the cost is set high enough and when using the dynamic cost model, it has the effect of effectively disabling the costing vs scalar and assumes all vectorization to be profitable.

This is similar to using the unlimited cost model, but unlike unlimited it does not fully disable the vector cost model. That means that we still perform comparisons between vector modes. And it means it also still does costing for alias analysis.

As an example, the following:

void
foo (char *restrict a, int *restrict b, int *restrict c,
     int *restrict d, int stride)
{
  if (stride <= 1)
    return;

  for (int i = 0; i < 3; i++)
    {
      int res = c[i];
      int t = b[i * stride];
      if (a[i] != 0)
        res = t * d[i];
      c[i] = res;
    }
}

compiled with -O3 -march=armv8-a+sve -fvect-cost-model=dynamic fails to vectorize as it assumes scalar would be faster, and with -fvect-cost-model=unlimited it picks a vector type that's so big that the large sequence generated is working on mostly inactive lanes:

        ...
        and     p3.b, p3/z, p4.b, p4.b
        whilelo p0.s, wzr, w7
        ld1w    z23.s, p3/z, [x3, #3, mul vl]
        ld1w    z28.s, p0/z, [x5, z31.s, sxtw 2]
        add     x0, x5, x0
        punpklo p6.h, p6.b
        ld1w    z27.s, p4/z, [x0, z31.s, sxtw 2]
        and     p6.b, p6/z, p0.b, p0.b
        punpklo p4.h, p7.b
        ld1w    z24.s, p6/z, [x3, #2, mul vl]
        and     p4.b, p4/z, p2.b, p2.b
        uqdecw  w6
        ld1w    z26.s, p4/z, [x3]
        whilelo p1.s, wzr, w6
        mul     z27.s, p5/m, z27.s, z23.s
        ld1w    z29.s, p1/z, [x4, z31.s, sxtw 2]
        punpkhi p7.h, p7.b
        mul     z24.s, p5/m, z24.s, z28.s
        and     p7.b, p7/z, p1.b, p1.b
        mul     z26.s, p5/m, z26.s, z30.s
        ld1w    z25.s, p7/z, [x3, #1, mul vl]
        st1w    z27.s, p3, [x2, #3, mul vl]
        mul     z25.s, p5/m, z25.s, z29.s
        st1w    z24.s, p6, [x2, #2, mul vl]
        st1w    z25.s, p7, [x2, #1, mul vl]
        st1w    z26.s, p4, [x2]
        ...

With -fvect-cost-model=dynamic --param vect-scalar-cost-multiplier=200 you get more reasonable code:

foo:
        cmp     w4, 1
        ble     .L1
        ptrue   p7.s, vl3
        index   z0.s, #0, w4
        ld1b    z29.s, p7/z, [x0]
        ld1w    z30.s, p7/z, [x1, z0.s, sxtw 2]
        ptrue   p6.b, all
        cmpne   p7.b, p7/z, z29.b, #0
        ld1w    z31.s, p7/z, [x3]
        mul     z31.s, p6/m, z31.s, z30.s
        st1w    z31.s, p7, [x2]
.L1:
        ret

This model has been useful internally for performance exploration and cost-model validation. It allows us to force realistic vectorization, overriding the cost model, so that we can tell whether the cost model is correct with respect to profitability.

gcc/ChangeLog:
    * params.opt (vect-scalar-cost-multiplier): New.
    * tree-vect-loop.cc (vect_estimate_min_profitable_iters): Use it.
    * doc/invoke.texi (vect-scalar-cost-multiplier): Document it.

gcc/testsuite/ChangeLog:
    * gcc.target/aarch64/sve/cost_model_16.c: New test.
kraj
pushed a commit
that referenced
this pull request
Jun 13, 2025
…o_debug_section [PR116614]

cat abc.C
  #define A(n) struct T##n {} t##n;
  #define B(n) A(n##0) A(n##1) A(n##2) A(n##3) A(n##4) A(n##5) A(n##6) A(n##7) A(n##8) A(n##9)
  #define C(n) B(n##0) B(n##1) B(n##2) B(n##3) B(n##4) B(n##5) B(n##6) B(n##7) B(n##8) B(n##9)
  #define D(n) C(n##0) C(n##1) C(n##2) C(n##3) C(n##4) C(n##5) C(n##6) C(n##7) C(n##8) C(n##9)
  #define E(n) D(n##0) D(n##1) D(n##2) D(n##3) D(n##4) D(n##5) D(n##6) D(n##7) D(n##8) D(n##9)
  E(1) E(2) E(3)
  int main () { return 0; }
./xg++ -B ./ -o abc{.o,.C} -flto -flto-partition=1to1 -O2 -g -fdebug-types-section -c
./xgcc -B ./ -o abc{,.o} -flto -flto-partition=1to1 -O2
(not included in testsuite as it takes a while to compile) FAILs with

lto-wrapper: fatal error: Too many copied sections: Operation not supported
compilation terminated.
/usr/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status

The following patch fixes that. Most of the 64K+ section support for reading and writing was already there years ago (and especially reading used quite often already), and a further bug in it was fixed in the PR104617 fix.

Yet, the fix isn't solely about removing the

  if (new_i - 1 >= SHN_LORESERVE)
    {
      *err = ENOTSUP;
      return "Too many copied sections";
    }

5 lines; the missing part was that the function only handled reading of the .symtab_shndx section but not copying/updating of it.

If the result has fewer than 64K-epsilon sections, that actually wasn't needed, but e.g. with -fdebug-types-section one can exceed that pretty easily (reported to us on a WebKitGtk build on ppc64le). Updating the section is slightly more complicated, because it basically needs to be done in lock step with updating the .symtab section: if one doesn't need to use SHN_XINDEX in there, the section should (or should be updated to) contain an SHN_UNDEF entry, otherwise it needs to have whatever would be otherwise stored but couldn't fit. But repeating all the symtab decisions (what to discard and how to rewrite it) just because of that would be ugly.

So, the patch instead emits the .symtab_shndx section (or sections) last, prepares the content during the .symtab processing, and in a second pass, when going just through .symtab_shndx sections, just uses the saved content.

2024-09-07  Jakub Jelinek  <jakub@redhat.com>

    PR lto/116614
    * simple-object-elf.c (SHN_COMMON): Align comment with neighbouring comments.
    (SHN_HIRESERVE): Use uppercase hex digits instead of lowercase for consistency.
    (simple_object_elf_find_sections): Formatting fixes.
    (simple_object_elf_fetch_attributes): Likewise.
    (simple_object_elf_attributes_merge): Likewise.
    (simple_object_elf_start_write): Likewise.
    (simple_object_elf_write_ehdr): Likewise.
    (simple_object_elf_write_shdr): Likewise.
    (simple_object_elf_write_to_file): Likewise.
    (simple_object_elf_copy_lto_debug_section): Likewise. Don't fail for new_i - 1 >= SHN_LORESERVE, instead arrange in that case to copy over .symtab_shndx sections, though emit those last and compute their section content when processing associated .symtab sections. Handle simple_object_internal_read failure even in the .symtab_shndx reading case.

(cherry picked from commit bb8dd09)