This repository was archived by the owner on Mar 28, 2020. It is now read-only.
Conversation
This repository is a clone of llvm.org's LLVM repository; fixes should be made upstream at llvm.org.
fredriss pushed a commit that referenced this pull request on Feb 3, 2016
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259602 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Feb 24, 2016
Summary: For instance, compiling the below results in a panic: ``` llc: ../lib/CodeGen/InlineSpiller.cpp:1140: bool (anonymous namespace)::InlineSpiller::foldMemoryOperand(ArrayRef<std::pair<MachineInstr *, unsigned int> >, llvm::MachineInstr *): Assertion `MO->isDead() && "Cannot fold physreg def"' failed. #0 0x00007f50fbcf353e llvm::sys::PrintStackTrace(llvm::raw_ostream&) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:321:15 #1 0x00007f50fbcf3929 PrintStackTraceSignalHandler(void*) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:380:1 #2 0x00007f50fbcf22a3 llvm::sys::RunSignalHandlers() /home/h/3rd/llvm/build/../lib/Support/Signals.cpp:45:5 #3 0x00007f50fbcf3bb4 SignalHandler(int) /home/h/3rd/llvm/build/../lib/Support/Unix/Signals.inc:210:1 #4 0x00007f50fa87a180 (/lib/x86_64-linux-gnu/libc.so.6+0x35180) #5 0x00007f50fa87a107 gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x35107) #6 0x00007f50fa87b4e8 abort (/lib/x86_64-linux-gnu/libc.so.6+0x364e8) #7 0x00007f50fa873226 (/lib/x86_64-linux-gnu/libc.so.6+0x2e226) #8 0x00007f50fa8732d2 (/lib/x86_64-linux-gnu/libc.so.6+0x2e2d2) #9 0x00007f50fddd9287 (anonymous namespace)::InlineSpiller::foldMemoryOperand(llvm::ArrayRef<std::pair<llvm::MachineInstr*, unsigned int> >, llvm::MachineInstr*) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1141:21 #10 0x00007f50fddd9ee9 (anonymous namespace)::InlineSpiller::spillAroundUses(unsigned int) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1286:9 #11 0x00007f50fddd388b (anonymous namespace)::InlineSpiller::spillAll() /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1338:21 #12 0x00007f50fddd221d (anonymous namespace)::InlineSpiller::spill(llvm::LiveRangeEdit&) /home/h/3rd/llvm/build/../lib/CodeGen/InlineSpiller.cpp:1391:3 #13 0x00007f50fdfd921b (anonymous namespace)::RAGreedy::selectOrSplitImpl(llvm::LiveInterval&, llvm::SmallVectorImpl<unsigned int>&, llvm::SmallSet<unsigned int, 16u, std::less<unsigned int> >&, unsigned int) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2555:5 #14 0x00007f50fdfd647b (anonymous namespace)::RAGreedy::selectOrSplit(llvm::LiveInterval&, llvm::SmallVectorImpl<unsigned int>&) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2221:12 #15 0x00007f50fdfc89f9 llvm::RegAllocBase::allocatePhysRegs() /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocBase.cpp:110:14 #16 0x00007f50fdfd6337 (anonymous namespace)::RAGreedy::runOnMachineFunction(llvm::MachineFunction&) /home/h/3rd/llvm/build/../lib/CodeGen/RegAllocGreedy.cpp:2611:3 #17 0x00007f50fded33ee llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /home/h/3rd/llvm/build/../lib/CodeGen/MachineFunctionPass.cpp:43:3 #18 0x00007f50fd6cdc6f llvm::FPPassManager::runOnFunction(llvm::Function&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1550:23 #19 0x00007f50fd6cdf85 llvm::FPPassManager::runOnModule(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1571:16 #20 0x00007f50fd6ce71a (anonymous namespace)::MPPassManager::runOnModule(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1627:23 #21 0x00007f50fd6ce246 llvm::legacy::PassManagerImpl::run(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1730:16 #22 0x00007f50fd6cec31 llvm::legacy::PassManager::run(llvm::Module&) /home/h/3rd/llvm/build/../lib/IR/LegacyPassManager.cpp:1761:3 #23 0x0000000000415bdc compileModule(char**, llvm::LLVMContext&) /home/h/3rd/llvm/build/../tools/llc/llc.cpp:405:5 #24 0x0000000000414571 main 
/home/h/3rd/llvm/build/../tools/llc/llc.cpp:211:13 #25 0x00007f50fa866b45 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b45) #26 0x0000000000414296 _start (/home/h/3rd/llvm/build/bin/llc+0x414296) Stack dump: 0. Program arguments: ./bin/llc -mtriple msp430 loadstore.ll 1. Running pass 'Function Pass Manager' on module 'loadstore.ll'. 2. Running pass 'Greedy Register Allocator' on function '@inc' ``` Original IR: ```llvm %struct.VeryLarge = type { i8, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32 } ; Function Attrs: norecurse nounwind define void @inc(%struct.VeryLarge* noalias nocapture sret %agg.result, %struct.VeryLarge* byval align 1 %s) #0 { entry: %p0 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 0 %0 = load i8, i8* %p0, align 1, !tbaa !1 %p1 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 1 %1 = load i32, i32* %p1, align 1, !tbaa !6 %p2 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 2 %2 = load i32, i32* %p2, align 1, !tbaa !7 %p3 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 3 %3 = load i32, i32* %p3, align 1, !tbaa !8 %p4 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 4 %4 = load i32, i32* %p4, align 1, !tbaa !9 %p5 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 5 %5 = load i32, i32* %p5, align 1, !tbaa !10 %p6 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 6 %6 = load i32, i32* %p6, align 1, !tbaa !11 %p7 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 7 %7 = load i32, i32* %p7, align 1, !tbaa !12 %p8 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 8 %8 = load i32, i32* %p8, align 1, !tbaa !13 %p9 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 9 %9 = load i32, i32* %p9, align 1, !tbaa !14 %p10 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 10 %10 = load i32, i32* %p10, align 1, !tbaa !15 %p11 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 11 %11 = load i32, i32* %p11, align 1, !tbaa !16 %p12 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 12 %12 = load i32, i32* %p12, align 1, !tbaa !17 %p13 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 13 %13 = load i32, i32* %p13, align 1, !tbaa !18 %p14 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 14 %14 = load i32, i32* %p14, align 1, !tbaa !19 %p15 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 15 %15 = load i32, i32* %p15, align 1, !tbaa !20 %p16 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 16 %16 = load i32, i32* %p16, align 1, !tbaa !21 %p17 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 17 %17 = load i32, i32* %p17, align 1, !tbaa !22 %p18 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 18 %18 = load i32, i32* %p18, align 1, !tbaa !23 %p19 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 19 %19 = load i32, i32* %p19, align 1, !tbaa !24 %p20 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 20 %20 = load i32, i32* %p20, align 1, !tbaa !25 %p21 = getelementptr inbounds 
%struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 21 %21 = load i32, i32* %p21, align 1, !tbaa !26 %p22 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 22 %22 = load i32, i32* %p22, align 1, !tbaa !27 %p23 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 23 %23 = load i32, i32* %p23, align 1, !tbaa !28 %p24 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 24 %24 = load i32, i32* %p24, align 1, !tbaa !29 %p25 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 25 %25 = load i32, i32* %p25, align 1, !tbaa !30 %p26 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 26 %26 = load i32, i32* %p26, align 1, !tbaa !31 %p27 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 27 %27 = load i32, i32* %p27, align 1, !tbaa !32 %p28 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 28 %28 = load i32, i32* %p28, align 1, !tbaa !33 %p29 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 29 %29 = load i32, i32* %p29, align 1, !tbaa !34 %p30 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 30 %30 = load i32, i32* %p30, align 1, !tbaa !35 %p31 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 31 %31 = load i32, i32* %p31, align 1, !tbaa !36 %p32 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %s, i32 0, i32 32 %32 = load i32, i32* %p32, align 1, !tbaa !37 %add = add i8 %0, 1 store i8 %add, i8* %p0, align 1, !tbaa !1 %add2 = add i32 %1, 2 store i32 %add2, i32* %p1, align 1, !tbaa !6 %add3 = add i32 %2, 3 store i32 %add3, i32* %p2, align 1, !tbaa !7 %add4 = add i32 %3, 4 store i32 %add4, i32* %p3, align 1, !tbaa !8 %add5 = add i32 %4, 5 store i32 %add5, i32* %p4, align 1, !tbaa !9 %add6 = add i32 %5, 6 store i32 %add6, i32* %p5, align 1, !tbaa !10 %add7 = add i32 %6, 7 store i32 %add7, i32* %p6, align 1, !tbaa !11 %add8 = add i32 %7, 8 store i32 %add8, i32* %p7, align 1, !tbaa !12 %add9 = add i32 %8, 9 store i32 %add9, i32* %p8, align 1, !tbaa !13 %add10 = add i32 %9, 10 store i32 %add10, i32* %p9, align 1, !tbaa !14 %add11 = add i32 %10, 11 store i32 %add11, i32* %p10, align 1, !tbaa !15 %add12 = add i32 %11, 12 store i32 %add12, i32* %p11, align 1, !tbaa !16 %add13 = add i32 %12, 13 store i32 %add13, i32* %p12, align 1, !tbaa !17 %add14 = add i32 %13, 14 store i32 %add14, i32* %p13, align 1, !tbaa !18 %add15 = add i32 %14, 15 store i32 %add15, i32* %p14, align 1, !tbaa !19 %add16 = add i32 %15, 16 store i32 %add16, i32* %p15, align 1, !tbaa !20 %add17 = add i32 %16, 17 store i32 %add17, i32* %p16, align 1, !tbaa !21 %add18 = add i32 %17, 18 store i32 %add18, i32* %p17, align 1, !tbaa !22 %add19 = add i32 %18, 19 store i32 %add19, i32* %p18, align 1, !tbaa !23 %add20 = add i32 %19, 20 store i32 %add20, i32* %p19, align 1, !tbaa !24 %add21 = add i32 %20, 21 store i32 %add21, i32* %p20, align 1, !tbaa !25 %add22 = add i32 %21, 22 store i32 %add22, i32* %p21, align 1, !tbaa !26 %add23 = add i32 %22, 23 store i32 %add23, i32* %p22, align 1, !tbaa !27 %add24 = add i32 %23, 24 store i32 %add24, i32* %p23, align 1, !tbaa !28 %add25 = add i32 %24, 25 store i32 %add25, i32* %p24, align 1, !tbaa !29 %add26 = add i32 %25, 26 store i32 %add26, i32* %p25, align 1, !tbaa !30 %add27 = add i32 %26, 27 store i32 %add27, i32* %p26, align 1, !tbaa !31 %add28 = add i32 %27, 28 store i32 %add28, i32* %p27, align 1, !tbaa !32 %add29 = add 
i32 %28, 29 store i32 %add29, i32* %p28, align 1, !tbaa !33 %add30 = add i32 %29, 30 store i32 %add30, i32* %p29, align 1, !tbaa !34 %add31 = add i32 %30, 31 store i32 %add31, i32* %p30, align 1, !tbaa !35 %add32 = add i32 %31, 32 store i32 %add32, i32* %p31, align 1, !tbaa !36 %add33 = add i32 %32, 33 store i32 %add33, i32* %p32, align 1, !tbaa !37 %33 = getelementptr inbounds %struct.VeryLarge, %struct.VeryLarge* %agg.result, i32 0, i32 0 call void @llvm.memcpy.p0i8.p0i8.i32(i8* %33, i8* %p0, i32 129, i32 1, i1 false), !tbaa.struct !38 ret void } ; Function Attrs: argmemonly nounwind declare void @llvm.memcpy.p0i8.p0i8.i32(i8* nocapture, i8* nocapture readonly, i32, i32, i1) #1 attributes #0 = { norecurse nounwind "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } attributes #1 = { argmemonly nounwind } !llvm.ident = !{!0} !0 = !{!"clang version 3.8.0 (git://github.com/llvm-mirror/clang 40ef2b7531472c41212c4719a9294aeb7bddebbc) (git://github.com/llvm-mirror/llvm c601eaf55606dfb9ad372b514b77aa00d1409be1)"} !1 = !{!2, !3, i64 0} !2 = !{!"", !3, i64 0, !5, i64 1, !5, i64 5, !5, i64 9, !5, i64 13, !5, i64 17, !5, i64 21, !5, i64 25, !5, i64 29, !5, i64 33, !5, i64 37, !5, i64 41, !5, i64 45, !5, i64 49, !5, i64 53, !5, i64 57, !5, i64 61, !5, i64 65, !5, i64 69, !5, i64 73, !5, i64 77, !5, i64 81, !5, i64 85, !5, i64 89, !5, i64 93, !5, i64 97, !5, i64 101, !5, i64 105, !5, i64 109, !5, i64 113, !5, i64 117, !5, i64 121, !5, i64 125} !3 = !{!"omnipotent char", !4, i64 0} !4 = !{!"Simple C/C++ TBAA"} !5 = !{!"int", !3, i64 0} !6 = !{!2, !5, i64 1} !7 = !{!2, !5, i64 5} !8 = !{!2, !5, i64 9} !9 = !{!2, !5, i64 13} !10 = !{!2, !5, i64 17} !11 = !{!2, !5, i64 21} !12 = !{!2, !5, i64 25} !13 = !{!2, !5, i64 29} !14 = !{!2, !5, i64 33} !15 = !{!2, !5, i64 37} !16 = !{!2, !5, i64 41} !17 = !{!2, !5, i64 45} !18 = !{!2, !5, i64 49} !19 = !{!2, !5, i64 53} !20 = !{!2, !5, i64 57} !21 = !{!2, !5, i64 61} !22 = !{!2, !5, i64 65} !23 = !{!2, !5, i64 69} !24 = !{!2, !5, i64 73} !25 = !{!2, !5, i64 77} !26 = !{!2, !5, i64 81} !27 = !{!2, !5, i64 85} !28 = !{!2, !5, i64 89} !29 = !{!2, !5, i64 93} !30 = !{!2, !5, i64 97} !31 = !{!2, !5, i64 101} !32 = !{!2, !5, i64 105} !33 = !{!2, !5, i64 109} !34 = !{!2, !5, i64 113} !35 = !{!2, !5, i64 117} !36 = !{!2, !5, i64 121} !37 = !{!2, !5, i64 125} !38 = !{i64 0, i64 1, !39, i64 1, i64 4, !40, i64 5, i64 4, !40, i64 9, i64 4, !40, i64 13, i64 4, !40, i64 17, i64 4, !40, i64 21, i64 4, !40, i64 25, i64 4, !40, i64 29, i64 4, !40, i64 33, i64 4, !40, i64 37, i64 4, !40, i64 41, i64 4, !40, i64 45, i64 4, !40, i64 49, i64 4, !40, i64 53, i64 4, !40, i64 57, i64 4, !40, i64 61, i64 4, !40, i64 65, i64 4, !40, i64 69, i64 4, !40, i64 73, i64 4, !40, i64 77, i64 4, !40, i64 81, i64 4, !40, i64 85, i64 4, !40, i64 89, i64 4, !40, i64 93, i64 4, !40, i64 97, i64 4, !40, i64 101, i64 4, !40, i64 105, i64 4, !40, i64 109, i64 4, !40, i64 113, i64 4, !40, i64 117, i64 4, !40, i64 121, i64 4, !40, i64 125, i64 4, !40} !39 = !{!3, !3, i64 0} !40 = !{!5, !5, i64 0} ``` Reviewers: asl Subscribers: qcolombet Differential Revision: http://reviews.llvm.org/D17441 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@261746 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Mar 31, 2016
This change prevents the loop vectorizer from vectorizing when all of the vector types it generates will be scalarized. I've run into this problem on the PPC's QPX vector ISA, which only holds floating-point vector types. The loop vectorizer will, however, happily vectorize loops with purely integer computation. Here's an example: LV: The Smallest and Widest types: 32 / 32 bits. LV: The Widest register is: 256 bits. LV: Found an estimated cost of 0 for VF 1 For instruction: %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ] LV: Found an estimated cost of 0 for VF 1 For instruction: %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25 LV: Found an estimated cost of 0 for VF 1 For instruction: %2 = trunc i64 %indvars.iv25 to i32 LV: Found an estimated cost of 1 for VF 1 For instruction: store i32 %2, i32* %arrayidx, align 4 LV: Found an estimated cost of 1 for VF 1 For instruction: %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1 LV: Found an estimated cost of 1 for VF 1 For instruction: %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600 LV: Found an estimated cost of 0 for VF 1 For instruction: br i1 %exitcond27, label %for.cond.cleanup, label %for.body LV: Scalar loop costs: 3. LV: Found an estimated cost of 0 for VF 2 For instruction: %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ] LV: Found an estimated cost of 0 for VF 2 For instruction: %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25 LV: Found an estimated cost of 0 for VF 2 For instruction: %2 = trunc i64 %indvars.iv25 to i32 LV: Found an estimated cost of 2 for VF 2 For instruction: store i32 %2, i32* %arrayidx, align 4 LV: Found an estimated cost of 1 for VF 2 For instruction: %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1 LV: Found an estimated cost of 1 for VF 2 For instruction: %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600 LV: Found an estimated cost of 0 for VF 2 For instruction: br i1 %exitcond27, label %for.cond.cleanup, label %for.body LV: Vector loop of width 2 costs: 2. LV: Found an estimated cost of 0 for VF 4 For instruction: %indvars.iv25 = phi i64 [ 0, %entry ], [ %indvars.iv.next26, %for.body ] LV: Found an estimated cost of 0 for VF 4 For instruction: %arrayidx = getelementptr inbounds [1600 x i32], [1600 x i32]* %a, i64 0, i64 %indvars.iv25 LV: Found an estimated cost of 0 for VF 4 For instruction: %2 = trunc i64 %indvars.iv25 to i32 LV: Found an estimated cost of 4 for VF 4 For instruction: store i32 %2, i32* %arrayidx, align 4 LV: Found an estimated cost of 1 for VF 4 For instruction: %indvars.iv.next26 = add nuw nsw i64 %indvars.iv25, 1 LV: Found an estimated cost of 1 for VF 4 For instruction: %exitcond27 = icmp eq i64 %indvars.iv.next26, 1600 LV: Found an estimated cost of 0 for VF 4 For instruction: br i1 %exitcond27, label %for.cond.cleanup, label %for.body LV: Vector loop of width 4 costs: 1. ... LV: Selecting VF: 8. LV: The target has 32 registers LV(REG): Calculating max register usage: LV(REG): At #0 Interval # 0 LV(REG): At #1 Interval # 1 LV(REG): At #2 Interval # 2 LV(REG): At #4 Interval # 1 LV(REG): At #5 Interval # 1 LV(REG): VF = 8 The problem is that the cost model here is not wrong, exactly. Since all of these operations are scalarized, their cost (aside from the uniform ones) are indeed VF*(scalar cost), just as the model suggests. 
In fact, the larger the VF picked, the lower the relative overhead from the loop itself (and the induction-variable update and check), and so in a sense, picking the largest VF here is the right thing to do. The problem is that vectorizing like this, where all of the vectors will be scalarized in the backend, isn't really vectorizing, but rather interleaving. By itself, this would be okay, but then the vectorizer itself also interleaves, and that's where the problem manifests itself. There's aren't actually enough scalar registers to support the normal interleave factor multiplied by a factor of VF (8 in this example). In other words, the problem with this is that our register-pressure heuristic does not account for scalarization. While we might want to improve our register-pressure heuristic, I don't think this is the right motivating case for that work. Here we have a more-basic problem: The job of the vectorizer is to vectorize things (interleaving aside), and if the IR it generates won't generate any actual vector code, then something is wrong. Thus, if every type looks like it will be scalarized (i.e. will be split into VF or more parts), then don't consider that VF. This is not a problem specific to PPC/QPX, however. The problem comes up under SSE on x86 too, and as such, this change fixes PR26837 too. I've added Sanjay's reduced test case from PR26837 to this commit. Differential Revision: http://reviews.llvm.org/D18537 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@264904 91177308-0d34-0410-b5e6-96231b3b80d8
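The debug output above corresponds to a loop of roughly this shape (a reconstruction from the log, not the exact test case):

```cpp
// Pure integer loop: on a target whose vector registers only hold
// floating-point types (e.g. QPX), every vector type chosen here is
// scalarized, so a large VF only interleaves scalar work.
void fill(int a[1600]) {
  for (long i = 0; i < 1600; ++i)
    a[i] = static_cast<int>(i); // store i32 (trunc i64 %indvars.iv), as in the log
}
```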
fredriss pushed a commit that referenced this pull request on Apr 7, 2016
A virtual register may have either a register bank or a register class. This is represented by a PointerUnion between the related classes.

Typically, a virtual register goes through the following states regarding register class and register bank:
1. Creation: None is set. Virtual registers are fully generic.
2. Register bank assignment: the register bank is set. Virtual registers live in a register bank, but we do not know the constraints they need to fulfil.
3. Instruction selection: the register class is set. Virtual registers are bound by encoding constraints.

To map these states to GlobalISel, the IRTranslator implements #1, RegBankSelect #2, and Select #3.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265696 91177308-0d34-0410-b5e6-96231b3b80d8
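The three states above can be illustrated with a minimal, self-contained C++ sketch that models the bank-or-class slot with std::variant instead of LLVM's actual PointerUnion (the type and field names are illustrative, not LLVM's real API):

```cpp
#include <cassert>
#include <variant>

struct RegisterBank {};   // stand-in for llvm::RegisterBank
struct RegisterClass {};  // stand-in for llvm::TargetRegisterClass

// A virtual register holds either nothing, a bank, or a class.
struct VirtReg {
  std::variant<std::monostate, const RegisterBank *, const RegisterClass *>
      BankOrClass;
};

int main() {
  RegisterBank GPRBank;
  RegisterClass GPR32Class;

  VirtReg V;                  // 1. Creation: neither bank nor class is set.
  assert(std::holds_alternative<std::monostate>(V.BankOrClass));

  V.BankOrClass = &GPRBank;   // 2. Register bank assignment.
  assert(std::holds_alternative<const RegisterBank *>(V.BankOrClass));

  V.BankOrClass = &GPR32Class; // 3. Instruction selection: class replaces bank.
  assert(std::holds_alternative<const RegisterClass *>(V.BankOrClass));
}
```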
fredriss pushed a commit that referenced this pull request on Apr 23, 2016
…, take #2.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267242 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Apr 27, 2016
Summary:
With the removal of support for lazy parsing of combined index summary records (e.g. r267344), we no longer need to include the summary record bitcode offset in the VST entries for definitions. Change the combined index format to be similar to the per-module index format in using value ids to cross-reference from the summary record to the VST entry (rather than the summary record bitcode offset to cross-reference in the other direction).

The visible changes are:
1) Add the value id to the combined summary records.
2) Remove the summary offset from the combined VST records, which has the following effects:
   - No longer need the VST_CODE_COMBINED_GVDEFENTRY record, as all combined index VST entries now only contain the value id and corresponding GUID.
   - No longer have duplicate VST entries in the case where there are multiple definitions of a symbol (e.g. weak/linkonce), as they all have the same value id and GUID.

An implication of #2 above is that in order to hook up an alias to the correct aliasee based on the value id of the aliasee recorded in the combined index alias record, we need to scan the entries in the index for that GUID to find the one from the same module (i.e. the case where there are multiple entries for the aliasee). But the reader no longer has to maintain a special map to hook up the alias/aliasee.

Reviewers: joker.eph

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D19481

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267712 91177308-0d34-0410-b5e6-96231b3b80d8
joker-eph pushed a commit that referenced this pull request on Apr 29, 2016
Summary:
With the removal of support for lazy parsing of combined index summary records (e.g. r267344), we no longer need to include the summary record bitcode offset in the VST entries for definitions. Change the combined index format to be similar to the per-module index format in using value ids to cross-reference from the summary record to the VST entry (rather than the summary record bitcode offset to cross-reference in the other direction).

The visible changes are:
1) Add the value id to the combined summary records.
2) Remove the summary offset from the combined VST records, which has the following effects:
   - No longer need the VST_CODE_COMBINED_GVDEFENTRY record, as all combined index VST entries now only contain the value id and corresponding GUID.
   - No longer have duplicate VST entries in the case where there are multiple definitions of a symbol (e.g. weak/linkonce), as they all have the same value id and GUID.

An implication of #2 above is that in order to hook up an alias to the correct aliasee based on the value id of the aliasee recorded in the combined index alias record, we need to scan the entries in the index for that GUID to find the one from the same module (i.e. the case where there are multiple entries for the aliasee). But the reader no longer has to maintain a special map to hook up the alias/aliasee.

Reviewers: joker.eph

Subscribers: joker.eph, llvm-commits

Differential Revision: http://reviews.llvm.org/D19481

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267712 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on May 5, 2016
Summary: This reverts commit d88cc08. #0 0xfed467 in llvm::ARMFrameLowering::determineCalleeSaves(llvm::MachineFunction&, llvm::BitVector&, llvm::RegScavenger*) const /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/Target/ARM/ARMFrameLowering.cpp:1625:52 #1 0x330d4cc in (anonymous namespace)::PEI::runOnMachineFunction(llvm::MachineFunction&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/CodeGen/PrologEpilogInserter.cpp:186:3 #2 0x3193e12 in llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/CodeGen/MachineFunctionPass.cpp:60:13 #3 0x396237d in llvm::FPPassManager::runOnFunction(llvm::Function&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1526:23 #4 0x3962a23 in llvm::FPPassManager::runOnModule(llvm::Module&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1547:16 #5 0x3963d52 in runOnModule /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1603:23 #6 0x3963d52 in llvm::legacy::PassManagerImpl::run(llvm::Module&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/LegacyPassManager.cpp:1706 #7 0x6bb910 in compileModule(char**, llvm::LLVMContext&) /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/tools/llc/llc.cpp:412:5 #8 0x6b3c25 in main /mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm/tools/llc/llc.cpp:218:22 #9 0x7fd4a7d37ec4 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21ec4) #10 0x625c93 in _start (/mnt/b/sanitizer-buildbot2/sanitizer-x86_64-linux-bootstrap/build/llvm_build_msan/bin/llc+0x625c93) Reviewers: Subscribers: git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@268536 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Jun 9, 2016
MSVC calls the copy ctor on StratifiedSets for some reason. So, undelete it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272184 91177308-0d34-0410-b5e6-96231b3b80d8
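A reduced illustration of the change described, with a stand-in type name (this is not the actual StratifiedSets definition):

```cpp
// MSVC ends up calling the copy constructor for this type (per the commit
// above), so the copy constructor can no longer be deleted; "undeleting" it
// means letting the compiler provide the copy again.
struct StratifiedSetsLike {
  StratifiedSetsLike() = default;
  StratifiedSetsLike(StratifiedSetsLike &&) = default;
  // Previously: StratifiedSetsLike(const StratifiedSetsLike &) = delete;
  StratifiedSetsLike(const StratifiedSetsLike &) = default;
};
```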
fredriss pushed a commit that referenced this pull request on Jun 14, 2016
What happened here is that in the new PM there is a bunch of new copying (actually, moving) and so this reads the HasProfileData member in situations where it used to not be read. It used to only be read strictly in the "runOnFunction" method and its callees, where is *is* initialized (even after my patch). So this ends up being benign as far as functional behavior of the compiler (since we set HasProfileData in the "runImpl" method before we ever make decisions based on it). It's awesome that UBSan caught this. It highlights one more thing to watch out for when porting passes. Sanitizer bot log was: -- Testing: 17049 tests, 32 threads -- Testing: 0 .. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80.. FAIL: LLVM :: Transforms/JumpThreading/thread-loads.ll (15184 of 17049) ******************** TEST 'LLVM :: Transforms/JumpThreading/thread-loads.ll' FAILED ******************** Script: -- /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/opt < /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -jump-threading -S | /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/opt < /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -passes=jump-threading -S | /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -- Exit Code: 2 Command Output (stderr): -- /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/Transforms/Scalar/JumpThreading.h:90:57: runtime error: load of value 136, which is not a valid value for type 'bool' #0 0x2c33ba1 in llvm::JumpThreadingPass::JumpThreadingPass(llvm::JumpThreadingPass&&) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/Transforms/Scalar/JumpThreading.h:90:57 #1 0x2bc88e4 in void llvm::PassManager<llvm::Function>::addPass<llvm::JumpThreadingPass>(llvm::JumpThreadingPass) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/PassManager.h:282:40 #2 0x2bb2682 in llvm::PassBuilder::parseFunctionPassName(llvm::PassManager<llvm::Function>&, llvm::StringRef) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassRegistry.def:133:1 #3 0x2bb4914 in llvm::PassBuilder::parseFunctionPassPipeline(llvm::PassManager<llvm::Function>&, llvm::StringRef&, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassBuilder.cpp:489:12 #4 0x2bb6f81 in llvm::PassBuilder::parsePassPipeline(llvm::PassManager<llvm::Module>&, llvm::StringRef, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/lib/Passes/PassBuilder.cpp:674:10 #5 0x986690 in llvm::runPassPipeline(llvm::StringRef, llvm::LLVMContext&, llvm::Module&, llvm::TargetMachine*, llvm::tool_output_file*, llvm::StringRef, llvm::opt_tool::OutputKind, llvm::opt_tool::VerifierKind, bool, bool) /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/tools/opt/NewPMDriver.cpp:85:8 #6 0x9af25e in main /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/tools/opt/opt.cpp:468:12 #7 0x7fd7e27dbf44 in __libc_start_main 
(/lib/x86_64-linux-gnu/libc.so.6+0x21f44) #8 0x960157 in _start (/mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/bin/opt+0x960157) FileCheck error: '-' is empty. FileCheck command line: /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm_build_ubsan/./bin/FileCheck /mnt/b/sanitizer-buildbot3/sanitizer-x86_64-linux-fast/build/llvm/test/Transforms/JumpThreading/thread-loads.ll -- ******************** Testing: 0 .. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80.. 90.. Testing Time: 128.90s ******************** Failing Tests (1): LLVM :: Transforms/JumpThreading/thread-loads.ll Expected Passes : 16725 Expected Failures : 129 Unsupported Tests : 194 Unexpected Failures: 1 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272616 91177308-0d34-0410-b5e6-96231b3b80d8
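The root cause reads like the following reduced sketch; it is an illustrative stand-in, not the actual JumpThreadingPass definition:

```cpp
#include <utility>

// Stand-in for JumpThreadingPass: a bool member that is only assigned inside
// runImpl(), never in the constructor.
struct JumpThreadingLike {
  bool HasProfileData; // indeterminate until runImpl() sets it

  JumpThreadingLike() {} // constructor does not initialize HasProfileData

  JumpThreadingLike(JumpThreadingLike &&Other)
      : HasProfileData(Other.HasProfileData) {} // UBSan: load of an
                                                // uninitialized bool (e.g. 136)
};

int main() {
  JumpThreadingLike Pass;
  // The new pass manager moves passes into the pipeline, which is exactly
  // where the uninitialized read happened.
  JumpThreadingLike Moved(std::move(Pass));
  (void)Moved;
}
```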
fredriss pushed a commit that referenced this pull request on Jun 24, 2016
Some of the bots running GCC 4.7 seem to be having trouble with lambdas that explicitly capture `this`.

Relevant-looking bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53137

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273613 91177308-0d34-0410-b5e6-96231b3b80d8
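A minimal sketch of the two capture spellings involved (illustrative only; not the code from the commit):

```cpp
#include <vector>

struct Graph {
  std::vector<int> Nodes;

  void forEachNode() {
    // Explicit capture of `this`: the spelling that tripped up the GCC 4.7 bots.
    auto explicitCapture = [this](int N) { return N + static_cast<int>(Nodes.size()); };

    // One possible workaround: capture `this` implicitly via [&] or [=].
    auto implicitCapture = [&](int N) { return N + static_cast<int>(Nodes.size()); };

    (void)explicitCapture(1);
    (void)implicitCapture(1);
  }
};
```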
fredriss pushed a commit that referenced this pull request on Aug 24, 2016
manager, including both plumbing and logic to handle function pass updates. There are three fundamentally tied changes here: 1) Plumbing *some* mechanism for updating the CGSCC pass manager as the CG changes while passes are running. 2) Changing the CGSCC pass manager infrastructure to have support for the underlying graph to mutate mid-pass run. 3) Actually updating the CG after function passes run. I can separate them if necessary, but I think its really useful to have them together as the needs of #3 drove #2, and that in turn drove #1. The plumbing technique is to extend the "run" method signature with extra arguments. We provide the call graph that intrinsically is available as it is the basis of the pass manager's IR units, and an output parameter that records the results of updating the call graph during an SCC passes's run. Note that "...UpdateResult" isn't a *great* name here... suggestions very welcome. I tried a pretty frustrating number of different data structures and such for the innards of the update result. Every other one failed for one reason or another. Sometimes I just couldn't keep the layers of complexity right in my head. The thing that really worked was to just directly provide access to the underlying structures used to walk the call graph so that their updates could be informed by the *particular* nature of the change to the graph. The technique for how to make the pass management infrastructure cope with mutating graphs was also something that took a really, really large number of iterations to get to a place where I was happy. Here are some of the considerations that drove the design: - We operate at three levels within the infrastructure: RefSCC, SCC, and Node. In each case, we are working bottom up and so we want to continue to iterate on the "lowest" node as the graph changes. Look at how we iterate over nodes in an SCC running function passes as those function passes mutate the CG. We continue to iterate on the "lowest" SCC, which is the one that continues to contain the function just processed. - The call graph structure re-uses SCCs (and RefSCCs) during mutation events for the *highest* entry in the resulting new subgraph, not the lowest. This means that it is necessary to continually update the current SCC or RefSCC as it shifts. This is really surprising and subtle, and took a long time for me to work out. I actually tried changing the call graph to provide the opposite behavior, and it breaks *EVERYTHING*. The graph update algorithms are really deeply tied to this particualr pattern. - When SCCs or RefSCCs are split apart and refined and we continually re-pin our processing to the bottom one in the subgraph, we need to enqueue the newly formed SCCs and RefSCCs for subsequent processing. Queuing them presents a few challenges: 1) SCCs and RefSCCs use wildly different iteration strategies at a high level. We end up needing to converge them on worklist approaches that can be extended in order to be able to handle the mutations. 2) The order of the enqueuing need to remain bottom-up post-order so that we don't get surprising order of visitation for things like the inliner. 3) We need the worklists to have set semantics so we don't duplicate things endlessly. We don't need a *persistent* set though because we always keep processing the bottom node!!!! This is super, super surprising to me and took a long time to convince myself this is correct, but I'm pretty sure it is... 
Once we sink down to the bottom node, we can't re-split out the same node in any way, and the postorder of the current queue is fixed and unchanging. 4) We need to make sure that the "current" SCC or RefSCC actually gets enqueued here such that we re-visit it because we continue processing a *new*, *bottom* SCC/RefSCC. - We also need the ability to *skip* SCCs and RefSCCs that get merged into a larger component. We even need the ability to skip *nodes* from an SCC that are no longer part of that SCC. This led to the design you see in the patch which uses SetVector-based worklists. The RefSCC worklist is always empty until an update occurs and is just used to handle those RefSCCs created by updates as the others don't even exist yet and are formed on-demand during the bottom-up walk. The SCC worklist is pre-populated from the RefSCC, and we push new SCCs onto it and blacklist existing SCCs on it to get the desired processing. We then *directly* update these when updating the call graph as I was never able to find a satisfactory abstraction around the update strategy. Finally, we need to compute the updates for function passes. This is mostly used as an initial customer of all the update mechanisms to drive their design to at least cover some real set of use cases. There are a bunch of interesting things that came out of doing this: - It is really nice to do this a function at a time because that function is likely hot in the cache. This means we want even the function pass adaptor to support online updates to the call graph! - To update the call graph after arbitrary function pass mutations is quite hard. We have to build a fairly comprehensive set of data structures and then process them. Fortunately, some of this code is related to the code for building the cal graph in the first place. Unfortunately, very little of it makes any sense to share because the nature of what we're doing is so very different. I've factored out the one part that made sense at least. - We need to transfer these updates into the various structures for the CGSCC pass manager. Once those were more sanely worked out, this became relatively easier. But some of those needs necessitated changes to the LazyCallGraph interface to make it significantly easier to extract the changed SCCs from an update operation. - We also need to update the CGSCC analysis manager as the shape of the graph changes. When an SCC is merged away we need to clear analyses associated with it from the analysis manager which we didn't have support for in the analysis manager infrsatructure. New SCCs are easy! But then we have the case that the original SCC has its shape changed but remains in the call graph. There we need to *invalidate* the analyses associated with it. - We also need to invalidate analyses after we *finish* processing an SCC. But the analyses we need to invalidate here are *only those for the newly updated SCC*!!! Because we only continue processing the bottom SCC, if we split SCCs apart the original one gets invalidated once when its shape changes and is not processed farther so its analyses will be correct. It is the bottom SCC which continues being processed and needs to have the "normal" invalidation done based on the preserved analyses set. All of this is mostly background and context for the changes here. Many thanks to all the reviewers who helped here. Especially Sanjoy who caught several interesting bugs in the graph algorithms, David, Sean, and others who all helped with feedback. 
Differential Revision: http://reviews.llvm.org/D21464 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279618 91177308-0d34-0410-b5e6-96231b3b80d8
hpux735 referenced this pull request in swift-arm/swift-llvm on Sep 5, 2016
Update autotools buildsystem for libdispatch-500.1.5 and testsuite
hpux735 referenced this pull request in swift-arm/swift-llvm on Sep 5, 2016
Update autotools buildsystem for libdispatch-500.1.5 and testsuite

Signed-off-by: Daniel A. Steffen <dsteffen@apple.com>
fredriss pushed a commit that referenced this pull request on Sep 6, 2016
This test code previously caused a failure in the module verifier, because SimplifyCFG created this invalid instruction, which tries to take the address of inline asm:

  %.sink = select i1 %1, i64 ()* asm "mov $0, #1", "=r", i64 ()* asm "mov $0, #2", "=r"

This has been fixed recently, presumably by James Molloy's patches that re-wrote and changed parts of SimplifyCFG, so this patch just adds a regression test for it.

Differential Revision: https://reviews.llvm.org/D24231

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280660 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Oct 14, 2016
This patch assigns a cost to the scaling used in addressing. On many ARM cores, a negated register offset takes longer than a non-negated register offset, in a register-offset addressing mode. For instance:

  (1) LDR R0, [R1, R2 LSL #2]
  (2) LDR R0, [R1, -R2 LSL #2]

Above, (1) takes fewer cycles than (2). By assigning an appropriate scaling-factor cost, we enable LLVM to make the right trade-offs in the optimization and code-selection phases.

Differential Revision: http://reviews.llvm.org/D24857

Reviewers: jmolloy, rengolin

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284127 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Oct 19, 2016
…ump tables

The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1, however it is possible to synthesize them out of a sequence of other instructions. It turns out this sequence is so short that it's almost never a lose for performance and is ALWAYS a significant win for code size.

TBB example:
  Before:                   After:
    lsls r0, r0, #2           add  r0, pc
    adr  r1, .LJTI0_0         ldrb r0, [r0, #6]
    ldr  r0, [r0, r1]         lsls r0, r0, #1
    mov  pc, r0               add  pc, r0
=> No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

The only case that can increase dynamic instruction count is the TBH case:
  Before:                   After:
    lsls r0, r4, #2           lsls r4, r4, #1
    adr  r1, .LJTI0_0         add  r4, pc
    ldr  r0, [r0, r1]         ldrh r4, [r4, #6]
    mov  pc, r0               lsls r4, r4, #1
                              add  pc, r4
=> 1 more instruction in prologue. Jump table shrunk by a factor of 2.

So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1 performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284580 91177308-0d34-0410-b5e6-96231b3b80d8
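For context, the kind of source construct that compiles to such a jump table is a dense switch; a minimal sketch, not taken from the commit's test case:

```cpp
// A dense switch like this typically lowers to a jump table, which Thumb-2
// can emit with TBB/TBH and Thumb-1 with the synthesized sequence shown above.
int dispatch(int x) {
  switch (x) {
  case 0: return 10;
  case 1: return 21;
  case 2: return 32;
  case 3: return 43;
  case 4: return 54;
  case 5: return 65;
  default: return -1;
  }
}
```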
fredriss pushed a commit that referenced this pull request on Nov 2, 2016
…ump tables

[Reapplying r284580 and r285917 with fix and testing to ensure emitted jump tables for Thumb-1 have 4-byte alignment]

The TBB and TBH instructions in Thumb-2 allow jump tables to be compressed into sequences of bytes or shorts respectively. These instructions do not exist in Thumb-1, however it is possible to synthesize them out of a sequence of other instructions. It turns out this sequence is so short that it's almost never a lose for performance and is ALWAYS a significant win for code size.

TBB example:
  Before:                   After:
    lsls r0, r0, #2           add  r0, pc
    adr  r1, .LJTI0_0         ldrb r0, [r0, #6]
    ldr  r0, [r0, r1]         lsls r0, r0, #1
    mov  pc, r0               add  pc, r0
=> No change in prologue code size or dynamic instruction count. Jump table shrunk by a factor of 4.

The only case that can increase dynamic instruction count is the TBH case:
  Before:                   After:
    lsls r0, r4, #2           lsls r4, r4, #1
    adr  r1, .LJTI0_0         add  r4, pc
    ldr  r0, [r0, r1]         ldrh r4, [r4, #6]
    mov  pc, r0               lsls r4, r4, #1
                              add  pc, r4
=> 1 more instruction in prologue. Jump table shrunk by a factor of 2.

So there is an argument that this should be disabled when optimizing for performance (and a TBH needs to be generated). I'm not so sure about that in practice, because on small cores with Thumb-1 performance is often tied to code size. But I'm willing to turn it off when optimizing for performance if people want (also note that TBHs are fairly rare in practice!)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285690 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Nov 12, 2016
This optimization merges adjacent zero stores into a wider store.

e.g.,
  strh wzr, [x0]
  strh wzr, [x0, #2]
; becomes
  str wzr, [x0]

e.g.,
  str wzr, [x0]
  str wzr, [x0, #4]
; becomes
  str xzr, [x0]

Previously, this was only enabled for Kryo and Cortex-A57.

Differential Revision: https://reviews.llvm.org/D26396

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286592 91177308-0d34-0410-b5e6-96231b3b80d8
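A source-level shape that produces the adjacent zero stores described above (a sketch, not the commit's test case):

```cpp
#include <cstdint>

struct Pair {
  uint32_t a;
  uint32_t b;
};

// Two adjacent 32-bit zero stores (str wzr, [x0]; str wzr, [x0, #4]) can be
// merged into a single 64-bit zero store (str xzr, [x0]) on AArch64.
void clearPair(Pair *p) {
  p->a = 0;
  p->b = 0;
}
```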
fredriss pushed a commit that referenced this pull request on Dec 7, 2016
…Idx. NFC.

Summary:
This is NFC but prevents assertions when PartialMappingIdx is tablegen-erated.

The assumptions were:
1) FirstGPR is 0
2) FirstGPR is the first of the First* enumerators.

GPR32 is changed to 1 to demonstrate that assumption #1 is fixed. #2 will be covered by a subsequent patch that tablegen-erates information and swaps the order of GPR and FPR as a side effect.

Depends on D27336

Reviewers: ab, t.p.northover, qcolombet

Subscribers: aemerson, rengolin, vkalintiris, dberris, rovka, llvm-commits

Differential Revision: https://reviews.llvm.org/D27337

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288812 91177308-0d34-0410-b5e6-96231b3b80d8
joker-eph pushed a commit that referenced this pull request on Dec 13, 2016
This reverts r289215 (git SHA1 cb7b86a).

It breaks the ubsan build because a DenseMap that keys off of `AssertingVH<T>` will hit UB when it tries to cast the empty and tombstone keys to `T *` (due to insufficient alignment).

This is the relevant stack trace (thanks to Mike Aizatsky):

#0 0x25cf100 in llvm::AssertingVH<llvm::PHINode>::getValPtr() const llvm/include/llvm/IR/ValueHandle.h:212:39
#1 0x25cea20 in llvm::AssertingVH<llvm::PHINode>::operator=(llvm::AssertingVH<llvm::PHINode> const&) llvm/include/llvm/IR/ValueHandle.h:234:19
#2 0x25d0092 in llvm::DenseMapBase<llvm::DenseMap<llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >, llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >::clear() llvm/include/llvm/ADT/DenseMap.h:113:23

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289482 91177308-0d34-0410-b5e6-96231b3b80d8
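For context, the failing pattern looks roughly like the following sketch, which uses a DenseSet keyed on AssertingVH the way the reverted change did (an illustration against LLVM headers of that era, not the actual reverted code):

```cpp
#include "llvm/ADT/DenseSet.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/ValueHandle.h"

using namespace llvm;

void illustrate(PHINode *PN) {
  // DenseSet's empty/tombstone keys are special sentinel pointer values;
  // casting them to PHINode* inside AssertingVH is what UBSan flagged
  // (insufficient alignment for the sentinels).
  DenseSet<AssertingVH<PHINode>> InsertedPHIs;
  InsertedPHIs.insert(PN);
  InsertedPHIs.clear(); // frame #2 in the trace above runs through clear()
}
```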
fredriss pushed a commit that referenced this pull request on Dec 14, 2016
This change re-lands r289215, by reverting r289482. The underlying issue that caused it to be reverted has been fixed by Tim Northover in r289496.

Original commit message for r289215:

  [SCEVExpander] Use llvm data structures; NFC

Original commit message for r289482:

  Revert "[SCEVExpander] Use llvm data structures; NFC"

  This reverts r289215 (git SHA1 cb7b86a). It breaks the ubsan build because a DenseMap that keys off of `AssertingVH<T>` will hit UB when it tries to cast the empty and tombstone keys to `T *` (due to insufficient alignment).

  This is the relevant stack trace (thanks to Mike Aizatsky):

  #0 0x25cf100 in llvm::AssertingVH<llvm::PHINode>::getValPtr() const llvm/include/llvm/IR/ValueHandle.h:212:39
  #1 0x25cea20 in llvm::AssertingVH<llvm::PHINode>::operator=(llvm::AssertingVH<llvm::PHINode> const&) llvm/include/llvm/IR/ValueHandle.h:234:19
  #2 0x25d0092 in llvm::DenseMapBase<llvm::DenseMap<llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >, llvm::AssertingVH<llvm::PHINode>, llvm::detail::DenseSetEmpty, llvm::DenseMapInfo<llvm::AssertingVH<llvm::PHINode> >, llvm::detail::DenseSetPair<llvm::AssertingVH<llvm::PHINode> > >::clear() llvm/include/llvm/ADT/DenseMap.h:113:23

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289602 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Jan 13, 2017
…aries. I have two immediate motivations for adding this: 1) It makes writing expectations in tests *dramatically* easier. A quick example that is a taste of what is possible: std::vector<int> v = ...; EXPECT_THAT(v, UnorderedElementsAre(1, 2, 3)); This checks that v contains '1', '2', and '3' in some order. There are a wealth of other helpful matchers like this. They tend to be highly generic and STL-friendly so they will in almost all cases work out of the box even on custom LLVM data structures. I actually find the matcher syntax substantially easier to read even for simple assertions: EXPECT_THAT(a, Eq(b)); EXPECT_THAT(b, Ne(c)); Both of these make it clear what is being *tested* and what is being *expected*. With `EXPECT_EQ` this is implicit (the LHS is expected, the RHS is tested) and often confusing. With `EXPECT_NE` it is just not clear. Even the failure error messages are superior with the matcher based expectations. 2) When testing any kind of generic code, you are continually defining dummy types with interfaces and then trying to check that the interfaces are manipulated in a particular way. This is actually what mocks are *good* for -- testing *interface interactions*. With generic code, there is often no "fake" or other object that can be used. For a concrete example of where this is currently causing significant pain, look at the pass manager unittests which are riddled with counters incremented when methods are called. All of these could be replaced with mocks. The result would be more effective at testing the code by having tighter constraints. It would be substantially more readable and maintainable when updating the code. And the error messages on failure would have substantially more information as mocks automatically record stack traces and other information *when the API is misused* instead of trying to diagnose it after the fact. I expect that #1 will be the overwhelming majority of the uses of gmock, but I think that is sufficient to justify having it. I would actually like to update the coding standards to encourage the use of matchers rather than any other form of `EXPECT_...` macros as they are IMO a strict superset in terms of functionality and readability. I think that #2 is relatively rarely useful, but there *are* cases where it is useful. Historically, I think misuse of actual mocking as described in #2 has led to resistance towards this framework. I am actually sympathetic to this -- mocking can easily be overused. However I think this is not a significant concern in LLVM. First and foremost, LLVM has very careful and rare exposure of abstract interfaces or dependency injection, which are the most prone to abuse with mocks. So there are few opportunities to abuse them. Second, a large fraction of LLVM's unittests are testing *generic code* where mocks actually make tremendous sense. And gmock is well suited to building interfaces that exercise generic libraries. Finally, I still think we should be willing to have testing utilities in tree even if they should be used rarely. We can use code review to help guide the usage here. For a longer and more complete discussion of this, see the llvm-dev thread here: http://lists.llvm.org/pipermail/llvm-dev/2017-January/108672.html The general consensus seems that this is a reasonable direction to start down, but that doesn't mean we should race ahead and use this everywhere. I have one test that is blocked on this to land and that was specifically used as an example. 
Before widespread adoption, I'm going to work up some (brief) guidelines as some of these facilities should be used sparingly and carefully. Differential Revision: https://reviews.llvm.org/D28156 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291606 91177308-0d34-0410-b5e6-96231b3b80d8
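The vector example from the summary, written out as a complete test (a sketch assuming gtest/gmock are available; the test names are illustrative):

```cpp
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include <vector>

using ::testing::Eq;
using ::testing::Ne;
using ::testing::UnorderedElementsAre;

TEST(MatcherExample, UnorderedContents) {
  std::vector<int> v = {3, 1, 2};
  // Passes for any ordering of 1, 2, 3.
  EXPECT_THAT(v, UnorderedElementsAre(1, 2, 3));
}

TEST(MatcherExample, ReadableComparisons) {
  int a = 1, b = 1, c = 2;
  EXPECT_THAT(a, Eq(b)); // tested value on the left, expectation on the right
  EXPECT_THAT(b, Ne(c));
}
```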
fredriss pushed a commit that referenced this pull request on Jan 21, 2017
This is the second attempt to recommit r292526. The original summary:

Currently, a GEP is considered free only if its indices are all constant. TTI::getGEPCost() can give a more accurate, target-specific analysis. TTI is already used for the cost of many other instructions.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292616 91177308-0d34-0410-b5e6-96231b3b80d8
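For illustration, a GEP with a non-constant index is what the front end emits for ordinary variable indexing; a rough C++ sketch (the function name is illustrative):

```cpp
// Indexing with a runtime value lowers to a GEP whose index is not constant,
// roughly: %arrayidx = getelementptr inbounds i32, i32* %a, i64 %i
// Under the old rule such a GEP was never considered free; with a
// target-specific cost query the target can report, for example, that the
// GEP folds into the load's addressing mode.
int load_elem(const int *a, long i) {
  return a[i];
}
```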
fredriss pushed a commit that referenced this pull request on Jan 21, 2017
This reverts commit r292616 because the test case still has a problem.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292618 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Jan 25, 2017
…d. Attempt #2.

The previous patch (https://reviews.llvm.org/rL289538) got reverted because of a bug. Chandler also requested some changes to the algorithm. http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161212/413479.html

This is an updated patch. The key difference is that collectBitProviders (renamed to calculateByteProvider) now collects the origin of one byte, not the whole value. It simplifies the implementation and allows stopping the traversal earlier if we know that the result won't be used.

From the original commit:

Match a pattern where a wide type scalar value is loaded by several narrow loads and combined by shifts and ors. Fold it into a single load or a load and a bswap if the target supports it.

Assuming a little endian target:
  i8 *a = ...
  i32 val = a[0] | (a[1] << 8) | (a[2] << 16) | (a[3] << 24)
=>
  i32 val = *((i32)a)

  i8 *a = ...
  i32 val = (a[0] << 24) | (a[1] << 16) | (a[2] << 8) | a[3]
=>
  i32 val = BSWAP(*((i32)a))

This optimization was discussed on llvm-dev some time ago in the "Load combine pass" thread. We came to the conclusion that we want to do this transformation late in the pipeline because in the presence of atomic loads, load widening is an irreversible transformation and it might hinder other optimizations.

Eventually we'd like to support folding patterns like this where the offset has a variable and a constant part:
  i32 val = a[i] | (a[i + 1] << 8) | (a[i + 2] << 16) | (a[i + 3] << 24)

Matching the pattern above is easier at the SelectionDAG level since address reassociation has already happened and the fact that the loads are adjacent is clear. Understanding that these loads are adjacent at the IR level would have involved looking through geps/zexts/adds while looking at the addresses.

The general scheme is to match OR expressions by recursively calculating the origin of the individual bytes which constitute the resulting OR value. If all the OR bytes come from memory, verify that they are adjacent and match with a little or big endian encoding of a wider value. If so, and the load of the wider type (and bswap if needed) is allowed by the target, generate a load and a bswap if needed.

Reviewed By: RKSimon, filcab, chandlerc

Differential Revision: https://reviews.llvm.org/D27861

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293036 91177308-0d34-0410-b5e6-96231b3b80d8
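As a concrete, compilable version of the little-endian pattern above (a sketch; the function name is illustrative):

```cpp
#include <cstdint>

// Four adjacent byte loads combined with shifts and ors; after the DAG
// combine described above this becomes a single 32-bit load on a
// little-endian target (or a load plus byte swap on a big-endian one).
uint32_t load_le32(const uint8_t *a) {
  return static_cast<uint32_t>(a[0]) |
         (static_cast<uint32_t>(a[1]) << 8) |
         (static_cast<uint32_t>(a[2]) << 16) |
         (static_cast<uint32_t>(a[3]) << 24);
}
```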
fredriss pushed a commit that referenced this pull request on Jul 19, 2018
changes that are intertwined here:
1) Extracting the tracing of predicate state through the CFG to its own function.
2) Creating a struct to manage the predicate state used throughout the pass.

Doing #1 necessitates and motivates the particular approach for #2 as now the predicate management is spread across different functions focused on different aspects of it. A number of simplifications then fell out as a direct consequence.

I went with an Optional to make it more natural to construct the MachineSSAUpdater object.

This is probably the single largest outstanding refactoring step I have. Things get a bit more surgical from here. My current goal, beyond generally making this maintainable long-term, is to implement several improvements to how we do interprocedural tracking of predicate state. But I don't want to do that until the predicate state management and tracing is in reasonably clear state.

Differential Revision: https://reviews.llvm.org/D49427

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@337446 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss pushed a commit that referenced this pull request on Jul 21, 2018
A DAG-NOT-DAG is a CHECK-DAG group, X, followed by a CHECK-NOT group, N, followed by a CHECK-DAG group, Y. Let y be the initial directive of Y. This patch makes the following changes to the behavior: 1. Directives in N can no longer match within part of Y's match range just because y happens not to be the earliest match from Y. Specifically, this patch withdraws N's search range end from y's match range start to Y's match range start. 2. y can no longer match within X's match range, where a y match produced a reordering complaint, which is thus no longer possible. Specifically, this patch withdraws y's search range start from X's permitted range start to X's match range end, which was already the search range start for other members of Y. Both of these changes can only increase the number of test passes: #1 constrains the ability of CHECK-NOTs to match, and #2 expands the ability of CHECK-DAGs to match without complaints. These changes are based on discussions at: <http://lists.llvm.org/pipermail/llvm-dev/2018-May/123550.html> <https://reviews.llvm.org/D47106> which conclude that: 1. These changes simplify the FileCheck conceptual model. First, it makes search ranges for DAG-NOT-DAG more consistent with other cases. Second, it was confusing that y was treated differently from the rest of Y. 2. These changes add theoretical use cases for DAG-NOT-DAG that had no obvious means to be expressed otherwise. We can justify the first half of this assertion with the observation that these changes can only increase the number of test passes. 3. Reordering detection for DAG-NOT-DAG had no obvious real benefit. We don't have evidence from real uses cases to help us debate conclusions #2 and #3, but #1 at least seems intuitive. Reviewed By: probinson Differential Revision: https://reviews.llvm.org/D48986 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@337605 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Jul 25, 2018
against v1.2 BCBS attacks directly.

Attacks using Spectre v1.2 (a subset of BCBS) are described in the paper here: https://people.csail.mit.edu/vlk/spectre11.pdf

The core idea is to speculatively store over the address in a vtable, jump table, or other target of indirect control flow that will be subsequently loaded. Speculative execution after such a store can forward the stored value to subsequent loads, and if called or jumped to, the speculative execution will be steered to this potentially attacker-controlled address.

Up until now, this could be mitigated by enabling retpolines. However, that is a relatively expensive technique to mitigate this particular flavor, especially because in most cases SLH will have already mitigated this. To fully mitigate this with SLH, we need to do two core things:
1) Unfold loads from calls and jumps, allowing the loads to be post-load hardened.
2) Force hardening of incoming registers even if we didn't end up needing to harden the load itself.

The reason we need to do these two things is that hardening calls and jumps for this particular variant is importantly different from hardening against a leak of secret data. Because the "bad" data here isn't a secret, but is in fact speculatively stored by the attacker, it may be loaded from any address, regardless of whether it is read-only memory, mapped memory, or a "hardened" address. The only 100% effective way to harden these instructions is to harden their operand itself. But to the extent possible, we'd like to take advantage of all the other hardening going on; we just need a fallback in case none of that happened to cover the particular input to the control transfer instruction.

For users of SLH, currently they are paying 2% to 6% performance overhead for retpolines, but this mechanism is expected to be substantially cheaper. However, it is worth reminding folks that this does not mitigate all of the things retpolines do -- most notably, variant #2 is not in *any way* mitigated by this technique. So users of SLH may still want to enable retpolines, and the implementation is carefully designed to gracefully leverage retpolines to avoid the need for further hardening here when they are enabled.

Differential Revision: https://reviews.llvm.org/D49663

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@337878 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Aug 9, 2018
Summary: Currently, in line with GCC, when specifying reserved registers like sp or pc on an inline asm() clobber list, we don't always preserve the original value across the statement. And in general, overwriting reserved registers can have surprising results. For example: ``` extern int bar(int[]); int foo(int i) { int a[i]; // VLA asm volatile( "mov r7, #1" : : : "r7" ); return 1 + bar(a); } ``` Compiled for thumb, this gives: ``` $ clang --target=arm-arm-none-eabi -march=armv7a -c test.c -o - -S -O1 -mthumb ... foo: .fnstart @ %bb.0: @ %entry .save {r4, r5, r6, r7, lr} push {r4, r5, r6, r7, lr} .setfp r7, sp, #12 add r7, sp, #12 .pad #4 sub sp, #4 movs r1, #7 add.w r0, r1, r0, lsl #2 bic r0, r0, #7 sub.w r0, sp, r0 mov sp, r0 @app mov.w r7, #1 @NO_APP bl bar adds r0, #1 sub.w r4, r7, #12 mov sp, r4 pop {r4, r5, r6, r7, pc} ... ``` r7 is used as the frame pointer for thumb targets, and this function needs to restore the SP from the FP because of the variable-length stack allocation a. r7 is clobbered by the inline assembly (and r7 is included in the clobber list), but LLVM does not preserve the value of the frame pointer across the assembly block. This type of behavior is similar to GCC's and has been discussed on the bugtracker: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11807 . No consensus seemed to have been reached on the way forward. Clang behavior has briefly been discussed on the CFE mailing (starting here: http://lists.llvm.org/pipermail/cfe-dev/2018-July/058392.html). I've opted for following Eli Friedman's advice to print warnings when there are reserved registers on the clobber list so as not to diverge from GCC behavior for now. The patch uses MachineRegisterInfo's target-specific knowledge of reserved registers, just before we convert the inline asm string in the AsmPrinter. If we find a reserved register, we print a warning: ``` repro.c:6:7: warning: inline asm clobber list contains reserved registers: R7 [-Winline-asm] "mov r7, #1" ^ ``` Reviewers: eli.friedman, olista01, javed.absar, efriedma Reviewed By: efriedma Subscribers: efriedma, eraman, kristof.beyls, llvm-commits Differential Revision: https://reviews.llvm.org/D49727 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@339257 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Aug 25, 2018
Summary:
Sometimes, when reading an output *.ll file, it is not easy to understand why some callsites are not inlined. We can read the output of inline remarks (option --pass-remarks-missed=inline) and try correlating its messages with the callsites. An easier way, proposed by this patch, is to add to every callsite processed by the Inliner an attribute with the latest message that describes the cause of not inlining this callsite. The attribute is called //inline-remark//. By default this feature is off. It can be switched on by the option //-inline-remark-attribute//.

For example, in the provided test the resulting method //@test1// has two callsites to //@bar//, and the inline remarks report different missed-inlining reasons:

remark: <unknown>:0:0: bar not inlined into test1 because too costly to inline (cost=-5, threshold=-6)
remark: <unknown>:0:0: bar not inlined into test1 because it should never be inlined (cost=never): recursive

It is not clear which remark corresponds to which callsite. With the inline remark attribute enabled we get the reasons attached to their callsites:

define void @test1() {
  call void @bar(i1 true) #0
  call void @bar(i1 false) #2
  ret void
}
attributes #0 = { "inline-remark"="(cost=-5, threshold=-6)" }
..
attributes #2 = { "inline-remark"="(cost=never): recursive" }

Patch by: yrouban (Yevgeny Rouban)

Reviewers: xbolva00, tejohnson, apilipenko

Reviewed By: xbolva00, tejohnson

Subscribers: eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D50435

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@340618 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Aug 28, 2018
Summary:
Sometimes, when reading an output *.ll file, it is not easy to understand why some callsites are not inlined. We can read the output of inline remarks (option --pass-remarks-missed=inline) and try correlating its messages with the callsites. An easier way, proposed by this patch, is to add to every callsite processed by the Inliner an attribute with the latest message that describes the cause of not inlining this callsite. The attribute is called //inline-remark//. By default this feature is off. It can be switched on by the option //-inline-remark-attribute//.

For example, in the provided test the resulting method //@test1// has two callsites to //@bar//, and the inline remarks report different missed-inlining reasons:

remark: <unknown>:0:0: bar not inlined into test1 because too costly to inline (cost=-5, threshold=-6)
remark: <unknown>:0:0: bar not inlined into test1 because it should never be inlined (cost=never): recursive

It is not clear which remark corresponds to which callsite. With the inline remark attribute enabled we get the reasons attached to their callsites:

define void @test1() {
  call void @bar(i1 true) #0
  call void @bar(i1 false) #2
  ret void
}
attributes #0 = { "inline-remark"="(cost=-5, threshold=-6)" }
..
attributes #2 = { "inline-remark"="(cost=never): recursive" }

Patch by: yrouban (Yevgeny Rouban)

Reviewers: xbolva00, tejohnson, apilipenko

Reviewed By: xbolva00, tejohnson

Subscribers: eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D50435

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@340834 91177308-0d34-0410-b5e6-96231b3b80d8
aemerson
pushed a commit
that referenced
this pull request
Aug 29, 2018
A DAG-NOT-DAG is a CHECK-DAG group, X, followed by a CHECK-NOT group, N, followed by a CHECK-DAG group, Y. Let y be the initial directive of Y. This patch makes the following changes to the behavior: 1. Directives in N can no longer match within part of Y's match range just because y happens not to be the earliest match from Y. Specifically, this patch withdraws N's search range end from y's match range start to Y's match range start. 2. y can no longer match within X's match range, where a y match produced a reordering complaint, which is thus no longer possible. Specifically, this patch withdraws y's search range start from X's permitted range start to X's match range end, which was already the search range start for other members of Y. Both of these changes can only increase the number of test passes: #1 constrains the ability of CHECK-NOTs to match, and #2 expands the ability of CHECK-DAGs to match without complaints. These changes are based on discussions at: <http://lists.llvm.org/pipermail/llvm-dev/2018-May/123550.html> <https://reviews.llvm.org/D47106> which conclude that: 1. These changes simplify the FileCheck conceptual model. First, it makes search ranges for DAG-NOT-DAG more consistent with other cases. Second, it was confusing that y was treated differently from the rest of Y. 2. These changes add theoretical use cases for DAG-NOT-DAG that had no obvious means to be expressed otherwise. We can justify the first half of this assertion with the observation that these changes can only increase the number of test passes. 3. Reordering detection for DAG-NOT-DAG had no obvious real benefit. We don't have evidence from real uses cases to help us debate conclusions #2 and #3, but #1 at least seems intuitive. Reviewed By: probinson Differential Revision: https://reviews.llvm.org/D48986 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@337605 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Aug 31, 2018
Summary: This is a continuation of https://reviews.llvm.org/D49727 Below the original text, current changes in the comments: Currently, in line with GCC, when specifying reserved registers like sp or pc on an inline asm() clobber list, we don't always preserve the original value across the statement. And in general, overwriting reserved registers can have surprising results. For example: extern int bar(int[]); int foo(int i) { int a[i]; // VLA asm volatile( "mov r7, #1" : : : "r7" ); return 1 + bar(a); } Compiled for thumb, this gives: $ clang --target=arm-arm-none-eabi -march=armv7a -c test.c -o - -S -O1 -mthumb ... foo: .fnstart @ %bb.0: @ %entry .save {r4, r5, r6, r7, lr} push {r4, r5, r6, r7, lr} .setfp r7, sp, #12 add r7, sp, #12 .pad #4 sub sp, #4 movs r1, #7 add.w r0, r1, r0, lsl #2 bic r0, r0, #7 sub.w r0, sp, r0 mov sp, r0 @app mov.w r7, #1 @NO_APP bl bar adds r0, #1 sub.w r4, r7, #12 mov sp, r4 pop {r4, r5, r6, r7, pc} ... r7 is used as the frame pointer for thumb targets, and this function needs to restore the SP from the FP because of the variable-length stack allocation a. r7 is clobbered by the inline assembly (and r7 is included in the clobber list), but LLVM does not preserve the value of the frame pointer across the assembly block. This type of behavior is similar to GCC's and has been discussed on the bugtracker: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11807 . No consensus seemed to have been reached on the way forward. Clang behavior has briefly been discussed on the CFE mailing (starting here: http://lists.llvm.org/pipermail/cfe-dev/2018-July/058392.html). I've opted for following Eli Friedman's advice to print warnings when there are reserved registers on the clobber list so as not to diverge from GCC behavior for now. The patch uses MachineRegisterInfo's target-specific knowledge of reserved registers, just before we convert the inline asm string in the AsmPrinter. If we find a reserved register, we print a warning: repro.c:6:7: warning: inline asm clobber list contains reserved registers: R7 [-Winline-asm] "mov r7, #1" ^ Reviewers: efriedma, olista01, javed.absar Reviewed By: efriedma Subscribers: eraman, kristof.beyls, llvm-commits Differential Revision: https://reviews.llvm.org/D51165 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@341062 91177308-0d34-0410-b5e6-96231b3b80d8
hyp
pushed a commit
to hyp/swift-llvm
that referenced
this pull request
Sep 12, 2018
A DAG-NOT-DAG is a CHECK-DAG group, X, followed by a CHECK-NOT group, N, followed by a CHECK-DAG group, Y. Let y be the initial directive of Y. This patch makes the following changes to the behavior: 1. Directives in N can no longer match within part of Y's match range just because y happens not to be the earliest match from Y. Specifically, this patch withdraws N's search range end from y's match range start to Y's match range start. 2. y can no longer match within X's match range, where a y match produced a reordering complaint, which is thus no longer possible. Specifically, this patch withdraws y's search range start from X's permitted range start to X's match range end, which was already the search range start for other members of Y. Both of these changes can only increase the number of test passes: apple#1 constrains the ability of CHECK-NOTs to match, and apple#2 expands the ability of CHECK-DAGs to match without complaints. These changes are based on discussions at: <http://lists.llvm.org/pipermail/llvm-dev/2018-May/123550.html> <https://reviews.llvm.org/D47106> which conclude that: 1. These changes simplify the FileCheck conceptual model. First, it makes search ranges for DAG-NOT-DAG more consistent with other cases. Second, it was confusing that y was treated differently from the rest of Y. 2. These changes add theoretical use cases for DAG-NOT-DAG that had no obvious means to be expressed otherwise. We can justify the first half of this assertion with the observation that these changes can only increase the number of test passes. 3. Reordering detection for DAG-NOT-DAG had no obvious real benefit. We don't have evidence from real uses cases to help us debate conclusions apple#2 and apple#3, but apple#1 at least seems intuitive. Reviewed By: probinson Differential Revision: https://reviews.llvm.org/D48986 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@337605 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Oct 24, 2018
…>> (32 - y) pattern" *Seems* to be breaking sanitizer-x86_64-linux-fast buildbot, the ELF/relocatable-versioned.s test: ==17758==MemorySanitizer CHECK failed: /b/sanitizer-x86_64-linux-fast/build/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_allocator.cc:191 "((kBlockMagic)) == ((((u64*)addr)[0]))" (0x6a6cb03abcebc041, 0x0) #0 0x59716b in MsanCheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) /b/sanitizer-x86_64-linux-fast/build/llvm/projects/compiler-rt/lib/msan/msan.cc:393 #1 0x586635 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) /b/sanitizer-x86_64-linux-fast/build/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_termination.cc:79 #2 0x57d5ff in __sanitizer::InternalFree(void*, __sanitizer::SizeClassAllocatorLocalCache<__sanitizer::SizeClassAllocator32<__sanitizer::AP32> >*) /b/sanitizer-x86_64-linux-fast/build/llvm/projects/compiler-rt/lib/sanitizer_common/sanitizer_allocator.cc:191 #3 0x7fc21b24193f (/lib/x86_64-linux-gnu/libc.so.6+0x3593f) #4 0x7fc21b241999 in exit (/lib/x86_64-linux-gnu/libc.so.6+0x35999) #5 0x7fc21b22c2e7 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e7) #6 0x57c039 in _start (/b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/lld+0x57c039) This reverts commit r345014. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@345017 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Nov 1, 2018
Summary: As a bonus, this arguably improves the code by making it simpler. gcc 8 on Ubuntu 18.10 reports the following: ==39667==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7fffffff8ae0 at pc 0x555555dbfc68 bp 0x7fffffff8760 sp 0x7fffffff8750 WRITE of size 8 at 0x7fffffff8ae0 thread T0 #0 0x555555dbfc67 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Alloc_hider::_Alloc_hider(char*, std::allocator<char>&&) /usr/include/c++/8/bits/basic_string.h:149 #1 0x555555dbfc67 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&) /usr/include/c++/8/bits/basic_string.h:542 #2 0x555555dbfc67 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > std::operator+<char, std::char_traits<char>, std::allocator<char> >(char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&) /usr/include/c++/8/bits/basic_string.h:6009 #3 0x555555dbfc67 in searchableFieldType /home/nha/amd/build/san/llvm-src/utils/TableGen/SearchableTableEmitter.cpp:168 (...) Address 0x7fffffff8ae0 is located in stack of thread T0 at offset 864 in frame #0 0x555555dbef3f in searchableFieldType /home/nha/amd/build/san/llvm-src/utils/TableGen/SearchableTableEmitter.cpp:148 Reviewers: fhahn, simon_tatham, kparzysz Subscribers: llvm-commits Differential Revision: https://reviews.llvm.org/D53931 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@345749 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Nov 13, 2018
Flags variable was not initialized and later used (both isMBBSafeToOutlineFrom implementations assume it's initialized), which breaks test/CodeGen/AArch64/machine-outliner.mir. under memory sanitizer: MemorySanitizer: use-of-uninitialized-value #0 in llvm::AArch64InstrInfo::getOutliningType(llvm::MachineInstrBundleIterator<llvm::MachineInstr, false>&, unsigned int) const llvm/lib/Target/AArch64/AArch64InstrInfo.cpp:5494:9 #1 in (anonymous namespace)::InstructionMapper::convertToUnsignedVec(llvm::MachineBasicBlock&, llvm::TargetInstrInfo const&) llvm/lib/CodeGen/MachineOutliner.cpp:772:19 #2 in (anonymous namespace)::MachineOutliner::populateMapper((anonymous namespace)::InstructionMapper&, llvm::Module&, llvm::MachineModuleInfo&) llvm/lib/CodeGen/MachineOutliner.cpp:1543:14 #3 in (anonymous namespace)::MachineOutliner::runOnModule(llvm::Module&) llvm/lib/CodeGen/MachineOutliner.cpp:1645:3 #4 in (anonymous namespace)::MPPassManager::runOnModule(llvm::Module&) llvm/lib/IR/LegacyPassManager.cpp:1744:27 #5 in llvm::legacy::PassManagerImpl::run(llvm::Module&) llvm/lib/IR/LegacyPassManager.cpp:1857:44 #6 in compileModule(char**, llvm::LLVMContext&) llvm/tools/llc/llc.cpp:597:8 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@346761 91177308-0d34-0410-b5e6-96231b3b80d8
aemerson
pushed a commit
that referenced
this pull request
Jan 19, 2019
Summary: Scanning blocks in sub-loops for uses is unnecessary, as they were already handled while dealing with the containing sub-loop. This speeds up LCSSA for highly nested loops. For the test case in PR37202, it halves the time spent in LCSSA. In cases were we won't be able to skip any blocks, the additional lookup should be negligible. Time-passes without this patch for test case from PR37202: Total Execution Time: 48.5505 seconds (48.5511 wall clock) ---User Time--- --System Time-- --User+System-- ---Wall Time--- --- Name --- 10.0822 ( 21.0%) 0.1406 ( 27.0%) 10.2228 ( 21.1%) 10.2228 ( 21.1%) Loop-Closed SSA Form Pass 10.0417 ( 20.9%) 0.1467 ( 28.2%) 10.1884 ( 21.0%) 10.1890 ( 21.0%) Loop-Closed SSA Form Pass #2 4.2703 ( 8.9%) 0.0040 ( 0.8%) 4.2742 ( 8.8%) 4.2742 ( 8.8%) Unswitch loops 2.7376 ( 5.7%) 0.0229 ( 4.4%) 2.7605 ( 5.7%) 2.7611 ( 5.7%) Loop-Closed SSA Form Pass #5 2.7332 ( 5.7%) 0.0214 ( 4.1%) 2.7546 ( 5.7%) 2.7546 ( 5.7%) Loop-Closed SSA Form Pass #3 2.7088 ( 5.6%) 0.0230 ( 4.4%) 2.7319 ( 5.6%) 2.7324 ( 5.6%) Loop-Closed SSA Form Pass #4 2.6855 ( 5.6%) 0.0236 ( 4.5%) 2.7091 ( 5.6%) 2.7090 ( 5.6%) Loop-Closed SSA Form Pass #6 2.1648 ( 4.5%) 0.0018 ( 0.4%) 2.1666 ( 4.5%) 2.1664 ( 4.5%) Unroll loops 1.8371 ( 3.8%) 0.0009 ( 0.2%) 1.8379 ( 3.8%) 1.8380 ( 3.8%) Value Propagation 1.8149 ( 3.8%) 0.0021 ( 0.4%) 1.8170 ( 3.7%) 1.8169 ( 3.7%) Loop Invariant Code Motion 1.6755 ( 3.5%) 0.0226 ( 4.3%) 1.6981 ( 3.5%) 1.6980 ( 3.5%) Loop-Closed SSA Form Pass #7 Time-passes with this patch Total Execution Time: 29.9285 seconds (29.9276 wall clock) ---User Time--- --System Time-- --User+System-- ---Wall Time--- --- Name --- 5.2786 ( 17.7%) 0.0021 ( 1.2%) 5.2806 ( 17.6%) 5.2808 ( 17.6%) Unswitch loops 4.3739 ( 14.7%) 0.0303 ( 18.1%) 4.4042 ( 14.7%) 4.4042 ( 14.7%) Loop-Closed SSA Form Pass 4.2658 ( 14.3%) 0.0192 ( 11.5%) 4.2850 ( 14.3%) 4.2851 ( 14.3%) Loop-Closed SSA Form Pass #2 2.2307 ( 7.5%) 0.0013 ( 0.8%) 2.2320 ( 7.5%) 2.2318 ( 7.5%) Loop Invariant Code Motion 2.0888 ( 7.0%) 0.0012 ( 0.7%) 2.0900 ( 7.0%) 2.0897 ( 7.0%) Unroll loops 1.6761 ( 5.6%) 0.0013 ( 0.8%) 1.6774 ( 5.6%) 1.6774 ( 5.6%) Value Propagation 1.3686 ( 4.6%) 0.0029 ( 1.8%) 1.3716 ( 4.6%) 1.3714 ( 4.6%) Induction Variable Simplification 1.1457 ( 3.8%) 0.0010 ( 0.6%) 1.1468 ( 3.8%) 1.1468 ( 3.8%) Loop-Closed SSA Form Pass #4 1.1384 ( 3.8%) 0.0005 ( 0.3%) 1.1389 ( 3.8%) 1.1389 ( 3.8%) Loop-Closed SSA Form Pass #6 1.1360 ( 3.8%) 0.0027 ( 1.6%) 1.1387 ( 3.8%) 1.1387 ( 3.8%) Loop-Closed SSA Form Pass #5 1.1331 ( 3.8%) 0.0010 ( 0.6%) 1.1341 ( 3.8%) 1.1340 ( 3.8%) Loop-Closed SSA Form Pass #3 Reviewers: davide, efriedma, mzolotukhin Reviewed By: davide, efriedma Subscribers: hiraditya, dmgreen, llvm-commits Differential Revision: https://reviews.llvm.org/D56848 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@351567 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Jan 24, 2019
[LV][VPlan] Change to implement VPlan based predication for VPlan-native path Context: Patch Series #2 for outer loop vectorization support in LV using VPlan. (RFC: http://lists.llvm.org/pipermail/llvm-dev/2017-December/119523.html). Patch series #2 checks that inner loops are still trivially lock-step among all vector elements. Non-loop branches are blindly assumed as divergent. Changes here implement VPlan based predication algorithm to compute predicates for blocks that need predication. Predicates are computed for the VPLoop region in reverse post order. A block's predicate is computed as OR of the masks of all incoming edges. The mask for an incoming edge is computed as AND of predecessor block's predicate and either predecessor's Condition bit or NOT(Condition bit) depending on whether the edge from predecessor block to the current block is true or false edge. Reviewers: fhahn, rengolin, hsaito, dcaballe Reviewed By: fhahn Patch by Satish Guggilla, thanks! Differential Revision: https://reviews.llvm.org/D53349 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@351990 91177308-0d34-0410-b5e6-96231b3b80d8
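As an illustration, the predicate computation described above can be sketched as a standalone C++ model; the structures and names below are invented for this sketch and are not the actual VPlan predicator code.

```
// Standalone model of the block-predicate computation: visit blocks in
// reverse post order; an edge mask is the predecessor's predicate ANDed with
// its condition bit (or its negation), and a block's predicate is the OR of
// its incoming edge masks.
#include <cstdio>
#include <string>
#include <vector>

struct Edge { int Pred; int Kind; };  // Kind: 0 = unconditional, 1 = true edge, 2 = false edge
struct Block { std::string Name; std::vector<Edge> Incoming; };

int main() {
  // A small if-then-else diamond, already listed in reverse post order:
  // B0 -> B1 (true edge), B0 -> B2 (false edge), B1 -> B3, B2 -> B3.
  std::vector<Block> RPO = {{"B0", {}},
                            {"B1", {{0, 1}}},
                            {"B2", {{0, 2}}},
                            {"B3", {{1, 0}, {2, 0}}}};

  std::vector<std::string> Pred(RPO.size());
  for (size_t I = 0; I < RPO.size(); ++I) {
    if (RPO[I].Incoming.empty()) { Pred[I] = "true"; continue; }  // region entry
    std::string P;
    for (const Edge &E : RPO[I].Incoming) {
      // Edge mask: predecessor predicate AND (cond or NOT(cond)); an
      // unconditional edge simply propagates the predecessor's predicate.
      std::string Mask = E.Kind == 0
          ? Pred[E.Pred]
          : "(" + Pred[E.Pred] + " & " + (E.Kind == 1 ? "" : "!") +
                "cond(" + RPO[E.Pred].Name + "))";
      P = P.empty() ? Mask : P + " | " + Mask;
    }
    Pred[I] = P;
  }
  for (size_t I = 0; I < RPO.size(); ++I)
    printf("%s: %s\n", RPO[I].Name.c_str(), Pred[I].c_str());
  return 0;
}
```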
fredriss
pushed a commit
that referenced
this pull request
Feb 1, 2019
…implify (#2) This is the third of a series of patches simplifying llvm-symbolizer tests. See r352752 and r352753 for the previous two. This patch splits out a number of distinct test cases from llvm-symbolizer.test into separate tests, and simplifies them in various ways including: 1) using --obj/positional arguments for the input file and addresses instead of stdin, 2) using runtime-generated inputs rather than a pre-canned binary, and 3) testing more specifically (i.e. checking only what is interesting to the behaviour changed in the original commit for that test case). This patch also removes the test case for using --obj. The tools/llvm-symbolizer/basic.s test already tests this case. Finally, this patch adds a simple test case to the demangle switch test case to show that demangling happens by default. See https://bugs.llvm.org/show_bug.cgi?id=40070#c1 for the motivation. Reviewed by: dblaikie Differential Revision: https://reviews.llvm.org/D57446 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@352754 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Feb 3, 2019
If there is only a single available value, all uses must be dominated by the single value and there is no need to search for a reaching definition. This drastically speeds up LCSSA in some cases. For the test case from PR37202, it speeds up LCSSA construction by 4 times. Time-passes without this patch for test case from PR37202: Total Execution Time: 29.9285 seconds (29.9276 wall clock) ---User Time--- --System Time-- --User+System-- ---Wall Time--- --- Name --- 5.2786 ( 17.7%) 0.0021 ( 1.2%) 5.2806 ( 17.6%) 5.2808 ( 17.6%) Unswitch loops 4.3739 ( 14.7%) 0.0303 ( 18.1%) 4.4042 ( 14.7%) 4.4042 ( 14.7%) Loop-Closed SSA Form Pass 4.2658 ( 14.3%) 0.0192 ( 11.5%) 4.2850 ( 14.3%) 4.2851 ( 14.3%) Loop-Closed SSA Form Pass #2 2.2307 ( 7.5%) 0.0013 ( 0.8%) 2.2320 ( 7.5%) 2.2318 ( 7.5%) Loop Invariant Code Motion 2.0888 ( 7.0%) 0.0012 ( 0.7%) 2.0900 ( 7.0%) 2.0897 ( 7.0%) Unroll loops 1.6761 ( 5.6%) 0.0013 ( 0.8%) 1.6774 ( 5.6%) 1.6774 ( 5.6%) Value Propagation 1.3686 ( 4.6%) 0.0029 ( 1.8%) 1.3716 ( 4.6%) 1.3714 ( 4.6%) Induction Variable Simplification 1.1457 ( 3.8%) 0.0010 ( 0.6%) 1.1468 ( 3.8%) 1.1468 ( 3.8%) Loop-Closed SSA Form Pass #4 1.1384 ( 3.8%) 0.0005 ( 0.3%) 1.1389 ( 3.8%) 1.1389 ( 3.8%) Loop-Closed SSA Form Pass #6 1.1360 ( 3.8%) 0.0027 ( 1.6%) 1.1387 ( 3.8%) 1.1387 ( 3.8%) Loop-Closed SSA Form Pass #5 1.1331 ( 3.8%) 0.0010 ( 0.6%) 1.1341 ( 3.8%) 1.1340 ( 3.8%) Loop-Closed SSA Form Pass #3 Time passes with this patch Total Execution Time: 19.2802 seconds (19.2813 wall clock) ---User Time--- --System Time-- --User+System-- ---Wall Time--- --- Name --- 4.4234 ( 23.2%) 0.0038 ( 2.0%) 4.4272 ( 23.0%) 4.4273 ( 23.0%) Unswitch loops 2.3828 ( 12.5%) 0.0020 ( 1.1%) 2.3848 ( 12.4%) 2.3847 ( 12.4%) Unroll loops 1.8714 ( 9.8%) 0.0020 ( 1.1%) 1.8734 ( 9.7%) 1.8735 ( 9.7%) Loop Invariant Code Motion 1.7973 ( 9.4%) 0.0022 ( 1.2%) 1.7995 ( 9.3%) 1.8003 ( 9.3%) Value Propagation 1.4010 ( 7.3%) 0.0033 ( 1.8%) 1.4043 ( 7.3%) 1.4044 ( 7.3%) Induction Variable Simplification 0.9978 ( 5.2%) 0.0244 ( 13.1%) 1.0222 ( 5.3%) 1.0224 ( 5.3%) Loop-Closed SSA Form Pass #2 0.9611 ( 5.0%) 0.0257 ( 13.8%) 0.9868 ( 5.1%) 0.9868 ( 5.1%) Loop-Closed SSA Form Pass 0.5856 ( 3.1%) 0.0015 ( 0.8%) 0.5871 ( 3.0%) 0.5869 ( 3.0%) Unroll loops #2 0.4132 ( 2.2%) 0.0012 ( 0.7%) 0.4145 ( 2.1%) 0.4143 ( 2.1%) Loop Invariant Code Motion #3 Reviewers: efriedma, davide, mzolotukhin Reviewed By: efriedma Differential Revision: https://reviews.llvm.org/D57033 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@352960 91177308-0d34-0410-b5e6-96231b3b80d8
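A rough standalone sketch of the shortcut follows; the names and structures are invented for illustration, and the real change lives in the SSA-updater-based LCSSA rewriting.

```
// Sketch of the fast path: when exactly one value is available, every use is
// dominated by it, so no reaching-definition search or PHI placement is needed.
#include <cstdio>
#include <string>
#include <vector>

struct AvailableValue { std::string Block; std::string Value; };

std::string getValueToUse(const std::string &UseBlock,
                          const std::vector<AvailableValue> &Avail) {
  if (Avail.size() == 1)
    return Avail[0].Value;  // the shortcut this change introduces
  // Otherwise fall back to the expensive search that walks predecessors and
  // may insert PHI nodes (elided in this sketch).
  return "<search reaching def for " + UseBlock + ">";
}

int main() {
  std::vector<AvailableValue> OneDef = {{"loop.exit", "%v"}};
  std::vector<AvailableValue> TwoDefs = {{"exit.a", "%a"}, {"exit.b", "%b"}};
  printf("%s\n", getValueToUse("use.block", OneDef).c_str());
  printf("%s\n", getValueToUse("use.block", TwoDefs).c_str());
  return 0;
}
```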
fredriss
pushed a commit
that referenced
this pull request
Feb 9, 2019
(test commit #2 migrating to git) git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@353533 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Feb 11, 2019
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@353683 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Mar 1, 2019
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355192 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Mar 8, 2019
Introduces memory leak in FunctionTest.GetPointerAlignment that breaks sanitizer buildbots: ``` ================================================================= ==2453==ERROR: LeakSanitizer: detected memory leaks Direct leak of 128 byte(s) in 1 object(s) allocated from: #0 0x610428 in operator new(unsigned long) /b/sanitizer-x86_64-linux-bootstrap/build/llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:105 #1 0x16936bc in llvm::User::operator new(unsigned long) /b/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/User.cpp:151:19 #2 0x7c3fe9 in Create /b/sanitizer-x86_64-linux-bootstrap/build/llvm/include/llvm/IR/Function.h:144:12 #3 0x7c3fe9 in (anonymous namespace)::FunctionTest_GetPointerAlignment_Test::TestBody() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/unittests/IR/FunctionTest.cpp:136 #4 0x1a836a0 in HandleExceptionsInMethodIfSupported<testing::Test, void> /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc #5 0x1a836a0 in testing::Test::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:2474 #6 0x1a85c55 in testing::TestInfo::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:2656:11 #7 0x1a870d0 in testing::TestCase::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:2774:28 #8 0x1aa5b84 in testing::internal::UnitTestImpl::RunAllTests() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:4649:43 #9 0x1aa4d30 in HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc #10 0x1aa4d30 in testing::UnitTest::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:4257 #11 0x1a6b656 in RUN_ALL_TESTS /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/include/gtest/gtest.h:2233:46 #12 0x1a6b656 in main /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/UnitTestMain/TestMain.cpp:50 #13 0x7f5af37a22e0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e0) Indirect leak of 40 byte(s) in 1 object(s) allocated from: #0 0x610428 in operator new(unsigned long) /b/sanitizer-x86_64-linux-bootstrap/build/llvm/projects/compiler-rt/lib/asan/asan_new_delete.cc:105 #1 0x151be6b in make_unique<llvm::ValueSymbolTable> /b/sanitizer-x86_64-linux-bootstrap/build/llvm/include/llvm/ADT/STLExtras.h:1349:29 #2 0x151be6b in llvm::Function::Function(llvm::FunctionType*, llvm::GlobalValue::LinkageTypes, unsigned int, llvm::Twine const&, llvm::Module*) /b/sanitizer-x86_64-linux-bootstrap/build/llvm/lib/IR/Function.cpp:241 #3 0x7c4006 in Create /b/sanitizer-x86_64-linux-bootstrap/build/llvm/include/llvm/IR/Function.h:144:16 #4 0x7c4006 in (anonymous namespace)::FunctionTest_GetPointerAlignment_Test::TestBody() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/unittests/IR/FunctionTest.cpp:136 #5 0x1a836a0 in HandleExceptionsInMethodIfSupported<testing::Test, void> /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc #6 0x1a836a0 in testing::Test::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:2474 #7 0x1a85c55 in testing::TestInfo::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:2656:11 #8 0x1a870d0 in testing::TestCase::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:2774:28 #9 
0x1aa5b84 in testing::internal::UnitTestImpl::RunAllTests() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:4649:43 #10 0x1aa4d30 in HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc #11 0x1aa4d30 in testing::UnitTest::Run() /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/src/gtest.cc:4257 #12 0x1a6b656 in RUN_ALL_TESTS /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/googletest/include/gtest/gtest.h:2233:46 #13 0x1a6b656 in main /b/sanitizer-x86_64-linux-bootstrap/build/llvm/utils/unittest/UnitTestMain/TestMain.cpp:50 #14 0x7f5af37a22e0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e0) SUMMARY: AddressSanitizer: 168 byte(s) leaked in 2 allocation(s). ``` See http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap/builds/11358/steps/check-llvm%20asan/logs/stdio for more information. Also introduces use-of-uninitialized-value in ConstantsTest.FoldGlobalVariablePtr: ``` ==7070==WARNING: MemorySanitizer: use-of-uninitialized-value #0 0x14e703c in User /b/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/User.h:79:5 #1 0x14e703c in Constant /b/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/Constant.h:44 #2 0x14e703c in llvm::GlobalValue::GlobalValue(llvm::Type*, llvm::Value::ValueTy, llvm::Use*, unsigned int, llvm::GlobalValue::LinkageTypes, llvm::Twine const&, unsigned int) /b/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/GlobalValue.h:78 #3 0x14e5467 in GlobalObject /b/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/GlobalObject.h:34:9 #4 0x14e5467 in llvm::GlobalVariable::GlobalVariable(llvm::Type*, bool, llvm::GlobalValue::LinkageTypes, llvm::Constant*, llvm::Twine const&, llvm::GlobalValue::ThreadLocalMode, unsigned int, bool) /b/sanitizer-x86_64-linux-fast/build/llvm/lib/IR/Globals.cpp:314 #5 0x6938f1 in llvm::(anonymous namespace)::ConstantsTest_FoldGlobalVariablePtr_Test::TestBody() /b/sanitizer-x86_64-linux-fast/build/llvm/unittests/IR/ConstantsTest.cpp:565:18 #6 0x1a240a1 in HandleExceptionsInMethodIfSupported<testing::Test, void> /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/src/gtest.cc #7 0x1a240a1 in testing::Test::Run() /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/src/gtest.cc:2474 #8 0x1a26d26 in testing::TestInfo::Run() /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/src/gtest.cc:2656:11 #9 0x1a2815f in testing::TestCase::Run() /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/src/gtest.cc:2774:28 #10 0x1a43de8 in testing::internal::UnitTestImpl::RunAllTests() /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/src/gtest.cc:4649:43 #11 0x1a42c47 in HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool> /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/src/gtest.cc #12 0x1a42c47 in testing::UnitTest::Run() /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/src/gtest.cc:4257 #13 0x1a0dfba in RUN_ALL_TESTS /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/googletest/include/gtest/gtest.h:2233:46 #14 0x1a0dfba in main /b/sanitizer-x86_64-linux-fast/build/llvm/utils/unittest/UnitTestMain/TestMain.cpp:50 #15 0x7f2081c412e0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e0) #16 0x4dff49 in _start 
(/b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/unittests/IR/IRTests+0x4dff49) SUMMARY: MemorySanitizer: use-of-uninitialized-value /b/sanitizer-x86_64-linux-fast/build/llvm/include/llvm/IR/User.h:79:5 in User ``` See http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/30222/steps/check-llvm%20msan/logs/stdio for more information. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355616 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Mar 22, 2019
Currently, the type id for a derived type is computed incorrectly. For example, type #1: int type #2: ptr to #1 For a global variable "int *a", type #1 will be attributed to variable "a". This is due to a bug which assigns the type id of the basetype of that derived type as the derived type's type id. This happens to "const", "volatile", "restrict", "typedef" and "pointer" types. This patch fixed this bug, fixed existing test cases and added a new one focusing on pointers plus other derived types. Signed-off-by: Yonghong Song <yhs@fb.com> git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356727 91177308-0d34-0410-b5e6-96231b3b80d8
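To illustrate the nature of the fix, here is a simplified standalone model with invented names (not the actual BTF emission code): the id assignment must give the derived type its own id rather than reusing the base type's id.

```
// Simplified model: a derived type (pointer/const/volatile/restrict/typedef)
// must receive its own type id; the bug was to attribute the base type's id
// to the derived type instead.
#include <cstdio>
#include <map>
#include <string>

struct Type { std::string Kind; const Type *Base; };

static std::map<const Type *, unsigned> TypeIds;
static unsigned NextId = 1;

unsigned getTypeId(const Type *T) {
  auto It = TypeIds.find(T);
  if (It != TypeIds.end())
    return It->second;
  unsigned Id = NextId++;  // the derived type gets a fresh id of its own
  TypeIds[T] = Id;
  if (T->Base)
    getTypeId(T->Base);    // the base type is numbered separately
  // The bug described above effectively returned the base type's id here for
  // derived types, so "int *a" ended up tagged with the id of "int".
  return Id;
}

int main() {
  Type IntTy{"int", nullptr};
  Type PtrTy{"ptr", &IntTy};
  unsigned PtrId = getTypeId(&PtrTy);
  unsigned IntId = getTypeId(&IntTy);
  printf("id(ptr to int) = %u, id(int) = %u\n", PtrId, IntId);
  return 0;
}
```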
fredriss
pushed a commit
that referenced
this pull request
Apr 13, 2019
The size field of a location can be different for each entry, so it is useful to have this displayed in the output of llvm-readobj -stackmap. Below is an example of how the output would look:

Record ID: 2882400000, instruction offset: 16
  3 locations:
    #1: Constant 1, size: 8
    #2: Constant 2, size: 8
    #3: Constant 3, size: 8
  0 live-outs: [ ]

Patch By: jacob.hughes@kcl.ac.uk (with heavy modification by me)

Differential Revision: https://reviews.llvm.org/D59169

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@358324 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
May 23, 2019
This patch focuses on adding additional testing for the --source switch. For reference, the source-interleave-x86_64.ll test file has been split into two parts - the input (shared with the other tests) and the test itself. Reviewed by: MaskRay, rupprecht, grimar Differential Revision: https://reviews.llvm.org/D61996 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@361479 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Jun 12, 2019
Looks like a MachinePipeliner algorithm problem found by sanitizer-x86_64-linux-fast. I will backout this test first while investigating the problem to unblock buildbot. ==49637==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x614000002e08 at pc 0x000004364350 bp 0x7ffe228a3bd0 sp 0x7ffe228a3bc8 READ of size 4 at 0x614000002e08 thread T0 #0 0x436434f in llvm::SwingSchedulerDAG::checkValidNodeOrder(llvm::SmallVector<llvm::NodeSet, 8u> const&) const /b/sanitizer-x86_64-linux-fast/build/llvm/lib/CodeGen/MachinePipeliner.cpp:3736:11 #1 0x4342cd0 in llvm::SwingSchedulerDAG::schedule() /b/sanitizer-x86_64-linux-fast/build/llvm/lib/CodeGen/MachinePipeliner.cpp:486:3 #2 0x434042d in llvm::MachinePipeliner::swingModuloScheduler(llvm::MachineLoop&) /b/sanitizer-x86_64-linux-fast/build/llvm/lib/CodeGen/MachinePipeliner.cpp:385:7 #3 0x433eb90 in llvm::MachinePipeliner::runOnMachineFunction(llvm::MachineFunction&) /b/sanitizer-x86_64-linux-fast/build/llvm/lib/CodeGen/MachinePipeliner.cpp:207:5 #4 0x428b7ea in llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /b/sanitizer-x86_64-linux-fast/build/llvm/lib/CodeGen/MachineFunctionPass.cpp:73:13 #5 0x4d1a913 in llvm::FPPassManager::runOnFunction(llvm::Function&) /b/sanitizer-x86_64-linux-fast/build/llvm/lib/IR/LegacyPassManager.cpp:1648:27 #6 0x4d1b192 in llvm::FPPassManager::runOnModule(llvm::Module&) /b/sanitizer-x86_64-linux-fast/build/llvm/lib/IR/LegacyPassManager.cpp:1685:16 #7 0x4d1c06d in runOnModule /b/sanitizer-x86_64-linux-fast/build/llvm/lib/IR/LegacyPassManager.cpp:1752:27 #8 0x4d1c06d in llvm::legacy::PassManagerImpl::run(llvm::Module&) /b/sanitizer-x86_64-linux-fast/build/llvm/lib/IR/LegacyPassManager.cpp:1865 #9 0xa48ca3 in compileModule(char**, llvm::LLVMContext&) /b/sanitizer-x86_64-linux-fast/build/llvm/tools/llc/llc.cpp:611:8 #10 0xa4270f in main /b/sanitizer-x86_64-linux-fast/build/llvm/tools/llc/llc.cpp:365:22 #11 0x7fec902572e0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e0) #12 0x971b69 in _start (/b/sanitizer-x86_64-linux-fast/build/llvm_build_asan/bin/llc+0x971b69) git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@363105 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Jul 9, 2019
Introduction ============ This patch added intial support for bpf program compile once and run everywhere (CO-RE). The main motivation is for bpf program which depends on kernel headers which may vary between different kernel versions. The initial discussion can be found at https://lwn.net/Articles/773198/. Currently, bpf program accesses kernel internal data structure through bpf_probe_read() helper. The idea is to capture the kernel data structure to be accessed through bpf_probe_read() and relocate them on different kernel versions. On each host, right before bpf program load, the bpfloader will look at the types of the native linux through vmlinux BTF, calculates proper access offset and patch the instruction. To accommodate this, three intrinsic functions preserve_{array,union,struct}_access_index are introduced which in clang will preserve the base pointer, struct/union/array access_index and struct/union debuginfo type information. Later, bpf IR pass can reconstruct the whole gep access chains without looking at gep itself. This patch did the following: . An IR pass is added to convert preserve_*_access_index to global variable who name encodes the getelementptr access pattern. The global variable has metadata attached to describe the corresponding struct/union debuginfo type. . An SimplifyPatchable MachineInstruction pass is added to remove unnecessary loads. . The BTF output pass is enhanced to generate relocation records located in .BTF.ext section. Typical CO-RE also needs support of global variables which can be assigned to different values to different hosts. For example, kernel version can be used to guard different versions of codes. This patch added the support for patchable externals as well. Example ======= The following is an example. struct pt_regs { long arg1; long arg2; }; struct sk_buff { int i; struct net_device *dev; }; #define _(x) (__builtin_preserve_access_index(x)) static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) = (void *) 4; extern __attribute__((section(".BPF.patchable_externs"))) unsigned __kernel_version; int bpf_prog(struct pt_regs *ctx) { struct net_device *dev = 0; // ctx->arg* does not need bpf_probe_read if (__kernel_version >= 41608) bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg1)->dev)); else bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg2)->dev)); return dev != 0; } In the above, we want to translate the third argument of bpf_probe_read() as relocations. -bash-4.4$ clang -target bpf -O2 -g -S trace.c The compiler will generate two new subsections in .BTF.ext, OffsetReloc and ExternReloc. OffsetReloc is to record the structure member offset operations, and ExternalReloc is to record the external globals where only u8, u16, u32 and u64 are supported. BPFOffsetReloc Size struct SecLOffsetReloc for ELF section #1 A number of struct BPFOffsetReloc for ELF section #1 struct SecOffsetReloc for ELF section #2 A number of struct BPFOffsetReloc for ELF section #2 ... 
BPFExternReloc Size struct SecExternReloc for ELF section #1 A number of struct BPFExternReloc for ELF section #1 struct SecExternReloc for ELF section #2 A number of struct BPFExternReloc for ELF section #2 struct BPFOffsetReloc { uint32_t InsnOffset; ///< Byte offset in this section uint32_t TypeID; ///< TypeID for the relocation uint32_t OffsetNameOff; ///< The string to traverse types }; struct BPFExternReloc { uint32_t InsnOffset; ///< Byte offset in this section uint32_t ExternNameOff; ///< The string for external variable }; Note that only externs with attribute section ".BPF.patchable_externs" are considered for Extern Reloc which will be patched by bpf loader right before the load. For the above test case, two offset records and one extern record will be generated: OffsetReloc records: .long .Ltmp12 # Insn Offset .long 7 # TypeId .long 242 # Type Decode String .long .Ltmp18 # Insn Offset .long 7 # TypeId .long 242 # Type Decode String ExternReloc record: .long .Ltmp5 # Insn Offset .long 165 # External Variable In string table: .ascii "0:1" # string offset=242 .ascii "__kernel_version" # string offset=165 The default member offset can be calculated as the 2nd member offset (0 representing the 1st member) of struct "sk_buff". The asm code: .Ltmp5: .Ltmp6: r2 = 0 r3 = 41608 .Ltmp7: .Ltmp8: .loc 1 18 9 is_stmt 0 # t.c:18:9 .Ltmp9: if r3 > r2 goto LBB0_2 .Ltmp10: .Ltmp11: .loc 1 0 9 # t.c:0:9 .Ltmp12: r2 = 8 .Ltmp13: .loc 1 19 66 is_stmt 1 # t.c:19:66 .Ltmp14: .Ltmp15: r3 = *(u64 *)(r1 + 0) goto LBB0_3 .Ltmp16: .Ltmp17: LBB0_2: .loc 1 0 66 is_stmt 0 # t.c:0:66 .Ltmp18: r2 = 8 .loc 1 21 66 is_stmt 1 # t.c:21:66 .Ltmp19: r3 = *(u64 *)(r1 + 8) .Ltmp20: .Ltmp21: LBB0_3: .loc 1 0 66 is_stmt 0 # t.c:0:66 r3 += r2 r1 = r10 .Ltmp22: .Ltmp23: .Ltmp24: r1 += -8 r2 = 8 call 4 For instruction .Ltmp12 and .Ltmp18, "r2 = 8", the number 8 is the structure offset based on the current BTF. Loader needs to adjust it if it changes on the host. For instruction .Ltmp5, "r2 = 0", the external variable got a default value 0, loader needs to supply an appropriate value for the particular host. Compiling to generate object code and disassemble: 0000000000000000 bpf_prog: 0: b7 02 00 00 00 00 00 00 r2 = 0 1: 7b 2a f8 ff 00 00 00 00 *(u64 *)(r10 - 8) = r2 2: b7 02 00 00 00 00 00 00 r2 = 0 3: b7 03 00 00 88 a2 00 00 r3 = 41608 4: 2d 23 03 00 00 00 00 00 if r3 > r2 goto +3 <LBB0_2> 5: b7 02 00 00 08 00 00 00 r2 = 8 6: 79 13 00 00 00 00 00 00 r3 = *(u64 *)(r1 + 0) 7: 05 00 02 00 00 00 00 00 goto +2 <LBB0_3> 0000000000000040 LBB0_2: 8: b7 02 00 00 08 00 00 00 r2 = 8 9: 79 13 08 00 00 00 00 00 r3 = *(u64 *)(r1 + 8) 0000000000000050 LBB0_3: 10: 0f 23 00 00 00 00 00 00 r3 += r2 11: bf a1 00 00 00 00 00 00 r1 = r10 12: 07 01 00 00 f8 ff ff ff r1 += -8 13: b7 02 00 00 08 00 00 00 r2 = 8 14: 85 00 00 00 04 00 00 00 call 4 Instructions #2, #5 and #8 need relocation resoutions from the loader. Signed-off-by: Yonghong Song <yhs@fb.com> Differential Revision: https://reviews.llvm.org/D61524 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@365503 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Jul 19, 2019
Summary: Change the scan algorithm to use only power-of-two shifts (1, 2, 4, 8, 16, 32) instead of starting off shifting by 1, 2 and 3 and then doing a 3-way ADD, because: 1. It simplifies the compiler a little. 2. It minimizes vgpr pressure because each instruction is now of the form vn = vn + vn << c. 3. It is more friendly to the DPP combiner, which currently can't combine into an ADD3 instruction. Because of #2 and #3 the end result is improved from this: v_add_u32_dpp v4, v3, v3 row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0 v_mov_b32_dpp v5, v3 row_shr:2 row_mask:0xf bank_mask:0xf v_mov_b32_dpp v1, v3 row_shr:3 row_mask:0xf bank_mask:0xf v_add3_u32 v1, v4, v5, v1 s_nop 1 v_add_u32_dpp v1, v1, v1 row_shr:4 row_mask:0xf bank_mask:0xe s_nop 1 v_add_u32_dpp v1, v1, v1 row_shr:8 row_mask:0xf bank_mask:0xc s_nop 1 v_add_u32_dpp v1, v1, v1 row_bcast:15 row_mask:0xa bank_mask:0xf s_nop 1 v_add_u32_dpp v1, v1, v1 row_bcast:31 row_mask:0xc bank_mask:0xf To this: v_add_u32_dpp v1, v1, v1 row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0 s_nop 1 v_add_u32_dpp v1, v1, v1 row_shr:2 row_mask:0xf bank_mask:0xf bound_ctrl:0 s_nop 1 v_add_u32_dpp v1, v1, v1 row_shr:4 row_mask:0xf bank_mask:0xe s_nop 1 v_add_u32_dpp v1, v1, v1 row_shr:8 row_mask:0xf bank_mask:0xc s_nop 1 v_add_u32_dpp v1, v1, v1 row_bcast:15 row_mask:0xa bank_mask:0xf s_nop 1 v_add_u32_dpp v1, v1, v1 row_bcast:31 row_mask:0xc bank_mask:0xf I.e. two fewer computational instructions, one extra nop where we could schedule something else. Reviewers: arsenm, sheredom, critson, rampitec, vpykhtin Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D64411 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@366543 91177308-0d34-0410-b5e6-96231b3b80d8
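As a rough host-side model of the scan shape described above (plain C++, not GPU code; the DPP row/bank masks and bound_ctrl details are deliberately ignored here), each step adds to every lane the value held a power-of-two number of lanes below it.

```
// Host-side model of a 64-lane inclusive prefix sum built only from
// power-of-two lane shifts: v = v + (v shifted down by 1, 2, 4, 8, 16, 32).
#include <array>
#include <cstdint>
#include <cstdio>

int main() {
  constexpr int Lanes = 64;
  std::array<uint32_t, Lanes> V;
  V.fill(1);  // scanning all ones makes the result easy to check: V[i] == i + 1

  for (int Shift : {1, 2, 4, 8, 16, 32}) {
    std::array<uint32_t, Lanes> Shifted{};  // lanes below Shift receive 0
    for (int I = Shift; I < Lanes; ++I)
      Shifted[I] = V[I - Shift];
    for (int I = 0; I < Lanes; ++I)
      V[I] += Shifted[I];                   // vn = vn + (vn << c), per lane
  }

  printf("lane 0 = %u, lane 31 = %u, lane 63 = %u\n", V[0], V[31], V[63]);
  return 0;
}
```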
fredriss
pushed a commit
that referenced
this pull request
Jul 25, 2019
Summary: Every time PrettyPrinter::printInst is called, stdout is flushed and it makes llvm-objdump slow. This patches adds a string buffer to prevent stdout from being flushed. Benchmark results (./llvm-objdump-master: without this patch, ./bin/llvm-objcopy: with this patch): $ hyperfine --warmup 10 './llvm-objdump-master -d ./bin/llvm-objcopy' './bin/llvm-objdump -d ./bin/llvm-objcopy' Benchmark #1: ./llvm-objdump-master -d ./bin/llvm-objcopy Time (mean ± σ): 2.230 s ± 0.050 s [User: 1.533 s, System: 0.682 s] Range (min … max): 2.115 s … 2.278 s 10 runs Benchmark #2: ./bin/llvm-objdump -d ./bin/llvm-objcopy Time (mean ± σ): 386.4 ms ± 13.0 ms [User: 376.6 ms, System: 6.1 ms] Range (min … max): 366.1 ms … 407.0 ms 10 runs Summary './bin/llvm-objdump -d ./bin/llvm-objcopy' ran 5.77 ± 0.23 times faster than './llvm-objdump-master -d ./bin/llvm-objcopy' Reviewers: alexshap, Bigcheese, jhenderson, rupprecht, grimar, MaskRay Reviewed By: jhenderson, MaskRay Subscribers: dexonsmith, jhenderson, javed.absar, kristof.beyls, rupprecht, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D64969 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@366984 91177308-0d34-0410-b5e6-96231b3b80d8
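For illustration, here is a generic standalone sketch of the buffering idea (not the llvm-objdump source): accumulate output in an in-memory string and write it out in one go instead of flushing the output stream for every printed line.

```
// Generic sketch: build the listing in a string buffer and emit it with a
// single write, avoiding a flush per printed instruction.
#include <cstdio>
#include <string>

int main() {
  std::string Buffer;
  Buffer.reserve(1 << 20);

  for (unsigned Addr = 0; Addr < 100000; Addr += 4) {
    char Line[64];
    int N = snprintf(Line, sizeof(Line), "%8x:  nop\n", Addr);
    Buffer.append(Line, N);  // cheap in-memory append, no flush
  }

  fwrite(Buffer.data(), 1, Buffer.size(), stdout);  // one write at the end
  return 0;
}
```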
fredriss
pushed a commit
that referenced
this pull request
Sep 3, 2019
…ymbol that belongs to a section with a broken sh_name" It broke BB: http://lab.llvm.org:8011/builders/clang-x86_64-debian-fast/builds/16955/steps/test/logs/stdio Expected<T> must be checked before access or destruction. Unchecked Expected<T> contained error: a section [index 1] has an invalid sh_name (0xffff) offset which goes past the end of the section name string tableStack dump: 0. Program arguments: /srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm /srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/test/tools/llvm-nm/Output/format-sysv-section.test.tmp2.o --format=sysv #0 0x00000000008af7c4 PrintStackTraceSignalHandler(void*) (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x8af7c4) #1 0x00000000008ad8be llvm::sys::RunSignalHandlers() (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x8ad8be) #2 0x00000000008afbd8 SignalHandler(int) (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x8afbd8) #3 0x00007f0a6b989730 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x12730) #4 0x00007f0a6b48d7bb raise (/lib/x86_64-linux-gnu/libc.so.6+0x377bb) #5 0x00007f0a6b478535 abort (/lib/x86_64-linux-gnu/libc.so.6+0x22535) #6 0x000000000042004b llvm::Expected<llvm::StringRef>::fatalUncheckedExpected() const (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x42004b) #7 0x00000000008367f5 (/sv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x8367f5) #8 0x0000000000817b80 llvm::object::IRObjectFile::findBitcodeInObject(llvm::object::ObjectFile const&) (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x817b80) #9 0x0000000000838416 llvm::object::SymbolicFile::createSymbolicFile(llvm::MemoryBufferRef, llvm::file_magic, llvm::LLVMContext*) (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x838416) #10 0x00000000007f36cb llvm::object::createBinary(llvm::MemoryBufferRef, llvm::LLVMContext*) (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x7f36cb) #11 0x0000000000413123 dumpSymbolNamesFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x413123) #12 0x0000000000412e38 main (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x412e38) #13 0x00007f0a6b47a09b __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409b) #14 0x00000000004120da _start (/srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/llvm-nm+0x4120da) FileCheck error: '-' is empty. FileCheck command line: /srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.obj/bin/FileCheck /srv/llvm-buildbot-srcatch/llvm-build-dir/clang-x86_64-debian-fast/llvm.src/test/tools/llvm-nm/format-sysv-section.test --check-prefix=ERR -- git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@370662 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Sep 19, 2019
Summary: This reverts commit r372204. This change causes build bot failures under msan: http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/35236/steps/check-llvm%20msan/logs/stdio: ``` FAIL: LLVM :: DebugInfo/AArch64/asan-stack-vars.mir (19531 of 33579) ******************** TEST 'LLVM :: DebugInfo/AArch64/asan-stack-vars.mir' FAILED ******************** Script: -- : 'RUN: at line 1'; /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/llc -O0 -start-before=livedebugvalues -filetype=obj -o - /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/DebugInfo/AArch64/asan-stack-vars.mir | /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/llvm-dwarfdump -v - | /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/FileCheck /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/DebugInfo/AArch64/asan-stack-vars.mir -- Exit Code: 2 Command Output (stderr): -- ==62894==WARNING: MemorySanitizer: use-of-uninitialized-value #0 0xdfcafb in llvm::AArch64FrameLowering::resolveFrameOffsetReference(llvm::MachineFunction const&, int, bool, unsigned int&, bool, bool) const /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1658:3 #1 0xdfae8a in resolveFrameIndexReference /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1580:10 #2 0xdfae8a in llvm::AArch64FrameLowering::getFrameIndexReference(llvm::MachineFunction const&, int, unsigned int&) const /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1536 #3 0x46642c1 in (anonymous namespace)::LiveDebugValues::extractSpillBaseRegAndOffset(llvm::MachineInstr const&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:582:21 #4 0x4647cb3 in transferSpillOrRestoreInst /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:883:11 #5 0x4647cb3 in process /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:1079 #6 0x4647cb3 in (anonymous namespace)::LiveDebugValues::ExtendRanges(llvm::MachineFunction&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:1361 #7 0x463ac0e in (anonymous namespace)::LiveDebugValues::runOnMachineFunction(llvm::MachineFunction&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/LiveDebugValues.cpp:1415:18 #8 0x4854ef0 in llvm::MachineFunctionPass::runOnFunction(llvm::Function&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/CodeGen/MachineFunctionPass.cpp:73:13 #9 0x53b0b01 in llvm::FPPassManager::runOnFunction(llvm::Function&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1648:27 #10 0x53b15f6 in llvm::FPPassManager::runOnModule(llvm::Module&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1685:16 #11 0x53b298d in runOnModule /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1750:27 #12 0x53b298d in llvm::legacy::PassManagerImpl::run(llvm::Module&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/IR/LegacyPassManager.cpp:1863 #13 0x905f21 in compileModule(char**, llvm::LLVMContext&) /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/tools/llc/llc.cpp:601:8 #14 0x8fdc4e in main /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/tools/llc/llc.cpp:355:22 #15 0x7f67673632e0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e0) #16 0x882369 in _start 
(/b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/llc+0x882369) MemorySanitizer: use-of-uninitialized-value /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp:1658:3 in llvm::AArch64FrameLowering::resolveFrameOffsetReference(llvm::MachineFunction const&, int, bool, unsigned int&, bool, bool) const Exiting error: -: The file was not recognized as a valid object file FileCheck error: '-' is empty. FileCheck command line: /b/sanitizer-x86_64-linux-fast/build/llvm_build_msan/bin/FileCheck /b/sanitizer-x86_64-linux-fast/build/llvm-project/llvm/test/DebugInfo/AArch64/asan-stack-vars.mir ``` Reviewers: bkramer Reviewed By: bkramer Subscribers: sdardis, aprantl, kristof.beyls, jrtc27, atanasyan, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D67710 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@372228 91177308-0d34-0410-b5e6-96231b3b80d8
fredriss
pushed a commit
that referenced
this pull request
Oct 11, 2019
This test is not defined. FAIL: LLVM-Unit :: ADT/./ADTTests/ArrayRefTest.SizeTSizedOperations (178 of 33926) ******************** TEST 'LLVM-Unit :: ADT/./ADTTests/ArrayRefTest.SizeTSizedOperations' FAILED ******************** Note: Google Test filter = ArrayRefTest.SizeTSizedOperations [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from ArrayRefTest [ RUN ] ArrayRefTest.SizeTSizedOperations /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/include/llvm/ADT/ArrayRef.h:180:32: runtime error: applying non-zero offset 9223372036854775806 to null pointer #0 0x5ae8dc in llvm::ArrayRef<char>::slice(unsigned long, unsigned long) const /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/include/llvm/ADT/ArrayRef.h:180:32 #1 0x5ae44c in (anonymous namespace)::ArrayRefTest_SizeTSizedOperations_Test::TestBody() /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/unittests/ADT/ArrayRefTest.cpp:85:3 #2 0x928a96 in testing::Test::Run() /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/utils/unittest/googletest/src/gtest.cc:2474:5 #3 0x929793 in testing::TestInfo::Run() /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/utils/unittest/googletest/src/gtest.cc:2656:11 #4 0x92a152 in testing::TestCase::Run() /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/utils/unittest/googletest/src/gtest.cc:2774:28 #5 0x9319d2 in testing::internal::UnitTestImpl::RunAllTests() /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/utils/unittest/googletest/src/gtest.cc:4649:43 #6 0x931416 in testing::UnitTest::Run() /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/utils/unittest/googletest/src/gtest.cc:4257:10 #7 0x920ac3 in RUN_ALL_TESTS /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/utils/unittest/googletest/include/gtest/gtest.h:2233:46 #8 0x920ac3 in main /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/utils/unittest/UnitTestMain/TestMain.cpp:50:10 #9 0x7f66135b72e0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202e0) #10 0x472c19 in _start (/b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm_build_ubsan/unittests/ADT/ADTTests+0x472c19) SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /b/sanitizer-x86_64-linux-bootstrap-ubsan/build/llvm-project/llvm/include/llvm/ADT/ArrayRef.h:180:32 in git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@374327 91177308-0d34-0410-b5e6-96231b3b80d8