Merge branch 'amd-master' into 'amd-common' #48


Merged
merged 327 commits into amd-common from amd-master on Dec 21, 2016

Conversation

kzhuravl

No description provided.

RKSimon and others added 30 commits December 15, 2016 10:45
This also refactors some common code into the 'GetTypeName' method.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289803 91177308-0d34-0410-b5e6-96231b3b80d8
In some situations, the BUILD_VECTOR node that builds a v18i8 vector by
splatting an i8 constant will end up with signed 8-bit values, and in other
situations it'll end up with unsigned ones. Handle both situations.

Fixes PR31340.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289804 91177308-0d34-0410-b5e6-96231b3b80d8
Incorrect 'undef' mask index matching meant that broadcast shuffles could be detected as reverse shuffles

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289811 91177308-0d34-0410-b5e6-96231b3b80d8
A number of new patterns for simplifying and/xor of icmp:

(icmp ne %x, 0) ^ (icmp ne %y, 0) => icmp ne %x, %y if the following is true:
1- (%x = and %a, %mask) and (%y = and %b, %mask)
2- %mask is a power of 2.

(icmp eq %x, 0) & (icmp ne %y, 0) => icmp ult %x, %y if the following is true:
1- (%x = and %a, %mask1) and (%y = and %b, %mask2)
2- Let %t be the smallest power of 2 where %mask1 & %t != 0. Then for any
   %s that is a power of 2 and %s & %mask2 != 0, we must have %s <= %t.
For example, if %mask1 = 24 and %mask2 = 16, then %t = 8 and %s = 16, which
violates condition (2) above. So this optimization cannot be applied.
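
As a standalone illustration (not part of the original commit message), the first equivalence can be verified exhaustively for a small bit width. The C++ program below is only a sanity check of the identity, not the InstCombine implementation:

```cpp
// Exhaustive check of ((x != 0) ^ (y != 0)) == (x != y) for x = a & M, y = b & M,
// where M is a power-of-two mask. Sanity check of the identity only; the real
// transform lives in InstCombine's and/xor-of-icmp folding.
#include <cassert>
#include <cstdint>

int main() {
  const uint8_t M = 0x08; // any single-bit (power-of-two) mask
  for (unsigned a = 0; a < 256; ++a) {
    for (unsigned b = 0; b < 256; ++b) {
      uint8_t x = a & M, y = b & M;
      bool lhs = (x != 0) != (y != 0); // (icmp ne %x, 0) ^ (icmp ne %y, 0)
      bool rhs = (x != y);             // icmp ne %x, %y
      assert(lhs == rhs);
    }
  }
  return 0;
}
```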




git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289813 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit ee709f8988653a0334fbf100cdbbdd83a3933347.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289814 91177308-0d34-0410-b5e6-96231b3b80d8
Specifically avoid implicit conversions from/to integral types to
avoid potential errors when changing the underlying type. For example,
a typical initialization of a "full" mask was "LaneMask = ~0u", which
would result in a value of 0x00000000FFFFFFFF if the type was extended
to uint64_t.
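
A minimal C++ sketch of the pitfall described above (illustration only, not LLVM's LaneBitmask code):

```cpp
// The "full mask" pitfall: ~0u is a 32-bit value, so assigning it to a wider
// mask type zero-extends 0xFFFFFFFF instead of producing an all-ones mask.
#include <cstdint>
#include <cstdio>

int main() {
  uint32_t Full32 = ~0u;           // 0xFFFFFFFF
  uint64_t Wrong64 = ~0u;          // 0x00000000FFFFFFFF -- not all ones
  uint64_t Right64 = ~uint64_t(0); // 0xFFFFFFFFFFFFFFFF
  std::printf("%08x %016llx %016llx\n", Full32,
              (unsigned long long)Wrong64, (unsigned long long)Right64);
  return 0;
}
```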

Differential Revision: https://reviews.llvm.org/D27454


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289820 91177308-0d34-0410-b5e6-96231b3b80d8
Add the missing domain equivalences for movss, movsd, movd and movq zero extending loading instructions.

Differential Revision: https://reviews.llvm.org/D27684

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289825 91177308-0d34-0410-b5e6-96231b3b80d8
…n" inst

Simplify CFG will try to sink the last instruction in a series of basic blocks,
creating a "common" instruction in the successor block (sinkLastInstruction).
When it does this, the debug location of the single instruction should be the
merged debug locations of the commoned instructions.

Differential Revision: https://reviews.llvm.org/D27590


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289828 91177308-0d34-0410-b5e6-96231b3b80d8
Avoid duplicating instructions in the int32/int64 domains.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289830 91177308-0d34-0410-b5e6-96231b3b80d8
…ldata sections specially.

Move the check for the code model into isGlobalInSmallSectionImpl and return false (not in small section) for variables placed in sections prefixed with .ldata (workaround for a tool limitation).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289832 91177308-0d34-0410-b5e6-96231b3b80d8
…f common inst"

Reverting as it is causing buildbot failures (address sanitizer).



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289833 91177308-0d34-0410-b5e6-96231b3b80d8
As discussed on D27692

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289834 91177308-0d34-0410-b5e6-96231b3b80d8
This is a tiny patch with a big pile of test changes.
This partially fixes PR27885:
https://llvm.org/bugs/show_bug.cgi?id=27885

My motivating case looks like this:

  - vpshufd {{.*#+}} xmm1 = xmm1[0,1,0,2]
  - vpshufd {{.*#+}} xmm0 = xmm0[0,2,2,3]
  - vpblendw {{.*#+}} xmm0 = xmm0[0,1,2,3],xmm1[4,5,6,7]

  + vshufps {{.*#+}} xmm0 = xmm0[0,2],xmm1[0,2]

And this happens several times in the diffs. For chips with domain-crossing penalties,
the instruction count and size reduction should usually overcome any potential 
domain-crossing penalty due to using an FP op in a sequence of int ops. For chips such
as recent Intel big cores and Atom, there is no domain-crossing penalty for shufps, so
using shufps is a pure win.

So the test case diffs all appear to be improvements except one test in 
vector-shuffle-combining.ll where we miss an opportunity to use a shift to generate 
zero elements and one test in combine-sra.ll where multiple uses prevent the expected
shuffle combining.

Differential Revision: https://reviews.llvm.org/D27692


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289837 91177308-0d34-0410-b5e6-96231b3b80d8
If OUTPUT_DIR is not specified we can assume the symlink is linking to a file in the same directory, so we can use $<TARGET_FILE_NAME:${target}> to create a relative symlink.

In the case of LLDB, when we build a framework, we are creating symlinks in a different directory than the file we're pointing to, and we don't install those links. To make this work in the build directory we can use $<TARGET_FILE:${target}> instead, which uses the full path to the target.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289840 91177308-0d34-0410-b5e6-96231b3b80d8
This is split out from D27696, since it turned out to be a bug fix and
not part of the NFC efficiency change.

Keep the same adjusted (possibly decayed) threshold in both the worklist
and the ImportList. Otherwise if we encountered it first along a cold
path, the callee would be added to the worklist with a lower decayed
threshold than when it is later encountered along a hot path. But the
logic uses the threshold recorded in the ImportList entry to check if
we should re-add it, and without this patch the threshold recorded there
is the same along both paths so we don't re-add it. Using the
same possibly decayed threshold in the ImportList ensures we re-add it
later with the higher non-decayed hot path threshold.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289843 91177308-0d34-0410-b5e6-96231b3b80d8
It's brittle, and Doxygen already picks up the overridden method's comment
anyway.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289844 91177308-0d34-0410-b5e6-96231b3b80d8
This patch checks that the SlowMisaligned128Store subtarget feature is set
when penalizing such stores in getMemoryOpCost.

Differential Revision: https://reviews.llvm.org/D27677

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289845 91177308-0d34-0410-b5e6-96231b3b80d8
…ctions

This is the 256-bit counterpart to the 128-bit transform checked in here:
https://reviews.llvm.org/rL289837

This patch is based on the draft by @sroland (Roland Scheidegger) that is
attached to PR27885:
https://llvm.org/bugs/show_bug.cgi?id=27885



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289846 91177308-0d34-0410-b5e6-96231b3b80d8
…e. NFC.

MachineLegalizer used to be the name of both the class and the member,
causing GCC errors. r276522 fixed that by renaming the member to just
'Legalizer'.  The 'class' workaround isn't necessary anymore; drop it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289848 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289849 91177308-0d34-0410-b5e6-96231b3b80d8
To prevent StringLiteral from being created with a non-literal
char array, clang's enable_if() attribute can be used
in such a way as to guarantee that the constructor is disabled
unless the length of the string can be computed at compile time.

This only works on clang, but at least it should allow bots
to catch abuse of StringLiteral.
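
A rough C++ sketch of the technique, under the assumption that it follows the usual clang enable_if pattern; the class and names below are hypothetical, not LLVM's exact StringLiteral code:

```cpp
// Hypothetical sketch (not LLVM's exact code): on clang, the enable_if
// attribute removes this constructor from overload resolution unless the
// argument's length is a compile-time constant equal to N - 1, i.e. the
// argument is a genuine string literal.
#include <cstddef>

#ifndef __has_attribute
#define __has_attribute(x) 0 // portability shim for compilers without it
#endif

class LiteralString {
  const char *Data;
  std::size_t Length;

public:
  template <std::size_t N>
  constexpr LiteralString(const char (&Str)[N])
#if defined(__clang__) && __has_attribute(enable_if)
      __attribute__((enable_if(__builtin_strlen(Str) == N - 1,
                               "argument must be a string literal")))
#endif
      : Data(Str), Length(N - 1) {}
};

constexpr LiteralString OK("hello"); // accepted: literal of known length
// char Buf[8] = {};                 // strlen(Buf) != 7 at compile time,
// LiteralString Bad(Buf);           // so this would be rejected on clang
```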

Differential Revision: https://reviews.llvm.org/D27780

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289853 91177308-0d34-0410-b5e6-96231b3b80d8
MSVC 2015 has been the minimum supported version of VS since October.

Differential Revision: https://reviews.llvm.org/D25710

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289854 91177308-0d34-0410-b5e6-96231b3b80d8
Min/max canonicalization (r287585) exposes the fact that we're missing combines for min/max patterns. 
This patch won't solve the example that was attached to that thread, so something else still needs fixing.

The line between InstCombine and InstSimplify gets blurry here because sometimes the icmp instruction that
we want to fold to already exists, but sometimes it's the swapped form of what we want.

Corresponding changes for smax/umin/umax to follow.
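
As a standalone sanity check (not taken from the patch), the kind of identity these min/max-vs-icmp folds rely on can be verified exhaustively, e.g. smin(X, Y) == X holds exactly when X <= Y:

```cpp
// Exhaustive check over 8-bit signed values of: (smin(X, Y) == X) <=> (X <= Y).
// Illustration of the flavor of identity behind such folds, not the patch's
// exact InstCombine transform.
#include <algorithm>
#include <cassert>
#include <cstdint>

int main() {
  for (int x = -128; x <= 127; ++x) {
    for (int y = -128; y <= 127; ++y) {
      int8_t X = static_cast<int8_t>(x), Y = static_cast<int8_t>(y);
      bool lhs = (std::min(X, Y) == X);
      bool rhs = (X <= Y);
      assert(lhs == rhs);
    }
  }
  return 0;
}
```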

Differential Revision: https://reviews.llvm.org/D27531


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289855 91177308-0d34-0410-b5e6-96231b3b80d8
Eli Friedman and others added 27 commits December 20, 2016 19:33
See https://reviews.llvm.org/D6678 for the history of
isExtractSubvectorCheap. Essentially the same considerations apply
to ARM.

This temporarily breaks the formation of vpadd/vpaddl in certain cases;
AddCombineToVPADDL essentially assumes that we won't form VUZP shuffles.
See https://reviews.llvm.org/D27779 for followup fix.

Differential Revision: https://reviews.llvm.org/D27774



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290198 91177308-0d34-0410-b5e6-96231b3b80d8
Make it clear that TripCount is the upper bound of the iteration on which
control exits LatchBlock.

Differential Revision: https://reviews.llvm.org/D26675

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290199 91177308-0d34-0410-b5e6-96231b3b80d8
Also make the summary ref and call graph vectors immutable. This means
a smaller API surface and fewer places to audit for non-determinism.

Differential Revision: https://reviews.llvm.org/D27875

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290200 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for YAML<->DWARF for debug_info sections.

This re-lands r290147, after fixing the issue that caused bots to fail (thank you UBSan!).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290204 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r290204.

Still breaking bots... In a meeting now, so I can't fix it immediately.

Bot URL:
http://lab.llvm.org:8011/builders/clang-s390x-linux/builds/2415

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290209 91177308-0d34-0410-b5e6-96231b3b80d8
…nges.

Summary:
In getRangeForAffineAR we compute ranges for affine exprs E = A + B*C,
where ranges for A, B, and C are known. To avoid overflow, we need to
operate on a bigger bitwidth, and originally we chose 2*x+1 for this
(x being the original bitwidth). However, it is safe to use just 2*x:

A+B*C <= (2^x - 1) + (2^x - 1)*(2^x - 1) =
       =  2^x - 1 + 2^2x - 2^x - 2^x + 1 =
       = 2^2x - 2^x <= 2^2x - 1

Unnecessary extending of bitwidths results in noticeable slowdowns: ranges
perform arithmetic operations using APInt, which are much slower when bitwidths
are bigger than 64.

Reviewers: sanjoy, majnemer, chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D27795

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290211 91177308-0d34-0410-b5e6-96231b3b80d8
GlobPattern is a class to handle glob pattern matching. Currently
only LLD is using that, but technically that feature is not specific
to linkers, so in this patch I move that file to LLVM.
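
A minimal usage sketch, assuming the class keeps the create()/match() interface it had in LLD; the header path and signatures below are an assumption for illustration, not a guarantee:

```cpp
// Sketch of using llvm::GlobPattern after the move into LLVM Support.
#include "llvm/Support/Error.h"
#include "llvm/Support/GlobPattern.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  Expected<GlobPattern> Pat = GlobPattern::create("lib*.so");
  if (!Pat) {
    errs() << toString(Pat.takeError()) << "\n";
    return 1;
  }
  outs() << (Pat->match("libfoo.so") ? "match" : "no match") << "\n";
  return 0;
}
```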

Differential Revision: https://reviews.llvm.org/D27969

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290212 91177308-0d34-0410-b5e6-96231b3b80d8
We're currently doing nearly the same thing for @llvm.objectsize in
three different places: two of them are missing checks for overflow,
and one of them could subtly break if InstCombine gets much smarter
about removing alloc sites. Seems like a good idea to not do that.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290214 91177308-0d34-0410-b5e6-96231b3b80d8
No existing client is passing a non-null value here. This will come back
in a slightly different form as part of the type identifier summary work.

Differential Revision: https://reviews.llvm.org/D28006

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290222 91177308-0d34-0410-b5e6-96231b3b80d8
… RPC calls

and handler registrations.

Also add a unit test for alternate-type serialization/deserialization.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290223 91177308-0d34-0410-b5e6-96231b3b80d8
…ly the valid portion of the string

The usual method, and the one employed before my change, of displaying strings in natvis is to make use of the "<variable>,s" format specifier; however, this method only works for null-terminated strings. My fix here is to use the "<pointer>,[size]" format specifier to display a bounded array, and then cast it to "const char*", which in the MSVC debugger has the desired effect of rendering the character array as a string.

Differential Revision: https://reviews.llvm.org/D27972

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290224 91177308-0d34-0410-b5e6-96231b3b80d8
We used to print UNKNOWN instructions when the instruction to be printed was not
yet inserted in any BB: in that case the pretty-printer was not able to
compute a TII, as the instruction did not belong to any BB or function yet.
This patch explicitly passes the TII to the pretty-printer.

Differential Revision: https://reviews.llvm.org/D27645

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290228 91177308-0d34-0410-b5e6-96231b3b80d8
… the right

namespace.

r290226 was a think-o - just qualifying the name doesn't count.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290230 91177308-0d34-0410-b5e6-96231b3b80d8
Reviewers: kbarton, iteratee, hfinkel, echristo

Subscribers: mehdi_amini, llvm-commits

Differential Revision: https://reviews.llvm.org/D27934

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290232 91177308-0d34-0410-b5e6-96231b3b80d8
There is no need to test the pretty printer. Remove the bogus test to make the
build bots happy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290234 91177308-0d34-0410-b5e6-96231b3b80d8
…opt -loop-distribute

In r267672, where the loop distribution pragma was introduced, I tried hard to
keep the old behavior for opt: when opt is invoked with -loop-distribute, it
should distribute the loop (it's off by default when run via the optimization
pipeline).

As MichaelZ has discovered, this has the unintended consequence of breaking a
very common developer workflow for reproducing compilations with opt: first you
print clang's pass pipeline with -debug-pass=Arguments and then invoke opt with
the returned arguments.

clang -debug-pass will include -loop-distribute, but the pass is invoked with
default=off, so nothing happens unless the loop carries the pragma, while
through opt (default=on) we would try to distribute all loops.

This changes opt's default to off as well to match clang.  The tests are
modified to explicitly enable the transformation.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290235 91177308-0d34-0410-b5e6-96231b3b80d8
The vectorcall calling convention specifies that arguments to functions are to be passed in registers, when possible.
vectorcall uses more registers for arguments than fastcall or the default x64 calling convention use. 
The vectorcall calling convention is only supported in native code on x86 and x64 processors that include Streaming SIMD Extensions 2 (SSE2) and above.

The current implementation does not handle Homogeneous Vector Aggregates (HVAs) correctly, and this review attempts to fix that.
This submission also includes additional lit tests to better cover HVA corner cases.
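
For context (an illustration, not taken from the patch): under vectorcall, a Homogeneous Vector Aggregate is a composite of up to four members of one identical vector type, and such an aggregate is eligible to be passed entirely in SIMD registers:

```cpp
// Illustrative HVA under the vectorcall convention (MSVC / clang-cl, x86/x64
// with SSE2+): four members of one identical vector type form a Homogeneous
// Vector Aggregate, which may be passed entirely in XMM registers.
#include <immintrin.h>

struct HVA4 {
  __m128 A, B, C, D; // four identical vector members -> HVA
};

__m128 __vectorcall SumHVA(HVA4 V) {
  return _mm_add_ps(_mm_add_ps(V.A, V.B), _mm_add_ps(V.C, V.D));
}
```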

Differential Revision: https://reviews.llvm.org/D27392



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290240 91177308-0d34-0410-b5e6-96231b3b80d8
I added an API for creating a target-specific memory node in the DAG. Today, all memory nodes are common to all targets and their constructors are located in SelectionDAG.cpp.
There are some cases in X86 where we need to create a special node, such as a truncation-with-saturation store or a float-to-half store.
In the current patch I added truncation-with-saturation nodes and I'm using them for intrinsics. In the future I plan to implement DAG lowering for the truncation-with-saturation pattern.

Differential Revision: https://reviews.llvm.org/D27899



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290250 91177308-0d34-0410-b5e6-96231b3b80d8
@kzhuravl kzhuravl merged commit 4010c7c into ROCm:amd-common Dec 21, 2016
kzhuravl pushed a commit that referenced this pull request Mar 17, 2018
Optionally allow the order of restoring the callee-saved registers in the
epilogue to be reversed.

The flag -reverse-csr-restore-seq generates the following code:

```
stp     x26, x25, [sp, #-64]!
stp     x24, x23, [sp, #16]
stp     x22, x21, [sp, #32]
stp     x20, x19, [sp, #48]

; [..]

ldp     x24, x23, [sp, #16]
ldp     x22, x21, [sp, #32]
ldp     x20, x19, [sp, #48]
ldp     x26, x25, [sp], #64
ret
```

Note how the CSRs are restored in the same order as they are saved.

One exception to this rule is the last `ldp`, which allows us to merge
the stack adjustment and the ldp into a post-index ldp. This is done by
first generating:

```
ldp     x26, x25, [sp]
add     sp, sp, #64
```

which gets merged by the arm64 load store optimizer into

```
ldp     x26, x25, [sp], #64
```

The flag is disabled by default.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327569 91177308-0d34-0410-b5e6-96231b3b80d8
kzhuravl pushed a commit that referenced this pull request Jul 20, 2019
Summary:
According to the new Armv8-M specification
https://static.docs.arm.com/ddi0553/bh/DDI0553B_h_armv8m_arm.pdf the
instructions SQRSHRL and UQRSHLL now have an additional immediate
operand <saturate>. The new assembly syntax is:

SQRSHRL<c> RdaLo, RdaHi, #<saturate>, Rm
UQRSHLL<c> RdaLo, RdaHi, #<saturate>, Rm

where <saturate> can be either 64 (the existing behavior) or 48, in which
case the result is saturated to 48 bits.

The new operand is encoded as follows:
  #64 Encoded as sat = 0
  #48 Encoded as sat = 1
sat is bit 7 of the instruction bit pattern.

This patch adds a new assembler operand class MveSaturateOperand which
implements parsing and encoding. Decoding is implemented in
DecodeMVEOverlappingLongShift.

Reviewers: ostannard, simon_tatham, t.p.northover, samparker, dmgreen, SjoerdMeijer

Reviewed By: simon_tatham

Subscribers: javed.absar, kristof.beyls, hiraditya, pbarrio, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64810

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@366555 91177308-0d34-0410-b5e6-96231b3b80d8