Conversation
If the UnwindPlan did not identify how to unwind the stack pointer register, LLDB currently assumes it can determine the caller's SP from the current frame's CFA. This is true on most platforms, where the CFA is by definition equal to the incoming SP at function entry. However, on the s390x target, we instead define the CFA to equal the incoming SP plus an offset of 160 bytes. This is because our ABI requires the caller to provide a register save area of 160 bytes. This area is allocated by the caller, but is considered part of the callee's stack frame, and therefore the CFA is defined as pointing to the top of this area.

To make this work on s390x, this patch introduces a new ABI callback, GetFallbackRegisterLocation, that provides platform-specific fallback register locations for unwinding. The existing code to handle SP unwinding as well as volatile registers is moved into the default implementation of that ABI callback, so that targets where that implementation is incorrect can override it.

This patch in itself is a no-op for all existing platforms, but it is a prerequisite for adding s390x support.

Differential Revision: http://reviews.llvm.org/D18977

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266307 91177308-0d34-0410-b5e6-96231b3b80d8
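The fallback computation being moved into the ABI callback can be sketched in a few lines (an illustrative Python model, not LLDB code; the 160-byte figure is the s390x register save area size quoted above, and the table entries are examples):

```python
# On most targets the CFA equals the caller's SP at the call site, so the
# caller's SP can be recovered directly from the CFA.  On s390x the CFA is
# defined as the incoming SP plus the 160-byte caller-allocated register
# save area, so a per-target offset is needed.

CFA_TO_ENTRY_SP_OFFSET = {
    "x86_64": 0,    # CFA == incoming SP at function entry
    "s390x": 160,   # CFA == incoming SP + 160-byte register save area
}

def caller_sp_from_cfa(cfa, arch):
    """Fallback location of the caller's SP, derived from the CFA."""
    return cfa - CFA_TO_ENTRY_SP_OFFSET[arch]

print(hex(caller_sp_from_cfa(0x7FFF0000, "x86_64")))  # 0x7fff0000
print(hex(caller_sp_from_cfa(0x7FFF0000, "s390x")))   # 0x7ffeff60
```

The generic assumption (offset 0) is exactly what the pre-patch code hard-coded; the callback makes the offset a target decision.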
This patch adds support for Linux on SystemZ:

- A new ArchSpec value of eCore_s390x_generic
- A new directory Plugins/ABI/SysV-s390x providing an ABI implementation
- Register context support
- Native Linux support including watchpoint support
- ELF core file support
- Misc. support throughout the code base (e.g. breakpoint opcodes)
- Test case updates to support the platform

This should provide complete support for debugging the SystemZ platform. Not yet supported are optional features like transaction support (zEC12) or SIMD vector support (z13). There is no instruction emulation, since our ABI requires that all code provide correct DWARF CFI at all PC locations in .eh_frame to support unwinding (i.e. -fasynchronous-unwind-tables is on by default).

The implementation follows existing platforms in a mostly straightforward manner. A couple of things that are different:

- We do not use PTRACE_PEEKUSER / PTRACE_POKEUSER to access single registers, since some registers (access registers) reside at offsets in the user area that are multiples of 4, but the PTRACE_PEEKUSER interface only allows accessing aligned 8-byte blocks in the user area. Instead, we use the s390-specific ptrace interface PTRACE_PEEKUSR_AREA / PTRACE_POKEUSR_AREA, which allows accessing a whole block of the user area in one go, in effect allowing us to treat parts of the user area as register sets.

- SystemZ hardware does not provide any means to implement read watchpoints, only write watchpoints. In fact, we can only support a *single* write watchpoint (but this can span a range of arbitrary size). In LLDB this means we support only a single watchpoint. I've set all test cases that require read watchpoints (or multiple watchpoints) to expected failure on the platform. [Note that there were two test cases that install a read/write watchpoint even though they nowhere rely on the "read" property. I've changed those to simply use plain write watchpoints.]

Differential Revision: http://reviews.llvm.org/D18978

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266308 91177308-0d34-0410-b5e6-96231b3b80d8
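The block-access approach can be modeled like this (a hypothetical register layout for illustration only — the real s390 user-area offsets differ):

```python
import struct

# PTRACE_PEEKUSR_AREA returns a whole block of the user area at once.
# Once the block is in hand, individual registers can be sliced out at
# arbitrary offsets -- including the 4-byte-aligned access registers
# that PTRACE_PEEKUSER's 8-byte-aligned reads could not address.

# Hypothetical layout: 16 8-byte GPRs followed by 16 4-byte access regs.
def parse_user_area(block):
    gprs = struct.unpack_from(">16Q", block, 0)       # big-endian, 8-byte
    acrs = struct.unpack_from(">16I", block, 16 * 8)  # only 4-byte aligned
    return gprs, acrs

# Simulated block as ptrace would return it:
block = struct.pack(">16Q", *range(16)) + struct.pack(">16I", *range(100, 116))
gprs, acrs = parse_user_area(block)
print(gprs[2], acrs[0])  # -> 2 100
```

Treating the block as a register set this way is the effect the commit describes; the actual transfer uses `struct ptrace_area` in kernel headers.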
This fixes several test case failures on s390x caused by the fact that on this platform, the default "char" type is unsigned.

- In ClangASTContext::GetBuiltinTypeForEncodingAndBitSize we should return an explicit *signed* char type for encoding eEncodingSint and bit size 8, instead of the default platform char type (which may be unsigned). This fix matches existing code in ClangASTContext::GetIntTypeFromBitSize, and fixes the TestClangASTContext.TestBuiltinTypeForEncodingAndBitSize unit test case.

- The test/expression_command/char/TestExprsChar.py test case is known to fail on platforms defaulting to unsigned char (pr23069), and just needs to be xfailed on s390x like on arm.

- The test/functionalities/watchpoint/watchpoint_on_vectors/main.c test case defines a vector of "char" and implicitly assumes it to be signed. Use an explicit "signed char" instead.

Differential Revision: http://reviews.llvm.org/D18979

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266309 91177308-0d34-0410-b5e6-96231b3b80d8
Scalar::GetBytes provides non-const access to the underlying bytes of the scalar value, supposedly allowing modification of those bytes. However, even with the current implementation, this is not really possible. For floating-point scalars, the pointer returned by GetBytes refers to a temporary copy; modifications to that copy will simply be ignored. For integer scalars, the pointer refers to internal memory of the APInt implementation, which isn't supposed to be directly modifiable; GetBytes simply casts away the const-ness of the pointer. With my upcoming patch to fix Scalar::GetBytes for big-endian systems, this problem is going to get worse, since there we need temporary copies even for some integer scalars. Therefore, this patch makes Scalar::GetBytes const, fixing all those problems.

As a follow-on change, RegisterValue::GetBytes must be made const as well. This in turn means that the way of initializing a RegisterValue by doing a SetType followed by writing to GetBytes no longer works. Instead, I've changed SetValueFromData to do the equivalent of SetType itself, and then re-implemented SetFromMemoryData to work on top of SetValueFromData.

There is still a need for RegisterValue::SetType, since some platform-specific code uses it to reinterpret the contents of an already filled RegisterValue. To make this usage work in all cases (even changing from a type implemented via Scalar to a type implemented as a byte buffer), SetType now simply copies the old contents out, and then reloads the RegisterValue from this data using the new type via SetValueFromData. This in turn means that there is no remaining caller of Scalar::SetType, so it can be removed.

The only other follow-on change was in the MIPS EmulateInstruction code, where some uses of RegisterValue::GetBytes could be made const trivially.

Differential Revision: http://reviews.llvm.org/D18980

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266310 91177308-0d34-0410-b5e6-96231b3b80d8
The Scalar implementation and a few other places in LLDB directly access the internal implementation of APInt values using the getRawData method. Unfortunately, pretty much all of these places do not handle big-endian systems correctly. While on little-endian machines, the pointer returned by getRawData can simply be used as a pointer to the integer value in its natural format, no matter what size, this is not true on big-endian systems: getRawData actually points to an array of type uint64_t, with the first element of the array always containing the least-significant word of the integer. This means that if the bit size of that integer is smaller than 64, we need to add an offset to the pointer returned by getRawData in order to access the value in its natural type, and if the bit size is *larger* than 64, we actually have to swap the constituent words before we can access the value in its natural type.

This patch fixes every incorrect use of getRawData in the code base. For the most part, this is done by simply removing uses of getRawData in the first place, and using other APInt member functions to operate on the integer data. This can be done in many member functions of Scalar itself, as well as in Symbol/Type.h and in IRInterpreter::Interpret. For the latter, I've had to add a Scalar::MakeUnsigned routine to parallel the existing Scalar::MakeSigned, e.g. in order to implement an unsigned divide.

The Scalar::RawUInt, Scalar::RawULong, and Scalar::RawULongLong routines were already unused and can simply be removed. I've also removed the Scalar::GetRawBits64 function and its few users.

The one remaining user of getRawData in Scalar.cpp is GetBytes. I've implemented all the cases described above to correctly implement access to the underlying integer data on big-endian systems. GetData now simply calls GetBytes instead of reimplementing its contents.

Finally, two places in the clang interface code were also accessing APInt.getRawData in order to construct a byte representation of an integer. I've changed those to make use of a Scalar instead, to avoid having to re-implement the logic there.

The patch also adds a couple of unit tests verifying correct operation of the GetBytes routine as well as the conversion routines. Those tests actually exposed more problems in the Scalar code: the SetValueFromData routine didn't work correctly for 128- and 256-bit data types, and the SChar routine should have an explicit "signed char" return type to work correctly on platforms where char defaults to unsigned.

Differential Revision: http://reviews.llvm.org/D18981

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266311 91177308-0d34-0410-b5e6-96231b3b80d8
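The word-order adjustment described above can be sketched as follows (illustrative Python mirroring the byte arithmetic, not the actual APInt code):

```python
import struct

def natural_bytes_from_raw_words(words, bit_size, big_endian):
    """Recover the byte image of an integer in its natural in-memory
    format from an APInt-style raw word array (least-significant
    64-bit word first, independent of host byte order)."""
    byte_size = bit_size // 8
    if not big_endian:
        # Little-endian host: the raw words already form the natural image.
        return b"".join(struct.pack("<Q", w) for w in words)[:byte_size]
    if bit_size <= 64:
        # Big-endian, type <= 64 bits: the value sits in the *low* bytes
        # of the 8-byte word, i.e. at an offset from the word's start.
        return struct.pack(">Q", words[0])[8 - byte_size:]
    # Big-endian, type > 64 bits: the constituent words must be swapped
    # so the most-significant word comes first.
    return b"".join(struct.pack(">Q", w) for w in reversed(words))

print(natural_bytes_from_raw_words([0x11223344], 32, True).hex())   # 11223344
print(natural_bytes_from_raw_words([0x11223344], 32, False).hex())  # 44332211
```

The two big-endian branches are precisely the "add an offset" and "swap the constituent words" cases the commit message describes.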
Currently, the DataExtractor::GetMaxU64Bitfield and GetMaxS64Bitfield routines assume the incoming "bitfield_bit_offset" parameter uses little-endian bit numbering, i.e. a bitfield_bit_offset of 0 refers to a bitfield whose least-significant bit coincides with the least-significant bit of the surrounding integer.

On many big-endian systems, however, big-endian bit numbering is used for bit fields. Here, a bitfield_bit_offset of 0 refers to a bitfield whose most-significant bit coincides with the most-significant bit of the surrounding integer.

Now, in principle LLDB could arbitrarily choose which semantics of bitfield_bit_offset to use. However, there are two problems with the current approach:

- When parsing DWARF, LLDB decodes bit offsets in little-endian bit numbering on LE systems, but in big-endian bit numbering on BE systems. Passing those offsets later on into the DataExtractor routines gives incorrect results on BE.

- In the interim, LLDB's type layer combines byte and bit offsets into a single number. I.e. instead of recording bitfields by specifying the byte offset and byte size of the surrounding integer *plus* the bit offset of the bit field within that field, it simply records a single bit offset number.

Note that converting from byte offset + bit offset to a single offset value and back is well-defined only if we use either little-endian byte order *and* little-endian bit numbering, or big-endian byte order *and* big-endian bit numbering. Any other combination will yield incorrect results.

Therefore, the simplest approach would seem to be to always use the bit numbering that matches the system byte order. This makes storing a single bit offset valid, and makes the existing DWARF code correct. The only place to fix is to teach DataExtractor to use big-endian bit numbering on big-endian systems.

However, there is one additional caveat: we also get bit offsets from LLDB synthetic bitfields. While the exact semantics of those doesn't seem to be well-defined, from test cases it appears that the intent was for the user-provided synthetic bitfield offset to always use little-endian bit numbering. Therefore, on a big-endian system we now have to convert those to big-endian bit numbering to remain consistent.

Differential Revision: http://reviews.llvm.org/D18982

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266312 91177308-0d34-0410-b5e6-96231b3b80d8
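The conversion between the two bit-numbering schemes is a simple involution; a minimal sketch:

```python
def convert_bit_offset(bit_offset, bit_size, total_bits):
    """Map a bitfield offset between little-endian bit numbering
    (offset 0 = field's LSB at the integer's LSB) and big-endian
    bit numbering (offset 0 = field's MSB at the integer's MSB).
    The same formula converts in either direction."""
    return total_bits - bit_size - bit_offset

# A 4-bit field at LE offset 0 inside a 32-bit integer sits at
# BE offset 28, and converting twice round-trips:
print(convert_bit_offset(0, 4, 32))   # 28
print(convert_bit_offset(28, 4, 32))  # 0
```

This is the conversion a big-endian DataExtractor has to apply to the user-provided synthetic bitfield offsets, which remain in little-endian numbering.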
This patch fixes a bunch of issues that show up on big-endian systems:

- The gnu_libstdcpp.py script doesn't follow the way libstdc++ encodes bit vectors: it should identify the enclosing *word* and then access the appropriate bit within that word. Instead, the script simply operates on bytes. This gives the same result on little-endian systems, but not on big-endian.

- lldb_private::formatters::WCharSummaryProvider always assumes wchar_t is UTF16, even though it could also be UTF8 or UTF32. This is mostly not an issue on little-endian systems, but immediately fails on BE. Fixed by checking the size of wchar_t like WCharStringSummaryProvider already does.

- ClangASTContext::GetChildCompilerTypeAtIndex uses uint32_t to access the virtual base offset stored in the vtable, even though the size of this field matches the target pointer size according to the C++ ABI. Again, this is mostly not visible on LE, but fails on BE.

- Process::ReadStringFromMemory uses strncmp to search for a terminator consisting of multiple zero bytes. This doesn't work since strncmp will stop already at the first zero byte. Use memcmp instead.

Differential Revision: http://reviews.llvm.org/D18983

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266313 91177308-0d34-0410-b5e6-96231b3b80d8
Running the ARM instruction emulation test on a big-endian system would fail, since the code doesn't respect endianness properly.

In EmulateInstructionARM::TestEmulation, the code assumes that an instruction opcode read in from the test file is in target byte order, but it was in fact read in host byte order.

More difficult to fix, the EmulationStateARM structure models the overlapping sregs and dregs by a union in _sd_regs. This only works correctly if the host is a little-endian system. I've removed the union in favor of a simple array containing the 32 sregs, and changed any code accessing dregs to explicitly use the correct two sregs overlaying that dreg in the proper target order.

Also, EmulationStateARM::ReadPseudoMemory and WritePseudoMemory track memory as a map of uint32_t values in host byte order, and implement 64-bit memory accesses by splitting them up into two uint32_t ones. However, callers expect memory contents to be provided in the form of a byte array (in target byte order). This means the uint32_t contents need to be byte-swapped on BE systems, and when splitting up a 64-bit access into two 32-bit ones, byte order has to be respected.

Differential Revision: http://reviews.llvm.org/D18984

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266314 91177308-0d34-0410-b5e6-96231b3b80d8
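The sreg/dreg fix can be sketched as follows (assuming, per the ARM VFP register model, that s<2n> is the low half of d<n>; the in-memory byte layout then depends only on target endianness, not on a host-order union):

```python
import struct

def dreg_bytes(sregs, n, big_endian):
    """Byte image of d<n>, built explicitly from its two overlaying
    sregs s<2n> and s<2n+1> instead of relying on a host-order union."""
    lo, hi = sregs[2 * n], sregs[2 * n + 1]
    if big_endian:
        return struct.pack(">II", hi, lo)  # high word first in memory
    return struct.pack("<II", lo, hi)      # low word first in memory

sregs = [0x11111111, 0x22222222]
print(dreg_bytes(sregs, 0, False).hex())  # 1111111122222222
print(dreg_bytes(sregs, 0, True).hex())   # 2222222211111111
```

A C union of a uint64_t with two uint32_t halves encodes only the little-endian case; composing the bytes explicitly, as above, works for both.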
A number of test cases were failing on big-endian systems simply due to byte order assumptions in the tests themselves, with no underlying bug in LLDB.

These two test cases:

  tools/lldb-server/lldbgdbserverutils.py
  python_api/process/TestProcessAPI.py

actually check for big-endian target byte order, but contain Python errors in the corresponding code paths.

These test cases:

  functionalities/data-formatter/data-formatter-python-synth/TestDataFormatterPythonSynth.py
  functionalities/data-formatter/data-formatter-smart-array/TestDataFormatterSmartArray.py
  functionalities/data-formatter/synthcapping/TestSyntheticCapping.py
  lang/cpp/frame-var-anon-unions/TestFrameVariableAnonymousUnions.py
  python_api/sbdata/TestSBData.py (first change)

could be fixed to check for big-endian target byte order and update the expected result strings accordingly. For the two synthetic tests, I've also updated the source to make sure the fake_a value is always nonzero on both big- and little-endian platforms.

These test cases:

  python_api/sbdata/TestSBData.py (second change)
  functionalities/memory/cache/TestMemoryCache.py

simply accessed memory with the wrong size, which wasn't noticed on LE but fails on BE.

Differential Revision: http://reviews.llvm.org/D18985

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266315 91177308-0d34-0410-b5e6-96231b3b80d8
Code in ObjectFileELF::ParseTrampolineSymbols assumes that the sh_info field of the .rel(a).plt section identifies the .plt section. However, with recent GNU ld this is no longer true. As a result of this change:

  https://sourceware.org/bugzilla/show_bug.cgi?id=18169

in object files generated with current linkers the sh_info field of .rel(a).plt now points to the .got.plt section (or .got on some targets). This causes LLDB to fail to identify any PLT stubs, causing a number of test case failures.

This patch changes LLDB to simply always look for the .plt section by name. This should be safe across all linkers and targets.

Differential Revision: http://reviews.llvm.org/D18973

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266316 91177308-0d34-0410-b5e6-96231b3b80d8
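The name-based lookup the patch switches to is trivial to model (the section-list shape here is hypothetical, for illustration):

```python
# With modern GNU ld, sh_info of .rel(a).plt may point at .got.plt
# (or .got) rather than .plt, so resolving the PLT via sh_info is
# unreliable.  Looking it up by name works across linkers and targets.

def find_plt(sections):
    """sections: iterable of (name, index) pairs; returns the index
    of the .plt section, or None if absent."""
    for name, index in sections:
        if name == ".plt":
            return index
    return None

sections = [(".text", 1), (".rela.plt", 2), (".got.plt", 3), (".plt", 4)]
print(find_plt(sections))  # 4
```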
Try to get 32-bit build bots running again. git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266341 91177308-0d34-0410-b5e6-96231b3b80d8
This seems to hang on non-s390x hosts. Disable for now to get the build bots going again. git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266343 91177308-0d34-0410-b5e6-96231b3b80d8
CreateChildAtOffset needs a byte offset, not an element number. git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266352 91177308-0d34-0410-b5e6-96231b3b80d8
This routine contained a stray "return false;" making part of the code never executed. Also, the stack offset at which on-stack arguments are found was incorrect. git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266417 91177308-0d34-0410-b5e6-96231b3b80d8
Obvious fix for incorrect use of GetU64 offset pointer. Originally committed as part of (now reverted) r266311. git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266418 91177308-0d34-0410-b5e6-96231b3b80d8
Obvious fix for incorrect result types of the operation. Originally committed as part of (now reverted) r266311. git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266419 91177308-0d34-0410-b5e6-96231b3b80d8
This is needed for platforms where the default "char" type is unsigned. Originally committed as part of (now reverted) r266311. git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266420 91177308-0d34-0410-b5e6-96231b3b80d8
Recommit modified version of r266311 including build bot regression fix. This differs from the original r266311 by:

- Fixing Scalar::Promote to correctly zero- or sign-extend the value depending on the signedness of the *source* type, not the target type.
- Omitting a few stand-alone fixes that were already committed separately.

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266422 91177308-0d34-0410-b5e6-96231b3b80d8
RegisterContextLLDB::InitializeNonZerothFrame already has code to attempt to detect and handle the case where the PC points beyond the end of a function, but there are certain cases where this doesn't work correctly. In fact, there are *two* different places where this detection is attempted, and the failure is a result of an unfortunate interaction between those two separate attempts.

First, the ResolveSymbolContextForAddress routine is called with the resolve_tail_call_address flag set to true. This causes the routine to internally accept a PC pointing beyond the end of a function, and still resolve the PC to that function symbol.

Second, the InitializeNonZerothFrame routine itself maintains a "decr_pc_and_recompute_addr_range" flag and, if that turns out to be true, itself decrements the PC by one and searches again for a symbol at that new PC value.

Both approaches correctly identify the symbol associated with the PC. However, the problem is that later on, we also need to find the DWARF CFI record associated with the PC. This is done in the RegisterContextLLDB::GetFullUnwindPlanForFrame routine, and uses the "m_current_offset_backed_up_one" member variable. However, that variable only actually contains the PC "backed up by one" if the *second* approach above was taken. If the function was already identified via the first approach above, that member variable is *not* backed up by one but simply points to the original PC. This in turn causes GetEHFrameUnwindPlan to fail to identify the DWARF CFI record associated with the PC.

Now, in many cases, if the first method had to back up the PC by one, we *still* use the second method too, because of this piece of code:

    // Or if we're in the middle of the stack (and not "above" an asynchronous
    // event like sigtramp), and our "current" pc is the start of a function...
    if (m_sym_ctx_valid &&
        GetNextFrame()->m_frame_type != eTrapHandlerFrame &&
        GetNextFrame()->m_frame_type != eDebuggerFrame &&
        addr_range.GetBaseAddress().IsValid() &&
        addr_range.GetBaseAddress().GetSection() == m_current_pc.GetSection() &&
        addr_range.GetBaseAddress().GetOffset() == m_current_pc.GetOffset())
    {
        decr_pc_and_recompute_addr_range = true;
    }

In many cases, when the PC is one beyond the end of the current function, it will indeed then be exactly at the start of the next function. But this is not always the case, e.g. if there happens to be alignment padding between the end of one function and the start of the next. In those cases, we may successfully look up the function symbol via ResolveSymbolContextForAddress, but *not* set decr_pc_and_recompute_addr_range, and therefore fail to find the correct DWARF CFI record.

A very simple fix for this problem is to just never use the first method. Call ResolveSymbolContextForAddress with resolve_tail_call_address set to false, which will cause it to fail if the PC is beyond the end of the current function; or else, identify the next function if the PC is also at the start of the next function. In either case, we will then set the decr_pc_and_recompute_addr_range variable and back up the PC anyway, but this time also find the correct DWARF CFI.

A related problem is that ResolveSymbolContextForAddress sometimes returns a "symbol" with an empty name. This turns out to be an ELF section symbol. Now, usually those get type eSymbolTypeInvalid. However, there is code in ObjectFileELF::ParseSymbols that tries to change the type of invalid symbols to eSymbolTypeCode or eSymbolTypeData if the symbol lies within the code or data section. Unfortunately, this check also hits the symbol for the code section itself, which is then marked as eSymbolTypeCode. While the size of the section symbol is 0 according to the ELF file, LLDB considers this size invalid and attempts to figure out the "correct" size. Depending on how this goes, we may end up with a symbol that overlays part of the code section, even outside areas covered by real function symbols. Therefore, if we call ResolveSymbolContextForAddress with the PC pointing beyond the end of a function, we may get this bogus section symbol. This again means InitializeNonZerothFrame thinks we have a valid PC, but then we don't find any unwind info for it.

The fix for this problem is to simply always leave ELF section symbols as type eSymbolTypeInvalid.

Differential Revision: http://reviews.llvm.org/D18975

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@267363 91177308-0d34-0410-b5e6-96231b3b80d8
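The net effect of the simple fix — always backing up the PC before symbol and CFI lookup on non-zeroth frames — can be sketched as:

```python
def address_for_unwind_lookup(pc, is_frame_zero):
    """For non-zeroth frames the PC is a return address, which may
    point one past the end of the calling function (or into alignment
    padding between functions).  Backing it up by one before both the
    symbol lookup and the DWARF CFI lookup keeps the two lookups
    consistent and inside the correct function."""
    return pc if is_frame_zero else pc - 1

print(hex(address_for_unwind_lookup(0x10000, True)))   # 0x10000
print(hex(address_for_unwind_lookup(0x10000, False)))  # 0xffff
```

The bug described above was precisely that the symbol lookup sometimes used the unadjusted address while the CFI lookup used the backed-up one.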
git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@268520 91177308-0d34-0410-b5e6-96231b3b80d8
@swift-ci Please test

Yes, I've seen those. We are keeping the swift-lldb/master-next refreshed every few weeks with new merges from upstream LLDB. Those happened to be handled up there. I'll have a look at this change. (I also kicked off a PR test on our CI.)

@bryanpkc, it looks like these tests are failing on Ubuntu x86_64 with this change:

ERROR: [EXCEPTIONAL EXIT 6 (SIGABRT)] test_with_dwarf (lang/swift/variables/generic_struct_debug_info/generic_array/TestSwiftGenericStructDebugInfoGenericArray.py)

Still waiting on OS X.
The OS X Xcode build project also needs a few modifications to bring in some new files:
I can handle that part (the OS X Xcode build fix-up) once the broken tests on Ubuntu are addressed.
… of the same size.

Summary: One of the cases handled by ValueObjectChild::UpdateValue() uses the entire width of the parent's scalar value as the size of the child, and extracts the child by calling Scalar::ExtractBitfield(). This seems valid, but APInt::trunc(), APInt::sext() and APInt::zext() assert that the bit field must not have the same size as the parent scalar. Replacing those calls with sextOrTrunc(), zextOrTrunc(), sextOrSelf() and zextOrSelf() fixes the assertion failures.

Reviewers: uweigand, labath

Subscribers: labath, lldb-commits

Differential Revision: http://reviews.llvm.org/D20355

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@270062 91177308-0d34-0410-b5e6-96231b3b80d8
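The full-width case can be illustrated in Python (a model of the zextOrTrunc-style behavior, not the APInt code itself):

```python
def extract_bitfield(value, total_bits, bit_size, bit_offset):
    """Extract a bitfield from an unsigned integer.  The case
    bit_size == total_bits (child as wide as the parent scalar) is the
    one that tripped the APInt::trunc/zext asserts; the zextOrTrunc-style
    behavior simply returns the value unchanged in that case."""
    if bit_size == total_bits and bit_offset == 0:
        return value & ((1 << total_bits) - 1)  # full width: identity
    return (value >> bit_offset) & ((1 << bit_size) - 1)

print(extract_bitfield(0xDEADBEEF, 32, 32, 0) == 0xDEADBEEF)  # True
print(hex(extract_bitfield(0xDEADBEEF, 32, 8, 8)))            # 0xbe
```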
I have committed an LLDB change upstream to fix the assertion failures.
@swift-ci please test |
@tfiala The failures don't seem related to the most recent LLDB change that I made. I can't reproduce the Linux failure here.

@bryanpkc, it looks like the swift that was synched did not yet pick up a fix earlier today for the broken linkage we saw there. The first build that had the fix on our master-branch Ubuntu builders was: We'll kick off another build. Meanwhile, I'll get you a PR for your set that will fix the OS X side. That's just going to be a minor project change. Then we can see if anything else is broken on the OS X side behind that.
@swift-ci Please test Linux platform |
The Linux build came back clean now. I'll get you the OS X changes in the morning so we can clear that side and get this in.
* Update README URLs based on HTTP redirects
* Update README template URLs based on HTTP redirects
…ssfully (as in, free of any errors and/or warnings)
…ormation on iOS devices

The __ENVIRONMENT_MAC_OS_X_VERSION_MIN_REQUIRED macro is only defined on OS X, so the check as written compiled the code out for iOS. The right thing to do is compile the code out for older OS X versions, but leave iOS alone.

rdar://26333564

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@270004 91177308-0d34-0410-b5e6-96231b3b80d8
<rdar://problem/26356975>
…ting.

The error was not getting propagated to the caller, so the higher layers thought the breakpoint was successfully set and resolved.

I added a testcase, but it assumes 0x0 is not a valid place to set a breakpoint. On most systems that is true, but if it isn't true of your system, either find another good place and add it to the test, or x-fail the test.

<rdar://problem/26345962>

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@270014 91177308-0d34-0410-b5e6-96231b3b80d8 (cherry picked from commit 34483bb)
Raw commands look for " -- " as the separator between their options and the rest of the argument string. The trailing space was missing in the repl alias, causing it to attempt to run a bad expression. <rdar://problem/25986155>
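The separator handling can be sketched as follows (a simplified model of raw-command splitting, not LLDB's actual parser):

```python
def split_raw_command(args):
    """Split a raw command's argument string at the first ' -- '
    (note the trailing space).  Without it, a separator match could
    fire inside the expression text itself -- the bug the repl alias
    hit -- and hand a truncated expression to the evaluator."""
    sep = " -- "
    if sep in args:
        opts, _, rest = args.partition(sep)
        return opts, rest
    return args, ""

print(split_raw_command("-o 1 -- print(x)"))  # ('-o 1', 'print(x)')
print(split_raw_command("print(x)"))          # ('print(x)', '')
```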
…tepping through line 0 code". That's good 'cause it means all the different kinds of source line stepping won't leave the user in the middle of compiler implementation code or code inlined from odd places, etc. But it turns out that the compiler also marks functions it MIGHT inline as all being of line 0. That would mean we single-step through this code instead of just stepping out. That is both inefficient and more error prone, 'cause these little nuggets tend to be bits of hand-written assembly and the like and are hard to step through.

This change just checks and, if the entire function is marked with line 0, we step out rather than step through. Also un-skip the TestSwiftStepping test that showed the need for this change.

<rdar://problem/25966460>

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@268823 91177308-0d34-0410-b5e6-96231b3b80d8 (cherry picked from commit 1b47ae1)
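The decision this change adds can be modeled minimally as:

```python
def should_step_out(function_line_entries):
    """If *every* line entry in the function is the compiler's
    artificial line 0, step out instead of single-stepping through
    what is likely hand-written assembly or a might-be-inlined nugget.
    A function with any real line entry is still stepped through."""
    return all(line == 0 for line in function_line_entries)

print(should_step_out([0, 0, 0]))   # True: whole function is line 0
print(should_step_out([0, 12, 0]))  # False: has a real source line
```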
before comparing the value of it. <rdar://problem/26333564> git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@270015 91177308-0d34-0410-b5e6-96231b3b80d8
values for the pc or return address register.

On iOS with arm64 and a binary that has multiple functions without individual symbol boundaries, we end up with an assembly profile unwind plan that says lr=<same> - that is, the link register contents are unmodified from the caller's value. This gets the unwinder in a loop.

When we're off the 0th frame, we never want to look to a caller for a pc or return-address register value. Add checks to ReadGPRValue and ReadRegister to prevent both the pc and ra register values from recursing.

If this causes problems with backtraces on android, let me know or back it out and I'll look into it -- but I think these are straightforward and don't expect problems.

<rdar://problem/24610365>

git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@270162 91177308-0d34-0410-b5e6-96231b3b80d8
@tfiala Thanks!
@bryanpkc, it looks like you just need to pull in this change as well:
Can you grab r266361 and add it to your PR? I started getting it ready but realized it must have already been fixed upstream for the Xcode project.

(If you end up having trouble merging the Xcode file, which isn't always the easiest thing to merge, let me know. I'm in the process of merging LLVM.org lldb svn trunk into the swift-lldb/master-next branch, which at some point in the future will move to swift-lldb/master. If you have trouble with the Xcode merge, we likely can do a simple cherry-pick from master-next once I'm done with that. Might not be until early next week though, so if you can try to merge it, that might be faster).

@bryanpkc - nm, I'll merge it for you and get it to you. You won't have an easy way to check it, and it is probably not going to be an easy merge since we're a little out of sync between LLVM.org and GitHub swift-lldb w/r/t that particular file. Working on this now...
git-svn-id: https://llvm.org/svn/llvm-project/lldb/trunk@266361 91177308-0d34-0410-b5e6-96231b3b80d8 (cherry picked from commit 464d3f2) tweaked.
@bryanpkc, I sent a PR your way to pull into your branch, that should get Xcode building here: Once you have that in here, we can get the CI testing OS X again.
Xcode project changes for Linux s390x port.
@swift-ci Please test OS X platform |
I'm going to pull this in. The OS X side had two tests failing, but those have also been failing on master after this change:

@tfiala Todd, thank you so much for the extra effort on your part to get this in!

My pleasure :-)
Make autogen.sh executable Signed-off-by: Daniel A. Steffen <dsteffen@apple.com>
We have ported Swift to Linux on IBM z Systems. Please cherry-pick these commits from upstream LLDB to enable LLDB/REPL to work on the SystemZ target.
There are a couple of conflicts in source/Core/Scalar.cpp, which are resolved by using the current working solution from upstream LLDB. The uses of Scalar::APIntWithTypeAndValue() and getRawData() in the original code were not correct for big-endian systems.