GDB 12.1 ARC patches #3
Merged
Conversation
To be merged as part of a separate ARC support PR on the sdk-ng side.
Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit 5bf4923c81ccd45c17a21ff26f84794c941e7cd2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
The previous "arc*-..." pattern was very permissive. Now there are only the "arc-..." and "arceb-..." patterns.

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit a68dbbe32546691e3a6c6205bc75bc908abb2880.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Introduce an "ARC_ISA_NONE = 0" entry in the "arc_isa" enum to represent an invalid value. Not that it really matters, but this tweak does not alter the other enum values.

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit 89c1bcc9762228eb523bc1aba16468b81c9a16c3.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
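For illustration, here is a minimal sketch of the kind of enum change described above; the non-NONE enumerator names are placeholders (assumptions), not necessarily the actual contents of gdb's arc_isa:

```cpp
/* Sketch only: an explicit "invalid" enumerator added at value 0.
   The other members keep their explicit values, so adding ARC_ISA_NONE
   does not alter them (their names here are placeholders).  */
enum arc_isa
{
  ARC_ISA_NONE = 0,   /* Invalid/uninitialized ISA.  */
  ARC_ISA_ARCV1 = 1,  /* Placeholder for an existing ISA value.  */
  ARC_ISA_ARCV2 = 2,  /* Placeholder for another existing ISA value.  */
};
```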
This is the arc64 gdb suited for debugging bare-metal programs served through a gdbstub (OpenOCD, QEMU, etc.).

To build:

$ configure --target=arc64-elf \
            --with-pkgversion="arc64 baremetal" \
            --with-endian=little \
            --enable-languages=c,c++ \
            --prefix=/path/to/install \
            --enable-shared \
            --with-gnu-as \
            --with-gnu-ld \
            --without-newlib \
            --disable-libgomp \
            --disable-ld \
            --disable-gas \
            --disable-binutils

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit daad3f4cd34164c9e8ae7503476129cc8dfb58c8.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
The newly introduced arc64-newlib-tdep.c file uses bfd_arch_arc64 and the correct offset for the PC in the jump buffer (17). Moreover, arc64-tdep.c also uses bfd_arch_arc64 now.

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit 607e804edb8f3eb80927fd0a7d776a597536ae3f.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
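As a rough illustration of how such a jump-buffer offset gets used, here is a standalone sketch (not the actual arc64-newlib-tdep.c code); it assumes the value 17 is a word-slot index and that arc64 registers are 64-bit:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Slot index of the saved PC inside the newlib jmp_buf, per the commit above.
constexpr std::size_t ARC64_JB_PC_SLOT = 17;

// Given the jmp_buf contents as an array of 64-bit words, return the address
// longjmp would resume at.  In gdb this would go through target memory reads;
// a plain array keeps the sketch self-contained.
uint64_t arc64_longjmp_target (const uint64_t *jmp_buf_words)
{
  return jmp_buf_words[ARC64_JB_PC_SLOT];
}

int main ()
{
  uint64_t fake_jmp_buf[32] = {};
  fake_jmp_buf[ARC64_JB_PC_SLOT] = 0x80001234;  // pretend saved PC
  std::printf ("longjmp target: 0x%llx\n",
               (unsigned long long) arc64_longjmp_target (fake_jmp_buf));
  return 0;
}
```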
This is the cross-platform gdb suited for debugging arc64 Linux programs served with the arc64 gdbserver.

To build:

$ configure --target=arc64-linux \
            --with-pkgversion="arc64 linux gnu" \
            --with-endian=little \
            --enable-languages=c,c++ \
            --prefix=/path/to/install \
            --enable-shared \
            --with-gnu-as \
            --with-gnu-ld \
            --without-newlib \
            --disable-libgomp \
            --disable-ld \
            --disable-gas \
            --disable-binutils

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit a38c3d86b9e2142e3ffb2133074991ae34b0666a.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This gdbserver is suited for running inside arc64 Linux while serving a cross arc64-linux gdb.

To build:

$ module load <arcv3-toolchain>
$ configure --host=arc64-linux-gnu \
            --prefix=/usr \
            --disable-build-with-cxx \
            --disable-ld \
            --disable-gas \
            --disable-binutils \
            --disable-gdb

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit 11c1d553c9e286118a41345cf2176ca0f309ca0b.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit 8e17b603d06a0c8926fc7d6c196971335d6d8d1f.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This is the cross-platform gdb suited for debugging arc32 Linux programs served with the arc32 gdbserver.

To build:

$ configure --target=arc32-linux \
            --with-pkgversion="arc32 linux gnu" \
            --with-endian=little \
            --enable-languages=c,c++ \
            --prefix=/path/to/install \
            --enable-shared \
            --with-gnu-as \
            --with-gnu-ld \
            --without-newlib \
            --disable-libgomp \
            --disable-ld \
            --disable-gas \
            --disable-binutils

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit 1ad6439ec56b80abf989642dd02123cb3ee724a3.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This gdbserver is suited for running inside arc32 Linux while serving a cross arc32-linux gdb.

To build:

$ module load <arcv3-toolchain>
$ configure --host=arc32-linux-gnu \
            --prefix=/usr \
            --disable-build-with-cxx \
            --disable-ld \
            --disable-gas \
            --disable-binutils \
            --disable-gdb

Cherry-picked from foss-for-synopsys-dwc-arc-processors/binutils-gdb commit c6c9279a302fcbf9a7501093d6202824cc6a57a1.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
stephanosio commented Jul 8, 2022
alpsayin pushed a commit to alpsayin/binutils-gdb that referenced this pull request on Apr 29, 2023
Fedora Rawhide is now using gcc-12.0. As part of updating to the gcc-12.0 package set, Rawhide is also now using a version of libgcc_s which lacks a .data section. This causes gdb to fail in the following fashion while debugging a program (such as gdb) which uses libgcc_s:

(top-gdb) run
Starting program: rawhide-master/bld/gdb/gdb ...
objfiles.h:467: internal-error: sect_index_data not initialized
A problem internal to GDB has been detected,
further debugging may prove unreliable.
...

I snipped the backtrace from the above output. Instead, here's a portion of a backtrace obtained using GDB's backtrace command. (Obviously, in order to obtain it, I used a GDB which has been patched with this commit.)

#0  internal_error (file=0xc6a508 "gdb/objfiles.h", line=467, fmt=0xc6a4e8 "sect_index_data not initialized") at gdbsupport/errors.cc:51
#1  0x00000000005f9651 in objfile::data_section_offset (this=0x4fa48f0) at gdb/objfiles.h:467
#2  0x000000000097c5f8 in relocate_address (address=0x17244, objfile=0x4fa48f0) at gdb/stap-probe.c:1333
#3  0x000000000097c630 in stap_probe::get_relocated_address (this=0xa1a17a0, objfile=0x4fa48f0) at gdb/stap-probe.c:1341
#4  0x00000000004d7025 in create_exception_master_breakpoint_probe (objfile=0x4fa48f0) at gdb/breakpoint.c:3505
#5  0x00000000004d7426 in create_exception_master_breakpoint () at gdb/breakpoint.c:3575
#6  0x00000000004efcc1 in breakpoint_re_set () at gdb/breakpoint.c:13407
#7  0x0000000000956998 in solib_add (pattern=0x0, from_tty=0, readsyms=1) at gdb/solib.c:1001
#8  0x00000000009576a8 in handle_solib_event () at gdb/solib.c:1269
...

The function 'relocate_address' in gdb/stap-probe.c attempts to do its "relocation" by using objfile->data_section_offset(). That method, data_section_offset(), is defined as follows in objfiles.h:

  CORE_ADDR data_section_offset () const
  {
    return section_offsets[SECT_OFF_DATA (this)];
  }

The internal error occurs when the SECT_OFF_DATA macro finds that the 'sect_index_data' field is -1:

  #define SECT_OFF_DATA(objfile) \
    ((objfile->sect_index_data == -1) \
     ? (internal_error (__FILE__, __LINE__, \
                        _("sect_index_data not initialized")), -1) \
     : objfile->sect_index_data)

relocate_address() is obtaining the section offset in order to compute a relocated address. For some ABIs, such as the System V ABI, the section offsets will all be the same. So for those ABIs, it doesn't matter which offset is used. However, other ABIs, such as the FDPIC ABI, will have different offsets for the various sections. Thus, for those ABIs, it is vital that this and other relocation code use the correct offset.

In stap_probe::get_relocated_address, the address to which to add the offset (thus forming the relocated address) is obtained via this->get_address (). get_address is a getter for m_address in probe.h. It's documented/defined as follows (also in probe.h):

  /* The address where the probe is inserted, relative to SECT_OFF_TEXT. */
  CORE_ADDR m_address;

(Thanks to Tom Tromey for this observation.)

So, based on this, the current use of data_section_offset / SECT_OFF_DATA is wrong. This relocation code should have been using text_section_offset / SECT_OFF_TEXT all along. That being the case, I've adjusted the stap-probe.c relocation code accordingly. Searching the sources turned up one other use of data_section_offset, in gdb/dtrace-probe.c, so I've updated that code as well. The same reasoning presented above applies to this case too.

Summary:

* gdb/dtrace-probe.c (dtrace_probe::get_relocated_address): Use
  method text_section_offset instead of data_section_offset.
* gdb/stap-probe.c (relocate_address): Likewise.
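To make the fix concrete, here is a minimal sketch of the corrected relocation described in the summary above (a sketch against gdb's internal objfile API as quoted in the commit message, not the verbatim committed diff):

```cpp
/* Sketch of the corrected helper in gdb/stap-probe.c: probe addresses are
   relative to SECT_OFF_TEXT, so relocate with the text section offset.
   Assumes gdb's internal types (CORE_ADDR, struct objfile) are in scope.  */

static CORE_ADDR
relocate_address (CORE_ADDR address, struct objfile *objfile)
{
  /* Was: return address + objfile->data_section_offset ();  */
  return address + objfile->text_section_offset ();
}
```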
alpsayin pushed a commit to alpsayin/binutils-gdb that referenced this pull request on Apr 29, 2023
g++ 11.1.0 has a bug where it will emit a negative DW_AT_data_member_location in some cases:

$ cat test.cpp
#include <memory>
int main() { std::unique_ptr<int> ptr; }
$ g++ -g test.cpp
$ llvm-dwarfdump -F a.out
...
0x00000964:   DW_TAG_member
                DW_AT_name [DW_FORM_strp]  ("_M_head_impl")
                DW_AT_decl_file [DW_FORM_data1]  ("/usr/include/c++/11.1.0/tuple")
                DW_AT_decl_line [DW_FORM_data1]  (125)
                DW_AT_decl_column [DW_FORM_data1]  (0x27)
                DW_AT_type [DW_FORM_ref4]  (0x0000067a "default_delete<int>")
                DW_AT_data_member_location [DW_FORM_sdata]  (-1)
...

This leads to a GDB crash (when built with ASan, otherwise probably garbage results), since it tries to read just before (to the left, in ASan speak) of the value's buffer:

==888645==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000c52af at pc 0x7f711b239f4b bp 0x7fff356bd470 sp 0x7fff356bcc18
READ of size 1 at 0x6020000c52af thread T0
    #0 0x7f711b239f4a in __interceptor_memcpy /build/gcc/src/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827
    #1 0x555c4977efa1 in value_contents_copy_raw /home/simark/src/binutils-gdb/gdb/value.c:1347
    #2 0x555c497909cd in value_primitive_field(value*, long, int, type*) /home/simark/src/binutils-gdb/gdb/value.c:3126
    #3 0x555c478f2eaa in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:333
    #4 0x555c478f63b2 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:513
    #5 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #6 0x555c478f63b2 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:513
    #7 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #8 0x555c478f63b2 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:513
    #9 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #10 0x555c4760d45f in c_value_print_struct /home/simark/src/binutils-gdb/gdb/c-valprint.c:383
    #11 0x555c4760df4c in c_value_print_inner(value*, ui_file*, int, value_print_options const*) /home/simark/src/binutils-gdb/gdb/c-valprint.c:438
    #12 0x555c483ff9a7 in language_defn::value_print_inner(value*, ui_file*, int, value_print_options const*) const /home/simark/src/binutils-gdb/gdb/language.c:632
    #13 0x555c49758b68 in do_val_print /home/simark/src/binutils-gdb/gdb/valprint.c:1048
    #14 0x555c49759b17 in common_val_print(value*, ui_file*, int, value_print_options const*, language_defn const*) /home/simark/src/binutils-gdb/gdb/valprint.c:1151
    #15 0x555c478f2fcb in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:335
    #16 0x555c478f63b2 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:513
    #17 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #18 0x555c4760d45f in c_value_print_struct /home/simark/src/binutils-gdb/gdb/c-valprint.c:383
    #19 0x555c4760df4c in c_value_print_inner(value*, ui_file*, int, value_print_options const*) /home/simark/src/binutils-gdb/gdb/c-valprint.c:438
    #20 0x555c483ff9a7 in language_defn::value_print_inner(value*, ui_file*, int, value_print_options const*) const /home/simark/src/binutils-gdb/gdb/language.c:632
    #21 0x555c49758b68 in do_val_print /home/simark/src/binutils-gdb/gdb/valprint.c:1048
    #22 0x555c49759b17 in common_val_print(value*, ui_file*, int, value_print_options const*, language_defn const*) /home/simark/src/binutils-gdb/gdb/valprint.c:1151
    #23 0x555c478f2fcb in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:335
    #24 0x555c4760d45f in c_value_print_struct /home/simark/src/binutils-gdb/gdb/c-valprint.c:383
    #25 0x555c4760df4c in c_value_print_inner(value*, ui_file*, int, value_print_options const*) /home/simark/src/binutils-gdb/gdb/c-valprint.c:438
    #26 0x555c483ff9a7 in language_defn::value_print_inner(value*, ui_file*, int, value_print_options const*) const /home/simark/src/binutils-gdb/gdb/language.c:632
    #27 0x555c49758b68 in do_val_print /home/simark/src/binutils-gdb/gdb/valprint.c:1048
    #28 0x555c49759b17 in common_val_print(value*, ui_file*, int, value_print_options const*, language_defn const*) /home/simark/src/binutils-gdb/gdb/valprint.c:1151
    #29 0x555c4760f04c in c_value_print(value*, ui_file*, value_print_options const*) /home/simark/src/binutils-gdb/gdb/c-valprint.c:587
    #30 0x555c483ff954 in language_defn::value_print(value*, ui_file*, value_print_options const*) const /home/simark/src/binutils-gdb/gdb/language.c:614
    #31 0x555c49759f61 in value_print(value*, ui_file*, value_print_options const*) /home/simark/src/binutils-gdb/gdb/valprint.c:1189
    #32 0x555c48950f70 in print_formatted /home/simark/src/binutils-gdb/gdb/printcmd.c:337
    #33 0x555c48958eda in print_value(value*, value_print_options const&) /home/simark/src/binutils-gdb/gdb/printcmd.c:1258
    #34 0x555c48959891 in print_command_1 /home/simark/src/binutils-gdb/gdb/printcmd.c:1367
    #35 0x555c4895a3df in print_command /home/simark/src/binutils-gdb/gdb/printcmd.c:1458
    #36 0x555c4767f974 in do_simple_func /home/simark/src/binutils-gdb/gdb/cli/cli-decode.c:97
    #37 0x555c47692e25 in cmd_func(cmd_list_element*, char const*, int) /home/simark/src/binutils-gdb/gdb/cli/cli-decode.c:2475
    #38 0x555c4936107e in execute_command(char const*, int) /home/simark/src/binutils-gdb/gdb/top.c:670
    #39 0x555c485f1bff in catch_command_errors /home/simark/src/binutils-gdb/gdb/main.c:523
    #40 0x555c485f249c in execute_cmdargs /home/simark/src/binutils-gdb/gdb/main.c:618
    #41 0x555c485f6677 in captured_main_1 /home/simark/src/binutils-gdb/gdb/main.c:1317
    #42 0x555c485f6c83 in captured_main /home/simark/src/binutils-gdb/gdb/main.c:1338
    #43 0x555c485f6d65 in gdb_main(captured_main_args*) /home/simark/src/binutils-gdb/gdb/main.c:1363
    #44 0x555c46e41ba8 in main /home/simark/src/binutils-gdb/gdb/gdb.c:32
    #45 0x7f71198bcb24 in __libc_start_main (/usr/lib/libc.so.6+0x27b24)
    #46 0x555c46e4197d in _start (/home/simark/build/binutils-gdb-one-target/gdb/gdb+0x77f197d)

0x6020000c52af is located 1 bytes to the left of 8-byte region [0x6020000c52b0,0x6020000c52b8)
allocated by thread T0 here:
    #0 0x7f711b2b7459 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
    #1 0x555c470acdc9 in xcalloc /home/simark/src/binutils-gdb/gdb/alloc.c:100
    #2 0x555c49b775cd in xzalloc(unsigned long) /home/simark/src/binutils-gdb/gdbsupport/common-utils.cc:29
    #3 0x555c4977bdeb in allocate_value_contents /home/simark/src/binutils-gdb/gdb/value.c:1029
    #4 0x555c4977be25 in allocate_value(type*) /home/simark/src/binutils-gdb/gdb/value.c:1040
    #5 0x555c4979030d in value_primitive_field(value*, long, int, type*) /home/simark/src/binutils-gdb/gdb/value.c:3092
    #6 0x555c478f6280 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:501
    #7 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #8 0x555c478f63b2 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:513
    #9 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #10 0x555c478f63b2 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:513
    #11 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #12 0x555c4760d45f in c_value_print_struct /home/simark/src/binutils-gdb/gdb/c-valprint.c:383
    #13 0x555c4760df4c in c_value_print_inner(value*, ui_file*, int, value_print_options const*) /home/simark/src/binutils-gdb/gdb/c-valprint.c:438
    #14 0x555c483ff9a7 in language_defn::value_print_inner(value*, ui_file*, int, value_print_options const*) const /home/simark/src/binutils-gdb/gdb/language.c:632
    #15 0x555c49758b68 in do_val_print /home/simark/src/binutils-gdb/gdb/valprint.c:1048
    #16 0x555c49759b17 in common_val_print(value*, ui_file*, int, value_print_options const*, language_defn const*) /home/simark/src/binutils-gdb/gdb/valprint.c:1151
    #17 0x555c478f2fcb in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:335
    #18 0x555c478f63b2 in cp_print_value /home/simark/src/binutils-gdb/gdb/cp-valprint.c:513
    #19 0x555c478f02ca in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:161
    #20 0x555c4760d45f in c_value_print_struct /home/simark/src/binutils-gdb/gdb/c-valprint.c:383
    #21 0x555c4760df4c in c_value_print_inner(value*, ui_file*, int, value_print_options const*) /home/simark/src/binutils-gdb/gdb/c-valprint.c:438
    #22 0x555c483ff9a7 in language_defn::value_print_inner(value*, ui_file*, int, value_print_options const*) const /home/simark/src/binutils-gdb/gdb/language.c:632
    #23 0x555c49758b68 in do_val_print /home/simark/src/binutils-gdb/gdb/valprint.c:1048
    #24 0x555c49759b17 in common_val_print(value*, ui_file*, int, value_print_options const*, language_defn const*) /home/simark/src/binutils-gdb/gdb/valprint.c:1151
    #25 0x555c478f2fcb in cp_print_value_fields(value*, ui_file*, int, value_print_options const*, type**, int) /home/simark/src/binutils-gdb/gdb/cp-valprint.c:335
    #26 0x555c4760d45f in c_value_print_struct /home/simark/src/binutils-gdb/gdb/c-valprint.c:383
    #27 0x555c4760df4c in c_value_print_inner(value*, ui_file*, int, value_print_options const*) /home/simark/src/binutils-gdb/gdb/c-valprint.c:438
    #28 0x555c483ff9a7 in language_defn::value_print_inner(value*, ui_file*, int, value_print_options const*) const /home/simark/src/binutils-gdb/gdb/language.c:632
    #29 0x555c49758b68 in do_val_print /home/simark/src/binutils-gdb/gdb/valprint.c:1048

Since there are some binaries with this in the wild, I think it would be useful for GDB to work around this. I did the obvious simple thing: if the DW_AT_data_member_location's value is -1, replace it with 0. I added a producer check to only apply this fixup for GCC 11. The idea is that if some other compiler ever uses a DW_AT_data_member_location value of -1 by mistake, we don't know (before analyzing the bug at least) if they did mean 0 or some other value. So I wouldn't want to apply the fixup in that case.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28063
Change-Id: Ieef3459b0b9bbce8bdad838ba83b4b64e7269d42
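A rough, self-contained sketch of the workaround logic described above (not gdb's actual DWARF reader code; the helper name and the producer flag are assumptions for illustration):

```cpp
#include <cstdio>

// Sketch of the described workaround: treat a bogus -1
// DW_AT_data_member_location as 0, but only when the producer is an
// affected GCC 11.  Illustrative only, not the actual gdb change.
long
fixup_data_member_location (long location, bool producer_is_gcc_11)
{
  if (location == -1 && producer_is_gcc_11)
    return 0;
  return location;
}

int
main ()
{
  std::printf ("%ld\n", fixup_data_member_location (-1, true));   // prints 0
  std::printf ("%ld\n", fixup_data_member_location (-1, false));  // prints -1
  return 0;
}
```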
alpsayin pushed a commit to alpsayin/binutils-gdb that referenced this pull request on Apr 29, 2023
Starting with commit

  commit 1da5d0e
  Date: Tue Jan 4 08:02:24 2022 -0700

      Change how Python architecture and language are handled

we see a failure in gdb.threads/killed-outside.exp:

...
Executing on target: kill -9 16622  (timeout = 300)
builtin_spawn -ignore SIGHUP kill -9 16622
continue
Continuing.
Couldn't get registers: No such process.
(gdb) [Thread 0x7ffff77c2700 (LWP 16626) exited]
Program terminated with signal SIGKILL, Killed.
The program no longer exists.
FAIL: gdb.threads/killed-outside.exp: prompt after first continue (timeout)

This is not a regression but a failure due to a change in GDB's output. Prior to the aforementioned commit, GDB has been printing the "Couldn't get registers: No such process." message twice. The second one came from

(top-gdb) bt
#0  amd64_linux_nat_target::fetch_registers (this=0x555557f31440 <the_amd64_linux_nat_target>, regcache=0x555558805ce0, regnum=16) at /gdb-up/gdb/amd64-linux-nat.c:225
#1  0x000055555640ac5f in target_ops::fetch_registers (this=0x555557d636d0 <the_thread_db_target>, arg0=0x555558805ce0, arg1=16) at /gdb-up/gdb/target-delegates.c:502
#2  0x000055555641a647 in target_fetch_registers (regcache=0x555558805ce0, regno=16) at /gdb-up/gdb/target.c:3945
#3  0x0000555556278e68 in regcache::raw_update (this=0x555558805ce0, regnum=16) at /gdb-up/gdb/regcache.c:587
#4  0x0000555556278f14 in readable_regcache::raw_read (this=0x555558805ce0, regnum=16, buf=0x555558881950 "") at /gdb-up/gdb/regcache.c:601
#5  0x00005555562792aa in readable_regcache::cooked_read (this=0x555558805ce0, regnum=16, buf=0x555558881950 "") at /gdb-up/gdb/regcache.c:690
#6  0x000055555627965e in readable_regcache::cooked_read_value (this=0x555558805ce0, regnum=16) at /gdb-up/gdb/regcache.c:748
#7  0x0000555556352a37 in sentinel_frame_prev_register (this_frame=0x555558181090, this_prologue_cache=0x5555581810a8, regnum=16) at /gdb-up/gdb/sentinel-frame.c:53
#8  0x0000555555fa4773 in frame_unwind_register_value (next_frame=0x555558181090, regnum=16) at /gdb-up/gdb/frame.c:1235
#9  0x0000555555fa420d in frame_register_unwind (next_frame=0x555558181090, regnum=16, optimizedp=0x7fffffffd570, unavailablep=0x7fffffffd574, lvalp=0x7fffffffd57c, addrp=0x7fffffffd580, realnump=0x7fffffffd578, bufferp=0x7fffffffd5b0 "") at /gdb-up/gdb/frame.c:1143
#10 0x0000555555fa455f in frame_unwind_register (next_frame=0x555558181090, regnum=16, buf=0x7fffffffd5b0 "") at /gdb-up/gdb/frame.c:1199
#11 0x00005555560178e2 in i386_unwind_pc (gdbarch=0x5555587c4a70, next_frame=0x555558181090) at /gdb-up/gdb/i386-tdep.c:1972
#12 0x0000555555cd2b9d in gdbarch_unwind_pc (gdbarch=0x5555587c4a70, next_frame=0x555558181090) at /gdb-up/gdb/gdbarch.c:3007
#13 0x0000555555fa3a5b in frame_unwind_pc (this_frame=0x555558181090) at /gdb-up/gdb/frame.c:948
#14 0x0000555555fa7621 in get_frame_pc (frame=0x555558181160) at /gdb-up/gdb/frame.c:2572
#15 0x0000555555fa7706 in get_frame_address_in_block (this_frame=0x555558181160) at /gdb-up/gdb/frame.c:2602
#16 0x0000555555fa77d0 in get_frame_address_in_block_if_available (this_frame=0x555558181160, pc=0x7fffffffd708) at /gdb-up/gdb/frame.c:2665
#17 0x0000555555fa5f8d in select_frame (fi=0x555558181160) at /gdb-up/gdb/frame.c:1890
#18 0x0000555555fa5bab in lookup_selected_frame (a_frame_id=..., frame_level=-1) at /gdb-up/gdb/frame.c:1720
#19 0x0000555555fa5e47 in get_selected_frame (message=0x0) at /gdb-up/gdb/frame.c:1810
#20 0x0000555555cc9c6e in get_current_arch () at /gdb-up/gdb/arch-utils.c:848
#21 0x000055555625b239 in gdbpy_before_prompt_hook (extlang=0x555557451f20 <extension_language_python>, current_gdb_prompt=0x555557f4d890 <top_prompt+16> "(gdb) ") at /gdb-up/gdb/python/python.c:1063
#22 0x0000555555f7cfbb in ext_lang_before_prompt (current_gdb_prompt=0x555557f4d890 <top_prompt+16> "(gdb) ") at /gdb-up/gdb/extension.c:922
#23 0x0000555555f7d442 in std::_Function_handler<void (char const*), void (*)(char const*)>::_M_invoke(std::_Any_data const&, char const*&&) (__functor=..., __args#0=@0x7fffffffd900: 0x555557f4d890 <top_prompt+16> "(gdb) ") at /usr/include/c++/7/bits/std_function.h:316
#24 0x0000555555f752dd in std::function<void (char const*)>::operator()(char const*) const (this=0x55555817d838, __args#0=0x555557f4d890 <top_prompt+16> "(gdb) ") at /usr/include/c++/7/bits/std_function.h:706
#25 0x0000555555f75100 in gdb::observers::observable<char const*>::notify (this=0x555557f49060 <gdb::observers::before_prompt>, args#0=0x555557f4d890 <top_prompt+16> "(gdb) ") at /gdb-up/gdb/../gdbsupport/observable.h:150
#26 0x0000555555f736dc in top_level_prompt () at /gdb-up/gdb/event-top.c:444
#27 0x0000555555f735ba in display_gdb_prompt (new_prompt=0x0) at /gdb-up/gdb/event-top.c:411
#28 0x00005555564611a7 in tui_on_command_error () at /gdb-up/gdb/tui/tui-interp.c:205
#29 0x0000555555c2173f in std::_Function_handler<void (), void (*)()>::_M_invoke(std::_Any_data const&) (__functor=...) at /usr/include/c++/7/bits/std_function.h:316
#30 0x0000555555e10c20 in std::function<void ()>::operator()() const (this=0x5555580f9028) at /usr/include/c++/7/bits/std_function.h:706
#31 0x0000555555e10973 in gdb::observers::observable<>::notify() const (this=0x555557f48d20 <gdb::observers::command_error>) at /gdb-up/gdb/../gdbsupport/observable.h:150
#32 0x00005555560e9b3f in start_event_loop () at /gdb-up/gdb/main.c:438
#33 0x00005555560e9bcc in captured_command_loop () at /gdb-up/gdb/main.c:481
#34 0x00005555560eb616 in captured_main (data=0x7fffffffddd0) at /gdb-up/gdb/main.c:1348
#35 0x00005555560eb67c in gdb_main (args=0x7fffffffddd0) at /gdb-up/gdb/main.c:1363
#36 0x0000555555c1b6b3 in main (argc=12, argv=0x7fffffffded8) at /gdb-up/gdb/gdb.c:32

Commit 1da5d0e eliminated the call to 'get_current_arch' in 'gdbpy_before_prompt_hook'. Hence, the second instance of "Couldn't get registers: No such process." does not appear anymore.

Fix the failure by updating the regular expression in the test.
alpsayin pushed a commit to alpsayin/binutils-gdb that referenced this pull request on Apr 29, 2023
…ync." Commit 14b3360 ("do_target_wait_1: Clear TARGET_WNOHANG if the target isn't async.") broke some multi-target tests, such as gdb.multi/multi-target-info-inferiors.exp. The symptom is that execution just hangs at some point. What happens is: 1. One remote inferior is started, and now sits stopped at a breakpoint. It is not "async" at this point (but it "can async"). 2. We run a native inferior, the event loop gets woken up by the native target's fd. 3. In do_target_wait, we randomly choose an inferior to call target_wait on first, it happens to be the remote inferior. 4. Because the target is currently not "async", we clear TARGET_WNOHANG, resulting in synchronous wait. We therefore block here: #0 0x00007fe9540dbb4d in select () from /usr/lib/libc.so.6 zephyrproject-rtos#1 0x000055fc7e821da7 in gdb_select (n=15, readfds=0x7ffdb77c1fb0, writefds=0x0, exceptfds=0x7ffdb77c2050, timeout=0x7ffdb77c1f90) at /home/simark/src/binutils-gdb/gdb/posix-hdep.c:31 zephyrproject-rtos#2 0x000055fc7ddef905 in interruptible_select (n=15, readfds=0x7ffdb77c1fb0, writefds=0x0, exceptfds=0x7ffdb77c2050, timeout=0x7ffdb77c1f90) at /home/simark/src/binutils-gdb/gdb/event-top.c:1134 zephyrproject-rtos#3 0x000055fc7eda58e4 in ser_base_wait_for (scb=0x6250002e4100, timeout=1) at /home/simark/src/binutils-gdb/gdb/ser-base.c:240 zephyrproject-rtos#4 0x000055fc7eda66ba in do_ser_base_readchar (scb=0x6250002e4100, timeout=-1) at /home/simark/src/binutils-gdb/gdb/ser-base.c:365 zephyrproject-rtos#5 0x000055fc7eda6ff6 in generic_readchar (scb=0x6250002e4100, timeout=-1, do_readchar=0x55fc7eda663c <do_ser_base_readchar(serial*, int)>) at /home/simark/src/binutils-gdb/gdb/ser-base.c:444 zephyrproject-rtos#6 0x000055fc7eda718a in ser_base_readchar (scb=0x6250002e4100, timeout=-1) at /home/simark/src/binutils-gdb/gdb/ser-base.c:471 zephyrproject-rtos#7 0x000055fc7edb1ecd in serial_readchar (scb=0x6250002e4100, timeout=-1) at /home/simark/src/binutils-gdb/gdb/serial.c:393 zephyrproject-rtos#8 0x000055fc7ec48b8f in remote_target::readchar (this=0x617000038780, timeout=-1) at /home/simark/src/binutils-gdb/gdb/remote.c:9446 zephyrproject-rtos#9 0x000055fc7ec4da82 in remote_target::getpkt_or_notif_sane_1 (this=0x617000038780, buf=0x6170000387a8, forever=1, expecting_notif=1, is_notif=0x7ffdb77c24f0) at /home/simark/src/binutils-gdb/gdb/remote.c:9928 zephyrproject-rtos#10 0x000055fc7ec4f045 in remote_target::getpkt_or_notif_sane (this=0x617000038780, buf=0x6170000387a8, forever=1, is_notif=0x7ffdb77c24f0) at /home/simark/src/binutils-gdb/gdb/remote.c:10037 zephyrproject-rtos#11 0x000055fc7ec354d4 in remote_target::wait_ns (this=0x617000038780, ptid=..., status=0x7ffdb77c33c8, options=...) at /home/simark/src/binutils-gdb/gdb/remote.c:8147 zephyrproject-rtos#12 0x000055fc7ec38aa1 in remote_target::wait (this=0x617000038780, ptid=..., status=0x7ffdb77c33c8, options=...) at /home/simark/src/binutils-gdb/gdb/remote.c:8337 zephyrproject-rtos#13 0x000055fc7f1409ce in target_wait (ptid=..., status=0x7ffdb77c33c8, options=...) at /home/simark/src/binutils-gdb/gdb/target.c:2612 zephyrproject-rtos#14 0x000055fc7e19da98 in do_target_wait_1 (inf=0x617000038080, ptid=..., status=0x7ffdb77c33c8, options=...) at /home/simark/src/binutils-gdb/gdb/infrun.c:3636 zephyrproject-rtos#15 0x000055fc7e19e26b in operator() (__closure=0x7ffdb77c2f90, inf=0x617000038080) at /home/simark/src/binutils-gdb/gdb/infrun.c:3697 zephyrproject-rtos#16 0x000055fc7e19f0c4 in do_target_wait (ecs=0x7ffdb77c33a0, options=...) 
at /home/simark/src/binutils-gdb/gdb/infrun.c:3716 zephyrproject-rtos#17 0x000055fc7e1a31f7 in fetch_inferior_event () at /home/simark/src/binutils-gdb/gdb/infrun.c:4061 Before the aforementioned commit, we would not have cleared TARGET_WNOHANG, the remote target's wait would have returned nothing, and we would have consumed the native target's event. After applying this revert, the testsuite state looks as good as before for me on Ubuntu 20.04 amd64. Change-Id: Ic17a1642935cabcc16c25cb6899d52e12c2f5c3f
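For context, a small standalone illustration of the kind of option adjustment the reverted commit had introduced (paraphrased with placeholder names, not the actual gdb diff): stripping the WNOHANG bit when the target is not async is what turns the wait synchronous in step 4 above.

```cpp
#include <cstdio>

// Illustrative only: a stand-in for gdb's wait-option bit.  The real gdb
// types and names differ; this just shows the effect of stripping the
// WNOHANG-style flag, which turns a polling wait into a blocking one.
enum wait_flag : unsigned { TARGET_WNOHANG_FLAG = 1u << 0 };

unsigned
adjust_options (unsigned options, bool target_is_async)
{
  if (!target_is_async)
    options &= ~TARGET_WNOHANG_FLAG;  // roughly what the reverted commit did
  return options;
}

int
main ()
{
  unsigned opts = TARGET_WNOHANG_FLAG;
  std::printf ("async: %u, not async: %u\n",
               adjust_options (opts, true), adjust_options (opts, false));
  return 0;
}
```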
alpsayin pushed a commit to alpsayin/binutils-gdb that referenced this pull request on Apr 29, 2023
The current zombie leader detection code in linux-nat.c has a race -- if a multi-threaded inferior exits just before check_zombie_leaders finds that the leader is now zombie via checking /proc/PID/status, check_zombie_leaders deletes the leader, assuming we won't get an event for that exit (which we won't in some scenarios, but not in this one).

That might seem mostly harmless, but it has some downsides:

- later when we continue pulling events out of the kernel, we will collect the exit event of the non-leader threads, and once we see the last lwp in our list exit, we return _that_ lwp's exit code as whole-process exit code to infrun, instead of the leader's exit code.

- this can cause a hang in stop_all_threads in infrun.c. Say there are 2 threads in the process. stop_all_threads stops each of those threads, and then waits for two stop or exit events, one for each thread. If the whole process exits, and check_zombie_leaders hits the false-positive case, linux-nat.c will only return one event to GDB (the whole-process exit returned when we see the last thread, the non-leader thread, exit), making stop_all_threads hang forever waiting for a second event that will never come.

However, in this false-positive scenario, where the whole process is exiting, as opposed to just the leader (with pthread_exit(), for example), we _will_ get an exit event shortly for the leader, after we collect the exit event of all the other non-leader threads. Or put another way, we _always_ get an event for the leader after we see it become zombie.

I tried a number of approaches to fix this:

#1 - My first thought to address the race was to make GDB always report the whole-process exit status for the leader thread, not for whatever is the last lwp in the list. We _always_ get a final exit (or exec) event for the leader, and when the race triggers, we're not collecting it.

#2 - My second thought was to try to plug the race in the first place. I thought of making GDB call waitpid/WNOHANG for all non-leader threads immediately when the zombie leader is detected, assuming there would be an exit event pending for each of them waiting to be collected. Turns out that that doesn't work -- you can see the leader become zombie _before_ the kernel kills all other threads. Waitpid in that small time window returns 0, indicating no-event. Thankfully we hit that race window all the time, which avoided trading one race for another. Looking at the non-leader thread's status in /proc doesn't help either; the threads are still in running state for a bit, for the same reason.

#3 - My next attempt, which seemed promising, was to synchronously stop and wait for the stop for each of the non-leader threads. For the scenario in question, this will collect all the exit statuses of the non-leader threads. Then, if we are left with only the zombie leader in the lwp list, it means we either have a normal whole-process exit or an exec, in which case we should not delete the leader. If _only_ the leader exited, like in gdb.threads/leader-exit.exp, then after pausing threads, we will still have at least one live non-leader thread in the list, and so we delete the leader lwp.

I got this working and polished, and it was only after staring at the kernel code to convince myself that this would really work (and it would, for the scenario I considered), that I realized I had failed to account for one scenario -- if any non-leader thread is _already_ stopped when some thread triggers a group exit, like e.g., if you have some threads stopped and then resume just one thread with scheduler-locking or non-stop, and that thread exits the process.

I also played with PTRACE_EVENT_EXIT, to see if it would help in any way to plug the race, and I couldn't find a way that it would result in any practical difference compared to looking at /proc/PID/status, with respect to having a race.

So I concluded that there's no way to plug the race; we just have to deal with it. Which means going back to approach #1. That is the approach taken by this patch.

Change-Id: I6309fd4727da8c67951f9cea557724b77e8ee979