I found this comment in libstd: https://github.com/rust-lang/rust/blob/59a4f02f836f74c4cf08f47d76c9f6069a2f8276/library/std/src/sys/backtrace.rs#L20-L48

```rust
// Use a lock to prevent mixed output in multithreading context.
// Some platforms also requires it, like `SymFromAddr` on Windows.
```

and indeed dbghelp does not appear to be thread-safe. But backtrace-rs already has a mutex around dbghelp (lines 283 to 385 in 72265be):
```rust
/// Initialize all support necessary to access `dbghelp` API functions from this
/// crate.
///
/// Note that this function is **safe**, it internally has its own
/// synchronization. Also note that it is safe to call this function multiple
/// times recursively.
pub fn init() -> Result<Init, ()> {
    use core::sync::atomic::{AtomicUsize, Ordering::SeqCst};

    // Helper function for generating a name that's unique to the process.
    fn mutex_name() -> [u8; 33] {
        let mut name: [u8; 33] = *b"Local\\RustBacktraceMutex00000000\0";
        let mut id = unsafe { GetCurrentProcessId() };
        // Quick and dirty no alloc u32 to hex.
        let mut index = name.len() - 1;
        while id > 0 {
            name[index - 1] = match (id & 0xF) as u8 {
                h @ 0..=9 => b'0' + h,
                h => b'A' + (h - 10),
            };
            id >>= 4;
            index -= 1;
        }
        name
    }

    unsafe {
        // First thing we need to do is to synchronize this function. This can
        // be called concurrently from other threads or recursively within one
        // thread. Note that it's trickier than that though because what we're
        // using here, `dbghelp`, *also* needs to be synchronized with all other
        // callers to `dbghelp` in this process.
        //
        // Typically there aren't really that many calls to `dbghelp` within the
        // same process and we can probably safely assume that we're the only
        // ones accessing it. There is, however, one primary other user we have
        // to worry about which is ironically ourselves, but in the standard
        // library. The Rust standard library depends on this crate for
        // backtrace support, and this crate also exists on crates.io. This
        // means that if the standard library is printing a panic backtrace it
        // may race with this crate coming from crates.io, causing segfaults.
        //
        // To help solve this synchronization problem we employ a
        // Windows-specific trick here (it is, after all, a Windows-specific
        // restriction about synchronization). We create a *session-local* named
        // mutex to protect this call. The intention here is that the standard
        // library and this crate don't have to share Rust-level APIs to
        // synchronize here but can instead work behind the scenes to make sure
        // they're synchronizing with one another. That way when this function
        // is called through the standard library or through crates.io we can be
        // sure that the same mutex is being acquired.
        //
        // So all of that is to say that the first thing we do here is we
        // atomically create a `HANDLE` which is a named mutex on Windows. We
        // synchronize a bit with other threads sharing this function
        // specifically and ensure that only one handle is created per instance
        // of this function. Note that the handle is never closed once it's
        // stored in the global.
        //
        // After we've actually got the lock we simply acquire it, and our `Init`
        // handle we hand out will be responsible for dropping it eventually.
        static LOCK: AtomicUsize = AtomicUsize::new(0);
        let mut lock = LOCK.load(SeqCst);
        if lock == 0 {
            let name = mutex_name();
            lock = CreateMutexA(ptr::null_mut(), 0, name.as_ptr().cast::<i8>()) as usize;
            if lock == 0 {
                return Err(());
            }
            if let Err(other) = LOCK.compare_exchange(0, lock, SeqCst, SeqCst) {
                debug_assert!(other != 0);
                CloseHandle(lock as HANDLE);
                lock = other;
            }
        }
        debug_assert!(lock != 0);
        let lock = lock as HANDLE;
        let r = WaitForSingleObjectEx(lock, INFINITE, FALSE);
        debug_assert_eq!(r, 0);
        let ret = Init { lock };

        // Ok, phew! Now that we're all safely synchronized, let's actually
        // start processing everything. First up we need to ensure that
        // `dbghelp.dll` is actually loaded in this process. We do this
        // dynamically to avoid a static dependency. This has historically been
        // done to work around weird linking issues and is intended at making
        // binaries a bit more portable since this is largely just a debugging
        // utility.
        //
        // Once we've opened `dbghelp.dll` we need to call some initialization
        // functions in it, and that's detailed more below. We only do this
        // once, though, so we've got a global boolean indicating whether we're
        // done yet or not.
        DBGHELP.ensure_open()?;

        static mut INITIALIZED: bool = false;
        if !INITIALIZED {
            set_optional_options();
            INITIALIZED = true;
        }
        Ok(ret)
    }
}
```
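As an aside, the "quick and dirty no alloc u32 to hex" trick inside `mutex_name` is easy to check in isolation. Here is a minimal standalone sketch of that helper (the function name `mutex_name_for` and the hard-coded process IDs are hypothetical, used only so the hex formatting can be exercised without calling `GetCurrentProcessId`):

```rust
/// Hypothetical standalone version of the `mutex_name` helper quoted
/// above: formats `id` as uppercase hex into a fixed, NUL-terminated
/// buffer without allocating, writing digits right-to-left so they
/// replace the trailing zero padding just before the NUL byte.
fn mutex_name_for(mut id: u32) -> [u8; 33] {
    let mut name: [u8; 33] = *b"Local\\RustBacktraceMutex00000000\0";
    // Index of the trailing NUL; each digit lands immediately before it.
    let mut index = name.len() - 1;
    while id > 0 {
        name[index - 1] = match (id & 0xF) as u8 {
            h @ 0..=9 => b'0' + h,
            h => b'A' + (h - 10),
        };
        id >>= 4;
        index -= 1;
    }
    name
}

fn main() {
    let name = mutex_name_for(0x1A2B);
    // The last 8 bytes before the NUL are the zero-padded hex id.
    assert_eq!(&name[..32], b"Local\\RustBacktraceMutex00001A2B");
    println!("{}", core::str::from_utf8(&name[..32]).unwrap());
}
```

Note that the session-local `Local\` prefix (rather than `Global\`) is what scopes the named mutex to the current login session, and embedding the PID keeps one process's backtraces from blocking another's.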
So I think it is not necessary for callers to have their own mutex? I do see that the gimli symbolication is unsafe because it accesses MAPPINGS_CACHE unsynchronized, but maybe we can use a similar trick there? Or just use std::lazy or something? backtrace-rs/src/symbolize/gimli.rs, lines 325 to 339 in dcd0aaa:
```rust
// unsafe because this is required to be externally synchronized
unsafe fn with_global(f: impl FnOnce(&mut Self)) {
    // A very small, very simple LRU cache for debug info mappings.
    //
    // The hit rate should be very high, since the typical stack doesn't cross
    // between many shared libraries.
    //
    // The `addr2line::Context` structures are pretty expensive to create. Its
    // cost is expected to be amortized by subsequent `locate` queries, which
    // leverage the structures built when constructing `addr2line::Context`s to
    // get nice speedups. If we didn't have this cache, that amortization would
    // never happen, and symbolicating backtraces would be ssssllllooooowwww.
    static mut MAPPINGS_CACHE: Option<Cache> = None;

    f(MAPPINGS_CACHE.get_or_insert_with(|| Cache::new()))
}
```
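To illustrate the "just use std::lazy or something" idea: in contexts where std is available, one option would be to guard the cache with a `std::sync::Mutex` instead of relying on external synchronization, which would also let `with_global` drop its `unsafe`. This is only a sketch under that assumption; the `Cache` here is a dummy stand-in (the real one in gimli.rs holds `addr2line::Context` mappings), and it sidesteps the real crate's no_std and reentrancy constraints:

```rust
use std::sync::Mutex;

// Dummy stand-in for the real `Cache` in src/symbolize/gimli.rs.
struct Cache {
    lookups: usize,
}

impl Cache {
    fn new() -> Cache {
        Cache { lookups: 0 }
    }
}

// A `Mutex<Option<Cache>>` replaces `static mut MAPPINGS_CACHE`; the
// lock supplies the synchronization the "externally synchronized"
// comment demands, so callers no longer need their own mutex.
static MAPPINGS_CACHE: Mutex<Option<Cache>> = Mutex::new(None);

// No longer `unsafe`: the mutex serializes all access to the cache,
// and the cache is lazily created on first use, as before.
fn with_global(f: impl FnOnce(&mut Cache)) {
    let mut guard = MAPPINGS_CACHE.lock().unwrap();
    f(guard.get_or_insert_with(Cache::new))
}

fn main() {
    with_global(|cache| cache.lookups += 1);
    with_global(|cache| cache.lookups += 1);
    // The same cache instance is reused across calls.
    with_global(|cache| assert_eq!(cache.lookups, 2));
    println!("cache reused across calls");
}
```

One caveat with a plain `Mutex` is that a reentrant call to `with_global` from within the closure would deadlock (or panic on poisoning), whereas the dbghelp named mutex above is recursive, so the real fix would need to account for that.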
See around rust-lang/rust#127397 (comment) for previous discussion; cc @ChrisDenton @workingjubilee
Assignees: ChrisDenton