Miri subtree update #131727


Merged: 61 commits, merged Oct 15, 2024
Changes from 1 commit
885ec88
avoid pthread_attr_t in tests
RalfJung Oct 5, 2024
58e0fba
Auto merge of #3945 - RalfJung:pthread_attr_t, r=RalfJung
bors Oct 5, 2024
35eb2da
Added rust-analyzer instructions for Helix
YohDeadfall Oct 3, 2024
6f84740
tweak the hint
RalfJung Oct 6, 2024
c5f3c60
Auto merge of #3936 - YohDeadfall:rust-analyzer-conf, r=RalfJung
bors Oct 6, 2024
bf2d46d
bump rustc_tools_util version
RalfJung Oct 7, 2024
ee491b3
Auto merge of #3949 - RalfJung:rustc_tools_util, r=RalfJung
bors Oct 7, 2024
0130edd
Fix spelling in README
JakeRoggenbuck Oct 7, 2024
4e9554d
fix behavior of release_clock()
RalfJung Oct 8, 2024
edc5c1e
Auto merge of #3951 - RalfJung:release-clock, r=RalfJung
bors Oct 8, 2024
6f854ce
epoll: test case showing too much clock sync
FrankReh Oct 9, 2024
f90c97c
explain the review bot use
RalfJung Oct 9, 2024
87058a4
Auto merge of #3955 - RalfJung:review-bot, r=RalfJung
bors Oct 9, 2024
7a9a9cb
epoll: rename blocking_epoll_callback since it is not just called aft…
RalfJung Oct 9, 2024
f64c9c6
Fixed pthread_getname_np impl for glibc
YohDeadfall Oct 8, 2024
8e8dd57
Auto merge of #3953 - YohDeadfall:glibc-thread-name, r=RalfJung
bors Oct 9, 2024
4560606
epoll: change clock to be per event
FrankReh Oct 8, 2024
5cdee07
Auto merge of #3946 - FrankReh:fix-over-synchronization-of-epoll, r=R…
bors Oct 9, 2024
f04d1f6
syscall/eventfd2: add failing test
FrankReh Oct 3, 2024
2675f14
syscall/eventfd2: add support
FrankReh Oct 3, 2024
26ce30d
Auto merge of #3937 - FrankReh:syscall-eventfd2, r=RalfJung
bors Oct 9, 2024
faf8c14
Auto merge of #3959 - JakeRoggenbuck:fix-spelling-in-readme, r=saethlin
bors Oct 10, 2024
eb76079
epoll event adding: no need to join, there's no old clock here
RalfJung Oct 10, 2024
66fda4a
Auto merge of #3961 - RalfJung:event-release-clock-join, r=RalfJung
bors Oct 10, 2024
c7bfc44
Auto merge of #3956 - RalfJung:epoll-ready-list, r=RalfJung
bors Oct 10, 2024
7e21dce
remove handle_unsupported_foreign_item helper
RalfJung Oct 8, 2024
9e2688c
remove -Zmiri-panic-on-unsupported flag
RalfJung Oct 10, 2024
e58a406
Auto merge of #3950 - RalfJung:handle_unsupported_foreign_item, r=Ral…
bors Oct 10, 2024
37698a9
Pipe minor changes: diagnostics, flag support and comments
tiif Oct 10, 2024
1df7a0f
add libc-pipe test to CI for freebsd, solarish
RalfJung Oct 10, 2024
256d63f
Auto merge of #3960 - tiif:smallchange, r=RalfJung
bors Oct 10, 2024
ccdea3e
simplify Tree Borrows write-during-2phase example
RalfJung Oct 11, 2024
2eda6a4
Auto merge of #3964 - RalfJung:write-during-2phase, r=RalfJung
bors Oct 11, 2024
56c0612
Fixed pthread get/set name for macOS
YohDeadfall Oct 9, 2024
a495a79
Fixed get thread name behavior for FreeBSD
YohDeadfall Oct 11, 2024
b2b0d24
rework threadname test for more consistency
RalfJung Oct 13, 2024
15f0242
Auto merge of #3957 - YohDeadfall:macos-thread-name, r=RalfJung
bors Oct 13, 2024
5e6170b
Preparing for merge from rustc
RalfJung Oct 14, 2024
9d579f5
Merge from rustc
RalfJung Oct 14, 2024
543d226
clippy
RalfJung Oct 14, 2024
bd8f2af
Auto merge of #3970 - RalfJung:rustup, r=RalfJung
bors Oct 14, 2024
89323bf
pthread_mutex: store mutex ID outside addressable memory, so it can be…
RalfJung Oct 12, 2024
1389bb9
pthread_rwlock: also store ID outside addressable memory
RalfJung Oct 12, 2024
17f0aed
pthread_cond: also store ID outside addressable memory
RalfJung Oct 12, 2024
8f342ed
macOS os_unfair_lock: also store ID outside addressable memory
RalfJung Oct 12, 2024
0f7d321
Windows InitOnce: also store ID outside addressable memory
RalfJung Oct 12, 2024
e3cfe45
make lazy_sync_get_data also take a closure to initialize if needed
RalfJung Oct 12, 2024
9265a6e
turns out relaxed accesses suffice here
RalfJung Oct 12, 2024
ba95d52
pick more clear names for the types
RalfJung Oct 12, 2024
d183660
Auto merge of #3966 - RalfJung:dont-trust-the-user, r=RalfJung
bors Oct 14, 2024
1412993
Avoid some needless monomorphizations
oli-obk Oct 14, 2024
d3c1036
Auto merge of #3972 - rust-lang:eager_dyn, r=RalfJung
bors Oct 14, 2024
64259a7
Added a variadic argument helper
YohDeadfall Oct 13, 2024
af98424
add test ensuring a moved mutex deadlocks
RalfJung Oct 14, 2024
2b020bf
Auto merge of #3968 - YohDeadfall:variadic-arg-helper, r=RalfJung
bors Oct 14, 2024
4e14ad6
move lazy_sync helper methods to be with InterpCx
RalfJung Oct 14, 2024
a802fd9
ensure that a macOS os_unfair_lock that is moved while being held is not implicitly unlocked
RalfJung Oct 14, 2024
b7c06b4
Auto merge of #3973 - RalfJung:os-unfair-lock, r=RalfJung
bors Oct 14, 2024
413ea99
use new check_min_arg_count helper in more places
RalfJung Oct 14, 2024
eb5ca5f
Auto merge of #3974 - RalfJung:check_min_arg_count, r=RalfJung
bors Oct 14, 2024
1f501a7
update lockfile
RalfJung Oct 15, 2024
ensure that a macOS os_unfair_lock that is moved while being held is not implicitly unlocked
RalfJung committed Oct 14, 2024
commit a802fd9c11f98c660b15bac6cec114066c786ff2
21 changes: 13 additions & 8 deletions src/tools/miri/src/concurrency/sync.rs
@@ -223,13 +223,15 @@ pub trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
}

/// Helper for lazily initialized `alloc_extra.sync` data:
/// Checks if the primitive is initialized, and return its associated data if so.
/// Otherwise, calls `new_data` to initialize the primitive.
/// Checks if the primitive is initialized:
/// - If yes, fetches the data from `alloc_extra.sync`, or calls `missing_data` if that fails
/// and stores that in `alloc_extra.sync`.
/// - Otherwise, calls `new_data` to initialize the primitive.
fn lazy_sync_get_data<T: 'static + Copy>(
&mut self,
primitive: &MPlaceTy<'tcx>,
init_offset: Size,
name: &str,
missing_data: impl FnOnce() -> InterpResult<'tcx, T>,
new_data: impl FnOnce(&mut MiriInterpCx<'tcx>) -> InterpResult<'tcx, T>,
) -> InterpResult<'tcx, T> {
let this = self.eval_context_mut();
@@ -254,11 +256,14 @@ pub trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
// If it is initialized, it must be found in the "sync primitive" table,
// or else it has been moved illegally.
let (alloc, offset, _) = this.ptr_get_alloc_id(primitive.ptr(), 0)?;
let alloc_extra = this.get_alloc_extra(alloc)?;
let data = alloc_extra
.get_sync::<T>(offset)
.ok_or_else(|| err_ub_format!("`{name}` can't be moved after first use"))?;
interp_ok(*data)
let (alloc_extra, _machine) = this.get_alloc_extra_mut(alloc)?;
if let Some(data) = alloc_extra.get_sync::<T>(offset) {
interp_ok(*data)
} else {
let data = missing_data()?;
alloc_extra.sync.insert(offset, Box::new(data));
interp_ok(data)
}
} else {
let data = new_data(this)?;
this.lazy_sync_init(primitive, init_offset, data)?;
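The reworked helper above now takes two closures: `missing_data` (called when the init cookie is set but the per-allocation data is gone, i.e. the primitive was moved) and `new_data` (first-use initialization). A standalone sketch of that control flow, with a hypothetical `SyncTable` standing in for Miri's `alloc_extra.sync` — not the real API:

```rust
use std::collections::HashMap;

// Hypothetical model: `init` plays the role of the in-memory init cookie,
// `data` plays the role of the out-of-memory `alloc_extra.sync` table.
struct SyncTable {
    init: HashMap<usize, bool>, // which offsets carry the init cookie
    data: HashMap<usize, u32>,  // lazily created sync data (e.g. a mutex ID)
    next_id: u32,
}

impl SyncTable {
    fn new() -> Self {
        SyncTable { init: HashMap::new(), data: HashMap::new(), next_id: 0 }
    }

    fn lazy_sync_get_data(
        &mut self,
        offset: usize,
        missing_data: impl FnOnce() -> Result<u32, String>,
    ) -> Result<u32, String> {
        if *self.init.get(&offset).unwrap_or(&false) {
            // Initialized: fetch the data, or fall back to `missing_data`
            // (e.g. error with "can't be moved after first use").
            if let Some(&d) = self.data.get(&offset) {
                Ok(d)
            } else {
                let d = missing_data()?;
                self.data.insert(offset, d);
                Ok(d)
            }
        } else {
            // Not initialized: create fresh data and set the cookie.
            let id = self.next_id;
            self.next_id += 1;
            self.init.insert(offset, true);
            self.data.insert(offset, id);
            Ok(id)
        }
    }
}

fn main() {
    let mut t = SyncTable::new();
    let a = t.lazy_sync_get_data(0, || Err("moved".into())).unwrap();
    let b = t.lazy_sync_get_data(0, || Err("moved".into())).unwrap();
    assert_eq!(a, b); // same offset yields the same data
    // Simulate a move: cookie present at the new offset, but no data there.
    t.init.insert(8, true);
    assert!(t.lazy_sync_get_data(8, || Err("can't be moved after first use".into())).is_err());
    println!("ok");
}
```

The pthread shims pass a `missing_data` that errors (move is UB there), while the macOS shim returns a `Poisoned` marker instead.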
2 changes: 1 addition & 1 deletion src/tools/miri/src/concurrency/thread.rs
@@ -59,7 +59,7 @@ macro_rules! callback {
@unblock = |$this:ident| $unblock:block
) => {
callback!(
@capture<$tcx, $($lft),*> { $($name: $type),+ }
@capture<$tcx, $($lft),*> { $($name: $type),* }
@unblock = |$this| $unblock
@timeout = |_this| {
unreachable!(
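The one-character change above swaps `$($name: $type),+` for `$($name: $type),*` in the forwarded capture list: `+` demands at least one repetition, so an empty `@capture<'tcx> {}` list would fail to match. A minimal macro (hypothetical, not Miri's actual `callback!`) showing why `*` is needed:

```rust
// `*` accepts zero or more captures; with `+` in the matcher,
// the `captures_star!({})` call below would be a compile error.
macro_rules! captures_star {
    ({ $($name:ident: $val:expr),* }) => {{
        let v: Vec<i32> = vec![$($val),*];
        let _ = ($(stringify!($name)),*); // names are still usable in the expansion
        v
    }};
}

fn main() {
    // Works with captures...
    assert_eq!(captures_star!({ x: 1, y: 2 }), vec![1, 2]);
    // ...and with an empty capture list, which `+` would reject.
    assert_eq!(captures_star!({}), Vec::<i32>::new());
    println!("ok");
}
```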
98 changes: 83 additions & 15 deletions src/tools/miri/src/shims/unix/macos/sync.rs
@@ -10,24 +10,42 @@
//! and we do not detect copying of the lock, but macOS doesn't guarantee anything
//! in that case either.

use rustc_target::abi::Size;

use crate::*;

struct MacOsUnfairLock {
id: MutexId,
#[derive(Copy, Clone)]
enum MacOsUnfairLock {
Poisoned,
Active { id: MutexId },
}

impl<'tcx> EvalContextExtPriv<'tcx> for crate::MiriInterpCx<'tcx> {}
trait EvalContextExtPriv<'tcx>: crate::MiriInterpCxExt<'tcx> {
fn os_unfair_lock_getid(&mut self, lock_ptr: &OpTy<'tcx>) -> InterpResult<'tcx, MutexId> {
fn os_unfair_lock_get_data(
&mut self,
lock_ptr: &OpTy<'tcx>,
) -> InterpResult<'tcx, MacOsUnfairLock> {
let this = self.eval_context_mut();
let lock = this.deref_pointer(lock_ptr)?;
// We store the mutex ID in the `sync` metadata. This means that when the lock is moved,
// that's just implicitly creating a new lock at the new location.
let data = this.get_sync_or_init(lock.ptr(), |machine| {
let id = machine.sync.mutex_create();
interp_ok(MacOsUnfairLock { id })
})?;
interp_ok(data.id)
this.lazy_sync_get_data(
&lock,
Size::ZERO, // offset for init tracking
|| {
// If we get here, due to how we reset things to zero in `os_unfair_lock_unlock`,
// this means the lock was moved while locked. This can happen with a `std` lock,
// but then any future attempt to unlock will just deadlock. In practice, terrible
// things can probably happen if you swap two locked locks, since they'd wake up
// from the wrong queue... we just won't catch all UB of this library API then (we
// would need to store some unique identifier in-memory for this, instead of a static
// LAZY_INIT_COOKIE). This can't be hit via `std::sync::Mutex`.
interp_ok(MacOsUnfairLock::Poisoned)
},
|ecx| {
let id = ecx.machine.sync.mutex_create();
interp_ok(MacOsUnfairLock::Active { id })
},
)
}
}

@@ -36,7 +54,21 @@ pub trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
fn os_unfair_lock_lock(&mut self, lock_op: &OpTy<'tcx>) -> InterpResult<'tcx> {
let this = self.eval_context_mut();

let id = this.os_unfair_lock_getid(lock_op)?;
let MacOsUnfairLock::Active { id } = this.os_unfair_lock_get_data(lock_op)? else {
// Trying to get a poisoned lock. Just block forever...
this.block_thread(
BlockReason::Sleep,
None,
callback!(
@capture<'tcx> {}
@unblock = |_this| {
panic!("we shouldn't wake up ever")
}
),
);
return interp_ok(());
};

if this.mutex_is_locked(id) {
if this.mutex_get_owner(id) == this.active_thread() {
// Matching the current macOS implementation: abort on reentrant locking.
@@ -60,7 +92,12 @@ pub trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
) -> InterpResult<'tcx> {
let this = self.eval_context_mut();

let id = this.os_unfair_lock_getid(lock_op)?;
let MacOsUnfairLock::Active { id } = this.os_unfair_lock_get_data(lock_op)? else {
// Trying to get a poisoned lock. That never works.
this.write_scalar(Scalar::from_bool(false), dest)?;
return interp_ok(());
};

if this.mutex_is_locked(id) {
// Contrary to the blocking lock function, this does not check for
// reentrancy.
@@ -76,40 +113,71 @@ pub trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
fn os_unfair_lock_unlock(&mut self, lock_op: &OpTy<'tcx>) -> InterpResult<'tcx> {
let this = self.eval_context_mut();

let id = this.os_unfair_lock_getid(lock_op)?;
let MacOsUnfairLock::Active { id } = this.os_unfair_lock_get_data(lock_op)? else {
// The lock is poisoned, who knows who owns it... we'll pretend: someone else.
throw_machine_stop!(TerminationInfo::Abort(
"attempted to unlock an os_unfair_lock not owned by the current thread".to_owned()
));
};

// Now, unlock.
if this.mutex_unlock(id)?.is_none() {
// Matching the current macOS implementation: abort.
throw_machine_stop!(TerminationInfo::Abort(
"attempted to unlock an os_unfair_lock not owned by the current thread".to_owned()
));
}

// If the lock is not locked by anyone now, it went quiet.
// Reset to zero so that it can be moved and initialized again for the next phase.
if !this.mutex_is_locked(id) {
let lock_place = this.deref_pointer_as(lock_op, this.machine.layouts.u32)?;
this.write_scalar_atomic(Scalar::from_u32(0), &lock_place, AtomicWriteOrd::Relaxed)?;
}

interp_ok(())
}
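The unlock path above resets the lock's bytes to zero once no thread holds it, so that a subsequent move looks like a fresh, unused lock rather than a moved-while-held one. A toy model of that step (hypothetical names; the owner is tracked as an `Option` instead of a thread queue):

```rust
// Sketch of "reset to zero when quiet": unlocking as the owner clears the
// in-memory bytes back to the initializer pattern (all zeroes here).
fn unlock_and_maybe_reset(locked_by: &mut Option<u32>, me: u32, bytes: &mut u32) -> bool {
    if *locked_by != Some(me) {
        return false; // not the owner: unlocking must fail
    }
    *locked_by = None;
    // The lock went quiet: reset so a later move re-initializes cleanly.
    *bytes = 0;
    true
}

fn main() {
    let mut owner = Some(7u32);
    let mut bytes = 0xDEAD_BEEFu32; // stand-in for "in use" bytes
    assert!(unlock_and_maybe_reset(&mut owner, 7, &mut bytes));
    assert_eq!(bytes, 0); // moved afterwards, it reads as freshly initialized
    assert!(!unlock_and_maybe_reset(&mut owner, 7, &mut bytes)); // already unlocked
    println!("ok");
}
```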

fn os_unfair_lock_assert_owner(&mut self, lock_op: &OpTy<'tcx>) -> InterpResult<'tcx> {
let this = self.eval_context_mut();

let id = this.os_unfair_lock_getid(lock_op)?;
let MacOsUnfairLock::Active { id } = this.os_unfair_lock_get_data(lock_op)? else {
// The lock is poisoned, who knows who owns it... we'll pretend: someone else.
throw_machine_stop!(TerminationInfo::Abort(
"called os_unfair_lock_assert_owner on an os_unfair_lock not owned by the current thread".to_owned()
));
};
if !this.mutex_is_locked(id) || this.mutex_get_owner(id) != this.active_thread() {
throw_machine_stop!(TerminationInfo::Abort(
"called os_unfair_lock_assert_owner on an os_unfair_lock not owned by the current thread".to_owned()
));
}

// The lock is definitely not quiet since we are the owner.

interp_ok(())
}

fn os_unfair_lock_assert_not_owner(&mut self, lock_op: &OpTy<'tcx>) -> InterpResult<'tcx> {
let this = self.eval_context_mut();

let id = this.os_unfair_lock_getid(lock_op)?;
let MacOsUnfairLock::Active { id } = this.os_unfair_lock_get_data(lock_op)? else {
// The lock is poisoned, who knows who owns it... we'll pretend: someone else.
return interp_ok(());
};
if this.mutex_is_locked(id) && this.mutex_get_owner(id) == this.active_thread() {
throw_machine_stop!(TerminationInfo::Abort(
"called os_unfair_lock_assert_not_owner on an os_unfair_lock owned by the current thread".to_owned()
));
}

// If the lock is not locked by anyone now, it went quiet.
// Reset to zero so that it can be moved and initialized again for the next phase.
if !this.mutex_is_locked(id) {
let lock_place = this.deref_pointer_as(lock_op, this.machine.layouts.u32)?;
this.write_scalar_atomic(Scalar::from_u32(0), &lock_place, AtomicWriteOrd::Relaxed)?;
}

interp_ok(())
}
}
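The `Poisoned`/`Active` split in this file amounts to a small state machine: a lock detected as moved-while-held never locks again, and unlocking it aborts as "not owned by the current thread". A simplified model (hypothetical; ignores threads, blocking, and wait queues):

```rust
// Minimal model of the poisoned-lock behavior sketched above.
#[derive(Copy, Clone)]
enum UnfairLock {
    Poisoned,                 // moved while held: deliberately degraded
    Active { locked: bool },  // normal lock, tracked by a simple flag
}

fn trylock(l: &mut UnfairLock) -> bool {
    match l {
        UnfairLock::Poisoned => false, // trying a poisoned lock never succeeds
        UnfairLock::Active { locked } if *locked => false,
        UnfairLock::Active { locked } => { *locked = true; true }
    }
}

fn unlock(l: &mut UnfairLock) -> Result<(), &'static str> {
    match l {
        UnfairLock::Poisoned => Err("not owned by the current thread"),
        UnfairLock::Active { locked } if *locked => { *locked = false; Ok(()) }
        UnfairLock::Active { .. } => Err("not owned by the current thread"),
    }
}

fn main() {
    let mut l = UnfairLock::Active { locked: false };
    assert!(trylock(&mut l));
    assert!(!trylock(&mut l)); // already held
    assert!(unlock(&mut l).is_ok());

    let mut p = UnfairLock::Poisoned;
    assert!(!trylock(&mut p));          // trylock on poisoned: always false
    assert!(unlock(&mut p).is_err());   // unlock on poisoned: always an error
    println!("ok");
}
```

In the real shim, a blocking lock on a poisoned lock additionally parks the thread forever, which Miri then reports as a deadlock.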
71 changes: 43 additions & 28 deletions src/tools/miri/src/shims/unix/sync.rs
@@ -189,11 +189,16 @@ fn mutex_get_data<'tcx, 'a>(
mutex_ptr: &OpTy<'tcx>,
) -> InterpResult<'tcx, PthreadMutex> {
let mutex = ecx.deref_pointer(mutex_ptr)?;
ecx.lazy_sync_get_data(&mutex, mutex_init_offset(ecx)?, "pthread_mutex_t", |ecx| {
let kind = mutex_kind_from_static_initializer(ecx, &mutex)?;
let id = ecx.machine.sync.mutex_create();
interp_ok(PthreadMutex { id, kind })
})
ecx.lazy_sync_get_data(
&mutex,
mutex_init_offset(ecx)?,
|| throw_ub_format!("`pthread_mutex_t` can't be moved after first use"),
|ecx| {
let kind = mutex_kind_from_static_initializer(ecx, &mutex)?;
let id = ecx.machine.sync.mutex_create();
interp_ok(PthreadMutex { id, kind })
},
)
}

/// Returns the kind of a static initializer.
@@ -261,17 +266,22 @@ fn rwlock_get_data<'tcx>(
rwlock_ptr: &OpTy<'tcx>,
) -> InterpResult<'tcx, PthreadRwLock> {
let rwlock = ecx.deref_pointer(rwlock_ptr)?;
ecx.lazy_sync_get_data(&rwlock, rwlock_init_offset(ecx)?, "pthread_rwlock_t", |ecx| {
if !bytewise_equal_atomic_relaxed(
ecx,
&rwlock,
&ecx.eval_path(&["libc", "PTHREAD_RWLOCK_INITIALIZER"]),
)? {
throw_unsup_format!("unsupported static initializer used for `pthread_rwlock_t`");
}
let id = ecx.machine.sync.rwlock_create();
interp_ok(PthreadRwLock { id })
})
ecx.lazy_sync_get_data(
&rwlock,
rwlock_init_offset(ecx)?,
|| throw_ub_format!("`pthread_rwlock_t` can't be moved after first use"),
|ecx| {
if !bytewise_equal_atomic_relaxed(
ecx,
&rwlock,
&ecx.eval_path(&["libc", "PTHREAD_RWLOCK_INITIALIZER"]),
)? {
throw_unsup_format!("unsupported static initializer used for `pthread_rwlock_t`");
}
let id = ecx.machine.sync.rwlock_create();
interp_ok(PthreadRwLock { id })
},
)
}
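Both `rwlock_get_data` and `cond_get_data` decide whether a primitive came from a static initializer by comparing its bytes against the libc initializer pattern. A sketch of that check (an assumption here: the initializer is modeled as all zeroes; the real code does relaxed atomic reads via `bytewise_equal_atomic_relaxed`):

```rust
// Hypothetical helper: a primitive counts as statically initialized only if
// its bytes exactly match the reference initializer pattern.
fn is_static_initializer(bytes: &[u8], reference: &[u8]) -> bool {
    bytes.len() == reference.len() && bytes.iter().zip(reference).all(|(a, b)| a == b)
}

fn main() {
    let reference = [0u8; 8]; // e.g. PTHREAD_RWLOCK_INITIALIZER modeled as zeroes
    assert!(is_static_initializer(&[0u8; 8], &reference));
    // Any other pattern is an unsupported custom initializer.
    assert!(!is_static_initializer(&[1, 0, 0, 0, 0, 0, 0, 0], &reference));
    println!("ok");
}
```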

// # pthread_condattr_t
@@ -386,18 +396,23 @@ fn cond_get_data<'tcx>(
cond_ptr: &OpTy<'tcx>,
) -> InterpResult<'tcx, PthreadCondvar> {
let cond = ecx.deref_pointer(cond_ptr)?;
ecx.lazy_sync_get_data(&cond, cond_init_offset(ecx)?, "pthread_cond_t", |ecx| {
if !bytewise_equal_atomic_relaxed(
ecx,
&cond,
&ecx.eval_path(&["libc", "PTHREAD_COND_INITIALIZER"]),
)? {
throw_unsup_format!("unsupported static initializer used for `pthread_cond_t`");
}
// This used the static initializer. The clock there is always CLOCK_REALTIME.
let id = ecx.machine.sync.condvar_create();
interp_ok(PthreadCondvar { id, clock: ClockId::Realtime })
})
ecx.lazy_sync_get_data(
&cond,
cond_init_offset(ecx)?,
|| throw_ub_format!("`pthread_cond_t` can't be moved after first use"),
|ecx| {
if !bytewise_equal_atomic_relaxed(
ecx,
&cond,
&ecx.eval_path(&["libc", "PTHREAD_COND_INITIALIZER"]),
)? {
throw_unsup_format!("unsupported static initializer used for `pthread_cond_t`");
}
// This used the static initializer. The clock there is always CLOCK_REALTIME.
let id = ecx.machine.sync.condvar_create();
interp_ok(PthreadCondvar { id, clock: ClockId::Realtime })
},
)
}

impl<'tcx> EvalContextExt<'tcx> for crate::MiriInterpCx<'tcx> {}
15 changes: 10 additions & 5 deletions src/tools/miri/src/shims/windows/sync.rs
@@ -24,11 +24,16 @@ trait EvalContextExtPriv<'tcx>: crate::MiriInterpCxExt<'tcx> {
let init_once = this.deref_pointer(init_once_ptr)?;
let init_offset = Size::ZERO;

this.lazy_sync_get_data(&init_once, init_offset, "INIT_ONCE", |this| {
// TODO: check that this is still all-zero.
let id = this.machine.sync.init_once_create();
interp_ok(WindowsInitOnce { id })
})
this.lazy_sync_get_data(
&init_once,
init_offset,
|| throw_ub_format!("`INIT_ONCE` can't be moved after first use"),
|this| {
// TODO: check that this is still all-zero.
let id = this.machine.sync.init_once_create();
interp_ok(WindowsInitOnce { id })
},
)
}

/// Returns `true` if we were successful, `false` if we would block.
@@ -0,0 +1,13 @@
//@only-target: darwin

use std::cell::UnsafeCell;

fn main() {
let lock = UnsafeCell::new(libc::OS_UNFAIR_LOCK_INIT);

unsafe { libc::os_unfair_lock_lock(lock.get()) };
let lock = lock;
// This needs to either error or deadlock.
unsafe { libc::os_unfair_lock_lock(lock.get()) };
//~^ error: deadlock
}
@@ -0,0 +1,13 @@
error: deadlock: the evaluated program deadlocked
--> tests/fail-dep/concurrency/apple_os_unfair_lock_move_deadlock.rs:LL:CC
|
LL | unsafe { libc::os_unfair_lock_lock(lock.get()) };
| ^ the evaluated program deadlocked
|
= note: BACKTRACE:
= note: inside `main` at tests/fail-dep/concurrency/apple_os_unfair_lock_move_deadlock.rs:LL:CC

note: some details are omitted, run with `MIRIFLAGS=-Zmiri-backtrace=full` for a verbose backtrace

error: aborting due to 1 previous error

@@ -16,8 +16,8 @@ fn main() {

// `os_unfair_lock`s can be moved and leaked.
// In the real implementation, even moving it while locked is possible
// (and "forks" the lock, i.e. old and new location have independent wait queues);
// Miri behavior differs here and anyway none of this is documented.
// (and "forks" the lock, i.e. old and new location have independent wait queues).
// We only test the somewhat sane case of moving while unlocked that `std` plans to rely on.
let lock = lock;
let locked = unsafe { libc::os_unfair_lock_trylock(lock.get()) };
assert!(locked);