
Commit

Rollup merge of rust-lang#44595 - budziq:stabilize_compiler_fences, r=alexcrichton

stabilized compiler_fences (fixes rust-lang#41091)

I did not know how to proceed with the "unstable-book" entry. The feature would no longer be unstable, so I have deleted it. If it was the wrong call I'll revert it (unfortunately this case is not described in CONTRIBUTING.md).
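For context, and not part of the commit text itself: with the `#[unstable(feature = "compiler_fences", issue = "41091")]` attribute replaced by `#[stable(feature = "compiler_fences", since = "1.22.0")]` in the diff below, the function can be called without a `#![feature(compiler_fences)]` crate attribute. A minimal sketch of such a call on stable Rust:

```rust
use std::sync::atomic::{compiler_fence, Ordering};

fn main() {
    // Emits no machine code; it only restricts how the compiler may
    // re-order memory operations around this point.
    compiler_fence(Ordering::SeqCst);
}
```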
TimNN authored Sep 17, 2017
2 parents 640dcef + 5f62c0c commit a28cfb2
Showing 2 changed files with 54 additions and 111 deletions.
106 changes: 0 additions & 106 deletions src/doc/unstable-book/src/library-features/compiler-fences.md

This file was deleted.

59 changes: 54 additions & 5 deletions src/libcore/sync/atomic.rs
@@ -1679,10 +1679,14 @@ pub fn fence(order: Ordering) {

/// A compiler memory fence.
///
/// `compiler_fence` does not emit any machine code, but prevents the compiler from re-ordering
/// memory operations across this point. Which reorderings are disallowed is dictated by the given
/// [`Ordering`]. Note that `compiler_fence` does *not* introduce inter-thread memory
/// synchronization; for that, a [`fence`] is needed.
/// `compiler_fence` does not emit any machine code, but restricts the kinds
/// of memory re-ordering the compiler is allowed to do. Specifically, depending on
/// the given [`Ordering`] semantics, the compiler may be disallowed from moving reads
/// or writes from before or after the call to the other side of the call to
/// `compiler_fence`. Note that it does **not** prevent the *hardware*
/// from doing such re-ordering. This is not a problem in a single-threaded
/// execution context, but when other threads may modify memory at the same
/// time, stronger synchronization primitives such as [`fence`] are required.
///
/// The re-orderings prevented by the different ordering semantics are:
///
@@ -1691,19 +1695,64 @@ pub fn fence(order: Ordering) {
/// - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
/// - with [`AcqRel`], both of the above rules are enforced.
///
/// `compiler_fence` is generally only useful for preventing a thread from
/// racing *with itself*. That is, a given thread may be executing one piece
/// of code, be interrupted, and start executing code elsewhere (while still
/// in the same thread, and conceptually still on the same core). In
/// traditional programs, this can only occur when a signal
/// handler is registered. In more low-level code, such situations can also
/// arise when handling interrupts, when implementing green threads with
/// pre-emption, etc. Curious readers are encouraged to read the Linux kernel's
/// discussion of [memory barriers].
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// Without `compiler_fence`, the `assert_eq!` in the following code
/// is *not* guaranteed to succeed, despite everything happening in a single thread.
/// To see why, remember that the compiler is free to swap the stores to
/// `IMPORTANT_VARIABLE` and `IS_READY` since they are both
/// `Ordering::Relaxed`. If it does, and the signal handler is invoked right
/// after `IS_READY` is updated, then the signal handler will see
/// `IS_READY=1`, but `IMPORTANT_VARIABLE=0`.
/// Using a `compiler_fence` remedies this situation.
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize};
/// use std::sync::atomic::{ATOMIC_BOOL_INIT, ATOMIC_USIZE_INIT};
/// use std::sync::atomic::Ordering;
/// use std::sync::atomic::compiler_fence;
///
/// static IMPORTANT_VARIABLE: AtomicUsize = ATOMIC_USIZE_INIT;
/// static IS_READY: AtomicBool = ATOMIC_BOOL_INIT;
///
/// fn main() {
///     IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
///     // prevent earlier writes from being moved beyond this point
///     compiler_fence(Ordering::Release);
///     IS_READY.store(true, Ordering::Relaxed);
/// }
///
/// fn signal_handler() {
///     if IS_READY.load(Ordering::Relaxed) {
///         assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
///     }
/// }
/// ```
///
/// [`fence`]: fn.fence.html
/// [`Ordering`]: enum.Ordering.html
/// [`Acquire`]: enum.Ordering.html#variant.Acquire
/// [`SeqCst`]: enum.Ordering.html#variant.SeqCst
/// [`Release`]: enum.Ordering.html#variant.Release
/// [`AcqRel`]: enum.Ordering.html#variant.AcqRel
/// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
/// [memory barriers]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
#[inline]
#[unstable(feature = "compiler_fences", issue = "41091")]
#[stable(feature = "compiler_fences", since = "1.22.0")]
pub fn compiler_fence(order: Ordering) {
unsafe {
match order {
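The doc example above covers the single-threaded (signal handler) case with `compiler_fence`. As a complement, here is a minimal sketch, not part of this commit, of the multi-threaded situation the new text alludes to, where a full `fence` is needed because the hardware may also re-order memory operations; the `thread`/`Arc` scaffolding is illustrative only:

```rust
use std::sync::Arc;
use std::sync::atomic::{fence, AtomicBool, AtomicUsize, Ordering};
use std::thread;

fn main() {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (data2, ready2) = (data.clone(), ready.clone());
    let producer = thread::spawn(move || {
        data2.store(42, Ordering::Relaxed);
        // Unlike `compiler_fence`, this also orders the stores for other
        // threads: it constrains the hardware, not just the compiler.
        fence(Ordering::Release);
        ready2.store(true, Ordering::Relaxed);
    });

    // The consumer may or may not observe the flag, but if it does, the
    // acquire fence guarantees that the data written before the release
    // fence is visible as well.
    if ready.load(Ordering::Relaxed) {
        fence(Ordering::Acquire);
        assert_eq!(data.load(Ordering::Relaxed), 42);
    }

    producer.join().unwrap();
}
```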
