
control_flow by &mut rationale #1018

Open
mickvangelderen opened this issue Jul 6, 2019 · 6 comments
Labels
F - question (There's no such thing as a stupid one), S - meta (Project governance)

Comments

@mickvangelderen

I would like to understand the rationale for having control_flow as a mutable parameter, rather than the return value, in EventLoop::run and similar methods. I am confused because, when it is returned by value, it is harder to forget to set. What are the downsides?

@goddessfreya goddessfreya transferred this issue from rust-windowing/glutin Jul 6, 2019
@goddessfreya goddessfreya added the S - meta and F - question labels Jul 6, 2019
@Osspial
Contributor

Osspial commented Jul 6, 2019

Thanks for the question!

I've actually answered this before, but the answer is buried pretty deeply in the original API discussion so it's pretty hard to find. I'll add it to the FAQ in the announcement post, but I'll also copy it here for posterity:

With a return-based solution, there isn't one clear "correct" way of handling the loop's control flow. Consider the following event loop:

event_loop.run(|event| {
    match event {
        Event::EventsCleared => ControlFlow::Poll,
        _ => ControlFlow::Wait
    }
});

That situation leads to return patterns like Wait, Wait, Wait, {...}, Poll. What do you do with the Waits that are returned before the Poll? You could interpret them as "halt the event loop between each call to the function", which is pretty clearly nonsense. However, it's the only solution that doesn't involve throwing away values. Other solutions could be to have some sort of "control flow precedence pattern" where returning one type overrides future returns of another type, or to ignore all of the return values except for the one from EventsCleared (in which case what's the point of having it return anyway!).

As a persistent parameter, there are no values we have to ignore and it's clear when the control flow of the loop actually changes. Under this, Wait, WaitUntil, and Poll only impact control flow when the event queue has been cleared, and Exit kills the loop immediately.
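To make the persistent-parameter model concrete, here's a minimal self-contained sketch. The types and the `run_batches` driver are toy stand-ins (not winit's actual API); the point is only that the `ControlFlow` value survives across callback invocations and is inspected once per cleared queue:

```rust
// Toy stand-ins for winit's types, to show the `&mut ControlFlow` shape.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum ControlFlow { Poll, Wait, Exit }

pub enum Event { WindowEvent, EventsCleared }

// The control flow value persists across callback invocations; the loop
// only inspects it after the event queue has been drained.
pub fn run_batches<F>(batches: Vec<Vec<Event>>, mut handler: F) -> usize
where
    F: FnMut(Event, &mut ControlFlow),
{
    let mut control_flow = ControlFlow::Poll;
    let mut iterations = 0;
    for batch in batches {
        for event in batch {
            handler(event, &mut control_flow);
        }
        // `Exit` set at *any* point in the batch is still visible here;
        // nothing gets thrown away.
        iterations += 1;
        if control_flow == ControlFlow::Exit {
            break;
        }
    }
    iterations
}
```

A handler that sets `Exit` partway through a batch still takes effect once the batch is drained; there are no discarded values, unlike the return-based design.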

@mickvangelderen
Author

mickvangelderen commented Jul 8, 2019

Thank you for answering.

Now I'm wondering whether I should put my update and render code (which are coupled 1:1 for now) in NewEvents(Poll) or in EventsCleared. I would rather see run take an iterator over the 0..n available events. That would make it easier to accumulate multiple events into a single value (like mouse dx/dy).

event_loop.run(|events| {
    let mut keep_running = true;
    let mut mouse_dx = 0; // accumulate multiple mouse move events.
    for event in events {
        // handle window closed etc. add mouse_dx
    }

    // simulation update
    // render
    // swap buffers (blocks if vsync enabled)

    if keep_running { ControlFlow::Poll } else { ControlFlow::Exit } // or yield until next update/render start
})

As far as I understand, the main reason for having glutin take ownership of the thread is that some of the underlying APIs require it. My intuition tells me that the above design still does that, while being a more natural translation of the ideal poll_events API we currently have. The above design must have been considered before; do you know where I can find that discussion?

@Osspial
Contributor

Osspial commented Jul 9, 2019

Now I'm wondering if I should put my update and render code (which are coupled 1:1 for now) in NewEvents(Poll) or in EventsCleared.

It shouldn't matter a whole lot, but I'd personally put them in EventsCleared.


As far as the design you proposed - I don't recall that specific design ever getting discussed. There are a couple reasons I prefer the current design, though:

  1. Exposing events the way we currently do allows downstream applications to implement smooth window resizing properly, since the underlying OS rendering toolchain prefers that applications redraw their contents exactly when requested, with no event buffering.
  2. The current design allows us to add borrowing events (Event<'a>), which is the only way to expose some window sizing-related events.
  3. Higher-level APIs like the one you describe can be built on top of the current API, but our current API can't be built on top of those higher-level APIs without losing UX benefits and the benefits described above.
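To illustrate point 2, here's a hedged sketch of why a borrowing event forces unbuffered dispatch. The names here are hypothetical, but the shape mirrors the `HiDpiFactorChanged { new_inner_size: &mut Option<...> }` event that comes up later in this thread: the OS-side code reads the handler's answer back through the borrow immediately after dispatch, so the event cannot be pushed into a buffer that outlives the call:

```rust
// Hypothetical borrowing event; the handler answers the OS through the borrow.
pub enum Event<'a> {
    ScaleChanged { suggested_size: &'a mut Option<u32> },
}

// The event must be dispatched *while* the OS still holds the backing
// storage. Buffering `Event<'a>` in a Vec that outlives this call can't work.
pub fn dispatch_scale_change<F>(mut handler: F) -> Option<u32>
where
    F: FnMut(Event<'_>),
{
    let mut suggested_size = Some(800); // storage lives only for this call
    handler(Event::ScaleChanged { suggested_size: &mut suggested_size });
    suggested_size // OS-side code reads the handler's answer immediately
}
```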

(There are nuances to all of those answers that I didn't bring up for the sake of brevity and not diving too far into the technical details, but I'll happily talk about those if you'd like).

That being said, I'd agree that the API you describe is easier to use in a lot of ways. I've actually been doing work recently to use Async/Await to expose something similar to what you've described there:

event_loop.run_async(async move |mut runner| {
    'main: loop {
        let mut recv_events = runner.recv_events().await;
        while let Some(event) = recv_events.next().await {
            match event {
                Event::WindowEvent {
                    event: WindowEvent::CloseRequested,
                    ..
                } => {
                    break 'main;
                },
                _ => println!("\t{:?}", event),
            }
        }

        window.request_redraw();

        let mut redraw_requests = recv_events.redraw_requests().await;
        while let Some(window_id) = redraw_requests.next().await {
            println!("\tredraw {:?}", window_id);
        }
        println!();
    }
})

However, I've ended up uncovering quite a few bugs in our current code when implementing it, so I haven't published it quite yet.

@mickvangelderen
Author

(There are nuances to all of those answers that I didn't bring up for the sake of brevity and not diving too far into the technical details, but I'll happily talk about those if you'd like).

I would love to, especially because glutin is making this big step and I want to understand and embrace its new design. With that said, I would like to challenge you on the points you presented so that I can come to understand the new design.

1. Exposing events the way we do currently allows downstream applications to implement smooth window resizing properly, since the underlying OS rendering toolchain prefers applications redraw their contents exactly when requested, with no event buffering.

In that case, can't glutin emit an iterator of exactly one element? It is still in control of calling the run callback.

2. The current design allows us to add borrowing events (Event<'a>), which is the only way to expose some window sizing-related events.

I've tried to imagine what glutin needs and wrote this implementation that allows events with lifetimes:

// NOTE: Hide the std::vec::Drain by wrapping the iterator in a newtype.
pub fn run<F>(mut event_handler: F) -> !
where
    F: 'static + for<'e> FnMut(std::vec::Drain<Event<'e>>) -> ControlFlow,
{
    let resource = Resource {
        name: String::from("I am some resource.")
    };
    
    let mut event_buffer: Vec<Event> = Vec::new();
    loop {
        // Collect events.
        event_buffer.push(Event::Update(&resource));
        
        // Emit events.
        match event_handler(event_buffer.drain(0..)) {
            ControlFlow::Poll => continue,
            ControlFlow::Exit => break,
        }
    }
    
    std::process::exit(0);
}

fn main() {
    run(|event_iter| {
        let mut should_exit = false;
        
        for event in event_iter {
            match event {
                Event::Update(res) => {
                    println!("{}", res.name);
                    should_exit = true;
                }
            }   
        }
        
        if should_exit {
            ControlFlow::Exit
        } else {
            ControlFlow::Poll
        }
    })
}

Full code: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1d4c198e186438555b3fc4ed37ed2534
Hopefully that covers what glutin needs to do but I probably missed something.

3. Higher-level APIs like the one you describe can be built on top of the current API, but our current API can't be built on top of those higher-level APIs without losing UX benefits and the benefits described above.

Intuitively, I think it is easier to emit single events when you have an iterator than to collect single events into one. Here's an example implementation built on the run function above.

pub fn run_single<F>(mut event_handler: F) -> !
where
    F: 'static + for<'e> FnMut(Event<'e>, &mut ControlFlow),
{
    run(move |event_iter| {
        let mut control_flow = ControlFlow::Poll;
        
        for event in event_iter {
            event_handler(event, &mut control_flow);
        }
        
        control_flow
    })
}

fn main() {
    run_single(|event, control_flow| {
        match event {
            Event::Update(res) => {
                println!("{}", res.name);
                *control_flow = ControlFlow::Exit;
            }
        }   
    })
}

Full code: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=1d4c198e186438555b3fc4ed37ed2534

As a final note, the design at point 3 looks nicer for this example. However, it doesn't show the confusion (the multiple possible implementations) that the event loop events (NewEvents/EventsCleared) introduce. Nor does the example accumulate state from the events before doing an actual simulation update.

@Osspial
Contributor

Osspial commented Jul 23, 2019

Sorry about the delay - it took a while for me to find the time to answer this properly.

I would love to, especially because glutin is making this big step and I want to understand and embrace its new design. With that said, I would like to challenge you on the points you presented so that I can come to understand the new design.

See #1041 for info on how exactly smooth resizing works, and the technical reasons why the current EventLoop structure is necessary. I'll intersperse technical details through the rest of this as appropriate.

In that case, can't glutin can emit an iterator of exactly 1 element? It is still in control of calling the run callback.

Strictly speaking, yes. However, that makes it really hard to write an event loop that functions in a sane way. Let's go over a couple potential implementations to see what the problems are, in order to figure out how to structure things properly:

This first example is pretty much a direct copy of the first model you suggested, and I'd expect is the first way most people would think to do things:

event_loop.run(|events| {
    let mut keep_running = true;
    let mut mouse_dx = 0; // accumulate multiple mouse move events.
    for event in events {
        // handle window closed etc. add mouse_dx
    }

    // simulation update
    // render
    // swap buffers (blocks if vsync enabled)

    if keep_running { ControlFlow::Poll } else { ControlFlow::Exit } // or yield until next update/render start
})

However, every time the OS requests a redraw (or the program calls request_redraw), the program will go through at least two iterations of the simulation/update loop for each iteration of the event loop! The first iteration occurs when Winit emits an iterator for the user input events, and the second occurs when it emits the iterator for the single render event. This problem gets compounded if you have multiple windows updating in a single logical iteration of the event loop, since a separate render iterator must be emitted for each window, leading to further duplicate simulation updates.
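This arithmetic can be made concrete with a toy driver (hypothetical names; it just counts how many times the tail of the naive closure, i.e. the simulation update, would run per logical frame):

```rust
#[derive(Clone, Copy)]
pub enum Event { Input, RedrawRequested }

// Toy model of the iterator-per-batch design: for one logical frame the
// driver invokes the closure once with the input events, then once more
// per window with a single RedrawRequested event.
pub fn drive_frame<F>(input_events: usize, windows: usize, mut handler: F) -> usize
where
    F: FnMut(&[Event]),
{
    let mut closure_calls = 0;
    let inputs: Vec<Event> = vec![Event::Input; input_events];
    handler(&inputs);
    closure_calls += 1;
    for _ in 0..windows {
        handler(&[Event::RedrawRequested]);
        closure_calls += 1;
    }
    closure_calls // == number of simulation updates in the naive closure
}
```

With one input batch and N windows requesting redraws, the naive closure runs its simulation tail 1 + N times per frame.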

Let's try another approach:

event_loop.run(|events| {
    let mut keep_running = true;
    let mut mouse_dx = 0; // accumulate multiple mouse move events.
    let mut redraw = false;

    for event in events {
        match event {
            Event::RedrawRequested => redraw = true,
            _ => {/* handle window closed etc. add mouse_dx */}
        }
    }

    // simulation update
    if redraw {
        // render
        // swap buffers (blocks if vsync enabled)
    }

    if keep_running { ControlFlow::Poll } else { ControlFlow::Exit } // or yield until next update/render start
})

That fixes the duplicate rendering, but doesn't fix the duplicate simulation updates.

A third approach, which only does simulation if the render flag is checked:

event_loop.run(|events| {
    let mut keep_running = true;
    let mut mouse_dx = 0; // accumulate multiple mouse move events.
    let mut update_and_redraw = false;

    for event in events {
        match event {
            Event::RedrawRequested => update_and_redraw = true,
            _ => {
                // handle window closed etc. add mouse_dx
                //
                // We can't actually set the `update_and_redraw` flag here, since that just runs
                // into the duplicate update/render issue in the first example.

                // Instead, we have to call `request_redraw`, which Winit can hopefully turn into
                // a single-event iterator that sets `update_and_redraw` and (mostly) properly handles
                // updating.
                window.request_redraw();
            }
        }
    }

    if update_and_redraw {
        // simulation update
        // render
        // swap buffers (blocks if vsync enabled)
    }

    if keep_running { ControlFlow::Poll } else { ControlFlow::Exit } // or yield until next update/render start
})

That should work for single-window applications, but still leads to duplicate updates when you have multiple windows. I'm pretty sure there are ways around that limitation, but I think my point has been made: it's really hard to structure a loop properly in this scenario, and this structure leads to myriad footguns. It'd be irresponsible of us to expose that.


I've tried to imagine what glutin needs and wrote this implementation that allows events with lifetimes:

/* snip */

The issue with that example is that it doesn't match up with how the OS structures its event loop. I've adapted that into a more accurate example here (playground link):

enum OsEvent<'e> {
    Update(&'e Resource)   
}

fn handle_os_event(_handle_event: impl FnOnce(OsEvent<'_>)) -> bool {
    unimplemented!("get event from OS and dispatch to handler closure")
}

// NOTE: Hide the std::vec::Drain by wrapping the iterator in a newtype.
pub fn run<F>(mut event_handler: F) -> !
where
    F: 'static + for<'e> FnMut(std::vec::Drain<Event<'e>>) -> ControlFlow,
{
    let mut event_buffer: Vec<Event> = Vec::new();
    loop {
        loop {
            // Collect events.
            //
            // This may seem like a contrived way to structure the event loop,
            // but it matches up pretty closely with how the OS actually
            // structures its event loop.
            let more_os_events = handle_os_event(|event| {
                match event {
                    OsEvent::Update(resource) => event_buffer.push(Event::Update(resource)),
                    _ => {/* handle everything else */}
                }
            });
            if !more_os_events {
                break;
            }
        }
        
        // Emit events.
        match event_handler(event_buffer.drain(..)) {
            ControlFlow::Poll => continue,
            ControlFlow::Exit => break,
        }
    }
    
    std::process::exit(0);
}

You'll notice that it doesn't compile. The Rust compiler rightfully rejects it, since the borrowed Resource lives only for the duration of the handle_os_event call, not for the entire lifetime of the event loop.

As far as exact technical details go, check out this branch to see a real example of the lifetime issue in our codebase:

winuser::WM_DPICHANGED => {
    use crate::event::WindowEvent::HiDpiFactorChanged;
    // This message actually provides two DPI values - x and y. However MSDN says that
    // "you only need to use either the X-axis or the Y-axis value when scaling your
    // application since they are the same".
    // https://msdn.microsoft.com/en-us/library/windows/desktop/dn312083(v=vs.85).aspx
    let new_dpi_x = u32::from(LOWORD(wparam as DWORD));
    let new_dpi_factor = dpi_to_scale_factor(new_dpi_x);
    let allow_resize = {
        let mut window_state = subclass_input.window_state.lock();
        let old_dpi_factor = window_state.dpi_factor;
        window_state.dpi_factor = new_dpi_factor;
        new_dpi_factor != old_dpi_factor && window_state.fullscreen.is_none()
    };
    let style = winuser::GetWindowLongW(window, winuser::GWL_STYLE) as _;
    let style_ex = winuser::GetWindowLongW(window, winuser::GWL_EXSTYLE) as _;
    let b_menu = !winuser::GetMenu(window).is_null() as BOOL;
    // New size as suggested by Windows.
    let rect = *(lparam as *const RECT);
    // The window rect provided is the window's outer size, not its inner size. However,
    // win32 doesn't provide an `UnadjustWindowRectEx` function to get the client rect from
    // the outer rect, so we instead adjust the window rect to get the decoration margins
    // and remove them from the outer size.
    let margins_horizontal: u32;
    let margins_vertical: u32;
    {
        let mut adjusted_rect = rect;
        winuser::AdjustWindowRectExForDpi(
            &mut adjusted_rect,
            style,
            b_menu,
            style_ex,
            new_dpi_x,
        );
        let margin_left = rect.left - adjusted_rect.left;
        let margin_right = adjusted_rect.right - rect.right;
        let margin_top = rect.top - adjusted_rect.top;
        let margin_bottom = adjusted_rect.bottom - rect.bottom;
        margins_horizontal = (margin_left + margin_right) as u32;
        margins_vertical = (margin_bottom + margin_top) as u32;
    }
    let physical_inner_rect = PhysicalSize::new(
        (rect.right - rect.left) as u32 - margins_horizontal,
        (rect.bottom - rect.top) as u32 - margins_vertical,
    );
    // `allow_resize` prevents us from re-applying DPI adjustment to the restored size after
    // exiting fullscreen (the restored size is already DPI adjusted).
    let mut new_inner_rect_opt = Some(physical_inner_rect).filter(|_| allow_resize);
    let _ = subclass_input.send_event_unbuffered(Event::WindowEvent {
        window_id: RootWindowId(WindowId(window)),
        event: HiDpiFactorChanged {
            hidpi_factor: new_dpi_factor,
            new_inner_size: &mut new_inner_rect_opt,
        },
    });
    if let Some(new_inner_rect) = new_inner_rect_opt {
        winuser::SetWindowPos(
            window,
            ptr::null_mut(),
            rect.left,
            rect.top,
            (new_inner_rect.width + margins_horizontal) as _,
            (new_inner_rect.height + margins_vertical) as _,
            winuser::SWP_NOZORDER | winuser::SWP_NOACTIVATE,
        );
    }
    0
}

Also, here's a link to the actual implementation of the Windows event loop. There's a bit more indirection in the actual implementation, but structurally it's similar to what I posted above:

let mut msg = mem::zeroed();
let mut msg_unprocessed = false;
'main: loop {
    runner!().new_events();
    loop {
        if !msg_unprocessed {
            if 0 == winuser::PeekMessageW(&mut msg, ptr::null_mut(), 0, 0, 1) {
                break;
            }
        }
        winuser::TranslateMessage(&mut msg);
        winuser::DispatchMessageW(&mut msg);
        msg_unprocessed = false;
    }
    runner!().events_cleared();
    if let Some(payload) = runner!().panic_error.take() {
        panic::resume_unwind(payload);
    }
    if !msg_unprocessed {
        let control_flow = runner!().control_flow;
        match control_flow {
            ControlFlow::Exit => break 'main,
            ControlFlow::Wait => {
                if 0 == winuser::GetMessageW(&mut msg, ptr::null_mut(), 0, 0) {
                    break 'main;
                }
                msg_unprocessed = true;
            }
            ControlFlow::WaitUntil(resume_time) => {
                wait_until_time_or_msg(resume_time);
            }
            ControlFlow::Poll => (),
        }
    }
}

Intuitively I think it is easier to emit single events when you have an iterator rather collecting single events. An example implementation using the above implementation of run.

I'll agree that this is a more intuitive model for the event loop, and if it were possible to expose it without compromises I'd do it. However, it results in losing out on the features and correctness improvements outlined above.

@mickvangelderen
Author

Thank you for the write up. I am glad you've found the time and the effort is much appreciated!

I like the idea of a single event that tells you when to draw, but I am concerned that more advanced applications will be harder to author: ones that need to interleave multiple event loops (like a file watcher or networking), or that use other APIs (like OpenVR, which requires a blocking call that does a predictive wait to ensure the latest possible state when rendering). If I find the time, I should try to implement them using the new API.

However, every time the OS requests a redraw (or, if the program uses request_redraw), the program will go through at least two iterations of the simulation/update loop for each iteration of the event loop! The first iteration will occur when Winit emits an iterator for the user input events, and the second iteration will occur when it emits the iterator for the single render event.

So if I understand correctly, the goal is to debounce drawing operations. What I do not understand is why winit has to emit an iterator for the user input events, and then another iterator for the single render event. If you could merge them, then this is no longer a problem right?

You'll notice that it doesn't compile. The Rust compiler rightfully rejects it, since the borrowed Resource only lives for a lifetime inside handle_os_event and not for the entire lifetime of the event loop.

In that case instead of using an external iterator we could expose an internal iterator (callback) like so:

fn main() {
    run(|event_group| {
        let mut should_exit = false;
        
        event_group.for_each(|event| {
            match event {
                Event::Update(res) => {
                    println!("{}", res.name);
                    should_exit = true;
                }
            }   
        });
        
        if should_exit {
            ControlFlow::Exit
        } else {
            ControlFlow::Poll
        }
    })
}

playground.

This iterator is no longer composable like Rust's iterators, but it can yield events containing references. Additionally, you leverage the OS's buffer and don't need to create another buffer yourself, unless events need to be merged.
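Here's a self-contained sketch of that internal-iterator shape, with toy Event/Resource/ControlFlow types and a hypothetical EventGroup name (run returns an iteration count instead of `!` so the sketch is testable). The borrow stays inside the for_each call, so this compiles where the buffered Drain-based version could not:

```rust
pub struct Resource { pub name: String }

pub enum Event<'e> { Update(&'e Resource) }

#[derive(PartialEq)]
pub enum ControlFlow { Poll, Exit }

// Internal iterator: the event group owns nothing; it dispatches borrowed
// events to a closure while the backing storage is still alive.
pub struct EventGroup<'r> { resource: &'r Resource }

impl<'r> EventGroup<'r> {
    pub fn for_each(&mut self, mut f: impl FnMut(Event<'_>)) {
        // A real backend would pump the OS queue here; we emit one event.
        f(Event::Update(self.resource));
    }
}

pub fn run<F>(mut event_handler: F) -> usize
where
    F: for<'r> FnMut(&mut EventGroup<'r>) -> ControlFlow,
{
    let resource = Resource { name: String::from("I am some resource.") };
    let mut iterations = 0;
    loop {
        iterations += 1;
        let mut group = EventGroup { resource: &resource };
        if event_handler(&mut group) == ControlFlow::Exit {
            break;
        }
    }
    iterations
}
```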
