
Executing futures in wasm #1126

Closed · richard-uk1 opened this issue Dec 23, 2018 · 77 comments

@richard-uk1
Contributor

The wasm-bindgen-futures crate provides for consuming JS promises as Rust futures and for returning futures to JS as promises, but not for executing futures within wasm itself.

I'm currently experimenting with borrowing the strategy of passing futures to JS for my use case.

@alexcrichton
Contributor

I'm not sure it's actually possible to execute futures in wasm without the support of JS? Executing futures involves some degree of blocking, but JS/wasm can't block right now, I think. Do you have a particular API in mind for this?

@richard-uk1
Contributor Author

richard-uk1 commented Dec 24, 2018

I'm playing with creating an indexeddb library. Eventually, I'd like to make it integrate into a virtual DOM library so an action like a click is processed into an immediate DOM update (loading), and a delayed DOM update (once the promises have resolved).

+------------+     +------------+
|click action+----->future chain|
+-----+------+     +-----+------+
      | pending          |
      |                  | future chain is complete
+-----v------+           |
|state change<-----------+
+------------+

You should be able to get this behavior by specifying your state, and functions describing how the state changes when the future resolves or is rejected. This applies equally to fetch and IndexedDB. The beauty of futures is that you can have quite a complex operation with many steps but treat it as a single atomic operation.

I'm just struggling to get my head around how JS and wasm interact here. Do we need an executor in wasm using something like setTimeout(() => {..}, 0), or can we arrange to be woken up by JS when a promise completes?

I always find the lack of determinism in js a bit frustrating (w.r.t. the order that things will happen in), and I feel like futures would help me get a handle on it, but the big picture is just eluding me.

@richard-uk1
Contributor Author

I'm thinking I probably have to pass the function I want to run into the promise, so that JS knows what code to run. Maybe this means I can't use futures (at least not the standard rust ones).

@richard-uk1
Contributor Author

richard-uk1 commented Dec 24, 2018

An example of the code I'd like to be able to write: (pseudo code)

#[derive(Default, Debug, ...)]
struct State {
   name: Option<String>,
   age: Option<u32>,
   loading: bool,
}

enum Action {
   LoadData,
   LoadFailed { reason: String },
   LoadSucceeded {
      name: String,
      age: u32
   }
}

fn merge(action: Action, state: &mut State) {
   match action {
      Action::LoadData => {
         state.loading = true;
      }
      Action::LoadFailed { reason } => {
         eprintln!("loading failed: {}", reason);
      }
      Action::LoadSucceeded { name, age } => {
         state.name = Some(name);
         state.age = Some(age);
         state.loading = false;
      }
   }
}

fn on_click() {
    state.merge(Action::LoadData);
    indexeddb::open("my_db", 1)
        .then(|db| db.start_transaction())
        .then(|(db, trans)| db.object_store("people"))
        .then(|store| store.get(0))
        .then(|record| {
            state.merge(Action::LoadSucceeded {
                name: record.name,
                age: record.age
            });
        }); // todo pass this future somewhere so it gets run.
}

@Pauan
Contributor

Pauan commented Dec 24, 2018

The way that this is handled in stdweb is that it provides a spawn_local function, which lets you spawn a Future:

spawn_local(some_rust_future);

You can then write your code like this:

fn on_click() {
    state.merge(Action::LoadData);
    spawn_local(indexeddb::open("my_db", 1)
        .then(|db| db.start_transaction())
        .then(|(db, trans)| db.object_store("people"))
        .then(|store| store.get(0))
        .then(|record| {
            state.merge(Action::LoadSucceeded {
                name: record.name,
                age: record.age
            });
        }));
}

Or even easier with async/await!:

fn on_click() {
    state.merge(Action::LoadData);
    spawn_local(async {
        let db = await!(indexeddb::open("my_db", 1))?;
        let trans = await!(db.start_transaction())?;
        let store = await!(db.object_store("people"))?;
        let record = await!(store.get(0))?;
        state.merge(Action::LoadSucceeded {
            name: record.name,
            age: record.age
        });
    });
}

Internally spawn_local uses a Rust deque of Tasks, and it uses JavaScript's microtask queue to delay the Future execution by one tick.

The same sort of function should be implementable in wasm-bindgen as well.
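
For illustration, a minimal sketch of such a function, assuming it simply piggybacks on the existing future_to_promise so that the Promise machinery supplies the microtask scheduling (a deque-based implementation as described above would avoid allocating a Promise per spawn):

use futures::Future;
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::future_to_promise;

pub fn spawn_local<F>(future: F)
where
    F: Future<Item = (), Error = ()> + 'static,
{
    // The returned Promise is discarded; the JS microtask queue still drives
    // the future to completion.
    let _ = future_to_promise(
        future
            .map(|()| JsValue::UNDEFINED)
            .map_err(|()| JsValue::UNDEFINED),
    );
}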

@richard-uk1
Contributor Author

@Pauan do you have to be running an event loop for that to work?

@Pauan
Contributor

Pauan commented Dec 25, 2018

@derekdreery In asynchronous code, an event queue is mandatory (though it's hidden from the user).

JavaScript provides two built-in event queues: macrotask and microtask (Promises always use the microtask event queue). You can read more here:

https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/

So the question is whether spawning Futures directly uses the JS microtask queue, or whether it uses a Rust deque.

Using the JS microtask queue is a lot simpler, but using a Rust deque is multiple orders of magnitude faster.

Here is a very old stdweb Executor which doesn't use a Rust deque; it spawns Futures directly using the JS Promise microtask queue.

It was written for Futures 0.1, it's quite old, probably buggy, and quite slow, but it is also simple and easy to understand, so hopefully it's educational.
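
To make the core trick concrete, here is a toy sketch (not the stdweb code) of scheduling a callback on the microtask queue from Rust, by attaching it to an already-resolved Promise:

use js_sys::Promise;
use wasm_bindgen::prelude::*;

fn on_next_microtask<F>(f: F)
where
    F: FnOnce() + 'static,
{
    let mut f = Some(f);
    let closure = Closure::wrap(Box::new(move |_: JsValue| {
        if let Some(f) = f.take() {
            f();
        }
    }) as Box<dyn FnMut(JsValue)>);

    // `then` queues the callback as a microtask, so it runs after the current
    // JS/wasm call stack unwinds but before any macrotasks (or rendering).
    let _ = Promise::resolve(&JsValue::UNDEFINED).then(&closure);

    // Keep the closure alive until the microtask has run; a real executor
    // would cache and reuse a single closure instead of leaking one per call.
    closure.forget();
}

An executor built along these lines would poll the spawned task inside that callback whenever its notify() is called.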

@Pauan
Contributor

Pauan commented Dec 25, 2018

All of this event queue stuff is an implementation detail: the user just calls spawn_local and at some unknown point in the future the Future will run.

I say "unknown point", but it's usually very fast (a few milliseconds at most). The "unknown" part just means you can't rely upon it running at a deterministic point in time, since it's asynchronous.

@richard-uk1
Contributor Author

Thanks for the microtask reading - I feel like I understand all this stuff much better now!

@lcnr
Contributor

lcnr commented Dec 28, 2018

I am currently writing a small HttpRequest crate and ended up creating a wrapper struct with a custom Drop implementation.

pub struct Request<T: Future + 'static = JsFuture>(Option<T>);   

/// snip ..

impl<T: Future + 'static> std::ops::Drop for Request<T> {
    fn drop(&mut self) {
        if let Some(future) = self.0.take() {
            future_to_promise(future.and_then(|_| {
                future::ok(JsValue::null())
            }).or_else(|_| {
                future::err(JsValue::null())
            }));
        }
    }
}

Now the browser is doing most of the work.

Usage:

  Request::new(Method::Get, "example.org/test")
        .header("Accept", "text/plain").send()
        .and_then(|resp_value: JsValue| {
            let resp: Response = resp_value.dyn_into().unwrap();
            resp.text()
        })
        .and_then(|text: Promise| {
            JsFuture::from(text)
        })
        .and_then(|body| {
            println!("Response: {}", body.as_string().unwrap());
            future::ok(())
        });

@Pauan
Contributor

Pauan commented Dec 29, 2018

@lcnr Running an asynchronous Future in the Drop impl seems pretty hacky to me.

If the goal is to cancel a fetch, you should use the standard AbortController, something like this:

pub struct Request<'a> {
    url: &'a str,
    init: RequestInit,
}

impl<'a> Request<'a> {
    pub fn new(url: &'a str) -> Self {
        Self {
            url,
            init: RequestInit::new(),
        }
    }

    pub fn send(mut self) -> RequestFuture {
        let controller = AbortController::new().unwrap();

        let init = self.init.signal(Some(&controller.signal()));

        let future = window().unwrap().fetch_with_str_and_init(self.url, &init).into();

        RequestFuture {
            controller,
            future,
        }
    }
}


pub struct RequestFuture {
    controller: AbortController,
    future: JsFuture,
}

impl Drop for RequestFuture {
    #[inline]
    fn drop(&mut self) {
        self.controller.abort();
    }
}

impl Future for RequestFuture {
    type Output = Result<JsValue, JsValue>;

    #[inline]
    fn poll(self: Pin<&mut Self>, waker: &LocalWaker) -> Poll<Self::Output> {
        self.future.poll_unpin(waker)
    }
}

Untested, but it should be close to correct.
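
A usage note for that sketch: because RequestFuture aborts the controller in its Drop impl, cancellation falls out of ordinary ownership (hypothetical Request type from the code above):

fn example() {
    let pending = Request::new("https://example.org/test").send();
    // ... later we decide we no longer need the response ...
    drop(pending); // Drop runs AbortController::abort(), cancelling the fetch
}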

@Pauan
Contributor

Pauan commented Dec 29, 2018

If your goal is instead to work around the lack of spawn_local (or similar)... that's a very hacky solution.

Instead the correct solution is for us to add in spawn_local, which you would use in your main function.

@lcnr
Contributor

lcnr commented Dec 29, 2018

@Pauan I am not even sure what exactly I am doing. 😆 I want to make requests during which I return control of the main thread back to the browser/JavaScript, without having to store the future anywhere. This means I am currently using global state.

If your goal is instead to work around the lack of spawn_local (or similar)... that's a very hacky solution.

yup

@Pauan
Contributor

Pauan commented Dec 29, 2018

I want to make requests during which I return control of the main thread back to the browser/JavaScript, without having to store the future anywhere.

That happens naturally as part of the Promises/Futures system; you don't need to do anything special for that.

The best thing to do is to have Request::send return a Future (similar to what I showed in my previous post), and then you can use future_to_promise to spawn it in main:

#[wasm_bindgen(start)]
pub fn main() {
    future_to_promise(
         Request::new(Method::Get, "example.org/test")
            .header("Accept", "text/plain").send()
            .and_then(|resp_value: JsValue| {
                let resp: Response = resp_value.dyn_into().unwrap();
                resp.text()
            })
            .and_then(|text: Promise| {
                JsFuture::from(text)
            })
            .and_then(|body| {
                println!("Response: {}", body.as_string().unwrap());
                future::ok(JsValue::UNDEFINED)
            })
    );
}

No global state needed.

And then when spawn_local is supported you can just replace future_to_promise with spawn_local.

It's even nicer with async/await:

#[wasm_bindgen(start)]
pub fn main() {
    future_to_promise(async {
        let resp_value = await!(
            Request::new(Method::Get, "example.org/test")
                .header("Accept", "text/plain").send()
        )?;

        let resp: Response = resp_value.dyn_into().unwrap();
        let body = await!(JsFuture::from(resp.text()?))?;

        println!("Response: {}", body.as_string().unwrap());

        Ok(JsValue::UNDEFINED)
    });
}

Naturally you don't want to put everything into main, but that's not a problem: you can just split your program into multiple async functions:

async fn get_text(url: &str) -> Result<String, JsValue> {
    let resp_value = await!(
        Request::new(Method::Get, url).header("Accept", "text/plain").send()
    )?;

    let resp: Response = resp_value.dyn_into().unwrap();

    let body = await!(JsFuture::from(resp.text()?))?;

    Ok(body.as_string().unwrap())
}


async fn do_something() -> Result<(), JsValue> {
    let body = await!(get_text("example.org/test"))?;

    println!("Response: {}", body);

    Ok(())
}


#[wasm_bindgen(start)]
pub fn main() {
    future_to_promise(async {
        await!(do_something())?;
        Ok(JsValue::UNDEFINED)
    });
}

When you use await! on a Promise, it will yield control to the browser.

So in the above example, await!(Request::new(Method::Get, url).header("Accept", "text/plain").send()) and await!(JsFuture::from(resp.text()?)) will yield control.

await!(get_text("example.org/test")) and await!(do_something()) also yield control, since they call get_text (and get_text uses await! on Promises).

Using async/await in Rust is similar to async/await in JavaScript.

@Pauan
Copy link
Contributor

Pauan commented Dec 29, 2018

P.S. async/await support will require #1105 to be fixed first (or I guess you can use the 0.1 to 0.3 Futures compatibility shim to make it work?).

Even without async/await, my point about using future_to_promise in main still stands.

@alexcrichton
Contributor

FWIW JS at the fundamental level can't block, so it's almost always queueing up callbacks to execute at some later date. If you do blocking work at the base level there's likely some callback that gets invoked when the operation is finished (either successfully or not). In that sense we can queue up callbacks to run on events, and those callbacks could drive another event queue in Rust (much like futures work today with tokio and such).

Some of this may belong in the wasm-bindgen-futures crate, but otherwise much of this is largely stock futures and other crates which in theory already work.

Are there still points that need to be clarified before closing this?

@richard-uk1
Contributor Author

@alexcrichton what do you think about @Pauan's suggestion to add a spawn_local method to wasm-bindgen-futures?

@alexcrichton
Contributor

It may be a good idea! I'll admit though that I don't fully understand the motivation after skimming over this issue again. Could you remind me of the motivation for adding a function like that?

@alexcrichton
Contributor

(er also good to mention the context that future_to_promise is sort of like spawn_local, but it seems to me like spawn_local would likely have different performance characteristics, so I'm unclear if it's just that or if it also adds expressivity)

@Pauan
Contributor

Pauan commented Jan 4, 2019

@alexcrichton It's just a way to spawn a Future.

So it is indeed almost identical to future_to_promise, except it doesn't return a Promise, so as you say it can be faster.

In particular, it would have this signature:

pub fn spawn_local<F>(future: F) where F: Future<Output = ()> + 'static

I'm assuming Futures 0.3 (it'll be a bit different with Futures 0.1)

This makes it clear to any readers what it is doing, compared to future_to_promise.

@alexcrichton
Contributor

Ok just wanted to confirm. That seems reasonable to me to add to wasm-bindgen-futures!

@richard-uk1
Contributor Author

I'll have a go at implementing the Queue and see what it looks like.

@richard-uk1
Contributor Author

Very naive implementation in #1148.

@dakom

This comment was marked as abuse.

@richard-uk1
Contributor Author

richard-uk1 commented Jan 6, 2019

Caveat emptor: I'm not a genius, so this may be incorrect:

  1. No, it just uses JavaScript promises.
  2. Basically, you need a resource, that is, a base future that other futures are built on top of. It is the bottom future's responsibility to tell the executor when further progress can be made. The way you do this is (in the poll method) to call let handle = task::current() to get a handle, and then call handle.notify() when you can make progress. Here's an example for IndexedDB I'm working on:
impl Future for IdbOpenDbRequest {
    type Item = Db;
    type Error = JsValue;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        use web_sys::IdbRequestReadyState as ReadyState;
        match self.inner.ready_state() {
            ReadyState::Pending => {
                let success_notifier = task::current();
                let error_notifier = success_notifier.clone();
                // If we're not ready set up onsuccess and onerror callbacks to notify the
                // executor.
                let onsuccess = Closure::wrap(Box::new(move || {
                    success_notifier.notify();
                }) as Box<FnMut()>);
                self.inner
                    .set_onsuccess(Some(onsuccess.as_ref().unchecked_ref()));
                self.onsuccess.replace(onsuccess); // drop the old closure if there was one

                let onerror = Closure::wrap(Box::new(move || {
                    error_notifier.notify();
                }) as Box<FnMut()>);
                self.inner
                    .set_onerror(Some(onerror.as_ref().unchecked_ref()));
                self.onerror.replace(onerror); // drop the old closure if there was one

                Ok(Async::NotReady)
            }
            ReadyState::Done => match self.inner.result() {
                Ok(val) => Ok(Async::Ready(Db {
                    inner: val.unchecked_into(),
                })),
                Err(_) => match self.inner.error() {
                    Ok(Some(e)) => Err(e.into()),
                    Ok(None) => unreachable!("internal error polling open db request"),
                    Err(e) => Err(e),
                },
            },
            _ => panic!("unexpected ready state"),
        }
    }
}

In my case I have a ready_state function I can call to see whether I'm ready to make progress. If you only have the result in the callback, you can use a futures::unsync::oneshot channel to send the result to the original future: poll the receiver, and if there is no data yet, return Async::NotReady (a rough sketch follows after this list).

  3. Probably best not to panic - I think if you return an Err it turns into a JavaScript exception. Maybe.
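
A rough sketch of that oneshot approach (futures 0.1; the register hook and the names here are illustrative, not an existing API):

use futures::unsync::oneshot;
use futures::{Async, Future, Poll};
use wasm_bindgen::JsValue;

pub struct CallbackFuture {
    receiver: oneshot::Receiver<JsValue>,
}

impl CallbackFuture {
    /// `register` stands in for "hook this closure up as the onsuccess
    /// callback"; the exact wiring depends on the web API being wrapped.
    pub fn new<F>(register: F) -> CallbackFuture
    where
        F: FnOnce(Box<dyn FnMut(JsValue)>),
    {
        let (sender, receiver) = oneshot::channel();
        let mut sender = Some(sender);
        register(Box::new(move |value| {
            // Sending the value also notifies whichever task polled the
            // receiver, so the executor will poll the future again.
            if let Some(sender) = sender.take() {
                let _ = sender.send(value);
            }
        }));
        CallbackFuture { receiver }
    }
}

impl Future for CallbackFuture {
    type Item = JsValue;
    type Error = JsValue;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        // Polling the receiver registers the current task for wakeup.
        match self.receiver.poll() {
            Ok(Async::Ready(value)) => Ok(Async::Ready(value)),
            Ok(Async::NotReady) => Ok(Async::NotReady),
            Err(_cancelled) => Err(JsValue::from_str("callback was never called")),
        }
    }
}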

@Pauan
Contributor

Pauan commented Jan 7, 2019

@dakom

Does wasm_bindgen have a native Future reactor - i.e. a way to run and consume Futures without going through Promises?

Currently no, and executing Futures requires scheduling them on the JS microtask event loop, so it is necessary for it to internally use Promises (or another technique like MutationObserver).

However, once spawn_local is added it will be easy to spawn Futures without worrying about the internal details of Promises.

Specifically, my current goal is to wrap, say, HtmlImageElement's onload/onerror and return a Future<HtmlImageElement, JsValue> when it's ready.

Normally you would use the oneshot channels for this.

First, let's make it easier to create event listeners:

use web_sys::EventTarget;
use wasm_bindgen::closure::Closure;
use wasm_bindgen::JsCast;
use wasm_bindgen::convert::FromWasmAbi;

pub struct EventListener<'a, A> {
    node: EventTarget,
    kind: &'a str,
    callback: Closure<FnMut(A)>,
}

impl<'a, A> EventListener<'a, A> where A: FromWasmAbi + 'static {
    #[inline]
    pub fn new<F>(node: &EventTarget, kind: &'a str, f: F) -> Self where F: FnMut(A) + 'static {
        let callback = Closure::wrap(Box::new(f) as Box<FnMut(A)>);

        node.add_event_listener_with_callback(kind, callback.as_ref().unchecked_ref()).unwrap();

        Self {
            node: node.clone(),
            kind,
            callback,
        }
    }
}

impl<'a, A> Drop for EventListener<'a, A> {
    #[inline]
    fn drop(&mut self) {
        self.node.remove_event_listener_with_callback(self.kind, self.callback.as_ref().unchecked_ref()).unwrap();
    }
}

Now you can use oneshot::channel:

use web_sys::{HtmlImageElement, UiEvent};
use wasm_bindgen::JsValue;
use futures::{Future, Poll};
use futures::sync::oneshot::{Receiver, channel};

pub struct Image {
    img: Option<HtmlImageElement>,
    _on_load: EventListener<'static, UiEvent>,
    receiver: Receiver<HtmlImageElement>,
}

impl Image {
    pub fn new(width: u32, height: u32, url: &str) -> Self {
        let (sender, receiver) = channel();

        let img = HtmlImageElement::new_with_width_and_height(width, height).unwrap();

        img.set_src(url);

        let _on_load = EventListener::new(&img, "load", {
            let mut sender = Some(sender);
            let img = img.clone();

            move |_| {
                sender.take().unwrap().send(img.clone()).unwrap();
            }
        });

        Self { img: Some(img), _on_load, receiver }
    }
}

impl Drop for Image {
    #[inline]
    fn drop(&mut self) {
        if let Some(ref img) = self.img {
            // Cancels the image download
            img.set_src("");
        }
    }
}

impl Future for Image {
    type Item = HtmlImageElement;
    type Error = JsValue;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        self.receiver.poll().map(|x| {
            if x.is_ready() {
                // Prevents the image from being cancelled
                self.img = None;
            }

            x
        }).map_err(|_| unreachable!())
    }
}

However, in your case you need to send from two different callbacks, so that doesn't work. So instead, like @derekdreery mentioned, you need to manually use the Task stuff.

But rather than mucking around directly with Task, let's instead define some helper types:

use std::sync::{Arc, Mutex};
use futures::task::{Task, current};
use futures::{Async, Future, Poll};

// TODO should use oneshot::Inner
#[derive(Debug)]
struct Inner<T, E> {
    completed: bool,
    value: Option<Result<T, E>>,
    task: Option<Task>,
}

impl<T, E> Inner<T, E> {
    #[inline]
    fn new() -> Self {
        Self {
            completed: false,
            value: None,
            task: None,
        }
    }
}


pub fn result_channel<T, E>() -> (ResultSender<T, E>, ResultReceiver<T, E>) {
    let inner = Arc::new(Mutex::new(Inner::new()));

    (
        ResultSender {
            inner: inner.clone(),
        },

        ResultReceiver {
            inner: inner,
        },
    )
}


#[derive(Debug, Clone)]
pub struct ResultSender<T, E> {
    inner: Arc<Mutex<Inner<T, E>>>,
}

impl<T, E> ResultSender<T, E> {
    fn send(&self, value: Result<T, E>) {
        let mut lock = self.inner.lock().unwrap();

        if !lock.completed {
            lock.completed = true;
            lock.value = Some(value);

            if let Some(task) = lock.task.take() {
                drop(lock);
                task.notify();
            }
        }
    }

    #[inline]
    pub fn ok(&self, value: T) {
        self.send(Ok(value));
    }

    #[inline]
    pub fn err(&self, value: E) {
        self.send(Err(value));
    }
}


#[derive(Debug)]
pub struct ResultReceiver<T, E> {
    inner: Arc<Mutex<Inner<T, E>>>,
}

impl<T, E> Future for ResultReceiver<T, E> {
    type Item = T;
    type Error = E;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        let mut lock = self.inner.lock().unwrap();

        if lock.completed {
            lock.value.take().unwrap().map(Async::Ready)

        } else {
            lock.task = Some(current());
            Ok(Async::NotReady)
        }
    }
}

Now finally we can define the Image struct:

use web_sys::{HtmlImageElement, UiEvent};
use js_sys::Error;

enum ImageState<'a> {
    Initial {
        width: u32,
        height: u32,
        url: &'a str,
    },
    Pending {
        img: HtmlImageElement,
        receiver: ResultReceiver<HtmlImageElement, JsValue>,
        _on_load: EventListener<'static, UiEvent>,
        _on_error: EventListener<'static, UiEvent>,
    },
    Complete,
}

pub struct Image<'a> {
    state: ImageState<'a>,
}

impl<'a> Image<'a> {
    #[inline]
    pub fn new(width: u32, height: u32, url: &'a str) -> Self {
        Self { state: ImageState::Initial { width, height, url } }
    }
}

impl<'a> Drop for Image<'a> {
    #[inline]
    fn drop(&mut self) {
        if let ImageState::Pending { ref img, .. } = self.state {
            // Cancels the image download
            img.set_src("");
        }
    }
}

impl<'a> Future for Image<'a> {
    type Item = HtmlImageElement;
    type Error = JsValue;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        match self.state {
            ImageState::Initial { width, height, url } => {
                let (sender, receiver) = result_channel();

                let img = HtmlImageElement::new_with_width_and_height(width, height).unwrap();

                img.set_src(url);

                let _on_load = EventListener::new(&img, "load", {
                    let sender = sender.clone();
                    let img = img.clone();

                    move |_| {
                        sender.ok(img.clone());
                    }
                });

                let _on_error = EventListener::new(&img, "error", move |_| {
                    sender.err(Error::new("Failed to load image").into());
                });

                self.state = ImageState::Pending { img, receiver, _on_load, _on_error };

                Ok(Async::NotReady)
            },
            ImageState::Pending { ref mut receiver, .. } => {
                let output = receiver.poll();

                match output {
                    Ok(Async::Ready(_)) | Err(_) => {
                        self.state = ImageState::Complete;
                    },
                    _ => {},
                }

                output
            },
            ImageState::Complete => {
                panic!("Image polled after completion");
            },
        }
    }
}

imho a runtime exception should be the equivalent of a panic() but Promises swallow exceptions (sort of)... do Futures that come from Promises thereby also swallow panics?

I'm not sure. The interactions between Rust panics and JS exceptions are currently pretty weird, so it may just always panic.

I suggest not relying upon the panic handling, since it will almost certainly change in the future.

In other words when I get a Future - via the wasm_bindgen_futures helpers, if I decide to do nothing with "expected errors" in the course of development, can I be sure that I'm not also accidentally silencing unexpected exceptions?

There are no silent errors. When you use JsFuture::from, the error type is JsValue.

In order to run the Future using spawn_local, it must have an error type of (). So that means the static type system forces you to use map_err to handle the error in some way.
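
For example, something along these lines (illustrative only; spawn_local is the function being proposed in this issue, and spawn_promise is just a made-up name):

use futures::Future;
use js_sys::Promise;
use wasm_bindgen_futures::JsFuture;

fn spawn_promise(promise: Promise) {
    let future = JsFuture::from(promise)
        .map(|_value| ())
        .map_err(|err| {
            // The JsValue error has to be handled (or at least surfaced)
            // before the future can be spawned with Error = ().
            web_sys::console::error_1(&err);
        });

    spawn_local(future); // hypothetical; not yet part of wasm-bindgen-futures
}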

If you instead use future_to_promise, the errors still won't be silent: JS has a mechanism where unhandled Promise errors are automatically reported in the console.

Unlike some other languages, Rust and JS don't have silent errors (which is a wonderful thing).

@dakom

This comment was marked as abuse.

@Pauan
Contributor

Pauan commented Jan 7, 2019

but realistically in the course of development sometimes you just don't deal with it while working stuff out.

In stdweb there is an unwrap_future function which handles errors by printing them to the console (and then panicking).

You use it like this:

spawn_local(unwrap_future(some_future));

In other words, all you need to do is slap unwrap_future into your main and you're good to go.

So it's not hard at all to handle errors correctly, we just need a helper function like that in wasm-bindgen.
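
A minimal sketch of what such a helper could look like (futures 0.1; not the actual stdweb code):

use futures::Future;
use wasm_bindgen::JsValue;

pub fn unwrap_future<F>(future: F) -> impl Future<Item = F::Item, Error = ()>
where
    F: Future<Error = JsValue>,
{
    future.map_err(|err| {
        // Surface the error in the console, then panic so nothing fails silently.
        web_sys::console::error_1(&err);
        panic!("future failed");
    })
}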

Would be so nice if wasm_bindgen futures did not swallow exceptions - even in the case of there being an error handler. As a developer I want to know if I caused a panic, even if I am not handling my Promise rejections nicely.

My understanding is that there are two possibilities:

  1. It panics, in which case there should be an error message in the console (if you're using console_error_panic_hook)

  2. The Promise swallows it, in which case it will then be logged to the console by unwrap_future (or similar).

So the only way it can be silently ignored is if you intentionally ignore it by using .map_err or similar. We can't force developers to handle errors correctly, if they choose to ignore it with .map_err then that's their choice.

@dakom

This comment was marked as abuse.

@dakom

This comment was marked as abuse.

@chpio

chpio commented Jan 14, 2019

Also, to be clear, regardless of what strategy is used for spawn_local, you can always use things like setTimeout and requestAnimationFrame: you just have to create a Future which wraps them.

I don't know the difference between a micro- and macrotask, so maybe that's just part of my misunderstanding. Isn't the current impl creating a new Promise in spawn_local/future_to_promise and thus executing the task on the next tick/after RAF?

edit: a microtask seems to be a task that is added onto the currently running task, i.e. executed directly after the current task finishes(?)
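
For reference, the quoted idea of wrapping a macrotask API like setTimeout in a Future could look roughly like this (futures 0.1; names and structure are illustrative, and a fuller version would also clear the timeout on Drop):

use futures::task::{current, Task};
use futures::{Async, Future, Poll};
use std::cell::RefCell;
use std::rc::Rc;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;

pub struct Timeout {
    state: Rc<RefCell<(bool, Option<Task>)>>, // (fired, task to wake)
    _closure: Closure<dyn FnMut()>,
}

impl Timeout {
    pub fn new(ms: i32) -> Timeout {
        let state = Rc::new(RefCell::new((false, None)));

        let closure = {
            let state = state.clone();
            Closure::wrap(Box::new(move || {
                let mut state = state.borrow_mut();
                state.0 = true;
                // Wake whichever task polled us last, if any.
                if let Some(task) = state.1.take() {
                    task.notify();
                }
            }) as Box<dyn FnMut()>)
        };

        web_sys::window()
            .unwrap()
            .set_timeout_with_callback_and_timeout_and_arguments_0(
                closure.as_ref().unchecked_ref(),
                ms,
            )
            .unwrap();

        Timeout { state, _closure: closure }
    }
}

impl Future for Timeout {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        let mut state = self.state.borrow_mut();
        if state.0 {
            Ok(Async::Ready(()))
        } else {
            state.1 = Some(current());
            Ok(Async::NotReady)
        }
    }
}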

@Pauan
Contributor

Pauan commented Jan 14, 2019

I don't see why spawn necessarily equates to microtask queue.

There's some rather deep and subtle reasons for this.

The Futures system is designed to be asynchronous, so when you call task.notify() it must not immediately wake up the Task; instead it must be delayed by a small amount.

This isn't specific to wasm, all Future Executors on all platforms must be that way (including tokio). It's a part of the Executor contract, and various Futures rely upon that contract (so they would break on non-compliant Executors).

So since we are in wasm, we need some way to schedule a wakeup in the future. There are only two ways to do that: microtask and macrotask.

We could use macrotasks (e.g. setTimeout), but that introduces an unnecessarily large delay (it can be up to 4ms in some cases).

In addition, the browser rendering is based on macrotasks, so we might end up racing with the renderer! In other words, the browser might render the page before the setTimeout or after the setTimeout.

But we don't want that: we want to guarantee that the Future runs before rendering, so we can avoid the dreaded "flash of unstyled content" (and similar issues).

If we use microtasks (e.g. Promises), then all those problems go away: microtasks have zero latency (they are delayed, but run immediately after the JS code), and they are guaranteed to run before rendering.

This ensures the maximum performance and the minimum issues.

It isn't very surprising that a mechanism specifically designed for asynchronous values (Promises) would also be a good match for Futures.


There's another reason why we need spawn_local (and future_to_promise) to be asynchronous: consistency. Consider this program:

let x = Arc::new(Mutex::new(0));

let y = x.clone();

spawn_local(some_future.map(move |_| {
    *y.lock().unwrap() = 5;
}));

println!("{}", *x.lock().unwrap());

The question is: does it print 0 or 5? If we didn't guarantee asynchrony, then it would depend on the behavior of some_future! But by guaranteeing asynchrony, it will always be 0.

@Pauan
Contributor

Pauan commented Jan 14, 2019

I don't know the difference between a micro- and macrotask, so maybe that's just part of my misunderstanding. Isn't the current impl creating a new Promise in spawn_local/future_to_promise and thus executing the task on the next tick/after RAF?

Promises execute long before RAF (RAF has very high latency, Promises have zero latency).

The difference between microtasks and macrotasks is rather subtle, this page probably explains it best.

a microtask seems to be a task that is added onto the currently running task, i.e. executed directly after the current task finishes(?)

Yes, basically. Microtasks have priority over macrotasks, so they run first.

@richard-uk1
Contributor Author

so when you call task.notify() it must not immediately wake up the Task

Async is sometimes called cooperative multitasking. It's called this because it requires the futures to play by the rules. With threads, a badly behaved thread will still only get its share of the CPU. With futures, we can block everything up if we want.

@richard-uk1
Contributor Author

This thread should probably be turned into a blogpost. There is loads of great information here!

@Pauan
Contributor

Pauan commented Jan 14, 2019

Async is sometimes called cooperative multitasking. It's called this because it requires the futures to play by the rules. With threads, a badly behaved thread will still only get its share of the CPU. With futures, we can block everything up if we want.

According to the Future contract, it is valid to immediately call task.notify() inside of poll, and this is guaranteed to not immediately call poll again.

It might still synchronously call poll, but it must allow other pending Tasks to go first.

You can read more here:

rust-lang/futures-rs#738
rust-lang/futures-rs#754

@dakom

This comment was marked as abuse.

@chpio

chpio commented Jan 15, 2019

@dakom Yeah, but that's a broken Future impl; it would break pretty much every executor, I guess.

  1. Executor Behavior for WebAssembly rust-lang/futures-rs#738 (comment)
  2. Executor Behavior for WebAssembly rust-lang/futures-rs#738 (comment)

@dakom

This comment was marked as abuse.

@Pauan
Contributor

Pauan commented Jan 15, 2019

If the scheduling is always driven by the microtask queue, then won't calling task.notify() immediately create a deadlock since it re-schedules it in the same queue?

Yes, however, it won't stop other Futures from running. That is the key difference between an event loop and running task.notify() immediately. So it's a soft deadlock, not a hard deadlock.

Even if we used the macrotask queue, it would still livelock, which isn't much better than deadlock.

Interestingly, it's possible to create a combinator which forces other Futures to run on the macrotask queue, thus converting deadlock into livelock.

I'd imagine that example is synonymous with calling task.notify() immediately - but haven't tested that out yet.

Yeah, it's the same.

The microtask queue behaves the same as most event loop implementations (including Rust event loops).

The macrotask queue is... different.

@dakom

This comment was marked as abuse.

@dakom

This comment was marked as abuse.

@chpio

chpio commented Feb 17, 2019

This could throw an exception on the JS side if the Rust struct is dropped?

Also, I think it's not guaranteed that a task.notify will result in the future being polled (especially if your future is wrapped by adapters).

edit: yeah, I deleted that comment; I thought you were storing the closure inside of the <img> DOM element.

@dakom

This comment was marked as abuse.

@dakom

This comment was marked as abuse.

@dakom

This comment was marked as abuse.

@dakom

This comment was marked as abuse.

@Pauan
Contributor

Pauan commented Sep 2, 2019

@dakom That's very cool, and exactly what we need to remove the Promise overhead. But it's not standardized, so we can't use it right now.

@sagudev

sagudev commented Aug 19, 2021

@dakom That's very cool, and exactly what we need to remove the Promise overhead. But it's not standardized, so we can't use it right now.

@Pauan Today it's not only standardized but also well supported: https://caniuse.com/?search=queueMicrotask
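
For what it's worth, a sketch of binding it directly (illustrative; not code from wasm-bindgen), which would let an executor schedule wakeups without allocating a Promise per tick:

use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;

#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen(js_name = queueMicrotask)]
    fn queue_microtask(callback: &js_sys::Function);
}

fn schedule<F: FnOnce() + 'static>(f: F) {
    // Closure::once_into_js produces a JS function that may be called at most once.
    let callback = Closure::once_into_js(f);
    queue_microtask(callback.unchecked_ref());
}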
