waiting for stream to close. #27

Closed
jimmywarting opened this issue Jun 27, 2023 · 3 comments

Comments

@jimmywarting

It seems like as soon as all chunks have been transferred to the other remote stream, it closes without actually knowing whether the worker is done or not?

I get race condition errors...

await readable.pipeTo(writable) // resolves earlier than it should...?
// here I'm required to wait a bit longer for the worker to successfully write it to IndexedDB.
await new Promise(rs => setTimeout(rs, ms))
file = await idb.get(key)

maybe it's a bug or just not implemented yet?

@MattiasBuelens
Owner

I'm assuming writable is a RemoteWritableStream's writable end, whose readable end is being read inside a worker that pushes stuff into IndexedDB?

Unfortunately, there's not a whole lot I can do about this. RemoteWritableStream's sink close() method sends a message to the other end to call ReadableStreamDefaultController.close(), but that doesn't say anything about when that closing is observed. For example, there may be multiple chunks in the readable stream's queue that have not yet been consumed.
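
Very roughly, the idea looks something like this (a simplified sketch of the concept, not the actual remote-web-streams code):

const { port1, port2 } = new MessageChannel()

// writable side: the sink just forwards everything over the MessagePort
const writable = new WritableStream({
  write (chunk) { port1.postMessage({ type: 'write', chunk }) },
  close () {
    // resolves as soon as the message is posted; it does not wait for the
    // other side to drain its queue and actually observe the close
    port1.postMessage({ type: 'close' })
  }
})

// readable side: incoming messages are enqueued into the stream's controller
const readable = new ReadableStream({
  start (controller) {
    port2.onmessage = evt => {
      if (evt.data.type === 'write') controller.enqueue(evt.data.chunk)
      else controller.close() // earlier chunks may still be sitting in the queue here
    }
  }
})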

Note that this is not specific to remote streams. When you make a pipe chain of "regular" streams, the first pipeTo() promise will resolve before the subsequent pipeTo() promises, without waiting for the "final sink" to process the last chunk. That's why you usually only await the last pipeTo() promise, as in:

await readable.pipeThrough(transform1).pipeThrough(transform2).pipeTo(writable);

(There's some discussion in whatwg/streams#976 about possibly adding a finally() method to an underlying source, which might help.)

For now, your best bet would be to have the worker send a message back to the main thread when it's actually done writing all chunks to IndexedDB. Perhaps you can (ab)use another remote stream for that? 😛
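
For example, something like this (rough sketch; writableThatStoresIntoIndexedDB and the message shape are made up):

// worker.js — signal the main thread once everything has actually landed in IndexedDB
await readable.pipeTo(writableThatStoresIntoIndexedDB)
self.postMessage({ type: 'done' })

// main.js — wait for that signal instead of relying on pipeTo() alone
const done = new Promise(resolve => {
  worker.addEventListener('message', function onDone (evt) {
    if (evt.data?.type === 'done') {
      worker.removeEventListener('message', onDone)
      resolve()
    }
  })
})
await readable.pipeTo(writable)
await done
file = await idb.get(key)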

@jimmywarting
Author

Hmm, roger that.
Gonna close this as there seems to be nothing to do about it right now...
Maybe we have to wait for that finally() method to arrive... or better yet for Safari to start shipping transferable streams... that's the only reason why this pkg is still needed; all other browsers support it already.

For now I have rolled my own solution that sends a postMessage back, notifying me when it's done.

this is not specific to remote streams. When you make a pipe chain of "regular" streams, the first pipeTo() promise will resolve before the subsequent pipeTo() promises, without waiting for the "final sink"

I figured it was something like that... I only really use one writable stream that does not go through any transformers.
await readable.pipeTo(writable) was just a very basic simplification of my problem, but the same thing still applies.

I have basically built a polyfill for FileSystemWritableFileStream which extends WritableStream... so it's not a transform stream with both a readable + writable combo. So sending all the data to the worker required me to write something like this:

const worker = new Worker('./worker.js', { type: 'module' })
const remote = new RemoteWritableStream()
const writable = new FileSystemWritableFileStream(
  remote.writable.getWriter() // non-standard option (just private / internal use)
)
worker.postMessage('', [remote.readablePort])
return writable

As you can see, I needed it to be a different kind of writable stream, one that also has additional public-facing methods added to it: seek / truncate / write / close.

Constructing a RemoteWritableStream creates one additional transformer, so now I can no longer safely wait for the closed promise to complete.
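
If I understand the explanation above correctly, even grabbing the writer and waiting on it doesn't help (sketch):

// `remote` is the same RemoteWritableStream from the snippet above
const writer = remote.writable.getWriter()
await writer.write(new Uint8Array([1, 2, 3]))
await writer.close()
await writer.closed
// both close() and closed settle once the main-thread side of the remote stream is done,
// not once the worker's sink has actually finished its work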

So what I really need to accomplish is more something along the lines of:

// main.js
import { RemoteWritableStreamController } from 'remote-web-streams'
// this controller would basically refer to the controller constructed inside the worker.js (below)
const controller = new RemoteWritableStreamController()
worker.postMessage('', [controller.port])
return new FileSystemWritableFileStream(controller)

// worker.js
import { RemoteWritableStreamController } from 'remote-web-streams'
onmessage = evt => {
  RemoteWritableStreamController.from({
    async start() { },
    async close() { },
    async abort() { },
    async write(chunk, ctrl) { ... },
    port: evt.ports[0]
  })
}

So basically you would not create any additional writable/readable stream in the main thread; the controller would basically be built in the worker instead.
I guess this would more closely resemble how the browser transfers a stream to a worker, without creating any additional pipes.
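
To make that a bit more concrete, the main-thread half of such a controller could be as small as something like this (completely hypothetical sketch, nothing like this exists in the package today):

// hypothetical sketch — not part of remote-web-streams today
class RemoteWritableStreamController {
  constructor () {
    const { port1, port2 } = new MessageChannel()
    this.port = port1   // this end gets transferred to the worker
    this._port = port2
    this._pending = new Map()
    this._id = 0
    this._port.onmessage = evt => {
      const { id, error } = evt.data
      const pending = this._pending.get(id)
      this._pending.delete(id)
      if (error) pending.reject(error)
      else pending.resolve()
    }
  }
  _send (type, data) {
    const id = ++this._id
    this._port.postMessage({ id, type, data })
    // only settles once the worker has acked, i.e. its own sink method has finished
    return new Promise((resolve, reject) => this._pending.set(id, { resolve, reject }))
  }
  write (chunk) { return this._send('write', chunk) }
  seek (position) { return this._send('seek', position) }
  truncate (size) { return this._send('truncate', size) }
  close () { return this._send('close') }
  abort (reason) { return this._send('abort', reason) }
}

With something like that, close() on the main-thread FileSystemWritableFileStream would not settle until the worker's own close() has actually finished, which is exactly the guarantee that is missing today.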


It may look a bit weird/backwards to create the FileSystemWritableFileStream in the main thread rather than constructing it in the worker and transferring it to the main thread, but we also have to remember that they are not transferable either (especially when they are polyfilled).

@jimmywarting
Author

If you are interested in my work, then here is what I have been doing: jimmywarting/native-file-system-adapter#62 (I did not end up using your package after all... it broke too many tests).
