waiting for stream to close. #27
I'm assuming […]. Unfortunately, there's not a whole lot I can do about this. Note that this is not specific to remote streams: when you make a pipe chain of "regular" streams, the first `await` […]

```js
await readable.pipeThrough(transform1).pipeThrough(transform2).pipeTo(writable);
```

(There's some discussion in whatwg/streams#976 about possibly adding a […].)

For now, your best bet would be to have the worker send a message back to the main thread when it's actually done writing all chunks to IndexedDB. Perhaps you can (ab)use another remote stream for that? 😛
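The suggested workaround (an explicit "done" message from the worker once it has really finished) can be sketched with a plain `MessageChannel`. Everything here — the function names and the `'all-chunks-persisted'` message — is invented for illustration and is not part of the remote-web-streams API:

```javascript
// Main-thread side: resolve a promise once the worker reports completion.
// The 'all-chunks-persisted' message name is made up for this sketch.
function watchCompletion(port) {
  return new Promise(resolve => {
    port.onmessage = evt => {
      if (evt.data === 'all-chunks-persisted') resolve()
    }
    port.start?.() // no-op where setting onmessage already starts the port
  })
}

// Worker side: drain the readable, persist each chunk (elided here),
// and only signal once everything has actually been processed.
async function consume(readable, donePort) {
  let processed = 0
  for await (const chunk of readable) {
    // ... write chunk to IndexedDB here ...
    processed++
  }
  donePort.postMessage('all-chunks-persisted')
  return processed
}
```

On the main thread you would then await both the pipe promise and `watchCompletion(...)` before treating the write as finished.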
Hmm, roger that. For now I have rolled my own solution that sends a postMessage back notifying me when it's done.
I figured it was something like that... I only really use one writable stream that does not go through any transformers. I have basically built a polyfill for […]:

```js
const worker = new Worker('./worker.js', { type: 'module' })
const remote = new RemoteWritableStream()
const writable = new FileSystemWritableFileStream(
  remote.writable.getWriter() // non-normal option (just private / internal use)
)
worker.postMessage('', [remote.readablePort])
return writable
```

As you can see, I needed it to be a different kind of writable stream, one that also has additional public-facing methods […]. So what I really need to be able to accomplish is more something along the lines of:

```js
// main.js
import { RemoteWritableStreamController } from 'remote-web-streams'

// this controller would basically refer to the controller
// constructed inside the worker.js (below)
const controller = new RemoteWritableStreamController()
worker.postMessage('', [controller.port])
return new FileSystemWritableFileStream(controller)
```

```js
// worker.js
import { RemoteWritableStreamController } from 'remote-web-streams'

onmessage = evt => {
  RemoteWritableStreamController.from({
    async start() { },
    async close() { },
    async abort() { },
    async write(chunk, ctrl) { /* ... */ },
    port: evt.ports[0]
  })
}
```

So basically you would not create any additional writable/readable stream in the main thread; the controller would basically be built in the worker instead. It may look a bit weird/backward to create the […]
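A minimal sketch of how such a controller-in-the-worker design could be wired over a `MessagePort`, assuming none of the real remote-web-streams internals: the main-thread "controller" is just a thin proxy that forwards sink calls over the port, and the actual sink lives on the worker side. The class, message shapes, and method names are all hypothetical, and backpressure/acknowledgements (which the real package handles) are deliberately omitted:

```javascript
// Hypothetical sketch only — not the remote-web-streams API.
class RemoteWritableStreamController {
  constructor() {
    const { port1, port2 } = new MessageChannel()
    this.port = port1    // this end would be transferred to the worker
    this._local = port2  // this end stays with the main-thread proxy
  }
  // Proxy methods: forward each sink call as a message over the port.
  write(chunk) { this._local.postMessage({ type: 'write', chunk }) }
  close() { this._local.postMessage({ type: 'close' }) }
  abort(reason) { this._local.postMessage({ type: 'abort', reason }) }

  // Worker side: attach a real sink to the transferred port and
  // dispatch incoming messages to it.
  static from({ port, write, close, abort }) {
    port.onmessage = async evt => {
      const msg = evt.data
      if (msg.type === 'write') await write?.(msg.chunk)
      else if (msg.type === 'close') { await close?.(); port.close() }
      else if (msg.type === 'abort') { await abort?.(msg.reason); port.close() }
    }
    port.start?.()
  }
}
```

The key property is that `close` runs on the worker side after all preceding `write` messages have been handled, so the worker knows when it is genuinely done.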
If you are interested in my work, here is what I have been doing: jimmywarting/native-file-system-adapter#62 (I did not end up using your package after all... it broke too many tests).
It seems like as soon as all chunks have been transferred to the other remote stream, it closes without actually knowing whether the worker is done or not?
I get race condition errors...
Maybe it's a bug, or just not implemented yet?