
Support reading bytes into buffers allocated by user code on platforms where only async read is available #253

Closed
@domenic

Description


Had a thought-provoking conversation with @piscisaureus today. He works on libuv and was giving me their perspective on how they do I/O and how it interfaces with Node.js's streams.

He pointed out that resources like files and pipes shared with other processes (seekable resources, he called them) don't have the epoll + read interface that ReadableByteStream's current design is based on. In Node.js, they do blocking I/O in a thread pool with pre-allocated buffers.

This works well for a ready + read() JS interface: the implementation can pre-allocate (say) a 1 MiB buffer, read into it off-thread using blocking I/O, then fulfill the ready promise. When JS calls read(), it simply returns the pre-allocated buffer.
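The ready + read() model described above can be sketched roughly as follows. This is illustrative only, with a simulated off-thread source; the class and type names are mine, not the actual streams API:

```typescript
// Sketch of the ready + read() model: the stream pre-allocates a buffer,
// fills it "off-thread" (simulated here with an async fill function), then
// resolves `ready`; read() hands back the already-filled buffer synchronously.
// All names here are illustrative, not the real ReadableByteStream API.

type Source = (buf: Uint8Array) => Promise<number>; // fills buf, resolves with bytes read

class ReadyReadStream {
  private buffer = new Uint8Array(1 << 20); // pre-allocated 1 MiB buffer
  private bytesAvailable = 0;
  ready: Promise<void>;

  constructor(source: Source) {
    // Kick off the (blocking, in libuv's case) read immediately.
    this.ready = source(this.buffer).then(n => {
      this.bytesAvailable = n;
    });
  }

  // Synchronous once `ready` has resolved: just return the filled region.
  read(): Uint8Array {
    return this.buffer.subarray(0, this.bytesAvailable);
  }
}

// Usage: a fake source that "reads" 5 bytes into the pre-allocated buffer.
const stream = new ReadyReadStream(async buf => {
  buf.set([1, 2, 3, 4, 5]);
  return 5;
});
stream.ready.then(() => {
  console.log(stream.read().length); // 5
});
```

Note that the caller never supplies a buffer, so the implementation is free to size its pre-allocation however it likes.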

It doesn't work as well with ReadableByteStream's ready + readInto(ab, offset, sizeDesired) interface, since we don't know what size buffer to allocate until it's too late. If we pre-allocate something too small, readInto will keep returning a number below the desired size, which is a bit user-hostile. If we pre-allocate too large, we are at best wasting memory, and at worst getting ourselves into a bunch of trouble as we need to figure out how to merge the "leftovers" with other chunks we read in the future.
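The too-small case can be made concrete with a sketch. Suppose the implementation guessed a 64 KiB staging buffer before it knew the caller's desired size; a hypothetical readInto can then never satisfy a larger request in one call (names are illustrative, not the spec's):

```typescript
// Illustrates the short-read problem: if the implementation pre-allocated a
// staging buffer smaller than what the caller later asks for, readInto can
// only hand over what was staged ahead of time. Hypothetical API shape.

class PreallocatedSource {
  private staged: Uint8Array; // what the thread pool already read off-thread

  constructor(stagedBytes: number) {
    this.staged = new Uint8Array(stagedBytes).fill(0xab);
  }

  // Copies at most sizeDesired bytes into ab at offset, but is capped by
  // the amount that was staged before the caller's request arrived.
  readInto(ab: Uint8Array, offset: number, sizeDesired: number): number {
    const n = Math.min(sizeDesired, this.staged.length);
    ab.set(this.staged.subarray(0, n), offset);
    return n;
  }
}

// The implementation guessed 64 KiB; the caller wants 1 MiB.
const source = new PreallocatedSource(64 * 1024);
const dest = new Uint8Array(1 << 20);
const got = source.readInto(dest, 0, dest.length);
console.log(got); // 65536 — always short of the 1 MiB the caller desired
```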

(I am assuming here it is easy in V8 to map the [offset, offset + max(sizeRead, sizeDesired)] portion of ab onto a given C++ backing buffer.)

A readIntoAsync(ab, offset, desiredSize) model would work a bit better, since then we'd know to pre-allocate a buffer of size desiredSize. But, I was curious if you guys had any other thoughts? Did you think of this issue when designing the API, @tyoshino? I imagine @willchan could be helpful here too.
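A rough sketch of what the readIntoAsync shape buys the implementation, under the assumption (mine, not part of the proposal text) that the off-thread read can target the caller's buffer region directly:

```typescript
// Sketch of the readIntoAsync(ab, offset, desiredSize) idea: because the
// desired size arrives before any buffer is allocated, the implementation
// can size its staging buffer to match, or read straight into the caller's
// buffer. Hypothetical names and shape, not a spec proposal.

type BlockingRead = (buf: Uint8Array) => Promise<number>;

class AsyncByteSource {
  constructor(private blockingRead: BlockingRead) {}

  async readIntoAsync(ab: Uint8Array, offset: number, desiredSize: number): Promise<number> {
    // The off-thread read targets exactly the requested region: no oversized
    // pre-allocation, and no "leftovers" to merge with future chunks.
    const view = ab.subarray(offset, offset + desiredSize);
    return this.blockingRead(view);
  }
}

// Usage with a fake blocking read that fills whatever region it is given.
const src = new AsyncByteSource(async view => { view.fill(7); return view.length; });
const buf = new Uint8Array(1024);
src.readIntoAsync(buf, 0, 1024).then(n => console.log(n)); // 1024
```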
