
Conversation

@maruth-stripe (Contributor) commented Oct 11, 2023

fill_read_buffer has a race condition which can potentially lead to data loss. Consider the following sequence of events:

  • init
  • Fiber 1 issues a read of 16K bytes. This invokes fill_read_buffer with a size of BLOCK_SIZE (say, 64K) bytes.
  • @read_buffer is currently empty, so we call @io.read_nonblock(size, input_buffer, exception: false)
  • Suppose the underlying @io is a socket, and the socket is empty. Then @io.read_nonblock will yield (on stable-v1 because of the async_send instrumentation, and on main because of the io_read hook in rb_fiber_scheduler_io_read_memory in the VM), leading to io_wait because the read would have blocked.
  • Suppose Fiber 2 issues a read of 16K bytes. This invokes fill_read_buffer with a size of BLOCK_SIZE (64K) bytes.
  • Suppose Fiber 2's read succeeds and 64K bytes land in @read_buffer; Fiber 2 consumes its 16K, leaving 48K bytes in @read_buffer.
  • Fiber 1 resumes, and retries @io.read_nonblock(size, input_buffer, exception: false)
  • The read succeeds, but read_nonblock first clears the contents of input_buffer, which wipes out those 48K bytes (see the sketch after this list).
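To make the failure mode concrete, here is a minimal sketch of the racy pattern, assuming a simplified stream that reads straight into its shared @read_buffer when that buffer is empty. This is a reconstruction for illustration only, not the exact gem source:

```ruby
# Minimal sketch of the racy pattern; not the exact gem source.
class Stream
  BLOCK_SIZE = 64 * 1024

  def initialize(io)
    @io = io
    @read_buffer = String.new(encoding: Encoding::BINARY)
  end

  def fill_read_buffer(size = BLOCK_SIZE)
    # Under a fiber scheduler, read_nonblock suspends this fiber (io_wait)
    # if @io has no data yet. While it is suspended, another fiber can call
    # this method, fill @read_buffer, and consume part of it. When this
    # fiber resumes and its retried read succeeds, read_nonblock first
    # clears the buffer it was given (here the shared @read_buffer),
    # discarding whatever the other fiber left in it.
    @io.read_nonblock(size, @read_buffer, exception: false)
  end
end
```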

This PR fixes the race condition by using a fresh buffer for every read.
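A sketch of the fixed method under the same assumptions: each call reads into a buffer that no other fiber can see, and appends it to @read_buffer only after the read has actually completed:

```ruby
# Sketch of the fix: read into a fresh, per-call buffer, so a fiber that was
# suspended inside read_nonblock can never clobber data another fiber has
# already accumulated in @read_buffer.
def fill_read_buffer(size = BLOCK_SIZE)
  buffer = String.new(encoding: Encoding::BINARY)

  # read_nonblock returns the buffer (a String) on success, :wait_readable
  # if it would block (without a scheduler), or nil at EOF.
  if @io.read_nonblock(size, buffer, exception: false).is_a?(String)
    @read_buffer << buffer
    true
  end
end
```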

cc @fables-tales

Types of Changes

  • Bug fix.

Contribution

@ioquatix (Member) commented:

In principle I agree with your assessment and the proposed fix. However, I question the premise: in what scenario do we read from the same stream using two fibers, and how is any kind of correctness maintained? I think the expectation is that only one fiber uses a stream; otherwise, without any other synchronisation, I'm not sure what we are expecting.

@ioquatix (Member) commented:

I am planning to archive this code base.

The relevant code has been moved to the io-stream gem: https://github.com/socketry/io-stream.

Feel free to create a new issue/pull request there if you feel strongly about this problem, and we can continue the discussion.

Thanks for your efforts.

@ioquatix closed this Aug 29, 2024.
