Open
Labels: bug (Something isn't working)
Description
Consider a call flow like:

- `@read_buffer` in our underlying async/io/stream contains exactly 9 bytes.
- `read_frame` takes the 9 bytes off the underlying `@read_buffer` (in `consume_read_buffer`), completely draining `@read_buffer`, and successfully gets the header.
- It reads the payload, times out; `@read_buffer` is still empty, we do not parse the frame, and we exit the call flow.
- We retry `read_frame` with a higher timeout.
- We enter `read_header` again, which calls `fill_read_buffer` (filling the buffer with ~thousands of bytes). `@read_buffer` now contains the payload of the previous frame, instead of a valid frame header, and we get a protocol error.
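The desynchronization can be reproduced with plain byte arithmetic. A minimal sketch using `StringIO` (the 9-byte header layout matches HTTP/2's frame header, but the names and helpers here are illustrative, not the library's):

```ruby
require "stringio"

HEADER_SIZE = 9

# Build a simplified frame: 24-bit length, 8-bit type, 8-bit flags,
# 32-bit stream id, followed by the payload.
def build_frame(payload)
  length = payload.bytesize
  header = [length >> 16, length & 0xFFFF, 0x0, 0x0, 0x1].pack("CnCCN")
  header + payload
end

stream = StringIO.new(build_frame("hello"))

# Step 1: the header read succeeds and consumes exactly 9 bytes.
header = stream.read(HEADER_SIZE)

# Step 2: imagine the payload read times out here, *after* the header
# was consumed. On retry, a naive read_frame calls read_header again,
# but the stream is now positioned at the payload:
misread = stream.read(HEADER_SIZE)
# misread is "hello" — payload bytes, not a header.
```

Any bytes pulled out of `misread` and interpreted as a frame header are garbage, which is the protocol error described above.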
I think in this case the "right" thing to do is to put the 9 bytes back in the read buffer, or to hold the frame header and retry reading the payload, instead of trying to read a header out of what is certainly payload.
```ruby
def read_frame(maximum_frame_size = MAXIMUM_ALLOWED_FRAME_SIZE)
	# Read the header:
	length, type, flags, stream_id = read_header # <- second time we come here, we're reading payload bytes, not header bytes
	# Async.logger.debug(self) {"read_frame: length=#{length} type=#{type} flags=#{flags} stream_id=#{stream_id} -> klass=#{@frames[type].inspect}"}
	
	# Allocate the frame:
	klass = @frames[type] || Frame
	frame = klass.new(stream_id, flags, type, length)
	
	# Read the payload:
	frame.read(@stream, maximum_frame_size) # <- timeout occurs here
	# Async.logger.debug(self, name: "read") {frame.inspect}
	
	return frame
end
```
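One way to implement the "hold the frame header and retry the payload" idea is to cache the parsed header and only clear it once the payload has been fully read. This is a hypothetical sketch, not the library's code: `FrameReader`, `FlakyStream`, and the simplified header/frame format are all assumptions made for illustration.

```ruby
require "stringio"

# Simulates a stream whose first payload read times out (raises) after
# the 9-byte header has already been consumed.
class FlakyStream < StringIO
  def read(length = nil)
    if pos == 9 && !@timed_out
      @timed_out = true
      raise IOError, "simulated read timeout"
    end
    super
  end
end

class FrameReader
  HEADER_SIZE = 9

  def initialize(stream)
    @stream = stream
    @pending_header = nil
  end

  def read_frame
    # Reuse the cached header if a previous payload read timed out:
    @pending_header ||= read_header
    length, type, flags, stream_id = @pending_header

    payload = @stream.read(length)

    # If the payload read raised, @pending_header stays set and the next
    # call retries the payload instead of re-reading the header. We only
    # clear it once the whole frame has been consumed:
    @pending_header = nil

    [type, flags, stream_id, payload]
  end

  private

  def read_header
    header = @stream.read(HEADER_SIZE)
    length_high, length_low, type, flags, stream_id = header.unpack("CnCCN")
    [(length_high << 16) | length_low, type, flags, stream_id]
  end
end

# A frame with a 5-byte payload, in the same simplified layout:
frame_bytes = [0, 5, 0x0, 0x0, 0x1].pack("CnCCN") + "hello"
reader = FrameReader.new(FlakyStream.new(frame_bytes))

begin
  reader.read_frame
rescue IOError
  # First attempt: header parsed, payload timed out.
end

# Retry: the cached header is reused, and the payload read succeeds.
type, flags, stream_id, payload = reader.read_frame
```

Putting the 9 bytes back into `@read_buffer` would achieve the same invariant (every `read_header` call sees the stream positioned at a header boundary); caching the parsed header just avoids re-parsing it.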