Retry #127
Conversation
When trying to add Retry for a server, I found that the API that handles multiple packets was inconvenient. There is no real need to have the loop inside Connection anyway, so I externalized it. This simplifies the API and implementation a decent amount and should help avoid problems.

This isn't complete at the moment because we don't set any timeout when there is more data to send. As a result, if we have more to say than fits in a single packet, we won't say it until given the opportunity. For the moment, that means either hitting the loss recovery timer or getting a new packet inbound. The way that the generators are structured makes it very difficult to do that right now because we have a single generator squatting on the queue always.

As part of fixing process_input and process_output, I removed the idle timeout code that was there. It wasn't in any way right, so I am not concerned about its loss; I just moved the TODO.
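To illustrate the shape of the externalized loop, here is a minimal sketch. `MockConnection` and the one-datagram-per-call `process_output` signature are assumptions for illustration, not the actual neqo API; the point is that the caller, not Connection, now drives the send loop.

```rust
// `MockConnection` stands in for neqo's Connection (illustrative only).
struct MockConnection {
    pending: Vec<Vec<u8>>, // datagrams queued for sending
}

impl MockConnection {
    // Produce at most one datagram per call; None means nothing left to send.
    fn process_output(&mut self) -> Option<Vec<u8>> {
        if self.pending.is_empty() {
            None
        } else {
            Some(self.pending.remove(0))
        }
    }
}

// The loop now lives outside the connection: the caller drains datagrams
// until the connection reports it has nothing more to say.
fn drain(conn: &mut MockConnection) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    while let Some(dgram) = conn.process_output() {
        out.push(dgram);
    }
    out
}
```

With this split, a server holding many connections can interleave draining them with socket writes, rather than each Connection looping internally.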
neqo-transport/src/connection.rs (outdated)

```diff
@@ -245,11 +301,20 @@ impl Connection {
     tps.set_empty(tp_const::DISABLE_MIGRATION);
 }

 fn new<A: ToString, I: IntoIterator<Item = A>>(
 pub(crate) fn original_connection_id(&mut self, odcid: &ConnectionId) {
```
Depending on the outcome of Retry discussions, it might make sense to move this into the constructor. A private constructor that included the ODCID would be a potential path forward. We need to ensure that the server creates epoch 0 keys at the right time and that it doesn't end up creating them twice if it doesn't need to.
neqo-transport/src/connection.rs (outdated)

```diff
@@ -831,13 +910,14 @@ impl Connection {
 }

 qdebug!([self] "Need to send a packet");
 initial_only = epoch == 0;
 // Packets without Handshake or 1-RTT need to be padded.
 needs_padding = epoch <= 1;
```
Note that this is wrong, but the other PR is correct.
fixes #55, looks like?
Use enum IdleTimeout to track state of idle timeout. Set idle timeout in transport parameters.

API CHANGE: Connection now exposes both a process_input() and process_timer() method, instead of process_input() handling both incoming packets and timer expiry.

Arbitrarily pick 60 seconds as idle timeout value.

fixes #38
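A minimal sketch of what an idle-timeout state enum plus a `process_timer()`-style expiry check could look like. The variant and method names here are assumptions for illustration, not neqo's actual `IdleTimeout` definition; only the 60-second value comes from the commit message.

```rust
use std::time::{Duration, Instant};

// Arbitrary value, per the commit message.
const IDLE_TIMEOUT: Duration = Duration::from_secs(60);

// Illustrative state enum: either the timer has never been armed,
// or there is a deadline after which the connection counts as idle.
#[derive(Debug, PartialEq)]
enum IdleTimeout {
    Init,
    Armed(Instant),
}

impl IdleTimeout {
    // Any packet activity re-arms the timer.
    fn on_activity(&mut self, now: Instant) {
        *self = IdleTimeout::Armed(now + IDLE_TIMEOUT);
    }

    // A process_timer()-style check: has the idle deadline passed?
    fn expired(&self, now: Instant) -> bool {
        match self {
            IdleTimeout::Init => false,
            IdleTimeout::Armed(deadline) => now >= *deadline,
        }
    }
}
```

Separating timer expiry into its own entry point means the caller can wake on a timer without synthesizing a fake input packet, which is the API change described above.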
Initial feedback. Sorry it's so, so, so tardy.
👍
Pull request has been modified: force-pushed from f8fc5c6 to 98ebfea.
This is a big set of changes (sorry), but once it is rebased, the core set are hard to disentangle. This is, until it is rebased, just a draft for the purposes of discussion and preview.

The main change here is to create a proper, functioning `Server` class that manages multiple server connections. It does so with the aid of a new indirection for connections: a `ConnectionIdManager`. That trait embodies both the ability to generate new connection IDs and the ability to parse them out of short header packets.

The new `Server` has some added responsibilities, including owning the `ConnectionIdManager` and capturing requests to generate connection IDs.

I haven't done the remaining `Server` work; all of that will have to be follow-on issues and pull requests.

I started on the HTTP server conversion and ran into too many problems trying to disentangle client and server parts. That code will need some serious rearchitecting to fit into this model. There are a few small things that the `Server` class needs to add in order to support HTTP, but those are minor and can be added as part of any integration.