Retry #127


Merged
merged 43 commits into master on Sep 18, 2019

Conversation

martinthomson
Member

This is a big set of changes (sorry), but the core set is hard to disentangle. Until it is rebased, this is just a draft for the purposes of discussion and preview.

The main change here is to create a proper, functioning Server class that manages multiple server connections. It does so with the aid of a new indirection for connections: a ConnectionIdManager. That trait embodies both the ability to generate new connection IDs and the ability to parse them out of short header packets.

The new Server has some added responsibilities:

  • It manages which connection has the next timer.
  • It adds connection IDs to a map as the connection asks for them (by wrapping the ConnectionIdManager and capturing requests to generate connection IDs).
  • It generates Version Negotiation and Retry packets as needed.
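As a sketch of the kind of indirection described above, a trait combining both abilities (generating connection IDs and recovering them from short-header packets) might look roughly like this. The names `generate_cid`, `decode_cid`, and `FixedCidManager` are illustrative, not the PR's actual API:

```rust
/// Hypothetical sketch: generates new connection IDs and recovers them
/// from the front of short-header packets. The real trait may differ.
pub trait ConnectionIdManager {
    /// Produce a fresh connection ID.
    fn generate_cid(&mut self) -> Vec<u8>;
    /// Extract the connection ID from a short-header packet, if possible.
    fn decode_cid(&self, packet: &[u8]) -> Option<Vec<u8>>;
}

/// Simplest possible manager: fixed-length, counter-based IDs, so a
/// server always knows how many bytes to peel off an incoming packet.
pub struct FixedCidManager {
    len: usize,
    counter: u64,
}

impl ConnectionIdManager for FixedCidManager {
    fn generate_cid(&mut self) -> Vec<u8> {
        self.counter += 1;
        // Big-endian counter, padded/truncated to the fixed length.
        let mut cid = self.counter.to_be_bytes().to_vec();
        cid.resize(self.len, 0);
        cid
    }

    fn decode_cid(&self, packet: &[u8]) -> Option<Vec<u8>> {
        // Short-header packets carry the CID at a known offset; with a
        // fixed length, parsing is just slicing.
        packet.get(..self.len).map(|b| b.to_vec())
    }
}
```

A server wrapping this trait can observe each generated CID and insert it into its connection map, which is exactly the "capturing requests" behavior described above.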

I haven't done:

  • the self-encryption piece for Retry
  • conversion of the HTTP server part to use Server
  • conversion of command-line tools to use this

All of those will have to be follow-on issues and pull requests.

I started on the HTTP server conversion and ran into too many problems trying to disentangle client and server parts. That code will need some serious rearchitecting to fit into this model. There are a few small things that the Server class needs to add in order to support HTTP, but those are minor and can be added as part of any integration.

martinthomson and others added 18 commits August 2, 2019 13:41
When trying to add Retry for a server, I found that the API that handles
multiple packets was inconvenient.  There is no real need to have the
loop inside Connection anyway, so I externalized it.  This simplifies
the API and implementation a decent amount and should help avoid
problems.

This isn't complete at the moment because we don't set any timeout when
there is more data to send.  As a result, if we have more to say than
fits in a single packet, we won't say it until given the opportunity.
For the moment, that means either hitting the loss recovery timer or
getting a new packet inbound.  The way that the generators are
structured makes it very difficult to do that right now because we have
a single generator squatting on the queue always.

As part of fixing process_input and process_output, I removed the idle
timeout code that was there.  It wasn't in any way right, so I am not
concerned about its loss; I just moved the TODO.
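The externalized loop this commit describes might look roughly like the following sketch, where the caller iterates over datagrams instead of Connection doing so internally. The `Datagram` and `Connection` types here are minimal stand-ins, not the crate's real ones:

```rust
/// Stand-in for a received UDP datagram.
struct Datagram(Vec<u8>);

/// Minimal stand-in connection: just counts what it has processed.
struct Connection {
    processed: usize,
}

impl Connection {
    /// Handle a single datagram; the loop now lives in the caller,
    /// which simplifies the Connection API.
    fn process_input(&mut self, _d: Datagram) {
        self.processed += 1;
    }
}

/// The loop that used to be inside Connection, hoisted to the caller.
fn process_all(conn: &mut Connection, datagrams: Vec<Datagram>) {
    for d in datagrams {
        conn.process_input(d);
    }
}
```

Moving the loop out also lets a server multiplex: it can inspect each datagram first and route it to the right connection before calling process_input.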
```diff
@@ -245,11 +301,20 @@ impl Connection {
        tps.set_empty(tp_const::DISABLE_MIGRATION);
    }

    fn new<A: ToString, I: IntoIterator<Item = A>>(
    pub(crate) fn original_connection_id(&mut self, odcid: &ConnectionId) {
```
martinthomson (Member, Author) commented:

Depending on the outcome of Retry discussions, it might make sense to move this into the constructor. A private constructor that included the ODCID would be a potential path forward. We need to ensure that the server creates epoch 0 keys at the right time and that it doesn't end up creating them twice if it doesn't need to.

```diff
@@ -831,13 +910,14 @@ impl Connection {
        }

        qdebug!([self] "Need to send a packet");
        initial_only = epoch == 0;
        // Packets without Handshake or 1-RTT need to be padded.
        needs_padding = epoch <= 1;
```
martinthomson (Member, Author) commented:

Note that this is wrong, but the other PR is correct.

@agrover
Contributor

agrover commented Aug 8, 2019

fixes #55, looks like?

@martinthomson martinthomson marked this pull request as ready for review August 13, 2019 07:12
@martinthomson
Member Author

This should fix a few different issues. It does Retry for both server (#55) and client (#15), as well as providing a basic server at the transport layer (#20).

In reviewing this, I'm happy to take input on how this might be split into smaller pieces.

martinthomson and others added 3 commits August 13, 2019 17:33
Use enum IdleTimeout to track state of idle timeout.

Set idle timeout in transport parameters.

API CHANGE: Connection now exposes both a process_input() and
process_timer() method, instead of process_input() handling
both incoming packets and timer expiry.

Arbitrarily pick 60 seconds as idle timeout value.

fixes #38
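A minimal sketch of what such an IdleTimeout state machine and the process_timer() side of the API split might look like, assuming illustrative variant names rather than the commit's actual ones:

```rust
use std::time::{Duration, Instant};

/// Hypothetical sketch of an idle-timeout state machine; the enum in
/// the actual commit may carry different variants and data.
enum IdleTimeout {
    /// No activity observed yet.
    Init,
    /// Armed: the connection goes idle at this deadline.
    PendingAt(Instant),
    /// The idle timeout fired.
    Expired,
}

impl IdleTimeout {
    /// The arbitrary 60-second value mentioned in the commit message.
    const TIMEOUT: Duration = Duration::from_secs(60);

    /// Record activity: push the deadline out by the idle timeout.
    fn on_activity(&mut self, now: Instant) {
        if !matches!(self, IdleTimeout::Expired) {
            *self = IdleTimeout::PendingAt(now + Self::TIMEOUT);
        }
    }

    /// Called from a process_timer()-style entry point (separate from
    /// process_input()): mark expiry once the deadline has passed.
    fn on_timer(&mut self, now: Instant) {
        if let IdleTimeout::PendingAt(deadline) = self {
            if now >= *deadline {
                *self = IdleTimeout::Expired;
            }
        }
    }
}
```

Splitting timer expiry into its own method means process_input() no longer has to guess whether a call was driven by a packet or a timer, which is the API change the commit describes.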
agrover (Contributor) left a comment:

initial feedback. Sorry it's so so so tardy.

agrover previously approved these changes Sep 17, 2019

agrover (Contributor) left a comment:
👍

@mergify mergify bot dismissed agrover’s stale review September 18, 2019 03:42

Pull request has been modified.

@martinthomson martinthomson merged commit 6c012fb into master Sep 18, 2019
@martinthomson martinthomson deleted the retry-proper branch September 27, 2019 11:06