Implement Request Stream for UDP Tracker #873
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@            Coverage Diff             @@
##           develop     #873     +/-   ##
===========================================
+ Coverage    78.53%   78.75%   +0.21%
===========================================
  Files          171      170       -1
  Lines         9458     9873     +415
===========================================
+ Hits          7428     7775     +347
- Misses        2030     2098      +68

☔ View full report in Codecov by Sentry.
@josecelano This pull request is ready for review.
Hi @da2ce7, I haven't reviewed the code yet, but I have executed the UDP load test:
Current version
MAX
Requests out: 435973.31/second
Responses in: 392346.07/second
- Connect responses: 194332.01
- Announce responses: 194108.16
- Scrape responses: 3905.90
- Error responses: 0.00
Peers per announce response: 0.00
Announce responses per info hash:
- p10: 1
- p25: 1
- p50: 1
- p75: 1
- p90: 2
- p95: 3
- p99: 105
- p99.9: 315
- p100: 391
MIN
Requests out: 371407.92/second
Responses in: 334267.17/second
- Connect responses: 165563.03
- Announce responses: 165413.11
- Scrape responses: 3291.02
- Error responses: 0.00
Peers per announce response: 0.00
Announce responses per info hash:
- p10: 1
- p25: 1
- p50: 1
- p75: 1
- p90: 2
- p95: 3
- p99: 104
- p99.9: 271
- p100: 339
This version
MAX
Requests out: 157926.53/second
Responses in: 142133.56/second
- Connect responses: 70206.70
- Announce responses: 70498.45
- Scrape responses: 1428.41
- Error responses: 0.00
Peers per announce response: 0.00
Announce responses per info hash:
- p10: 1
- p25: 1
- p50: 1
- p75: 1
- p90: 2
- p95: 10
- p99: 78
- p99.9: 125
- p100: 156
MIN
Requests out: 79844.49/second
Responses in: 71860.52/second
- Connect responses: 35539.87
- Announce responses: 35614.59
- Scrape responses: 706.05
- Error responses: 0.00
Peers per announce response: 0.00
Announce responses per info hash:
- p10: 1
- p25: 1
- p50: 1
- p75: 1
- p90: 4
- p95: 16
- p99: 49
- p99.9: 68
- p100: 88
@josecelano Reimplemented with a much simpler (and not slower?) version. :)
Hey @da2ce7, cool! I will review it tomorrow after the meeting. I'm almost finished with the configuration refactorings.
Hi @da2ce7, this version is even better than the previous one 🥇!! At least with this concrete performance test that I'm running. I will review the code. I have to double-check that the current version has the same result. The computer is exactly the same; however, we updated the BIOS. I guess that should not affect it.
The current version (target branch) is slightly better at the max but, on average, much slower than the new one (this PR). The new MIN is almost as good as the previous MAX. I will run the whole benchmarking again.

NEW MIN: Requests out: 433379.95/second
CURRENT MAX: Requests out: 448806.70/second
Hi @da2ce7, I think this improves two important things:
- It removes the mess of "processor" functions.
- It introduces some abstractions (Receiver, BoundSocket, ...) that make the code more straightforward and clear.
And it significantly improves performance, so I think we should merge it. All my comments are only suggestions to potentially improve readability, and they can be addressed later.
});
let halt_task = tokio::task::spawn(shutdown_signal_with_message(
    rx_halt,
    format!("Halting Http Service Bound to Socket: {bind_to}"),
Hi @da2ce7, I think the message should be "Halting UDP Service Bound to Socket: {bind_to}".
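A minimal sketch of the suggested fix, reusing the names from the diff above (`shutdown_signal_with_message`, `rx_halt`, `bind_to`):

```rust
// Only the message text changes: "Http" becomes "UDP".
let halt_task = tokio::task::spawn(shutdown_signal_with_message(
    rx_halt,
    format!("Halting UDP Service Bound to Socket: {bind_to}"),
));
```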
// note: this log message is parsed by our container. i.e:
//
// `[UDP TRACKER][INFO] Starting on: udp://`
//
tracing::info!(target: "UDP TRACKER", "Starting on: {local_addr}");
Hi @da2ce7, maybe we could make this dependency explicit. We could move the const from:
- https://github.com/torrust/torrust-tracker/blob/develop/src/console/ci/e2e/logs_parser.rs#L5-L8
- https://github.com/torrust/torrust-tracker/blob/develop/src/console/ci/e2e/logs_parser.rs#L67-L69
to the UDP server mod. Ideally, we should also use a field in the log now that we are using tracing. But that could be implemented in a new PR.
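For illustration, the shared constant might look like this; the const name is an assumption, and the real string today lives in `logs_parser.rs`:

```rust
/// Log target shared by the UDP server and the e2e log parser,
/// so the parsed prefix and the emitted one cannot drift apart.
pub const UDP_TRACKER_LOG_TARGET: &str = "UDP TRACKER";

// The server would then log via the shared constant, e.g.:
// tracing::info!(target: UDP_TRACKER_LOG_TARGET, "Starting on: {local_addr}");
```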
@@ -235,6 +234,84 @@ impl Drop for ActiveRequests {
    }
}

/// Wrapper for Tokio [`UdpSocket`][`tokio::net::UdpSocket`] that is bound to a particular socket.
struct Socket {
Hi @da2ce7, I would call it BoundSocket and add a local_addr() method.
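A minimal sketch of the suggestion; the inner field is an assumption based on the diff above:

```rust
use std::net::SocketAddr;
use std::sync::Arc;

use tokio::net::UdpSocket;

/// Wrapper for a Tokio `UdpSocket` that is bound to a particular address.
struct BoundSocket {
    socket: Arc<UdpSocket>,
}

impl BoundSocket {
    /// Returns the local address the wrapped socket is bound to.
    fn local_addr(&self) -> std::io::Result<SocketAddr> {
        self.socket.local_addr()
    }
}
```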
let direct = Receiver {
    socket,
    tracker,
    data: RefCell::new([0; MAX_PACKET_SIZE]),
};
Hi @da2ce7, why did you call this "direct"?
@josecelano Yes, this is a poor name. It comes from an experiment where the requests were 'directly' made...
Hi @da2ce7, OK.
Some(Ok(tokio::task::spawn(Udp::process_request(
    request,
    self.tracker.clone(),
    self.socket.clone(),
))
.abort_handle()))
@da2ce7 I like this trait implementation. However, I like the idea of having a more "pure" receiver that only returns the request. I would move this spawn to the main loop, or maybe the Receiver could have another wrapper, StreamProcessor or something like that, that takes the requests and launches the worker. I think in the future we could have a limited number of workers instead of spawning a new task, although I guess Tokio handles that.
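For illustration, a rough sketch of that split; all names here (`RawRequest`, the channel-backed `Receiver`) are assumptions, not the PR's actual types:

```rust
use tokio::sync::mpsc;

// Stand-in for the PR's request type (assumption for illustration).
struct RawRequest;

/// A "pure" receiver that only yields requests, without spawning anything.
struct Receiver {
    rx: mpsc::Receiver<RawRequest>,
}

impl Receiver {
    async fn next(&mut self) -> Option<RawRequest> {
        self.rx.recv().await
    }
}

/// Wrapper that takes requests from the receiver and launches the workers.
struct StreamProcessor {
    receiver: Receiver,
}

impl StreamProcessor {
    async fn run(mut self) {
        while let Some(request) = self.receiver.next().await {
            // In the real code this would be `Udp::process_request(...)`.
            tokio::task::spawn(async move {
                drop(request); // handle the request here
            });
        }
    }
}
```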
ACK 84cc1a1
Hi @da2ce7, just for the record: this PR behaves the same way as the current version. We have a limit of 50 active requests. When the server has more than 50 simultaneously, it starts aborting the oldest tasks. As far as I can see, this enables a kind of indirect timeout. Maybe we should increase the buffer and set an explicit timeout to process UDP requests. I guess that would add overhead, and in the end the effect would be the same: some requests are aborted when the server is overloaded. The advantage of this solution is that the timeout is somehow dynamic. Just out of curiosity, I've run the load test for 1 minute on my PC, and there were 6 aborted requests. We have to check that this is not affecting the benchmarking. I don't think so, because the UDP load tests count the responses.
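For reference, a minimal sketch (an assumption, not the PR's exact code) of the abort-the-oldest behavior described above:

```rust
use std::collections::VecDeque;

use tokio::task::AbortHandle;

/// Upper bound on simultaneously active request tasks (50 in the PR).
const MAX_ACTIVE_REQUESTS: usize = 50;

struct ActiveRequests {
    handles: VecDeque<AbortHandle>,
}

impl ActiveRequests {
    /// Registers a new task, aborting the oldest one(s) when the buffer is full.
    fn track(&mut self, handle: AbortHandle) {
        while self.handles.len() >= MAX_ACTIVE_REQUESTS {
            if let Some(oldest) = self.handles.pop_front() {
                // Aborting the oldest task acts as an indirect, dynamic timeout.
                oldest.abort();
            }
        }
        self.handles.push_back(handle);
    }
}
```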
Hi @da2ce7, I'm working on the refactorings I've suggested in this PR: #913. Now that I'm working on the code and not just reviewing it, I've noticed one thing. When we receive a new request, we immediately spawn a new task to handle it. However, we limit the number of active requests to 50. If we are already handling 50 requests and a new request comes in, we spawn a new task and remove the oldest tasks from the active-requests buffer. Wouldn't it make sense not to spawn new tasks if the active-requests buffer is already full? We could have a "pending to activate requests" buffer in front of the active-requests buffer (see the sketch below, after the pros and cons). When tasks in the active buffer finish, we would take more requests from the pending buffer and spawn new tasks to handle them. Pros:
Cons:
By the way, do you want to continue working on this PR? I can stop working on the new one, but I think it's a good thing to alternate working on the same code to share code ownership.
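A rough sketch of the pending-buffer idea, under the assumption that a bounded channel holds the waiting requests and a semaphore caps the active set (all names here are illustrative):

```rust
use std::sync::Arc;

use tokio::sync::{mpsc, Semaphore};

// Stand-in for the PR's request type (assumption for illustration).
struct PendingRequest;

/// Drains a bounded "pending" buffer, only spawning a worker task when a
/// slot in the active set is free, instead of spawning and later aborting.
async fn dispatch(mut pending: mpsc::Receiver<PendingRequest>, active_limit: usize) {
    let slots = Arc::new(Semaphore::new(active_limit));
    while let Some(request) = pending.recv().await {
        // Waits here while the active set is full; no task is aborted.
        let permit = slots.clone().acquire_owned().await.expect("semaphore closed");
        tokio::task::spawn(async move {
            drop(request); // handle the request here
            drop(permit);  // frees a slot for the next pending request
        });
    }
}
```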
f06976e docs: update some UDP server comments (Jose Celano)
89bb735 refactor: reorganize UDP server mod (Jose Celano)
c121bf2 refactor: rename UDP server types (Jose Celano)
336e0e6 refactor: reorganize mod to extract new submods (Jose Celano)
61fb4b2 refactor: move active requests logic to ActiveRequest type (Jose Celano)
35b6c84 refactor: simplify UDP server receiver (Jose Celano)
a5e2baf refactor: extract method (Jose Celano)
b4b4515 refactor: extract const for logging targets (Jose Celano)
0388e1d refactor: extract consts for logging targets (Jose Celano)
16ae4fd refactor: rename vars and extract constructor (Jose Celano)
7ff0cd2 refactor: rename var (Jose Celano)
0e3678d refactor: rename Socket to BoundSocket and fix format errors (Jose Celano)
9b3b75b fix: log message (Jose Celano)

Pull request description:

This PR only includes some minor changes I've proposed in this [PR](#873), and some refactorings.

ACKs for top commit:
josecelano: ACK f06976e

Tree-SHA512: a7a4ea14077c2ce6df0b80b222952d0c6c6588f1df50f78d01198ea0ab12ce3ca74923caed6601994e955880e2afe3d9432f87ae74f383362fae452e367ad359
This pull request improves the UDP tracker server.rs request processing logic: it now uses a tokio::stream for getting the requests for the ring buffer. I hope this pull request will improve the performance of the tracker in multi-core setups.
@josecelano This code will require a special code-review.
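For readers unfamiliar with the pattern, a request stream over a UDP socket might look roughly like this; the names, buffer size, and error handling are assumptions rather than the PR's actual code:

```rust
use std::net::SocketAddr;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};

use futures_util::Stream;
use tokio::io::ReadBuf;
use tokio::net::UdpSocket;

/// Assumed maximum datagram size; the PR defines its own MAX_PACKET_SIZE.
const MAX_PACKET_SIZE: usize = 8192;

/// A stream of raw UDP packets that the server loop can consume with
/// `StreamExt::next()` instead of hand-rolled "processor" functions.
struct RequestStream {
    socket: Arc<UdpSocket>,
}

impl Stream for RequestStream {
    type Item = (Vec<u8>, SocketAddr);

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let mut buf = [0u8; MAX_PACKET_SIZE];
        let mut read_buf = ReadBuf::new(&mut buf);
        match self.socket.poll_recv_from(cx, &mut read_buf) {
            Poll::Ready(Ok(from)) => Poll::Ready(Some((read_buf.filled().to_vec(), from))),
            // Ending the stream on a socket error is a simplification for this sketch.
            Poll::Ready(Err(_)) => Poll::Ready(None),
            Poll::Pending => Poll::Pending,
        }
    }
}
```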