Implement concurrent pushes across batches #3206
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #3206      +/-   ##
==========================================
+ Coverage   72.68%   73.24%   +0.55%
==========================================
  Files         259      259
  Lines       19864    19889      +25
==========================================
+ Hits        14439    14567     +128
+ Misses       4521     4400     -121
- Partials      904      922      +18
```
LGTM, but I would prefer that we make some changes to some of the math, mostly removing parts of it.
LGTM too 🤝
Non-blocking on my part, but I share the same concern as @mstoykov regarding dispatching work to multiple workers. I think the code could indeed be simplified to use one input channel with multiple workers reading from it concurrently; I find that might also make the logic easier to follow.
🚀 🚀 🚀
LGTM!
What?
This change moves the concurrency from flushes to batches: the batches within a flush are now pushed concurrently, rather than the flushes themselves. That fixes issue #3192 and generally improves processing.
Why?
Even if a single network request to the ingester is fast enough, executing the requests sequentially adds their latencies up and increases the total flush time.
Checklist
- I have run the linter locally (`make ci-like-lint`) and all checks pass.
- I have run the tests locally (`make tests`) and all tests pass.

Related PR(s)/Issue(s)
Closes #3192