Multithreading via C++ thread pool of clients #125
Conversation
Would you please show a benchmark of the performance improvement your modifications bring?
Alright, I think this is ready to be reviewed/merged. I also rewrote some parts of the README unrelated to this PR for the sake of clarity. I'll start tracking gevent support for threaded clients in a greenify PR. Ideally that shouldn't affect the implementation here. Edit: actually, one more thing; I meant to look into golang bindings.
It looks like gomemcache is already thread safe, so I'm back to thinking this PR is done.
I've closed the greenify PR for being unsolvable. Unless you have feedback, this pull request is done. Please squash & merge. Thanks for your time, I know this is a lot of code to add at one time. The reason I made the fork is already satisfied by having it under my profile, so the pull request really is up to you.
I think the changes under … I do not agree with some of the README changes. What is … about? Sorry for replying so slowly, but it is really not easy to look over such a big pull request.
I think I've now removed all the parts of the pull request that aren't necessary for … Quick summary of the parts removed:
I'm currently expecting to make separate PRs to add back the changes in:
Here are some ideas on how to further split up the changes, along with what I see as the benefits and drawbacks of each.
Please let me know if there are any changes that would make reviewing easier.
std::atomic<int> rv = 0;
std::lock_guard<std::mutex> updating(m_fifo_access);
std::for_each(irange(0), irange(m_clients.size()),
//std::for_each(std::execution::par_unseq, irange(0), irange(m_clients.size()),
The loops for initializing and growing the client pool (it doubles in size each time it's overrun) were written using for_each in order to support parallelized initialization of the client objects, but at the scales I tested, the parallelization didn't make a measurable difference, hence the commented-out execution policy.
self.config(libmc.MC_MAX_CLIENTS, POOL_SIZE)
class FIFOThreadPool(ThreadPool):
This class is analogous to the C++ LockPool class and might be an easier way to understand the overall control flow.
🎉 🎉 🎉
I've implemented the C++ thread workers to be FILO under the assumption that reusing a single client is preferable when possible. The current implementation needs C++17 or above to parallelize some of the client setup (std::execution). Building still fails with the required version, but I figured I'd open a draft PR anyway in case there's feedback. I'd especially appreciate advice on a good testing setup for the concurrent code, along with any opinions about the language bindings.