incorrect locking in FinagleChirper Cache #164
Comments
Thanks @tkrodriguez for the investigation!
I'll wait for @axel22 to comment though, since some Prokopec guy seems to be behind the TrieMap [1].

[1] https://www.scala-lang.org/api/2.12.2/scala/collection/concurrent/TrieMap.html
Maybe Twitter has already fixed this issue in their version of the code? It might be nice to stay in sync with them, though I don't think it matters much here. Using a concurrent data structure instead of a synchronized one might affect throughput, though.
Well, the HashMap being corrupted by this missing lock is in our benchmark code, not in Finagle's. What kind of fix would you expect on the Finagle side?
OK, I wasn't clear on where the boundary between our code and their code was. Since it's our harness code, we can do whatever we want. From that perspective a lock-free implementation might be best, but I don't have strong opinions otherwise.
Thanks for the catch and the great analysis. I think that a concurrent hash map is the way to go.
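For reference, a sketch of what the concurrent-map fix could look like, using `scala.collection.concurrent.TrieMap` (the actual cache key and value types in the benchmark may differ; the names here are illustrative):

```scala
import scala.collection.concurrent.TrieMap

object FixedCacheSketch {
  // TrieMap is a lock-free concurrent map, safe to mutate from Future
  // callbacks without holding any external lock.
  private val cache = TrieMap.empty[String, Array[Byte]]

  def put(key: String, bytes: Array[Byte]): Unit =
    cache.update(key, bytes)   // atomic; no lock.synchronized needed

  def get(key: String): Option[Array[Byte]] =
    cache.get(key)
}
```

With this, the `lock.synchronized` wrapper around the cache update becomes unnecessary, sidestepping the problem of the callback running after the lock is released.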
We'd been seeing frequent hangs in finagle-chirper, and investigation indicated that the Cache HashMap was getting corrupted, with DefaultEntry buckets having a next pointer that points at itself. The cache update is syntactically within a lock.synchronized block, but since it happens inside a Future callback it is not executed while that lock is held. Adding a lock.wait(1) before the cache update shows that the lock isn't held when the cache is updated, which throws this
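A minimal sketch of the race described above (illustrative names, not the actual finagle-chirper code): the Future's callback is built inside the synchronized block but runs later on an executor thread, after the lock is released, so the HashMap mutation is effectively unsynchronized. A `lock.wait(1)` placed in the callback fails with `IllegalMonitorStateException`, since `wait` requires the caller to own the monitor.

```scala
import scala.collection.mutable
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object CacheSketch {
  private val lock  = new AnyRef
  private val cache = mutable.HashMap.empty[String, Array[Byte]]  // not thread-safe

  // Hypothetical backend call, standing in for the real service request.
  private def fetchFromBackend(key: String): Future[Array[Byte]] =
    Future(key.getBytes("UTF-8"))

  def cachedFetch(key: String): Future[Array[Byte]] =
    lock.synchronized {
      // synchronized only covers *constructing* this Future; the map
      // callback below runs later on an executor thread, after the
      // synchronized block has already exited.
      fetchFromBackend(key).map { bytes =>
        // lock.wait(1)             // would throw IllegalMonitorStateException here
        cache.update(key, bytes)    // unsynchronized HashMap mutation: can corrupt buckets
        bytes
      }
    }
}
```

Concurrent unsynchronized updates to `mutable.HashMap` can leave a bucket chain cyclic, which is exactly the self-pointing `next` pointer observed in the hangs.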