Possible race condition for near cache? #83
-
Near cache uses a synchronous listener for invalidation events, which are sent back by the service thread, in order. Basically, it is impossible for the front map to receive an invalidation before it receives the value itself, as both the GET request and the subsequent invalidation notification are processed by the same service thread, in the order they were received (and operations on a single key are always ordered, even when they execute on worker threads). At least, that's the theory -- if you are seeing different behavior it is a bug and we may need to investigate.
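For readers who want to see how that invalidation listener is wired up, here is a minimal sketch that builds the near cache programmatically (in practice this is normally done with a `<near-scheme>` in the cache configuration); the cache name, sizes, and keys are made up for illustration:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.CachingMap;
import com.tangosol.net.cache.LocalCache;
import com.tangosol.net.cache.NearCache;

public class NearCacheSketch {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // Back tier: the distributed (partitioned) cache, owned by the storage members.
        NamedCache<String, String> back = CacheFactory.getCache("example-back");

        // Front tier: a size-limited map held locally on this member.
        LocalCache front = new LocalCache(10_000);

        // LISTEN_PRESENT registers a synchronous listener for the keys currently held in
        // the front map; the primary owner's service thread delivers the value and any
        // subsequent invalidation for a key in order, which is what rules out the
        // "invalidation overtakes the value" race discussed above.
        NearCache<String, String> near = new NearCache<>(front, back, CachingMap.LISTEN_PRESENT);

        near.put("k1", "v1");
        System.out.println(near.get("k1")); // subsequent reads of k1 are served from the front map
    }
}
```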
-
So does that mean the update performed on the shared cache tier will be "forever lost" to the front cache that performed the read while the update occurred? If so, the optimization is not usable in our use case: we can handle a longer "eventual consistency" window, but not a "permanent inconsistency" between front and back.
It would be good to document this, as it is not clear from reading the docs (at least, I assumed a longer eventual-consistency window, but nothing worse than that, from the description).
On Thu, Nov 10, 2022, 23:37 Maurice Gamanho wrote:
@javafanboy From what I can see in the code (and @aseovic can confirm when he's got some cycles, PartitionedCache.onGetAllRequest), we do not lock the entries belonging to backup partitions, nor do we pin said partitions (pinning prevents them from moving while the operation is in progress) -- probably simply because the concept of locking entries does not apply to backups. On pinning: the primaries remain pinned, which means that the backups are transitively pinned.
The behavior then becomes what you describe initially, and strictly in the context of "read-locator", because then this becomes what we refer to as "incoherent reads".
Let us know if you need more details.
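To make the primary/backup distinction above concrete, here is a hedged sketch that prints which partition and primary owner serve each key of a getAll; the cache name and keys are hypothetical, and the comments merely restate the pinning behaviour described in the reply:

```java
import java.util.Map;
import java.util.Set;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

public class OwnershipSketch {
    public static void main(String[] args) {
        NamedCache<String, String> cache = CacheFactory.getCache("example-cache");
        PartitionedService service = (PartitionedService) cache.getCacheService();

        Set<String> keys = Set.of("k50", "k51", "k52");
        for (String key : keys) {
            int partition = service.getKeyPartitioningStrategy().getKeyPartition(key);
            System.out.println(key + " -> partition " + partition
                    + ", primary owner " + service.getKeyOwner(key));
        }

        // getAll is served by the primary owners of these partitions; the primaries stay
        // pinned for the duration of the request (and the backups transitively with them),
        // while backup entries themselves are neither locked nor pinned.
        Map<String, String> values = cache.getAll(keys);
        System.out.println(values);
    }
}
```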
-
Sounds great, then the optimization is viable in our case. Thanks for looking into it!
On Mon, Nov 14, 2022 at 5:30 PM Maurice Gamanho wrote:
As far as I understand you will have a temporary inconsistency, but once the primary owner processes the update event the front will be invalidated, which in this worst-case scenario happens immediately after the read. This will prompt a refresh of the front, and won't leave a "permanent inconsistency".
-
Assume you perform a getAll of a number of keys (k1 to k100) from a near cache, and a subset (say k50 to k100) needs to be fetched from the distributed tier. Let's say the value "A" for k50 is read from one partition, and while the other keys are being read from other partitions, the value of k50 is updated to "B", so an invalidation for k50 is sent out. Is it then possible for the invalidation of k50 to arrive at the front cache before the old value "A" is returned (and erroneously added to the front cache)? Or does Coherence prevent this, for instance by keeping the version of the latest invalidation for each key, so that the stale value "A" is not inserted into the near cache?
Sorry if the description above was unclear; a sequence diagram would be better, but I was too lazy to draw one :-)
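For completeness, here is a rough code rendering of the scenario in the question (hypothetical cache name; the writer would normally be another cluster member, and the interleaving is not deterministic, so this only shows the shape of the race rather than a reliable reproduction):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class GetAllRaceSketch {
    public static void main(String[] args) {
        NamedCache<Integer, String> near = CacheFactory.getCache("example-near"); // assumes a near-scheme

        // Seed the back tier with "A" for k1..k100.
        IntStream.rangeClosed(1, 100).forEach(k -> near.put(k, "A"));

        Set<Integer> keys = IntStream.rangeClosed(1, 100).boxed().collect(Collectors.toSet());

        // Reader: bulk read through the near cache; any keys missing from the front
        // map are fetched from the distributed tier, partition by partition.
        CompletableFuture<Map<Integer, String>> read =
                CompletableFuture.supplyAsync(() -> near.getAll(keys));

        // Writer: updates k50 while the getAll may still be in flight, which makes
        // the primary owner send an invalidation event for k50.
        near.put(50, "B");

        Map<Integer, String> result = read.join();
        // Per the first reply: the GET for k50 and the subsequent invalidation are handled
        // by the same service thread in order, so the stale "A" cannot be re-inserted into
        // the front map after the invalidation for "B" has been applied.
        System.out.println("k50 read as: " + result.get(50) + ", now cached as: " + near.get(50));
    }
}
```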