Base Architecture #9
Conversation
* Track all optimistic updates individually via an [optimistic state queue](../src/OptimisticUpdateQueue.ts), where each update is represented in the same format as a GraphQL response payload.

* The cache tracks both a base graph snapshot and - if there are active optimistic updates - an optimistic graph snapshot. Every time either the raw snapshot changes, or the optimistic state queue changes, we regenerate the unified snapshot by replaying the optimistic updates on top of the base snapshot.
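To make the replay mechanism concrete, here is a minimal sketch of the queue-and-replay idea described above. The class and member names are hypothetical illustrations; the real interface lives in `src/OptimisticUpdateQueue.ts` and may differ. It models each update as a flat payload merged over the base snapshot, whereas the real cache operates on a graph of entities.

```typescript
// Hypothetical sketch of an optimistic state queue. Each update carries a
// payload shaped like a GraphQL response; the unified snapshot is produced
// by replaying all queued updates, in order, on top of the base snapshot.
type GraphQLPayload = Record<string, unknown>;

interface OptimisticUpdate {
  id: number;                // ties the update to its in-flight mutation
  payload: GraphQLPayload;   // same shape as a GraphQL response
}

class OptimisticUpdateQueue {
  private updates: OptimisticUpdate[] = [];

  enqueue(update: OptimisticUpdate): void {
    this.updates.push(update);
  }

  // Called when a mutation succeeds or fails: only this update is dropped;
  // the others remain replayable.
  remove(id: number): void {
    this.updates = this.updates.filter(u => u.id !== id);
  }

  // Regenerate the unified snapshot by replaying each queued update
  // over a copy of the base snapshot.
  apply(baseSnapshot: GraphQLPayload): GraphQLPayload {
    return this.updates.reduce(
      (snapshot, update) => ({ ...snapshot, ...update.payload }),
      { ...baseSnapshot },
    );
  }
}

const queue = new OptimisticUpdateQueue();
queue.enqueue({ id: 1, payload: { name: 'Draft Title' } });
queue.enqueue({ id: 2, payload: { starred: true } });
// Queued optimistic values win over the base snapshot.
const unified = queue.apply({ name: 'Old Title', starred: false });
```

Because removal is per-update, a single failed mutation invalidates only its own payload; regenerating the unified snapshot afterwards replays the survivors.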
I think it would be helpful to reason about optimistic updates by calling out explicit scenarios:
- The user is offline for an extended period of time. The app should behave as normal, and network calls can be replayed once reconnected.
- The user is mostly connected; updates will succeed the vast majority of the time.

In case 1 it seems reasonable to merge state changes directly into the cache. During an extended offline session, these changes would likely pile up and replaying events could become really slow.

In case 2 (well connected), fewer updates would pile up at a given point in time, so replaying them each time server data comes in wouldn't be too big of an issue.

One idea for a simplified optimistic update mechanism could be to merge all optimistic updates into the cache while still maintaining a queue of state changes. If one of the updates fails, you could bust the cache by pulling fresh state from the server and then replay the additional queued state changes.
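The simplified mechanism proposed here could be sketched roughly as follows. This is an illustration of the commenter's idea, not the design this PR adopts, and all names (`MergeAheadCache`, `applyOptimistic`, `fail`) are hypothetical:

```typescript
// Sketch of the "merge ahead, keep a queue for recovery" scheme: optimistic
// changes are merged straight into the cache, but the raw changes are also
// queued so the cache can be rebuilt after a failure.
type State = Record<string, unknown>;

class MergeAheadCache {
  private queue: { id: number; change: State }[] = [];

  constructor(private cache: State) {}

  applyOptimistic(id: number, change: State): void {
    this.queue.push({ id, change });
    this.cache = { ...this.cache, ...change }; // merge directly into the cache
  }

  acknowledge(id: number): void {
    // Server confirmed this update; its effects are already in the cache.
    this.queue = this.queue.filter(entry => entry.id !== id);
  }

  // On failure: bust the cache with fresh server state, drop the failed
  // update, then replay the remaining queued changes on top.
  fail(id: number, freshServerState: State): void {
    this.queue = this.queue.filter(entry => entry.id !== id);
    this.cache = this.queue.reduce(
      (state, entry) => ({ ...state, ...entry.change }),
      { ...freshServerState },
    );
  }

  read(): State {
    return this.cache;
  }
}

const mergeAhead = new MergeAheadCache({ count: 0 });
mergeAhead.applyOptimistic(1, { count: 1 });
mergeAhead.applyOptimistic(2, { label: 'x' });
mergeAhead.fail(1, { count: 0, label: 'server' }); // update 1 rejected
```

The trade-off, as the reply below this comment points out, is that recovery requires a round trip for fresh server state, which may be expensive to pull.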
> In case 1 it seems reasonable to merge state changes directly to the cache. During an extended offline session, these changes would likely pile up and replaying events could become really slow.
Unfortunately, I don't think it's safe to merge them directly into the cache:

- When we do end up getting back online, after the update finally flushes, the server may disagree, but we still want to represent the optimistic state until all optimistic updates have flushed.

- Those requests could still fail at some point, which should invalidate that specific update, but not others. If we merge directly into the cache, we lose all ability to safely roll back.

I think we can flip the merging around: merge all the updates into one delta (as opposed to replaying them one at a time), and just apply that merged update when the base store changes. It gets a little tricky, though, as each update can be rooted at a different node in the graph.

Also, pulling enough server state to cover all updates may be untenable (too much data, or too many joins).
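The "merge into one delta" idea could look something like the sketch below: collapse the queue into a single per-node delta, then apply that delta whenever the base store changes. The `nodeId` keying is one way to handle updates rooted at different nodes; all names here are hypothetical, not code from this PR.

```typescript
// Collapse many optimistic updates into one delta, keyed by the graph node
// each update is rooted at, then apply the merged delta in a single pass.
type NodeDelta = Record<string, unknown>;
type Delta = Map<string, NodeDelta>; // node id -> merged fields

function mergeUpdates(updates: { nodeId: string; fields: NodeDelta }[]): Delta {
  const merged: Delta = new Map();
  for (const { nodeId, fields } of updates) {
    // Later updates to the same node overwrite earlier fields.
    merged.set(nodeId, { ...(merged.get(nodeId) ?? {}), ...fields });
  }
  return merged;
}

function applyDelta(
  base: Map<string, NodeDelta>,
  delta: Delta,
): Map<string, NodeDelta> {
  // Copy-on-write: the base store is left untouched.
  const next = new Map(base);
  for (const [nodeId, fields] of delta) {
    next.set(nodeId, { ...(next.get(nodeId) ?? {}), ...fields });
  }
  return next;
}

const delta = mergeUpdates([
  { nodeId: 'user:1', fields: { name: 'Ada' } },
  { nodeId: 'user:1', fields: { starred: true } },
  { nodeId: 'post:7', fields: { title: 'Draft' } },
]);
const baseStore = new Map<string, NodeDelta>([
  ['user:1', { name: 'Old', starred: false }],
]);
const nextStore = applyDelta(baseStore, delta);
```

Note this sketch sidesteps the hard part called out above: it assumes each update's fields can be flattened onto a known node id, and it does not model rolling a single failed update out of an already-merged delta.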
docs/Architecture.md (outdated):
3. Verify that the query is satisfied by the cache. _The naive approach is to walk the selection set(s) expressed by the query; it's probably good enough for now_.

4. Return the query root, or view on top of it via (3).
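The naive walk in step 3 could be sketched like this. The `Selection` shape is a hypothetical simplification (real GraphQL selection sets also carry arguments, aliases, and fragments), and `isSatisfied` is an invented name:

```typescript
// Naive satisfaction check: recursively walk a query's selection set over
// the cached data and report whether every requested field is present.
interface Selection {
  name: string;
  children?: Selection[]; // sub-selections, if any
}

function isSatisfied(data: unknown, selections: Selection[]): boolean {
  if (typeof data !== 'object' || data === null) return false;
  const record = data as Record<string, unknown>;
  return selections.every(selection => {
    if (!(selection.name in record)) return false;
    if (!selection.children) return true;
    const value = record[selection.name];
    // Walk arrays element-by-element; objects directly.
    const nodes = Array.isArray(value) ? value : [value];
    return nodes.every(node => isSatisfied(node, selection.children!));
  });
}

const cached = { user: { id: '1', posts: [{ title: 'A' }] } };
const fullQuery: Selection[] = [
  {
    name: 'user',
    children: [
      { name: 'id' },
      { name: 'posts', children: [{ name: 'title' }] },
    ],
  },
];
const satisfied = isSatisfied(cached, fullQuery);
const missingField = isSatisfied(cached, [
  { name: 'user', children: [{ name: 'email' }] },
]);
```

A presence check like this treats `undefined`-but-present keys and genuinely missing data differently than a production cache would, which is part of why the doc hedges with "probably good enough for now".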
Do you mean (2)?
Doh, yup.
docs/Architecture.md (outdated):
4. Return the query root, or view on top of it via (3).

Generally, when reading, we want to return whatever data we have, as well as a status indicating whether the query was completely satisfying. The caller can determine what to do if not satisfied.
satisfying -> satisfied?
Herein lies an overview of the architecture, as well as the interfaces (and more detailed algorithms) that satisfy it.

I'd suggest starting with docs/Architecture.md, which explains the broad strokes. Then, follow up with the individual interfaces, roughly in this order:

- src/*Snapshot.ts, which describe the shape of the various pieces of the cache.
- src/Cache.ts, which is the entry point into this module, to (roughly) satisfy Apollo Client 2.0's new cache API.
- src/operations/* for the actual work we perform against the cache.

Note that I haven't gotten to the following pieces of the architecture:

However, I'm pretty confident they can be built given all the components outlined here. Will be fast-following this PR with their architecture, as well.