
Base Architecture #9

Merged
merged 31 commits into master from nevir/doc-architecture on Aug 3, 2017

Conversation


@nevir nevir commented Jul 28, 2017

Herein lies an overview of the architecture as well as the interfaces (and more detailed algorithms) that satisfy it.

I'd suggest starting with docs/Architecture.md, which explains the broad strokes. Then, follow up with the individual interfaces; roughly in this order:

src/*Snapshot.ts, which describe the shape of the various pieces of the cache.

src/Cache.ts, which is the entry point into this module, (roughly) satisfying Apollo Client 2.0's new cache API.

src/operations/*, for the actual work we perform against the cache.


Note that I haven't gotten to the following pieces of the architecture:

  • Query observers.
  • Apollo Client's abstraction around transactions, and optimistic transactions.

However, I'm pretty confident they can be built given all the components outlined here. I'll be fast-following this PR with their architecture, as well.

@nevir nevir force-pushed the nevir/doc-architecture branch 3 times, most recently from a1b467a to 32d5d39 on July 30, 2017 22:52
@nevir nevir changed the title from WIP to [WIP] Architecture on Jul 30, 2017
@nevir nevir force-pushed the master branch 10 times, most recently from 400e45f to c027a22 on August 1, 2017 01:17
@nevir nevir force-pushed the nevir/doc-architecture branch 2 times, most recently from 786d0b2 to 716b80c on August 1, 2017 21:27
@nevir nevir changed the title from [WIP] Architecture to Base Architecture on Aug 2, 2017

* Track all optimistic updates individually via an [optimistic state queue](../src/OptimisticUpdateQueue.ts), where each update is represented in the same format as a GraphQL response payload.

* The cache tracks both a base graph snapshot and, if there are active optimistic updates, an optimistic graph snapshot. Every time either the base snapshot changes, or the optimistic state queue changes, we regenerate the unified snapshot by replaying the optimistic updates on top of the base snapshot.
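The replay described above can be sketched roughly as follows; the types here are illustrative stand-ins, not the actual src/OptimisticUpdateQueue.ts interfaces:

```typescript
// A minimal sketch of regenerating the optimistic snapshot by replaying
// queued updates on top of the base snapshot.
type Snapshot = { [nodeId: string]: any };
// Each update is kept in the same shape as a GraphQL response payload,
// keyed here by node id for simplicity.
type Update = { id: string; payload: Snapshot };

function applyUpdate(base: Snapshot, update: Update): Snapshot {
  // Shallow-merge each node the update touches; the base is not mutated.
  const next: Snapshot = { ...base };
  for (const nodeId of Object.keys(update.payload)) {
    next[nodeId] = { ...(next[nodeId] || {}), ...update.payload[nodeId] };
  }
  return next;
}

// Re-run whenever either the base snapshot or the queue changes.
function optimisticSnapshot(base: Snapshot, queue: Update[]): Snapshot {
  return queue.reduce(applyUpdate, base);
}
```

Because the base snapshot is never mutated, rolling back a failed update is just dropping it from the queue and regenerating.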


I think that it would be helpful to reason about optimistic updates by calling out explicit scenarios:

  1. User is offline for an extended period of time. App should behave as normal and network calls can be replayed once reconnected.
  2. User is mostly connected, updates will succeed the vast majority of the time.

In case 1 it seems reasonable to merge state changes directly to the cache. During an extended offline session, these changes would likely pile up and replaying events could become really slow.

In case 2 (well connected), fewer updates would pile up at any given point in time, so replaying them each time server data comes in wouldn't be too big of an issue.

One idea for a simplified optimistic update mechanism could be to merge all optimistic updates to the cache while still maintaining a queue of state changes. If one of the updates fails, you could bust the cache pulling fresh state from the server and then replay the additional queued state changes.

@nevir nevir (Author) Aug 3, 2017

In case 1 it seems reasonable to merge state changes directly to the cache. During an extended offline session, these changes would likely pile up and replaying events could become really slow.

Unfortunately, I don't think it's safe to merge them directly into the cache:

  • When we do end up getting back online, after the update finally flushes, the server may disagree, but we still want to represent the optimistic state until all optimistic updates have flushed.

  • Those requests could still fail at some point, which should invalidate that specific update, but not others. If we merge directly into the cache, we lose all ability to safely roll back.


I think we can flip the merging around: merge all the updates into one delta (as opposed to replaying one at a time), and just apply that merged update when the base store changes. It gets a little tricky, though, as each update can be rooted at a different node in the graph.

Pulling enough server state to cover all updates may be untenable (too much data, or too many joins).
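A rough sketch of that flipped approach, assuming (naively) that per-node shallow merges are sufficient; handling updates rooted at different nodes in the graph is exactly the tricky part:

```typescript
// Sketch of folding all queued optimistic updates into a single merged
// delta. The merge is recomputed only when the queue changes; applying
// it when the base snapshot changes is then a single pass rather than
// one replay per queued update. Types are illustrative.
type Delta = { [nodeId: string]: any };

function mergeDeltas(updates: Delta[]): Delta {
  const merged: Delta = {};
  for (const update of updates) {
    for (const nodeId of Object.keys(update)) {
      // Later updates win on a per-field basis within each node.
      merged[nodeId] = { ...(merged[nodeId] || {}), ...update[nodeId] };
    }
  }
  return merged;
}
```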


3. Verify that the query is satisfied by the cache. _The naive approach is to walk the selection set(s) expressed by the query; it's probably good enough for now_.

4. Return the query root, or view on top of it via (3).

Do you mean (2)?

@nevir nevir (Author)

Doh, yup.
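For reference, the naive walk from step (3) might look roughly like this; the Selection shape is a toy stand-in for the real GraphQL AST:

```typescript
// Check that every field in a query's selection set is present in the
// cached data. Leaves only need to exist; nested selections recurse.
interface Selection {
  name: string;
  children?: Selection[];
}

function isSatisfied(data: any, selections: Selection[]): boolean {
  if (data === undefined || data === null) return false;
  return selections.every((selection) => {
    const value = data[selection.name];
    if (value === undefined) return false;
    return selection.children ? isSatisfied(value, selection.children) : true;
  });
}
```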


4. Return the query root, or view on top of it via (3).

Generally, when reading, we want to return whatever data we have, as well as a status indicating whether the query was completely satisfying. The caller can determine what to do if not satisfied.

satisfying -> satisfied?

@nevir nevir (Author)

We want our queries to feel like:

_(reaction gif)_
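The read behavior described in the thread above (return whatever data is present, plus a completeness status, and let the caller decide what to do) might be sketched as; a required-field list stands in here for a real selection-set check:

```typescript
// A read result carries partial data alongside a flag saying whether
// the query was completely satisfied by the cache.
interface QueryResult {
  result: { [field: string]: any } | undefined;
  complete: boolean;
}

function readQuery(
  cached: { [field: string]: any } | undefined,
  requiredFields: string[],
): QueryResult {
  const complete =
    cached !== undefined && requiredFields.every((f) => cached[f] !== undefined);
  return { result: cached, complete };
}
```

A caller seeing `complete: false` can render the partial data and kick off a network fetch for the rest.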

@nevir nevir merged commit 23a0b2b into master Aug 3, 2017
@nevir nevir deleted the nevir/doc-architecture branch August 3, 2017 17:52