
optimisticResponse is changing all immutable objects on query #4141

Closed
maxguzenski opened this issue Nov 14, 2018 · 18 comments

Comments

@maxguzenski

maxguzenski commented Nov 14, 2018

I have a watched query and I use optimisticResponse in a mutation to update only one object from this query:

    return client.mutate({
      mutation: LIKE_MUTATION,
      variables: {id},
      optimisticResponse: {__typename: 'Mutation', userLike: {id, __typename: 'User', liked: true}}
    })
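
For context, the watched query is roughly something like this (simplified; the field names are approximate):

    // Hypothetical shape of the watched list query (simplified):
    import gql from 'graphql-tag'

    const USERS_QUERY = gql`
      query Users {
        users {
          id
          liked
        }
      }
    `
    // client.watchQuery({query: USERS_QUERY}) drives the list whose items
    // the LIKE_MUTATION above optimistically updates.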

When I use the mutation with optimisticResponse like that, my whole pure list updates (all items in my watched query change). When I remove optimisticResponse, only the changed item in the list is updated.

Is this expected?

@seanconnollydev

seanconnollydev commented Nov 19, 2018

Seeing the exact same behavior. It defeats the purpose of optimistic UI in the first place, because re-rendering all items can crush performance.

@barbalex

Don't know if this is related, but when I use optimisticResponse the cache (InMemoryCache) grows every time a mutation is done.

When using a whole lot of queries it grows so much that after about 15 updates Chrome crashes...

@SnidelyWhiplash

SnidelyWhiplash commented Dec 8, 2018

@barbalex Possibly related to #4210 ?

@Grmiade

Grmiade commented Jan 9, 2019

Any news about this issue? Seeing the exact same behavior :/

@nikonhub

+1 same problem here.

@maxguzenski
Author

Please, someone?

@klaussner

klaussner commented Mar 8, 2020

I made a reproduction (https://codesandbox.io/s/apollo-client-issue-4141-lxib0) and found out what causes the issue.

The reproduction renders a list of two people using a Person component that is wrapped with React.memo. Since the Apollo cache should return the same objects after a mutation if the data didn't change, re-rendering can be avoided by comparing the person objects with ===. The "Update name" button runs a mutation that changes the first person's name.
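
Roughly, the memoized component looks like this (the exact code is in the sandbox; the prop shape here is approximate):

    import React from 'react'

    // isPersonEqual compares the `person` prop by reference, so a re-render
    // only happens when the cache hands back a new object for that person.
    const Person = React.memo(
      function Person({person}) {
        console.log(`rendering person (ID ${person.id})`)
        return <li>{person.name}</li>
      },
      function isPersonEqual(prevProps, nextProps) {
        const equal = prevProps.person === nextProps.person
        console.log(`isPersonEqual (ID ${nextProps.person.id}): ${equal}`)
        return equal // true => skip the re-render
      }
    )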

If no optimisticResponse is used for the mutation, the isPersonEqual function returns false for the first person and true for the second person, as expected:

rendering person (ID 1)
rendering person (ID 2)

> "Update name" button clicked
> Mutation result received

isPersonEqual (ID 1): false
rendering person (ID 1)
isPersonEqual (ID 2): true

However, if an optimisticResponse is used (uncomment lines 46–52), isPersonEqual always returns false, even if the person didn't change, which can make performance optimizations with React.memo and shouldComponentUpdate a little difficult.

rendering person (ID 1)
rendering person (ID 2)

> "Update name" button clicked
> Optimistic update applied

isPersonEqual (ID 1): false
rendering person (ID 1)
isPersonEqual (ID 2): false  <- should not be false
rendering person (ID 2)

> Mutation result received

isPersonEqual (ID 1): false
rendering person (ID 1)
isPersonEqual (ID 2): false
rendering person (ID 2)

I think the problem is that optimistic updates use their own CacheGroup, which is why previously created objects aren't reused when results are retrieved from the cache. Since #5648, all optimistic updates share a single CacheGroup, but the cache for the root EntityStore is still separate. The two CacheGroups are created here:

    super(policies, new CacheGroup(resultCaching));
    this.sharedLayerGroup = new CacheGroup(resultCaching);

I changed the second line to this.sharedLayerGroup = this.group; to make Root and Layers share one CacheGroup, which seemed to resolve the problem, but I don't know whether that has other, unintended consequences.
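
In other words, the experiment was just:

    super(policies, new CacheGroup(resultCaching));
    this.sharedLayerGroup = this.group; // Root and Layers now share one CacheGroup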

@ablbol

ablbol commented Sep 25, 2020

Any update on this?

@benjamn
Member

benjamn commented Nov 24, 2020

After a lot of thinking over the past few months, I believe InMemoryCache is uniquely equipped to improve upon the current promise of "=== on a best-effort basis, if you play your cards just right" by making a much stronger guarantee: If any two cache result objects are deeply equal to each other, they will also be ===, no matter how they were computed, and no matter where they appear in the result trees for different queries.

The general idea is to perform canonicalization (sometimes called internalization) as a post-processing step, which happens to be a feature of the Record and Tuple proposal TC39 is currently considering, which means this trick has a good chance of becoming natively supported in future versions of JavaScript. In essence, every cache result object would be a Record, and any arrays within the result tree would be represented as Tuples. Of course this means result objects have to be immutable, which they already are (#5153).

Canonicalization can be performed in O(|result|) time, but the InMemoryCache result caching system allows substantial parts (subtrees) of that work to be memoized/amortized. This is the same system that currently provides === for repeated query reads most of the time (#3394), and every case where result objects were already === translates to a fast path for canonicalization, which is why I say InMemoryCache is uniquely equipped to implement this idea. External code that attempted to implement the same post-processing logic would have to start from scratch every time, whereas InMemoryCache can cache unchanged result subtrees, so it can avoid redoing deep canonicalization work. That said, the cost of canonicalization is an implementation detail, because the results are always the same.
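
To illustrate the idea only (this is not the actual implementation, and the names below are made up), a naive canonicalization pool might look something like this, without any of the subtree memoization described above:

    // Children are admitted first, and a parent is keyed by its children's
    // canonical identities, so deeply equal trees collapse to a single frozen
    // (===) object. A real implementation would reuse unchanged subtrees and
    // be able to drop the pool to reclaim memory.
    class CanonicalPool {
      constructor() {
        this.pool = new Map() // canonical key -> canonical object
        this.ids = new Map()  // canonical object -> stable numeric id
        this.nextId = 1
      }

      idOf(value) {
        if (value !== null && typeof value === "object") {
          if (!this.ids.has(value)) this.ids.set(value, this.nextId++)
          return "#" + this.ids.get(value)
        }
        return JSON.stringify(value) // primitives key by value
      }

      admit(value) {
        if (value === null || typeof value !== "object") return value

        if (Array.isArray(value)) {
          const children = value.map(v => this.admit(v))
          const key = "[" + children.map(v => this.idOf(v)).join(",") + "]"
          if (!this.pool.has(key)) this.pool.set(key, Object.freeze(children))
          return this.pool.get(key)
        }

        const keys = Object.keys(value).sort()
        const canon = {}
        for (const k of keys) canon[k] = this.admit(value[k])
        const key = "{" + keys.map(k => JSON.stringify(k) + ":" + this.idOf(canon[k])).join(",") + "}"
        if (!this.pool.has(key)) this.pool.set(key, Object.freeze(canon))
        return this.pool.get(key)
      }
    }

    // Two deeply equal results become referentially identical:
    const pool = new CanonicalPool()
    const a = pool.admit({__typename: "Person", id: "1", name: "Anna"})
    const b = pool.admit({__typename: "Person", id: "1", name: "Anna"})
    console.log(a === b) // true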

Avoiding memory leaks in a system like this is a challenge, but one I understand pretty well by now. Ultimately, this trick should also limit the total amount of memory Apollo Client needs to retain, since deeply equal subtrees share a single canonical representation. Finally, while canonical objects are valuable and worth retaining, they are not irreplaceable—the cache could jettison its canonical object pool at any time to reclaim that memory, at the cost of losing the === guarantee for past results.

This is definitely too big of a change for Apollo Client v3.3, which we're hoping to finalize today, but I think we can get this into a v3.4 beta release soon!

@benjamn benjamn self-assigned this Nov 24, 2020
@stephenh

@benjamn amazing! Thanks for the detailed update! Sounds really promising.

@darkbasic

> but I think we can get this into a v3.4 beta release soon

Awesome!

@benjamn
Member

benjamn commented Dec 16, 2020

My implementation of #4141 (comment) in #7439 is now available for testing in @apollo/client@3.4.0-beta.4 (or later). Please give it a try when you have the chance!

@klaussner

This change reduced the rendering time of a large component in my app by about 50%. Thanks, @benjamn! 😊

@ghost

ghost commented Dec 22, 2020

Same for us, we noticed a large improvement in performance with this change 🎉

Thank you @benjamn!

@stephenh

stephenh commented Dec 22, 2020

This is awesome, @benjamn; I scanned the PR out of curiosity and really like how elegant it looks.

I had a naive question: I was a bit surprised that most of the canonicalization changes were on the readFromStore side of things. I expected to see a few more changes to the writeQuery path.

With the disclaimer that I really know very little about the client's internal implementation details, my concern specifically for the optimistic-response case is that our flow (for this bug) was (or will be) something like:

  1. we make a GQL query for a large DAG
  2. we update a single leaf in the DAG with an optimistic response
  3. something in the optimisticResponse write path causes "jitter" such that the whole DAG looks changed / all leaf identities are changed (i.e. that's what this bug is about, both our changed leaf + all other leaves had new identities)
  4. now (with your fix) when we read from the cache, canon.admit kicks in and re-establishes the canonical identity of every leaf

That's great, insofar as we get back stable identities for all leaves now, but my worry is the extra work canon.admit has to do by re-scanning all (say) 100 leaves, without being able to leverage the fact that 99 of them should already be in its weak set but aren't (due to the whole-DAG identity change in step 3).

Basically is the core issue here that optimistic cache updates are still (?) deeply changing identities when they don't really need to?

And yeah, fixing that back up in readFromStore is great (and faster than paying a React re-render penalty), but it could be even faster/cheaper for canon.admit to do its job if the write path (maybe specifically for optimisticResponse, or just in general? although the "in general" case seemed to work before) stopped unnecessarily jittering every leaf.

I also admit:

a) this is an internal/perf implementation detail, because the bug is no longer observable, which is great,
b) the canon.admit-on-read approach also makes sense as a very generalized fix for other classes of bugs, so I'm not questioning it at all (other than musing: should it be done on write? would that be cheaper?),
c) I am very much guessing at the internal implementation details behind "optimisticResponse jittered the whole DAG"; that's just a theory based on our originally observed bug, and
d) canon.admit may also be fast enough that my theorized "unnecessary jitter in optimistic response" really isn't a big deal, even for cases like thousands of nodes in a query. I.e. we don't have big data floating around the client side (...although that assumption frequently gets me into trouble with React re-renders).

Anyway, feel free to just scan/sanity-check this and not think too deeply about it; I mainly wanted to type it out for my own understanding. Thanks for the great work!

@benjamn
Member

benjamn commented Jan 13, 2021

@stephenh I owe you a longer answer, but I agree #7439 hides the referential identity churn without really addressing the underlying reasons for that churn.

Specifically, because of the CacheGroup separation, optimistic cache reads during optimistic updates currently have no opportunity to reuse non-optimistic result objects. Effectively, there are two separate result caches (one per CacheGroup), which makes some aspects of the problem simpler, but definitely creates extra work for canon.admit behind the scenes.

Even with #7439 in place, I think it makes sense to keep working on this issue from the other direction. After a couple days of investigation, I think I have a solution that involves removing most of the CacheGroup logic, which should improve cache performance (hit rate) for optimistic reads. PR coming soon!

@stephenh

> I owe you a longer answer

Not at all!

Your description of an optimistic write creating its own separate-ish cache makes a lot of sense, in terms of needing a non-canonical staging area for its data, and why this would (pre-#7439) cause identity changes. And it's plenty sufficient for my "just a user building a mental model" purposes.

Great to hear you've got a plan for optimizing specifically the optimistic read/write side of things; really impressive, thanks!

@hwillson
Member

This was resolved by #7439 and #7887. Thanks!

@hwillson hwillson removed this from the MM-2021-06 milestone Jul 29, 2021
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 15, 2023