Disable feud-stopping logic after any cache eviction. #6817

Merged · 2 commits · Aug 10, 2020
Changes from 1 commit
src/core/QueryInfo.ts (63 additions, 16 deletions)
@@ -10,6 +10,7 @@ import {
   ObservableSubscription,
   isNonEmptyArray,
   graphQLResultHasError,
+  canUseWeakMap,
 } from '../utilities';
 import {
   NetworkStatus,
@@ -24,6 +25,10 @@ export type QueryStoreValue = Pick<QueryInfo,
   | "graphQLErrors"
 >;
 
+const cacheEvictCounts = new (
+  canUseWeakMap ? WeakMap : Map
+)<ApolloCache<any>, number>();
+
 // A QueryInfo object represents a single query managed by the
 // QueryManager, which tracks all QueryInfo objects by queryId in its
 // this.queries Map. QueryInfo objects store the latest results and errors
@@ -46,7 +51,28 @@ export class QueryInfo {
   networkError?: Error | null;
   graphQLErrors?: ReadonlyArray<GraphQLError>;
 
-  constructor(private cache: ApolloCache<any>) {}
+  constructor(private cache: ApolloCache<any>) {
+    // Track how often cache.evict is called, since we want eviction to
+    // override the feud-stopping logic in the markResult method, by
+    // causing shouldWrite to return true. Wrapping the cache.evict method
+    // is a bit of a hack, but it saves us from having to make eviction
+    // counting an official part of the ApolloCache API.
+    if (!cacheEvictCounts.has(cache) && cache.evict) {
+      cacheEvictCounts.set(cache, 0);
+      const originalEvict = cache.evict;
+      cache.evict = function evict() {
+        cacheEvictCounts.set(
+          cache,
+          // The %1e15 allows the count to wrap around to 0 safely every
+          // quadrillion evictions, so there's no risk of overflow. To be
+          // clear, this is more of a pedantic principle than something
+          // that matters in any conceivable practical scenario.
+          (cacheEvictCounts.get(cache)! + 1) % 1e15,
+        );
+        return originalEvict.apply(this, arguments);
+      };
+    }
+  }
 
   public init(query: {
     document: DocumentNode;
@@ -204,8 +230,27 @@
     }
   }
 
-  private lastWrittenResult?: FetchResult<any>;
-  private lastWrittenVars?: WatchQueryOptions["variables"];
+  private lastWrite?: {
+    result: FetchResult<any>;
+    variables: WatchQueryOptions["variables"];
+    evictCount: number | undefined;
+  };
+
+  private shouldWrite(
+    result: FetchResult<any>,
+    variables: WatchQueryOptions["variables"],
+  ) {
+    const { lastWrite } = this;
+    return !(
+      lastWrite &&
+      // If cache.evict has been called since the last time we wrote this
+      // data into the cache, there's a chance writing this result into
+      // the cache will repair what was evicted.
+      lastWrite.evictCount === cacheEvictCounts.get(this.cache) &&
+      equal(variables, lastWrite.variables) &&
+      equal(result.data, lastWrite.result.data)
+    );
+  }
 
   public markResult<T>(
     result: FetchResult<T>,
@@ -235,9 +280,19 @@
         // of writeQuery, so we can store the new diff quietly and ignore
         // it when we receive it redundantly from the watch callback.
         this.cache.performTransaction(cache => {
-          if (this.lastWrittenResult &&
-              equal(result.data, this.lastWrittenResult.data) &&
-              equal(options.variables, this.lastWrittenVars)) {
+          if (this.shouldWrite(result, options.variables)) {
+            cache.writeQuery({
+              query: this.document!,
+              data: result.data as T,
+              variables: options.variables,
+            });
+
+            this.lastWrite = {
+              result,
+              variables: options.variables,
+              evictCount: cacheEvictCounts.get(this.cache),
+            };
Comment on lines +290 to +294

@benjamn (Member, Author) commented on Aug 10, 2020:

If we wanted to throw away this.lastWrite after a certain time (thus setting a limit on how long the feud-stopping logic matters), that would also be pretty easy to implement:

setTimeout(() => {
  this.lastWrite = void 0;
}, 10000);

This would mean: if more than 10 seconds have passed, any new data from the network gets written into the cache unconditionally, even if it's the same as the previously-written data.
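
A minimal standalone sketch of that expiry idea, outside the real QueryInfo class (the class shape, field names, and the 10-second default here are illustrative only, not part of this diff):

import { equal } from "@wry/equality";

// Hypothetical, simplified stand-in for QueryInfo, only to show how an
// expiring lastWrite would interact with the deduplication check.
class ExpiringLastWrite<TData, TVars> {
  private lastWrite?: { data: TData; variables: TVars };
  private expiryTimer?: ReturnType<typeof setTimeout>;

  recordWrite(data: TData, variables: TVars, ttlMs = 10000) {
    this.lastWrite = { data, variables };
    // Clear any previous timer so an older timeout cannot wipe a newer write.
    if (this.expiryTimer) clearTimeout(this.expiryTimer);
    this.expiryTimer = setTimeout(() => {
      // After ttlMs, forget the last write: the next network result is
      // written into the cache unconditionally, even if it is deep-equal.
      this.lastWrite = void 0;
    }, ttlMs);
  }

  shouldWrite(data: TData, variables: TVars) {
    return !(
      this.lastWrite &&
      equal(variables, this.lastWrite.variables) &&
      equal(data, this.lastWrite.data)
    );
  }
}

The clearTimeout guard is not part of the comment above; it only keeps a stale timer from discarding a more recent lastWrite.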

+          } else {
             // If result is the same as the last result we received from
             // the network (and the variables match too), avoid writing
             // result into the cache again. The wisdom of skipping this
@@ -278,14 +333,6 @@
             }
             // If the previous this.diff was incomplete, fall through to
             // re-reading the latest data with cache.diff, below.
-          } else {
-            cache.writeQuery({
-              query: this.document!,
-              data: result.data as T,
-              variables: options.variables,
-            });
-            this.lastWrittenResult = result;
-            this.lastWrittenVars = options.variables;
           }
 
           const diff = cache.diff<T>({
@@ -311,7 +358,7 @@
         });
 
       } else {
-        this.lastWrittenResult = this.lastWrittenVars = void 0;
+        this.lastWrite = void 0;
       }
     }
   }
@@ -323,7 +370,7 @@
 
   public markError(error: ApolloError) {
     this.networkStatus = NetworkStatus.error;
-    this.lastWrittenResult = this.lastWrittenVars = void 0;
+    this.lastWrite = void 0;
 
     if (error.graphQLErrors) {
       this.graphQLErrors = error.graphQLErrors;
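
For a rough, hypothetical illustration of the behavior this change enables (the endpoint, query document, and cache id below are made up; the fetch policy just forces a second network request):

import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

const client = new ApolloClient({
  uri: "https://example.com/graphql", // hypothetical endpoint
  cache: new InMemoryCache(),
});

const USER_QUERY = gql`
  query GetUser {
    user {
      id
      name
    }
  }
`;

async function demo() {
  // The first network result is written into the cache as usual, and
  // recorded as the QueryInfo's lastWrite.
  await client.query({ query: USER_QUERY });

  // Evict part of that data. The wrapped cache.evict bumps the eviction
  // count tracked for this cache.
  client.cache.evict({ id: "User:1", fieldName: "name" });

  // Refetch over the network. Even if the server returns data deep-equal to
  // the previous result, shouldWrite now returns true (the eviction count no
  // longer matches lastWrite.evictCount), so the result is written back into
  // the cache and repairs the evicted field.
  await client.query({ query: USER_QUERY, fetchPolicy: "network-only" });
}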