Allow custom merge functions (including merge:true and merge:false) in type policies. #7070
Conversation
Thanks @benjamn, I think this is exactly what I need. Note that I intend to use this feature to implement pagination, which I believe is a fairly common use case, so you might want to expand on the note about also accepting a `merge` function, as this is a legitimate use case. I.e., I intend to write:

```ts
new InMemoryCache({
  typePolicies: {
    Connection: {
      merge: paginationMergeFn,
    },
  },
})
```

and also:

```ts
new InMemoryCache({
  typePolicies: {
    JSONObject: {
      merge: false,
    },
  },
})
```
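For context, a type-level pagination merge function of the sort described here might look roughly like the following (a minimal sketch for a Relay-style connection object; `paginationMergeFn` is illustrative, not the commenter's actual function):

```ts
// Appends incoming edges onto the edges already in the cache and keeps the
// latest pageInfo, so successive pages accumulate into a single list.
function paginationMergeFn(
  existing: { edges: unknown[] } | undefined,
  incoming: { edges: unknown[] },
) {
  if (!existing) return incoming;
  return {
    ...incoming,
    edges: [...existing.edges, ...incoming.edges],
  };
}
```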
@vigie That should work, as long as your …
It's defined as a scalar, so I don't think a … Treating all scalars (custom or otherwise) as opaque makes sense and is what I would want, but it's not what I observe in this case, so maybe this is a separate issue. I opened #7071 to track it.
Force-pushed from b585166 to 9ede85b.
Instead of recursively searching for FieldValueToBeMerged wrapper objects anywhere in the incoming data, processSelectionSet and processFieldValue can build a sparse tree specifying just the paths of fields that need to be merged, and then applyMerges can use that tree to traverse only the parts of the data where merge functions need to be called. These changes effectively revert #5880, since the idea of giving merge functions a chance to transform their child data before calling nested merge functions no longer makes as much sense. Instead, applyMerges will be recursively called on the child data before parent merge functions run, the way it used to be (before #5880).
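To make the sparse-tree idea concrete, here is a sketch of what such a tree might hold for a result whose `author` field (and its nested `books` field) have merge functions; the `MergeTree` shape is inferred from the helpers reviewed below, and the field names are illustrative:

```ts
// Only paths leading to a field with a merge function appear in the tree, so
// applyMerges can skip every other part of the incoming data.
interface MergeTree {
  info?: unknown; // the pending merge (function + context) for this exact path
  map: Map<string | number, MergeTree>; // child paths that still need merging
}

const books: MergeTree = { info: "merge fn for author.books", map: new Map() };
const author: MergeTree = { info: "merge fn for author", map: new Map([["books", books]]) };
const root: MergeTree = { map: new Map([["author", author]]) };
```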
We know what these functions do, so we can detect them by reference and then just do what they would have done, without creating a complete options object or actually calling mergeTrueFn or mergeFalseFn.
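Roughly what that by-reference shortcut means, as a hedged, self-contained sketch (the helper names and bodies here are simplified stand-ins, not Apollo's actual internals):

```ts
type MergeFn = (existing: unknown, incoming: unknown, options?: object) => unknown;

// Stand-ins for the helpers that merge: true and merge: false translate into.
const mergeTrueFn: MergeFn = (existing, incoming) =>
  ({ ...(existing as object), ...(incoming as object) });
const mergeFalseFn: MergeFn = (_existing, incoming) => incoming;

// Because the two helpers are known by identity, their effect can be applied
// inline, without building the full options object a custom merge function needs.
function applyMerge(mergeFn: MergeFn, existing: unknown, incoming: unknown): unknown {
  if (mergeFn === mergeFalseFn) {
    // merge: false keeps only the incoming data.
    return incoming;
  }
  if (mergeFn === mergeTrueFn) {
    // merge: true combines existing and incoming data.
    return existing ? { ...(existing as object), ...(incoming as object) } : incoming;
  }
  // Only genuinely custom merge functions pay for the full options object.
  return mergeFn(existing, incoming, { /* field function options built here */ });
}
```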
The optimism update allows passing an arguments object to KeyTrie#lookupArray, so we don't have to convert arguments to an array before performing KeyTrie lookups.
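The relevant optimism API, sketched from the version referenced here (the constructor argument and generic parameter are recalled details, so treat them as assumptions):

```ts
import { KeyTrie } from "optimism";

// A KeyTrie maps a sequence of keys to one canonical object, so equal key
// sequences always produce the identical (===) result.
const keyTrie = new KeyTrie<object>(true /* hold object keys weakly */);

function lookupKey() {
  // Previously the arguments object had to be copied into a real array first;
  // with the updated optimism, lookupArray accepts array-like values directly.
  return keyTrie.lookupArray(arguments);
}
```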
This warning was introduced in #6372.
Force-pushed from 9ede85b to 3d3af86.
```ts
const emptyMergeTreePool: MergeTree[] = [];

function getChildMergeTree(
  { map }: MergeTree,
  name: string | number,
): MergeTree {
  if (!map.has(name)) {
    map.set(name, emptyMergeTreePool.pop() || { map: new Map });
  }
  return map.get(name)!;
}

function maybeRecycleChildMergeTree(
  { map }: MergeTree,
  name: string | number,
) {
  const childTree = map.get(name);
  if (childTree &&
      !childTree.info &&
      !childTree.map.size) {
    emptyMergeTreePool.push(childTree);
    map.delete(name);
  }
}
```
This recycling pattern makes up for the fact that most `MergeTree` objects never have any fields added to `tree.map`, and would otherwise be thrown away empty. With recycling, we create only as many `MergeTree` objects as we need, without relying on garbage collection to handle the waste.
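A small usage illustration of the recycling flow, building on the snippet above (and assuming the `MergeTree` shape is `{ info?, map }`):

```ts
const parent: MergeTree = { map: new Map() };

// Looking up a child either reuses a pooled empty MergeTree or allocates one.
const child = getChildMergeTree(parent, "author");

// ...if nothing ever gets recorded in child.info or child.map...

// ...the child is detached from parent and pushed back into the pool, ready to
// be handed out by the next getChildMergeTree call instead of becoming garbage.
maybeRecycleChildMergeTree(parent, "author");
```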
Awesome functionality add, @benjamn! I have tested …
Thanks for this addition @benjamn, it's cleaned up our cache configuration significantly. We have one remaining issue, related to the use case @vigie mentions about pagination: we have many types that conform to the Relay cursor-based pagination spec, and ideally we'd map those to a single type. The following setup doesn't work, I assume because it isn't possible to set the `read` function by type instead of by field. It's a bit more work for us to figure out every field in our schema that resolves to one of these types and set the field policy for each. Is it possible to lift more of the field policy configuration to the type policy level?

```ts
new InMemoryCache({
  possibleTypes: {
    // "Connection" isn't a real type, it is a made-up supertype that includes all the Relay-style pagination
    // types in our schema. These objects inherit the type policy for Connection set below (that is: they
    // use the relayStylePagination utility from Apollo).
    Connection: ["ProductsConnection", "UsersConnection", "FoobarConnection", ...]
  },
  // See notes above.
  typePolicies: {
    Connection: {
      // Use the "merge" function from the relayStylePagination helper for Connection objects.
      merge: relayStylePagination().merge
    }
  },
});
```
@cecchi The above setup is working perfectly for me; the only difference is that I am using a custom `merge` function as opposed to the one supplied by the helpers. Is the problem specific to the merge function you are using, or are you finding the function is not being called at all in certain cases? If it is helpful I can paste my custom function here.
@vigie I think it's because, in the case of Relay pagination, the merge logic alone is not enough: https://github.com/apollographql/apollo-client/blob/main/src/utilities/policies/pagination.ts#L91-L249 That field policy consists of a `read` function in addition to the `merge` function. To be honest, I'm not very familiar with the logic defined there, but I know that it does work as desired when I replace the type policy I have above with individual field policies:

```ts
new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        searchProducts: relayStylePagination(),
        searchUsers: relayStylePagination(),
        searchFoobars: relayStylePagination()
      }
    }
  },
});
```

... except in practice we have far more of these, so the example from my previous comment would make things more ergonomic.
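For reference, the helper returns a whole field policy rather than just a `merge` function, which is why reusing only its `merge` at the type level can lose behavior (a quick check, assuming the usual `@apollo/client/utilities` export):

```ts
import { relayStylePagination } from "@apollo/client/utilities";

// relayStylePagination() pairs a merge function (which accumulates edges in the
// cache) with a read function (which assembles the connection view queries see).
const policy = relayStylePagination();
console.log(typeof policy.merge); // "function"
console.log(typeof policy.read);  // "function"
```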
If you've ever dealt with the warnings introduced by #6372, you either reconfigured the object type to be normalized, or you had to go find a bunch of fields that might hold an object of the given type and then configure a `merge` function (or `merge: true`) for those fields.

While normalization is still the preferred strategy in most cases, and `merge: true` (introduced by #6714) has made the second option a little easier, it's still not as easy as it could be to say that a specific object type can be safely merged.

In practice, the intuition about merge safety often concerns only the type of the object, and doesn't really have anything to do with the parent field that holds the object. To handle this common case, this PR allows placing a single `merge` configuration in a type policy, rather than configuring the same `merge` function/boolean in multiple field policies.

In other words, instead of repeating the same `merge` configuration in a field policy for every field that can hold the object, you can now write it once in the object's type policy, as in the sketches below.
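A sketch of the two kinds of configuration being compared, using the `Author` and `Book.author` names from this description; the extra `Publisher.owner` field is invented for illustration:

```ts
import { InMemoryCache } from "@apollo/client";

// Field-policy style: merge: true must be repeated for every field that can
// hold an Author object (Publisher.owner is a hypothetical second example).
const fieldLevel = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        author: {
          merge: true,
        },
      },
    },
    Publisher: {
      fields: {
        owner: {
          merge: true,
        },
      },
    },
  },
});

// Type-policy style (this PR): one merge: true in the Author type policy
// covers every field that holds an Author object.
const typeLevel = new InMemoryCache({
  typePolicies: {
    Author: {
      merge: true,
    },
  },
});
```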
These configurations have exactly the same semantics, but the second is shorter and arguably easier to maintain, especially when `Author` objects can appear in lots of different fields (not just `Book.author`).

Note that mergeable objects will only be merged with existing objects occupying the same field of the same parent object, and only when the `__typename` of the objects is the same. If you really need to work around these rules, you can write a custom `merge` function to do whatever you want, but `merge: true` follows these rules.

This feature really begins to shine when paired with inheritance (#7065):
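A sketch of that pairing, under stated assumptions: `Publication` and `Magazine` are invented names used only to illustrate type-policy inheritance via `possibleTypes`, and `Book` uses the `keyFields: ["isbn"]` mentioned just below:

```ts
import { InMemoryCache } from "@apollo/client";

new InMemoryCache({
  possibleTypes: {
    // An invented supertype: listing subtypes here lets them inherit the
    // Publication type policy below (#7065).
    Publication: ["Book", "Magazine"],
  },
  typePolicies: {
    Publication: {
      // Inherited by Book and Magazine: objects of these types may be merged
      // when they occupy the same field of the same parent object.
      merge: true,
    },
    Book: {
      // Book objects are normalized by ISBN, which already implies
      // mergeability, so the inherited merge: true is redundant (but harmless).
      keyFields: ["isbn"],
    },
  },
});
```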
Since `Book` has `keyFields: ["isbn"]`, `Book` objects will be normalized, which implies mergeability (whenever the ISBNs match), so configuring `merge: true` for the `Book` type is redundant (but harmless).

This PR should help with #6808 and #6949, and was inspired by discussion with @vigie and @tomprats: #6949 (comment)