docs/framework/react/guides/caching.md (1 addition, 1 deletion)
@@ -19,7 +19,7 @@ Let's assume we are using the default `gcTime` of **5 minutes** and the default
- A new instance of `useQuery({ queryKey: ['todos'], queryFn: fetchTodos })` mounts.
- Since no other queries have been made with the `['todos']` query key, this query will show a hard loading state and make a network request to fetch the data.
- When the network request has completed, the returned data will be cached under the `['todos']` key.
- - The hook will mark the data as stale after the configured `staleTime` (defaults to `0`, or immediately).
+ - The data will be marked as stale after the configured `staleTime` (defaults to `0`, or immediately).
- A second instance of `useQuery({ queryKey: ['todos'], queryFn: fetchTodos })` mounts elsewhere.
- Since the cache already has data for the `['todos']` key from the first query, that data is immediately returned from the cache.
- The new instance triggers a new network request using its query function.
docs/framework/react/guides/important-defaults.md (5 additions, 1 deletion)
@@ -32,9 +32,13 @@ Out of the box, TanStack Query is configured with **aggressive but sane** defaul
> To change this, you can alter the default `retry` and `retryDelay` options for queries to something other than `3` and the default exponential backoff function.
+ [//]: # 'StructuralSharing'
+
- Query results by default are **structurally shared to detect if data has actually changed** and if not, **the data reference remains unchanged** to better help with value stabilization with regard to `useMemo` and `useCallback`. If this concept sounds foreign, don't worry about it! 99.9% of the time you will not need to disable this, and it makes your app more performant at zero cost to you.
- > Structural sharing only works with JSON-compatible values, any other value types will always be considered as changed. If you are seeing performance issues because of large responses for example, you can disable this feature with the `config.structuralSharing` flag. If you are dealing with non-JSON compatible values in your query responses and still want to detect if data has changed or not, you can provide your own custom function as `config.structuralSharing` to compute a value from the old and new responses, retaining references as required.
+ [//]: # 'StructuralSharing'
+
+ > Structural sharing only works with JSON-compatible values, any other value types will always be considered as changed. If you are seeing performance issues because of large responses for example, you can disable this feature with the `config.structuralSharing` flag. If you are dealing with non-JSON compatible values in your query responses and still want to detect if data has changed or not, you can provide your own custom function as `config.structuralSharing` to compute a value from the old and new responses, retaining references as required.
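A minimal sketch of such a custom function, assuming a hypothetical response shape that carries a cheap `version` change marker (the comparison logic is illustrative, not part of the library):

```typescript
// Hypothetical response shape — anything with a cheap change marker works
type Snapshot = { version: number; payload: Uint8Array }

// Pass as the `structuralSharing` option: return the old reference when
// nothing changed, so downstream useMemo/useCallback stay stable
function shareByVersion(oldData: unknown, newData: unknown): unknown {
  const prev = oldData as Snapshot | undefined
  const next = newData as Snapshot
  return prev !== undefined && prev.version === next.version ? prev : next
}
```

You would then pass `shareByVersion` as the `structuralSharing` option in the query options.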
If you persist offline mutations with the [persistQueryClient plugin](../plugins/persistQueryClient.md), mutations cannot be resumed when the page is reloaded unless you provide a default mutation function.
+ [//]: # 'PersistOfflineIntro'
+
This is a technical limitation. When persisting to an external storage, only the state of mutations is persisted, as functions cannot be serialized. After hydration, the component that triggers the mutation might not be mounted, so calling `resumePausedMutations` might yield an error: `No mutationFn found`.
[//]: # 'Example11'
@@ -385,9 +388,12 @@ export default function App() {
```
[//]: # 'Example11'
+ [//]: # 'OfflineExampleLink'
We also have an extensive [offline example](../examples/offline) that covers both queries and mutations.
+ [//]: # 'OfflineExampleLink'
+
## Mutation Scopes
By default, all mutations run in parallel - even if you invoke `.mutate()` of the same mutation multiple times. To avoid that, mutations can be given a `scope` with an `id`. All mutations with the same `scope.id` will run in serial: when triggered while another mutation in that scope is in progress, they start in an `isPaused: true` state, are placed in a queue, and automatically resume once their turn in the queue has come.
docs/framework/react/guides/parallel-queries.md (2 additions, 0 deletions)
@@ -35,9 +35,11 @@ function App () {
If the number of queries you need to execute is changing from render to render, you cannot use manual querying since that would violate the rules of hooks. Instead, TanStack Query provides a `useQueries` hook, which you can use to dynamically execute as many queries in parallel as you'd like.
[//]: # 'DynamicParallelIntro'
+ [//]: # 'DynamicParallelDescription'
`useQueries` accepts an **options object** with a **queries key** whose value is an **array of query objects**. It returns an **array of query results**:
docs/framework/react/guides/queries.md (4 additions, 0 deletions)
@@ -7,8 +7,12 @@ title: Queries
A query is a declarative dependency on an asynchronous source of data that is tied to a **unique key**. A query can be used with any Promise based method (including GET and POST methods) to fetch data from a server. If your method modifies data on the server, we recommend using [Mutations](./mutations.md) instead.
+ [//]: # 'SubscribeDescription'
+
To subscribe to a query in your components or custom hooks, call the `useQuery` hook with at least:
For Infinite Queries, a separate [`infiniteQueryOptions`](../reference/infiniteQueryOptions.md) helper is available.
+ [//]: # 'SelectDescription'
+
You can still override some options at the component level. A very common and useful pattern is to create per-component [`select`](./render-optimizations.md#select) functions:
docs/framework/react/guides/request-waterfalls.md (42 additions, 0 deletions)
@@ -11,8 +11,12 @@ The [Prefetching & Router Integration guide](./prefetching.md) builds on this an
The [Server Rendering & Hydration guide](./ssr.md) teaches you how to prefetch data on the server and pass that data down to the client so you don't have to fetch it again.
+ [//]: # 'AdvancedSSRLink'
+
The [Advanced Server Rendering guide](./advanced-ssr.md) further teaches you how to apply these patterns to Server Components and Streaming Server Rendering.
+ [//]: # 'AdvancedSSRLink'
+
## What is a Request Waterfall?
A request waterfall is what happens when a request for a resource (code, css, images, data) does not start until _after_ another request for a resource has finished.
@@ -67,6 +71,8 @@ With this as a basis, let's look at a few different patterns that can lead to Re
When a single component first fetches one query, and then another, that's a request waterfall. This can happen when the second query is a [Dependent Query](./dependent-queries.md), that is, it depends on data from the first query when fetching:
+ [//]: # 'DependentExample'
+
```tsx
// Get the user
const { data: user } = useQuery({
@@ -89,10 +95,17 @@ const {
})
```
+ [//]: # 'DependentExample'
+
While not always feasible, for optimal performance it's better to restructure your API so you can fetch both of these in a single query. In the example above, instead of first fetching `getUserByEmail` to be able to `getProjectsByUser`, introducing a new `getProjectsByUserEmail` query would flatten the waterfall.
+ [//]: # 'ServerComponentsNote1'
+
> Another way to mitigate dependent queries without restructuring your API is to move the waterfall to the server where latency is lower. This is the idea behind Server Components which are covered in the [Advanced Server Rendering guide](./advanced-ssr.md).
+ [//]: # 'ServerComponentsNote1'
+
+ [//]: # 'SuspenseSerial'
+
Another example of serial queries is when you use React Query with Suspense:
Nested Component Waterfalls happen when both a parent and a child component contain queries, and the parent does not render the child until its query is done. This can happen both with `useQuery` and `useSuspenseQuery`.
+ [//]: # 'NestedIntro'
+
If the child renders conditionally based on the data in the parent, or if the child relies on some part of the result being passed down as a prop from the parent to make its query, we have a _dependent_ nested component waterfall.
Let's first look at an example where the child is **not** dependent on the parent.
Note that while `<Comments>` takes a prop `id` from the parent, that id is already available when the `<Article>` renders so there is no reason we could not fetch the comments at the same time as the article. In real world applications, the child might be nested far below the parent and these kinds of waterfalls are often trickier to spot and fix, but for our example, one way to flatten the waterfall would be to hoist the comments query to the parent instead:
The two queries will now fetch in parallel. Note that if you are using suspense, you'd want to combine these two queries into a single `useSuspenseQueries` instead.
+ [//]: # 'NestedHoistedOutro'
+
Another way to flatten this waterfall would be to prefetch the comments in the `<Article>` component, or prefetch both of these queries at the router level on page load or page navigation, read more about this in the [Prefetching & Router Integration guide](./prefetching.md).
Next, let's look at a _Dependent Nested Component Waterfall_.
+ [//]: # 'DependentNestedExample'
+
```tsx
function Feed() {
const { data, isPending } = useQuery({
@@ -233,15 +265,21 @@ function GraphFeedItem({ feedItem }) {
}
```
+ [//]: # 'DependentNestedExample'
+
The second query `getGraphDataById` is dependent on its parent in two different ways. First, it never runs unless the `feedItem` is a graph, and second, it needs an `id` from the parent.
```
1. |> getFeed()
2. |> getGraphDataById()
```
+ [//]: # 'ServerComponentsNote2'
+
In this example, we can't trivially flatten the waterfall by just hoisting the query to the parent, or even adding prefetching. Just like the dependent query example at the beginning of this guide, one option is to refactor our API to include the graph data in the `getFeed` query. Another more advanced solution is to leverage Server Components to move the waterfall to the server where latency is lower (read more about this in the [Advanced Server Rendering guide](./advanced-ssr.md)) but note that this can be a very big architectural change.
+ [//]: # 'ServerComponentsNote2'
+
You can have good performance even with a few query waterfalls here and there; just know that they are a common performance concern and be mindful of them. An especially insidious version is when Code Splitting is involved, so let's take a look at that next.
### Code Splitting
@@ -323,13 +361,17 @@ In the code split case, it might actually help to hoist the `getGraphDataById` q
This is very much a tradeoff however. You are now including the data fetching code for `getGraphDataById` in the same bundle as `<Feed>`, so evaluate what is best for your case. Read more about how to do this in the [Prefetching & Router Integration guide](./prefetching.md).
+ [//]: # 'ServerComponentsNote3'
+
> The tradeoff between:
>
> - Include all data fetching code in the main bundle, even if we seldom use it
> - Put the data fetching code in the code split bundle, but with a request waterfall
>
> is not great and has been one of the motivations for Server Components. With Server Components, it's possible to avoid both, read more about how this applies to React Query in the [Advanced Server Rendering guide](./advanced-ssr.md).
+ [//]: # 'ServerComponentsNote3'
+
## Summary and takeaways
Request Waterfalls are a very common and complex performance concern with many tradeoffs. There are many ways to accidentally introduce them into your application: