WIP: React optimization #89823
timneutkens wants to merge 12 commits into feedthejim/node-stream-05-ci
Conversation
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
This stack of pull requests is managed by Graphite.
Failing test suites (Commit: 5382259 | About building and testing Next.js)
● runtime prefetching › passed to a public cache › can completely prefetch a page that uses cookies and no uncached IO
Stats from current PR
✅ No significant changes detected
📦 Bundle Sizes
⚡ Turbopack
Client Main Bundles: **437 kB** → **437 kB**
| | Canary | PR | Change |
|---|---|---|---|
| middleware-b..fest.js gzip | 758 B | 756 B | ✓ |
| Total | 758 B | 756 B | ✅ -2 B |
Build Details
Build Manifests
| | Canary | PR | Change |
|---|---|---|---|
| _buildManifest.js gzip | 451 B | 449 B | ✓ |
| Total | 451 B | 449 B | ✅ -2 B |
📦 Webpack
Client
Main Bundles
| | Canary | PR | Change |
|---|---|---|---|
| 5528-HASH.js gzip | 5.47 kB | N/A | - |
| 6280-HASH.js gzip | 57 kB | N/A | - |
| 6335.HASH.js gzip | 169 B | N/A | - |
| 912-HASH.js gzip | 4.53 kB | N/A | - |
| e8aec2e4-HASH.js gzip | 62.5 kB | N/A | - |
| framework-HASH.js gzip | 59.7 kB | 59.7 kB | ✓ |
| main-app-HASH.js gzip | 254 B | 254 B | ✓ |
| main-HASH.js gzip | 39.1 kB | 39.1 kB | ✓ |
| webpack-HASH.js gzip | 1.68 kB | 1.68 kB | ✓ |
| 262-HASH.js gzip | N/A | 4.53 kB | - |
| 2889.HASH.js gzip | N/A | 169 B | - |
| 5602-HASH.js gzip | N/A | 5.49 kB | - |
| 6948ada0-HASH.js gzip | N/A | 62.5 kB | - |
| 9544-HASH.js gzip | N/A | 57.7 kB | - |
| Total | 230 kB | 231 kB |
Polyfills
| | Canary | PR | Change |
|---|---|---|---|
| polyfills-HASH.js gzip | 39.4 kB | 39.4 kB | ✓ |
| Total | 39.4 kB | 39.4 kB | ✓ |
Pages
| | Canary | PR | Change |
|---|---|---|---|
| _app-HASH.js gzip | 194 B | 194 B | ✓ |
| _error-HASH.js gzip | 183 B | 180 B | 🟢 -3 B (-2%) |
| css-HASH.js gzip | 331 B | 330 B | ✓ |
| dynamic-HASH.js gzip | 1.81 kB | 1.81 kB | ✓ |
| edge-ssr-HASH.js gzip | 256 B | 256 B | ✓ |
| head-HASH.js gzip | 351 B | 352 B | ✓ |
| hooks-HASH.js gzip | 384 B | 383 B | ✓ |
| image-HASH.js gzip | 580 B | 581 B | ✓ |
| index-HASH.js gzip | 260 B | 260 B | ✓ |
| link-HASH.js gzip | 2.49 kB | 2.49 kB | ✓ |
| routerDirect..HASH.js gzip | 320 B | 319 B | ✓ |
| script-HASH.js gzip | 386 B | 386 B | ✓ |
| withRouter-HASH.js gzip | 315 B | 315 B | ✓ |
| 1afbb74e6ecf..834.css gzip | 106 B | 106 B | ✓ |
| Total | 7.97 kB | 7.97 kB | ✅ -1 B |
Server
Edge SSR
| | Canary | PR | Change |
|---|---|---|---|
| edge-ssr.js gzip | 126 kB | 126 kB | ✓ |
| page.js gzip | 249 kB | 250 kB | ✓ |
| Total | 375 kB | 376 kB |
Middleware
| | Canary | PR | Change |
|---|---|---|---|
| middleware-b..fest.js gzip | 614 B | 615 B | ✓ |
| middleware-r..fest.js gzip | 156 B | 155 B | ✓ |
| middleware.js gzip | 32.9 kB | 33.2 kB | 🔴 +332 B (+1%) |
| edge-runtime..pack.js gzip | 842 B | 842 B | ✓ |
| Total | 34.5 kB | 34.8 kB |
Build Details
Build Manifests
| | Canary | PR | Change |
|---|---|---|---|
| _buildManifest.js gzip | 733 B | 735 B | ✓ |
| Total | 733 B | 735 B |
Build Cache
| | Canary | PR | Change |
|---|---|---|---|
| 0.pack gzip | 3.84 MB | 3.85 MB | 🔴 +7.7 kB (+0%) |
| index.pack gzip | 104 kB | 104 kB | ✓ |
| index.pack.old gzip | 103 kB | 103 kB | ✓ |
| Total | 4.05 MB | 4.06 MB |
🔄 Shared (bundler-independent)
Runtimes
| | Canary | PR | Change |
|---|---|---|---|
| app-page-exp...dev.js gzip | 316 kB | 316 kB | ✓ |
| app-page-exp..prod.js gzip | 168 kB | 168 kB | ✓ |
| app-page-tur...dev.js gzip | 315 kB | 315 kB | ✓ |
| app-page-tur..prod.js gzip | 167 kB | 167 kB | ✓ |
| app-page-tur...dev.js gzip | 312 kB | 312 kB | ✓ |
| app-page-tur..prod.js gzip | 166 kB | 166 kB | ✓ |
| app-page.run...dev.js gzip | 312 kB | 312 kB | ✓ |
| app-page.run..prod.js gzip | 166 kB | 166 kB | ✓ |
| app-route-ex...dev.js gzip | 70.5 kB | 70.5 kB | ✓ |
| app-route-ex..prod.js gzip | 49 kB | 49 kB | ✓ |
| app-route-tu...dev.js gzip | 70.5 kB | 70.5 kB | ✓ |
| app-route-tu..prod.js gzip | 49 kB | 49 kB | ✓ |
| app-route-tu...dev.js gzip | 70.1 kB | 70.1 kB | ✓ |
| app-route-tu..prod.js gzip | 48.8 kB | 48.8 kB | ✓ |
| app-route.ru...dev.js gzip | 70.1 kB | 70.1 kB | ✓ |
| app-route.ru..prod.js gzip | 48.8 kB | 48.8 kB | ✓ |
| dist_client_...dev.js gzip | 324 B | 324 B | ✓ |
| dist_client_...dev.js gzip | 326 B | 326 B | ✓ |
| dist_client_...dev.js gzip | 318 B | 318 B | ✓ |
| dist_client_...dev.js gzip | 317 B | 317 B | ✓ |
| pages-api-tu...dev.js gzip | 43.2 kB | 43.2 kB | ✓ |
| pages-api-tu..prod.js gzip | 32.9 kB | 32.9 kB | ✓ |
| pages-api.ru...dev.js gzip | 43.2 kB | 43.2 kB | ✓ |
| pages-api.ru..prod.js gzip | 32.8 kB | 32.8 kB | ✓ |
| pages-turbo....dev.js gzip | 52.5 kB | 52.5 kB | ✓ |
| pages-turbo...prod.js gzip | 39.4 kB | 39.4 kB | ✓ |
| pages.runtim...dev.js gzip | 52.5 kB | 52.5 kB | ✓ |
| pages.runtim..prod.js gzip | 39.4 kB | 39.4 kB | ✓ |
| server.runti..prod.js gzip | 62.7 kB | 62.7 kB | ✓ |
| Total | 2.8 MB | 2.8 MB |
📝 Changed Files (1 file)
Files with changes: app-page-tur..time.prod.js (diff too large to display)
With nested Suspense (before/after screenshots): still 30% better.
With more nesting (before/after screenshots): still 15% better.
Merging this PR will not alter performance.
…le-time switchable modules

Extract stream operations and debug channel code from app-render.tsx into separate modules that can be swapped at compile time:
- stream-ops.web.ts: web stream implementations (renderToFlightStream, renderToFizzStream, continueFizzStream, etc.)
- stream-ops.ts: re-exports from .web.ts (future: conditional branch)
- debug-channel-server.web.ts: web debug channel implementation
- debug-channel-server.ts: re-exports from .web.ts (future: conditional branch)
- node-web-streams-helper.ts: add createRuntimePrefetchTransformStream

Pure code motion with no behavior changes. Web paths produce identical output.
…ig flag

Add the building blocks for native Node.js stream rendering:
- Config flag: experimental.useNodeStreams with __NEXT_USE_NODE_STREAMS env var
- Node stream primitives: node-stream-helpers, pipeable-stream-wrappers, node-stream-tee, pipe-readable, chain-node-streams
- Stream ops: stream-ops.node.ts with compile-time switcher in stream-ops.ts
- Debug channel: debug-channel-server.node.ts with 3-way conditional switcher
- Build: taskfile.js bundle tasks, webpack config routing, module.compiled.js
- Flight response: createInlinedDataNodeStream for Node Transform encoding
- Entry base: conditional exports for renderToPipeableStream/prerenderToNodeStream
- Tests: pipe-readable, flight response node stream, env precedence e2e
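The interaction between the config flag and the env var could be resolved along these lines (a hypothetical sketch, not code from the PR: the helper name `shouldUseNodeStreams` is invented, and the precedence order is assumed from the commit's mention of "env precedence e2e" tests):

```javascript
// Hypothetical helper (not from the PR): resolves whether the node-stream
// path is enabled. The __NEXT_USE_NODE_STREAMS env var is assumed to take
// precedence over the experimental.useNodeStreams config flag when set.
function shouldUseNodeStreams(config) {
  const env = process.env.__NEXT_USE_NODE_STREAMS
  if (env !== undefined) {
    // Env var wins when set, regardless of the config flag.
    return env === 'true'
  }
  return Boolean(config.experimental && config.experimental.useNodeStreams)
}
```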
Integrate the node stream building blocks (from prior PR) into the actual render paths, all gated behind experimental.useNodeStreams: - app-render.tsx: add useNodeStreams branching throughout render paths, teeDebugChannelForSsrAndBrowser and debugChannelClientForBrowser helpers - app-render-prerender-utils.ts: ReactServerResult accepts NodeReadable, createReactServerPrerenderResultFromPrerender dispatcher - instant-validation.tsx: node stream branches for validation renders - render-result.ts: accept Node Readable as response body with piping - flight-render-result.ts: accept Node Readable in constructor - app-page.ts: PPR resume path for node streams - stream-ops.ts/debug-channel-server.ts: widen types to include Readable
Adds a CI job that re-runs the app-dir e2e suite with __NEXT_USE_NODE_STREAMS=true to verify the node stream path. Includes a test manifest that selects the relevant test globs.
…yload (#35776)

## Summary

Follow-up to vercel/next.js#89823 with the actual changes to React. Replaces the `JSON.parse` reviver callback in `initializeModelChunk` with a two-step approach: plain `JSON.parse()` followed by a recursive `reviveModel()` post-process (same as in Flight Reply Server). This yields a **~75% speedup** in RSC chunk deserialization.

| Payload | Original (ms) | Walk (ms) | Speedup |
|---------|---------------|-----------|---------|
| Small (2 elements, 142B) | 0.0024 | 0.0007 | **+72%** |
| Medium (~12 elements, 914B) | 0.0116 | 0.0031 | **+73%** |
| Large (~90 elements, 16.7KB) | 0.1836 | 0.0451 | **+75%** |
| XL (~200 elements, 25.7KB) | 0.3742 | 0.0913 | **+76%** |
| Table (1000 rows, 110KB) | 3.0862 | 0.6887 | **+78%** |

## Problem

`createFromJSONCallback` returns a reviver function passed as the second argument to `JSON.parse()`. This reviver is called for **every key-value pair** in the parsed JSON. While the logic inside the reviver is lightweight, the dominant cost is the **C++ → JavaScript boundary crossing**: V8's `JSON.parse` is implemented in C++, and calling back into JavaScript for every node incurs significant overhead.

Even a trivial no-op reviver `(k, v) => v` makes `JSON.parse` **~4x slower** than bare `JSON.parse` without a reviver:

```
108 KB payload:
  Bare JSON.parse: 0.60 ms
  Trivial reviver: 2.95 ms (+391%)
```

## Change

Replace the reviver with a two-step process:

1. `JSON.parse(resolvedModel)`: parse the entire payload in C++ with no callbacks
2. `reviveModel`: recursively walk the resulting object in pure JavaScript to apply RSC transformations

The `reviveModel` function includes additional optimizations over the original reviver:

- **Short-circuits plain strings**: only calls `parseModelString` when the string starts with `$`, skipping the vast majority of strings (class names, text content, etc.)
- **Stays entirely in JavaScript**: no C++ boundary crossings during the walk

## Results

You can find the related applications in the [Next.js PR](vercel/next.js#89823). I've been testing this on Next.js applications.

### Table as Server Component with 1000 items

Before:
```
"min": 13.782875000000786,
"max": 22.23400000000038,
"avg": 17.116868530000083,
"p50": 17.10766700000022,
"p75": 18.50787499999933,
"p95": 20.426249999998618,
"p99": 21.814125000000786
```

After:
```
"min": 10.963916999999128,
"max": 18.096083000000363,
"avg": 13.543286884999988,
"p50": 13.58350000000064,
"p75": 14.871791999999914,
"p95": 16.08429099999921,
"p99": 17.591458000000785
```

### Table as Client Component with 1000 items

Before:
```
"min": 3.888875000000553,
"max": 9.044959000000745,
"avg": 4.651271475000067,
"p50": 4.555749999999534,
"p75": 4.966624999999112,
"p95": 5.47754200000054,
"p99": 6.109499999998661
```

After:
```
"min": 3.5986250000005384,
"max": 5.374291000000085,
"avg": 4.142990245000046,
"p50": 4.10570799999914,
"p75": 4.392041999999492,
"p95": 4.740084000000934,
"p99": 5.1652500000000146
```

### Nested Suspense

Before:
```
Requests: 200
Min: 73ms
Max: 106ms
Avg: 78ms
P50: 77ms
P75: 80ms
P95: 85ms
P99: 94ms
```

After:
```
Requests: 200
Min: 56ms
Max: 67ms
Avg: 59ms
P50: 58ms
P75: 60ms
P95: 65ms
P99: 66ms
```

### Even more nested Suspense (double-level Suspense)

Before:
```
Requests: 200
Min: 159ms
Max: 208ms
Avg: 169ms
P50: 167ms
P75: 173ms
P95: 183ms
P99: 188ms
```

After:
```
Requests: 200
Min: 125ms
Max: 170ms
Avg: 134ms
P50: 132ms
P75: 138ms
P95: 148ms
P99: 160ms
```

## How did you test this change?

Ran it across many Next.js benchmark applications. The entire Next.js test suite passes with this change.

---

Co-authored-by: Hendrik Liebau <mail@hendrik-liebau.de>
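The two-step approach described above can be sketched as follows. This is a simplified illustration, not React's actual `reviveModel`: the `transformSpecialString` callback stands in for `parseModelString`, and the real walk additionally handles React element construction and lazy references.

```javascript
// Step 2 of the two-step approach: a pure-JS recursive walk over the output
// of a bare JSON.parse. Only strings starting with '$' are transformed;
// everything else passes through untouched (the short-circuit optimization).
function reviveModelSketch(value, transformSpecialString) {
  if (typeof value === 'string') {
    return value.charCodeAt(0) === 36 /* '$' */
      ? transformSpecialString(value)
      : value
  }
  if (Array.isArray(value)) {
    for (let i = 0; i < value.length; i++) {
      value[i] = reviveModelSketch(value[i], transformSpecialString)
    }
    return value
  }
  if (value !== null && typeof value === 'object') {
    for (const key in value) {
      value[key] = reviveModelSketch(value[key], transformSpecialString)
    }
    return value
  }
  return value // numbers, booleans, null pass through
}

// Step 1: parse entirely in C++ with no callbacks; step 2: walk in pure JS.
const parsed = JSON.parse('["$","div",null,{"children":"$L1"}]')
const revived = reviveModelSketch(parsed, (s) => ({ ref: s }))
```

Because the walk never re-enters C++, the per-node cost is a plain JavaScript call that V8 can inline, rather than a boundary crossing.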
React PR has been merged and added to Next.js in #90211
PR description:

Optimize RSC client deserialization by replacing `JSON.parse` reviver with post-process tree walk

Summary

Replaces the `JSON.parse` reviver callback in `initializeModelChunk` with a two-step approach: plain `JSON.parse()` followed by a recursive `walkParsedJSON()` post-process. This yields a ~75% speedup in RSC chunk deserialization.

Problem
`createFromJSONCallback` returns a reviver function passed as the second argument to `JSON.parse()`. This reviver is called for every key-value pair in the parsed JSON. While the logic inside the reviver is lightweight, the dominant cost is the C++ → JavaScript boundary crossing: V8's `JSON.parse` is implemented in C++, and calling back into JavaScript for every node incurs significant overhead.

Even a trivial no-op reviver `(k, v) => v` makes `JSON.parse` ~4x slower than bare `JSON.parse` without a reviver. This overhead is paid on every RSC chunk during SSR.
Solution
Replace the reviver with a two-step process:
1. `JSON.parse(resolvedModel)`: parse the entire payload in C++ with no callbacks
2. `walkParsedJSON(response, parsed, null, "")`: recursively walk the resulting object in pure JavaScript to apply RSC transformations (resolving `$`-prefixed special strings, constructing React elements, handling the `initializingHandler` protocol)

The `walkParsedJSON` function includes additional optimizations over the original reviver:

- Short-circuits plain strings: only calls `parseModelString` when the string starts with `$`, skipping the vast majority of strings (class names, text content, etc.)
- Recognizes React element tuples (`["$", type, key, props]`) upfront and processes each field with knowledge of its expected type, avoiding generic traversal

Benchmarks
Benchmarks load both the original and modified production files via `require()` in the same V8 context and directly call `initializeModelChunk` with realistic RSC payloads. All correctness checks pass: element counts and chunk status are identical between both approaches.
Why this matters
`initializeModelChunk` is called for every RSC chunk during SSR. On pages with many server components (e.g., data tables, feeds, dashboards), hundreds of chunks may be deserialized.