WIP: React optimization#89823

Closed
timneutkens wants to merge 12 commits into feedthejim/node-stream-05-ci from 02-11-wip_react_optimization

Conversation

@timneutkens
Member

@timneutkens timneutkens commented Feb 11, 2026

⚠️ I've verified that the changes yield a ~30% speedup on a table benchmark (a client component table rendering 1000 elements), and the changes look good to me, but they need to be verified by the React team.

PR description:

Optimize RSC client deserialization by replacing JSON.parse reviver with post-process tree walk

Summary

Replaces the JSON.parse reviver callback in initializeModelChunk with a two-step approach: plain JSON.parse() followed by a recursive walkParsedJSON() post-process. This yields a ~75% speedup in RSC chunk deserialization.

Problem

createFromJSONCallback returns a reviver function passed as the second argument to JSON.parse(). This reviver is called for every key-value pair in the parsed JSON. While the logic inside the reviver is lightweight, the dominant cost is the C++ → JavaScript boundary crossing — V8's JSON.parse is implemented in C++, and calling back into JavaScript for every node incurs significant overhead.

Even a trivial no-op reviver (k, v) => v makes JSON.parse ~4x slower than bare JSON.parse without a reviver:

108 KB payload:
  Bare JSON.parse:    0.60 ms
  Trivial reviver:    2.95 ms  (+391%)

This overhead is paid on every RSC chunk during SSR.

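The overhead is easy to reproduce with a throwaway micro-benchmark. This sketch is illustrative only — the payload shape and iteration counts are made up, not the ones used for the numbers above:

```javascript
// Illustrative micro-benchmark: JSON.parse with vs. without a no-op reviver.
// Payload shape and iteration counts are arbitrary, chosen only to make the
// per-node callback overhead visible.
const payload = JSON.stringify(
  Array.from({ length: 2000 }, (_, i) => ({ id: i, name: `row-${i}` }))
);

function timeMs(fn, iterations = 100) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  // Average per-iteration time in milliseconds.
  return Number(process.hrtime.bigint() - start) / 1e6 / iterations;
}

const bareMs = timeMs(() => JSON.parse(payload));
// The no-op reviver forces a C++ → JavaScript call for every key-value pair.
const reviverMs = timeMs(() => JSON.parse(payload, (k, v) => v));

console.log(`bare: ${bareMs.toFixed(3)} ms, reviver: ${reviverMs.toFixed(3)} ms`);
```

Both calls produce identical parse results; only the per-node callback differs.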

Solution

Replace the reviver with a two-step process:

  1. JSON.parse(resolvedModel) — parse the entire payload in C++ with no callbacks
  2. walkParsedJSON(response, parsed, null, "") — recursively walk the resulting object in pure JavaScript to apply RSC transformations (resolving $-prefixed special strings, constructing React elements, handling initializingHandler protocol)

The walkParsedJSON function includes additional optimizations over the original reviver:

  • Short-circuits plain strings: only calls parseModelString when the string starts with $, skipping the vast majority of strings (class names, text content, etc.)
  • Recognizes React element arrays (["$", type, key, props]) upfront and processes each field with knowledge of its expected type, avoiding generic traversal
  • Stays entirely in JavaScript — no C++ boundary crossings during the walk
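As a rough illustration of the shape of such a walk — this is not the actual walkParsedJSON from the PR (which also threads response/chunk state and the initializingHandler protocol); parseSpecialString and the handled markers here are placeholders:

```javascript
// Sketch only — NOT React's implementation. parseSpecialString is a stand-in
// for the real "$"-marker decoder (refs, dates, bigints, etc.).
function parseSpecialString(str) {
  // "$$…" escapes a literal string that happens to start with "$".
  if (str.startsWith('$$')) return str.slice(1);
  return str; // Other markers would be decoded here.
}

function walkParsed(node) {
  if (typeof node === 'string') {
    // Short-circuit: most strings (class names, text content) don't start
    // with "$", so they skip the special-string parser entirely.
    return node.charCodeAt(0) === 36 /* "$" */ ? parseSpecialString(node) : node;
  }
  if (Array.isArray(node)) {
    // Recognize React element tuples ["$", type, key, props] upfront and
    // process each field with knowledge of its expected type.
    if (node.length === 4 && node[0] === '$') {
      return {
        $$typeof: Symbol.for('react.element'),
        type: walkParsed(node[1]),
        key: node[2],
        props: walkParsed(node[3]),
      };
    }
    for (let i = 0; i < node.length; i++) node[i] = walkParsed(node[i]);
    return node;
  }
  if (node !== null && typeof node === 'object') {
    for (const key in node) node[key] = walkParsed(node[key]);
    return node;
  }
  return node;
}

// Example: a parsed RSC payload fragment.
const el = walkParsed(
  JSON.parse('["$","div",null,{"className":"row","children":"$$100"}]')
);
```

The whole walk stays in JavaScript, so no C++ boundary is crossed after the initial JSON.parse.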

Benchmarks

Benchmarks load both the original and modified production files via require() in the same V8 context and directly call initializeModelChunk with realistic RSC payloads:

Payload                        Original (ms)   Walk (ms)   Speedup
Small (2 elements, 142B)           0.0024        0.0007      +72%
Medium (~12 elements, 914B)        0.0116        0.0031      +73%
Large (~90 elements, 16.7KB)       0.1836        0.0451      +75%
XL (~200 elements, 25.7KB)         0.3742        0.0913      +76%
Table (1000 rows, 110KB)           3.0862        0.6887      +78%

All correctness checks pass — element counts and chunk status are identical between both approaches.

Why this matters

initializeModelChunk is called for every RSC chunk during SSR. On pages with many server components (e.g., data tables, feeds, dashboards), hundreds of chunks may be deserialized.

Member Author

timneutkens commented Feb 11, 2026

Warning

This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@nextjs-bot
Collaborator

nextjs-bot commented Feb 11, 2026

Failing test suites

Commit: 5382259 | About building and testing Next.js

pnpm test-start test/e2e/app-dir/segment-cache/prefetch-runtime/prefetch-runtime.test.ts (job)

  • runtime prefetching > passed to a public cache > can completely prefetch a page that uses cookies and no uncached IO (DD)

● runtime prefetching › passed to a public cache › can completely prefetch a page that uses cookies and no uncached IO

apiRequestContext.fetch: read ECONNRESET
Call log:
  - → GET http://localhost:34049/passed-to-public-cache/cookies-only?_rsc=gtqjj
  -   user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/130.0.6723.31 Safari/537.36
  -   accept: */*
  -   accept-encoding: gzip,deflate,br
  -   cookie: testCookie=initialValue
  -   next-test-fetch-priority: low
  -   referer: http://localhost:34049/
  -   next-router-prefetch: 1
  -   next-router-segment-prefetch: /!KGRlZmF1bHQp/passed-to-public-cache
  -   next-url: /
  -   rsc: 1
  -   sec-ch-ua: "Chromium";v="130", "HeadlessChrome";v="130", "Not?A_Brand";v="99"
  -   sec-ch-ua-mobile: ?0
  -   sec-ch-ua-platform: "Linux"

  225 |             // server; we pass the request to the server the immediately.
  226 |             result: (async () => {
> 227 |               const originalResponse = await page.request.fetch(request, {
      |                                                           ^
  228 |                 maxRedirects: 0,
  229 |               })
  230 |

  at fetch (lib/router-act.ts:227:59)
  at lib/router-act.ts:245:13
  at routeHandler (lib/router-act.ts:257:7)


@nextjs-bot
Collaborator

nextjs-bot commented Feb 11, 2026

Stats from current PR

✅ No significant changes detected

📊 All Metrics
📖 Metrics Glossary

Dev Server Metrics:

  • Listen = TCP port starts accepting connections
  • First Request = HTTP server returns successful response
  • Cold = Fresh build (no cache)
  • Warm = With cached build artifacts

Build Metrics:

  • Fresh = Clean build (no .next directory)
  • Cached = With existing .next directory

Change Thresholds:

  • Time: Changes < 50ms AND < 10%, OR < 2% are insignificant
  • Size: Changes < 1KB AND < 1% are insignificant
  • All other changes are flagged to catch regressions

⚡ Dev Server

Metric                 Canary   PR       Trend
Cold (Listen)          455ms    455ms    ▁▁█▁▁
Cold (Ready in log)    438ms    438ms    ▅▅█▅▆
Cold (First Request)   1.179s   1.148s   ██▇██
Warm (Listen)          456ms    456ms    ▁▁█▁▁
Warm (Ready in log)    441ms    446ms    ▁▁█▁▁
Warm (First Request)   334ms    336ms    ▂▂█▁▄
📦 Dev Server (Webpack) (Legacy)

Metric                 Canary   PR       Trend
Cold (Listen)          455ms    456ms    ▁▁▁▁█
Cold (Ready in log)    440ms    440ms    ▁▃▂▁▇
Cold (First Request)   1.863s   1.854s   ▁▁▁▁▇
Warm (Listen)          456ms    456ms    ▁▁▁▁█
Warm (Ready in log)    439ms    439ms    ▂▂▁▁█
Warm (First Request)   1.840s   1.855s   ▁▁▁▁█

⚡ Production Builds

Metric         Canary   PR       Trend
Fresh Build    3.945s   3.843s   ▁▁█▁▁
Cached Build   3.868s   3.821s   ▁▁█▁▁
📦 Production Builds (Webpack) (Legacy)

Metric              Canary    PR        Trend
Fresh Build         13.915s   13.945s   ▁▁▁▁▇
Cached Build        14.031s   14.017s   ▁▁▁▁▇
node_modules Size   467 MB    467 MB    █████
📦 Bundle Sizes

⚡ Turbopack

Client

Main Bundles: **437 kB** → **437 kB** ⚠️ +10 B

81 files with content-based hashes (individual files not comparable between builds)

Server

Middleware
Canary PR Change
middleware-b..fest.js gzip 758 B 756 B
Total 758 B 756 B ✅ -2 B
Build Details
Build Manifests
Canary PR Change
_buildManifest.js gzip 451 B 449 B
Total 451 B 449 B ✅ -2 B

📦 Webpack

Client

Main Bundles
Canary PR Change
5528-HASH.js gzip 5.47 kB N/A -
6280-HASH.js gzip 57 kB N/A -
6335.HASH.js gzip 169 B N/A -
912-HASH.js gzip 4.53 kB N/A -
e8aec2e4-HASH.js gzip 62.5 kB N/A -
framework-HASH.js gzip 59.7 kB 59.7 kB
main-app-HASH.js gzip 254 B 254 B
main-HASH.js gzip 39.1 kB 39.1 kB
webpack-HASH.js gzip 1.68 kB 1.68 kB
262-HASH.js gzip N/A 4.53 kB -
2889.HASH.js gzip N/A 169 B -
5602-HASH.js gzip N/A 5.49 kB -
6948ada0-HASH.js gzip N/A 62.5 kB -
9544-HASH.js gzip N/A 57.7 kB -
Total 230 kB 231 kB ⚠️ +646 B
Polyfills
Canary PR Change
polyfills-HASH.js gzip 39.4 kB 39.4 kB
Total 39.4 kB 39.4 kB
Pages
Canary PR Change
_app-HASH.js gzip 194 B 194 B
_error-HASH.js gzip 183 B 180 B 🟢 3 B (-2%)
css-HASH.js gzip 331 B 330 B
dynamic-HASH.js gzip 1.81 kB 1.81 kB
edge-ssr-HASH.js gzip 256 B 256 B
head-HASH.js gzip 351 B 352 B
hooks-HASH.js gzip 384 B 383 B
image-HASH.js gzip 580 B 581 B
index-HASH.js gzip 260 B 260 B
link-HASH.js gzip 2.49 kB 2.49 kB
routerDirect..HASH.js gzip 320 B 319 B
script-HASH.js gzip 386 B 386 B
withRouter-HASH.js gzip 315 B 315 B
1afbb74e6ecf..834.css gzip 106 B 106 B
Total 7.97 kB 7.97 kB ✅ -1 B

Server

Edge SSR
Canary PR Change
edge-ssr.js gzip 126 kB 126 kB
page.js gzip 249 kB 250 kB
Total 375 kB 376 kB ⚠️ +422 B
Middleware
Canary PR Change
middleware-b..fest.js gzip 614 B 615 B
middleware-r..fest.js gzip 156 B 155 B
middleware.js gzip 32.9 kB 33.2 kB 🔴 +332 B (+1%)
edge-runtime..pack.js gzip 842 B 842 B
Total 34.5 kB 34.8 kB ⚠️ +332 B
Build Details
Build Manifests
Canary PR Change
_buildManifest.js gzip 733 B 735 B
Total 733 B 735 B ⚠️ +2 B
Build Cache
Canary PR Change
0.pack gzip 3.84 MB 3.85 MB 🔴 +7.7 kB (+0%)
index.pack gzip 104 kB 104 kB
index.pack.old gzip 103 kB 103 kB
Total 4.05 MB 4.06 MB ⚠️ +8.21 kB

🔄 Shared (bundler-independent)

Runtimes
Canary PR Change
app-page-exp...dev.js gzip 316 kB 316 kB
app-page-exp..prod.js gzip 168 kB 168 kB
app-page-tur...dev.js gzip 315 kB 315 kB
app-page-tur..prod.js gzip 167 kB 167 kB
app-page-tur...dev.js gzip 312 kB 312 kB
app-page-tur..prod.js gzip 166 kB 166 kB
app-page.run...dev.js gzip 312 kB 312 kB
app-page.run..prod.js gzip 166 kB 166 kB
app-route-ex...dev.js gzip 70.5 kB 70.5 kB
app-route-ex..prod.js gzip 49 kB 49 kB
app-route-tu...dev.js gzip 70.5 kB 70.5 kB
app-route-tu..prod.js gzip 49 kB 49 kB
app-route-tu...dev.js gzip 70.1 kB 70.1 kB
app-route-tu..prod.js gzip 48.8 kB 48.8 kB
app-route.ru...dev.js gzip 70.1 kB 70.1 kB
app-route.ru..prod.js gzip 48.8 kB 48.8 kB
dist_client_...dev.js gzip 324 B 324 B
dist_client_...dev.js gzip 326 B 326 B
dist_client_...dev.js gzip 318 B 318 B
dist_client_...dev.js gzip 317 B 317 B
pages-api-tu...dev.js gzip 43.2 kB 43.2 kB
pages-api-tu..prod.js gzip 32.9 kB 32.9 kB
pages-api.ru...dev.js gzip 43.2 kB 43.2 kB
pages-api.ru..prod.js gzip 32.8 kB 32.8 kB
pages-turbo....dev.js gzip 52.5 kB 52.5 kB
pages-turbo...prod.js gzip 39.4 kB 39.4 kB
pages.runtim...dev.js gzip 52.5 kB 52.5 kB
pages.runtim..prod.js gzip 39.4 kB 39.4 kB
server.runti..prod.js gzip 62.7 kB 62.7 kB
Total 2.8 MB 2.8 MB ⚠️ +332 B
📝 Changed Files (1 file)

Files with changes:

  • app-page-tur..time.prod.js
app-page-tur..time.prod.js

Diff too large to display

@timneutkens
Member Author

With nested Suspense:

Before:

  Requests:  200
  Min:       73ms
  Max:       106ms
  Avg:       78ms
  P50:       77ms
  P75:       80ms
  P95:       85ms
  P99:       94ms

After:

  Requests:  200
  Min:       56ms
  Max:       67ms
  Avg:       59ms
  P50:       58ms
  P75:       60ms
  P95:       65ms
  P99:       66ms

Still 30% better

@timneutkens
Member Author

More nesting

Before:

  Requests:  200
  Min:       159ms
  Max:       208ms
  Avg:       169ms
  P50:       167ms
  P75:       173ms
  P95:       183ms
  P99:       188ms

After:

  Requests:  200
  Min:       125ms
  Max:       170ms
  Avg:       134ms
  P50:       132ms
  P75:       138ms
  P95:       148ms
  P99:       160ms

Still 15% better

@codspeed-hq

codspeed-hq bot commented Feb 13, 2026

Merging this PR will not alter performance

✅ 17 untouched benchmarks
⏩ 3 skipped benchmarks [1]


Comparing 02-11-wip_react_optimization (6f562cf) with canary (fe39a3c) [2]

Open in CodSpeed

Footnotes

  1. 3 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

  2. No successful run was found on feedthejim/node-stream-05-ci (8a37cdb) during the generation of this report, so canary (fe39a3c) was used instead as the comparison base. There might be some changes unrelated to this pull request in this report.

…le-time switchable modules

Extract stream operations and debug channel code from app-render.tsx into
separate modules that can be swapped at compile time:

- stream-ops.web.ts: web stream implementations (renderToFlightStream,
  renderToFizzStream, continueFizzStream, etc.)
- stream-ops.ts: re-exports from .web.ts (future: conditional branch)
- debug-channel-server.web.ts: web debug channel implementation
- debug-channel-server.ts: re-exports from .web.ts (future: conditional branch)
- node-web-streams-helper.ts: add createRuntimePrefetchTransformStream

Pure code motion with no behavior changes. Web paths produce identical output.
@timneutkens timneutkens force-pushed the 02-11-wip_react_optimization branch from 135c718 to c84eee6 on February 19, 2026 12:41
@timneutkens timneutkens force-pushed the feedthejim/node-stream-05-ci branch from 91148ca to 5d251eb on February 19, 2026 12:41
timneutkens and others added 11 commits February 19, 2026 13:54
…ig flag

Add the building blocks for native Node.js stream rendering:

- Config flag: experimental.useNodeStreams with __NEXT_USE_NODE_STREAMS env var
- Node stream primitives: node-stream-helpers, pipeable-stream-wrappers,
  node-stream-tee, pipe-readable, chain-node-streams
- Stream ops: stream-ops.node.ts with compile-time switcher in stream-ops.ts
- Debug channel: debug-channel-server.node.ts with 3-way conditional switcher
- Build: taskfile.js bundle tasks, webpack config routing, module.compiled.js
- Flight response: createInlinedDataNodeStream for Node Transform encoding
- Entry base: conditional exports for renderToPipeableStream/prerenderToNodeStream
- Tests: pipe-readable, flight response node stream, env precedence e2e
Integrate the node stream building blocks (from prior PR) into the
actual render paths, all gated behind experimental.useNodeStreams:

- app-render.tsx: add useNodeStreams branching throughout render paths,
  teeDebugChannelForSsrAndBrowser and debugChannelClientForBrowser helpers
- app-render-prerender-utils.ts: ReactServerResult accepts NodeReadable,
  createReactServerPrerenderResultFromPrerender dispatcher
- instant-validation.tsx: node stream branches for validation renders
- render-result.ts: accept Node Readable as response body with piping
- flight-render-result.ts: accept Node Readable in constructor
- app-page.ts: PPR resume path for node streams
- stream-ops.ts/debug-channel-server.ts: widen types to include Readable
Adds a CI job that re-runs the app-dir e2e suite with
__NEXT_USE_NODE_STREAMS=true to verify the node stream path.

Includes a test manifest that selects the relevant test globs.
@timneutkens timneutkens force-pushed the feedthejim/node-stream-05-ci branch from 5d251eb to 71029bd on February 19, 2026 13:49
@timneutkens timneutkens force-pushed the 02-11-wip_react_optimization branch from c84eee6 to 20fba0f on February 19, 2026 13:49
unstubbable added a commit to facebook/react that referenced this pull request Feb 19, 2026
…yload (#35776)

## Summary

Follow-up to vercel/next.js#89823 with the
actual changes to React.

Replaces the `JSON.parse` reviver callback in `initializeModelChunk`
with a two-step approach: plain `JSON.parse()` followed by a recursive
`reviveModel()` post-process (same as in Flight Reply Server). This
yields a **~75% speedup** in RSC chunk deserialization.

| Payload | Original (ms) | Walk (ms) | Speedup |
|---------|---------------|-----------|---------|
| Small (2 elements, 142B) | 0.0024 | 0.0007 | **+72%** |
| Medium (~12 elements, 914B) | 0.0116 | 0.0031 | **+73%** |
| Large (~90 elements, 16.7KB) | 0.1836 | 0.0451 | **+75%** |
| XL (~200 elements, 25.7KB) | 0.3742 | 0.0913 | **+76%** |
| Table (1000 rows, 110KB) | 3.0862 | 0.6887 | **+78%** |

## Problem

`createFromJSONCallback` returns a reviver function passed as the second
argument to `JSON.parse()`. This reviver is called for **every key-value
pair** in the parsed JSON. While the logic inside the reviver is
lightweight, the dominant cost is the **C++ → JavaScript boundary
crossing** — V8's `JSON.parse` is implemented in C++, and calling back
into JavaScript for every node incurs significant overhead.

Even a trivial no-op reviver `(k, v) => v` makes `JSON.parse` **~4x
slower** than bare `JSON.parse` without a reviver:

```
108 KB payload:
  Bare JSON.parse:    0.60 ms
  Trivial reviver:    2.95 ms  (+391%)
```

## Change

Replace the reviver with a two-step process:

1. `JSON.parse(resolvedModel)` — parse the entire payload in C++ with no
callbacks
2. `reviveModel` — recursively walk the resulting object in pure
JavaScript to apply RSC transformations

The `reviveModel` function includes additional optimizations over the
original reviver:
- **Short-circuits plain strings**: only calls `parseModelString` when
the string starts with `$`, skipping the vast majority of strings (class
names, text content, etc.)
- **Stays entirely in JavaScript** — no C++ boundary crossings during
the walk

## Results

You can find the related applications in the Next.js PR (vercel/next.js#89823). I've been testing this
on Next.js applications.

### Table as Server Component with 1000 items

Before:

```
    "min": 13.782875000000786,
    "max": 22.23400000000038,
    "avg": 17.116868530000083,
    "p50": 17.10766700000022,
    "p75": 18.50787499999933,
    "p95": 20.426249999998618,
    "p99": 21.814125000000786
```

After:

```
    "min": 10.963916999999128,
    "max": 18.096083000000363,
    "avg": 13.543286884999988,
    "p50": 13.58350000000064,
    "p75": 14.871791999999914,
    "p95": 16.08429099999921,
    "p99": 17.591458000000785
```

### Table as Client Component with 1000 items

Before:

```
    "min": 3.888875000000553,
    "max": 9.044959000000745,
    "avg": 4.651271475000067,
    "p50": 4.555749999999534,
    "p75": 4.966624999999112,
    "p95": 5.47754200000054,
    "p99": 6.109499999998661
```

After:

```
    "min": 3.5986250000005384,
    "max": 5.374291000000085,
    "avg": 4.142990245000046,
    "p50": 4.10570799999914,
    "p75": 4.392041999999492,
    "p95": 4.740084000000934,
    "p99": 5.1652500000000146
```

### Nested Suspense

Before:

```
  Requests:  200
  Min:       73ms
  Max:       106ms
  Avg:       78ms
  P50:       77ms
  P75:       80ms
  P95:       85ms
  P99:       94ms
```

After:

```
  Requests:  200
  Min:       56ms
  Max:       67ms
  Avg:       59ms
  P50:       58ms
  P75:       60ms
  P95:       65ms
  P99:       66ms
```

### Even more nested Suspense (double-level Suspense)

Before:

```
  Requests:  200
  Min:       159ms
  Max:       208ms
  Avg:       169ms
  P50:       167ms
  P75:       173ms
  P95:       183ms
  P99:       188ms
```

After:

```
  Requests:  200
  Min:       125ms
  Max:       170ms
  Avg:       134ms
  P50:       132ms
  P75:       138ms
  P95:       148ms
  P99:       160ms
```

## How did you test this change?

Ran it across many Next.js benchmark applications.

The entire Next.js test suite passes with this change.

---------

Co-authored-by: Hendrik Liebau <mail@hendrik-liebau.de>
@timneutkens
Member Author

React PR has been merged and added to Next.js in #90211
