📝 Walkthrough

This PR introduces a comprehensive client navigation benchmarking suite for TanStack Router across the React, Solid, and Vue frameworks. It adds a GitHub Actions workflow for CodSpeed integration, benchmark implementations for each framework with shared utilities, Vitest configurations, and workspace/package configuration updates to enable benchmark execution via npm scripts.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
View your CI Pipeline Execution ↗ for commit baa7b23
☁️ Nx Cloud last updated this comment
Bundle Size Benchmarks
The trend sparkline shows historical gzip bytes, ending with this PR's measurement; lower is better.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@benchmarks/client-nav/package.json`:
- Around lines 11-14: The package.json dependencies use "workspace:^" for the
internal TanStack packages; update the three dependencies
"@tanstack/react-router", "@tanstack/solid-router", and "@tanstack/vue-router"
from "workspace:^" to "workspace:*" so internal packages in
benchmarks/client-nav/package.json use the expected workspace range.
In `@benchmarks/client-nav/react/client-nav.bench.tsx`:
- Around lines 109-135: The Promise currently creates timeoutId and intervalId
inside its own scope and only clears them when the interval observes the
expected href, so if the timeout fires the interval keeps running and cleanup()
can never clear the timers. Move the timeoutId and intervalId declarations to
the outer scope (above the new Promise) so both the timeout handler and the
cleanup function can access them, have the timeout handler clear the interval
(clearInterval(intervalId)) before rejecting, and update cleanup() to clear both
timers (clearTimeout(timeoutId) and clearInterval(intervalId)) in addition to
calling reactRoot.unmount() and container.remove().
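A minimal sketch of the suggested fix, with a hypothetical `fakeRouter` standing in for the real router (and the `reactRoot.unmount()` / `container.remove()` calls elided) so it runs outside the bench:

```typescript
// Both timer handles live in the outer scope so the timeout handler,
// the interval callback, and cleanup() can all clear them.
const fakeRouter = { state: { location: { href: '/start' } } }
const expectedHref = '/target'

let timeoutId: ReturnType<typeof setTimeout> | undefined
let intervalId: ReturnType<typeof setInterval> | undefined

const done = new Promise<void>((resolve, reject) => {
  timeoutId = setTimeout(() => {
    clearInterval(intervalId) // stop polling before rejecting
    reject(new Error('navigation timed out'))
  }, 1_000)

  intervalId = setInterval(() => {
    if (fakeRouter.state.location.href === expectedHref) {
      clearTimeout(timeoutId)
      clearInterval(intervalId)
      resolve()
    }
  }, 10)
})

function cleanup() {
  // Clearing an already-cleared or undefined handle is a safe no-op;
  // the real cleanup would also unmount the root and remove the container.
  clearTimeout(timeoutId)
  clearInterval(intervalId)
}

// Simulate the navigation completing shortly after mount.
setTimeout(() => {
  fakeRouter.state.location.href = expectedHref
}, 50)
```

Because both handles are visible to every code path, neither the timeout path nor an external teardown can leave the interval running.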
In `@benchmarks/client-nav/solid/client-nav.bench.tsx`:
- Around lines 117-142: The Promise-created timers (timeoutId and intervalId)
must be declared outside the Promise so cleanup can access them. Update the done
Promise logic (which currently checks router.state.location.href against
expectedHref/TARGET_ID) so the timeout handler also clears the interval
(window.clearInterval) before rejecting, and ensure the cleanup function
(alongside dispose() and container.remove()) clears both timers with
null/undefined checks using window.clearTimeout(timeoutId) and
window.clearInterval(intervalId) to satisfy TypeScript strict mode. Use the
existing symbols timeoutId, intervalId, done, cleanup, dispose,
container.remove, and router.state.location.href to locate the code to modify.
- Around lines 55-62: The mapped Link elements inside Array.from({ length:
LINK_COUNT }, (_, index) => ...) lack a key prop, causing suboptimal list
reconciliation; update the mapping that returns the Link components (the Link
JSX in client-nav.bench.tsx) to include a stable key (e.g., key={index} or
key={String(index)}) using the index or id so the list items can be properly
tracked during rendering, improving performance.
In `@benchmarks/client-nav/vue/client-nav.bench.ts`:
- Around lines 120-137: The timeout handler in the Promise created for
navigation benchmarking only rejects and doesn't clear the interval, so
intervalId keeps running; update the timeout callback (the function that uses
timeoutId) to call window.clearInterval(intervalId) before calling reject so
that the interval polling (created to check router.state.location.href against
expectedHref/TARGET_ID) stops when the timeout fires.
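A compact sketch of that change, assuming a generic `waitForNavigation` helper rather than the bench's exact code (the plain global timer functions stand in for the `window.*` variants so this runs outside a browser):

```typescript
// Poll until the observed href matches, or reject after timeoutMs.
// The timeout path clears the interval BEFORE rejecting, so no
// polling keeps running after the promise settles.
function waitForNavigation(
  getHref: () => string,
  expectedHref: string,
  timeoutMs: number,
): Promise<void> {
  return new Promise((resolve, reject) => {
    const intervalId = setInterval(() => {
      if (getHref() === expectedHref) {
        clearTimeout(timeoutId)
        clearInterval(intervalId)
        resolve()
      }
    }, 10)
    const timeoutId = setTimeout(() => {
      clearInterval(intervalId) // stop polling when timing out
      reject(new Error(`navigation to ${expectedHref} timed out`))
    }, timeoutMs)
  })
}
```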
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (12)
.github/workflows/codspeed.yml
benchmarks/client-nav/package.json
benchmarks/client-nav/react/client-nav.bench.tsx
benchmarks/client-nav/react/vitest.config.ts
benchmarks/client-nav/shared.ts
benchmarks/client-nav/solid/client-nav.bench.tsx
benchmarks/client-nav/solid/vitest.config.ts
benchmarks/client-nav/tsconfig.json
benchmarks/client-nav/vue/client-nav.bench.ts
benchmarks/client-nav/vue/vitest.config.ts
package.json
pnpm-workspace.yaml
We've had big performance-focused pushes before, with a whole lot of benchmarking, graphs, and measurements... but we don't have any form of long-term tracking. This PR proposes we set up something for the main 2 things:
We're looking into using CodSpeed because we saw that tool on the rolldown repo, and it looks nice. And if they do in fact solve the noisy machine problem, then we can have some form of stable perf tracking.
SSR
For SSR, so far we've been using `autocannon` and the working setup in `e2e/react-start/flamegraph-bench` to focus on a specific feature (link rendering, matching depth, raw baseline). This gives us a req/s number (main) as well as throughput and latency (secondaries).

For CodSpeed we can't really use `autocannon` (or I don't know how to). But we can take advantage of the fact that tanstack/start is "just a request handler" and feed it `Request` objects directly. Do this inside a `vitest bench` and we can have a stable req/s metric. We'll need to feed in some parallelism and not just "wait for one, send the next".

Client side
For client-side nav performance, we've been using the `store-updates-during-navigation.test.tsx` tests that measure re-renders, but those might become obsolete with signals and granular re-rendering. I think we can find a good middle ground using `vitest bench` and a DOM environment (probably jsdom? maybe browser mode?), and mounting a self-navigating router. Measure how much time it takes to navigate 100 routes and that should be a stable navigation-latency metric.

Bundle size
We started tracking bundle size recently. Because it's the 3rd "big metric" we should look into colocating it with the other ones: `/benchmarks` root too?
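Circling back to the SSR point above, a rough sketch of what "feed `Request` objects directly, with parallelism" could look like. The trivial `handler` here is a hypothetical stand-in for the real tanstack/start request handler, and in the actual suite the loop body would live inside a vitest `bench()` callback:

```typescript
// Hypothetical sketch: derive a req/s number by driving a request
// handler directly with Request objects, keeping several in flight
// rather than "wait for one, send the next".
const handler = async (request: Request): Promise<Response> =>
  new Response(`hello ${new URL(request.url).pathname}`)

const CONCURRENCY = 25
const ITERATIONS = 200

async function measureReqPerSec(): Promise<number> {
  const start = performance.now()
  for (let i = 0; i < ITERATIONS; i += CONCURRENCY) {
    // Keep CONCURRENCY requests in flight at a time.
    await Promise.all(
      Array.from({ length: CONCURRENCY }, (_, j) =>
        handler(new Request(`http://localhost/posts/${i + j}`)),
      ),
    )
  }
  const seconds = (performance.now() - start) / 1000
  return ITERATIONS / seconds
}
```

Batched `Promise.all` is the simplest form of parallelism; a sliding-window pool would keep concurrency steadier, but for a first stable metric this should be close enough.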