chore: performance baseline for port visualization (#255) #294
Conversation
Add performance test harness and baseline documentation for measuring render performance before implementing port visualization feature.

- `scripts/performance-benchmark.ts`: Test data generators for various device and port count scenarios
- `docs/research/performance-baseline.md`: Baseline documentation with SVG element counts, performance targets, and test methodology

This establishes benchmarks to measure against after implementing network interface visualization (#71).

Fixes #255

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
📝 Walkthrough

This pull request establishes a performance baseline for port visualization by introducing comprehensive documentation and a benchmarking script. The documentation defines current rendering metrics, expected element counts, performance budgets, and test methodology. The script provides repeatable data generation and benchmark execution capabilities for measuring rendering performance across various rack configurations.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Test as Test Runner
    participant Gen as Data Generators
    participant Bench as Benchmark Engine
    participant Browser as Browser/DevTools
    participant Report as Results

    Test->>Gen: generateTestRack(deviceCount, portsPerDevice)
    Gen->>Gen: generateDeviceType(index, portCount)
    Gen->>Gen: generatePlacedDevices(count, rackHeight, types)
    Gen-->>Bench: {rack, deviceTypes}
    Test->>Bench: runBenchmarks() for each TestScenario
    Bench->>Bench: Measure data generation time
    Bench-->>Report: BenchmarkResult {scenario, deviceCount, dataGenerationMs, ...}
    Test->>Browser: printBrowserInstructions()
    Browser->>Browser: Open Chrome DevTools Performance
    Browser->>Browser: Record render, pan/zoom interactions
    Browser-->>Report: Memory usage, interaction latency
    Report-->>Test: Consolidated baseline metrics
```
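To make the flow above concrete, here is a hypothetical usage sketch; the import path, argument values, and result fields are assumptions inferred from the diagram, not the script's verified API.

```ts
// Hypothetical usage sketch based on the diagram above; import path and exact
// signatures are assumptions, not verified against the actual script.
import {
  generateTestRack,
  runBenchmarks,
  printBrowserInstructions,
  type BenchmarkResult,
} from "../scripts/performance-benchmark";

// Generate synthetic test data: 10 devices with 24 ports each (assumed args).
const { rack, deviceTypes } = generateTestRack(10, 24);
console.log(`Generated test rack with ${deviceTypes.length} device types`, rack);

// Run every predefined TestScenario and report data-generation timings.
const results: BenchmarkResult[] = runBenchmarks();
for (const result of results) {
  console.log(`${result.scenario}: ${result.dataGenerationMs.toFixed(3)} ms`);
}

// Print the manual Chrome DevTools steps for render/pan/zoom measurements.
printBrowserInstructions();
```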
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (4 passed)
✨ Finishing touches
Actionable comments posted: 4
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- docs/research/performance-baseline.md
- scripts/performance-benchmark.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx,js,svelte}
📄 CodeRabbit inference engine (CLAUDE.md)
Use TypeScript strict mode
Files:
scripts/performance-benchmark.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
NetBox-compatible field naming: use snake_case for field names (e.g., u_height, device_type, form_factor) in data models
Files:
scripts/performance-benchmark.ts
🧬 Code graph analysis (1)
scripts/performance-benchmark.ts (1)
src/lib/types/index.ts (3)
DeviceType (165-235), PlacedDevice (241-274), Rack (283-308)
🪛 LanguageTool
docs/research/performance-baseline.md
[typographical] ~18-~18: If specifying a range, consider using an en dash instead of a hyphen.
Context: ...ion will add significant element count (24-48 elements per device) **Performance Bud...
(HYPHEN_TO_EN)
🪛 markdownlint-cli2 (0.18.1)
docs/research/performance-baseline.md
268-268: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🔇 Additional comments (9)
docs/research/performance-baseline.md (3)
1-74: Excellent baseline establishment!

The executive summary and test methodology sections are well-structured and comprehensive. The performance budgets (<16ms for 60fps, <50MB heap) align with industry standards, and the test scenarios cover the critical load patterns from the linked issue.

The browser testing protocol with CPU throttling and median values from 3 iterations is a particularly robust methodology!
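As a small illustration of the median-of-three protocol (not code from this PR), a helper like the following could reduce the three DevTools timings to a single reported value:

```ts
// Illustrative helper for the median-of-three measurement protocol; the sample
// values below are made-up numbers, not results from this PR.
function median(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// e.g. three recorded render timings in milliseconds
console.log(median([14.2, 15.1, 13.8])); // 14.2
```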
77-157: Thorough element counting and realistic performance targets!

The breakdown of SVG elements per component (rack container, device, future ports) provides excellent visibility into where the rendering complexity will come from. The distinction between low-density (~48 elements) and high-density (~12 elements with grouping) port visualization strategies shows good foresight for optimization.

The render performance targets with both "Target" and "Critical" thresholds give appropriate headroom for implementation trade-offs.
160-267: Comprehensive performance budget and excellent testing guidance!

The frame budget breakdown (8ms scripting, 4ms rendering, 4ms painting) is well-reasoned, and the optimization strategies (zoom-level rendering, port grouping, virtualization) provide clear escalation paths if budgets are exceeded.

The test harness usage examples with both TypeScript imports and browser console measurements make this immediately actionable for developers. The DevTools workflow is particularly helpful!

All acceptance criteria marked complete - well done!
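For the browser console measurements mentioned above, a rough snippet along these lines could capture the current element count and an approximate frame timing; the selector and the 16 ms budget reference are illustrative assumptions, not the documented protocol:

```ts
// Rough browser-console sketch for capturing baseline numbers; selectors and
// logging format are assumptions for illustration only.
const svgElementCount = document.querySelectorAll("svg *").length;
console.log(`SVG elements currently rendered: ${svgElementCount}`);

// Approximate the time until the next animation frame after a trigger point.
performance.mark("frame-start");
requestAnimationFrame(() => {
  performance.mark("frame-end");
  const { duration } = performance.measure("frame", "frame-start", "frame-end");
  console.log(`Next frame reached in ${duration.toFixed(2)} ms (60fps budget: ~16 ms)`);
});
```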
scripts/performance-benchmark.ts (6)
1-20: Clear documentation and proper type imports!

The file header provides excellent context and usage instructions. The type imports correctly reference the core Rackula types from the types module.

78-112: Well-structured test rack generation!

The function correctly creates unique device types for each device and constructs a complete Rack object. All field names properly follow the snake_case convention for NetBox-compatible fields (desc_units, show_rear, form_factor, starting_unit).

Note: This function will benefit from the boundary check fix in `generatePlacedDevices` to ensure all requested devices are placed when space permits.
171-205: Benchmark runner implementation looks solid!

The `runBenchmarks` function correctly measures data generation time using `performance.now()`, calculates total ports, and provides clear console output. The results are properly structured and returned for further analysis.

The benchmarking approach confirms what the documentation states: data generation is negligible (<0.1ms), so performance concerns are entirely in rendering!
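For readers unfamiliar with the pattern, a minimal sketch of timing data generation with `performance.now()` might look like this; the import path and call shape are assumptions, not the script's exact code:

```ts
// Minimal sketch of the timing pattern described above; not the script's code.
import { performance } from "node:perf_hooks";
import { generateTestRack } from "../scripts/performance-benchmark"; // assumed path

function timeDataGeneration(deviceCount: number, portsPerDevice: number): number {
  const start = performance.now();
  generateTestRack(deviceCount, portsPerDevice);
  return performance.now() - start;
}

// Worst documented scenario: 20 devices x 24 ports (max rack).
const dataGenerationMs = timeDataGeneration(20, 24);
console.log(`Data generation: ${dataGenerationMs.toFixed(3)} ms`); // expected < 0.1 ms
```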
211-244: Comprehensive browser testing guidance!

The instructions cover all critical aspects: production build, DevTools usage, metrics to capture (FCP, LCP, TBT), test methodology (cache clearing, incognito, 3 iterations), and performance budgets. This makes the baseline reproducible and actionable!
250-258: Well-organized exports for reusability!

All data generators, scenarios, and types are properly exported, enabling other scripts or tests to leverage this infrastructure. The type exports for `TestScenario` and `BenchmarkResult` are particularly helpful for type safety!
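As an illustration of that reuse, a hypothetical test file might import the generators like this; the test framework (Vitest) and the import path are assumptions not established by this PR:

```ts
// Hypothetical reuse in a test file; framework and paths are assumptions.
import { describe, expect, it } from "vitest";
import { generateTestRack, scenarios } from "../scripts/performance-benchmark";

describe("performance test data generators", () => {
  it("creates one device type per requested device", () => {
    const { rack, deviceTypes } = generateTestRack(10, 24);
    expect(deviceTypes).toHaveLength(10);
    expect(rack).toBeDefined();
  });

  it("covers the scenarios documented in the baseline", () => {
    expect(scenarios.length).toBeGreaterThanOrEqual(5);
  });
});
```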
29-38: The Interface field naming is correct; no adjustments needed.

The `name` and `type` fields follow NetBox conventions. NetBox interface objects use these exact field names (not `interface_type` or similar variants). Snake_case applies to multi-word identifiers like `mgmt_only`, `device_type`, and `form_factor`; single-word fields like `name` and `type` require no conversion.

Likely an incorrect or invalid review comment.
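For reference, a minimal illustrative shape (not the project's actual `Interface` type) showing which fields need snake_case and which do not:

```ts
// Illustrative shape only, not the project's actual Interface type: single-word
// NetBox fields stay as-is, multi-word fields use snake_case.
interface PortInterface {
  name: string;        // e.g. "eth0" — single-word field, no conversion needed
  type: string;        // e.g. "1000base-t" — single-word field
  mgmt_only: boolean;  // multi-word field, so snake_case
}

const port: PortInterface = { name: "eth0", type: "1000base-t", mgmt_only: false };
```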
**End of Performance Baseline Document**
🧹 Nitpick | 🔵 Trivial
Minor formatting suggestion from static analysis.
The "End of Performance Baseline Document" could be formatted as a heading (## End of Performance Baseline Document) rather than emphasized text for consistency vith markdown best practices.
This is entirely optional and does not affect functionality!
🔎 Proposed fix
```diff
-**End of Performance Baseline Document**
+## End of Performance Baseline Document
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
## End of Performance Baseline Document
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
268-268: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🤖 Prompt for AI Agents
In docs/research/performance-baseline.md around line 268, the final line uses
bold emphasis "**End of Performance Baseline Document**" but should be a
markdown heading for consistency; change that line to a level-2 (or appropriate)
heading syntax (e.g., "## End of Performance Baseline Document") and ensure
surrounding blank lines before/after the heading to match document formatting
conventions.
```ts
  for (let i = 0; i < count && currentU <= rackHeight - 1; i++) {
    const deviceType = deviceTypes[i % deviceTypes.length];
    devices.push({
      id: `placed-${i}`,
      device_type: deviceType.slug,
      position: currentU,
      face: "front",
    });
    currentU += deviceType.u_height;
  }
```
Critical: Incorrect boundary check prevents valid device placements!

The condition `currentU <= rackHeight - 1` on line 64 is too restrictive. For a 42U rack, this stops at position 41, preventing a 1U device from being placed at U42 (which is valid).

The check should verify that the entire device fits within the rack:

    currentU + deviceType.u_height - 1 <= rackHeight

For example, a 2U device at position 41 in a 42U rack:
- Occupies U41-U42 ✓ (valid)
- Check: 41 + 2 - 1 = 42 <= 42 ✓

But a 2U device at position 42:
- Would occupy U42-U43 ✗ (U43 doesn't exist)
- Check: 42 + 2 - 1 = 43 <= 42 ✗
🔎 Proposed fix

```diff
-  for (let i = 0; i < count && currentU <= rackHeight - 1; i++) {
+  for (let i = 0; i < count && currentU + deviceTypes[i % deviceTypes.length].u_height - 1 <= rackHeight; i++) {
     const deviceType = deviceTypes[i % deviceTypes.length];
```

Or better yet, check before placing:

```diff
-  for (let i = 0; i < count && currentU <= rackHeight - 1; i++) {
+  for (let i = 0; i < count; i++) {
     const deviceType = deviceTypes[i % deviceTypes.length];
+    // Check if device fits in remaining space
+    if (currentU + deviceType.u_height - 1 > rackHeight) {
+      break;
+    }
     devices.push({
```

🤖 Prompt for AI Agents
In scripts/performance-benchmark.ts around lines 64 to 73, the loop's boundary
check uses currentU <= rackHeight - 1 which disallows placing devices that would
occupy the last rack unit(s); change the condition to ensure the entire device
fits before pushing it (for example evaluate currentU + deviceType.u_height - 1
<= rackHeight) and only push/place the device if that check passes, then
increment currentU by deviceType.u_height; this guarantees valid placements
(including a 1U at U42) and prevents devices from overflowing the rack.
```ts
const scenarios: TestScenario[] = [
  {
    name: "Empty rack",
    deviceCount: 0,
    portsPerDevice: 0,
    expectedElements: 42 * 6 + 43 + 42 * 2, // U slots + grid lines + U labels + mounting holes
  },
  {
    name: "10 devices (no ports)",
    deviceCount: 10,
    portsPerDevice: 0,
    expectedElements: 0, // Calculated at runtime
  },
  {
    name: "10 devices × 24 ports = 240 port elements",
    deviceCount: 10,
    portsPerDevice: 24,
    expectedElements: 240,
  },
  {
    name: "10 devices × 48 ports = 480 port elements",
    deviceCount: 10,
    portsPerDevice: 48,
    expectedElements: 480,
  },
  {
    name: "20 devices × 24 ports = 480 port elements (max rack)",
    deviceCount: 20,
    portsPerDevice: 24,
    expectedElements: 480,
  },
];
```
Inconsistent and potentially incorrect expectedElements values.

Two issues with the scenarios array:

1. Incorrect calculation for empty rack (line 130):

   `expectedElements: 42 * 6 + 43 + 42 * 2`

   The comment says "U slots + grid lines + U labels + mounting holes", but the calculation is:
   - 42 * 6 = 252 (mounting holes: 6 per U)
   - 43 (grid lines)
   - 42 * 2 = 84 (unclear what this represents)

   According to the documentation (docs/research/performance-baseline.md lines 81-90), the empty rack should have:
   - 1 interior rect + 4 rails + 42 U slots + 43 grid lines + 252 mounting holes + 42 U labels = ~384 elements

   The formula should be: `1 + 4 + 42 + 43 + 252 + 42 = 384`

2. Inconsistent meaning (lines 136-155):

   For scenarios with ports, `expectedElements` only represents the port element count, not total SVG elements. This is inconsistent with scenario 1 and makes the field confusing. Additionally, scenario 2 has `expectedElements: 0` with a "Calculated at runtime" comment, but it's never actually calculated or used.

Consider either:
- Making `expectedElements` consistently represent total SVG elements for all scenarios, OR
- Renaming it to `expectedPortElements` and removing it from scenarios without ports
🔎 Proposed fix for calculation
```diff
   {
     name: "Empty rack",
     deviceCount: 0,
     portsPerDevice: 0,
-    expectedElements: 42 * 6 + 43 + 42 * 2, // U slots + grid lines + U labels + mounting holes
+    expectedElements: 1 + 4 + 42 + 43 + 252 + 42, // interior + rails + U slots + grid + mounting holes + labels = ~384
   },
```

🤖 Prompt for AI Agents
In scripts/performance-benchmark.ts around lines 125 to 156, the scenarios array
has inconsistent and incorrect expectedElements values: update the "Empty rack"
case to use the documented total element count formula (1 + 4 + 42 + 43 + 252 +
42 = 384) instead of the current arithmetic, and make expectedElements
consistent across all scenarios (either represent total SVG elements for every
scenario or rename the field to expectedPortElements and use it only for port
counts). Also remove or implement the "Calculated at runtime" placeholder for
the 10 devices (no ports) scenario so it is not left as 0; either compute its
total element count or omit expectedElements for that scenario if switching to
expectedPortElements. Ensure all scenario comments reflect the chosen meaning.
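If the author opts for the "total SVG elements" semantics, a hypothetical helper could compute the value at runtime; `ELEMENTS_PER_DEVICE` below is a placeholder assumption to be replaced with the real per-device element count:

```ts
// Hypothetical helper for computing expectedElements as total SVG elements, as
// the review suggests. EMPTY_RACK_ELEMENTS follows the baseline doc's formula;
// ELEMENTS_PER_DEVICE is an assumed placeholder, not a documented figure.
const EMPTY_RACK_ELEMENTS = 1 + 4 + 42 + 43 + 252 + 42; // 384
const ELEMENTS_PER_DEVICE = 4; // placeholder: replace with the real per-device count

function expectedTotalElements(deviceCount: number, portsPerDevice: number): number {
  return (
    EMPTY_RACK_ELEMENTS +
    deviceCount * ELEMENTS_PER_DEVICE +
    deviceCount * portsPerDevice // one element per port (low-density strategy)
  );
}

console.log(expectedTotalElements(10, 24)); // base rack + 10 devices + 240 port elements
```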
```ts
// Run when executed directly
runBenchmarks();
printBrowserInstructions();
```
🧹 Nitpick | 🔵 Trivial
Consider guarding main execution for safer imports.
Currently, importing this module will automatically execute the benchmarks. While this is acceptable for a script meant to be run directly, it could cause unexpected behavior if someone imports the utility functions for testing purposes.
Consider guarding the execution:
🔎 Proposed refactor

```diff
 // =============================================================================
 // Main
 // =============================================================================
-// Run when executed directly
-runBenchmarks();
-printBrowserInstructions();
+// Run when executed directly (not when imported)
+if (import.meta.url === `file://${process.argv[1]}`) {
+  runBenchmarks();
+  printBrowserInstructions();
+}
```

This ensures the benchmarks only run when the script is executed directly with `npx tsx`, not when imported by other modules.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// =============================================================================
// Main
// =============================================================================

// Run when executed directly (not when imported)
if (import.meta.url === `file://${process.argv[1]}`) {
  runBenchmarks();
  printBrowserInstructions();
}
```
🤖 Prompt for AI Agents
In scripts/performance-benchmark.ts around lines 264-266, the module currently
runs runBenchmarks() and printBrowserInstructions() on import; wrap those calls
in a top-level execution guard so they only run when the file is executed
directly (e.g., use an ESM-safe check like if (import.meta.main) {
runBenchmarks(); printBrowserInstructions(); } or the appropriate require.main
=== module guard for CJS), ensuring imports don't trigger benchmark execution.
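A slightly more robust alternative guard (an assumption-level sketch, not the committed fix) could normalize both paths before comparing, which avoids mismatches from URL encoding or relative `argv` paths; it assumes it lives in the same module that defines the two functions:

```ts
// Sketch of an ESM guard that normalizes paths before comparison; this is an
// illustrative alternative, not the fix proposed in the review above.
import { fileURLToPath } from "node:url";
import path from "node:path";

const isDirectRun =
  process.argv[1] !== undefined &&
  path.resolve(process.argv[1]) === fileURLToPath(import.meta.url);

if (isDirectRun) {
  runBenchmarks();
  printBrowserInstructions();
}
```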
Summary
- Performance benchmark script (`scripts/performance-benchmark.ts`) for generating test data at various device and port count scenarios
- Baseline documentation (`docs/research/performance-baseline.md`) with:

Files Changed

- `scripts/performance-benchmark.ts`: Test data generators and benchmark scenarios
- `docs/research/performance-baseline.md`: Performance baseline documentation

Test Plan

- Run `npx tsx scripts/performance-benchmark.ts`

Closes #255
🤖 Generated with Claude Code
Summary by CodeRabbit
Documentation
Tests