Conversation
- SessionDetail/SessionAnalytics now look up cached candidates by ID before falling back to loadAncestry(), eliminating per-session N+1 DB round-trips
- Overview() and AnalyticsOverview() switched to allowCache=true so rapid TUI/dashboard refreshes reuse the in-memory candidate set
- extractToolCalls() deduplicated to one call per node in the analytics loop
- sessionCacheTTL bumped from 10s to 30s
- Added a created_at index to the Node schema for future time-range pushdown
- Facet polling uses exponential backoff (3s→30s) instead of a fixed 3s interval
Replace linear scans in SessionDetail and SessionAnalytics with a map[string]*sessionCandidate index on the cache, turning repeated keyed lookups from O(n) to O(1).
oppegard left a comment:
This looks awesome :D
My primary concern is lack of test coverage. I prompted codex with:
Based on the changes this branch made to
query.go, can you suggest some high value tests?
It came up with this branch comparison: perf/deck-query-cache-and-index...oppegard:tapes:perf/deck-query-cache-and-index
(It's wild that it can just generate this)
Before this new test suite, query.go had 27% test coverage. These tests push it to 78%, and they run in ~250 ms (which I don't love). Some conditionals are still not exercised, and functions like groupSessionDetail() and appendGroupedText() have nearly zero coverage.
That's the extent of what I've reviewed.
No docs update needed. This PR is a pure performance optimization that doesn't change any user-facing behavior:
All API responses and CLI behavior remain unchanged. Users will experience faster dashboard/TUI performance without any action required. PR #132 was merged: perf: tapes deck eliminates N+1 queries, extend cache coverage, add created_at index
Summary
- SessionDetail/SessionAnalytics — both now check the session candidate cache by ID before falling back to loadAncestry(), eliminating per-session parent-chain DB round-trips
- Added a map[string]*sessionCandidate index to the cache struct, replacing O(n) linear scans in SessionDetail and SessionAnalytics with direct map lookups on the hot path
- Overview() and AnalyticsOverview() use the cache — switched from allowCache=false to allowCache=true; rapid TUI/dashboard refreshes reuse the in-memory candidate set instead of triggering a full DB scan
- extractToolCalls in the analytics loop — called once per node instead of twice, with the result reused for both counting and error attribution
- sessionCacheTTL 10s → 30s — aligns with the TUI refresh cadence and reduces cold-load frequency
- created_at index added to the Node schema — enables future time-range predicate pushdown; applied automatically via client.Schema.Create on startup
- Facet polling backoff — replaced setInterval(fn, 3000) with a recursive setTimeout whose delay grows 1.5× per tick up to 30s, reducing background network pressure during slow backfills

Benchmark results (Apple M4 Max, 6×5s runs, benchstat), covering: Overview warm, SessionDetail warm, SessionAnalytics warm, AnalyticsOverview warm, Overview warm (50 sessions), SessionDetail warm (50 sessions). Overall geomean: −70.74%
Cold-path calls (cache miss) incur a small constant overhead (+9–11%) from writing to the cache on each full load — acceptable given warm-path savings.
Test plan
- go test ./pkg/deck/... — 121 specs pass, no regressions
- go build ./... — clean build