🤖 fix: attribute session usage sources and dedupe abort cleanup #2627

Open

ibetitsmike wants to merge 1 commit into main from mike/openai-cache-fixes

Conversation

@ibetitsmike (Contributor) commented Feb 25, 2026

Summary

Add source-level session usage attribution and harden stream abort cleanup so interrupted streams cannot double-count token usage or continue consuming work after a soft interrupt.

Background

We needed clearer attribution for sessions with high input-token usage (main chat vs. system overhead), and we identified races on the interrupt path that could overcount usage during abort handling.

Implementation

  • Added SessionUsageSource categories (main, system1, plan, subagent) and threaded attribution through:
    • usage persistence (SessionUsageService)
    • stream/session schemas
    • workspace rollup delta events
    • AI request entry points (AIService, System1ToolWrapper)
    • renderer state hydration/merge logic (WorkspaceStore)
    • Costs tab source breakdown UI
  • Hardened StreamManager abort teardown:
    • deduplicate overlapping cleanup paths with a shared abort-cleanup promise
    • signal abort immediately before awaiting partial-write flushing, preventing extra stream/tool work after a soft interrupt
    • added regression coverage for cleanup dedupe and blocked-flush interrupt ordering
  • Updated WorkspaceStore repricing to keep bySource costs aligned with repriced byModel totals using blended session component rates, preventing stale source-cost displays after pricing/mapping changes.
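The source-level attribution above can be sketched as a small per-source tally. This is an illustrative sketch only; the type and class names below (`TokenUsage`, `SessionUsageTally`) are assumptions, not the actual `SessionUsageService` API, though the `SessionUsageSource` categories match the ones listed in the PR:

```typescript
// Usage sources as described in the PR: main chat, system overhead,
// planning, and subagent work.
type SessionUsageSource = "main" | "system1" | "plan" | "subagent";

interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

// Hypothetical tally keyed by source, so a Costs tab can break down
// main-chat tokens versus system overhead.
class SessionUsageTally {
  private bySource = new Map<SessionUsageSource, TokenUsage>();

  record(source: SessionUsageSource, usage: TokenUsage): void {
    const prev = this.bySource.get(source) ?? { inputTokens: 0, outputTokens: 0 };
    this.bySource.set(source, {
      inputTokens: prev.inputTokens + usage.inputTokens,
      outputTokens: prev.outputTokens + usage.outputTokens,
    });
  }

  get(source: SessionUsageSource): TokenUsage {
    return this.bySource.get(source) ?? { inputTokens: 0, outputTokens: 0 };
  }
}
```

Threading a `source` argument through each request entry point (AIService for main chat, System1ToolWrapper for system overhead) is what lets the renderer merge and display a per-source breakdown.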
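The abort-teardown hardening can be illustrated with a shared cleanup promise. This is a minimal sketch of the pattern, not the actual StreamManager code; `StreamAborter` and `flushPartialWrites` are hypothetical names:

```typescript
// Sketch: deduplicate overlapping abort-cleanup paths with one shared promise.
class StreamAborter {
  private cleanupPromise: Promise<void> | null = null;
  cleanupRuns = 0; // exposed here only so the sketch is easy to verify

  // Both the user-interrupt path and the error path may call abort();
  // the shared promise guarantees teardown runs exactly once, and later
  // callers simply await the in-flight cleanup.
  abort(controller: AbortController, flushPartialWrites: () => Promise<void>): Promise<void> {
    if (!this.cleanupPromise) {
      // Signal abort *before* awaiting the flush, so no further stream or
      // tool work is scheduled while partial writes drain.
      controller.abort();
      this.cleanupPromise = (async () => {
        this.cleanupRuns += 1;
        await flushPartialWrites();
      })();
    }
    return this.cleanupPromise;
  }
}
```

The ordering matters: if the flush were awaited first, a slow or blocked flush would leave the stream un-aborted and free to keep consuming work after a soft interrupt, which is the interrupt-ordering case the regression tests cover.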
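One way to read the repricing change: scale each source's cost by the blended ratio of the repriced session total to the old total, so `bySource` stays consistent with the repriced `byModel` totals. The function below is a simplified assumption about that logic, not the WorkspaceStore implementation:

```typescript
// Hypothetical sketch: keep per-source costs aligned with a repriced
// session total by applying the blended old-to-new rate.
function repriceBySource(
  bySource: Record<string, number>, // cost attributed to each usage source
  oldTotal: number,                 // session total before the pricing change
  newTotal: number                  // session total after repricing byModel
): Record<string, number> {
  if (oldTotal === 0) return { ...bySource }; // nothing to scale against
  const blendedRatio = newTotal / oldTotal;
  const out: Record<string, number> = {};
  for (const [source, cost] of Object.entries(bySource)) {
    out[source] = cost * blendedRatio;
  }
  return out;
}
```

Without some rescaling of this shape, a pricing or model-mapping change would leave the source breakdown showing stale costs that no longer sum to the repriced totals.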

Validation

  • make static-check
  • bun test src/node/services/streamManager.test.ts src/node/services/sessionUsageService.test.ts src/browser/stores/WorkspaceStore.test.ts src/common/orpc/schemas/chatStats.test.ts

Risks

  • Low-to-moderate: touches stream interrupt teardown and usage accounting paths. Regressions would most likely appear as incorrect source-level cost attribution or interrupt edge-case behavior; both paths now have focused regression tests.

Generated with mux • Model: openai:gpt-5.3-codex • Thinking: xhigh • Cost: $13.74

@ibetitsmike (Contributor, Author) commented:

@codex review

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 17ddca6fef

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@ibetitsmike force-pushed the mike/openai-cache-fixes branch from 17ddca6 to 8f55612 (February 25, 2026 13:15)
@ibetitsmike (Contributor, Author) commented:

@codex review

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8f55612f93

@ibetitsmike force-pushed the mike/openai-cache-fixes branch from 8f55612 to d5c7e7b (February 25, 2026 13:30)
@ibetitsmike (Contributor, Author) commented:

@codex review

@chatgpt-codex-connector commented:

Codex Review: Didn't find any major issues. 🎉
