fix: Scope fontify buffers per session and skip redundant display updates #113

Merged
dnouri merged 1 commit into master from fix/fontify-session-scoping
Feb 9, 2026

Conversation


@dnouri dnouri commented Feb 9, 2026

Fontify buffers (" *pi-fontify:<lang>*") were globally shared by language. When two sessions wrote files in the same language, they corrupted each other's buffer — producing garbled syntax highlighting and triggering full O(n) re-fontification on every token instead of incremental O(1) appends. This caused multi-second freezes during write tool streaming in multi-session workflows.

Changes

  • Each chat buffer now tracks its own fontify buffers via a buffer-local hash table (pi-coding-agent--fontify-buffers). Lookup is by hash table, not by buffer name. Buffers are killed on session end.

  • Skip the delete+reinsert cycle in display-tool-streaming-text when the visible tail content is unchanged. Most LLM tokens extend a partial line not shown in the rolling preview, making ~90% of deltas no-ops for the buffer mutation path. fontify-sync still runs on every delta to keep the fontify buffer current.

  • Remove handle-message-update, which accumulated text deltas via O(n²) concat into a field never read during streaming.

  • Surface fontify-sync errors via message instead of swallowing silently with (condition-case nil ... (error nil)).
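
The per-session scoping can be sketched roughly as follows. Only `pi-coding-agent--fontify-buffers` and the ` *pi-fontify:<lang>*` naming come from this PR; the `my/` helpers are hypothetical illustrations of the pattern, not the actual implementation:

```elisp
(defvar-local pi-coding-agent--fontify-buffers nil
  "Buffer-local hash table mapping language string -> fontify buffer.")

(defun my/fontify-buffer-for (lang)
  "Return this session's fontify buffer for LANG, creating it if needed."
  (unless pi-coding-agent--fontify-buffers
    (setq pi-coding-agent--fontify-buffers
          (make-hash-table :test #'equal)))
  (let ((buf (gethash lang pi-coding-agent--fontify-buffers)))
    (if (buffer-live-p buf)
        buf
      ;; generate-new-buffer uniquifies the name, so two sessions using
      ;; the same language never share a buffer.
      (puthash lang
               (generate-new-buffer (format " *pi-fontify:%s*" lang))
               pi-coding-agent--fontify-buffers))))

(defun my/kill-fontify-buffers ()
  "Kill all fontify buffers owned by this session, e.g. on session end."
  (when pi-coding-agent--fontify-buffers
    (maphash (lambda (_lang buf)
               (when (buffer-live-p buf) (kill-buffer buf)))
             pi-coding-agent--fontify-buffers)
    (clrhash pi-coding-agent--fontify-buffers)))
```

Because lookup goes through the buffer-local hash table rather than `get-buffer` by name, each chat buffer resolves the same language key to its own private fontify buffer.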
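
The display-update skip can be sketched like this, with `my/preview-start` and `my/update-preview` as hypothetical stand-ins for the bookkeeping inside `display-tool-streaming-text`:

```elisp
(defvar-local my/preview-start nil
  "Marker at the start of the rolling preview region (hypothetical).")

(defun my/update-preview (new-tail)
  "Replace the visible preview tail with NEW-TAIL unless it is unchanged."
  (let ((old-tail (buffer-substring-no-properties
                   my/preview-start (point-max))))
    (unless (string= old-tail new-tail)
      ;; Only mutate the buffer when the visible text actually changes;
      ;; most deltas extend an off-screen partial line and skip this.
      (delete-region my/preview-start (point-max))
      (goto-char (point-max))
      (insert new-tail))))
```

The string comparison is cheap relative to the delete+reinsert it avoids, which is what makes ~90% of deltas near-free no-ops.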
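
Surfacing rather than swallowing errors is the difference between these two shapes (`my/fontify-sync` is a stand-in for the real sync function):

```elisp
;; Before: any error vanishes silently.
(condition-case nil
    (my/fontify-sync buf delta)
  (error nil))

;; After: the error is reported in the echo area but still does not
;; abort the streaming loop.
(condition-case err
    (my/fontify-sync buf delta)
  (error (message "pi fontify-sync error: %S" err)))
```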

Benchmarks

Emacs 30.1, 500-delta write tool, xvfb real windows:

| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| Two-session victim (mean) | 36,145µs | 1,537µs | 23.5x faster |
| Two-session victim (p95) | 84,656µs | 2,753µs | 30.7x faster |
| Single-session write (p50) | 641µs | 594µs | stable |
| Bash streaming (mean) | 79µs | 28µs | 2.8x faster |
| Fontify buffer corruption | 246 shrink events | 0 | eliminated |
| Fontify buffer leaks | confirmed | fixed | eliminated |

Realistic partial-token benchmark (1,623 tokens, 171 lines of Python):

| Metric | Value |
| --- | --- |
| Display updates | 171 out of 1,623 deltas (89.5% skip rate) |
| Mean per-delta | 1,063µs (includes 89.5% near-free no-ops) |

Tests

  • 483 unit tests pass
  • 17 integration tests pass (real pi + Ollama)
  • 13 GUI tests pass (real frames via xvfb)

@dnouri dnouri merged commit 325286e into master Feb 9, 2026
7 checks passed
@dnouri dnouri deleted the fix/fontify-session-scoping branch February 9, 2026 19:54
dnouri added a commit that referenced this pull request Feb 25, 2026
…ates (#113)
dnouri added a commit that referenced this pull request Feb 26, 2026
…ates (#113)