Conversation

@CsBigDataHub (Contributor) commented Sep 12, 2025

I am not able to perform extensive testing; please consider this an initial draft.

@ericdallo (Member)

I'll use this for my development and see if I notice improvements/regressions, thanks!

@ericdallo (Member) left a comment

@CsBigDataHub I think we need to re-think `smart`. Maybe we cache the total text in a var by id (we have `eca-set` and `eca-get` to quickly assoc/get values in a map)?

That way we only update the UI from that var, while appending every message to it.

CsBigDataHub and others added 4 commits September 12, 2025 19:44
Introduce buffer-local caches for accumulating tool arguments and metadata
during streaming toolCallPrepare messages. This reduces UI churn by
throttling updates based on the new `eca-chat-tool-prepare-throttle`
options ('first-last' and improved 'smart').

Add heuristic `eca-chat--detect-final-tool-prepare-p` to identify complete
JSON-like argument blocks.

Update toolCallRun, toolCallRunning, toolCalled, and toolCallRejected to
use cached data when available. Clean up caches on completion or rejection.

Initialize new caches in `eca-chat-mode` and apply minor formatting fixes
for readability.
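
The "complete JSON-like argument block" heuristic mentioned above might look something like the following sketch. This is illustrative only: the PR's actual `eca-chat--detect-final-tool-prepare-p` may differ, and `my/json-args-complete-p` is a hypothetical stand-in that checks for balanced top-level braces outside of strings.

```elisp
(require 'seq)

(defun my/json-args-complete-p (text)
  "Return non-nil if TEXT looks like a complete JSON object.
Tracks brace depth while skipping over string contents and
escape sequences; a non-empty, balanced brace count suggests the
streamed tool arguments have finished arriving."
  (let ((depth 0) (in-string nil) (escaped nil) (seen-open nil))
    (seq-doseq (ch text)
      (cond
       (escaped (setq escaped nil))                     ; skip escaped char
       ((and in-string (eq ch ?\\)) (setq escaped t))   ; backslash in string
       ((eq ch ?\") (setq in-string (not in-string)))   ; toggle string state
       ((and (not in-string) (eq ch ?{))
        (setq depth (1+ depth) seen-open t))
       ((and (not in-string) (eq ch ?}))
        (setq depth (1- depth)))))
    (and seen-open (= depth 0))))
```

A caller could run this on each accumulated chunk and treat the first balanced result as the "last" prepare event for throttling purposes.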
@CsBigDataHub (Author)

@ericdallo please try now.

@CsBigDataHub (Author) commented Sep 14, 2025

I think we need to re-think `smart`. Maybe we cache the total text in a var by id (we have `eca-set` and `eca-get` to quickly assoc/get values in a map)?

That way we only update the UI from that var, while appending every message to it.

We addressed exactly this by introducing three buffer-local hash tables (`eca-chat--tool-prepare-counters`, `eca-chat--tool-prepare-content-cache`, and `eca-chat--tool-prepare-metadata-cache`) to accumulate every incoming chunk of toolCallPrepare for each id. On each event we:

Always `puthash` the new arguments into `eca-chat--tool-prepare-content-cache` under the id, appending them to the cached string.

Use that full, up-to-date cached string when we actually update the UI (throttled by `eca-chat-tool-prepare-throttle`).

Clean up all three caches for that id once the tool call finishes or is rejected.

Since those tables already provide exactly the "cache total text by id" behavior you described, we don't need separate `eca-set`/`eca-get` wrappers; the hash tables give us fast assoc/get and keep the code straightforward. Let me know if you'd prefer an explicit `eca-set`/`eca-get` API on top of these tables, but functionally the current implementation matches the intended design.
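
The accumulate-then-cleanup flow described above can be sketched roughly as follows. The cache variable name comes from this thread; the handler names (`my/eca-tool-prepare-append`, `my/eca-tool-prepare-cleanup`) are illustrative, not the PR's actual functions, and the real code initializes the tables buffer-locally in `eca-chat-mode`.

```elisp
(defvar eca-chat--tool-prepare-content-cache nil
  "Hash table mapping tool-call id -> accumulated argument text.
Made buffer-local and initialized per chat buffer in `eca-chat-mode'.")

(defun my/eca-tool-prepare-append (id chunk)
  "Append CHUNK of streamed arguments for tool call ID.
Returns the full accumulated string, which throttled UI updates
can render instead of re-rendering on every chunk."
  (let ((so-far (gethash id eca-chat--tool-prepare-content-cache "")))
    (puthash id (concat so-far chunk) eca-chat--tool-prepare-content-cache)))

(defun my/eca-tool-prepare-cleanup (id)
  "Drop cached data for ID once the tool call finishes or is rejected."
  (remhash id eca-chat--tool-prepare-content-cache))
```

Hash tables with `:test #'equal` give O(1) lookup by string id, which is why no extra `eca-set`/`eca-get` layer is needed here.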

@CsBigDataHub (Author)

[screenshot]

@CsBigDataHub (Author)

Flyspell seems to be bogging things down while the LLM is writing the content; things seem smooth after disabling flyspell.

[screenshots: with flyspell enabled]

After disabling flyspell:

[screenshots: with flyspell disabled]

@CsBigDataHub (Author)

Memory profiler

    431,705,336  42% - #<byte-code-function C92>
    431,705,336  42%  - apply
    431,705,336  42%   - eca-process--filter
    431,705,336  42%    - let
    431,705,336  42%     - let
    431,705,336  42%      - while
    431,705,336  42%       - if
    431,705,336  42%        - let*
    431,333,624  42%         - if
    300,099,720  29%          - progn
    165,452,296  16%           - condition-case
    165,452,296  16%            - let
    165,317,035  16%             - save-current-buffer
    165,317,035  16%              - unwind-protect
    165,317,035  16%               - progn
     70,622,168   6%                  decode-coding-region
     66,709,264   6%                - setq
     66,709,264   6%                 - cons
     66,709,264   6%                    json-parse-buffer
     27,985,603   2%                - apply
     27,985,603   2%                   insert
        135,261   0%             - generate-new-buffer
        135,261   0%                get-buffer-create
    134,647,424  13%           - setq
    132,554,832  13%              substring-no-properties
      2,092,592   0%            - let*
      2,092,592   0%             - mapcar
      1,062,600   0%              - eca-process--parse-header
      1,062,600   0%               - let
        690,888   0%                - progn
        690,888   0%                 - setq
        658,152   0%                  - s-trim-left
        371,712   0%                     string-match
        245,520   0%                     replace-match
         40,920   0%                     substring
         32,736   0%                    substring
        371,712   0%                  string-match
      1,029,992   0%              - split-string
        461,864   0%               - substring-no-properties
        371,840   0%                - or
        371,840   0%                 - string-match-p
        371,840   0%                    string-match
        379,896   0%                 string-match
        188,232   0%                 #<byte-code-function 9DB>
    131,233,904  12%          - prog1
    110,530,904  10%           - setq
    110,530,904  10%              substring-no-properties
     20,703,000   2%             substring-no-properties
        371,712   0%         - and
        371,712   0%          - string-match-p
        371,712   0%             string-match

@ericdallo (Member) left a comment

Besides the namings, looks good; I will test.

@ericdallo (Member)

Hmm, interesting. I wonder why flyspell would affect that.

@ericdallo (Member)

Just tested and the performance is way better, @CsBigDataHub!
We can merge after your renamings.

@CsBigDataHub (Author) commented Sep 15, 2025

Hmm, interesting. I wonder why flyspell would affect that.

Maybe because I have flyspell enabled in all buffers; since we are writing content during tool calls in the eca buffer, a huge amount of text is getting processed through flyspell.

Let me check if I can enable flyspell only during user input and disable it during tool calls.

@CsBigDataHub (Author)

Maybe we can have this in the README for future travelers?
I would rather handle it in user config than in the source code. Thoughts?

(defun my/eca-chat-setup-flyspell-performance ()
  "Keep flyspell off during LLM streaming in `eca-chat-mode'.
Flyspell starts disabled and is re-enabled only while the user is typing."
  (when (derived-mode-p 'eca-chat-mode)
    ;; Start with flyspell off so streamed LLM output is not spell-checked.
    (when flyspell-mode
      (flyspell-mode -1)
      (message "Flyspell disabled during LLM streaming"))
    ;; Re-enable flyspell as soon as the user starts typing.
    (add-hook 'pre-command-hook
              (lambda ()
                (when (and (derived-mode-p 'eca-chat-mode)
                           (eq this-command 'self-insert-command)
                           (not flyspell-mode))
                  (flyspell-mode 1)))
              :append :local)))

(add-hook 'eca-chat-mode-hook #'my/eca-chat-setup-flyspell-performance)

@ericdallo (Member)

Yeah, we can mention it in troubleshooting as an optional step, WDYT?

@CsBigDataHub (Author)

Yeah, we can mention it in troubleshooting as an optional step, WDYT?

Yes, that works; let me update the README too.

@CsBigDataHub (Author)

Ready to merge, thanks!

@ericdallo (Member) left a comment

thank you again!

@ericdallo ericdallo merged commit 3dee445 into editor-code-assistant:master Sep 15, 2025
12 checks passed