
Wire text-only prefillPrompt() to C++ runner->prefill() #17724

Open

kirklandsign wants to merge 4 commits into main from android/wire-text-prefill-to-runner

Conversation

@kirklandsign
Contributor

Summary

Previously, prefillPrompt() only appended to a JNI-side buffer
that was never consumed on the text-only code path. This was a
silent no-op: the buffered data sat there forever while generate()
passed its own prompt directly to the runner.

Now for text-only models (MODEL_TYPE_CATEGORY_LLM), prefillPrompt()
calls runner_->prefill() directly, which runs real model computation
and populates the KV cache. This enables chat history reload:

module.prefillPrompt("system: ..."); // fills KV cache
module.prefillPrompt("user: hello"); // fills KV cache
module.generate("user: new q", cb); // generates from pos_

Add prefill() to IRunner with a default NotSupported return so
vendor runners (QNN, MediaTek) are unaffected. Mark the existing
TextLLMRunner::prefill() as override.
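
A minimal sketch of that interface change, assuming a prompt-only signature with the position tracked internally via pos_ (the real irunner.h may differ):

  #include <cstdint>
  #include <string>

  #include <executorch/runtime/core/error.h>
  #include <executorch/runtime/core/result.h>

  namespace executorch::extension::llm {

  class IRunner {
   public:
    virtual ~IRunner() = default;
    // ... existing load()/generate()/stop() interface ...

    // Default body returns NotSupported so vendor runners (QNN,
    // MediaTek) keep compiling without implementing prefill();
    // TextLLMRunner overrides this and advances the KV cache for real.
    virtual runtime::Result<uint64_t> prefill(const std::string& prompt) {
      (void)prompt;
      return runtime::Error::NotSupported;
    }
  };

  } // namespace executorch::extension::llm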

Also clear prefill_inputs_ in resetContext() so stale multimodal
buffer data doesn't persist across context resets.
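
Putting the two JNI-side behaviors together, a simplified stand-in for the jni_layer_llama.cpp logic (reusing the IRunner sketch above; the member and constant names follow this description rather than the actual source):

  #include <string>
  #include <vector>

  #include <executorch/runtime/core/error.h>

  using executorch::extension::llm::IRunner;  // from the sketch above
  using executorch::runtime::Error;

  struct MultimodalInput { std::string text; };  // stub for the real type
  constexpr int MODEL_TYPE_CATEGORY_LLM = 1;     // assumed constant value

  struct LlmJni {
    int model_type_category_ = MODEL_TYPE_CATEGORY_LLM;
    IRunner* runner_ = nullptr;
    std::vector<MultimodalInput> prefill_inputs_;

    Error prefill_prompt(const std::string& prompt) {
      if (model_type_category_ == MODEL_TYPE_CATEGORY_LLM) {
        // Text-only path: run real computation now so the KV cache is
        // populated, instead of appending to a buffer the text path
        // never reads.
        return runner_->prefill(prompt).error();
      }
      // Multimodal path: keep buffering; generate() consumes this later.
      prefill_inputs_.push_back(MultimodalInput{prompt});
      return Error::Ok;
    }

    void reset_context() {
      prefill_inputs_.clear();  // drop stale multimodal data on reset
      // ... existing runner-side context reset ...
    }
  };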

Test plan

CI

Fix prompt double-prefill in the image generate() overload:
the method called prefillPrompt(prompt), which appends to the
JNI buffer, and then passed the same prompt to native generate(),
which appends it again on the multimodal path, corrupting the KV
cache state.
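
The corruption mechanism, in simplified form (all names below are illustrative stand-ins):

  #include <string>
  #include <utility>
  #include <vector>

  struct MultimodalInput { std::string text; };  // stub type

  // Native multimodal generate() already appends the prompt to the
  // buffered inputs itself, so a prior Java-side prefillPrompt(prompt)
  // buffered the same text once more and it entered the KV cache twice.
  std::vector<MultimodalInput> build_generate_inputs(
      std::vector<MultimodalInput>& prefill_inputs,
      const std::string& prompt) {
    std::vector<MultimodalInput> inputs = std::move(prefill_inputs);
    inputs.push_back(MultimodalInput{prompt});  // appended here, natively
    return inputs;
  }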

Also fix all five null-input checks in the JNI append methods,
which incorrectly returned Error::EndOfMethod (meaning "method
execution finished") instead of Error::InvalidArgument.

Replace the deep copy of prefill_inputs_ with std::move to avoid
unnecessary allocations for large image/audio inputs.
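
In sketch form (variable and type names assumed):

  #include <cstdint>
  #include <utility>
  #include <vector>

  struct MultimodalInput { std::vector<uint8_t> payload; };  // stub type

  // Old: `auto inputs = prefill_inputs;` deep-copied every buffered
  // image/audio payload. std::move instead transfers the heap buffers
  // in O(1) and leaves the member empty for the next round.
  std::vector<MultimodalInput> take_inputs(
      std::vector<MultimodalInput>& prefill_inputs) {
    return std::move(prefill_inputs);
  }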
Rename the JNI method appendTextInput to prefillTextInput: the
method now invokes real C++ prefill for text-only models, so
"append" no longer describes what it does.

@pytorch-bot

pytorch-bot bot commented Feb 26, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17724

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures

As of commit bf12659 with merge base a29539d:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Feb 26, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@kirklandsign kirklandsign marked this pull request as ready for review February 26, 2026 00:24
Copilot AI review requested due to automatic review settings February 26, 2026 00:24

Copilot AI left a comment


Pull request overview

This PR enables the prefillPrompt() method to actually populate the KV cache for text-only (LLM) models by calling the C++ runner's prefill() method directly, rather than just buffering the input. This fixes a silent no-op where prefilled data was never consumed.

Changes:

  • Added a virtual prefill() method to the IRunner interface with a default implementation returning Error::NotSupported for backward compatibility with vendor runners
  • Updated TextLLMRunner::prefill() to be marked as override
  • Modified JNI layer to call runner_->prefill() for LLM models instead of just buffering
  • Cleared prefill_inputs_ buffer in resetContext() to prevent stale multimodal data from persisting

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.

File: Description
extension/llm/runner/irunner.h: Added virtual prefill() method with default NotSupported implementation
extension/llm/runner/text_llm_runner.h: Marked existing prefill() as override
extension/android/jni/jni_layer_llama.cpp: Renamed method and added logic to call runner->prefill() for LLM models; cleared prefill_inputs_ in reset
extension/android/executorch_android/src/main/java/org/pytorch/executorch/extension/llm/LlmModule.java: Renamed JNI method from appendTextInput to prefillTextInput


Comment on lines 486 to 493
/**
* Prefill a multimodal Module with the given text input.
*
* @param prompt The text prompt to prefill.
* @return 0, as the updated starting position in KV cache of the input in the LLM is no longer
* exposed to user.
* @throws RuntimeException if the prefill failed
*/

Copilot AI Feb 26, 2026


The javadoc comment states "Prefill a multimodal Module with the given text input" but this method now also works for text-only (LLM) models. The documentation should be updated to reflect that this method works for both multimodal and text-only models, or just say "Prefill the Module with the given text input" without specifically mentioning multimodal.

