Fusion rule for handling transformers exported models #2632
Conversation
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@           Coverage Diff            @@
##             main    #2632    +/-   ##
========================================
  Coverage   70.38%   70.39%
========================================
  Files         222      223     +1
  Lines       26288    26309    +21
  Branches     2629     2629
========================================
+ Hits        18503    18519    +16
- Misses       6865     6870     +5
  Partials      920      920

☔ View full report in Codecov by Sentry.
Output present key/value from the Attention op because past key/value is provided. Previously, the created Attention op would consume past key/value but not produce present key/value, which is not correct for ORT.

Replaces #2632

Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
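For illustration, here is a minimal sketch of the corrected wiring, assuming the ONNX opset-23 `Attention` operator with its positional optional inputs/outputs; the value names are hypothetical, not the PR's actual code:

```python
from onnx import helper

# Sketch only: an Attention node that consumes the kv cache must also
# produce it, as ORT requires. An empty string omits the optional mask.
attention = helper.make_node(
    "Attention",
    # Q, K, V, attn_mask (omitted), past_key, past_value
    inputs=["query", "key", "value", "", "past_key", "past_value"],
    # Y, present_key, present_value: the present outputs are produced
    # whenever the past inputs are wired in.
    outputs=["attn_output", "present_key", "present_value"],
)
```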
This is still useful when `enable_gqa=True`.
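For context, a minimal sketch of the grouped-query case being referred to, assuming PyTorch 2.5+ where `scaled_dot_product_attention` accepts `enable_gqa`:

```python
import torch
import torch.nn.functional as F

# Grouped-query attention: 8 query heads share 2 key/value heads.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = F.scaled_dot_product_attention(q, k, v, enable_gqa=True)  # PyTorch >= 2.5
```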
When torch.onnx exports a model from transformers with SDPA, it generates Concat nodes that concatenate past_key/past_value with the new key/value to produce the graph outputs for the kv cache. This pattern can be fused into the Attention node, which has present_key and present_value outputs. The fusion is necessary for ONNX Runtime, which requires those outputs to be produced by the Attention node whenever the past_key and past_value inputs are provided; the exported pattern is sketched below.
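A hand-written sketch of that pattern, assuming the usual [batch, num_heads, seq, head_dim] layout so the cache grows along axis 2; all value names are illustrative, not the exporter's actual names:

```python
from onnx import helper

# Exported pattern: the new key/value is appended to the cache, and the
# concatenated tensors serve as graph outputs as well as SDPA inputs.
concat_key = helper.make_node(
    "Concat", inputs=["past_key", "key"], outputs=["present_key"], axis=2
)
concat_value = helper.make_node(
    "Concat", inputs=["past_value", "value"], outputs=["present_value"], axis=2
)
# ...SDPA subgraph consuming present_key / present_value...
#
# After fusion, the SDPA subgraph and both Concat nodes collapse into a
# single Attention node whose present_key/present_value outputs feed the
# kv-cache graph outputs directly.
```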