Comparing changes: microsoft/onnxscript
base: v0.2.1 → compare: v0.2.2
- 20 commits
- 37 files changed
- 10 contributors
Commits on Feb 14, 2025
- 5c31a7e (commit message not captured)
Commits on Feb 18, 2025
- 6f9533e — The new code snippet annotations make more sense. (Co-authored-by: G. Ramalingam <grama@microsoft.com>, Ti-Tai Wang <titaiwang@microsoft.com>)
Commits on Feb 19, 2025
- 7ab8c3c (commit message not captured)
- ab2dabe (commit message not captured; Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>)
Commits on Feb 20, 2025
- 0361971 — Fix misleading annotation in the documentation (#2046): the function does not work in place in all cases. (Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>, Justin Chu <justinchuby@users.noreply.github.com>)
- b57345f — [torchlib] Implement clamp* scalar overloads (#2066): fixes issues reported in #2050.
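The scalar overloads above matter because `clamp`'s `min`/`max` bounds may arrive as Python scalars rather than tensors. A minimal plain-Python sketch of clamp semantics (a hypothetical illustration, not the actual torchlib implementation):

```python
def clamp(values, min_val=None, max_val=None):
    """Clamp each element of `values` into [min_val, max_val].

    Mirrors torch.clamp ordering: the min bound is applied first,
    then the max bound, so max wins if min_val > max_val.
    """
    result = []
    for v in values:
        if min_val is not None and v < min_val:
            v = min_val
        if max_val is not None and v > max_val:
            v = max_val
        result.append(v)
    return result

print(clamp([-2.0, 0.5, 3.0], min_val=0.0, max_val=1.0))  # → [0.0, 0.5, 1.0]
```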
- c93e25f — Enable extraction of rewritten subgraph as model-local function (#2065): enables multi-step rewrite optimizations, e.g., map subgraph G1 to new-op1, then map a subgraph G2 containing new-op1 to new-op2, and finally let inlining replace any remaining new-op1 (that was not rewritten) with the original G1. (Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>)
Commits on Feb 21, 2025
- ba99991 — Fix #1998: follow the logic from symbolic_opset9.py (https://github.com/pytorch/pytorch/blob/fdb1305ace9cd875611931983eada640ab837c4c/torch/onnx/symbolic_opset9.py#L2878). The implementation in symbolic_opset12.py claims to fix a static-shapes issue, but there is no observable difference because dimension, size, and step are all ints. The test at https://github.com/pytorch/pytorch/blob/fdb1305ace9cd875611931983eada640ab837c4c/test/onnx/test_pytorch_onnx_onnxruntime.py#L7300 also passes with opset 9, so capturing the loop dynamically does not seem necessary.
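The commit's argument is that because dimension, size, and step are all plain ints, the window start offsets can be computed ahead of time rather than inside a traced loop. A plain-Python sketch of that kind of size/step sliding-window extraction (a hypothetical illustration of the semantics, not the exporter code):

```python
def unfold_1d(values, size, step):
    """Return all windows of length `size` taken every `step` elements.

    Because size and step are ints, the start offsets are a static
    range — no dynamic loop over tensor contents is needed.
    """
    windows = []
    for start in range(0, len(values) - size + 1, step):
        windows.append(values[start:start + size])
    return windows

print(unfold_1d([1, 2, 3, 4, 5], size=2, step=1))
# → [[1, 2], [2, 3], [3, 4], [4, 5]]
```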
- 013d28c — [torchlib] Trace several ops (#2068): trace ops discovered in pytorch/pytorch#147617 and simplify the `repeat` implementation.
- d4bbee7 — [torchlib] Update operator:pow implementation (#2069): register it to aten_pow instead, because the exponent may not be a tensor. Fixes pytorch/pytorch#147606.
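The point of registering to `aten_pow` is that the exponent can arrive as a Python scalar, so the implementation cannot assume two tensor inputs. A hypothetical plain-Python sketch of such dual dispatch (toy lists stand in for tensors; this is not the torchlib code):

```python
def aten_pow(base, exponent):
    """Elementwise power; `exponent` may be a list (tensor-like) or a scalar."""
    if isinstance(exponent, (int, float)):  # scalar-exponent overload
        return [b ** exponent for b in base]
    # tensor-like exponent: elementwise pairing
    return [b ** e for b, e in zip(base, exponent)]

print(aten_pow([2.0, 3.0], 2))        # scalar exponent → [4.0, 9.0]
print(aten_pow([2.0, 3.0], [3, 2]))   # tensor-like exponent → [8.0, 9.0]
```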
- 17e7bb8 (commit message not captured)
- 96447fb — chore(deps): bump onnx-weekly from 1.18.0.dev20250120 to 1.18.0.dev20250221 in /requirements/ci (#2072)
Commits on Feb 24, 2025
- 18adbf7 — Make onnxscript release 1ES compliant (#2071). (Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>, Copilot <175728472+Copilot@users.noreply.github.com>)
Commits on Feb 25, 2025
- 1695ff3 (commit message not captured)
Commits on Feb 27, 2025
- 1a8dbd7 — Add a couple of variants of patterns in ORT fusions (#2077), motivated by Phi4.
- 89dd454 — [IR] Fix an error when checking for float8_e4m3fnuz type in ir.Tensor (#2078): the float8_e4m3fnuz type was mistaken for float8_e4m3b11fnuz, which is a different type (https://github.com/jax-ml/ml_dtypes#float8_e4m3b11fnuz).
Commits on Mar 1, 2025
- ed82c3b — Squeeze Reshape Identity optimization (#2083): a recent fix to the translation of PyTorch symints introduced a Squeeze => Reshape pattern that can be optimized away; this PR adds a rewrite rule for it. TODO (in a separate PR): for now the optimization must be invoked explicitly; it should run by default, along with several other such optimizations that still need to be collected into the default rule list.
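The idea behind such a rewrite rule can be sketched on a toy linear IR of `(op_name, attrs)` tuples (this is a hedged illustration, not onnxscript's real pattern-rewriter API): when a Squeeze feeds directly into a Reshape with an explicit target shape, the Squeeze is redundant, because the Reshape alone already determines the final shape.

```python
def remove_squeeze_before_reshape(nodes):
    """Drop Squeeze nodes immediately followed by a Reshape (toy linear IR)."""
    result = []
    for i, node in enumerate(nodes):
        nxt = nodes[i + 1] if i + 1 < len(nodes) else None
        if node[0] == "Squeeze" and nxt is not None and nxt[0] == "Reshape":
            continue  # redundant: the following Reshape sets the shape anyway
        result.append(node)
    return result

graph = [("MatMul", {}), ("Squeeze", {"axes": [0]}), ("Reshape", {"shape": [4, 4]})]
print(remove_squeeze_before_reshape(graph))
# → [('MatMul', {}), ('Reshape', {'shape': [4, 4]})]
```

The real rule must also check that the Squeeze output has no other consumers; the toy linear IR sidesteps that by assuming a straight-line graph.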
Commits on Mar 3, 2025
- 8ad2403 — add cudnn_enable flag to aten_layer_norm (#2085): fixes #2084; required to land pytorch/pytorch#148140 in torch.export(). cc @angelayi @justinchuby (Co-authored-by: Shangdi Yu <shanhdiy@meta.com>)
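For context, a minimal plain-Python sketch of what layer normalization computes over a 1-D input (the `cudnn_enable` flag selects a backend in PyTorch and should not change this math; this is an illustration, not the torchlib code):

```python
import math

def layer_norm(x, weight=None, bias=None, eps=1e-5):
    """Normalize x to zero mean and unit variance, then optionally scale/shift."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)  # biased variance
    y = [(v - mean) / math.sqrt(var + eps) for v in x]
    if weight is not None:
        y = [v * w for v, w in zip(y, weight)]
    if bias is not None:
        y = [v + b for v, b in zip(y, bias)]
    return y

out = layer_norm([1.0, 2.0, 3.0])
print([round(v, 4) for v in out])  # roughly [-1.2247, 0.0, 1.2247]
```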
- debc34d — [DRAFT] Extensions to transformer fusions (#2082): extends the cos-sin-cache fusion to support 1D position ids (without a batch dimension); makes MatchingTracer a parameter of the rewriter to give users better control over how stats are reported (for successful or failing matches); improves the tracer output.
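A hedged toy sketch of the kind of cos-sin cache that such a fusion targets (plain Python with nested lists, not the actual rewriter or any real model code): supporting 1D position ids means the lookup must handle index lists without a leading batch dimension.

```python
import math

def build_cos_sin_cache(max_pos, dim, base=10000.0):
    """Precompute cos/sin tables over half the head dimension (rotary-style)."""
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
    cos = [[math.cos(p * f) for f in inv_freq] for p in range(max_pos)]
    sin = [[math.sin(p * f) for f in inv_freq] for p in range(max_pos)]
    return cos, sin

def lookup_cos(cache, position_ids):
    """Gather cos rows for batched [B, S] ids or unbatched 1-D [S] ids."""
    cos, _ = cache
    if position_ids and isinstance(position_ids[0], list):  # batched [B, S]
        return [[cos[p] for p in row] for row in position_ids]
    return [cos[p] for p in position_ids]  # 1-D [S], no batch dimension

cache = build_cos_sin_cache(max_pos=16, dim=4)
print(len(lookup_cos(cache, [0, 1, 2])))  # → 3
```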
- 6edcfd5 (commit message not captured)
To view the full comparison locally:
git diff v0.2.1...v0.2.2