
Comparing changes

Changes in microsoft/onnxscript from base v0.2.1 to head v0.2.2:
  • 20 commits
  • 37 files changed
  • 10 contributors

Commits on Feb 14, 2025

  1. Commit 5c31a7e

Commits on Feb 18, 2025

  1. Doc script const 1 (#2004)

    I believe the new code-snippet annotations make more sense.
    
    ---------
    
    Co-authored-by: G. Ramalingam <grama@microsoft.com>
    Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>
    3 people authored Feb 18, 2025
    Commit 6f9533e

Commits on Feb 19, 2025

  1. Commit 7ab8c3c
  2. Bump version to 0.3.0 (#2059)

    Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>
    justinchuby and titaiwangms authored Feb 19, 2025
    Commit ab2dabe

Commits on Feb 20, 2025

  1. Fix misleading annotation in the documentation (#2046)

    The function does not appear to operate in place in all cases.
    
    ---------
    
    Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>
    Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
    3 people authored Feb 20, 2025
    Commit 0361971
  2. Commit b57345f
  3. Enable extraction of rewritten subgraph as model-local function (#2065)

    This enables multi-step rewrite optimizations: e.g., map subgraph G1
    to new-op1, then map subgraph G2 containing new-op1 to new-op2, and
    finally let inlining replace any remaining new-op1 (that was not
    rewritten) with the original G1.
    
    ---------
    
    Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
    gramalingam and justinchuby authored Feb 20, 2025
    Commit c93e25f
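    The extract-then-inline idea described above can be illustrated with a minimal pure-Python sketch over a toy node list (all names here are hypothetical illustrations, not the onnxscript rewriter API): a matched subgraph is replaced by a single call node, and the extracted nodes are stored as a model-local function so that a later inlining pass can restore any calls that were not rewritten further.

    ```python
    def extract_as_local_function(nodes, match, fn_name, local_functions):
        """Replace the matched nodes with one call node; save them as a local function."""
        local_functions[fn_name] = [n for n in nodes if n["name"] in match]
        out, emitted_call = [], False
        for n in nodes:
            if n["name"] in match:
                if not emitted_call:  # emit the call at the first matched node
                    out.append({"name": fn_name + "_call", "op": fn_name})
                    emitted_call = True
            else:
                out.append(n)
        return out

    def inline(nodes, local_functions):
        """Expand any remaining call to a local function back into its original nodes."""
        out = []
        for n in nodes:
            out.extend(local_functions.get(n["op"], [n]))
        return out
    ```

    A rewritten model can thus go through further pattern matching on the new op before inlining restores unmatched occurrences.
    
    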

Commits on Feb 21, 2025

  1. Improve Op(unfold) (#2067)

    Fix #1998 
    
    Basically, this just follows the logic from symbolic_opset9.py:
    https://github.com/pytorch/pytorch/blob/fdb1305ace9cd875611931983eada640ab837c4c/torch/onnx/symbolic_opset9.py#L2878.
    
    The implementation in symbolic_opset12.py claims to fix a static-shapes
    issue, but I see no difference, because dimension, size, and step are
    all ints. I also tested
    https://github.com/pytorch/pytorch/blob/fdb1305ace9cd875611931983eada640ab837c4c/test/onnx/test_pytorch_onnx_onnxruntime.py#L7300
    with opset 9 and it still works, so I don't think we need to capture
    the loop dynamically.
    titaiwangms authored Feb 21, 2025
    Commit ba99991
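    The semantics being fixed can be sketched in plain Python (`unfold_1d` is a hypothetical one-dimensional illustration with static size and step, not the actual torchlib code):

    ```python
    def unfold_1d(x, size, step):
        """Return sliding windows of length `size`, advancing by `step` elements."""
        return [x[i:i + size] for i in range(0, len(x) - size + 1, step)]
    ```

    Because `size` and `step` are plain ints here, no dynamic loop capture is needed, mirroring the observation above.
    
    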
  2. [torchlib] Trace several ops (#2068)

    Trace ops discovered in pytorch/pytorch#147617
    and simplify `repeat` implementation.
    justinchuby authored Feb 21, 2025
    Commit 013d28c
  3. [torchlib] Update operator:pow implementation (#2069)

    Register it to aten_pow instead, because the exponent may not be a tensor.
    
    Fixes pytorch/pytorch#147606
    justinchuby authored Feb 21, 2025
    Commit d4bbee7
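    The distinction driving this change can be sketched as a type dispatch on the exponent (`pow_like` is a hypothetical helper, not the torchlib registration itself):

    ```python
    def pow_like(base, exponent):
        """A scalar exponent is broadcast; a sequence exponent is applied elementwise."""
        if isinstance(exponent, (int, float)):
            return [b ** exponent for b in base]
        return [b ** e for b, e in zip(base, exponent)]
    ```
    
    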
  4. Fix Op(unflatten) (#2070)

    The op was failing and not traced.
    titaiwangms authored Feb 21, 2025
    Commit 17e7bb8
  5. Commit 96447fb

Commits on Feb 24, 2025

  1. Make onnxscript release 1ES compliant (#2071)

    Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
    Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
    3 people authored Feb 24, 2025
    Commit 18adbf7

Commits on Feb 25, 2025

  1. Commit 1695ff3

Commits on Feb 27, 2025

  1. Add a couple of variants of patterns in ORT fusions (#2077)

    Add a couple of variants of patterns in ORT fusions (motivated by Phi4)
    gramalingam authored Feb 27, 2025
    Commit 1a8dbd7
  2. [IR] Fix an error when checking for float8_e4m3fnuz type in ir.Tensor (#2078)

    The float8_e4m3fnuz type was mistaken for float8_e4m3b11fnuz, which is
    a different type: https://github.com/jax-ml/ml_dtypes#float8_e4m3b11fnuz
    justinchuby authored Feb 27, 2025
    Commit 89dd454
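    Since the two ml_dtypes names differ only by the `b11` infix, only an exact-match check is safe (a minimal sketch; the real fix lives in ir.Tensor's dtype handling):

    ```python
    # The two ml_dtypes names are easy to confuse; only an exact match is safe.
    FLOAT8_E4M3FNUZ = "float8_e4m3fnuz"
    FLOAT8_E4M3B11FNUZ = "float8_e4m3b11fnuz"

    def is_float8_e4m3fnuz(dtype_name: str) -> bool:
        """True only for the exact float8_e4m3fnuz name, never its b11 cousin."""
        return dtype_name == FLOAT8_E4M3FNUZ
    ```
    
    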

Commits on Mar 1, 2025

  1. Squeeze Reshape Identity optimization (#2083)

    A recent fix to the translation of PyTorch symints introduced a
    Squeeze => Reshape pattern that can be optimized away. This PR adds a
    rewrite rule to perform that optimization.
    
    TODO (in a separate PR): for now, this optimization needs to be
    explicitly invoked. This should be done by default. (But there are
    several other such optimizations that need to be collected and included
    in the default-rule list.)
    gramalingam authored Mar 1, 2025
    Commit ed82c3b
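    The optimization can be sketched on a toy node list: a Squeeze whose output feeds a Reshape back to the original shape is an identity overall, so both nodes can be dropped and consumers rewired (a hypothetical representation, not the onnxscript rewriter API):

    ```python
    def drop_squeeze_reshape_identity(nodes, shapes):
        """Remove Squeeze->Reshape pairs whose combined effect is the identity."""
        out, i = [], 0
        while i < len(nodes):
            n = nodes[i]
            if (
                n["op"] == "Squeeze"
                and i + 1 < len(nodes)
                and nodes[i + 1]["op"] == "Reshape"
                and nodes[i + 1]["input"] == n["output"]
                and shapes[nodes[i + 1]["output"]] == shapes[n["input"]]
            ):
                # Identity overall: rewire consumers of the Reshape output to
                # the Squeeze input and skip both nodes.
                for m in nodes[i + 2:]:
                    if m.get("input") == nodes[i + 1]["output"]:
                        m["input"] = n["input"]
                i += 2
                continue
            out.append(n)
            i += 1
        return out
    ```
    
    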

Commits on Mar 3, 2025

  1. add cudnn_enable flag to aten_layer_norm (#2085)

    Fixes #2084
    
    This is required to land pytorch/pytorch#148140
    in torch.export().
    
    
    cc @angelayi @justinchuby
    
    Co-authored-by: Shangdi Yu <shanhdiy@meta.com>
    yushangdi and Shangdi Yu authored Mar 3, 2025
    Commit 8ad2403
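    The signature change can be sketched in plain Python: `cudnn_enable` matches the ATen layer_norm signature but is accepted and ignored by a decomposition that has no cuDNN path (a hypothetical 1D sketch, not the torchlib source):

    ```python
    def layer_norm_1d(x, weight=None, bias=None, eps=1e-5, cudnn_enable=True):
        """Normalize a 1D list; `cudnn_enable` exists only for ATen signature parity."""
        del cudnn_enable  # accepted but unused: no cuDNN path in a pure decomposition
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / n
        y = [(v - mean) / (var + eps) ** 0.5 for v in x]
        if weight is not None:
            y = [v * w for v, w in zip(y, weight)]
        if bias is not None:
            y = [v + b for v, b in zip(y, bias)]
        return y
    ```
    
    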
  2. [DRAFT] Extensions to transformer fusions (#2082)

    * Extends the cos-sin-cache fusion to support 1D position-ids (without
    a batch dimension)
    * Makes MatchingTracer a parameter of the rewriter to give users better
    control over how stats are reported (for successful or failing matches)
    * Improves the tracer output
    gramalingam authored Mar 3, 2025
    Commit debc34d
  3. Commit 6edcfd5