Releases: dgasmith/opt_einsum
v3.3.0
Adds an `object` backend for optimized contractions on arbitrary Python objects.
New Features
- (#145) Adds an `object`-based backend so that `contract(backend='object')` can be used on arbitrary objects such as SymPy symbols.
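The idea behind an object backend can be sketched in plain Python: a contraction only requires `+` and `*` on the elements, so any type supporting those operators works. A minimal illustration of that principle (not opt_einsum's implementation; `object_matmul` is a hypothetical helper):

```python
from fractions import Fraction  # any Python objects supporting + and * work

def object_matmul(a, b):
    """Contract two 'matrices' (lists of lists) of arbitrary Python
    objects using only their + and * operators -- the same idea an
    object-dtype backend relies on."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            acc = a[i][0] * b[0][j]
            for k in range(1, inner):
                acc = acc + a[i][k] * b[k][j]
            row.append(acc)
        out.append(row)
    return out

# Works on exact rationals, which a float dtype would not preserve.
a = [[Fraction(1, 3), Fraction(1, 2)]]
b = [[Fraction(3)], [Fraction(2)]]
print(object_matmul(a, b))  # [[Fraction(2, 1)]]
```

The same loop structure would accept SymPy symbols, intervals, or any other type that defines `+` and `*`.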
Enhancements
- (#140) Better error messages when the requested `contract` backend cannot be found.
- (#141) Adds a check with RandomOptimizers to ensure the objects are not accidentally reused for different contractions.
- (#149) Limits the `remaining` category of the `contract_path` output to at most 20 tensors, preventing issues with the quadratically scaling memory requirements and number of printed lines for large contractions.
v3.2.1
v3.2.0
v3.1.0
v3.0.1
v3.0.0
This release moves opt_einsum to be backend agnostic while adding support for additional backends such as Jax and Autograd. Support for Python 2.7 has been dropped, Python 3.5 becomes the new minimum version, and a Python deprecation policy equivalent to NumPy's has been adopted.
New Features
- (#78) A new random optimizer has been implemented which uses Boltzmann weighting to explore alternative near-minimum paths using greedy-like schemes. This provides a fairly large path performance enhancement with only a linear path-time overhead.
- (#78) A new PathOptimizer class has been implemented to provide a framework for building new optimizers. For example, custom cost functions can now be provided in the greedy formalism, enabling custom optimizers without a large amount of additional code.
- (#81) The `backend="auto"` keyword has been implemented for `contract`, allowing automatic detection of the correct backend to use based on the tensors provided in the contraction.
- (#88) Autograd and Jax support have been implemented.
- (#96) Deprecates Python 2 functionality and adds devops improvements.
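One way `backend="auto"` detection can work is by inspecting the module a tensor's class comes from. A minimal sketch of that idea, assuming nothing about opt_einsum's internals (`infer_backend` here is a hypothetical helper, and the fake class stands in for a real `torch.Tensor`):

```python
def infer_backend(*tensors):
    """Sketch of backend auto-detection: take the top-level module of
    the first non-builtin tensor type, falling back to 'numpy'."""
    for t in tensors:
        module = type(t).__module__.split(".")[0]
        if module != "builtins":
            return module
    return "numpy"

class FakeTorchTensor:
    """Stand-in for torch.Tensor so the sketch runs without PyTorch."""
FakeTorchTensor.__module__ = "torch"

print(infer_backend(FakeTorchTensor()))  # torch
print(infer_backend([1, 2, 3]))          # numpy
```

Dispatching on the type's module rather than on explicit isinstance checks means new backends need no registration step for detection.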
v2.3.2
v2.3.1
v2.3.0
This release primarily focuses on expanding the suite of available path technologies to provide better optimization characteristics for 4-20 tensors while decreasing the time to find paths for 50-200+ tensors. See Path Overview for more information.
New Features:
- (#60) A new greedy implementation has been added which is up to two orders of magnitude faster for 200 tensors.
- (#73) Adds a new branch path that uses greedy ideas to prune the optimal exploration space, providing a better path than greedy at sub-optimal cost.
- (#73) Adds a new `auto` keyword to the `opt_einsum.contract` `path` option. This keyword automatically chooses the best path technology that takes under 1 ms to execute.
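The `auto` keyword amounts to picking a path finder by problem size: exhaustive search for a handful of tensors, cheaper heuristics as the network grows. A sketch of that selection logic, with illustrative thresholds that are assumptions rather than opt_einsum's exact cutoffs:

```python
def choose_path_algorithm(num_tensors):
    """Sketch of the idea behind path='auto': pick the most thorough
    path finder that is still cheap at this input size. The cutoffs
    below are illustrative, not opt_einsum's actual values."""
    if num_tensors <= 4:
        return "optimal"   # exhaustive search is affordable here
    if num_tensors <= 6:
        return "branch"    # greedy-pruned branching
    return "greedy"        # linear-time heuristic for large networks

print(choose_path_algorithm(3))    # optimal
print(choose_path_algorithm(100))  # greedy
```

The benefit is that small contractions keep getting provably optimal paths while 200-tensor networks never trigger the exponential search.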
Enhancements:
- (#61) The `opt_einsum.contract` `path` keyword has been changed to `optimize` to more closely match NumPy; `path` will be deprecated in the future.
- (#61) `opt_einsum.contract_path` now returns an `opt_einsum.contract.PathInfo` object that can be queried for the scaling, flops, and intermediates of the path. The print representation of this object is identical to before.
- (#61) The default `memory_limit` is now unlimited, based on community feedback.
- (#66) The Torch backend will now use tensordot when using a version of Torch which includes this functionality.
- (#68) Indices can now be any hashable object when provided in the "Interleaved Input" syntax.
- (#74) Allows the default transpose operation to be overridden to take advantage of more advanced tensor transpose libraries.
- (#73) The optimal path is now significantly faster.
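Supporting arbitrary hashable objects as indices (#68) boils down to normalizing the labels into single letters before building a subscripts string. A minimal sketch of that normalization, assuming nothing about opt_einsum's internal parser (`normalize_indices` is a hypothetical helper):

```python
from string import ascii_letters

def normalize_indices(*operand_indices):
    """Sketch of interleaved-input handling: map arbitrary hashable
    index labels (strings, ints, tuples, ...) to single letters so a
    plain einsum equation can be built. Illustrative only."""
    mapping = {}
    subscripts = []
    for indices in operand_indices:
        for label in indices:
            if label not in mapping:
                mapping[label] = ascii_letters[len(mapping)]
        subscripts.append("".join(mapping[label] for label in indices))
    return ",".join(subscripts), mapping

eq, mapping = normalize_indices(("row", "col"), ("col", "depth"))
print(eq)  # ab,bc
```

Because only hashability is required for the dictionary lookup, labels like `("batch", 0)` work as naturally as single characters.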
Bug fixes:
- (#72) Fixes the "Interleaved Input" syntax and adds documentation.
v2.2.0
New features:
- (#48) Intermediates can now be shared between contractions, see here for more details.
- (#53) Intermediate caching is thread safe.
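Thread-safe intermediate sharing can be pictured as a locked cache keyed by the contraction performed: identical sub-contractions are computed once and reused. A minimal sketch under those assumptions (this is not opt_einsum's actual sharing API; `SharedIntermediates` is a hypothetical class):

```python
import threading

class SharedIntermediates:
    """Sketch of thread-safe intermediate sharing: cache each pairwise
    contraction result under a key describing the operation, so a
    repeated sub-contraction is computed only once."""
    def __init__(self):
        self._cache = {}
        self._lock = threading.Lock()

    def contract(self, key, compute):
        # The lock makes check-then-insert atomic across threads.
        with self._lock:
            if key not in self._cache:
                self._cache[key] = compute()
            return self._cache[key]

calls = []
cache = SharedIntermediates()
cache.contract(("ij,jk->ik", "A", "B"), lambda: calls.append(1) or "AB")
cache.contract(("ij,jk->ik", "A", "B"), lambda: calls.append(1) or "AB")
print(len(calls))  # 1  -- the second call reused the cached result
```

Keying on the equation plus operand identities is what lets several contractions in one sharing scope benefit from each other's intermediates.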
Enhancements:
- (#48) Expressions are now mapped to a non-unicode index set so that unicode input is supported for all backends.
- (#58) Adds tensorflow and theano support with shared intermediates.
Bug fixes:
- (#41) PyTorch indices are mapped back to a small `a-z` subset valid for PyTorch's einsum implementation.
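The remapping in #41 can be sketched as a simple relabeling pass over the equation: every index symbol is replaced with the next unused letter from `a-z`, while separators are left alone. A minimal illustration (`remap_to_az` is a hypothetical helper, not opt_einsum's internal function):

```python
def remap_to_az(subscripts):
    """Sketch of the PyTorch fix: rewrite an einsum equation so every
    index symbol falls in the a-z range that older torch.einsum
    accepts, preserving the contraction structure."""
    mapping = {}
    out = []
    for ch in subscripts:
        if ch in ",->":        # equation punctuation passes through
            out.append(ch)
            continue
        if ch not in mapping:  # first sight: assign the next letter
            mapping[ch] = chr(ord("a") + len(mapping))
        out.append(mapping[ch])
    return "".join(out)

print(remap_to_az("ZY,YX->ZX"))  # ab,bc->ac
```

Since the map is applied consistently, repeated symbols still contract against each other and the result is einsum-equivalent to the original equation.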