This repository was archived by the owner on Nov 7, 2024. It is now read-only.

adding random uniform initialization #412

Merged 4 commits on Dec 12, 2019

Conversation

summer-bebop (Contributor) commented Dec 12, 2019

This PR solves #398. It adds:

  • a random_uniform method to all backends. This provides a way to initialize a tensor network with values sampled from a uniform distribution (a sketch of the intended usage is shown below).
  • tests for random_uniform for all backends
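For illustration, a minimal numpy-style sketch of the idea; the signature shown here (shape, boundaries, dtype, seed) is an assumption for this example, not taken from the actual diff:

```python
import numpy as np

# Hypothetical sketch of a backend-level random_uniform; entries are drawn
# from U(boundaries[0], boundaries[1]). The real method in the diff may differ.
def random_uniform(shape, boundaries=(0.0, 1.0), dtype=np.float64, seed=None):
  rng = np.random.RandomState(seed)  # seed=None gives a non-deterministic stream
  return rng.uniform(boundaries[0], boundaries[1], size=shape).astype(dtype)

# Example: initialize a node's tensor with values uniform in [-1, 1).
tensor = random_uniform((2, 3, 4), boundaries=(-1.0, 1.0), seed=10)
```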

Caution: I added a test validating the behaviour of the random uniform initialization for all backends except tensorflow. The seed system in tensorflow (graph seed vs. op seed) messed with my brain, and I could not consistently get the same values when initializing a tensor directly as a tf object and via the backend method (see the sketch below).
As this test is a bit of an overkill, I'm still pushing as is. If you guys have an idea...
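For context, here is a minimal sketch of the TensorFlow seeding behaviour mentioned above (standard tf.random behaviour in TF 2.x, not this PR's test code):

```python
import tensorflow as tf

# With only the graph-level seed set, each op derives its own op-level seed,
# so two consecutive calls still return different values.
tf.random.set_seed(42)
a = tf.random.uniform((2, 2))
b = tf.random.uniform((2, 2))  # differs from `a`

# Setting both the graph-level and the op-level seed makes the draw
# reproducible once the graph-level seed is reset.
tf.random.set_seed(42)
c = tf.random.uniform((2, 2), seed=7)
tf.random.set_seed(42)
d = tf.random.uniform((2, 2), seed=7)  # same values as `c`
```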

summer-bebop (Contributor, Author)

@mganahl Ready for review, I believe.

chaserileyroberts (Contributor)

That's ok; I believe the TF random module wasn't designed to be deterministic, so there's not much you can do.
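For what it's worth, one deterministic alternative (just a thought, not something this PR proposes) would be TF's stateless RNG ops, which are fully determined by an explicit seed pair:

```python
import tensorflow as tf

# Stateless RNG ops depend only on the explicit seed pair, not on any
# global graph/op seed state, so they are deterministic across calls.
x = tf.random.stateless_uniform((2, 2), seed=[1, 2])
y = tf.random.stateless_uniform((2, 2), seed=[1, 2])  # identical to `x`
```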

chaserileyroberts (Contributor) left a review

Looks great!

Very minor notes. Approved pending nitpicks.

summer-bebop (Contributor, Author)

Nitpicks done, should be good to go.

chaserileyroberts merged commit a8ab55a into google:master on Dec 12, 2019
mganahl added a commit that referenced this pull request Jan 29, 2020
* started implementing block-sparse tensors

* removed files

* working on AbelianIndex

* working on block sparsity

* added reshape

and lots of other stuff

* added Index, an index type for symmetric tensors

* added small tutorial

* added docstring

* fixed bug in retrieve_diagonal_blocks

* TODO added

* improved initialization a bit

* more efficient initialization

* just formatting

* added random

* added fuse_degeneracies

* fixed bug in reshape

* docstring, typing

* removed TODO

* removed confusing code line

* bug removed

* comment

* added __mul__ to Index

* added sparse_shape

and updated reshape to accept both int and Index lists

* more in tutorial

* comment

* added new test function

* testing function hacking

* docstring

* small speed up

* Remove gui directory (migrated to another repo) (#399)

* a slightly more elegant code

* use one more np function

* removed some crazy slow code

* faster code

* Update README.md (#404)

* add return_data

* doc

* bug fix

* a little faster

* substantial speedup

* renaming

* removed todo

* some comments

* comments

* fixed some bug in reshape

* comments

* default value changed

* fixed bug, old version is now faster again

* cleaned up reshape

* started adding tests

* Quantum abstractions (#379)

* Initial attempt at quantum classes.

* .tensor() and QuantumIdentity.

* check_hilberts -> check_spaces

* Add some blurb.

* Rename to Qu.

* Finish Qu.

* Fix matmul in case operators share network components.

* Add some scalar methods.

* Improve a docstring.

* Redo scalars and make identity work using copy tensors.
A QuOperator can now have a scalar component (disconnected scalar subnetwork).
Also introduce `ignore_edges` for power users.

* Remove obsolete parts of QuScalar.

* Add contraction stuff.

* Add from_tensor() constructors.

* Docstring.

* Doc/comments fixes.

* Add typing.

* Remove some lint.

* Fix a bug.

* Add very simple constructor tests.

* Default edge ordering for eval(). Better docstrings.

* A bunch more tests.

* tensor_prod -> tensor_product, outer_product

* .is_scalar -> is_scalar() etc.

* Improve docstrings on axis ordering.

* Improve and fix scalar multiplication.

* Kill outer_product().

* CopyNode needs a backend and dtype.

* Fix __mul__ and add __rmul__.

* More docstrings and add axis arguments to vector from_tensor()s.

* Add backends to tests.

* Delint.

* CopyNode should not inflate its tensor just to tell you the dtype.

* Correct two docstrings.

* Improve some tests.
Particularly, test identity some more, since we now try to be efficient
with CopyNode identity tensors.

* Treat CopyNode identity tensors efficiently.
Also rename shape -> space.

* Add support for copying CopyNodes.

* Propagate output edges properly.

* Test that CopyNodes are propagated.
Also do a CopyNode sandwich test.

* Improve typing.
Also more shape -> space.

* adding random uniform initialization (#412)

* adding random uniform initialization

* fixes dumb pylint

* couple of nit picks

* replace kron with broadcasting

* column-major -> row-major

* documentation

* added function to compute unique charges and charge degeneracies

Function avoids explicit full fusion of all legs, and instead only keeps track of the unique charges and their degeneracies upon fusion

* improved block finding, fixed bug in reshape

re-introduced BlockSparseTensor.dense_shape
new method for fusing charges and degeneracies (faster for very rectangular matrices)

* fuse_charge_pair added

fuse_charges added

* use is_leave

* new tests

* removed TODO, BlockSparseTensor.shape returns ref instead of copy

* added tests

* added tests

* column-major -> row-major

forgot to fix fusing order of charges and degeneracies

* fix broken tests

* test added

* mostly docstring

* docstring

* added map_to_integer

* test for map_to_integer

* added functions to find sparse positions when fusing two charges

* renaming of routines

* added unfuse

* test unfuse

* fixed bug in the new routine for finding diagonal blocks

* test added

* docstring

* added tests

* renaming

* tests

* transpose added, map_to_integer removed

(used np.unravel_index and np.ravel_multi_index instead)

* find_dense_positions made faster

* working on transpose

* transpose modified

* Index.name -> property

* added charge types

* adding tests

* fixing bugs

* implementing more symmetries

* typo + remove cython lookup

* split charge.py from index.py

* tests for charge.py

* test added

* added matmul

* added test for matmul

* tests + allow np.int8

* typo

* undo typo

* test

* safety commit, starting to add multiple charges

* Charge -> ChargeCollection

* removed offsets from U1Charge (unnecessary), Charge -> ChargeCollection

* new tests

* tests for new index

* new Index class

* new block tensor

* shorter code

* add __eq__, remove nonzero, add unique

* working on charges.py

* fix small bug in  BaseCharge.__init__

* fix tests after bugfix

* tests for equals() and __eq__

* added equals()

for comparing with unshifted target charges
__eq__ now only compares shifted target charges

* added typing

* ChargeCollection.__repr__ modified

* *** empty log message ***

* this commit is not working

* fix bug in __len__

fix various bugs in __eq__ and __getitem__

* working on implementation of multiple charges

* bugfix in ChargeCollection.__getitem__

* adding tests

* sleep commit

* added iterators

* ChargeCollection.__init__:

charges are now always stacked; self.charges contains views into the stacked charges.
__init__ can be called with optional shifts and stacked_charges to initialize
the BaseCharges object with them.

* lunch commit

* back from lunch

* tests added

* ported find_sparse_positions and find_diagonal_sparse_blocks to new charge interface

* broken commit

* fixed bug in find_dense_positions

* fix bug in find_dense_positions

* docstring

* fix bug in Index initialization

* fix bug in Index initialization

* typo

* remove __rmul__ calls of ChargeCollection and BaseCharge

* removed __rmul__

* removed some bugs in transpose

* broken commit

* broken commit

* remove csr matrix, use search sorted

* remove unfuse, use divmod

* broken commit, working on tensordot

* tensordot implemented, not tested

* removed commented code

* fix tests

* fix tests

* added test for BlockSparseTensor back

* renaming files

* fix tutorial, fix import

* faster find_dense_positions

* compute reduced svd in `backends.numpy.decompositions.svd_decompostion` (#420)

* compute reduced svd when calling np.linalg.svd from numpy backend

* test SVD when max_singular_values>bond_dimension (numpy backend)

* added intersect to BaseCharge

* broken commit (Apple sucks big time)

* broken commit

* broken commit

* broken commit

* Fixes for contract_between(). (#421)

* Fixes for contract_between().
* output_edge_ordering was not respected in trace or outer_product cases.
* axis names are now applied *after* edge reordering
* added some docstring to clarify ordering
* avoid a warning when contracting all edges of one or more of the input tensors.

* Split out ordering tests.
Also improves the basic contract_between() test so that it outputs
a non-symmetric matrix (now rectangular).

* broken commit

* broken

* added `return_indices` to intersect

* faster transpose + tensordot implementation

* Update requirements_travis.txt (#426)

* rewrote find_dense_positions to take multiple charges

avoids a for loop in _find_diagonal_dense_blocks and speeds up the code

* find_sparse_positions update to be a tiny bit faster

* Remove duplicate Dockerfile from root directory (#431)

* BaseNode / Edge class name type check protection add (#424)

* BaseNode / Edge class text input protection added (#423)

BaseNode class - Add protection to name, axis_names
*Protected in 3 places
*Initialization stage - __init__
*Setting via function - set_name / add_axis_names
*Property - Add @property to name to protect against direct assignment like node.name = 123

Edge class - Add protection to name
*Protected in 3 places
*Initialization stage - __init__
*Setting via function - set_name
*Property

* BaseNode / Edge class text input protection code revise (#423)

*if type(name) != str
*if not isinstance(name, str)
*change using type to isinstance to follow pylint

Co-authored-by: Chase Roberts <keeper6928@gmail.com>

* fix bug in _get_diagonal_dense_blocks

* fix bug

* fixed bug in transpose

* Test network operations (#441)

* added test for mps switch backend

* added switch backend method to MPS

* added test for network operations switch backend

* make sure switch_backend not only fixes tensor but also node property

* added switch_backend to init

* added missing tests for network operations

* some linting issues

* Rename backend shape methods (#355) (#436)

concat function
* rename from concat to shape_concat

shape function
* rename from shape to shape_tensor

prod function
* rename from prod to shape_prod
* function name is duplicated in shell_backend.py
* rename existing shape_prod function to shape_product
* need to change the name later

Co-authored-by: Chase Roberts <keeper6928@gmail.com>

* fixed final_order passing for tensordot

* Added SAT Tutorial (#438)

* Add files via upload

Added SAT Tutorials

* Update SATTutorial.ipynb

* Update SATTutorial.ipynb

* Update SATTutorial.ipynb

* Update SATTutorial.ipynb

* License changed

* Created using Colaboratory

Co-authored-by: Chase Roberts <chaseriley@google.com>

* More Test! (#444)

* added test for mps switch backend

* added switch backend method to MPS

* added test for network operations switch backend

* make sure switch_backend not only fixes tensor but also node property

* added switch_backend to init

* added a lot of tests for network components

* a lot more tests

* some more tests

* some linter things

* added test base class instead of hack

* disabled some pytype warnings

* disabled some pylint warnings

* Return empty dict for empty sequence input to MPS left_envs and right_envs (#440)

* Return empty dict for empty input to MPS envs

* Add tests for empty sequence input to MPS envs

* Use explicit sequences for MPS envs tests

Co-authored-by: Chase Roberts <chaseriley@google.com>

* Issue #339. with tn.DefaultBackend(backend): support (#434)

* A context manager support implementation for setting up a backend for Nodes. (Issue #339)

* Stack-based backend context manager implementation

* code style fix

* Added get_default_backend() function which returns the backend at the top of the stack.
  The stack returns config.default_backend if there is nothing in the stack.

A little clean-up in test file.

* - Moved `set_default_backend` to the `backend_contextmanager`
- `default_backend` now is a property of `_DefaultBackendStack`
- removed `config` imports as an unused file.
- fixed some tests in `backend_contextmanager_test.py`

* little code-style fix

Co-authored-by: Chase Roberts <chaseriley@google.com>

* Algebraic operation add( + ), sub( - ), mul( * ), div( / ) for BaseNode class (#439)

* BaseNode / Edge class text input protection added (#423)

BaseNode class - Add protection to name, axis_names
*Protected in 3 places
*Initialization stage - __init__
*Setting via function - set_name / add_axis_names
*Property - Add @property to name to protect against direct assignment like node.name = 123

Edge class - Add protection to name
*Protected in 3 places
*Initialization stage - __init__
*Setting via function - set_name
*Property

* BaseNode / Edge class text input protection code revise (#423)

*if type(name) != str
*if not isinstance(name, str)
*change using type to isinstance to follow pylint

* Algebraic operation add( + ), sub( - ), mul( * ), div( / ) for BaseNode class (#292)

        *[BaseNode class] - add / sub / mul / truediv NotImplemented function Added
        *[Node class] - add / sub / mul / truediv function added
        *[CopyNode class] - overload the BaseNode mul / truediv as NotImplemented

        *[basebackend] - add / sub / mul / div NotImplemented function added
        *[numpy / tensorflow / pytorch] - add / sub / mul / div function added
        *[shell] - add / sub / div NotImplemented function added

        *Testing files
        [network_components_free_test]
        * Exception - Tensorflow is not tested when the operand is scalar
        * 1. Check add / sub / mul / div with int / float / Node
        * 2. Check implicit conversion
        * 3. Check the TypeError when the type is not int / float / Node
        * 4. Check that the operand backends are the same
        * 5. Check that BaseNode has the attribute _tensor

        [backend_test - numpy / tensorflow / pytorch]
        *check add / sub / mul / divide work for int / float / Node

* Add test cases for Tensorflow Algebraic operation and fix add, sub name (#292)

[Change name]
*add -> addition
*subtract -> subtraction

[Add test case for Tensorflow]
* Specify the datatype to resolve conflicts between operations with different dtypes

[Test case for pytorch / jax]
* pytorch - [int / int -> int] torch gives a different answer when dividing two integers
* jax - unlike the other backends, the jax backend returns a 64-bit dtype even when operating on 32-bit inputs,
        so an exceptional dtype test case was added for the jax backend

* Add test cases for Tensorflow Algebraic operation and fix add, sub name (#292)

[Change name]
*add -> addition
*subtract -> subtraction

[Add test case for Tensorflow]
* Specify the datatype to resolve conflicts between operations with different dtypes

[Test case for pytorch / jax]
* pytorch - [int / int -> int] torch gives a different answer when dividing two integers
* jax - unlike the other backends, the jax backend returns a 64-bit dtype even when operating on 32-bit inputs,
        so an exceptional dtype test case was added for the jax backend

* Add __add__, __sub__, __mul__, __truediv__ to TestNode Class

Co-authored-by: Chase Roberts <chaseriley@google.com>

* improved performance, but only u1 currently supported in this commit

* Fix unsafe None checks (#449)

* None checks added for constructors

* Changes in None check and resolve comments

* Backend test (#448)

* added test for mps switch backend

* added switch backend method to MPS

* added test for network operations switch backend

* make sure switch_backend not only fixes tensor but also node property

* added switch_backend to init

* missing test for backend contextmanager

* notimplemented tests for base backend

* added subtraction test notimplemented

* added jax backend index_update test

* first missing tests for numpy

* actually caught an error in the numpy_backend eigs method!

* more eigs tests

* didn't catch an error, unexpected convention

* more tests for eigsh_lanczos

* added missing pytorch backend tests

* added missing tf backend tests

* pytype

* suppress pytype

Co-authored-by: Chase Roberts <chaseriley@google.com>

* Version bump for release

* merging Glen's and my code

* fix bug in unique

* adding/removing tests

* add benchmark file

* adding files

* deleted some files

* fix bug

* renaming and shortening

Co-authored-by: Cutter Coryell <14116109+coryell@users.noreply.github.com>
Co-authored-by: Chase Roberts <chaseriley@google.com>
Co-authored-by: Ashley Milsted <ashmilsted@gmail.com>
Co-authored-by: Ivan PANICO <iv.panico@gmail.com>
Co-authored-by: Ori Alberton <github@oalberton.com>
Co-authored-by: Kshithij Iyer <kshithij.ki@gmail.com>
Co-authored-by: Hyunbyung, Park <hyunbyung87@gmail.com>
Co-authored-by: MichaelMarien <marien.mich@gmail.com>
Co-authored-by: kosehy <kosehy@gmail.com>
Co-authored-by: Olga Okrut <46659064+olgOk@users.noreply.github.com>
Co-authored-by: Aidan Dang <dang@aidan.gg>
Co-authored-by: Tigran Katolikyan <43802339+katolikyan@users.noreply.github.com>
Co-authored-by: Jayanth Chandra <jayanthchandra14@gmail.com>
mganahl added a commit that referenced this pull request Jan 29, 2020
* cleaning up code

* cleaning up

* removed _check_flows
mganahl added a commit that referenced this pull request Jan 30, 2020
* remove a print

* fix bug

* nothing

* fix bug in flatten_meta_data
mganahl added a commit that referenced this pull request Jan 31, 2020
* started implementing block-sparse tensors

* removed files

* working on AbelianIndex

* working in block sparisty

* added reshape

and lots of other stuff

* added Index, an index type for symmetric tensors

* added small tutorial

* added docstring

* fixed bug in retrieve_diagonal_blocks

* TODO added

* improved initialization a bit

* more efficient initialization

* just formatting

* added random

* added fuse_degeneracies

* fixed bug in reshape

* dosctring, typing

* removed TODO

* removed confusing code line

* bug removed

* comment

* added __mul__ to Index

* added sparse_shape

and updated reshape to accept both int and Index lists

* more in tutorial

* comment

* added new test function

* testing function hacking

* docstring

* small speed up

* Remove gui directory (migrated to another repo) (#399)

* a slightly more elegant code

* use one more np function

* removed some crazy slow code

* faster code

* Update README.md (#404)

* add return_data

* doc

* bug fix

* a little faster

* substantial speedup

* renaming

* removed todo

* some comments

* comments

* fixed some bug in reshape

* comments

* default value changed

* fixed bug, old version is now faster again

* cleaned up reshape

* started adding tests

* Quantum abstractions (#379)

* Initial attempt at quantum classes.

* .tensor() and QuantumIdentity.

* check_hilberts -> check_spaces

* Add some blurb.

* Rename to Qu.

* Finish Qu.

* Fix matmul in case operators share network components.

* Add some scalar methods.

* Improve a docstring.

* Redo scalars and make identity work using copy tensors.
A QuOperator can now have a scalar component (disconnected scalar subnetwork).
Also introduce `ignore_edges` for power users.

* Remove obsolete parts of QuScalar.

* Add contraction stuff.

* Add from_tensor() constructors.

* Doctstring.

* Doc/comments fixes.

* Add typing.

* Remove some lint.

* Fix a bug.

* Add very simple constructor tests.

* Default edge ordering for eval(). Better docstrings.

* A bunch more tests.

* tensor_prod -> tensor_product, outer_product

* .is_scalar -> is_scalar() etc.

* Improve docstrings on axis ordering.

* Improve and fix scalar multiplication.

* Kill outer_product().

* CopyNode needs a backend and dtype.

* Fix __mul__ and add __rmul__.

* More docstrings and add axis arguments to vector from_tensor()s.

* Add backends to tests.

* Delint.

* CopyNode should not inflate its tensor just to tell you the dtype.

* Correct two docstrings.

* Improve some tests.
Particulary, test identity some more, since we now try to be efficient
with CopyNode identity tensors.

* Treat CopyNode identity tensors efficiently.
Also rename shape -> space.

* Add support for copying CopyNodes.

* Propagate output edges properly.

* Test that CopyNodes are propagated.
Also do a CopyNode sandwich test.

* Improve typing.
Also more shape -> space.

* adding random uniform initialization (#412)

* adding random uniform initialization

* fixes dumb pylint

* couple of nit picks

* replace kron with broadcasting

* column-major -> row-major

* documentation

* added function to compute unique charges and charge degeneracies

Function avoids explicit full fusion of all legs, and instead only keeps track of the unique charges and their degeneracies upon fusion

* improved block finding, fixed bug in reshape

re-intorduced BlockSparseTensor.dense_shape
new method for fusing charges and degeneracies (faster for very rectangular matrices)

* fuse_charge_pair added

fuse_charges added

* use is_leave

* new tests

* removed TODO, BlockSparseTensor.shape returns ref instead of copy

* added tests

* added tests

* column-major -> row-major

forgot to fix fusing order of charges and degeneracies

* fix broken tests

* test added

* mostly docstring

* docstring

* added map_to_integer

* test for map_to_integer

* added functions to find sparse positions when fusing two charges

* renaming of routines

* added unfuse

* test unfuse

* fixed bug in the new routine for finding diagonal blocks

* test added

* docstring

* added tests

* renaming

* tests

* transpose added, map_to_integer removed

(used np.unravel_index and np.ravel_multi_index instead)

* find_dense_positions made faster

* working on transpose

* transpose modified

* Index.name -> property

* added charge types

* adding tests

* fixing bugs

* implementing more symmetries

* typo + remove cython lookup

* split charge.py from index.py

* tests for charge.py

* test added

* added matmul

* added test for matmul

* tests + allow np.int8

* typo

* undo typo

* test\

* savety commit, starting to add multiple charges

* Charge -> ChargeCollection

* removed offsets from U1Charge (unnecessary), Charge -> ChargeCollection

* new tests

* tests for new index

* new Index class

* new block tensor

* shorter code

* add __eq__, remove nonzero, add unique

* working on charges.py

* fix small bug in  BaseCharge.__init__

* fix tests after bugfix

* tests for equals() and __eq__

* added equals()

for comparing with unshifted target charges
__eq__ now only compares shifted target charges

* added typing

* ChargeCollection.__repr__ modified

* *** empty log message ***

* this commit is not working

* fix bug in __len__

fix various bugs in __eq__ and __getitem__

* working in implemetation of multiple charges

* bugfix in ChargeCollection.__getitem__

* adding tests

* sleep commit

* added iterators

* ChargeCollection.__init__:

charges are now always stacked, self.charges contain views to the stacked
charges
__init__ can be called with optional shifts and stacked_charges to initialize
the BaseCharges object with it

* lunch commit

* back from lunch

* tests added

* ported find_sparse_positions and find_diagonal_sparse_blocks to new charge interface

* broken commit

* fixed bug in find_dense_positions

* fix bug in find_dense_positions

* docstring

* fix bug in Index initialization

* fix bug in Index initialization

* typo

* remove __rmul__ calls of ChargeCollection and BaseCharge

* removed __rmul__

* removed some bugs inb transpose

* broken commit

* broken commit

* remove csr matrix, use search sorted

* remove unfuse, use divmod

* broken commit, working on tensordot

* tensordot implemented, not tested

* removed commented codex

* fix tests

* fix tests

* added test for BlockSparseTensor back

* renaming files

* fix tutorial, fix import

* faster find_dense_positions

* compute reduced svd in `backends.numpy.decompositions.svd_decompostion` (#420)

* compute reduced svd when calling np.linalg.svd from numpy backend

* test SVD when max_singular_values>bond_dimension (numpy backend)

* added intersect to BaseCharge

* broken commmit (Apple sucks big time)

* broken commit

* broken commit

* broken commit

* Fixes for contract_between(). (#421)

* Fixes for contract_between().
* output_edge_ordering was not respected in trace or outer_product cases.
* axis names are now applied *after* edge reordering
* added some docstring to clarify ordering
* avoid a warning when contracting all edges of one or more of the input tensors.

* Split out ordering tests.
Also improves the basic contract_between() test so that it outputs
a non-symmetric matrix (now rectangular).

* broken commit

* broken

* added `return_indices` to intersect

* faster transpose + tensordot implementation

* Update requirements_travis.txt (#426)

* rewrote find_dense_positions to take multiple charges

this avoids a for loop in _find_diagonal_dense_blocks and speeds up the code

* find_sparse_positions update to be a tiny bit faster

* Remove duplicate Dockerfile from root directory (#431)

* BaseNode / Edge class name type check protection add (#424)

* BaseNode / Edge class text input protection added (#423)

BaseNode class - Add protection to name, axis_names
*Protected in 3 places
*Initialization stage - __init__
*Setter functions - set_name / add_axis_names
*Property - add @property to name to protect direct assignment such as node.name = 123

Edge class - Add protection to name
*Protected in 3 places
*Initialization stage - __init__
*Setter function - set_name
*Property
(a short illustrative sketch follows after this entry)

* BaseNode / Edge class text input protection code revise (#423)

*if type(name) != str
*if not isinstance(name, str)
*changed type() to isinstance() to follow pylint

Co-authored-by: Chase Roberts <keeper6928@gmail.com>
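
A short, hypothetical sketch of the protection pattern described in this entry (guarding the name in __init__, in set_name, and via a property); it is illustrative only and not the library's actual code.

```python
class Node:
    def __init__(self, name):
        self.set_name(name)          # 1. protected at construction

    def set_name(self, name):        # 2. protected in the setter method
        if not isinstance(name, str):
            raise TypeError("Node name should be str type")
        self._name = name

    @property
    def name(self):                  # 3. protected via the property
        return self._name

    @name.setter
    def name(self, name):
        self.set_name(name)

n = Node("node1")
n.name = "renamed"      # ok
try:
    n.name = 123        # rejected: direct assignment goes through the setter
except TypeError as e:
    print(e)
```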

* fix bug in _get_diagonal_dense_blocks

* fix bug

* fixed bug in transpose

* Test network operations (#441)

* added test for mps switch backend

* added switch backend method to MPS

* added test for network operations switch backend

* make sure switch_backend not only fixes tensor but also node property

* added switch_backend to init

* added missing tests for network operations

* some linting issues

* Rename backend shape methods (#355) (#436)

concat function
* rename from concat to shape_concat

shape function
* rename from shape to shape_tensor

prod function
* rename from prod to shape_prod
* function name is duplicated in shell_backend.py
* rename existing shape_prod function to shape_product
* need to change the name later

Co-authored-by: Chase Roberts <keeper6928@gmail.com>

* fixed final_order passing for tensordot

* Added SAT Tutorial (#438)

* Add files via upload

Added SAT Tutorials

* Update SATTutorial.ipynb

* Update SATTutorial.ipynb

* Update SATTutorial.ipynb

* Update SATTutorial.ipynb

* License changed

* Created using Colaboratory

Co-authored-by: Chase Roberts <chaseriley@google.com>

* More Test! (#444)

* added test for mps switch backend

* added switch backend method to MPS

* added test for network operations switch backend

* make sure switch_backend not only fixes tensor but also node property

* added switch_backend to init

* added a lot of tests for network components

* a lot more tests

* some more tests

* some linter things

* added test base class instead of hack

* disabled some pytype warnings

* disabled some pylint warnings

* Return empty dict for empty sequence input to MPS left_envs and right_envs (#440)

* Return empty dict for empty input to MPS envs

* Add tests for empty sequence input to MPS envs

* Use explicit sequences for MPS envs tests

Co-authored-by: Chase Roberts <chaseriley@google.com>

* Issue #339. with tn.DefaultBackend(backend): support (#434)

* A context manager support implementation for setting up a backend for Nodes. (Issue #339)

* Stack-based backend context manager implementation

* code style fix

* Added get_default_backend() function, which returns the backend at the top of the stack.
  The stack falls back to config.default_backend if it is empty. (See the sketch after this entry.)

A little clean-up in test file.

* - Moved `set_default_backend` to the `backend_contextmanager`
- `default_backend` now is a property of `_DefaultBackendStack`
- removed `config` imports as an unused file.
- fixed some tests in `backend_contextmanager_test.py`

* little code-style fix

Co-authored-by: Chase Roberts <chaseriley@google.com>
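
The stack-based context manager described in this entry can be sketched roughly as follows; the module-level names here are assumptions for the example rather than TensorNetwork's exact internals.

```python
# Illustrative sketch of a stack-based default-backend context manager.
_backend_stack = []
_fallback_backend = "numpy"  # stands in for config.default_backend

def get_default_backend():
    # Top of the stack wins; fall back to the configured default otherwise.
    return _backend_stack[-1] if _backend_stack else _fallback_backend

class DefaultBackend:
    def __init__(self, backend: str):
        self.backend = backend

    def __enter__(self):
        _backend_stack.append(self.backend)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        _backend_stack.pop()

with DefaultBackend("tensorflow"):
    assert get_default_backend() == "tensorflow"
    with DefaultBackend("jax"):        # nesting works because it is a stack
        assert get_default_backend() == "jax"
    assert get_default_backend() == "tensorflow"
assert get_default_backend() == "numpy"
```

Using a stack rather than a single global is what lets nested `with DefaultBackend(...)` blocks restore the previous default correctly on exit.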

* Algebraic operation add( + ), sub( - ), mul( * ), div( / ) for BaseNode class (#439)

* BaseNode / Edge class text input protection added (#423)

BaseNode class - Add protection to name, axis_names
*Protected in 3 places
*Initialization stage - __init__
*Setter functions - set_name / add_axis_names
*Property - add @property to name to protect direct assignment such as node.name = 123

Edge class - Add protection to name
*Protected in 3 places
*Initialization stage - __init__
*Setter function - set_name
*Property

* BaseNode / Edge class text input protection code revise (#423)

*if type(name) != str
*if not isinstance(name, str)
*changed type() to isinstance() to follow pylint

* Algebraic operation add( + ), sub( - ), mul( * ), div( / ) for BaseNode class (#292)

        *[BaseNode class] - add / sub / mul / truediv NotImplemented function Added
        *[Node class] - add / sub / mul / truediv function added
        *[CopyNode class] - overload the BaseNode mul / truediv as NotImplemented

        *[basebackend] - add / sub / mul / div NotImplemented function added
        *[numpy / tensorflow / pytorch] - add / sub / mul / div function added
        *[shell] - add / sub / div NotImplemented function added

        *Testing files
        [network_components_free_test]
        * Exception - Tensorflow is not tested when the operand is scalar
        * 1. Check add / sub / mul / div with int / float / Node
        * 2. Check implicit conversion
        * 3. Check the TypeError when the type is not int / float / Node
        * 4. Check that the operand backends are the same
        * 5. Check that BaseNode has the attribute _tensor

        [backend_test - numpy / tensorflow / pytorch]
        *check add / sub / mul / divide work for int / float / Node
        (a short illustrative sketch of the operator delegation follows after this entry)

* Add test cases for Tensorflow Algebraic operation and fix add, sub name (#292)

[Change name]
*add -> addition
*subtract -> substraction

[Add test case for Tensorflow]
* Specify the datatype to resolve the conflict between different dtype operation

[Test case for pytorch / jax]
* pytorch - [int / int -> int] torch gives a different answer when dividing two integers
* jax - Unlike other backends, the jax backend returns a 64-bit dtype even when operating on 32-bit inputs,
        so an exceptional dtype test case was added for the jax backend

* Add test cases for Tensorflow Algebraic operation and fix add, sub name (#292)

[Change name]
*add -> addition
*subtract -> substraction

[Add test case for Tensorflow]
* Specify the datatype to resolve the conflict between different dtype operation

[Test case for pytorch / jax]
* pytorch - [int / int -> int] torch gives a different answer when dividing two integers
* jax - Unlike other backends, the jax backend returns a 64-bit dtype even when operating on 32-bit inputs,
        so an exceptional dtype test case was added for the jax backend

* Add __add__, __sub__, __mul__, __truediv__ to TestNode Class

Co-authored-by: Chase Roberts <chaseriley@google.com>
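
A compact, hypothetical sketch of the operator delegation described in this entry: the Node dunder methods accept ints, floats, or other Nodes and forward the work to backend methods. Class and method names here are illustrative, not the library's exact API.

```python
import numpy as np

class NumpyBackend:
    def addition(self, a, b):    return np.add(a, b)
    def subtraction(self, a, b): return np.subtract(a, b)
    def multiply(self, a, b):    return np.multiply(a, b)
    def divide(self, a, b):      return np.divide(a, b)

class Node:
    def __init__(self, tensor, backend):
        self.tensor = np.asarray(tensor)
        self.backend = backend

    def _other_tensor(self, other):
        # Accept scalars (int/float) or other Nodes; reject anything else.
        if isinstance(other, Node):
            return other.tensor
        if isinstance(other, (int, float)):
            return other
        raise TypeError(f"unsupported operand type: {type(other)}")

    def __add__(self, other):
        return Node(self.backend.addition(self.tensor, self._other_tensor(other)), self.backend)

    def __sub__(self, other):
        return Node(self.backend.subtraction(self.tensor, self._other_tensor(other)), self.backend)

    def __mul__(self, other):
        return Node(self.backend.multiply(self.tensor, self._other_tensor(other)), self.backend)

    def __truediv__(self, other):
        return Node(self.backend.divide(self.tensor, self._other_tensor(other)), self.backend)

backend = NumpyBackend()
a = Node([[1., 2.], [3., 4.]], backend)
b = Node([[10., 20.], [30., 40.]], backend)
print((a + b).tensor)   # elementwise sum
print((a * 2).tensor)   # scalar broadcast
```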

* improved performance, but only u1 currently supported in this commit

* Fix unsafe None checks (#449)

* None checks added for constructors

* Changes in None check and resolve comments

* Backend test (#448)

* added test for mps switch backend

* added switch backend method to MPS

* added test for network operations switch backend

* make sure switch_backend not only fixes tensor but also node property

* added switch_backend to init

* missing test for backend contextmanager

* notimplemented tests for base backend

* added subtraction test notimplemented

* added jax backend index_update test

* first missing tests for numpy

* actually caught an error in numpy_backend eigs method!

* more eigs tests

* didn't catch an error, unexpected convention

* more tests for eigsh_lanczos

* added missing pytorch backend tests

* added missing tf backend tests

* pytype

* suppress pytype

Co-authored-by: Chase Roberts <chaseriley@google.com>

* Version bump for release

* merging Glen's and my code

* fix bug in unique

* adding/removing tests

* add benchmark file

* adding files

* deleted some files

* fix bug

* renaming and shortening

* cleaning up code

* cleaning up

* removed _check_flows

* remove a print

* fix bug

* nothing

* fix bug in flatten_meta_data

* added a bunch of tests

* added inner and outer product

* removed binary tree, switched to a list of charges

* added copy()

Co-authored-by: Cutter Coryell <14116109+coryell@users.noreply.github.com>
Co-authored-by: Chase Roberts <chaseriley@google.com>
Co-authored-by: Ashley Milsted <ashmilsted@gmail.com>
Co-authored-by: Ivan PANICO <iv.panico@gmail.com>
Co-authored-by: Ori Alberton <github@oalberton.com>
Co-authored-by: Kshithij Iyer <kshithij.ki@gmail.com>
Co-authored-by: Hyunbyung, Park <hyunbyung87@gmail.com>
Co-authored-by: MichaelMarien <marien.mich@gmail.com>
Co-authored-by: kosehy <kosehy@gmail.com>
Co-authored-by: Olga Okrut <46659064+olgOk@users.noreply.github.com>
Co-authored-by: Aidan Dang <dang@aidan.gg>
Co-authored-by: Tigran Katolikyan <43802339+katolikyan@users.noreply.github.com>
Co-authored-by: Jayanth Chandra <jayanthchandra14@gmail.com>
mganahl added a commit that referenced this pull request Feb 1, 2020
* bugfix

* added __add__ __sub__ __mul__ __rmul__

* added __eq__

* change docstring

* added svd

* better check

mganahl added a commit that referenced this pull request Feb 1, 2020
* add proper compute_uv flag, remove artifact return values

mganahl added a commit that referenced this pull request Feb 2, 2020
* added qr, eigh, eig

* added tests

* fix tests

* fix index bugs
