<img src="https://github.com/Jutho/TensorOperations.jl/blob/master/docs/src/assets/logo.svg" width="150">

# TensorOperations.jl

Fast tensor operations using a convenient Einstein index notation.

[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://jutho.github.io/TensorOperations.jl/stable
[docs-dev-img]: https://img.shields.io/badge/docs-dev-blue.svg
[docs-dev-url]: https://jutho.github.io/TensorOperations.jl/latest
[ci-img]: https://github.com/Jutho/TensorOperations.jl/workflows/CI/badge.svg
[ci-url]: https://github.com/Jutho/TensorOperations.jl/actions?query=workflow%3ACI
[ci-julia-nightly-img]: https://github.com/Jutho/TensorOperations.jl/workflows/CI%20(Julia%20nightly)/badge.svg
[ci-julia-nightly-url]: https://github.com/Jutho/TensorOperations.jl/actions?query=workflow%3A%22CI+%28Julia+nightly%29%22
[codecov-img]: https://codecov.io/gh/Jutho/TensorOperations.jl/branch/master/graph/badge.svg
[codecov-url]: https://codecov.io/gh/Jutho/TensorOperations.jl
[doi-img]: https://zenodo.org/badge/DOI/10.5281/zenodo.3245497.svg
[doi-url]: https://doi.org/10.5281/zenodo.3245497
[downloads-img]: https://shields.io/endpoint?url=https://pkgs.genieframework.com/api/v1/badge/TensorOperations
[downloads-url]: https://pkgs.genieframework.com?packages=TensorOperations

| **Documentation** | **Build Status** |
|:-----------------:|:----------------:|
| [![][docs-stable-img]][docs-stable-url] [![][docs-dev-img]][docs-dev-url] | [![CI][ci-img]][ci-url] [![CI (Julia nightly)][ci-julia-nightly-img]][ci-julia-nightly-url] [![][codecov-img]][codecov-url] |

| **Digital Object Identifier** | **Downloads** |
|:-----------------------------:|:-------------:|
| [![DOI][doi-img]][doi-url] | [![TensorOperations Downloads][downloads-img]][downloads-url] |
## What's new in v4

- CUDA support has been moved to a package extension (available on Julia >= 1.9), so that CUDA.jl is no longer an unconditional dependency.
- The cache for temporaries has been removed; similar functionality is now provided through explicit allocation and freeing calls generated within the macro.
- The interface for custom types has been changed and thoroughly documented, making it easier to know what to implement. As a consequence, tensors with more general element types are now also supported.
- There is a new interface for backends, which allows dynamically switching between different implementations of the TensorOperations interface.
- The `@tensor` macro now accepts keyword arguments that enable a variety of options to help with debugging, contraction cost and type wrapping; see the sketch after this list.
- Some support for automatic differentiation has been added by defining reverse-mode ChainRules rules.
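
As a hedged illustration of the keyword-argument syntax (a minimal sketch, not an authoritative reference), the example below assumes the `opt` and `contractcheck` keywords; consult the documentation for the exact set of supported keywords and their meaning.

```julia
using TensorOperations

X = randn(5, 5, 5)
Y = randn(5, 5, 5)
Z = randn(5, 5)

# Assumed keywords: `opt=true` asks the macro to optimize the contraction order,
# and `contractcheck=true` inserts runtime checks of the contracted dimensions
# with clearer error messages.
@tensor opt=true contractcheck=true W[a, e] := X[a, b, c] * Y[c, b, d] * Z[d, e]
```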

> **WARNING:** TensorOperations 4.0 contains breaking changes and is, in general, not compatible with code written for previous versions.

### Code example

TensorOperations.jl is mostly used through the `@tensor` macro, which allows one to express a given operation in terms of
[index notation](https://en.wikipedia.org/wiki/Abstract_index_notation), a.k.a.
[Einstein notation](https://en.wikipedia.org/wiki/Einstein_notation)
(using Einstein's summation convention).

```julia
using TensorOperations
α = randn()
A = randn(5, 5, 5, 5, 5, 5)
B = randn(5, 5, 5)
C = randn(5, 5, 5)
D = zeros(5, 5, 5)
@tensor begin
    D[a, b, c] = A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
    E[a, b, c] := A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
end
```

In the first line inside the `@tensor` block, the result of the operation is stored in the preallocated array `D`, whereas the second line uses a different assignment operator `:=` in order to define and allocate a new array `E` of the correct size. The contents of `D` and `E` will be equal.
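
For readers less familiar with the notation, the purely illustrative sketch below spells out what the first assignment computes using plain Julia loops, assuming the arrays `A`, `B`, `C`, `D`, and `α` defined in the example above; `@tensor` produces the same values, but through much more efficient kernels.

```julia
# Explicit loops equivalent to
#   D[a, b, c] = A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
# Repeated indices (e, f, g) are summed over; free indices (a, b, c) label the output.
for a in 1:5, b in 1:5, c in 1:5
    acc = 0.0
    for e in 1:5, f in 1:5, g in 1:5
        acc += A[a, e, f, c, f, g] * B[g, b, e]
    end
    D[a, b, c] = acc + α * C[c, a, b]
end
```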

For more information, please see the documentation.