Add absorb for putting (part of) the contents of one tensor in another #283
Conversation
@Yue-Zhengyuan, @pbrehmer, @leburgel this should help with the PEPSKit expansion/shrinking procedures, but I also think this can be used to replace https://github.com/QuantumKitHub/MPSKit.jl/blob/a34817dafae7c18100e46729d179833e5c4d46a2/src/algorithms/changebonds/changebonds.jl#L23 with a more efficient version that does not require permutations.
I do take some offense at the name, as the concept of an embedding in mathematics has injectivity strongly tied to it. So embedding small into large is fine, but extracting part of a larger tensor into a smaller one should not really be called "embed". Do you need both directions?
Codecov Report: ✅ All modified and coverable lines are covered by tests. (... and 1 file with indirect coverage changes)
I'm happy to change the name, I just don't have that many great alternatives in mind. I think usually the only direction we really care about is embedding something small in something large, since the other direction really is just an arbitrary truncation of the "keep the first x rows and y columns" kind, but it can definitely be useful to just "make it fit" in some contexts. For example, whenever a two-site update appears in the context of MPS and the truncation scheme is set to some tolerance, it is in principle possible for the "expansion" to actually become both a truncation and an expansion, for example when weights are shifted between the different sectors. Then it is convenient to not have to handle this edge case separately, since this really achieves the desired result. (And the resulting tensor is only used as a reasonable initial guess, not necessarily as a fully rigorous state.)
For the names, the opposite of embedding seems to be called either projection or retraction (whose opposite is called inclusion). Retraction/inclusion seem closer to what we are doing here, because their restriction to the "smaller" space is just the identity.
I agree that retraction or projection is probably the best terminology for the operation where you lose information. For the injective operation (from smaller to larger bond dimension) I think embedding is fine. The inclusion map in mathematics is really very specific to the case where you want to make explicit that elements from some subset of a set also live in the surrounding set. I am not sure one really wants to think of smaller vector spaces as natural subsets of the larger ones, especially when you have a tensor product of several such spaces.
Since, as Lukas pointed out, the action is not always purely either expanding or truncating but often both at the same time, neither of the proposed names quite fits. Regardless, I think it's nice to also have an out-of-place version right away, in case we don't want to destroy the target.
If we were willing to abandon all rigor, ...
Would it be okay if I keep `embed`? Alternatively, I'm really perfectly happy with ...
The only potential issue I have with ... The current ...
After a few more clicks in the Apple Thesaurus, I came across "absorb", which does not seem to have a strict mathematical meaning and is thus free from injectivity or surjectivity assumptions. Just throwing this out here as one of the many suggestions.
src/tensors/linalg.jl (outdated)

````julia
tdst[sub_axes...] .= tsrc[sub_axes...]
```
"""
function embed!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap)
````
Is zeroing out existing data in `tdst` part of the contract, or not?
It's not; it is actually useful to be able to seed the destination with small random noise.
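A minimal sketch of that pattern, assuming TensorKit's space syntax and the `embed!` signature from this PR (the spaces and noise amplitude are illustrative, not taken from the PR):

```julia
using TensorKit

# Source tensor on smaller spaces; destination on larger spaces.
tsrc = randn(ComplexF64, ℂ^2 ⊗ ℂ^2 ← ℂ^2)

# Seed the destination with small random noise instead of zeros...
tdst = 1e-3 * randn(ComplexF64, ℂ^3 ⊗ ℂ^3 ← ℂ^3)

# ...then paste the source data into the "top-left" corner of each block.
# Entries outside that corner keep their noise: embed! does not zero them.
embed!(tdst, tsrc)  # later renamed absorb! in this PR
```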
In these dire times, we need democracy to thrive, so time for a vote: #285 (If I don't like the outcome, I might switch over to authoritarian mode 😄).
Since the GitHub poll doesn't allow casting multiple votes, I'll do this the old-fashioned way:
Luckily, the number of options does not exceed (and exactly matches) the number of standard emoji reactions. So react to this message with your preferred choices (multiple possible), and my apologies for the double work.
I refactored based on the current poll status, ending with `absorb`.
```julia
    throw(SpaceMismatch("Supremum of space and dual space does not exist"))
end
return typeof(V₁)((Visdual ? dual(c) : c) => max(dim(V₁, c), dim(V₂, c))
                  for c in union(sectors(V₁), sectors(V₂)); dual=Visdual)
```
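For context, a quick sketch of what this computes, with made-up `U1Space` arguments (sectors and dimensions are illustrative only):

```julia
using TensorKit

V₁ = U1Space(0 => 2, 1 => 1)
V₂ = U1Space(0 => 1, 1 => 3, -1 => 2)

# For each sector appearing in either space, supremum keeps the larger of
# the two dimensions, matching max(dim(V₁, c), dim(V₂, c)) above.
supremum(V₁, V₂)  # U1Space(0 => 2, 1 => 3, -1 => 2)
```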
Out of curiosity: are both implementations functionally equivalent, and is it just nicer to have the error checks at the top? Or is there something technically superior about the new implementation?
They would be, but there is a weird error that got introduced at some point, which I actually traced to a formatting change. There is a very subtle difference between:
```julia
julia> function my_function(args; kwargs...)
           @info "my args are" args
           @info "my kwargs are" kwargs
       end
my_function (generic function with 1 method)

julia> my_function(x for x in 1:4, y = 3)
┌ Info: my args are
└   args = Base.Generator{Base.Iterators.ProductIterator{Tuple{UnitRange{Int64}, Int64}}, var"#20#21"}(var"#20#21"(), Base.Iterators.ProductIterator{Tuple{UnitRange{Int64}, Int64}}((1:4, 3)))
┌ Info: my kwargs are
└   kwargs = Base.Pairs{Symbol, Union{}, Tuple{}, @NamedTuple{}}()

julia> my_function(x for x in 1:4; y = 3)
┌ Info: my args are
└   args = Base.Generator{UnitRange{Int64}, typeof(identity)}(identity, 1:4)
┌ Info: my kwargs are
│   kwargs =
│    pairs(::NamedTuple) with 1 entry:
└     :y => 3

julia> my_function(x for x in 1:4, y in 3)
┌ Info: my args are
└   args = Base.Generator{Base.Iterators.ProductIterator{Tuple{UnitRange{Int64}, Int64}}, var"#22#23"}(var"#22#23"(), Base.Iterators.ProductIterator{Tuple{UnitRange{Int64}, Int64}}((1:4, 3)))
┌ Info: my kwargs are
└   kwargs = Base.Pairs{Symbol, Union{}, Tuple{}, @NamedTuple{}}()

julia> my_function(3, y = 3)
┌ Info: my args are
└   args = 3
┌ Info: my kwargs are
│   kwargs =
│    pairs(::NamedTuple) with 1 entry:
└     :y => 3
```

Unfortunately, both iterators and keyword arguments can be specified using `in` and `=`, so because we previously weren't being explicit about the `;` that separates arguments from keyword arguments, the `dual` keyword argument was silently turned into part of a product iterator, losing the `dual` flag altogether.
To avoid this confusion, I felt that instead of changing the `,` into a `;`, it might be more obvious to just have the error upfront.
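A condensed, self-contained sketch of that failure mode; `make_space` is a hypothetical stand-in for the real constructor, not the actual TensorKit code:

```julia
# Hypothetical stand-in for a GradedSpace-style constructor: takes an
# iterable of sector => dim pairs plus a `dual` keyword argument.
make_space(pairs; dual=false) = (collect(pairs), dual)

secs = [:a, :b]
dims = Dict(:a => 2, :b => 3)

# Buggy form: the `,` makes `dual = true` a second iteration spec of a
# product iterator, so the keyword silently keeps its default `false`.
buggy = make_space(c => dims[c] for c in secs, dual = true)

# Fixed form: the explicit `;` keeps `dual` a keyword argument.
fixed = make_space((c => dims[c] for c in secs); dual=true)

buggy[2]  # false (the flag was silently lost)
fixed[2]  # true
```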

This PR adds a small utility function that embeds the contents of one tensor into another tensor, possibly with differing spaces.
The idea is that for each sector of each space, the "top-left" corner of the source data is pasted into the "top-left" corner of the destination, where the sizes are chosen such that the data fits.
In other words, this either embeds a vector space into a larger one by inclusion or into a smaller one by restriction.
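As a plain-array analogue of that rule (ordinary `Array`s instead of `AbstractTensorMap`s, so the per-sector structure is elided; this mirrors the `sub_axes` snippet in the docstring above):

```julia
# "Top-left corner" copy between arrays of different sizes: along each
# axis, keep as many entries as fit in both source and destination.
src = reshape(collect(1.0:15.0), 3, 5)
dst = zeros(4, 2)
sub_axes = map((d, s) -> 1:min(d, s), size(dst), size(src))
dst[sub_axes...] .= src[sub_axes...]  # dst[1:3, 1:2] == src[1:3, 1:2]
```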
The tests aren't so great, but I'm not entirely sure how to cover this better...
Additionally, this fixes a (seemingly long-hidden) issue where a formatter update previously exchanged a keyword argument for an iterable, thereby losing the `dual` flag in `infimum` and `supremum` for `GradedSpace`.