Implement approx_itensornetwork (#66)
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main      #66      +/-   ##
==========================================
+ Coverage   76.82%   77.79%   +0.96%
==========================================
  Files          60       61       +1
  Lines        3115     3233     +118
==========================================
+ Hits         2393     2515     +122
+ Misses        722      718       -4
```

... and 9 files with indirect coverage changes
@mtfishman Not sure what happened with the documentation CI, any ideas?
Not sure, I hadn't even realized we had doctests. I don't think that doctest even makes sense, since it relies on https://github.com/GiggleLiu/ITensorContractionOrders.jl, which was moved into this package anyway, so we could just remove that doctest.

EDIT: It looks like that test started getting run when it wasn't running before because you added …
This makes me think about a slightly different interface and code structure. Here is a summary:

```julia
function approx_itensornetwork(tn::ITensorNetwork, output_structure::IndsNetwork; alg="density_matrix", cutoff, maxdim)
  partitioned_tn = partition(tn, output_structure) # Partition the network based on the `output_structure`, outputs a `DataGraph`
  return approx_itensornetwork(partitioned_tn; alg, cutoff, maxdim)
end

function approx_itensornetwork(partitioned_tn::DataGraph; alg="density_matrix", cutoff, maxdim)
  return approx_itensornetwork(Algorithm(alg), partitioned_tn; cutoff, maxdim)
end

function approx_itensornetwork(alg::Algorithm"density_matrix", partitioned_tn::DataGraph; cutoff, maxdim)
  @assert is_tree(partitioned_tn) # For now, restrict the desired tensor network structure to be a tree
  # Implementation of the density matrix algorithm on the partitioned tensor network
end

function approx_itensornetwork(alg::Algorithm"orthogonalize", partitioned_tn::DataGraph; cutoff, maxdim)
  @assert is_tree(partitioned_tn) # For now, restrict the desired tensor network structure to be a tree
  # Implementation of the orthogonalization algorithm on the partitioned tensor network.
  # Probably has overlap with the density matrix algorithm, so code should be shared through generic functions.
end

function approx_itensornetwork(tn::ITensorNetwork, output_structure::Function=path_graph_structure; alg, cutoff, maxdim)
  output_structure_indsnetwork = output_structure(tn) # Outputs an `IndsNetwork`
  return approx_itensornetwork(tn, output_structure_indsnetwork; alg, cutoff, maxdim)
end

function path_graph_structure(tn::ITensorNetwork)
  # Outputs a maximally unbalanced binary tree IndsNetwork defining the desired graph structure
end

function binary_tree_structure(tn::ITensorNetwork)
  # Outputs a binary tree IndsNetwork defining the desired graph structure
end
```

Besides some restructuring/renaming, I think the main difference would be using an `IndsNetwork` to specify the output structure.
Let me know if that makes sense. Some of the proposal may clash with the realities of the algorithm's flow/logic, so that would be helpful to know as well.
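For illustration, here is a hypothetical usage sketch of the proposed interface (the construction of `tn` is a placeholder; the function names and keywords come from the sketch above):

```julia
# Hypothetical usage of the proposed interface; assumes `tn` is an
# already-constructed ITensorNetwork that we want to compress.
tn = ... # placeholder: some ITensorNetwork

# Default output structure: a maximally unbalanced binary tree (path-like),
# via `path_graph_structure`.
mps_like = approx_itensornetwork(tn; alg="density_matrix", cutoff=1e-15, maxdim=100)

# Explicitly request a balanced binary tree output structure instead.
tree_like = approx_itensornetwork(tn, binary_tree_structure; alg="density_matrix", cutoff=1e-15, maxdim=100)
```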
@mtfishman The suggestions above as to the interfaces make sense to me; here are some details I would like to bring up:

```julia
partition(alg::Algorithm"mincut_recursive_bisection", tn::ITensorNetwork, output_structure::IndsNetwork, root)
```
That's helpful, thanks. Maybe what we could do is use a directed graph to indicate the root. Definitely fine to keep it working only for binary trees; good idea to add a check for that. Also sounds like a good plan to specialize on …
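As a minimal illustration of the directed-graph idea (using Graphs.jl here as an assumption; the PR itself would presumably use the package's own graph types):

```julia
using Graphs

# Encode the root of the output tree by directing all edges away from it.
g = SimpleDiGraph(3)
add_edge!(g, 1, 2) # 1 -> 2
add_edge!(g, 1, 3) # 1 -> 3

# The root is then recoverable as the unique vertex with no incoming edges.
root = only(filter(v -> indegree(g, v) == 0, vertices(g)))
@assert root == 1
```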
Renamed `approx_binary_tree_itensornetwork` to `approx_itensornetwork`.
src/binary_tree_partition.jl (outdated)
```diff
  end
  tn_deltas = ITensorNetwork(vcat(output_deltas_vector...))
- return partition(ITensorNetwork{Any}(disjoint_union(out_tn, tn_deltas)), subgraph_vs)
+ par = partition(ITensorNetwork{Any}(disjoint_union(out_tn, tn_deltas)), subgraph_vs)
```
Was there an issue using `ITensorNetwork` instead of `ITensorNetwork{Any}`, i.e. letting it try to infer the vertex type?
The reason I force it to be `ITensorNetwork{Any}` is that in `approx_itensornetwork` I update each partition in place during the density matrix algorithm, and that update sometimes assigns a new `ITensorNetwork` type to each partition.

But you're right that forcing the type here doesn't make sense. I changed it to update the data type of the partition inside `approx_itensornetwork`, right before the density matrix algorithm's update, so the problem should be fixed now.
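As a loose analogy in plain Julia (using a `Dict` rather than the package's types): a concretely-typed container rejects in-place updates that change the value type, while an `Any`-typed one accepts them, which is why widening the type right before mutating can be necessary:

```julia
# Concretely-typed container: an in-place update with a new value type fails.
d = Dict("a" => 1)        # Dict{String, Int64}
# d["a"] = "not an Int"   # would throw: cannot convert String to Int64

# Loosely-typed container: the same update is allowed.
d_any = Dict{String,Any}("a" => 1)
d_any["a"] = "not an Int" # fine

# Widening an existing container right before mutating it:
d_wide = Dict{String,Any}(d)
d_wide["a"] = "not an Int"
```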
@mtfishman could you take another look at this?
@LinjianMa yes, I was trying to make some time for this; I should have time in the next few days. There is a lot of code in …

Also, I have a more general question about how this fits into the overall algorithm. My current understanding is that the algorithm can be summarized as follows:

Algorithm: …

Algorithm inputs: …

Subalgorithm: …

Subalgorithm inputs: …

Questions: …
@mtfishman Sure, I will summarize … For your questions: we use

```julia
function approx_itensornetwork(
  tn::ITensorNetwork,
  inds_btree::DataGraph;
  alg::String,
  cutoff=1e-15,
  maxdim=10000,
  contraction_sequence_alg="optimal",
  contraction_sequence_kwargs=(;),
)
```

to let … So the interface below

```julia
function approx_itensornetwork(
  tn::ITensorNetwork,
  output_structure::Function;
  alg::String,
  cutoff=1e-15,
  maxdim=10000,
  contraction_sequence_alg="optimal",
  contraction_sequence_kwargs=(;),
)
```

will not be used there, since …
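To illustrate how the two interfaces relate (a hypothetical call following the signatures above and the earlier proposal, assuming `tn` is an existing `ITensorNetwork`):

```julia
# The Function-based interface builds the output structure and then forwards
# to the structure-based one, so these two calls should be equivalent:
inds_btree = binary_tree_structure(tn) # DataGraph describing the output tree
out1 = approx_itensornetwork(tn, inds_btree; alg="density_matrix")
out2 = approx_itensornetwork(tn, binary_tree_structure; alg="density_matrix")
```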
Ok, so to summarize: …

The part I'm still confused about is that it seems like implementing the logic of determining the output structure from the rest of the graph would affect the function …
@mtfishman yes, that's correct. Note that the current function is

```julia
binary_tree_structure(tn::ITensorNetwork, outinds::Vector{<:Index})
```

where …
@mtfishman the first post is updated.
Thanks for the summary @LinjianMa, that's helpful. I'll review this PR by the end of the week.
@mtfishman The PR has been updated, please let me know if you have any other questions!
Looks great, thanks for all of the documentation and thanks for iterating on the code! This will be really useful.
Interfaces:
Note: the current code is designed with performance in mind (as far as I'm aware, this should already be a state-of-the-art density matrix algorithm implementation in terms of its caching).
Some internally-used definitions:
density matrix: …
Example: …

partial density matrix: …
Example: …
Calculating `_DensityMatrix` using `_PartialDensityMatrix`: `_PartialDensityMatrix` is introduced for efficiency. For a density matrix `dm` with root `v` and children `c1`, `c2`, we can calculate `dm` via two partial density matrices, `pdm1` and `pdm2`, where `pdm1` is defined on `v` with child `c1` and `pdm2` is defined on `v` with child `c2`. The contraction can be done by properly `sim`ing the indices of `pdm2` and then contracting with `pdm1`. This way, every time we want to use a specific `dm` we don't need to recompute it, since `pdm1` or `pdm2` can be cached and reused. A toy sketch of this idea is below.
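A toy sketch of combining cached child "environments" into a density matrix with ITensors.jl (the index layout, the `cache` dictionary, and the random tensor contents are illustrative assumptions, not the PR's `_PartialDensityMatrix` implementation):

```julia
using ITensors

# Root tensor `v` with an index toward each child subtree plus an external index.
i1, i2, e = Index(2, "to_c1"), Index(2, "to_c2"), Index(2, "ext")
v = randomITensor(i1, i2, e)

# Cached environments from each child subtree, each mapping an index to its prime.
cache = Dict{Symbol,ITensor}()
cache[:c1] = randomITensor(i1, i1')
cache[:c2] = randomITensor(i2, i2')

# Partial density matrix with child c1 absorbed: `v` and its conjugate
# contracted with the c1 environment; open indices are (i2, i2', e, e').
pdm1 = cache[:c1] * v * prime(dag(v))

# Full density matrix over (e, e'): absorb the remaining child environment.
# If only c2's branch changes, the cached `pdm1` is reused and just this
# last cheap contraction is redone.
dm = pdm1 * cache[:c2]
```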
The core part of the density matrix algorithm: each `_rem_vertex!` removes `v` from the partition and outputs its projector `U`. This `U` is then used to update the output tree tensor network. `_rem_vertex!` also updates and uses all the caches. A high-level sketch of the loop is below.
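A hypothetical high-level sketch of that sweep (all four helpers here are made-up names standing in for the PR's internal logic, not its actual functions):

```julia
# Pseudocode-style sketch; `post_order_traversal`, `density_matrix_from_caches`,
# `projector_from_eigendecomposition`, and `absorb_projector!` are hypothetical.
function density_matrix_sweep!(partition, tree; cutoff, maxdim)
  for v in post_order_traversal(tree)             # visit children before parents
    dm = density_matrix_from_caches(partition, v) # built from cached partials
    U = projector_from_eigendecomposition(dm; cutoff, maxdim)
    absorb_projector!(partition, v, U)            # removes v, updates the caches
  end
  return partition
end
```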
Discussions:

Note that it's very possible that `_DensityMatrixAlgCaches` shares concepts with the `ITensorNetworkCache` that @mtfishman plans to work on. If so, I would be happy to discuss it, and we can think about how to design it in a better way.