Here's an issue to track improvements to make to the general tensor network solver code, `alternating_update`, following up on #59.
- Switch the default `tdvp` solver to `KrylovKit.exponentiate`.
- Define `default_projected_operator`, which generically defines how to convert an input operator to a projected operator, for example `default_projected_operator(x::TTN) = ProjTTN(x)`, `default_projected_operator(x::TTNSum) = ProjTTNSum(x)`, etc. This could also help with creating certain caches/projected operators when solving different kinds of problems (see the dispatch sketch after this list).
- Rethink the interface for choosing between different solvers, as well as custom ones, in `tdvp`. Ideally we don't hard-code a list of them (i.e. deprecate the `solver_backend="exponentiate"`/`solver_backend="applyexp"` interface) and instead make it easy for users to pass a solver function and solver keyword arguments (see the solver-interface sketch below).
- Remove the time argument `t` from `alternating_update`, since it only makes sense for solvers that implement time evolution, like `tdvp`.
- Decide on the interface for `tdvp`. How should total time, time step, and number of steps be specified, and with which argument ordering?
- `nsweeps` vs. `nsteps`: `nsteps` makes a bit more sense for a time stepping algorithm like `tdvp`, but `nsweeps` is more generic (i.e. it defines a sweep through the graph). Also, a step can involve multiple sweeps in a higher-order method (see the second-order step sketch below).
- Improve keyword argument code patterns based on: https://itensor.github.io/ITensors.jl/dev/DeveloperGuide.html#Keyword-Argument-Best-Practices (see the keyword-argument sketch below).
- Generalize the `ProjTTN` types to a more general `ITensorNetworkCache` type (or allow specifying custom caching types relevant for different contraction backends), based on a contraction sequence tree. Also allow custom contraction sequences and contraction backends, to generalize to optimizing/updating other tensor networks (see the cache-interface sketch below).
- Replace `ProjTTNSum` with a more general lazy sum, based on the `Applied` type in `ITensors.LazyApply` (see the lazy-sum sketch below).
- Generalize to accept `AbstractITensorNetwork`, and ensure the interface requirements are general enough to work for more general networks.
- Think more carefully about the interface and implementation for solving various linear equations, like `Ax ≈ λx`, `Ax ≈ λBx`, `Ax ≈ b`, `x ≈ y`, etc., which share caching structures and work with general tensor networks, contraction backends, etc. (see the problem-type sketch below).
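Some rough sketches of a few of the items above follow. All of these are hypothetical shapes for discussion, not settled designs.

First, a minimal sketch of the `default_projected_operator` dispatch, assuming `alternating_update` calls it once on the input operator before sweeping, so new operator types only need to add a method (the `TTN`/`ProjTTN` names are the ones used in the list above):

```julia
# Hypothetical dispatch layer: convert an input operator into the
# projected/cached form used during sweeps. Each operator type opts in
# by adding a method; `alternating_update` only calls the generic function.
default_projected_operator(operator::TTN) = ProjTTN(operator)
default_projected_operator(operator::TTNSum) = ProjTTNSum(operator)
# Fallback: assume the input is already in projected form.
default_projected_operator(operator) = operator

function alternating_update(operator, init_state; kwargs...)
  projected_operator = default_projected_operator(operator)
  # ... sweep over the network using `projected_operator` ...
end
```

This would also be a natural hook for building problem-specific caches.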
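For the solver interface, one possible shape that replaces the hard-coded `solver_backend` strings. The `solver` keyword, its calling convention, and wrapping `KrylovKit.exponentiate` as the default are assumptions for illustration:

```julia
using KrylovKit: exponentiate

# Assumed calling convention: a solver is any callable taking
# (projected_operator, time_step, init_tensor) and returning the updated
# tensor plus solver info.
function exponentiate_solver(projected_operator, time_step, init; kws...)
  state, info = exponentiate(projected_operator, time_step, init; kws...)
  return state, info
end

# `tdvp` takes the solver as a plain keyword argument, so custom solvers
# need no registration in a hard-coded backend list:
function tdvp(operator, t, init_state; solver=exponentiate_solver, solver_kwargs=(;))
  # at each local update:
  #   local_state, info = solver(projected_operator, time_step, local_state; solver_kwargs...)
end
```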
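On `nsweeps` vs. `nsteps`, the point that one step can involve multiple sweeps in a higher-order method, schematically (the `sweep` helper is hypothetical):

```julia
# Schematic second-order time step: one *step* consists of two *sweeps*,
# a forward half-time-step sweep followed by a backward one. `sweep` is a
# hypothetical helper doing one alternating-update pass in one direction.
function second_order_step(projected_operator, state, time_step; kwargs...)
  state = sweep(projected_operator, state, time_step / 2; direction=:forward, kwargs...)
  state = sweep(projected_operator, state, time_step / 2; direction=:backward, kwargs...)
  return state
end
```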
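On keyword arguments, a generic illustration of the kind of pattern the linked guide advocates: declare keywords explicitly where they are consumed rather than slurping `kwargs...` through every layer. The functions and defaults here are hypothetical:

```julia
# Leaf function: owns its keywords and their defaults, so typos in
# keyword names error instead of being silently absorbed.
function local_update(operator, state; maxdim::Int=typemax(Int), cutoff::Float64=1e-12)
  # ... update and truncate using `maxdim` and `cutoff` ...
  return state
end

# Top-level function: declares and forwards only the keywords it knows about.
function alternating_update(operator, state; nsweeps::Int=1, maxdim::Int=typemax(Int), cutoff::Float64=1e-12)
  for _ in 1:nsweeps
    state = local_update(operator, state; maxdim, cutoff)
  end
  return state
end
```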
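For the `ProjTTN` to `ITensorNetworkCache` generalization, the rough picture is that environments are partially contracted nodes of a contraction sequence tree, so custom sequences and backends slot in as parameters. All names below are placeholders:

```julia
# Hypothetical cache type generalizing ProjTTN: environments are stored
# per node of a user-customizable contraction sequence tree, and the
# backend decides how contractions are performed (exact, approximate, ...).
abstract type AbstractITensorNetworkCache end

struct ITensorNetworkCache{Backend} <: AbstractITensorNetworkCache
  sequence          # contraction sequence tree
  environments      # partially contracted pieces, keyed by tree node
  backend::Backend  # contraction backend
end

# Interface the sweeping code would rely on (bodies elided):
# move the cache to a new region of the network, reusing environments
function position(cache::ITensorNetworkCache, state, region) end
# apply the cached environment to a local tensor (the solver's "action")
function environment_action(cache::ITensorNetworkCache, local_tensor) end
```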
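For the lazy sum replacing `ProjTTNSum`, a self-contained stand-in for the `Applied`-based version (the real implementation would build on `ITensors.LazyApply.Applied` rather than a custom struct):

```julia
# Minimal lazy sum: store the summands unevaluated and define the linear
# action term by term, so no explicitly summed operator is ever formed.
struct LazySum{T}
  terms::Vector{T}
end

(A::LazySum)(v) = sum(term -> term(v), A.terms)

# A solver that only needs the action `A(v)` (e.g. KrylovKit.exponentiate)
# can take a `LazySum` of projected operators directly.
```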
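Finally, for the family of linear-equation problems, one way the shared interface could look is small problem types funneled through one sweeping driver and one caching structure. The type and function names are hypothetical placeholders:

```julia
# Hypothetical problem wrappers sharing one alternating-update driver.
abstract type AbstractProblem end

struct EigenProblem{OpT} <: AbstractProblem               # Ax ≈ λx
  A::OpT
end

struct GeneralizedEigenProblem{AT,BT} <: AbstractProblem  # Ax ≈ λBx
  A::AT
  B::BT
end

struct LinearProblem{OpT,VecT} <: AbstractProblem         # Ax ≈ b
  A::OpT
  b::VecT
end

struct FitProblem{VecT} <: AbstractProblem                # x ≈ y
  y::VecT
end

# One driver for all problems; only the local solve dispatches on the
# problem type, while caching and sweeping logic are shared.
function solve(problem::AbstractProblem, init_state; kwargs...)
  # cache = ITensorNetworkCache(problem, init_state)
  # ... alternating updates calling `local_solve(problem, cache, ...)` ...
end
```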