
Improvements to alternating_update #60

Open
@mtfishman

Description


Here's an issue to track improvements to make to the general tensor network solver code, alternating_update, following up on #59.

  • Switch the default tdvp solver to KrylovKit.exponentiate.
  • Define default_projected_operator, which generically defines how to convert an input operator to a projected operator, for example default_projected_operator(x::TTN) = ProjTTN(x), default_projected_operator(x::TTNSum) = ProjTTNSum(x), etc. (see the sketch after this list). This could also help with creating certain caches/projected operators when solving different kinds of problems.
  • Rethink the interface for choosing between different solvers, as well as custom ones, in tdvp. Ideally we don't hard-code a list of them (i.e. deprecate the solver_backend="exponentiate"/solver_backend="applyexp" interface) and instead make it easy for users to pass a solver function and solver keyword arguments (see the custom solver sketch after this list).
  • Remove the time argument t from alternating_update, since it only makes sense for solvers that implement time evolution like tdvp.
  • Decide on the interface for tdvp: how should the total time, the time step, and the number of steps be specified, and in which argument order? (See the signature sketch after this list.)
  • nsweeps vs. nsteps: nsteps makes a bit more sense for a time-stepping algorithm like tdvp, but nsweeps is more generic (i.e. it describes a sweep through the graph). Also, a single step can involve multiple sweeps in a higher-order method.
  • Improve keyword argument code patterns based on https://itensor.github.io/ITensors.jl/dev/DeveloperGuide.html#Keyword-Argument-Best-Practices (see the keyword argument sketch after this list).
  • Generalize the ProjTTN types to a more general ITensorNetworkCache type (or allow specifying custom caching types relevant for different contraction backends), based on a contraction sequence tree. Also allow custom contraction sequences and contraction backends, to generalize to optimizing/updating other tensor networks (see the cache sketch after this list).
  • Replace ProjTTNSum with a more general lazy sum, based on the Applied type in ITensors.LazyApply (see the lazy sum sketch after this list).
  • Generalize to accept AbstractITensorNetwork, and ensure the interface requirements are general enough to work for more general networks.
  • Think more carefully about the interface and implementation for solving various linear equations, like Ax ≈ λx, Ax ≈ λBx, Ax ≈ b, x ≈ y, etc., in a way that shares caching structures and works with general tensor networks, contraction backends, etc. (see the problem type sketch after this list).
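
Some rough sketches to make a few of these items concrete. First, a minimal sketch of what default_projected_operator could look like, using the dispatch rules named above. The conversion methods are the proposal from this issue; the AbstractProjTTN pass-through method is an extra assumption added for illustration:

```julia
# Proposed default_projected_operator: convert an input operator to the
# projected operator type used internally by alternating_update.
# (This would live inside the package, where TTN, TTNSum, ProjTTN, and
# ProjTTNSum are defined.)
default_projected_operator(operator::TTN) = ProjTTN(operator)
default_projected_operator(operator::TTNSum) = ProjTTNSum(operator)
# Hypothetical pass-through so callers can also supply a ready-made
# projected operator (assumes an AbstractProjTTN supertype).
default_projected_operator(operator::AbstractProjTTN) = operator
```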
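One possible shape for the custom solver interface, sketched under the assumption that tdvp grows solver and solver_kwargs keywords (both names are proposals, not the current API):

```julia
using KrylovKit: exponentiate

# A user-supplied solver: take the projected operator, the time step, and the
# local state, and return the updated local state plus solver info.
function my_solver(projected_operator, t, local_state; kwargs...)
  local_state, info = exponentiate(projected_operator, t, local_state; kwargs...)
  return local_state, info
end

# Hypothetical call: the solver function and its keyword arguments are passed
# through directly instead of selecting from a hard-coded solver_backend list.
# psi = tdvp(H, -0.1im, psi0; solver=my_solver, solver_kwargs=(; tol=1e-12, krylovdim=30))
```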
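To make the open tdvp interface question concrete, here is one hypothetical signature where the total time is positional and either the time step or the number of steps is given as a keyword (the names and argument order are exactly what is up for discussion):

```julia
# Hypothetical signature: specify exactly one of time_step or nsteps;
# the other is derived from total_time.
function tdvp(operator, total_time, init_state; time_step=nothing, nsteps=nothing, kwargs...)
  if isnothing(nsteps)
    nsteps = round(Int, abs(total_time / time_step))
  elseif isnothing(time_step)
    time_step = total_time / nsteps
  end
  # ... run `nsteps` sweeps, each evolving by `time_step` ...
end
```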
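A sketch of the keyword argument pattern from the linked developer guide applied here: the exposed function unpacks keyword arguments with defaults in one place and forwards them explicitly, so internal functions never splat kwargs... down the call chain (the internal function name and the specific keywords are illustrative):

```julia
# Exposed function: all keyword defaults live here, in one place.
function alternating_update(operator, init_state; nsweeps=1, maxdim=typemax(Int), cutoff=1e-8)
  return _alternating_update(operator, init_state, nsweeps, maxdim, cutoff)
end

# Internal function: explicit positional arguments, no hidden kwargs...,
# which makes it easier to test and to see what actually gets used.
function _alternating_update(operator, init_state, nsweeps, maxdim, cutoff)
  # ... sweep logic ...
end
```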
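A very rough sketch of the cache generalization; everything about this type (its name, fields, and the untyped contraction sequence) is a placeholder for discussion:

```julia
using ITensors: ITensor
using ITensorNetworks: ITensorNetwork

# Hypothetical cache type: partial contractions ("environments") keyed by the
# subtrees of a contraction sequence tree, parameterized over a backend so
# different contraction strategies can supply their own caching behavior.
struct ITensorNetworkCache{Backend}
  backend::Backend
  network::ITensorNetwork
  sequence::Any                    # contraction sequence tree
  environments::Dict{Any,ITensor}  # cached partial contraction per subtree
end
```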
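A self-contained toy version of the lazy sum idea, written in the spirit of the Applied type from ITensors.LazyApply (a minimal Applied is re-defined locally for illustration; the real type's fields and methods may differ):

```julia
# Minimal Applied-style wrapper: store a function and its arguments unevaluated.
struct Applied{F,Args<:Tuple}
  f::F
  args::Args
end

# A lazy sum just records `+` and the terms; nothing is contracted yet.
lazy_sum(terms...) = Applied(+, terms)

# Acting on a state distributes over the terms, so each term can keep its own
# projected-operator cache, as ProjTTNSum does today.
apply_lazy(a::Applied{typeof(+)}, x) = sum(term -> term * x, a.args)
```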
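Finally, a hypothetical sketch of how the shared interface for the different equation types could look, with a problem type carrying the fixed data and the generic sweeping driver dispatching on it (all type and function names here are proposals):

```julia
# Each problem type carries the data that stays fixed during sweeping.
abstract type AbstractProblem end
struct EigenProblem{OpT} <: AbstractProblem              # A x ≈ λ x
  A::OpT
end
struct GeneralizedEigenProblem{OpT} <: AbstractProblem   # A x ≈ λ B x
  A::OpT
  B::OpT
end
struct LinearProblem{OpT,VecT} <: AbstractProblem        # A x ≈ b
  A::OpT
  b::VecT
end

# The generic sweeping driver then only needs a per-region update method:
# region_update(problem::EigenProblem, local_state; kwargs...) = ...
# region_update(problem::LinearProblem, local_state; kwargs...) = ...
```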
