**⚠️ WARNING**
This package is a work in progress! It mainly exists as an internal experiment by the ITensor developers to test out some parallelization strategies in MPS algorithms like DMRG, and for now it is focused on multithreaded and distributed parallelization over sums of MPOs. Your mileage may vary in terms of seeing speedups for your calculations.

# ITensorParallel


## Overview

This package adds more shared and distributed memory parallelism to ITensors.jl, with the goal of implementing the techniques for nested parallelization in DMRG laid out by Zhai and Chan in arXiv:2103.09976. So far, we are focusing on parallelizing over the optimization or evolution of sums of tensor networks, as in arXiv:2203.10216 (in particular, parallelized sums of MPOs), which can be composed with the dense or block sparse threaded tensor operations implemented in ITensors.jl. We also plan to add real-space parallel DMRG, TDVP, and TEBD based on arXiv:1301.3494. The sum-of-MPOs structure we parallelize over is sketched below.
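As a concrete (serial) starting point, here is a minimal sketch, using plain ITensors.jl, of optimizing against a sum of MPOs with DMRG. The even/odd bond split of the Heisenberg model is purely illustrative; ITensorParallel's goal is to parallelize over terms like these.

```julia
using ITensors

N = 20
sites = siteinds("S=1", N)

# Build the Heisenberg coupling terms on a given set of bonds.
function bond_opsum(bonds)
  os = OpSum()
  for j in bonds
    os += 0.5, "S+", j, "S-", j + 1
    os += 0.5, "S-", j, "S+", j + 1
    os += "Sz", j, "Sz", j + 1
  end
  return os
end

# Split the Hamiltonian into an odd-bond MPO and an even-bond MPO.
Hs = [MPO(bond_opsum(1:2:(N - 1)), sites), MPO(bond_opsum(2:2:(N - 1)), sites)]

# `dmrg` accepts a vector of MPOs and optimizes against their sum.
psi0 = randomMPS(sites; linkdims=10)
energy, psi = dmrg(Hs, psi0; nsweeps=5, maxdim=100, cutoff=1e-10)
```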

For multithreading, we use Julia's standard Threads.jl library, along with the convenient abstractions for parallelizing maps and reductions built on top of it by Folds.jl and FLoops.jl. See here for a nice overview of parallelization in Julia. A small example of this map/reduce style follows.
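Here is a minimal sketch of that pattern; `expensive_term` is a hypothetical placeholder for per-term work such as handling one MPO in a sum, not part of this package's API.

```julia
using Folds

# Hypothetical stand-in for per-term work, e.g. handling one MPO term:
expensive_term(j) = sum(sqrt, 1:(1000 * j))

# Threaded reduction over terms (start Julia with, e.g., `julia -t 4`);
# equivalent to `sum(expensive_term, 1:8)` but parallelized over threads:
total = Folds.sum(expensive_term, 1:8)

# The same pattern written with FLoops.jl:
using FLoops
@floop for j in 1:8
  t = expensive_term(j)
  @reduce(total2 += t)
end
```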

For distributed computing, we make use of Julia's standard Distributed.jl library, along with its interfaces through Folds.jl and FLoops.jl, as well as MPI.jl. Take a look at Julia's documentation on distributed computing for more information and background. We may investigate other parallelization abstractions, such as Dagger.jl, as well.
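The same sum-over-terms pattern with Distributed.jl looks roughly like the sketch below; `term_energy` is again a hypothetical placeholder for per-worker work.

```julia
using Distributed
addprocs(4)  # or start Julia with `julia -p 4`

# Hypothetical per-term work; must be defined on all workers:
@everywhere term_energy(j) = sum(sqrt, 1:(1000 * j))

# Map the terms over the workers and reduce the results locally:
total = sum(pmap(term_energy, 1:8))
```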

To run Distributed.jl-based computations on clusters, we recommend using Julia's cluster manager tools, such as ClusterManagers.jl, SlurmClusterManager.jl, and MPIClusterManagers.jl; a minimal Slurm example is sketched below.
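For instance, with SlurmClusterManager.jl, Distributed.jl workers can be launched across a Slurm allocation roughly like this (a sketch; the number of workers comes from the task count of your `sbatch` submission):

```julia
# Run from within a Slurm allocation, e.g. a script submitted with
# `sbatch --ntasks=8 job.sh`.
using Distributed
using SlurmClusterManager

# Launch one Julia worker per Slurm task in the current allocation:
addprocs(SlurmManager())

@everywhere println("hello from worker $(myid())")
```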

See the examples folder for examples of running DMRG parallelized over sums of MPOs using Threads.jl, Distributed.jl, and MPI.jl. For orientation, an MPI-style sketch of the same sum-over-terms pattern follows.
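The following is a hedged MPI.jl sketch, not this package's API: each rank handles a subset of terms and the partial results are combined with `MPI.Allreduce`. `term_energy` is again a hypothetical placeholder for per-MPO work.

```julia
# Run with, e.g., `mpiexecjl -n 4 julia mpi_sum.jl`.
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

# Hypothetical stand-in for per-term work:
term_energy(j) = sum(sqrt, 1:(1000 * j))

# Round-robin assignment of terms 1:8 to ranks:
local_sum = sum(term_energy, (rank + 1):nranks:8; init=0.0)

# Combine the partial sums across all ranks:
total = MPI.Allreduce(local_sum, +, comm)
rank == 0 && println("total = ", total)

MPI.Finalize()
```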