PCA and SVD for low-rank matrices, LOBPCG for positive-definite generalized eigenvalue problem (pytorch#29488)

Summary:
This PR implements the following linear algebra algorithms for low-rank matrices:
- [x] Approximate `A` as `Q Q^H A` - using Algorithm 4.4 from [Halko et al, 2009](http://arxiv.org/abs/0909.4061).
  + exposed as `torch.lowrank.get_approximate_basis(A, q, niter=2, M=None) -> Q` (see the range-finder sketch after this list)
  + [x] dense matrices
  + [x] batches of dense matrices
  + [x] sparse matrices
  + [x] documentation
- [x] SVD - using Algorithm 5.1 from [Halko et al, 2009](http://arxiv.org/abs/0909.4061).
  + uses `torch.lowrank.get_approximate_basis`
  + exposed as `torch.svd_lowrank(A, q=6, niter=2, M=None) -> (U, S, V)` (see the usage sketch after this list)
  + [x] dense matrices
  + [x] batches of dense matrices
  + [x] sparse matrices
  + [x] documentation
- [x] PCA - using `torch.svd_lowrank`
  + uses `torch.svd_lowrank`
  + exposed as `torch.pca_lowrank(A, center=True, q=None, niter=2) -> (U, S, V)` (see the usage sketch after this list)
  + [x] dense matrices
  + [x] batches of dense matrices
  + [x] sparse matrices, uses non-centered sparse matrix algorithm
  + [x] documentation
- [x] generalized eigenvalue solver using the original LOBPCG algorithm [Knyazev, 2001](https://epubs.siam.org/doi/abs/10.1137/S1064827500366124)
  + exposed as `torch.lobpcg(A, B=None, k=1, method="basic", ...)`
  + [x] dense matrices
  + [x] batches of dense matrices
  + [x] sparse matrices
  + [x] documentation
- [x] generalized eigenvalue solver using robust LOBPCG with orthogonal basis selection [Stathopoulos, 2002](https://epubs.siam.org/doi/10.1137/S1064827500370883)
  + exposed as `torch.lobpcg(A, B=None, k=1, method="ortho", ...)`
  + [x] dense matrices
  + [x] batches of dense matrices
  + [x] sparse matrices
  + [x] documentation
- [x] generalized eigenvalue solver using the robust and efficient LOBPCG Algorithm 8 from [Duersch et al, 2018](https://epubs.siam.org/doi/abs/10.1137/17M1129830) that switches to orthogonal basis selection automatically
  + the "ortho" method improves iterations so rapidly that in the current test cases it does not make sense to use the basic iterations at all. If users will have matrices for which basic iterations could improve convergence then the `tracker` argument allows breaking the iteration process at user choice so that the user can switch to the orthogonal basis selection if needed. In conclusion, there is no need to implement Algorithm 8 at this point.
- [x] benchmarks
  + [x] `torch.svd` vs `torch.svd_lowrank`, see notebook [Low-rank SVD](https://github.com/Quansight/pearu-sandbox/blob/master/pytorch/Low-rank%20SVD.ipynb). In conclusion, the low-rank SVD is useful only for large sparse matrices, where the full-rank SVD fails due to memory limitations.
  + [x] `torch.lobpcg` vs `scipy.sparse.linalg.lobpcg`, see notebook [LOBPCG - pytorch vs scipy](https://github.com/Quansight/pearu-sandbox/blob/master/pytorch/LOBPCG%20-%20pytorch%20vs%20scipy.ipynb). In conclusion, both implementations give the same results (up to numerical errors from the different methods); the SciPy lobpcg implementation is generally faster.
  + [x] For very small tolerances, `torch.lobpcg` is more robust than `scipy.sparse.linalg.lobpcg` (see the `test_lobpcg_scipy` results)
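
The sketches below illustrate the new functionality on made-up inputs. First, a minimal sketch of the randomized range-finder idea behind `get_approximate_basis` (Algorithm 4.4 of Halko et al), written against plain QR so it does not depend on where the helper is exposed; the `approximate_basis` name and all matrix sizes are illustrative only, not part of this PR:

```python
import torch

def approximate_basis(A, q, niter=2):
    # Randomized range finder: find Q with orthonormal columns such that
    # A is well approximated by Q @ (Q.T @ A).
    m, n = A.shape
    R = torch.randn(n, q, dtype=A.dtype)
    Q, _ = torch.linalg.qr(A @ R)
    # A few power iterations sharpen the basis when the spectrum decays slowly.
    for _ in range(niter):
        Q, _ = torch.linalg.qr(A.t() @ Q)
        Q, _ = torch.linalg.qr(A @ Q)
    return Q

A = torch.randn(300, 40) @ torch.randn(40, 200)   # hypothetical low-rank matrix
Q = approximate_basis(A, q=50)
print((A - Q @ (Q.t() @ A)).norm() / A.norm())    # small when q exceeds the rank
```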
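
A usage sketch for `torch.svd_lowrank`; the input matrix and the choice `q=20` are made up for illustration:

```python
import torch

# Hypothetical tall matrix with (approximate) rank 10.
A = torch.randn(1000, 10) @ torch.randn(10, 200)

# Rank-q SVD; pick q somewhat larger than the expected rank.
U, S, V = torch.svd_lowrank(A, q=20, niter=2)

# Relative error of the rank-q reconstruction.
A_approx = U @ torch.diag(S) @ V.t()
print((A - A_approx).norm() / A.norm())
```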
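
A usage sketch for `torch.pca_lowrank` on a made-up dense data matrix; the columns of `V` are the principal directions, so projecting the centered data onto the leading columns gives the usual PCA scores:

```python
import torch

# Hypothetical data: 500 samples, 64 features.
A = torch.randn(500, 64)

# Leading principal directions; center=True removes the column means internally.
U, S, V = torch.pca_lowrank(A, q=5, center=True)

# Project the centered data onto the first 5 principal components.
scores = (A - A.mean(dim=0)) @ V[:, :5]
print(scores.shape)   # torch.Size([500, 5])
```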
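
A usage sketch for `torch.lobpcg` on made-up positive-definite matrices, showing both the "basic" and the "ortho" iterations for a standard and a generalized eigenvalue problem:

```python
import torch

n = 100
M = torch.randn(n, n)
A = M @ M.t() + n * torch.eye(n)            # symmetric positive definite

# Standard problem A x = lambda x: 3 largest eigenpairs, basic iterations.
E_basic, X_basic = torch.lobpcg(A, k=3, method="basic")

# Generalized problem A x = lambda B x with the more robust "ortho" iterations.
B = torch.eye(n) + 0.01 * torch.ones(n, n)  # positive definite B
E_ortho, X_ortho = torch.lobpcg(A, k=3, B=B, method="ortho")

print(E_basic)
print(E_ortho)
```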

Resolves pytorch#8049.
Pull Request resolved: pytorch#29488

Differential Revision: D20193196

Pulled By: vincentqb

fbshipit-source-id: 78a4879912424595e6ea95a95e483a37487a907e
pearu authored and facebook-github-bot committed Mar 11, 2020
1 parent 5fc5cf6 commit 2ec779d
Showing 11 changed files with 1,676 additions and 11 deletions.
4 changes: 4 additions & 0 deletions docs/source/torch.rst
@@ -377,10 +377,14 @@ BLAS and LAPACK Operations
.. autofunction:: qr
.. autofunction:: solve
.. autofunction:: svd
.. autofunction:: svd_lowrank
.. autofunction:: pca_lowrank
.. autofunction:: symeig
.. autofunction:: lobpcg
.. autofunction:: trapz
.. autofunction:: triangular_solve


Utilities
----------------------------------
.. autofunction:: compiled_with_cxx11_abi
4 changes: 4 additions & 0 deletions test/test_overrides.py
@@ -560,6 +560,8 @@ def decorator(func):
(torch.layer_norm, lambda input, normalized_shape, weight=None, bias=None, esp=1e-05, cudnn_enabled=True: -1),
(torch.le, lambda input, other, out=None: -1),
(torch.lerp, lambda input, end, weight, out=None: -1),
(torch.lobpcg, lambda input, k=None, B=None, X=None, n=None, iK=None, niter=None, tol=None, largest=None, method=None,
tracker=None, ortho_iparams=None, ortho_fparams=None, ortho_bparams=None: -1),
(torch.lgamma, lambda input, out=None: -1),
(torch.log, lambda input, out=None: -1),
(torch.log_softmax, lambda input, dim, dtype: -1),
@@ -748,6 +750,7 @@ def decorator(func):
(torch.orgqr, lambda input1, input2: -1),
(torch.ormqr, lambda input, input2, input3, left=True, transpose=False: -1),
(torch.pairwise_distance, lambda x1, x2, p=2.0, eps=1e-06, keepdim=False: -1),
(torch.pca_lowrank, lambda input, q=None, center=True, niter=2: -1),
(torch.pdist, lambda input, p=2: -1),
(torch.pinverse, lambda input, rcond=1e-15: -1),
(torch.pixel_shuffle, lambda input, upscale_factor: -1),
@@ -831,6 +834,7 @@ def decorator(func):
(torch.sub, lambda input, other, out=None: -1),
(torch.sum, lambda input: -1),
(torch.svd, lambda input, some=True, compute_uv=True, out=None: -1),
(torch.svd_lowrank, lambda input, q=6, niter=2, M=None: -1),
(torch.symeig, lambda input, eigenvectors=False, upper=True, out=None: -1),
(torch.t, lambda input: -1),
(torch.take, lambda input, index: -1),