Avoid dense matrices in chol_cap_mat for ConstantDiagLinearOperator #120
Osburg wants to merge 1 commit into cornellius-gp:main
Conversation
```python
if isinstance(A_inv, ConstantDiagLinearOperator):
    # A_inv is (sigma * I)^{-1}, so one scalar multiply replaces a full diagonal solve
    sigma_inv = A_inv.diag_values[0]
    cap_mat = to_dense(C + sigma_inv * V.matmul(U))
else:
    cap_mat = to_dense(C + V.matmul(A_inv.matmul(U)))
```
Note that A_inv here is a DiagLinearOperator. Thus, A_inv.matmul(U) simply scales each row of U by the corresponding diagonal entry of A_inv.
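For example (a minimal sketch assuming the linear_operator package; the diagonal and U below are made up):

```python
import torch
from linear_operator.operators import DiagLinearOperator

d = torch.rand(1_000) + 0.1                 # diagonal of A (illustrative)
U = torch.randn(1_000, 5)

A_inv = DiagLinearOperator(d.reciprocal())  # A^{-1}, stored as a length-n vector

# matmul scales row i of U by 1/d[i]; no (n x n) dense matrix is built.
out = A_inv.matmul(U)
assert torch.allclose(out, U / d.unsqueeze(-1))
```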
@Osburg I don't think …
Hi @kayween,

If I try to do operations including my own … The setting is that I want to train a GP with covariance … Exiting with the error …

Cheers
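(The custom operator class, the covariance expression, and the error message above were lost from the page. As a rough, hedged reconstruction of the kind of setup being described, using stock linear_operator classes with made-up names and sizes:)

```python
import torch
from linear_operator.operators import (
    ConstantDiagLinearOperator,
    LowRankRootAddedDiagLinearOperator,
    LowRankRootLinearOperator,
)

n, k, sigma = 10_000, 8, 0.5   # illustrative sizes

V = torch.randn(n, k)          # stand-in for a (possibly sparse) root factor
low_rank = LowRankRootLinearOperator(V)
noise = ConstantDiagLinearOperator(torch.tensor([sigma**2]), diag_shape=n)

# Covariance of the form V V^T + sigma^2 I.
K = LowRankRootAddedDiagLinearOperator(low_rank, noise)

# Solves route through the small (k x k) cap matrix rather than a dense (n x n) K.
x = K.solve(torch.randn(n, 1))
```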
Your implementation makes sense to me. This is a tricky use case as it involves sparsity.

I think the main challenge here is that … For simplicity, let's say we want to invert … I do notice that your sparse linear operator's …

So I think there are two ways going forward.
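For reference (the inline math above was lost), the cap matrix that chol_cap_mat factorizes comes from the Woodbury identity; a minimal numerical check in plain PyTorch, with illustrative shapes:

```python
import torch

torch.manual_seed(0)
n, k, sigma = 50, 3, 0.7

U = torch.randn(n, k, dtype=torch.float64)
A = sigma**2 * torch.eye(n, dtype=torch.float64)   # constant-diagonal A = sigma^2 I
K = U @ U.T + A

# Woodbury: (A + U U^T)^{-1} = A^{-1} - A^{-1} U C^{-1} U^T A^{-1},
# where C = I + U^T A^{-1} U is the small (k x k) "cap matrix".
C = torch.eye(k, dtype=torch.float64) + (U.T @ U) / sigma**2
K_inv = torch.eye(n, dtype=torch.float64) / sigma**2 - (
    (U / sigma**2) @ torch.linalg.solve(C, U.T / sigma**2)
)

assert torch.allclose(K_inv, torch.linalg.inv(K))
```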
Hey :)
`LowRankRootAddedDiagLinearOperator.chol_cap_mat()` performs the matrix operation $C + V^T D^{-1} V$, where $D$ is a diagonal matrix. In a setting as described here, this can lead to the formation of a large dense representation of $D$. In the special case of $D$ being a `ConstantDiagLinearOperator` $D = \sigma I$, we can avoid this by using the equivalent expression $C + \sigma^{-1} V^T V$. This PR implements this special case.

Cheers
Aaron :)
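A quick numerical sanity check of the equivalence above (plain PyTorch; shapes are made up, and `D_inv` is exactly the dense matrix the special case avoids forming):

```python
import torch

torch.manual_seed(0)
n, k, sigma = 200, 4, 0.3

V = torch.randn(n, k, dtype=torch.float64)
C = torch.eye(k, dtype=torch.float64)

# Generic path: materializes an (n x n) inverse of D = sigma * I.
D_inv = torch.eye(n, dtype=torch.float64) / sigma
generic = C + V.T @ (D_inv @ V)

# Special case from this PR: V^T D^{-1} V == sigma^{-1} V^T V,
# which only ever touches (n x k) and (k x k) arrays.
special = C + (V.T @ V) / sigma

assert torch.allclose(generic, special)
```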