stdlib/SparseArrays: add rmul! and lmul! of sparse matrix with Diagonal #29296
Conversation
Add rmul! and lmul! of sparse matrix with Diagonal
Thank you for the nice contribution! I'm not the best person to review this but it looks good to me.
stdlib/SparseArrays/src/linalg.jl
Outdated
(n == size(D, 1)) || throw(DimensionMismatch())
Anzval = A.nzval
for col = 1:n, p = A.colptr[col]:(A.colptr[col+1]-1)
    @inbounds Anzval[p] *= D.diag[col]
It might be possible to gain a bit of speed here by manually hoisting the `D.diag[col]` lookup outside of the inner loop. Of course, the compiler may well be able to do that automatically. Depends on how much it knows about whether `D.diag` can change or move.
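For illustration, a minimal sketch of the manually hoisted variant (variable names follow the diff above; the final committed code may differ):

```julia
for col = 1:n
    @inbounds dcol = D.diag[col]  # load the diagonal entry once per column
    for p = A.colptr[col]:(A.colptr[col+1] - 1)
        @inbounds Anzval[p] *= dcol
    end
end
```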
I'm sorry, I can't follow. I made the change @StefanKarpinski suggested. Was that just (potentially) unnecessary, or even "wrong"?
I benchmarked the previous version ("mine") against the latest version ("Stefan's") on these:

P = sprand(12000, 12000, 0.01)
q = Diagonal(dropdims(sum(P, dims=2), dims=2))

and Stefan's won with 607.996 μs vs. 631.496 μs. So I guess it was worth it.
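For reference, timings like these are typically collected with BenchmarkTools; the exact harness isn't shown in the thread, so this is only a plausible sketch:

```julia
using BenchmarkTools, SparseArrays, LinearAlgebra

P = sprand(12000, 12000, 0.01)
q = Diagonal(dropdims(sum(P, dims=2), dims=2))

# rmul! mutates its first argument, so recreate A in setup and run one eval per sample
@btime rmul!(A, $q) setup = (A = copy($P)) evals = 1
```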
Hm, might be negligible. For `P = sprand(120000, 120000, 0.01)` (an order of magnitude larger) and `q` as before, I get 104.832 ms vs. 103.934 ms.
The question is whether the compiler tries to reload `D.diag` every time in the loop, or whether it realizes that this will never change and hoists the load outside the loop.
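One way to check this (a sketch, assuming the methods from this PR are loaded) is to inspect the generated code and look for where `D.diag` is loaded:

```julia
using SparseArrays, LinearAlgebra, InteractiveUtils

A = sprand(1000, 1000, 0.01)
D = Diagonal(rand(1000))

# If the compiler hoisted the load, the getfield/load of D.diag appears
# once before the loop in the printed IR rather than inside the loop body.
@code_llvm rmul!(A, D)
```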
Seems like it doesn't reload; at least it doesn't show up as a performance loss. Should I, for better readability, merge the two loops again as before, but leave the `@inbounds` upfront like in the last commit?
Doesn't matter that much, but perhaps it makes sense to at least have the two methods in this PR be consistent with each other.
Is there anything more I can do to get this merged? This is my first PR to Julia, so that would make me really proud. Maybe @andreasnoack may want to have a look at it? I can't seem to make much sense of the CI failures...
stdlib/SparseArrays/src/linalg.jl
Outdated
function lmul!(b::Number, A::SparseMatrixCSC)
    lmul!(b, A.nzval)
    return A
end

function rmul!(A::SparseMatrixCSC, D::Diagonal{T}) where T
`T` is never used in the function body, so I think you can make this signature `function rmul!(A::SparseMatrixCSC, D::Diagonal)`. (Same for `lmul!`.)
Do you mind giving a small case where `rmul!` and `lmul!` would be called? Just doing regular multiplication doesn't hit those methods. Also, the CI failures are likely not because of your code and just need to be restarted.
One use case is shown in #29111: given a sparse "weight matrix", you may wish to turn it into a row-stochastic matrix (graph Laplacian) by row-normalizing by the sum of each row. Very often, the weight matrix is huge and therefore sparse, and row-normalization corresponds to an in-place left-multiplication by a `Diagonal` matrix.
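A minimal sketch of that use case (sizes and density are arbitrary, and rows with nonzero entries are assumed to have nonzero sums):

```julia
using SparseArrays, LinearAlgebra

W = sprand(10_000, 10_000, 1e-3)   # sparse weight matrix
d = vec(sum(W, dims=2))            # row sums
lmul!(inv(Diagonal(d)), W)         # scale row i by 1/d[i], in place
```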
These changes look great! Thanks and welcome @dkarrasch! :)
Perhaps add tests exercising these new methods?
Thank you! I thought tests already existed at julia/stdlib/SparseArrays/test/sparse.jl, line 375 (at e1f42b1), but after a more thorough look I see that the results of `rmul!` and `lmul!` (a few lines below) were tested against a dense array. I'll change the above line accordingly, and similarly for the other one.
Thanks for pointing the existing tests out! The existing tests look good to me. I imagine the rationale for testing against the result of an equivalent dense operation is that the relevant code paths are more likely to be disjoint, whereas e.g. a sparse-against-sparse comparison might exercise the same code path on both sides.
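A test in that spirit might look like the following (a sketch, not the exact code in sparse.jl):

```julia
using Test, SparseArrays, LinearAlgebra

A = sprand(50, 50, 0.2)
D = Diagonal(rand(50))

# Compare the in-place sparse results against the equivalent dense products,
# so the sparse and dense code paths cross-check each other.
@test Array(rmul!(copy(A), D)) ≈ Array(A) * Array(D)
@test Array(lmul!(D, copy(A))) ≈ Array(D) * Array(A)
```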
I was not aware of these details (or rather, never thought about it), but the density/sparsity doesn't matter in the test. So even though…
Thanks @dkarrasch! :)
(Absent objections, requests for time, or someone beating me to it, I plan to merge these changes tomorrow morning PT or later.)
Once again, thanks and welcome @dkarrasch! :)
Thank you, thank you! This improves the speed of my code.
I know, mine too! :-) But all I did was trace back what…
Good question! I imagine these changes are a backport candidate. @ararslan?
@ararslan, is there any chance for a backport?
Yep, should be possible to put into 1.0.3. cc @KristofferC
Fixes #29111. Previously, `lmul!` and `rmul!` had no special method for multiplying a sparse matrix with a diagonal matrix in place, but fell back to the generic `LinearAlgebra` method for `Diagonal`. Tests had already been included before this change.
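A minimal usage sketch of the new in-place methods (sizes chosen arbitrarily):

```julia
using SparseArrays, LinearAlgebra

A = sprand(1_000, 1_000, 0.01)
D = Diagonal(rand(1_000))

rmul!(A, D)   # A = A * D: scales column j of A by D.diag[j], in place
lmul!(D, A)   # A = D * A: scales row i of A by D.diag[i], in place
```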