
fix #28369, transpose of SparseMatrixCSC is not recursive #28376

Merged (1 commit, Aug 1, 2018)

Conversation

jlapeyre (Contributor)

No description provided.

@andreasnoack added the sparse (Sparse arrays), bugfix (This change fixes an existing bug), and linear algebra (Linear algebra) labels on Jul 31, 2018
@test Array(transpose(S)) == copy(transpose(M))
@test permutedims(S) == SP
@test permutedims(S, (2,1)) == SP
@test permutedims(M, (1,2)) == S
Member

Did you perhaps mean to test `permutedims(S, (1,2)) == M` instead?

Contributor Author

`permutedims(M, (1,2)) == S` doesn't test the new code. `permutedims(S, (1,2)) == M` is definitely better.
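For element types that are themselves matrices, the distinction the test is after can be sketched in the dense case, using standard Base/LinearAlgebra behavior (the variable names here are illustrative, not from the PR):

```julia
using LinearAlgebra

# A 2×2 matrix whose elements are 2×2 matrices.
M = reshape([[1 2; 3 4], [9 10; 11 12], [5 6; 7 8], [13 14; 15 16]], 2, 2)

# transpose is recursive: the block moved from (2,1) to (1,2) is itself transposed.
@assert transpose(M)[1, 2] == transpose(M[2, 1])

# permutedims is not recursive: blocks are moved but left untouched.
@assert permutedims(M)[1, 2] == M[2, 1]
```

This is why `permutedims(S, (1,2)) == M` exercises the new sparse method while `permutedims(M, (1,2)) == S` only exercises the dense path.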

Contributor Author

I fixed this and rebased and pushed the PR again.

(a, b) == (1, 2) && return copy(A)
throw(ArgumentError("no valid permutation of dimensions"))
end
Base.permutedims(A::SparseMatrixCSC) = permutedims(A, (2,1))
Contributor Author

This line is redundant. It is covered by `permutedims(A::AbstractMatrix) = permutedims(A, (2,1))` in permuteddimsarray.jl.
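A quick REPL check of that fallback (a sketch; assumes only that SparseArrays is loaded):

```julia
using SparseArrays

S = sparse([1 2; 3 4])

# The one-argument method is already provided for any AbstractMatrix,
# so SparseMatrixCSC needs no one-argument method of its own; dispatch
# lands on the two-argument sparse method either way.
@assert permutedims(S) == permutedims(S, (2, 1))
```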

Contributor Author

I fixed this and rebased and pushed the PR again.

@jlapeyre jlapeyre force-pushed the gjl/sparsetranspose branch from 84bc31c to 56b504a Compare August 1, 2018 18:39
@andreasnoack andreasnoack merged commit b526c52 into JuliaLang:master Aug 1, 2018
@Sacha0 (Member) commented Aug 1, 2018

A nanosoldier run might have been in order, to make certain that the replacement of the function passed to `ftranspose` did not introduce any performance regressions.

@andreasnoack (Member)

Another time please just go ahead and trigger a run.

@nanosoldier runbenchmarks("array", vs = "@c05fd201baedb6681947cc212e0a9f49564d1568")

@jlapeyre (Contributor Author) commented Aug 2, 2018

Is this something I can do locally?

@andreasnoack (Member)

Yes, you can run the same tests locally. See https://github.com/JuliaCI/BaseBenchmarks.jl. Though, it might be easier to ask a collaborator to trigger a nanosoldier run.
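A local run might look roughly like this (a sketch based on BaseBenchmarks.jl's README; the benchmark group name is an assumption here):

```julia
using BaseBenchmarks, BenchmarkTools

# Load only the benchmark group of interest rather than the full suite.
BaseBenchmarks.load!("sparse")

# Run it; results is a BenchmarkTools trial group that can be compared
# against a run on master via judge(minimum(a), minimum(b)).
results = run(BaseBenchmarks.SUITE["sparse"]; verbose = true)
```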

@nanosoldier (Collaborator)

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @ararslan

@jlapeyre (Contributor Author) commented Aug 2, 2018

There appear to be statistically significant regressions (+30-40%) in some sparse benchmarks. I'll look at this.

@andreasnoack (Member)

I'm not convinced that any of these are genuine, but it would be great if you could take a closer look anyway.

@jlapeyre (Contributor Author) commented Aug 2, 2018

I misread the results. The time increases of about 30% are precisely on the benchmarks with about 30% tolerance.

But there is an oversight in the PR: `transpose!` and `adjoint!` should also be recursive.

transpose!(X::SparseMatrixCSC{Tv,Ti}, A::SparseMatrixCSC{Tv,Ti}) where {Tv,Ti} = ftranspose!(X, A, identity)
adjoint!(X::SparseMatrixCSC{Tv,Ti}, A::SparseMatrixCSC{Tv,Ti}) where {Tv,Ti} = ftranspose!(X, A, conj)
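A sketch of the corresponding fix, mirroring the non-mutating definitions in this PR (assuming `ftranspose!` keeps its current signature; this is not the merged code):

```julia
# Hypothetical recursive variants: replace identity/conj with functions
# that also transpose/adjoint each stored element, matching what the
# non-mutating transpose/adjoint paths now do.
transpose!(X::SparseMatrixCSC{Tv,Ti}, A::SparseMatrixCSC{Tv,Ti}) where {Tv,Ti} =
    ftranspose!(X, A, x -> copy(transpose(x)))
adjoint!(X::SparseMatrixCSC{Tv,Ti}, A::SparseMatrixCSC{Tv,Ti}) where {Tv,Ti} =
    ftranspose!(X, A, x -> copy(adjoint(x)))
```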

@Sacha0 (Member) commented Aug 3, 2018

The collection of sparse-dense and dense-sparse matrix multiplication regressions looks potentially legitimate; that they occur as a block makes it reasonably likely that they represent a real change. Perhaps check those locally?

@jlapeyre (Contributor Author) commented Aug 13, 2018

EDIT: Please see the comment below.

It looks like doing this correctly will take a bit more work. As someone mentioned, the sparsematrix.jl code relies in a few places on the elements being scalars.

Suppose
S = sparse(reshape([[1 2; 3 4], [9 10; 11 12], [5 6; 7 8], [13 14; 15 16]], (2,2))).

One problem is unnecessary allocation. `copy(transpose(S))` calls `ftranspose`, which allocates a container:

function ftranspose(A::SparseMatrixCSC{Tv,Ti}, f::Function) where {Tv,Ti}
    X = SparseMatrixCSC(A.n, A.m,
                        Vector{Ti}(undef, A.m+1),
                        Vector{Ti}(undef, nnz(A)),
                        Vector{Tv}(undef, nnz(A)))

Eventually, `_distributevals_halfperm!` is called, which rebinds the top-level elements in the allocated container to the (materially) transposed sub-matrices.

This PR neglected to fix `transpose!`. If we fix it by again replacing `identity` with `x -> copy(transpose(x))` here

transpose!(X::SparseMatrixCSC{Tv,Ti}, A::SparseMatrixCSC{Tv,Ti}) where {Tv,Ti} = ftranspose!(X, A, identity)

the same additional allocation will happen.

But I haven't done or tested anything yet, because Revise is broken for the moment on Julia master. I guess that redundant allocation is preferable in the short term to silently giving the wrong result.
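For reference, the recursive behavior on the block-sparse example above can be checked like this (a sketch; it assumes constructing a SparseMatrixCSC with matrix-valued elements succeeds as in the snippet earlier in this comment):

```julia
using SparseArrays, LinearAlgebra

# 2×2 sparse matrix whose stored elements are 2×2 matrices.
S = sparse(reshape([[1 2; 3 4], [9 10; 11 12], [5 6; 7 8], [13 14; 15 16]], 2, 2))
T = copy(transpose(S))

# Recursive: the block moved from (2,1) to (1,2) is itself transposed.
@assert T[1, 2] == transpose(S[2, 1])
```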
