Pg/one big module #47
Conversation
Thanks a lot, Paul, for all the work. The issues with Diagonal*CSC matrix multiplication have been resolved as of Julia 1.0.1. Thus, I believe that timing the specialised function against the built-in product shows that Julia's built-in functions are faster, even though they allocate memory at every run. As a matter of fact, I think that we are overly concerned about memory allocation. I don't think that we should be, unless there is a timing impact (say >10%) that supports it.
Julia's garbage collector is very efficient in most cases, so I don't see the benefit of minimising memory allocation at the cost of code readability.
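As a rough illustration of the comparison being discussed, here is a sketch that pits the built-in Diagonal * SparseMatrixCSC product against a hand-rolled in-place scaling; `lmul_diag!` is a hypothetical stand-in for the specialised function, not the package's actual code.

```julia
using LinearAlgebra, SparseArrays, BenchmarkTools

# Hypothetical stand-in for a specialised function: scale A in place by D on
# the left, i.e. A[i, j] *= d[i] for every stored entry.
function lmul_diag!(d::AbstractVector, A::SparseMatrixCSC)
    rows = rowvals(A)
    vals = nonzeros(A)
    @inbounds for k in eachindex(vals)
        vals[k] *= d[rows[k]]
    end
    return A
end

n = 2_000
A = sprandn(n, n, 0.01)
d = randn(n)
D = Diagonal(d)

@btime $D * $A;                                             # built-in: allocates the result
@btime lmul_diag!($d, B) setup = (B = copy($A)) evals = 1;  # in place: no allocation
```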
I think that there still remain places where this is the case. Also, I would suggest replacing the call that currently throws an error.
I agree that we should consider the 1.0.1 operations, which only appeared recently. I rewrote the algebra.jl code slightly to try to improve things via @inbounds but it didn't really help. I don't understand why your example is so fast in the second case. If I replace this
with this
then the times are about the same between our version and the current 1.0.1. Regarding memory allocation: I think my strategy here is to be careful about it even where it doesn't seem to matter, or where it makes an improvement, since I always have one eye on an embedded version. If your example is correct, though, I'm happy not to worry about it in a case like this.
The only point where I would hesitate to do this would be for things like the timings and solve accuracy reported to the user. Maybe then it makes sense to have a second parametric type defined, or just to use Float64 always.
Yes, I agree with this. It was that way originally, but I was getting weird results. I think making all of the individual projection operations (except for CompositeConvexSet) work with an AbstractVector input is fine as long as it doesn't kill the performance, which I don't think it will. Note, however, that the SplitVector{T} type should still contain an array of SplitView{T}, not AbstractArray{T}, since I think it is faster to have a single concrete type within a container.
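A hypothetical sketch of that container layout (the names follow the discussion, but the actual definitions in the package may differ): the projection functions accept any AbstractVector, while SplitVector stores a concretely typed Vector{SplitView{T}} so iterating over the blocks stays type-stable.

```julia
# SplitView here is a view into one block of a long stacked vector.
const SplitView{T} = SubArray{T, 1, Vector{T}, Tuple{UnitRange{Int}}, true}

struct SplitVector{T}
    x::Vector{T}                 # the full stacked vector
    views::Vector{SplitView{T}}  # concrete element type, not Vector{AbstractVector{T}}
end

function SplitVector(x::Vector{T}, sizes::Vector{Int}) where {T}
    stops = cumsum(sizes)
    starts = [1; stops[1:end-1] .+ 1]
    views = [view(x, starts[i]:stops[i]) for i in 1:length(sizes)]
    return SplitVector{T}(x, views)
end

# Projections can still be written against AbstractVector, e.g. onto the
# nonnegative orthant:
project_nonneg!(v::AbstractVector) = (v .= max.(v, zero(eltype(v))); v)

sv = SplitVector(randn(6), [2, 4])
foreach(project_nonneg!, sv.views)
```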
I guess my benchmark was flawed, as running:
ends up replacing Z with D^k * Z (where k is equal to the number of calls made by the benchmark). I believe that what you say is correct, i.e. that your timings are right.
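A small sketch of that pitfall, using dense arrays and assumed variable names: timing an in-place product without resetting its input lets the mutation accumulate across samples.

```julia
using LinearAlgebra, BenchmarkTools

n = 1_000
Z0 = randn(n, n)
D = Diagonal(randn(n))

# Flawed: every sample overwrites Z, so later samples time lmul! on D^k * Z.
Z = copy(Z0)
@btime lmul!($D, $Z);

# Better: rebuild the input before each sample and evaluate once per sample.
@btime lmul!($D, Z) setup = (Z = copy($Z0)) evals = 1;
```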
Maybe for the final reporting of the accuracy we can convert the matrices to Float64.
Do you mean not having the option of other floating-point types?
Yes, I'm fine with this also. If the performance is about the same and we don't allocate, though, then what we should really do is submit a fix to the main Julia repo.
No, I meant something different. I have tried to write the code such that it can use any floating-point type, but so that consistency of type is enforced across the iterates, problem data, residuals, working data, etc. Everything is the same type throughout. Note that it's not clear to me whether what I have done is the right approach. You could make a case that some of the problem data should be allowed to be some smaller type.
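A minimal sketch of this single-type convention (the Workspace struct and its fields below are hypothetical, not the package's actual types): one parameter T is fixed at construction and every internal array is converted to it.

```julia
struct Workspace{T <: AbstractFloat}
    P::Matrix{T}        # problem data
    q::Vector{T}
    x::Vector{T}        # iterates, residuals, working vectors all share T
end

function Workspace(P::AbstractMatrix, q::AbstractVector, ::Type{T} = Float64) where {T <: AbstractFloat}
    return Workspace{T}(convert(Matrix{T}, P), convert(Vector{T}, q), zeros(T, length(q)))
end

# A Float32 problem: every internal array becomes Float32, so no silent
# promotion happens inside the solver loop.
ws = Workspace(rand(3, 3), rand(3), Float32)

# For user-facing reporting one could still convert, e.g. Float64.(ws.x).
```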
You can avoid the memory allocation by using the in-place version.
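Something along these lines, assuming the dense in-place scalings from LinearAlgebra are what's meant (the specific call isn't certain from this comment):

```julia
using LinearAlgebra

n = 1_000
D = Diagonal(randn(n))
Z = randn(n, n)

lmul!(D, Z)   # Z = D * Z, overwrites Z, no new matrix allocated
rmul!(Z, D)   # Z = Z * D, likewise in place
```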
In that case there is no reason not to use the Julia Base version, unless it is really bad with identity matrices or something. I'm not committed to our internal version and am happy to dump it. I can also imagine that having it there might cause namespace conflicts as well.
Changes approved. I will merge this into devel and create a separate issue about the open lmul!/rmul! question.
Implements the following changes: