It would be valuable to benchmark rulinalg. It would also be great to compare it against other libraries, both LAPACK-backed and standalone, to see how well rulinalg holds its own.
Things to benchmark:

- Matrix multiplication
- Matrix inversion
- Eigendecomposition
- Singular value decomposition
It would be good to show rulinalg vs. LAPACK (or a library consuming it, e.g. SciPy). Even just having these benchmarks without a direct comparison would be very valuable.
It would be nice to produce a BENCHMARKS.md file in the root project directory. I see this file containing a table (or set of tables), with the details of the machine used to produce them clearly visible. The table should have one row per benchmarked function and a column for each framework tested (rulinalg in the first column of each table).
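As a sketch of the proposed layout, BENCHMARKS.md might look something like this (the machine fields and framework columns are placeholders to be filled in by whoever runs the benchmarks):

```markdown
Machine: <CPU model>, <RAM>, <OS>, rustc <version>

| Function            | rulinalg | LAPACK | SciPy |
|---------------------|----------|--------|-------|
| Matrix multiply     | …        | …      | …     |
| Matrix inversion    | …        | …      | …     |
| Eigendecomposition  | …        | …      | …     |
| SVD                 | …        | …      | …     |
```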
I'm labeling this as easy to highlight that there is room for some really valuable and lightweight contributions here.
Most of the functionality is not benchmarked right now, which leaves us open to performance regressions. It would be greatly appreciated to add simple benchmarks for any functionality that is currently missing them (which is most of it...).
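As a starting point, here is a minimal, self-contained timing sketch in plain Rust using `std::time::Instant`. It times a naive dense matrix multiply over row-major buffers as a stand-in; an actual contribution would call rulinalg's own operators (and ideally use the project's existing `cargo bench` harness) rather than this hand-rolled loop:

```rust
use std::time::{Duration, Instant};

// Naive dense matrix multiply over row-major `Vec<f64>` buffers.
// This is only a placeholder workload; real benchmarks should
// exercise rulinalg's `Matrix` operations instead.
fn matmul(a: &[f64], b: &[f64], n: usize) -> Vec<f64> {
    let mut c = vec![0.0; n * n];
    for i in 0..n {
        for k in 0..n {
            let aik = a[i * n + k];
            for j in 0..n {
                c[i * n + j] += aik * b[k * n + j];
            }
        }
    }
    c
}

fn main() {
    let n = 128;
    let a: Vec<f64> = (0..n * n).map(|i| (i % 7) as f64).collect();
    let b: Vec<f64> = (0..n * n).map(|i| (i % 5) as f64).collect();

    // Run a few iterations and report the best time, a common
    // micro-benchmark convention that filters out scheduling noise.
    let mut best: Option<Duration> = None;
    for _ in 0..5 {
        let start = Instant::now();
        let c = matmul(&a, &b, n);
        let elapsed = start.elapsed();
        // Keep the result observable so the work isn't optimized away.
        std::hint::black_box(&c);
        best = Some(best.map_or(elapsed, |prev| prev.min(elapsed)));
    }
    println!("matmul {}x{}: best of 5 = {:?}", n, n, best.unwrap());
}
```

A proper setup would likely use the `criterion` crate (or the nightly `#[bench]` harness) for statistical rigor, but even ad-hoc timings like this would catch gross regressions.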