Sparse Reconciliation #210
Thanks, @mcsqr. From the CI test run, we will need to pin the scikit-learn version to get the updated sparse OneHotEncoder. This would need to be done here:
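For context, the sparse one-hot encoding that the updated encoder provides can be emulated with plain scipy. The sketch below is purely illustrative (the category data and variable names are made up, not the project's aggregation code); it shows the kind of sparse indicator matrix a sparse-output OneHotEncoder would produce:

```python
# Hypothetical sketch: build a sparse one-hot (indicator) matrix by hand,
# mirroring what a sparse-output OneHotEncoder yields for one category column.
import numpy as np
from scipy import sparse

categories = np.array(["a", "b", "a", "c"])          # illustrative labels
labels, codes = np.unique(categories, return_inverse=True)

# One row per sample, one column per category; only n nonzeros are stored.
one_hot = sparse.csr_matrix(
    (np.ones(codes.size), (np.arange(codes.size), codes)),
    shape=(codes.size, labels.size),
)

print(one_hot.toarray())
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [1. 0. 0.]
#  [0. 0. 1.]]
```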
CI tests are fixed.
Awesome @mcsqr,
My most important comment is on the declaration of S_sparse in core.py.
Thanks a lot for your contribution.
This PR uses sparse matrices to reduce the computation time and memory footprint of a subset of the reconciliation methods.
The PR contains MinTraceSparse, which supports the diagonal methods (ols, wls_struct, and wls_var); the speed-up is very considerable for datasets of ~2k-20k time series (the largest I have checked so far).
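To illustrate the idea behind the diagonal methods, here is a minimal sketch of MinT-style reconciliation with a sparse summing matrix, keeping everything sparse except a small bottom-level system. The tiny hierarchy, variable names, and use of ols weights (W = I) are assumptions for the example, not the PR's actual code:

```python
# Sketch of diagonal MinT (ols) reconciliation with a sparse summing matrix S.
# Hierarchy and names are illustrative, not the MinTraceSparse implementation.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Tiny hierarchy: total = a + b, so S maps 2 bottom series to 3 rows.
S = sparse.csr_matrix(np.array([[1.0, 1.0],
                                [1.0, 0.0],
                                [0.0, 1.0]]))
y_hat = np.array([10.0, 6.0, 5.0])       # incoherent base forecasts

# Diagonal weights; W = I corresponds to ols. Kept sparse throughout.
W_inv = sparse.identity(3, format="csr")

# P = (S' W^-1 S)^-1 S' W^-1: only a small b x b system is ever solved,
# where b is the number of bottom series; no n x n dense matrix is formed.
A = (S.T @ W_inv @ S).tocsc()
rhs = S.T @ (W_inv @ y_hat)
y_bottom = spsolve(A, rhs)               # reconciled bottom-level forecasts
y_tilde = S @ y_bottom                   # coherent forecasts for all levels

print(y_tilde)                           # top row equals the sum of the bottom rows
```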
It also contains BottomUpSparse, which doesn't really show a speed-up until ~20k series but should hopefully help with even larger datasets.
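Bottom-up reconciliation is even simpler: the bottom-level forecasts are just aggregated up through the sparse summing matrix. The following sketch is illustrative (again with an assumed toy hierarchy), not the BottomUpSparse code itself:

```python
# Minimal bottom-up sketch with a sparse summing matrix; the hierarchy and
# variable names are illustrative, not the actual BottomUpSparse code.
import numpy as np
from scipy import sparse

# total = a + b; one row per series, one column per bottom series.
S = sparse.csr_matrix(np.array([[1.0, 1.0],
                                [1.0, 0.0],
                                [0.0, 1.0]]))
y_hat_bottom = np.array([6.0, 5.0])  # base forecasts for the bottom level only

# Bottom-up: aggregate bottom forecasts up the hierarchy; the sparse
# matrix-vector product avoids instantiating any dense n x n matrix.
y_tilde = S @ y_hat_bottom

print(y_tilde)  # [11.  6.  5.]
```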
Note: this version still doesn't guarantee that no O(N^2) dense matrix is instantiated; this could be checked and improved by testing on even larger datasets. Some docs and tests are also likely still missing, TBD.