
Feature caching mechanism in LLLA #170

Merged: 3 commits merged into main from llla-efficient on Jun 11, 2024
Conversation

@wiseodd (Collaborator) commented on Apr 25, 2024

Closes #161.

@runame, as discussed in the other PR. Please wait until #144 is merged, though.
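For anyone skimming this PR: the idea, roughly, is that in a last-layer Laplace approximation (LLLA) only the last layer is probabilistic, so the backbone features φ(x) can be computed once and reused across all Monte Carlo weight samples instead of re-running the whole network for every sample. Below is a minimal sketch of that caching pattern; the class and attribute names are hypothetical illustrations, not the laplace library's actual API.

```python
import torch

class CachedLLLAPredictive:
    """Sketch of feature caching for last-layer Laplace predictive samples.

    Hypothetical API for illustration only, not the laplace library's
    actual implementation.
    """

    def __init__(self, feature_extractor, w_map, posterior_scale):
        self.feature_extractor = feature_extractor  # fixed backbone, x -> phi(x)
        self.w_map = w_map                          # MAP last-layer weights, (out, feat)
        self.posterior_scale = posterior_scale      # Cholesky factor L of the weight covariance
        self._cache = None                          # (input, features) of the last call

    @torch.no_grad()
    def predictive_samples(self, x, n_samples=100):
        # Compute phi(x) once and cache it; the expensive backbone forward
        # pass is then shared by every weight sample.
        if self._cache is None or self._cache[0] is not x:
            self._cache = (x, self.feature_extractor(x))  # features: (batch, feat)
        phi = self._cache[1]

        out, feat = self.w_map.shape
        # Sample W_s ~ N(w_map, L L^T) in flattened form, then reshape.
        eps = torch.randn(n_samples, out * feat)
        w_samples = self.w_map.flatten() + eps @ self.posterior_scale.T
        w_samples = w_samples.view(n_samples, out, feat)

        # f_s(x) = phi(x) W_s^T for each sample s: (n_samples, batch, out)
        return torch.einsum('bf,sof->sbo', phi, w_samples)
```

Drawing, say, 100 predictive samples then costs a single backbone forward pass plus 100 cheap linear maps over the cached features, rather than 100 full forward passes, which is presumably the speedup #161 is after.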

@wiseodd added the "enhancement" (New feature or request) label on Apr 25, 2024
@wiseodd added this to the 0.2 milestone on Apr 25, 2024
@wiseodd requested a review from @runame on Apr 25, 2024 at 16:45
@wiseodd self-assigned this on Apr 25, 2024
@wiseodd changed the base branch from main to mc-subset2 on Apr 27, 2024 at 17:26
Base automatically changed from mc-subset2 to main on Apr 27, 2024 at 18:53
@runame (Collaborator) left a comment

LGTM

2 resolved review threads on laplace/lllaplace.py (outdated)
@wiseodd merged commit 425a79d into main on Jun 11, 2024
@wiseodd deleted the llla-efficient branch on Jun 11, 2024 at 13:51
Labels: enhancement (New feature or request)
Projects: None yet
Development: Successfully merging this pull request may close these issues: Efficient LLLA predictive samples
2 participants