Support pre-generated maps in CorrelationDecoder
#782
Conversation
Codecov Report
Patch coverage:

Additional details and impacted files:

```
@@            Coverage Diff            @@
##             main     #782     +/-  ##
=========================================
+ Coverage   88.74%   88.92%   +0.17%
=========================================
  Files          41       41
  Lines        4825     4856      +31
=========================================
+ Hits         4282     4318      +36
+ Misses        543      538       -5
=========================================
```

... and 2 files with indirect coverage changes.

☔ View full report in Codecov by Sentry.
@adelavega I think this is ready for review. Thanks!
@JulioAPeraza can we test this PR on the pre-generated gene decoding maps? It would be interesting to see whether they give the same results that NeuroVault would previously have spit out. Let's not let that prevent this merge, though.
Looks good, but now that I've thought about it more, my only concern is what happens when somebody is using `CorrelationDecoder` with prefitted images and decoding various images. In that case, it doesn't seem that helpful to keep returning a `MetaResult` with all the images in `transform`.

We could either:
- Only return the `df` in `transform`, assuming it's rare to want to save it alongside the `MetaResult`, or
- Make it configurable what `transform` should return (probably with `df` as the default, in my opinion). This adds a small amount of complexity.
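The second option could be sketched roughly as follows. This is an illustrative stand-in, not the actual NiMARE API: the function name, the `maps` dict of feature name to 1D voxel array, and the `return_maps` flag are all assumptions for the sake of the example.

```python
import numpy as np
import pandas as pd


def transform(maps, img_data, return_maps=False):
    """Correlate a target image against pre-fitted feature maps.

    Sketch of the configurable-return idea: ``maps`` maps feature names
    to 1D voxel arrays, and ``return_maps`` controls whether the stored
    maps come back alongside the DataFrame (default: DataFrame only).
    """
    corrs = {
        feature: np.corrcoef(map_data, img_data)[0, 1]
        for feature, map_data in maps.items()
    }
    df = pd.DataFrame.from_dict(corrs, orient="index", columns=["r"])
    return (df, maps) if return_maps else df
```

With the default, callers who only want the correlation table never pay for (or hold references to) the stored images.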
```python
images.append(np.squeeze(self.masker.transform(img)))

maps = {feature: image for feature, image in zip(features, images)}
self.results_ = MetaResult(self, mask=self.masker, maps=maps)

def transform(self, img):
```
My only question is whether we should make this configurable and allow only the data frame to be returned. This would prevent a lot of copies of the `MetaResult` object and images if somebody is doing multiple decodings and doesn't want to save the images.

Also, what happens when you save a `MetaResult` object? Do the images get saved, or also the `tables`?
Sure!
I agree, I like option 1. It is true that users would prefer to access the DataFrame directly instead of getting the meta-analytic maps every time they run the transformer. And with that, we avoid the breaking change in this PR.
I was wondering if at some point it would be a good idea to allow the parameter
The `MetaResult` object can be saved using pickle. The images and tables are saved if we call
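As a quick illustration of the pickling point, a generic stand-in (not the actual `MetaResult` class) shows that pickle serializes the whole object graph, so attributes holding maps and tables round-trip with the object:

```python
import pickle


class ResultLike:
    """Generic stand-in for a results object bundling maps and tables."""

    def __init__(self, maps, tables):
        self.maps = maps
        self.tables = tables


result = ResultLike(maps={"featureA": [0.1, 0.2]}, tables={"r": [0.5, -0.3]})

# pickle serializes the full object, including the maps and tables
# attributes, so both survive a save/load round trip.
blob = pickle.dumps(result)
restored = pickle.loads(blob)
```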
Great! Let's go with #1. I would also be open to adding a list as input.
LGTM!
Anything holding this up?
I wasn't sure I was allowed to merge this PR, because it has one change requested.
Closes #781.

Changes proposed in this pull request:
- `load_imgs`: Load pre-generated meta-analytic maps for decoding.
- `results_`: `MetaResult` object containing masker, meta-analytic maps, and tables. This attribute replaces `masker`, `features_`, and `images_`.
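A minimal end-to-end sketch of the workflow these changes describe. This is a toy stand-in, not NiMARE itself: the `load_imgs` and `transform` names mirror the PR description, but the class, the dict-of-arrays input, and the plain-dict return value are assumptions for illustration.

```python
import numpy as np


class TinyDecoder:
    """Toy stand-in for the decoder workflow described above."""

    def load_imgs(self, img_dict):
        # Store pre-generated feature maps as 1D arrays, keyed by feature.
        self.maps_ = {k: np.asarray(v, dtype=float) for k, v in img_dict.items()}

    def transform(self, img):
        # Correlate the input image with each stored map.
        img = np.asarray(img, dtype=float)
        return {
            feature: float(np.corrcoef(map_data, img)[0, 1])
            for feature, map_data in self.maps_.items()
        }


decoder = TinyDecoder()
decoder.load_imgs({"reward": [0.0, 1.0, 2.0, 3.0], "fear": [3.0, 2.0, 1.0, 0.0]})
corrs = decoder.transform([0.0, 1.0, 2.0, 3.0])
```

The point of the `load_imgs` path is that no meta-analysis needs to be fit: pre-generated maps go straight into the decoder's state and decoding is just a correlation per feature.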