ENH signal extraction interface #1647
Conversation
output = np.vstack((self.inputs.class_labels, region_signals.astype(str)))

# save output
np.savetxt(self.inputs.out_file, output, fmt=b'%s', delimiter='\t')
Could you add a header row to this file based on the labels?
yep, on line 71
gotcha!
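The pattern being discussed (stacking the labels as the first row so they become the TSV header) can be sketched as follows; the label names and signal values here are made up for illustration:

```python
import io

import numpy as np

class_labels = ['csf', 'gray', 'white']          # hypothetical label names
region_signals = np.arange(6.0).reshape(2, 3)    # hypothetical 2 timepoints x 3 regions

# Stacking the labels on top makes them the header row of the TSV
output = np.vstack((class_labels, region_signals.astype(str)))

buf = io.StringIO()
np.savetxt(buf, output, fmt='%s', delimiter='\t')
header = buf.getvalue().splitlines()[0]
assert header == 'csf\tgray\twhite'
```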
Current coverage is 70.98% (diff: 46.76%)

@@            master    #1647    diff @@
==========================================
  Files         1025     1027      +2
  Lines        51415    51865    +450
  Methods          0        0
  Messages         0        0
  Branches      7286     7385     +99
==========================================
+ Hits         36484    36816    +332
- Misses       13848    13960    +112
- Partials      1083     1089      +6
issue #1628
Two things I would consider:
- moving this to interfaces.nilearn (like we do with dipy, nipy, etc.)
- when dealing with weight maps, it would be nice to add an option to extract the signal from each weight map individually (practically, calling NiftiMapsMasker in a loop) instead of fitting a least-squares model with all of them. This would be important when trying to extract signals for the purpose of using them to denoise data in a later step. The current implementation (fitting the regression with all maps included) returns unique signals for each class, discarding the shared variance. This is good for calculating connectivity from the extracted time series, but when we want to use the same machinery to extract time series of structured noise to regress out later, discarding shared variance is suboptimal.
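The difference described above can be sketched in plain numpy (shapes and maps are hypothetical; the joint least-squares fit stands in for what a maps-style masker does with all maps at once):

```python
import numpy as np

rng = np.random.RandomState(0)
n_voxels, n_timepoints = 50, 20
data = rng.randn(n_voxels, n_timepoints)

# Two overlapping weight maps; voxels 20-29 are shared between them
maps = np.zeros((n_voxels, 2))
maps[:30, 0] = 1.0
maps[20:, 1] = 1.0

# Joint fit: one least-squares model with both maps as regressors.
# Shared variance is split, so each signal is unique to its map.
joint = np.linalg.lstsq(maps, data, rcond=None)[0]

# Per-map fit: one model per map (what looping a masker amounts to).
# Each signal keeps the variance of the shared voxels.
per_map = np.vstack([np.linalg.lstsq(maps[:, [i]], data, rcond=None)[0]
                     for i in range(maps.shape[1])])

# The two approaches disagree exactly because the maps overlap
assert joint.shape == per_map.shape == (2, n_timepoints)
assert not np.allclose(joint, per_map)
```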
    self.inputs.label_files))

if self.inputs.include_global:
    all_ones_mask = nb.Nifti1Image(np.ones(self._label_data_shape()), np.eye(4))
This will use voxels outside of the brain! Please use the combined mask from all of the classes instead of np.ones.
d'oh!
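A minimal numpy sketch of the suggested fix, assuming label_data is a 4-D array with one (possibly probabilistic) class map per volume; the maps here are synthetic:

```python
import numpy as np

# Hypothetical 4-D stack of two class probability maps
label_data = np.zeros((4, 4, 4, 2))
label_data[1:3, 1:3, 1:3, 0] = 0.7
label_data[2:4, 2:4, 2:4, 1] = 0.5

# Union of all class maps: a voxel is in-brain if any class covers it
global_mask = (label_data.sum(axis=3) > 0).astype(np.uint8)

# Unlike np.ones, this excludes voxels outside every class
assert 0 < global_mask.sum() < global_mask.size
```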
                'signals.tsv by default')
include_global = traits.Bool(False, usedefault=True, mandatory=False,
                             desc='If True, include an extra column '
                                  'labeled "global"')
Please add an explanation of how the values in this column are calculated.
I can't see Chris's most recent review -_- GitHub, get it together! Luckily I got an email with the actual review in it.
@chrisfilo take a look at the changes and tell me what you think
desc='If True, include an extra column '
     'labeled "global", with values calculated from the entire brain '
     '(instead of just regions). Only has effect with 4D probability '
     'maps.')
Why wouldn't we be able to do the same with a 3-d label image?
Good catch, that edit was supposed to be for incl_shared_variance
n_labels = label_data.get_data().shape[3]
if self.inputs.incl_shared_variance:  # 4d labels, independent computation
    for img in nli.iter_img(label_data):
        sortof_4d_img = nb.Nifti1Image(img.get_data()[:, :, :, np.newaxis], np.eye(4))
Why is this necessary? If so, please use the original image affine (img.affine) instead of np.eye(4).
Good to know about img.affine. NiftiMapsMasker complains if you input a 3-D nifti file, so that is my circuitous way of adding a 4th dimension (length 1). LMK if there is a native way to do it, I couldn't find one.
global_label_data = np.rint(global_label_data).astype(int).clip(0, 1)
global_label_data = global_label_data[:, :, :, np.newaxis]  # add back 4th dimension
global_label_data = nb.Nifti1Image(global_label_data, np.eye(4))
global_masker = nl.NiftiMapsMasker(global_label_data, detrend=self.inputs.detrend)
Since the global mask is binary, wouldn't it make more sense to use NiftiLabelsMasker here?
The result is exactly the same as far as I can tell, so I changed it. What is the reason to prefer NiftiLabelsMasker?
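Why the result comes out the same can be checked in plain numpy: for a single binary map, the least-squares fit that a maps-style masker performs reduces to the plain in-mask mean that a labels-style masker computes (flattened, hypothetical data):

```python
import numpy as np

rng = np.random.RandomState(0)
mask = np.array([1.0, 1.0, 0.0, 0.0, 1.0])   # binary global mask over 5 voxels
data = rng.randn(5, 10)                      # voxels x timepoints

# Labels-masker style: plain mean over in-mask voxels
labels_signal = data[mask > 0].mean(axis=0)

# Maps-masker style: least-squares fit of the map against the data;
# for one binary map, (m.T m)^-1 m.T X is exactly the in-mask mean
maps_signal = np.linalg.lstsq(mask[:, np.newaxis], data, rcond=None)[0][0]

assert np.allclose(labels_signal, maps_signal)
```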
            skipif)
from .. import nilearn as iface

no_nilearn = True
Why is this check here and not in the nilearn interfaces file? See the nipy or dipy interfaces.
The check should be used inside _run_interface to indicate to the user that they need to install nilearn. As it stands, using this interface without nilearn will result in an error deep in the code.
I was using the recent dvars testing file as a model (https://github.com/nipy/nipype/blob/master/nipype/algorithms/tests/test_confounds.py), which used nitime. I'm guessing that needs to be fixed too? Since this and the dvars interfaces have already been merged, I opened a separate issue for this: #1661
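The pattern described above (as used by the nipy/dipy interfaces) looks roughly like this; the class name and error message are illustrative, not the actual nipype code:

```python
have_nilearn = True
try:
    __import__('nilearn')
except ImportError:
    have_nilearn = False


class SignalExtraction:                      # illustrative stand-in for the interface
    def _run_interface(self, runtime=None):
        # Fail early with a clear message instead of deep inside nilearn calls
        if not have_nilearn:
            raise ImportError('SignalExtraction requires nilearn to be installed')
        return runtime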
no_nilearn = True
try:
    __import__('nilearn')
Why this form of import, out of curiosity?
AFAIK, import nilearn would bind the module name in this file, while __import__('nilearn') just checks that nilearn is importable without binding it. Since nilearn wasn't used at all in this file, that seemed more appropriate. I don't feel strongly about it though.
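For what it's worth, importlib offers a way to test importability without executing the module's top-level code at all (a side note, not part of the PR; both import nilearn and __import__('nilearn') actually run the import machinery):

```python
import importlib.util

# find_spec only locates the package on the import path;
# it does not execute the package's __init__
no_nilearn = importlib.util.find_spec('nilearn') is None
```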
@chrisfilo - on future merges, please update the CHANGES file. See #1658
- using nilearn
- incl. partial voluming
- supports multiple 3d label files
- option for demeaning
From code review: