Shrink mask after MBM #455
(Also see #352.) Right now you need to do atlas-based segmentation, either using MBM.py's built-in capability to run this or separately using MAGeT.py. See https://wiki.mouseimaging.ca/display/MICePub/Mouse+Brain+Atlases. I guess one solution would be to provide 'tight' masks with the initial model and not just with our atlases, and then either ship these expanded masks as well or recreate them on the fly based on a specified amount of expansion.
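As a rough illustration of the "expand on the fly" idea, here is a minimal sketch using mincmorph from minc-tools; the filenames and the number of dilation steps are hypothetical:

```
# Dilate a tight binary mask; each 'D' in -successive is one dilation step.
mincmorph -clobber -successive DDD tight_mask.mnc expanded_mask.mnc

# Shrinking an overly generous mask goes the other way, using 'E' (erosion) steps.
mincmorph -clobber -successive EEE big_mask.mnc shrunken_mask.mnc
```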
Hi @bcdarwin, thanks for the quick reply. Let me check if I've understood correctly: having run MBM.py, I now need to either use MBM.py's built-in atlas-based segmentation or run MAGeT.py separately.
I have, in fact, run MAGeT.py separately. Does running this yield a relevant mask that I can then propagate to voxelwise statistics? If so, where is that mask? I can't find it. Also, I do recognize that much of the power of MBM + MAGeT is in the atlas parcellation, and I plan to use that as well. I'm just trying to get a good grasp on what I can get out of this software. Thanks for a tremendous package, which I think is going to fill a major need in our analyses.
If you've run MAGeT.py on your newly constructed template (filename something like …)
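One way to turn the MAGeT label volume for that nlin-3 template into a brain mask is simply to binarize it, so that any voxel carrying a nonzero label becomes mask; a minimal sketch with hypothetical filenames:

```
# Every voxel with a nonzero atlas label becomes 1; background stays 0.
minccalc -clobber -byte -expression 'A[0] > 0.5 ? 1 : 0' \
  nlin-3_labels.mnc nlin-3_brain_mask.mnc
```

The resulting mask can then be passed to the RMINC voxel-wise calls in place of the overly generous nonlinear mask.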
Sounds good. I will see if I can make sense of all that with my files and let you know. Thanks again for your help.
Nice. That mask for the nlin-3 template worked great. Point of interest: after running the LSQ6, LSQ12, and nlin parts of MBM, why is running MAGeT on the LSQ6 files recommended over running it on the nlin-3 template, as in the test data?
Just following up on my last question @bcdarwin. I've run MAGeT on the nlin-3 template and on the LSQ6 files and get markedly different outcomes when importing with anatGetAll on the same mice. Looking at subcortical ROIs (hippocampus), if there is damage to other areas (olfactory bulb, cerebellum, nicks in other areas of the cortex from brain extraction), could that dramatically affect the Jacobians and give almost opposite findings?
That's possible but quite unlikely.
First, I'd visually inspect the segmentation results (template and individual) for the hippocampus and check the nonlinear registration results between the subjects and template, generally by overlaying the two using some viewer.
If those steps seem to have worked well, would you post the code you ran?
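For those visual checks the standard MINC viewers are sufficient; a couple of invocations with hypothetical filenames:

```
# Overlay the MAGeT labels on the template to inspect the hippocampal segmentation.
Display nlin-3.mnc -label nlin-3_labels.mnc

# Load the template and a resampled subject together to judge the nonlinear registration.
register nlin-3.mnc subject_resampled_to_nlin.mnc
```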
I'll look into stuff and get back to you. Thanks for confirming what I suspected, specifically that they shouldn't differ substantially.
It seems like recent ANTS versions (2.3.x and later?) are broken (see ANTsX/ANTs#1076), which could be part of your problem. I guess we need to switch to antsRegistration immediately, but you could try an older ANTS (say 2.2.0) in the meantime.
I am currently using ANTs 2.2.0 in the Singularity image containing pydpiper. I've adapted the code from the test dataset, but I'm certain there's room for better optimization. We are looking at a mouse model of AD in older mice (18 months-ish), so I anticipate some oddities in the anatomy. However, it looks like a couple of things are true:
Here's the code I'm running for MBM:
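(A representative MBM.py call in the style of the pydpiper test-data script; the pipeline name, init model path, and input files below are placeholders rather than the actual command:)

```
# Hypothetical MBM.py invocation patterned on the pydpiper test-data script;
# names and paths are placeholders, not the command actually used.
MBM.py --pipeline-name=AD_cohort \
       --num-executors=8 \
       --init-model=/path/to/basket_mouse_brain.mnc \
       --lsq6-large-rotations \
       --files img_*.mnc
```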
and for MAGeT on the template:
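(Again a placeholder rather than the actual command: a MAGeT.py call on the single newly built template, with a hypothetical atlas-library path and output naming:)

```
# Hypothetical MAGeT.py invocation segmenting only the new template.
# With a single input image, --no-pairwise skips the pairwise template registrations.
MAGeT.py --pipeline-name=maget_template \
         --num-executors=4 \
         --atlas-library=/path/to/atlas_library \
         --no-pairwise \
         --files AD_cohort_nlin/AD_cohort-nlin-3.mnc
```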
When I run MAGeT for the LSQ6 files, the no-pairwise flag should be removed, yes?
I've done a fair amount of experimentation with antsRegistration; this script has most of my experiments distilled into a tool which generates a "bestlinreg"-like output, and a pyramid for SyN. Edit: link: https://github.com/CoBrALab/minc-toolkit-extras/blob/master/ants_generate_iterations.py
@gdevenyi did you mean to link to something in https://github.com/CoBrALab/antsRegistration-MAGeT or in the ANTS repo itself?
Probably, since this disables the intermediate template registrations. (In general this won't have a huge effect, but it could in this case with only 1 initial template.)
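For the LSQ6 files the earlier sketch would simply drop that flag (paths again hypothetical):

```
# Hypothetical MAGeT.py invocation on the LSQ6-resampled subjects, keeping pairwise registrations.
MAGeT.py --pipeline-name=maget_lsq6 \
         --num-executors=8 \
         --atlas-library=/path/to/atlas_library \
         --files AD_cohort_lsq6/*_lsq6.mnc
```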
@bcdarwin oops, apologies, it seems I forgot to copy-paste:
So the Dorr atlas definitely worked better for segmenting the LSQ6 files and even worked well for brains with some odd anatomical features (cortical damage, possibly induced during brain extraction; see below for screenshots). The olfactory bulb segmentations are still wonky (images below are pretty representative of the olfactory segmentations across the dataset) but given that our interest is subcortical structures (hippocampus, in particular), I think that I can get away with poorer segmentation there. So, to that end, I have the following questions:
Thanks @bcdarwin and @gdevenyi for all your help. I'm new to animal imaging and this software has been tremendously helpful for me.
For improving the DSURQE registration, did you try re-running without …? I'm not sure why the olfactory bulb registration is failing so badly, but if the images above are representative brains then it's possibly just due to the extensive cortical damage? (We don't remove the skull for this reason.)
I did try without the … option. So in the short term, I will probably use the Dorr atlas segmentation. I'm definitely advocating for not removing the skull the next time around. Any other suggestions on processing with MBM would be most welcome. Thanks again.
Something I just thought of and maybe @gdevenyi has the answer... Our imaging is at 75 micron resolution and I'm compelling MBM to output at that resolution. Does MAGeT do any specific resampling, or do I need to resample my atlases to the same resolution first?
MAGeT will segment the images in whatever native resolution you provided.
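If in doubt about what resolution a given volume (atlas or subject) is actually at, the voxel step sizes can be read straight from the MINC headers; a quick check with a hypothetical filename:

```
# Print the voxel step sizes in mm; 0.075 corresponds to 75 microns.
mincinfo -attvalue xspace:step -attvalue yspace:step -attvalue zspace:step some_volume.mnc
```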
Great. Thanks @gdevenyi
Greetings,
I recently found this tool (thanks to a Neurostars answer by @gdevenyi) and have been testing it with a mouse model of neurodegeneration (ex-vivo imaging, 75 microns). I based things on the pydpiper test-data script using the 40 micron basket_mouse_brain.mnc and mask file. Overall, it appears that everything has run well (mouse brains are correctly oriented and aligned, and the nlin-3.mnc file looks reasonable), but I'm sure there is some room for optimization (like bootstrapping for a study-specific template; however, many of our brains have some damage/degeneration, so identifying a good bootstrap target has been difficult).
I was hoping to test out the voxel-wise analyses described in the RMINC vignettes, but the nonlinear mask is really large (see attached, overlaid on the nlin-3.mnc brain), so voxel-wise analyses are finding voxels outside the brain. Is there any way to get a better mask out of this?
This is my first foray into the MINC commands. So far everything has seemed reasonably straightforward, but I haven't found a way to do this. I'm running pydpiper in a Singularity image.
Thanks in advance for any help you can be.