[RTM] Resource annotation #746
Conversation
This PR will likely also make the changes necessary to close #712.
Yeah. And I'm hoping your profiling work can give us good estimates of the memory usage that we can just run through and plug in.
Force-pushed from 2fdad67 to 7bb6f49
I can confirm lots of MemoryErrors for specific datasets on LS5 (less memory than Stampede2).
I'm brewing a new singularity image with these annotations to run on LS5.
Some nodes might be overestimated, but that should be ok. Let's merge this and see what happens.
There are a few num_threads inputs set that could be left to the nodes.
fmriprep/workflows/util.py (outdated diff)
@@ -299,7 +298,7 @@ def init_bbreg_wf(use_bbr, bold2t1w_dof, omp_nthreads, name='bbreg_wf'):
    mri_coreg = pe.Node(
        MRICoregRPT(dof=bold2t1w_dof, sep=[4], ftol=0.0001, linmintol=0.01,
                    num_threads=omp_nthreads, generate_report=not use_bbr),
Can drop num_threads here.
fmriprep/workflows/bold.py (outdated diff)
        MultiApplyTransforms(interpolation="LanczosWindowedSinc", float=True),
        name='bold_to_mni_transform', mem_gb=8, n_procs=omp_nthreads)
    # bold_to_mni_transform.terminal_output = 'file'
=======
    bold_to_mni_transform = pe.Node(MultiApplyTransforms(
        interpolation="LanczosWindowedSinc", float=True, num_threads=omp_nthreads,
This is re-adding num_threads to bold_to_mni_transform.interface.
Sorry about that. Lazy merge with badly resolved conflicts.
fmriprep/workflows/bold.py (outdated diff)
    # bold_to_t1w_transform.terminal_output = 'file'  # OE: why this?
    bold_to_t1w_transform = pe.Node(MultiApplyTransforms(
        interpolation="LanczosWindowedSinc", float=True, copy_dtype=True,
        num_threads=omp_nthreads),
num_threads re-added to bold_to_t1w_transform.interface.
Let me just push a quick fix...
Back-burner work, but we should be specifying resources better. I'm consistently seeing my machine dip into swap while claiming it has 13/14 GiB of memory free. (Thanks to @oesteban for the status updates that make this clear.)
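The memory annotations discussed above have to come from somewhere; a plausible starting point is a back-of-envelope estimate based on the size of a node's largest input image. This is a hypothetical sketch, not code from this PR: the function name `estimate_mem_gb`, the default number of working copies, and the fixed overhead term are all assumptions for illustration.

```python
def estimate_mem_gb(shape, dtype_bytes=4, n_copies=3, overhead_gb=0.5):
    """Rough peak-memory estimate for a node, in GiB.

    Assumes the tool holds a few (n_copies) working copies of its largest
    input array in memory, plus a fixed process overhead. All of these
    defaults are illustrative guesses, not measured values.
    """
    voxels = 1
    for dim in shape:
        voxels *= dim
    return n_copies * voxels * dtype_bytes / 1024**3 + overhead_gb

# e.g. a 4D BOLD series of 96 x 96 x 60 voxels over 400 volumes, float32:
# roughly 3 GiB under these assumptions
est = estimate_mem_gb((96, 96, 60, 400))
```

In practice one would calibrate the copy count and overhead per interface from resource-profiler traces rather than guess them.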
This starts with:
- the num_threads input argument, so that it's automatically synced with Node.n_procs (see [ENH] New ResourceMonitor (replaces resource profiler) nipy/nipype#2200)
- dropping direct settings of inputs.num_threads, to let Node.n_procs manage the syncing

Todo:
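The syncing idea above can be sketched in miniature. This is a hypothetical model, not actual nipype code: the `Interface` and `Node` classes below are minimal stand-ins, showing why setting n_procs on the node (and letting it propagate) beats setting num_threads on the interface in two places.

```python
class Interface:
    """Stand-in for a nipype interface with a num_threads input."""
    def __init__(self):
        self.num_threads = 1

class Node:
    """Stand-in for nipype's pe.Node: n_procs is the single source of truth."""
    def __init__(self, interface, name, n_procs=1, mem_gb=0.25):
        self.interface = interface
        self.name = name
        self.mem_gb = mem_gb
        self.n_procs = n_procs  # the setter below syncs the interface

    @property
    def n_procs(self):
        return self._n_procs

    @n_procs.setter
    def n_procs(self, value):
        self._n_procs = value
        # Keep the interface's num_threads in lockstep so the scheduler's
        # resource accounting and the tool's actual thread usage agree.
        self.interface.num_threads = value

node = Node(Interface(), name='bold_to_t1w_transform', n_procs=8, mem_gb=8)
assert node.interface.num_threads == 8  # synced at construction
node.n_procs = 4
assert node.interface.num_threads == 4  # stays synced on update
```

With this shape, a workflow author never touches inputs.num_threads directly, so the badly-resolved-merge regressions flagged earlier in this review could not reintroduce a mismatch.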