Executing the multiview-deconvolution step on a GPU cluster without GUI #24
Dear Tomer,
So my first question is:
Can I set all the parameters of the deconvolution stage interactively on the currently active server, and then send them to the multi-GPU cluster through the command line? Other solutions are also welcome under these settings.
In case it is important, both the currently active server and the multi-GPU server have access to the hard drive that contains the time-lapse, the .xml file, and so on.
Yes, this is possible. In fact (though it may be somewhat outdated by now), I collaborated on a project where we automated a full deconvolution pipeline on our HPC cluster using Snakemake.
Here is the link to the paper:
http://bioinformatics.oxfordjournals.org/content/early/2015/12/30/bioinformatics.btv706.long
And here is the link to the wiki:
http://imagej.net/Automated_workflow_for_parallel_Multiview_Reconstruction
And here is the link to the snakemake code:
https://github.com/mpicbg-scicomp/snakemake-workflows/tree/master/spim_registration
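For illustration, a minimal Python sketch of the "set the parameters interactively once, then drive headless runs from the command line" idea. The config keys (fiji_executable, xml_file, first_timepoint, last_timepoint, iterations), the macro file deconvolve_one_tp.ijm, and its argument string are placeholder assumptions, not the actual options of the plugin or of the linked Snakemake workflow; the workflow's own config.yaml documents the real keys.

```python
# Sketch only: config keys, the macro file, and its argument format are
# hypothetical placeholders, not the plugin's real option names.
import yaml  # PyYAML


def build_commands(config_path):
    """Build one headless Fiji call per time point from a config.yaml."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    commands = []
    for tp in range(cfg["first_timepoint"], cfg["last_timepoint"] + 1):
        # Fiji/ImageJ accepts '--headless -macro <file> <argument-string>';
        # the macro and its argument string are assumptions here.
        args = "xml={xml} timepoint={tp} iterations={it}".format(
            xml=cfg["xml_file"], tp=tp, it=cfg["iterations"])
        commands.append([cfg["fiji_executable"], "--headless",
                         "-macro", "deconvolve_one_tp.ijm", args])
    return commands


if __name__ == "__main__":
    # Each command could be run directly or handed to a cluster scheduler.
    for cmd in build_commands("config.yaml"):
        print(" ".join(cmd))
```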
My second question is:
Is there a way to parallelize the deconvolution across several hundred GPUs (or even fewer)? E.g., each GPU would be responsible for a single time point.
Not that I am aware of. There is a stub project on accelerating the deconvolution (it achieved a 7x speed-up over the Java+CUDA implementation two years ago), but it is unfinished due to a lack of resources and demand. If you convince Pavel Tomancak (MPI CBG, Dresden, Germany) et al. to provide me with these resources, I could continue working on it and speed up the deconvolution. Making it multi-GPU aware is possible but challenging (depending on what you have in mind); that would require another stack of resources from Pavel et al.
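At the scheduler level, the "one GPU per time point" idea from the question can still be approximated by submitting one single-GPU job per time point, which is essentially what the Snakemake workflow linked above does; this does not make a single deconvolution multi-GPU aware. A minimal sketch, assuming a SLURM cluster and a hypothetical per-timepoint driver script run_deconv.sh:

```python
# Scheduler-level sketch only: it submits one single-GPU job per time point;
# it does not parallelize a single deconvolution across GPUs.
# The sbatch flags assume SLURM; run_deconv.sh is a hypothetical driver script.
import subprocess


def submit_per_timepoint_jobs(timepoints):
    for tp in timepoints:
        cmd = [
            "sbatch",
            "--gres=gpu:1",                  # request exactly one GPU per job
            "--job-name", f"deconv_tp{tp}",
            "--wrap", f"bash run_deconv.sh {tp}",
        ]
        subprocess.run(cmd, check=True)


if __name__ == "__main__":
    submit_per_timepoint_jobs(range(100))    # e.g. time points 0..99
```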
Best,
Peter
Dear Peter,
Thank you for your prompt reply.
Best regards,
Hi Peter,
Following your recommendation from a few weeks ago (please see above), together with a colleague I am currently working on setting up the multiview-reconstruction pipeline on a GUI-less GPU cluster. As far as I understand, what the user needs to do is run the pipeline on a small sample dataset (possibly on the reference time point) on a platform with a GUI, write down all the relevant parameters, and then use these parameters to write the 'config.yaml' file that is sent to the GPU cluster. So far, am I right?
To make this a bit friendlier for our potential users, I am now writing a MATLAB script that, after the user runs the pipeline on the small sample dataset, takes all the parameters from Fiji and builds 'config.yaml' automatically. Initially I tried to extract all the parameters from the .xml file and Fiji's Log file, but I found that recording the execution of the pipeline as a macro in Fiji gives much easier access to most, if not all, of the parameters used. My questions are:
Kind regards,
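As an illustration of the "recorded macro to config.yaml" idea described above, here is a minimal sketch in Python (the poster mentions MATLAB; the logic would be the same there). The macro file name, the recorded command name, and the option keys are assumptions; the exact run(...) call emitted by Fiji's macro recorder will differ in its command string and options.

```python
# Sketch of turning a recorded Fiji macro into a config.yaml.
# The macro file, the command name, and the option keys are hypothetical.
import re
import yaml  # PyYAML


def macro_options_to_dict(macro_path, command="Fuse/Deconvolve Dataset"):
    """Extract the key=value option string of one recorded run(...) call."""
    text = open(macro_path).read()
    match = re.search(r'run\("%s",\s*"([^"]*)"\)' % re.escape(command), text)
    if match is None:
        raise ValueError(f"Command {command!r} not found in {macro_path}")
    options = {}
    # Recorded options are space-separated key=value tokens; bracketed values
    # may contain spaces, e.g. select_xml=[/data/dataset.xml] iterations=10
    for key, bracketed, plain in re.findall(
            r'(\w+)=(?:\[([^\]]*)\]|(\S+))', match.group(1)):
        options[key] = bracketed if bracketed else plain
    return options


if __name__ == "__main__":
    params = macro_options_to_dict("recorded_pipeline.ijm")
    with open("config.yaml", "w") as f:
        yaml.safe_dump(params, f, default_flow_style=False)
```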
Hi,
I would like to use the plugin for deconvolving light-sheet time-lapses that can reach 1 TB. I am currently using the plugin on a server with two GPUs, but I would like to move to another cluster with several hundred GPUs, which is of course preferable for such large data. The problem is that the cluster I would like to move to (the one with several hundred GPUs) does not have any GUI, which is a problem since there are interactive steps on the way to executing the deconvolution itself. I don't mind doing all the stages up to the deconvolution (bead identification, registration, ...) on the cluster I have been working on so far (which does have a GUI) and running just the deconvolution itself on the big GPU cluster.
So my first question is:
Can I set all the parameters of the deconvolution stage interactively on the currently active server, and then send them to the multi-GPU cluster through the command line? Other solutions are also welcome under these settings.
In case it is important, both the currently active server and the multi-GPU server have access to the hard drive that contains the time-lapse, the .xml file, and so on.
My second question is:
Is there a way to parallelize the deconvolution across several hundred GPUs (or even fewer)? E.g., each GPU would be responsible for a single time point.
Many thanks,
Tomer Stern