
Executing the multiview-deconvolution step on a GPU cluster without GUI #24

Open
SternTomer opened this issue Aug 23, 2017 · 3 comments


@SternTomer

Hi,

I would like to use the plugin for deconvolving light-sheet time-lapses that can reach 1 TB. I'm currently running the plugin on a server with two GPUs, but I would like to move to another cluster that has several hundred GPUs, which is of course preferable with data this large. The problem is that this target cluster (the one with the several hundred GPUs) has no GUI, and several of the steps on the way to the deconvolution itself are interactive. I don't mind performing all the stages up to the deconvolution (bead identification, registration, ...) on the server I've been working on so far (which does have a GUI) and running only the deconvolution itself on the big GPU cluster.

So my first question is:
Can I set all the parameters of the deconvolution stage interactively on the currently active server, and then send them to the multi-GPU cluster through the command line (something along the lines of the sketch below)? Other solutions are also welcome under these constraints.
In case it matters, both the currently active server and the multi-GPU cluster have access to the disk that holds the time-lapse, the .xml file, and so on.
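To make the question concrete, below is roughly the kind of call I imagine sending to the cluster. This is only a sketch: the Fiji path, the macro file, and its argument string are placeholders I made up; `--headless` and `-macro` are Fiji's standard command-line options.

```bash
# Hypothetical headless deconvolution call; the Fiji path, the macro
# file, and the argument string are placeholders, not real names.
/path/to/Fiji.app/ImageJ-linux64 --headless --console \
  -macro deconvolve.ijm "xml=/data/dataset.xml timepoint=0"
```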

My second question is:
Is there a way to parallelize the deconvolution across several hundred GPUs (or even fewer)? E.g., each GPU would be responsible for a single time point, roughly as in the sketch below.
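For illustration, this is the scheme I have in mind, written as a SLURM array job. It is only a sketch under the assumption that the cluster runs SLURM; the macro name and all paths are placeholders.

```bash
#!/bin/bash
# One array task per time point, one GPU per task (SLURM assumed).
#SBATCH --job-name=mv-decon
#SBATCH --array=0-399
#SBATCH --gres=gpu:1
#SBATCH --time=04:00:00

TP=${SLURM_ARRAY_TASK_ID}
# deconvolve_tp.ijm is a hypothetical macro that deconvolves exactly
# one time point of the dataset named in its argument string.
/path/to/Fiji.app/ImageJ-linux64 --headless --console \
  -macro deconvolve_tp.ijm "xml=/data/dataset.xml timepoint=${TP}"
```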

Many thanks,
Tomer Stern

@psteinb
Collaborator

psteinb commented Aug 24, 2017 via email

@SternTomer
Author

Dear Peter,

Thank you for your prompt reply.
By 'parallelization' I actually meant the simpler version, which is to assign each GPU a single time point independently - exactly what the paper you attached addresses. So everything I need is already implemented.
Regarding the 'potentially outdated' remark, are there any particular limitations or obstacles I should keep an eye on? I'm currently using the latest version of the multiview-reconstruction Fiji plugin, not SPIM-registration.

Best regards,
Tomer

@SternTomer
Author

Hi Peter,

Following your recommendation from a few weeks ago (please see above), I'm currently working with a colleague on setting up the multiview-reconstruction pipeline on a GUI-less GPU cluster.

As far as I understand, the user needs to run the pipeline on a small sample dataset (possibly just the reference time point) on a platform with a GUI, write down all the relevant parameters, and then use those parameters to write the 'config.yaml' file that is sent to the GPU cluster (roughly as in the sketch below). Am I right so far?
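Concretely, I picture the hand-off looking something like the following. The YAML keys below are guesses modeled loosely on the example file, not the authoritative schema, and the sbatch resources are placeholders.

```bash
# Write the parameters recorded on the GUI machine into config.yaml;
# the key names here are illustrative guesses, not the real schema.
cat > config.yaml <<'EOF'
common:
  hdf5_xml_filename: '"dataset.xml"'
  ntimepoints: 400
deconvolution:
  iterations: 15
EOF

# Drive the cluster run; --jobs and --cluster are standard snakemake options.
snakemake --jobs 400 --cluster "sbatch --gres=gpu:1 --time=04:00:00"
```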

To make this a bit friendlier for our potential users, I'm now writing a MATLAB script that, after the user runs the pipeline on the small sample data, takes all the parameters from Fiji and builds 'config.yaml' automatically. Initially I tried to extract the parameters from the .xml file and Fiji's Log window, but I found that recording the execution of the pipeline as a macro in Fiji gives much easier access to most, if not all, of the parameters used (see the toy example below).
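As a toy example of what the extraction amounts to (the macro file name and the parameter key are made up for illustration):

```bash
# Recorded macros contain run("...", "key=value key2=[value two] ...")
# calls, so pulling a single parameter out is plain text matching.
grep -o 'deconvolution_iterations=[0-9]*' recorded_pipeline.ijm | cut -d= -f2
```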

My questions are:

  1. Is this the best/easiest way to automate the process?

  2. The association between the parameter names in the macro text and those in the config.yaml file is not always clear to me. Is there some documentation that could help me with this mapping?

  3. To see which parameters I can control and how to write them in the 'config.yaml' file, I'm relying on the example 'config.yaml' file you provide online. Does this example include all the parameters that the pipeline allows? Or, in other words, are there parameters that the pipeline knows how to handle that are not mentioned in the 'config.yaml' file?

Kind regards,
Tomer
