
MIAL Super-Resolution Toolkit v2.0 #2

Merged: 336 commits merged into master on Nov 6, 2020
Conversation

sebastientourbier (Member) commented Jun 2, 2020

This PR is dedicated to the upcoming MIAL Super-Resolution Toolkit v2.0 release, which adopts the BIDS standard and provides a workflow, implemented in Python with the Nipype library, that interfaces with the MIALSRTK C++ tools and is executed following the BIDS-App standard.

To be done before release:

  • Creation of BIDS sample dataset
  • Update CI and current example scripts with new BIDS organization
  • Creation of pymialsrtk package with setup.py
  • Implementation of Nipype components interfacing with MIALSRTK C++ compiled tools
    • Implementation of all Nipype interfaces necessary to reproduce the pipeline implemented in the original bash example script, along with dedicated Jupyter notebooks for demonstration and testing.
    • Implementation of a Nipype workflow reproducing the pipeline with manual masks, along with dedicated Jupyter notebooks for demonstration and testing.
    • Use of the Nipype DataSink interface to move and rename files according to the BIDS derivatives standard
  • Creation of the BIDS-App run.py entrypoint script that should parse the BIDS-App input flags and build workflows (multiple subjects/sessions) according to those inputs,
    with https://github.com/brainhack-ch/superres-mri/blob/master/sinapps/nlmdenoise/run.py as a starting point.
    Discussed:
    how to input parameters specific to the SR pipeline, such as the list of stacks, stack order, TV parameters, ... (see MIAL Super-Resolution Toolkit v2.0 #2 (comment))
  • Review Dockerfile for the BIDS-App
  • Review CI: build, test and deployment of two different docker images (one encapsulating the MIALSRTK compiled C++ tools, one specifically dedicated to the BIDS App)
  • Integration of DL-based brain localization and extraction interface in workflow for automatic pipeline (merge Hamza branch, modify parser and add the interface in the workflow implemented in run.py) - See PR Integration of automatic DL-based brain extraction #4
  • Review output structure for multiple runs with different list/order of scans
  • Documentation
  • Improve code quality based on codacy report: https://app.codacy.com/gh/Medical-Image-Analysis-Laboratory/mialsuperresolutiontoolkit/pullRequest?prid=6104929
  • Last review before merge
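The run.py entrypoint mentioned above is expected to follow the standard BIDS-App command line (positional bids_dir, output_dir, and analysis_level arguments, plus a --participant_label filter). A minimal sketch of such a parser, assuming a hypothetical --param_file flag for the SR-specific JSON configuration discussed in this thread (this is an illustration, not the actual pymialsrtk run.py):

```python
# Sketch of a BIDS-App style argument parser (not the actual pymialsrtk
# run.py; the --param_file flag name is a hypothetical assumption).
import argparse

def get_parser():
    p = argparse.ArgumentParser(description="MIALSRTK BIDS App (sketch)")
    # Positional arguments mandated by the BIDS-App standard
    p.add_argument("bids_dir", help="Root directory of the BIDS dataset")
    p.add_argument("output_dir", help="Directory for BIDS derivatives")
    p.add_argument("analysis_level", choices=["participant"],
                   help="Level of the analysis to run")
    # Optional subject filter, also part of the BIDS-App convention
    p.add_argument("--participant_label", nargs="+", default=[],
                   help="Subject label(s) to process, without the sub- prefix")
    # Hypothetical flag for the SR-specific JSON configuration
    p.add_argument("--param_file",
                   help="JSON file with per-subject SR parameters")
    return p

args = get_parser().parse_args(
    ["/bids", "/out", "participant", "--participant_label", "01", "02"])
print(args.participant_label)  # ['01', '02']
```

With such a parser, the workflow builder can loop over args.participant_label and instantiate one workflow per subject (and per session, where applicable).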

sebastientourbier (Member Author)

For the moment this provides the complete solution to run the SR pipeline with manual masks as inputs.

Should we also integrate your implementation of automated DL-based brain localization and extraction? @pdedumast @hamzake
This PR already includes a number of major changes, so that could come later in a subsequent 2.1 minor release.

Resolved review threads (outdated): pymialsrtk/info.py, pymialsrtk/interfaces/utils.py, pymialsrtk/interfaces/reconstruction.py, pymialsrtk/interfaces/postprocess.py, pymialsrtk/interfaces/preprocess.py, scripts/superresolution
sebastientourbier (Member Author) commented Jun 2, 2020

@pdedumast Here is an example of how a JSON config file (input to the BIDS App config_file) could look (assuming a simple subject structure, no sessions):

```json
{
  "01": {
    "ordered-stack-list": [1, 2, 4, 5, 3, 6],
    "lambdaTV": 0.05,
    "deltatTV": 0.1
  },
  "02": {
    "ordered-stack-list": [2, 1, 4, 5, 6, 3],
    "lambdaTV": 0.05,
    "deltatTV": 0.1
  }
}
```

We could think about having one config file (say pymialsrtk_config.json) in the code directory of the BIDS dataset to be processed, storing the parameters for ALL participants (cases). Then, in the BIDS App entrypoint run.py, we would extract from this JSON file the parameters corresponding to the case(s) specified by the list of subject labels to be processed.
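That extraction step could be sketched as follows, assuming the flat subject-keyed layout shown above (load_subject_params is a hypothetical helper name, not part of pymialsrtk):

```python
# Sketch: pick per-subject SR parameters out of a pymialsrtk_config.json-style
# file, keyed by subject label. load_subject_params is a hypothetical helper.
import json

def load_subject_params(config_path, participant_labels):
    """Return {label: params} for the requested participant labels only."""
    with open(config_path) as f:
        config = json.load(f)
    missing = [label for label in participant_labels if label not in config]
    if missing:
        raise KeyError(f"No parameters found for subject(s): {missing}")
    return {label: config[label] for label in participant_labels}

# Same selection logic demonstrated on an in-memory dict:
config = {
    "01": {"ordered-stack-list": [1, 2, 4, 5, 3, 6], "lambdaTV": 0.05},
    "02": {"ordered-stack-list": [2, 1, 4, 5, 6, 3], "lambdaTV": 0.05},
}
selected = {label: config[label] for label in ["01"]}
print(selected["01"]["ordered-stack-list"])  # [1, 2, 4, 5, 3, 6]
```

Raising on missing labels keeps the BIDS App from silently skipping a requested subject that has no entry in the config file.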

sebastientourbier (Member Author) left a comment

This is nice work @pdedumast!
Just a small correction needed for the output structure, but we are very close to having a first version of the BIDS App 👍

Resolved review threads (outdated) on pymialsrtk/pipelines/anatomical/srr.py (6 threads)
sebastientourbier (Member Author) commented Jun 10, 2020

Once the output structure has been corrected, we could then integrate the brain localization module of @hamzake. This requires:

  • Finalizing pull request (PR) Integration of automatic DL-based brain extraction #4, which would merge the hamza branch into the branch dedicated to this PR.
  • Moving the Network_checkpoints folder from data to pymialsrtk/data; otherwise the dataset in data is no longer BIDS compliant ;)
    Note
    Be aware that we had to use Git LFS to handle the .checkpoint file, so Git LFS now has to be installed locally (see installation instructions).

pdedumast (Member)

> @pdedumast Here is an example of how a JSON config file (input to the BIDS app config_file) could look (assuming a simple subject structure, no sessions):
>
> ```json
> {
>   "01": {
>     "ordered-stack-list": [1, 2, 4, 5, 3, 6],
>     "lambdaTV": 0.05,
>     "deltatTV": 0.1
>   },
>   "02": {
>     "ordered-stack-list": [2, 1, 4, 5, 6, 3],
>     "lambdaTV": 0.05,
>     "deltatTV": 0.1
>   }
> }
> ```
>
> We could think about having one config file (say pymialsrtk_config.json) in the code directory of the BIDS dataset to be processed, storing the parameters for ALL participants (cases). Then, in the BIDS App entrypoint run.py, we would extract from this JSON file the parameters corresponding to the case(s) specified by the list of subject labels to be processed.

@sebastientourbier, here is what is currently implemented in the last commit:

```json
{
  "01": [
    {
      "sr-id": "001",
      "stacksOrder": [1, 2, 4, 5, 3, 6],
      "paramTV": {
        "lambdaTV": 0.75,
        "deltatTV": 0.01,
        "primal_dual_loops": 10
      }
    },
    {
      "sr-id": "002",
      "session": "001",
      "stacksOrder": [1, 5, 4]
    }
  ]
}
```

A list of reconstructions can be specified for a subject, which allows reconstructing the same exam with different parameters (identified by sr-id) or reconstructing different MR exam sessions.
session and paramTV are not mandatory entries.
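An entrypoint consuming this layout could iterate over the per-subject reconstruction list and fill in defaults for the optional session and paramTV entries; a minimal sketch, where the iter_reconstructions helper name and the default values are hypothetical, not the toolkit's actual defaults:

```python
# Sketch: walk the list-of-reconstructions config layout, applying defaults
# for the optional "session" and "paramTV" keys. Default values below are
# hypothetical, not the toolkit's actual defaults.
DEFAULT_PARAM_TV = {"lambdaTV": 0.75, "deltatTV": 0.01, "primal_dual_loops": 10}

def iter_reconstructions(config, subject):
    """Yield one normalized dict per requested reconstruction of `subject`."""
    for rec in config.get(subject, []):
        yield {
            "subject": subject,
            "sr_id": rec["sr-id"],
            "session": rec.get("session"),       # None => no session level
            "stacks_order": rec["stacksOrder"],
            # Merge so a partial paramTV overrides only the keys it sets
            "param_tv": {**DEFAULT_PARAM_TV, **rec.get("paramTV", {})},
        }

config = {
    "01": [
        {"sr-id": "001", "stacksOrder": [1, 2, 4, 5, 3, 6],
         "paramTV": {"lambdaTV": 0.75, "deltatTV": 0.01,
                     "primal_dual_loops": 10}},
        {"sr-id": "002", "session": "001", "stacksOrder": [1, 5, 4]},
    ]
}
recs = list(iter_reconstructions(config, "01"))
print(recs[1]["session"])   # '001'
print(recs[1]["param_tv"])  # falls back to the defaults
```

Normalizing each entry up front keeps the downstream workflow-building code free of per-key existence checks.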

Comment on lines 326 to 345
```python
for stack in p_stacksOrder:

    print(sub_ses + '_run-' + str(stack) + '_T2w_nlm_uni_bcorr_histnorm.nii.gz',
          ' ---> ', sub_ses + '_run-' + str(stack) + '_id-' + srID + '_T2w_preproc.nii.gz')
    substitutions.append((sub_ses + '_run-' + str(stack) + '_T2w_nlm_uni_bcorr_histnorm.nii.gz',
                          sub_ses + '_run-' + str(stack) + '_id-' + srID + '_T2w_preproc.nii.gz'))

    print(sub_ses + '_run-' + str(stack) + '_T2w_nlm_uni_bcorr_histnorm_transform_' + str(len(p_stacksOrder)) + 'V.txt',
          ' ---> ', sub_ses + '_run-' + str(stack) + '_id-' + srID + '_T2w_from-origin_to-SDI_mode-image_xfm.txt')
    substitutions.append((sub_ses + '_run-' + str(stack) + '_T2w_nlm_uni_bcorr_histnorm_transform_' + str(len(p_stacksOrder)) + 'V.txt',
                          sub_ses + '_run-' + str(stack) + '_id-' + srID + '_T2w_from-origin_to-SDI_mode-image_xfm.txt'))

    print(sub_ses + '_run-' + str(stack) + '_T2w_uni_bcorr_histnorm_LRmask.nii.gz',
          ' ---> ', sub_ses + '_run-' + str(stack) + '_id-' + srID + '_T2w_desc-LRmask.nii.gz')
    substitutions.append((sub_ses + '_run-' + str(stack) + '_T2w_uni_bcorr_histnorm_LRmask.nii.gz',
                          sub_ses + '_run-' + str(stack) + '_id-' + srID + '_T2w_desc-LRmask.nii.gz'))

# The reconstructed outputs do not depend on `stack`, so they are
# substituted once, outside the per-stack loop.
print('SDI_' + sub_ses + '_' + str(len(p_stacksOrder)) + 'V_rad1.nii.gz',
      ' ---> ', sub_ses + '_rec-SDI' + '_id-' + srID + '_T2w.nii.gz')
substitutions.append(('SDI_' + sub_ses + '_' + str(len(p_stacksOrder)) + 'V_rad1.nii.gz',
                      sub_ses + '_rec-SDI' + '_id-' + srID + '_T2w.nii.gz'))

print('SRTV_' + sub_ses + '_' + str(len(p_stacksOrder)) + 'V_rad1_gbcorr.nii.gz',
      ' ---> ', sub_ses + '_rec-SR' + '_id-' + srID + '_T2w.nii.gz')
substitutions.append(('SRTV_' + sub_ses + '_' + str(len(p_stacksOrder)) + 'V_rad1_gbcorr.nii.gz',
                      sub_ses + '_rec-SR' + '_id-' + srID + '_T2w.nii.gz'))

# Print aligned with the substitution actually appended (the original print
# said desc-brain_mask while the substitution used desc-SRmask).
print(sub_ses + '_T2w_uni_bcorr_histnorm_srMask.nii.gz',
      ' ---> ', sub_ses + '_rec-SR' + '_id-' + srID + '_T2w_desc-SRmask.nii.gz')
substitutions.append((sub_ses + '_T2w_uni_bcorr_histnorm_srMask.nii.gz',
                      sub_ses + '_rec-SR' + '_id-' + srID + '_T2w_desc-SRmask.nii.gz'))
```
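What Nipype's DataSink does with such a substitutions list is essentially sequential string replacement on each output path. A pure-Python illustration of that renaming step (apply_substitutions is a hypothetical helper for illustration, not a Nipype API):

```python
# Illustration only: apply a DataSink-style substitutions list to a filename.
# apply_substitutions is a hypothetical helper, not part of Nipype.
def apply_substitutions(filename, substitutions):
    """Apply each (old, new) pair in order, as DataSink-style renaming does."""
    for old, new in substitutions:
        filename = filename.replace(old, new)
    return filename

sub_ses, sr_id, stack = 'sub-01', '001', 2
substitutions = [
    (sub_ses + '_run-' + str(stack) + '_T2w_nlm_uni_bcorr_histnorm.nii.gz',
     sub_ses + '_run-' + str(stack) + '_id-' + sr_id + '_T2w_preproc.nii.gz'),
]
renamed = apply_substitutions('sub-01_run-2_T2w_nlm_uni_bcorr_histnorm.nii.gz',
                              substitutions)
print(renamed)  # sub-01_run-2_id-001_T2w_preproc.nii.gz
```

Because pairs are applied in order, more specific patterns should come before more general ones to avoid a broad pattern rewriting part of a name a later pair was meant to match.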
sebastientourbier (Member Author) commented Jun 10, 2020

Indeed, I might remove this condition, as you would not have to differentiate the <srId> in the file name: each output_dir/run-<srID> folder would correspond to an independent BIDS derivatives directory, which already allows us to differentiate results from multiple successive runs with different parameters.

sebastientourbier (Member Author) commented Jun 10, 2020

> @sebastientourbier, here is what is currently implemented in the last commit:
>
> ```json
> {
>   "01": [
>     {
>       "sr-id": "001",
>       "stacksOrder": [1, 2, 4, 5, 3, 6],
>       "paramTV": {
>         "lambdaTV": 0.75,
>         "deltatTV": 0.01,
>         "primal_dual_loops": 10
>       }
>     },
>     {
>       "sr-id": "002",
>       "session": "001",
>       "stacksOrder": [1, 5, 4]
>     }
>   ]
> }
> ```
>
> A list of reconstructions can be specified for a subject, which allows reconstructing the same exam with different parameters (identified by sr-id) or reconstructing different MR exam sessions.
> session and paramTV are not mandatory entries.

@pdedumast Tip top 👍

sebastientourbier (Member Author)

These new commits involve:

  1. the creation of the BIDS App (Dockerfile / installation of pymialsrtk and its dependencies)
  2. the integration of CircleCI for continuous integration testing of the BIDS App

Also, I had to make a few fixes and made the output more BIDS compliant (the question of id-<srId> and the naming of the different pipeline runs, now 'rec', is still open).

sebastientourbier (Member Author)

Here is the preview of the documentation draft on Read the Docs. Not sure it is publicly accessible though:
https://mialsrtk.readthedocs.io/en/dev-pgd-hk/

@sebastientourbier sebastientourbier merged commit b6f4735 into master Nov 6, 2020
sebastientourbier (Member Author)

@all-contributors please add @hamzake for doc

allcontributors (Contributor)

@sebastientourbier

I've put up a pull request to add @hamzake! 🎉

sebastientourbier (Member Author)

@all-contributors please add @pdedumast for doc

allcontributors (Contributor)

@sebastientourbier

I've put up a pull request to add @pdedumast! 🎉
