PennSIVE_neuro_pip

There are multiple pipelines available through the PennSIVE_neuro_pip repo to facilitate MRI processing. This article will walk you through instructions and examples for using each of them.

For each pipeline, usage instructions are provided with dummy paths, followed by a concrete example. Clicking the pipeline name will take you to the respective pipeline's GitHub page.

Note: some of these pipelines are also available as BIDS apps, with instructions on the PennSIVE pipelines page of this wiki.

BIDS

The BIDS pipeline converts DICOM images to NIfTI and organizes the files in BIDS format. It uses the heudiconv DICOM converter and an RShiny app for heuristic customization.

Usage

This pipeline contains three stages: 1) Heuristic: prepares the heuristic template, 2) Customization: launches an RShiny app for heuristic customization, and 3) BIDS: converts DICOM images to NIfTI and organizes the files into the BIDS structure.

This pipeline must be run through a container, either Singularity on a cluster or Docker locally. Steps 1 and 3 can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively. Step 2 must be run in batch mode.

These examples will run the pipeline in batch mode on the cluster via Singularity. To run in individual mode, set --mode individual; to run locally with Docker, set -c docker.
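As a sketch, a hypothetical individual-mode run under Docker for a single subject and session might look like the following. The paths, subject/session IDs, and Docker image name below are placeholders, not values from the pipeline itself, and the command is assembled in a variable and printed rather than executed so it can be checked first:

```shell
# Sketch: individual-mode BIDS curation with Docker.
# All paths, IDs, and the image name are placeholders.
TOOLPATH=/path/to/PennSIVE_neuro_pip
MAINPATH=/path/to/project
CMD="bash $TOOLPATH/pipelines/bids/code/bash/bids_curation.sh \
  -m $MAINPATH -p sub-01 --ses ses-01 \
  --mode individual -c docker --dockerpath your-docker-image"
echo "$CMD"   # dry run: print the command instead of executing it
```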

Step 1. Heuristic

This step prepares the heuristic file by copying the template into a template folder and into each subject and session folder. These files will be edited in Step 2: Customization.


Required flags:

-m or --mainpath: path to parent data folder
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--step: step of pipeline - heuristic, customization, bids. Default is heuristic
--mode: run pipeline individually or batch. Default is individual
-c or --container: which container to use: singularity, docker. Default is docker
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/bids/code/bash/bids_curation.sh -m /path/to/project --mode batch -c singularity --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/bids/code/bash/bids_curation.sh -m /home/ehorwath/projects/test_data --mode batch -c singularity --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 2. Customization

In this step, an RShiny app will launch to customize the heuristic template created in the last step. This step only runs in batch mode and does not need a container specification. If you are unable to connect to the app from your terminal, try running this step in VSCode.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - heuristic, customization, bids. Default is heuristic. This step is customization
--toolpath: path to pipeline folder

Other flags:

-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/bids/code/bash/bids_curation.sh -m /path/to/project --step customization --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/bids/code/bash/bids_curation.sh -m /home/ehorwath/projects/test_data --step customization --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Using the app:

The Shiny app allows you to edit the heuristic file for all subjects in the original_data folder or for each subject individually.

To begin, under Choose Python Script, load in the heuristic.py file in the template folder.

To review each subject's DICOM info and edit the heuristic at the subject level, use the Next and Previous buttons in the DICOM Selection panel; the DICOM Info Review will load each subject's info in turn. Edits can be made in the Update Heuristic Script section and finalized by clicking Update Script.

Group-level changes to the heuristic can be made by editing the Update Heuristic Script section and, when finished, clicking Update All Scripts. This applies those changes to all subjects in the folder.


Step 3. BIDS

This step converts DICOM images to NIfTI and organizes the files into the BIDS structure based on the heuristic files edited in Step 2.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - heuristic, customization, bids. Default is heuristic. This step is bids
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--mode: run pipeline individually or batch. Default is individual
-c or --container: which container to use: singularity, docker. Default is docker
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/bids/code/bash/bids_curation.sh -m /path/to/project --step bids --mode batch -c singularity --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/bids/code/bash/bids_curation.sh -m /home/ehorwath/projects/test_data --step bids --mode batch -c singularity --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip




MIMoSA

The MIMoSA pipeline integrates an automated technique for white matter lesion segmentation. It provides processed T1-weighted and T2-FLAIR images, as well as a white matter lesion mask. (The current pipeline uses a pre-trained MIMoSA model, which was trained using 3T T1-weighted and T2-FLAIR images as the input).

Usage

This pipeline processes raw T1 and T2-FLAIR images to segment white matter lesions. By default, it runs bias correction, registration to FLAIR space, WhiteStripe normalization, and MIMoSA. Skullstripping can be turned on if input images contain non-brain tissue.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run in individual mode, set --mode individual; to run locally or in a container, set -c local, -c singularity, or -c docker.
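For instance, a hypothetical local, individual-mode run for one subject, with skullstripping enabled and a custom MIMoSA mask threshold, could be assembled as below. All paths and IDs are placeholders, and the command is printed rather than executed so it can be inspected before running:

```shell
# Sketch: local individual-mode MIMoSA run with skullstripping on and a
# custom threshold (all paths and IDs below are placeholders).
TOOLPATH=/path/to/PennSIVE_neuro_pip
CMD="bash $TOOLPATH/pipelines/mimosa/code/bash/mimosa.sh \
  -m /path/to/data -t '*T1w*.nii.gz' -f '*FLAIR*.nii.gz' \
  -p sub-01 --ses ses-01 --mode individual -c local \
  -s TRUE --threshold 0.25 --toolpath $TOOLPATH"
echo "$CMD"   # dry run: print the command instead of executing it
```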


Required flags:

-m or --mainpath: path to parent data folder
-t or --t1: T1 sequence name
-f or --flair: FLAIR sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--t2: T2 sequence name
-n or --n4: run N4 bias correction. Default is TRUE
-s or --skullstripping: run skullstripping. Default is FALSE
-r or --registration: run registration. Default is TRUE
-w or --whitestripe: run WhiteStripe normalization. Default is TRUE
--threshold: threshold for generating mimosa mask. Default is 0.2
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/mimosa/code/bash/mimosa.sh -m /path/to/data -t "*T1w*.nii.gz" -f "*FLAIR*.nii.gz" -s TRUE --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/mimosa/code/bash/mimosa.sh -m /home/ehorwath/projects/mscamras --t1 "*MPRAGE*.nii.gz" --flair "*FLAIR*.nii.gz" -s TRUE --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip



CVS

The CVS pipeline integrates an automated technique for the detection of the central vein sign in white matter lesions. It provides processed T1-weighted, T2-FLAIR, and T2*-EPI images, as well as subject-level CVS probabilities.

Usage

This pipeline contains two stages: 1) Preprocessing and CVS Probability Calculation: computes each participant's probability of having CVS lesions, and 2) Consolidation: consolidates all participants' CVS results.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run in individual mode, set --mode individual; to run locally or in a container, set -c local, -c singularity, or -c docker. Only Step 1 has the option of individual or batch; Step 2 (Consolidation) always runs in batch mode.
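Because consolidation depends on every participant's estimation output, the two steps are run in sequence once the estimation jobs have finished. A sketch of that sequence (placeholder paths; the commands are printed rather than executed):

```shell
# Sketch: CVS step sequence -- estimation first, consolidation after all
# estimation jobs finish (placeholder paths; commands printed, not executed).
TOOLPATH=/path/to/PennSIVE_neuro_pip
SCRIPT="$TOOLPATH/pipelines/cvs/code/bash/cvs.sh"
EST="bash $SCRIPT -m /path/to/data -t '*T1w*.nii.gz' -f '*FLAIR*.nii.gz' \
  -e '*T2star*.nii.gz' --toolpath $TOOLPATH"
CON="bash $SCRIPT -m /path/to/data --step consolidation --toolpath $TOOLPATH"
echo "$EST"   # step 1: estimation (the default --step)
echo "$CON"   # step 2: run only after every participant's estimation completes
```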


Step 1. Preprocessing & CVS Probability Calculation

This step processes raw T1, T2-FLAIR, and T2*-EPI images to prepare for CVS probability calculation. By default, it runs bias correction, registration to FLAIR space, WhiteStripe normalization, MIMoSA, CSF extraction, splitting confluent lesions, registration to EPI space, and CVS score calculation. Skullstripping can be turned on if input images contain non-brain tissue.


Required flags:

-m or --mainpath: path to parent data folder
-t or --t1: T1 sequence name
-f or --flair: FLAIR sequence name
-e or --epi: EPI sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-n or --n4: run N4 bias correction. Default is TRUE
-s or --skullstripping: run skullstripping. Default is FALSE
-r or --registration: run registration. Default is TRUE
-w or --whitestripe: run WhiteStripe normalization. Default is TRUE
--mimosa: run MIMoSA segmentation. Default is TRUE
--threshold: threshold for generating MIMoSA mask. Default is 0.2
--csf: extract CSF mask. Default is TRUE
--step: step of pipeline - estimation, consolidation. Default is estimation
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/cvs/code/bash/cvs.sh -m /path/to/data -t "*T1w*.nii.gz" -f "*FLAIR*.nii.gz" -e "*T2star.nii.gz" -s TRUE --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/cvs/code/bash/cvs.sh -m /home/ehorwath/projects/mscamras --t1 "*MPRAGE*.nii.gz" --flair "*FLAIR*.nii.gz" --epi "*EPI*.nii.gz" -s TRUE --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 2. Consolidation

This step consolidates the CVS results for all participants and sessions.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - estimation, consolidation. Default is estimation. This step is consolidation
--toolpath: path to pipeline folder

Other flags:

-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/cvs/code/bash/cvs.sh -m /path/to/data --step consolidation --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/cvs/code/bash/cvs.sh -m /home/ehorwath/projects/mscamras --step consolidation --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip 



APRL

The APRL pipeline integrates an automated technique for paramagnetic rim lesion (PRL) detection. It provides processed T1-weighted, T2-FLAIR, and T2star-Phase images, as well as white matter lesion masks and the probability of each lesion being a PRL. (The current pipeline uses a pre-trained MIMoSA model and APRL model).

Usage

This pipeline contains three stages: 1) Preprocessing: processes MRI images to prepare for PRL probability calculation, 2) PRL Probability Calculation: calculates the probability of each lesion being a PRL, and 3) Consolidation: consolidates all participants’ PRL results.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run in individual mode, set --mode individual; to run locally or in a container, set -c local, -c singularity, or -c docker. Only Steps 1 and 2 have the option of individual or batch; Step 3 will always run in batch mode.


Step 1. Preprocessing

This step processes raw T1, T2-FLAIR, and T2*-PHASE images to prepare for PRL probability calculation. By default, it runs bias correction, registration to FLAIR space, WhiteStripe normalization, MIMoSA, registration to EPI space, lesion dilation, and splitting and labeling of confluent lesions. Skullstripping can be turned on if input images contain non-brain tissue.


Required flags:

-m or --mainpath: path to parent data folder
-t or --t1: T1 sequence name
-f or --flair: FLAIR sequence name
--phase: PHASE sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-n or --n4: run N4 bias correction. Default is TRUE
-s or --skullstripping: run skullstripping. Default is FALSE
-r or --registration: run registration. Default is TRUE
-w or --whitestripe: run WhiteStripe normalization. Default is TRUE
--mimosa: run MIMoSA segmentation. Default is TRUE
--threshold: threshold for generating MIMoSA mask. Default is 0.2
--dilation: dilate lesion. Default is TRUE
--step: step of pipeline - preparation, PRL_run, consolidation. Default is preparation
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/PRL/code/bash/PRL.sh -m /path/to/data -t "*T1w*.nii.gz" -f "*FLAIR*.nii.gz" --phase "*UNWRAPPED*.nii.gz" -s TRUE --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/PRL/code/bash/PRL.sh -m /home/ehorwath/projects/prl --t1 "*T1w*.nii.gz" --flair "*FLAIR*.nii.gz" --phase "*UNWRAPPED*.nii.gz" -s TRUE --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 2. PRL Probability Calculation

This step calculates the probability of each lesion being a PRL from the preprocessed images.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - preparation, PRL_run, consolidation. Default is preparation. This step is PRL_run
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/PRL/code/bash/PRL.sh -m /path/to/data --step PRL_run --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/PRL/code/bash/PRL.sh -m /home/ehorwath/projects/prl --step PRL_run --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip 


Step 3. Consolidation

This step consolidates the PRL results for all participants and sessions.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - preparation, PRL_run, consolidation. Default is preparation. This step is consolidation
--toolpath: path to pipeline folder

Other flags:

-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/PRL/code/bash/PRL.sh -m /path/to/data --step consolidation --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/PRL/code/bash/PRL.sh -m /home/ehorwath/projects/prl --step consolidation --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip 



Lesion Count

The lesion_count pipeline provides two options for counting the number of lesions present in MRI images: the "DworCount" method, developed by Dr. Jordan Dworkin, and the connected-components method.

Usage

The pipeline allows for three count options: DworCount (set --method dworcount), connected components (set --method cc), or both (the default; set --method both).
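The method choice only changes the --method flag on the count step; a sketch printing the count-step command for each option (placeholder paths; the commands are printed, not executed):

```shell
# Sketch: the same count-step command with each supported --method value
# (paths are placeholders; commands are printed, not executed).
TOOLPATH=/path/to/PennSIVE_neuro_pip
SCRIPT="$TOOLPATH/pipelines/lesion_count/code/bash/lesion_count.sh"
for METHOD in dworcount cc both; do
  echo bash "$SCRIPT" -m /path/to/data --step count \
    --method "$METHOD" --toolpath "$TOOLPATH"
done
```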

This pipeline contains three stages: 1) Preparation: preprocesses and prepares data for lesion counting, 2) Count: counts the number of lesions using the specified method, and 3) Consolidation: consolidates all participants' results into a single .csv file.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run in individual mode, set --mode individual; to run locally or in a container, set -c local, -c singularity, or -c docker.


Step 1. Preparation

This step processes raw T1 and T2-FLAIR images to prepare for lesion counting. By default, it runs bias correction, registration to FLAIR space, WhiteStripe normalization, and MIMoSA. For the DworCount method, confluent lesions are split and labeled. Skullstripping can be turned on if input images contain non-brain tissue.


Required flags:

-m or --mainpath: path to parent data folder
-t or --t1: T1 sequence name
-f or --flair: FLAIR sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-n or --n4: run N4 bias correction. Default is TRUE
-s or --skullstripping: run skullstripping. Default is FALSE
-r or --registration: run registration. Default is TRUE
-w or --whitestripe: run WhiteStripe normalization. Default is TRUE
--mimosa: run MIMoSA segmentation. Default is TRUE
--threshold: threshold for generating MIMoSA mask. Default is 0.2
--method: cc, dworcount, both. Default is both
--step: step of pipeline - preparation, count, consolidation. Default is preparation
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/lesion_count/code/bash/lesion_count.sh -m /path/to/data -t "*T1w*.nii.gz" -f "*FLAIR*.nii.gz" -s TRUE --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/lesion_count/code/bash/lesion_count.sh -m /home/ehorwath/projects/mscamras --t1 "*T1w*.nii.gz" --flair "*FLAIR*.nii.gz" -s TRUE --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 2. Count

This step counts lesions using the connected-components or DworCount method, as specified by --method.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - preparation, count, consolidation. Default is preparation. This step is count
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--method: cc, dworcount, both. Default is both
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/lesion_count/code/bash/lesion_count.sh -m /path/to/data --step count --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/lesion_count/code/bash/lesion_count.sh -m /home/ehorwath/projects/mscamras --step count --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 3. Consolidation

This step consolidates the lesion counts for all participants and sessions.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - preparation, count, consolidation. Default is preparation. This step is consolidation
--toolpath: path to pipeline folder

Other flags:

--method: cc, dworcount, both. Default is both
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/lesion_count/code/bash/lesion_count.sh -m /path/to/data --step consolidation --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/lesion_count/code/bash/lesion_count.sh -m /home/ehorwath/projects/mscamras --step consolidation --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip



Radiomic Feature

The lesion radiomic feature extraction pipeline utilizes PyRadiomics to extract lesion features from T1-weighted, T2-FLAIR, and T2*-EPI images.

Usage

The pipeline contains three stages: 1) Preprocessing: processes MRI images to prepare for radiomic feature extraction, 2) Feature Extraction: extracts radiomic features using the PyRadiomics package, and 3) Consolidation: consolidates all participants' lesion radiomic feature data.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run in individual mode, set --mode individual; to run locally or in a container, set -c local, -c singularity, or -c docker.


Step 1. Processing

This step processes raw T1, T2-FLAIR, and T2*-EPI images to prepare for extraction of radiomic features. By default, it runs bias correction, registration to FLAIR space, WhiteStripe normalization, MIMoSA, CSF extraction, splitting confluent lesions, and registration to EPI space. Skullstripping can be turned on if input images contain non-brain tissue.


Required flags:

-m or --mainpath: path to parent data folder
-t or --t1: T1 sequence name
-f or --flair: FLAIR sequence name
-e or --epi: EPI sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-n or --n4: run N4 bias correction. Default is TRUE
-s or --skullstripping: run skullstripping. Default is FALSE
-r or --registration: run registration. Default is TRUE
-w or --whitestripe: run WhiteStripe normalization. Default is TRUE
--mimosa: run MIMoSA segmentation. Default is TRUE
--threshold: threshold for generating MIMoSA mask. Default is 0.2
--csf: extract CSF mask. Default is TRUE
--step: step of pipeline - processing, extraction, consolidation. Default is processing
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/radiomic_feature/code/bash/pyradiomics.sh -m /path/to/project -t "*_T1w.nii.gz" -f "*_FLAIR.nii.gz" -e "*_T2star.nii.gz" --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/radiomic_feature/code/bash/pyradiomics.sh -m /home/ehorwath/projects/mscamras --t1 "*MPRAGE*.nii.gz" --flair "*FLAIR*.nii.gz" --epi "*EPI*.nii.gz" --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 2. Feature Extraction

This step extracts radiomic features from the preprocessed images using PyRadiomics.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - processing, extraction, consolidation. Default is processing. This step is extraction
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/radiomic_feature/code/bash/pyradiomics.sh -m /path/to/project --step extraction --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/radiomic_feature/code/bash/pyradiomics.sh -m /home/ehorwath/projects/mscamras --step extraction --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 3. Consolidation

This step consolidates the radiomic features for all participants and sessions.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - processing, extraction, consolidation. Default is processing. This step is consolidation
--toolpath: path to pipeline folder

Other flags:

-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/radiomic_feature/code/bash/pyradiomics.sh -m /path/to/data --step consolidation --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/radiomic_feature/code/bash/pyradiomics.sh -m /home/ehorwath/projects/mscamras --step consolidation --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip



T1/T2

The T1T2 pipeline generates the ratio of T1-weighted to T2-weighted signal intensity (T1/T2). The T2 sequence can be specified as a T2-weighted or FLAIR (default) image.

Usage

The pipeline contains two stages: 1) Estimation: calculates each participant's T1/T2 ratio, and 2) Consolidation: consolidates all participants' results into a single .csv file.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run in individual mode, set --mode individual; to run locally or in a container, set -c local, -c singularity, or -c docker.


Step 1. Estimation

This step processes T1 and T2 or T2-FLAIR (default) images to estimate the T1/T2 ratio. By default, it only extracts lesion volumes and generates the T1/T2 ratio; unlike most other pipelines, it does not run the preprocessing steps by default. Bias correction, skullstripping, registration to FLAIR or T2 space, and WhiteStripe normalization can be turned on if required.


Required flags:

-m or --mainpath: path to parent data folder
--t1: T1 sequence name
--t2: T2 sequence name
OR
-f or --flair: FLAIR sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-n or --n4: run N4 bias correction. Default is FALSE
-s or --skullstripping: run skullstripping. Default is FALSE
-r or --registration: run registration. Default is FALSE
-w or --whitestripe: run WhiteStripe normalization. Default is FALSE
-l or --lesion: extract lesion volumes. Default is TRUE
--t2type: T2 sequence for generating T1/T2 ratio - t2, flair. Default is flair
--masktype: segmentation for generating ROI T1/T2 - fast, jlf, freesurfer. Default is freesurfer
--step: step of pipeline - estimation, consolidation. Default is estimation
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/t1t2/code/bash/t1t2.sh -m /path/to/project --t1 "*T1w.nii.gz" -f "*FLAIR.nii.gz" --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/t1t2/code/bash/t1t2.sh -m /home/ehorwath/projects/mscamras --t1 "*MPRAGE*.nii.gz" --flair "*FLAIR*.nii.gz" --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip
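If you have a true T2-weighted image rather than a FLAIR, a hypothetical equivalent swaps in the --t2 flag and sets --t2type t2. A sketch (placeholder paths; the command is printed rather than executed):

```shell
# Sketch: T1/T2 estimation using a T2w image instead of FLAIR
# (placeholder paths; the command is printed, not executed).
TOOLPATH=/path/to/PennSIVE_neuro_pip
CMD="bash $TOOLPATH/pipelines/t1t2/code/bash/t1t2.sh \
  -m /path/to/project --t1 '*T1w*.nii.gz' --t2 '*T2w*.nii.gz' \
  --t2type t2 --toolpath $TOOLPATH"
echo "$CMD"   # dry run: print the command instead of executing it
```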


Step 2. Consolidation

This step consolidates the T1/T2 ratios for all participants and sessions.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - estimation, consolidation. Default is estimation. This step is consolidation
--toolpath: path to pipeline folder

Other flags:

--t2type: T2 sequence for generating T1/T2 ratio - t2, flair. Default is flair
--masktype: segmentation for generating ROI T1/T2 - fast, jlf, freesurfer. Default is freesurfer
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/t1t2/code/bash/t1t2.sh -m /path/to/project --step consolidation --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/t1t2/code/bash/t1t2.sh -m /home/ehorwath/projects/mscamras --step consolidation --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip



FreeSurfer

The FreeSurfer pipeline integrates FreeSurfer software to provide a full processing stream for structural MRI data. It takes a T1-weighted image as the only input and generates brain ROI segmentation masks as well as brain-related statistics.

Usage

This pipeline contains three stages: 1) Segmentation: runs the FreeSurfer recon-all command to obtain ROI segmentation masks and brain statistics, 2) Estimation: converts the brain statistics into CSV format, and 3) Consolidation: consolidates all participants’ data.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run individually, set --mode individual; to run locally or with a container, set -c local, -c singularity, or -c docker. Only Steps 1 and 2 have the option of individual or batch mode; Step 3 will always run in batch mode.
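The difference between the two modes comes down to how many subject/session folders get processed: individual mode handles only the subject and session you name with -p and --ses, while batch mode walks every subject and session folder under the main path. A minimal sketch of the traversal batch mode implies, assuming a BIDS-style sub-*/ses-* layout (the directory names below are hypothetical, not part of the pipeline):

```shell
# Hypothetical BIDS-style tree; batch mode effectively queues one job per session folder.
root=$(mktemp -d)
mkdir -p "$root/sub-01/ses-01" "$root/sub-01/ses-02" "$root/sub-02/ses-01"
jobs=0
for ses in "$root"/sub-*/ses-*; do
  echo "queue: ${ses#"$root"/}"   # e.g. queue: sub-01/ses-01
  jobs=$((jobs + 1))
done
echo "$jobs sessions queued"
```

In individual mode, the equivalent single run would instead pass --mode individual -p sub-01 --ses ses-01 (placeholder IDs) to process just that one session.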


Step 1. Segmentation

This step runs FreeSurfer’s recon-all command on a T1 image to obtain ROI segmentation masks and brain statistics.


Required flags:

-m or --mainpath: path to parent data folder
-n or --name: T1 sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-s or --step: step of pipeline - segmentation, estimation, consolidation. Default is segmentation
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/freesurfer/code/bash/freesurfer.sh -m /path/to/data -n "*T1w*.nii.gz" --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/freesurfer/code/bash/freesurfer.sh -m /home/ehorwath/projects/mscamras -n "*MPRAGE*.nii.gz" --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip
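Note the quotes around the sequence name: they keep your shell from expanding the glob against the current directory, so the pattern reaches the pipeline intact and can be matched inside each subject's session folder. A small self-contained demonstration of the principle (temporary files only; `find` stands in for the pipeline's internal matching):

```shell
# Quoted pattern stays literal; the consumer (here, find) does the matching.
root=$(mktemp -d)
touch "$root/sub-01_MPRAGE_run-1.nii.gz"
pattern="*MPRAGE*.nii.gz"            # quoted, so no expansion happens here
match=$(find "$root" -name "$pattern")
echo "matched: $(basename "$match")"
```

Passing the pattern unquoted can silently break: if a matching file happens to exist in your working directory, the shell substitutes that filename before the pipeline ever sees the pattern.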


Step 2. Estimation

This step converts the brain statistics output from Step 1 into CSV format.


Required flags:

-m or --mainpath: path to parent data folder
-s or --step: step of pipeline - segmentation, estimation, consolidation. Default is segmentation
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--parc: parcellation - aparc, aparc.a2009s. Default is aparc
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/freesurfer/code/bash/freesurfer.sh -m /path/to/data --step estimation --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/freesurfer/code/bash/freesurfer.sh -m /home/ehorwath/projects/mscamras --step estimation --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 3. Consolidation

This step consolidates the brain statistics for all participants and sessions.


Required flags:

-m or --mainpath: path to parent data folder
--toolpath: path to pipeline folder

Other flags:

-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/freesurfer/code/bash/freesurfer.sh -m /path/to/data --step consolidation --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/freesurfer/code/bash/freesurfer.sh -m /home/ehorwath/projects/mscamras --step consolidation --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip



JLF

The JLF pipeline produces a high-resolution anatomical segmentation using the ANTs Joint Label Fusion algorithm. A T1-weighted image is the only input needed.

Usage

The pipeline contains three stages: 1) Registration: registers the atlas templates to each participant’s T1-weighted space, 2) antsJointFusion: segments the T1 images using multi-atlas segmentation with joint label fusion, and 3) Extraction: extracts ROI and lesion volumes.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run individually, set --mode individual; to run locally or with a container, set -c local, -c singularity, or -c docker. Only Steps 1 and 2 have the option of individual or batch mode; Step 3 will always run in batch mode.


Step 1. Registration

This step registers the selected atlas to the subjects’ native T1 space.


Required flags:

-m or --mainpath: path to parent data folder
-t or --t1: T1 sequence name
-s or --step: step of pipeline - registration, antsjointfusion, extraction. Default is antsjointfusion. This step is registration
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-n or --num: number of templates. Default is 9
--type: type of templates - WMGM, thal. Default is WMGM
--lesion: extract lesion volumes. Default is TRUE
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/JLF/code/bash/JLF.sh -m /path/to/project --t1 "*T1w*.nii.gz" --step registration --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/JLF/code/bash/JLF.sh -m /home/ehorwath/projects/mscamras --t1 "*MPRAGE*.nii.gz" --step registration --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 2. antsJointFusion

This step runs the antsJointFusion function from ANTs.


Required flags:

-m or --mainpath: path to parent data folder
-t or --t1: T1 sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-n or --num: number of templates. Default is 9
--type: type of templates - WMGM, thal. Default is WMGM
-s or --step: step of pipeline - registration, antsjointfusion, extraction. Default is antsjointfusion
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/JLF/code/bash/JLF.sh -m /path/to/project --t1 "*T1w*.nii.gz" --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/JLF/code/bash/JLF.sh -m /home/ehorwath/projects/mscamras --t1 "*MPRAGE*.nii.gz" --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 3. Extraction

This step extracts ROI and lesion volumes for all participants and sessions.


Required flags:

-m or --mainpath: path to parent data folder
-s or --step: step of pipeline - registration, antsjointfusion, extraction. Default is antsjointfusion. This step is extraction
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
--type: type of templates - WMGM, thal. Default is WMGM
--lesion: extract lesion volumes. Default is TRUE
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/JLF/code/bash/JLF.sh -m /path/to/project --step extraction --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/JLF/code/bash/JLF.sh -m /home/ehorwath/projects/mscamras --step extraction --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip



Skullstripping

The skullstripping pipeline provides several options (MASS and HD-BET) to remove skull and non-brain matter from brain images.

Usage

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. This pipeline can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively.

These examples will run the pipeline in batch mode on the cluster. To run individually, set --mode individual; to run locally or with a container, set -c local, -c singularity, or -c docker.

The pipeline allows for two skullstripping options: mass (default) and hdbet. To use hdbet, set -t "hdbet".


Required flags:

-m or --mainpath: path to parent data folder
-f or --file: T1 sequence name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--ses: session ID (only needed for individual mode)
-t or --type: skullstripping method - mass, hdbet. Default is mass
-n or --number: number of templates. Default is 20
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/skullstripping/code/bash/skullstripping.sh -m /path/to/project -f "*MPRAGE*.nii.gz" --toolpath /path/to/PennSIVE_neuro_pip

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/skullstripping/code/bash/skullstripping.sh -m /home/ehorwath/projects/mscamras -f "*MPRAGE*.nii.gz" --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip
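To use HD-BET instead of the default MASS, the invocation is unchanged apart from the method flag; for example (same placeholder paths and sequence pattern as above, with -t hdbet per the flag list):

```shell
bash /path/to/PennSIVE_neuro_pip/pipelines/skullstripping/code/bash/skullstripping.sh -m /path/to/project -f "*MPRAGE*.nii.gz" -t hdbet --toolpath /path/to/PennSIVE_neuro_pip
```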



BrainQC

This pipeline contains the Brain WM Lesion/ROI Segmentation QC Shiny App, a collaborative tool designed to facilitate the evaluation of white matter lesion masks generated by segmentation algorithms such as MIMoSA; brain ROI masks generated by JLF and FreeSurfer; lesion center masks for CVS score calculation; and PRL lesion masks for PRL score calculation. It provides a user-friendly interface in which multiple users can collectively assess segmentation quality through interactive features.

Installation

Cluster:

Sys.setenv(CURL_CA_BUNDLE = "/etc/ssl/certs/ca-bundle.trust.crt")
library(devtools)
new_path = "/path/to/save/r_packages"
# for example:
# new_path = "/home/ehorwath/rpackages"
withr::with_libpaths(new = new_path, install_github("Zheng206/BrainQC"))
# then add the following line to the QC_CLI.R script so R can find the packages:
.libPaths(c("/misc/appl/R-4.1/lib64/R/library",new_path))

Local:

library(devtools)
new_path = "/path/to/save/r_packages"
# for example:
# new_path = "/home/ehorwath/rpackages"
withr::with_libpaths(new = new_path, install_github("Zheng206/BrainQC"))

Usage

This pipeline contains three stages: 1) QC Preparation: prepares the data for QC, 2) Interactive Evaluation: runs interactive QC sessions to evaluate segmentation masks, and 3) Post-QC: reviews QC results interactively.

This pipeline can be run with or without a container. For containerized usage, Singularity can be used on a cluster or Docker locally. Step 1 can be run in individual or batch mode, meaning you can specify a certain subject and session or run the pipeline for all subjects in the folder, respectively. Steps 2 and 3 must be run in batch mode.

These examples will run the pipeline in batch mode on the cluster. To run individually, set --mode individual; to run locally or with a container, set -c local, -c singularity, or -c docker.


Step 1. QC Preparation

This step prepares all data for the QC app.


Required flags:

-m or --mainpath: path to parent data folder
-i or --img: brain image name
--seg: ROI or lesion mask name
--toolpath: path to pipeline folder

Other flags:

-p or --participant: participant ID (only needed for individual mode)
--step: step of pipeline - prep, qc, post. Default is prep
-t or --type: type of QC procedure - lesion, cvs, freesurfer, JLF, PRL. Default is lesion
--defaultseg: default ROI to be evaluated first (only needed if type is freesurfer or JLF)
--mode: run pipeline individually or batch. Default is batch
-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
--cores: number of cores used for parallel computing. Default is 1
-h or --help: show help message

Lesion QC

bash /path/to/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /path/to/data -i flair_n4_brain.nii.gz --seg mimosa_mask.nii.gz -t lesion --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /home/ehorwath/projects/mscamras_defaced_regSites -i flair_n4_brain_ses.nii.gz --seg mimosa_mask.nii.gz -t lesion --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


FreeSurfer QC

bash /path/to/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /path/to/data -i ^brain.mgz --seg ^aseg.mgz -t freesurfer --defaultseg choroid-plexus --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /home/ehorwath/projects/mscamras -i ^brain.mgz --seg ^aseg.mgz -t freesurfer --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


JLF QC

bash /path/to/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /path/to/data -i "*t1_brain.nii.gz" --seg fused_WMGM_seg.nii.gz -t JLF --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /home/ehorwath/projects/mscamras -i "*t1w_brain.nii.gz" --seg fused_WMGM_seg.nii.gz -t JLF --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


CVS QC

bash /path/to/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /path/to/data -i epi_n4_brain.nii.gz --seg les_reg_epi.nii.gz -t cvs --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /home/ehorwath/projects/mscamras -i epi_n4_brain.nii.gz --seg les_reg_epi.nii.gz -t cvs --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


PRL QC

bash /path/to/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /path/to/data -i phase_n4_brain.nii.gz --seg lesions_reg_epi_labeled.nii.gz -t PRL --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /home/ehorwath/projects/prl -i phase_n4_brain.nii.gz --seg lesions_reg_epi_labeled.nii.gz -t PRL --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip


Step 2. Interactive Evaluation

This step runs the QC app to evaluate the accuracy of segmentation masks against the processed images.


Required flags:

-m or --mainpath: path to parent data folder
-t or --type: type of QC procedure - lesion, cvs, freesurfer, JLF, PRL. Default is lesion
--step: step of pipeline - prep, qc, post. Default is prep. This step is qc
--toolpath: path to pipeline folder

Other flags:

-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
--cores: number of cores used for parallel computing. Default is 1
-h or --help: show help message

For this step, the only flag that will differ across QC types is -t or --type. Be sure to use the same type as Step 1.

bash /path/to/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /path/to/data --step qc -t lesion --toolpath /path/to/PennSIVE_neuro_pip 

For example (Lesion QC):

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /home/ehorwath/projects/mscamras_defaced_regSites --step qc -t lesion --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip 


Step 3. Post-QC

This step allows for reviewing QC results in the app.


Required flags:

-m or --mainpath: path to parent data folder
--step: step of pipeline - prep, qc, post. Default is prep. This step is post
--toolpath: path to pipeline folder

Other flags:

-c or --container: which container to use: singularity, docker, local, cluster. Default is cluster
--sinpath: path to singularity image (only needed if using singularity container - don’t need to specify if using takim cluster)
--dockerpath: path to docker image (only needed if using docker container)
--cores: number of cores used for parallel computing. Default is 1
-h or --help: show help message

bash /path/to/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /path/to/data --step post --toolpath /path/to/PennSIVE_neuro_pip 

For example:

bash /home/ehorwath/projects/PennSIVE_neuro_pip/pipelines/BrainQC/code/bash/QC.sh -m /home/ehorwath/projects/mscamras_defaced_regSites --step post --toolpath /home/ehorwath/projects/PennSIVE_neuro_pip