Journal Papers

Stamm, Aymeric, Jolene Singh, Onur Afacan, and Simon Warfield. 2015. “Analytic Quantification of Bias and Variance of Coil Sensitivity Profile Estimators for Improved Image Reconstruction in MRI”. Med Image Comput Comput Assist Interv 9350: 684-91. https://doi.org/10.1007/978-3-319-24571-3_82.
Magnetic resonance (MR) imaging provides a unique in-vivo capability of visualizing tissue in the human brain non-invasively, which has tremendously improved patient care over the past decades. However, there are still prominent artifacts, such as intensity inhomogeneities due to the use of an array of receiving coils (RC) to measure the MR signal or noise amplification due to accelerated imaging strategies. It is critical to mitigate these artifacts for both visual inspection and quantitative analysis. The cornerstone to address this issue pertains to the knowledge of coil sensitivity profiles (CSP) of the RCs, which describe how the measured complex signal decays with the distance to the RC. Existing methods for CSP estimation share a number of limitations: (i) they primarily focus on CSP magnitude, while it is known that the solution to the MR image reconstruction problem involves complex CSPs and (ii) they only provide point estimates of the CSPs, which makes the task of optimizing the parameters and acquisition protocol for their estimation difficult. In this paper, we propose a novel statistical framework for estimating complex-valued CSPs. We define a CSP estimator that uses spatial smoothing and additional body coil data for phase normalization. The main contribution is to provide detailed information on the statistical distribution of the CSP estimator, which yields automatic determination of the optimal degree of smoothing for ensuring minimal bias and provides guidelines to the optimal acquisition strategy.
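To make the bias-variance trade-off described above concrete, here is a small, purely illustrative Python sketch (not the authors' implementation): a synthetic complex CSP is estimated as a spatially smoothed coil-to-body-coil ratio, and the Gaussian smoothing width is chosen by minimizing the empirical mean squared error against the known ground truth. Every signal, noise level, and kernel width below is an assumption made for demonstration only.

```python
# Hypothetical sketch: bias/variance trade-off in choosing a smoothing level
# for a complex coil sensitivity profile (CSP) ratio estimator.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic ground-truth complex CSP: smooth magnitude decay plus a slow phase ramp.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
true_csp = np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / (2 * 40.0 ** 2)) * np.exp(1j * 0.02 * x)

# Simulated measurements: idealized body-coil image and coil image = CSP * body + noise.
body = np.ones((ny, nx))
sigma_noise = 0.05
noise = sigma_noise * (rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx)))
coil_img = true_csp * body + noise

def smoothed_csp(coil_img, body, width):
    """Ratio estimator with Gaussian spatial smoothing of the real and imaginary parts."""
    ratio = coil_img / body
    return gaussian_filter(ratio.real, width) + 1j * gaussian_filter(ratio.imag, width)

# Too little smoothing -> high variance; too much -> high bias.
# Pick the kernel width with minimal empirical mean squared error.
widths = [0.5, 1, 2, 4, 8, 16]
mse = [np.mean(np.abs(smoothed_csp(coil_img, body, w) - true_csp) ** 2) for w in widths]
print("MSE per width:", dict(zip(widths, np.round(mse, 6))))
print("Best smoothing width on this synthetic example:", widths[int(np.argmin(mse))])
```

In the paper the optimal degree of smoothing is obtained analytically from the estimator's distribution rather than, as here, by comparing against a known ground truth; the sketch only illustrates why such an optimum exists.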
Scherrer, Benoit, Onur Afacan, Maxime Taquet, Sanjay Prabhu, Ali Gholipour, and Simon Warfield. 2015. “Accelerated High Spatial Resolution Diffusion-Weighted Imaging”. Inf Process Med Imaging 24: 69-81. https://doi.org/10.1007/978-3-319-19992-4_6.
Acquisition of a series of anisotropically oversampled volumes (so-called anisotropic "snapshots") followed by reconstruction in the image space has recently been proposed to increase the spatial resolution in diffusion-weighted imaging (DWI), providing a theoretical 8x acceleration at equal signal-to-noise ratio (SNR) compared to conventional dense k-space sampling. However, in most works, each DW image is reconstructed separately, and the fact that the DW images constitute different views of the same anatomy is ignored. In addition, current approaches are limited by their inability to reconstruct a high-resolution (HR) acquisition from snapshots with different subsets of diffusion gradients: an isotropic HR gradient image cannot be reconstructed if one of its anisotropic snapshots is missing, for example due to intra-scan motion, even if other snapshots for this gradient were successfully acquired. In this work, we propose a novel multi-snapshot DWI reconstruction technique that simultaneously achieves HR reconstruction and local tissue model estimation while enabling reconstruction from snapshots containing different subsets of diffusion gradients, providing increased robustness to patient motion and potential for acceleration. Our approach is formalized as a joint probabilistic model with missing observations, from which interactions between missing snapshots, HR reconstruction and a generic tissue model naturally emerge. We evaluate our approach with synthetic simulations, a simulated multi-snapshot scenario, and in vivo multi-snapshot imaging. We show that (1) our combined approach ultimately provides both better HR reconstruction and better tissue model estimation and (2) the error in the case of missing snapshots can be quantified. Our novel multi-snapshot technique will enable improved high-spatial-resolution characterization of brain connectivity and microstructure in vivo.
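The idea of reconstructing a high-resolution image from whichever anisotropic snapshots were actually acquired can be illustrated with a deliberately simplified toy model, which is not the paper's joint probabilistic formulation: each snapshot is treated as a block average of the HR image along one axis, and a least-squares estimate is obtained by gradient descent over the acquired snapshots only, so a missing snapshot simply drops out of the objective. All operators, sizes, and step sizes below are assumptions.

```python
# Hedged toy sketch: least-squares multi-snapshot super-resolution with
# possibly missing snapshots (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(1)
N, F = 32, 4                      # HR grid size and anisotropic downsampling factor

def downsample(img, axis, f=F):
    """Block-average along one axis (toy forward model for a thick-slice snapshot)."""
    if axis == 0:
        return img.reshape(img.shape[0] // f, f, img.shape[1]).mean(axis=1)
    return img.reshape(img.shape[0], img.shape[1] // f, f).mean(axis=2)

def upsample(img, axis, f=F):
    """Adjoint of block averaging: repeat along the axis and rescale by 1/f."""
    return np.repeat(img, f, axis=axis) / f

# Ground-truth HR image and two noisy anisotropic snapshots (one per axis).
hr_true = rng.random((N, N))
snapshots = {ax: downsample(hr_true, ax)
             + 0.01 * rng.standard_normal(downsample(hr_true, ax).shape)
             for ax in (0, 1)}

acquired = [0, 1]                 # set to [0] to mimic a snapshot lost to motion
hr = np.zeros((N, N))             # initial HR estimate
step = 1.0
for _ in range(200):              # gradient descent on 0.5 * sum_k ||A_k x - y_k||^2
    grad = np.zeros_like(hr)
    for ax in acquired:
        residual = downsample(hr, ax) - snapshots[ax]
        grad += upsample(residual, ax)
    hr -= step * grad

# The toy problem is underdetermined, so some error remains even with both snapshots;
# the point is that the objective adapts transparently to the acquired subset.
print("relative reconstruction error:",
      np.linalg.norm(hr - hr_true) / np.linalg.norm(hr_true))
```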

Pier, Danielle, Ali Gholipour, Onur Afacan, Clemente Velasco-Annis, Sean Clancy, Kush Kapur, Judy Estroff, and Simon Warfield. 2016. “3D Super-Resolution Motion-Corrected MRI: Validation of Fetal Posterior Fossa Measurements”. J Neuroimaging 26 (5): 539-44. https://doi.org/10.1111/jon.12342.
PURPOSE: Current diagnosis of fetal posterior fossa anomalies by sonography and conventional MRI is limited by fetal position, motion, and by two-dimensional (2D), rather than three-dimensional (3D), representation. In this study, we aimed to validate the use of a novel magnetic resonance imaging (MRI) technique, 3D super-resolution motion-corrected MRI, to image the fetal posterior fossa. METHODS: From a database of pregnant women who received fetal MRIs at our institution, images of 49 normal fetal brains were reconstructed. Six measurements of the cerebellum, vermis, and pons were obtained for all cases on 2D conventional and 3D reconstructed MRI, and the agreement between the two methods was determined using concordance correlation coefficients. Concordance of axial and coronal measurements of the transcerebellar diameter was also assessed within each method. RESULTS: Between the two methods, the concordance of measurements was high for all six structures (P < .001), and was highest for larger structures such as the transcerebellar diameter. Within each method, agreement of axial and coronal measurements of the transcerebellar diameter was superior in 3D reconstructed MRI compared to 2D conventional MRI (P < .001). CONCLUSIONS: This comparison study validates the use of 3D super-resolution motion-corrected MRI for imaging the fetal posterior fossa, as this technique results in linear measurements that have high concordance with 2D conventional MRI measurements. Lengths of the transcerebellar diameter measured within a 3D reconstruction are more concordant between imaging planes, as they correct for fetal motion and orthogonal slice acquisition. This technique will facilitate further study of fetal abnormalities of the posterior fossa.
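For readers unfamiliar with the agreement statistic referred to above, the following minimal sketch computes Lin's concordance correlation coefficient (CCC) for paired measurements; the measurement values are invented for illustration and are not study data.

```python
# Illustrative sketch only: Lin's concordance correlation coefficient for
# paired measurements from two methods (e.g., 2D conventional vs. 3D reconstructed MRI).
import numpy as np

def concordance_correlation(x, y):
    """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired transcerebellar-diameter measurements in millimetres.
mm_2d = [22.1, 25.4, 28.0, 30.2, 33.5, 36.9]
mm_3d = [22.4, 25.1, 28.3, 30.0, 33.9, 36.5]
print("CCC:", round(concordance_correlation(mm_2d, mm_3d), 3))
```

Unlike the Pearson correlation, the CCC also penalises systematic offsets between the two methods, which is why it is the appropriate measure of agreement here.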
Villarini, Barbara, Hykoush Asaturyan, Sila Kurugol, Onur Afacan, Jimmy Bell, and Louise Thomas. 2021. “3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities”. Proc IEEE Int Symp Comput Based Med Syst 2021: 166-71. https://doi.org/10.1109/cbms52027.2021.00066.
Accurate, quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges towards developing robust automated segmentation techniques, including high variations in anatomical structure and size, the presence of edge-based artefacts, and heavy, uncontrolled breathing that can produce blurred motion artefacts. This paper presents a novel computing approach for automatic organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model, and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal detailed organ or muscle boundaries. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and psoas muscle, and achieves mean Dice similarity coefficients (DSC) that surpass or are comparable with the state of the art. A qualitative evaluation performed by two independent radiologists verified the preservation of detailed organ and muscle boundaries.
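The two-part process can be summarised in a short structural sketch, assuming hypothetical localizer and segmenter callables in place of the trained Rb-UNet and Tiramisu networks; this is a scaffold for the crop-then-segment logic under those assumptions, not the authors' implementation.

```python
# Minimal structural sketch of a two-stage (localise, then segment) pipeline.
# The `localizer` and `segmenter` callables stand in for trained networks.
import numpy as np

def bounding_box(mask, margin, shape):
    """Axis-aligned 3D bounding box of a binary mask, expanded by a voxel margin."""
    idx = np.argwhere(mask > 0)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def two_stage_segment(volume, localizer, segmenter, margin=8):
    coarse = localizer(volume)                      # stage 1: coarse localisation mask
    box = bounding_box(coarse, margin, volume.shape)
    fine = segmenter(volume[box])                   # stage 2: boundary-preserving segmentation on the crop
    full = np.zeros(volume.shape, dtype=fine.dtype)
    full[box] = fine                                # paste the crop-level result back into full-size space
    return full

# Toy usage with dummy stand-in models (simple thresholding instead of trained networks).
vol = np.zeros((64, 64, 64))
vol[20:40, 25:45, 22:42] = 1.0
seg = two_stage_segment(vol,
                        localizer=lambda v: v > 0.5,
                        segmenter=lambda crop: (crop > 0.5).astype(np.uint8))
print("segmented voxels:", int(seg.sum()))
```

Restricting the second model to a cropped region of interest is the design choice that lets it preserve fine boundaries without having to process the full volume at once.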