Sensorless Volumetric Reconstruction of Fetal Brain Freehand
Ultrasound Scans with Deep Implicit Representation


Medical Image Analysis, Volume 94, May 2024

Pak Hei Yeung1
Linde Hesse1
Moska Aliasi2
Monique Haak2
The INTERGROWTH-21st Consortium3
Weidi Xie4*
Ana I.L. Namburete1*

1Oxford Machine Learning in NeuroImaging Lab, University of Oxford
2Department of Obstetrics, Leiden University Medical Center
3Nuffield Department of Women's and Reproductive Health, University of Oxford
4Visual Geometry Group, University of Oxford
* These authors jointly supervised this work

[Paper (MedIA)]
[Paper (arXiv)]
[Code]
[Video]
[Bibtex]



Demonstration



Abstract


Three-dimensional (3D) ultrasound imaging has contributed to our understanding of fetal developmental processes by providing rich contextual information of the inherently 3D anatomies. However, its use is limited in clinical settings due to high purchase costs and limited diagnostic practicality. Freehand 2D ultrasound imaging, in contrast, is routinely used in standard obstetric exams, but inherently lacks a 3D representation of the anatomies, which limits its potential for more advanced assessment. Such full representations are challenging to recover even with external tracking devices, due to internal fetal movement that is independent of the operator-led trajectory of the probe. Capitalizing on the flexibility offered by freehand 2D ultrasound acquisition, we propose ImplicitVol to reconstruct 3D volumes from non-sensor-tracked 2D ultrasound sweeps. Conventionally, reconstructions are performed on a discrete voxel grid. We, however, employ a deep neural network to represent, for the first time, the reconstructed volume as an implicit function. Specifically, ImplicitVol takes a set of 2D images as input, predicts their locations in 3D space, jointly refines the inferred locations, and learns a full volumetric reconstruction. When tested on natively acquired and volume-sampled 2D ultrasound video sequences collected from scanners of different manufacturers, the 3D volumes reconstructed by ImplicitVol show significantly better visual and semantic quality than existing interpolation-based reconstruction approaches. The inherent continuity of the implicit representation also enables ImplicitVol to reconstruct the volume at arbitrarily high resolutions. As formulated, ImplicitVol has the potential to integrate seamlessly into the clinical workflow, while providing richer information for diagnosis and evaluation of the developing brain.
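
For readers curious what such an implicit representation looks like in practice, below is a minimal, illustrative sketch in PyTorch of a coordinate MLP that maps a 3D location to an ultrasound intensity. All names and hyperparameters here (ImplicitVolume, num_freqs, the NeRF-style positional encoding, layer sizes) are our assumptions for illustration, not the authors' released code.

    # Illustrative sketch only (not the authors' released code): a coordinate
    # MLP that represents the reconstructed volume as a continuous function
    # f(x, y, z) -> intensity, queryable at any resolution.
    import math
    import torch
    import torch.nn as nn

    def positional_encoding(coords, num_freqs=10):
        """NeRF-style encoding: (N, 3) coords in [-1, 1] -> (N, 3 * 2 * num_freqs)."""
        freqs = (2.0 ** torch.arange(num_freqs, device=coords.device)) * math.pi
        angles = coords[..., None] * freqs                  # (N, 3, num_freqs)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(start_dim=-2)

    class ImplicitVolume(nn.Module):
        """MLP mapping encoded 3D coordinates to intensities; sizes are assumptions."""
        def __init__(self, num_freqs=10, hidden=256, depth=5):
            super().__init__()
            layers, in_dim = [], 3 * 2 * num_freqs
            for _ in range(depth):
                layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
                in_dim = hidden
            layers += [nn.Linear(hidden, 1), nn.Sigmoid()]  # intensity in [0, 1]
            self.net, self.num_freqs = nn.Sequential(*layers), num_freqs

        def forward(self, coords):                          # coords: (N, 3)
            return self.net(positional_encoding(coords, self.num_freqs))

Because the learned function is continuous, querying it on a denser grid of coordinates yields a higher-resolution volume without retraining, which is what makes reconstruction at arbitrarily high resolutions possible.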


Pipeline
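
As a companion to the sketch above, the joint optimization described in the abstract, refining each frame's predicted 3D pose while fitting the volume, might look roughly as follows. This is a simplified, hypothetical sketch: the axis-angle pose parameterization, the L1 loss, the one-frame-per-step sampling, and the learning rates are all our assumptions; `model` is an ImplicitVolume-like network as above, and the initial poses would come from a plane-localization network.

    # Illustrative joint optimization (a simplification of the described
    # pipeline): refine each frame's rigid pose while fitting the implicit
    # volume so that intensities sampled on the posed planes match the frames.
    import torch

    def axis_angle_to_matrix(v, eps=1e-8):
        """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
        theta = v.norm() + eps
        kx, ky, kz = v / theta
        K = torch.zeros(3, 3, device=v.device)
        K[0, 1], K[0, 2] = -kz, ky
        K[1, 0], K[1, 2] = kz, -kx
        K[2, 0], K[2, 1] = -ky, kx
        I = torch.eye(3, device=v.device)
        return I + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

    def make_plane_grid(h, w, device):
        """Pixel coordinates of a frame on the canonical z = 0 plane, in [-1, 1]."""
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=device),
                                torch.linspace(-1, 1, w, device=device),
                                indexing="ij")
        return torch.stack([xs, ys, torch.zeros_like(xs)], -1).reshape(-1, 3)

    def fit(frames, init_rot, init_trans, model, iters=2000, lr=1e-3):
        """frames: (F, H, W) in [0, 1]; init_rot/init_trans: (F, 3) initial poses
        (e.g. predicted by a plane-localization network); model: ImplicitVolume."""
        rot = init_rot.clone().requires_grad_(True)       # poses are refined
        trans = init_trans.clone().requires_grad_(True)   # jointly with the MLP
        opt = torch.optim.Adam([{"params": model.parameters(), "lr": lr},
                                {"params": [rot, trans], "lr": lr * 0.1}])
        n_frames, h, w = frames.shape
        grid = make_plane_grid(h, w, frames.device)       # (H*W, 3)
        for _ in range(iters):
            i = torch.randint(n_frames, (1,)).item()      # one frame per step
            R = axis_angle_to_matrix(rot[i])
            coords = grid @ R.T + trans[i]                # plane -> volume space
            pred = model(coords).reshape(h, w)
            loss = torch.nn.functional.l1_loss(pred, frames[i])
            opt.zero_grad(); loss.backward(); opt.step()

Giving the pose parameters a smaller learning rate than the network is one plausible way to keep the refinement stable; the actual schedule in the paper may differ.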




Qualitative Results (Volume-Sampled Images)



Visualization of 3D reconstructions from volume-sampled test images by different approaches. Images sampled at different novel cross-sectional views are presented and compared with the ground truth.


Qualitative Results (Native Freehand Images)



Results of 3D reconstruction from native freehand 2D ultrasound. Novel-view images sampled from different planes of the volumes reconstructed by different approaches are presented. ImplicitVol shows better visual quality in under-sampled regions (yellow boxes) and is more robust against inaccurate position estimation (red boxes).


Quantitative Results



Evaluation results (mean ± standard deviation) of different approaches on volume-sampled 2D images. ↑ indicates that higher values are more accurate, and vice versa.



Bibtex

   
      @article{yeung2021implicitvol,
        title={ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation},
        author={Yeung, Pak-Hei and Hesse, Linde and Aliasi, Moska and Haak, Monique
                and Xie, Weidi and Namburete, Ana IL and others},
        journal={arXiv preprint arXiv:2109.12108},
        year={2021}
      }

      @article{yeung2024implicitvol,
        title={Sensorless Volumetric Reconstruction of Fetal Brain Freehand Ultrasound Scans
               with Deep Implicit Representation},
        author={Yeung, Pak-Hei and Hesse, Linde and Aliasi, Moska and Haak, Monique
                and Xie, Weidi and Namburete, Ana IL and others},
        journal={Medical Image Analysis},
        volume={94},
        pages={103147},
        year={2024},
        publisher={Elsevier}
      }
    

Acknowledgements

PH. Yeung is grateful for support from the RC Lee Centenary Scholarship. L. Hesse acknowledges the support of the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Award. W. Xie is supported by the EPSRC Programme Grant Visual AI (EP/T028572/1). A. Namburete is funded by the UK Royal Academy of Engineering under its Engineering for Development Research Fellowship scheme and the Academy of Medical Sciences. We thank Madeleine Wyburd and Nicola Dinsdale for their valuable suggestions and comments about the work.

The template of this project webpage was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.