Learning to Map 2D Ultrasound Images into 3D Space
with Minimal Human Annotation
Pak Hei Yeung
Moska Aliasi
Aris T. Papageorghiou
Monique Haak
Weidi Xie
Ana I.L. Namburete
[Paper]
[Code]
[Video]
[Bibtex]

Medical Image Analysis, Volume 70, May 2021

Demonstration



Abstract

In fetal neurosonography, aligning two-dimensional (2D) ultrasound scans to their corresponding plane in the three-dimensional (3D) space remains a challenging task. In this paper, we propose a convolutional neural network that predicts the position of 2D ultrasound fetal brain scans in 3D atlas space. Instead of purely supervised learning, which requires heavy annotation for each 2D scan, we train the model by sampling 2D slices from 3D fetal brain volumes and tasking it with predicting the inverse of the sampling process, resembling the idea of self-supervised learning. We propose a model that takes a set of images as input and learns to compare them in pairs. Each pairwise comparison is weighted by an attention module according to its contribution to the prediction, which is learnt implicitly during training. The feature representation for each image is thus computed by incorporating its relative position with respect to all the other images in the set, and is later used for the final prediction. We benchmark our model on 2D slices sampled from 3D fetal brain volumes at 18 to 22 weeks' gestational age, using three evaluation metrics, namely Euclidean distance, plane angles and normalized cross-correlation, which account for both the geometric and appearance discrepancy between the ground truth and the prediction. In all three metrics, our model outperforms a baseline model by as much as 23% as the number of input images increases. We further demonstrate that our model generalizes to (i) real 2D standard transthalamic plane images, on which it achieves performance comparable to human annotation, as well as (ii) video sequences of 2D freehand fetal brain scans.
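
To make the training setup concrete, the sketch below shows one way to sample a 2D slice from an atlas-aligned 3D volume given a plane defined by Euler angles and a translation offset, so that the sampling parameters themselves serve as the regression target and no manual annotation is needed. The function name, plane parameterization and slice size are illustrative assumptions, not the paper's exact implementation.

    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.spatial.transform import Rotation

    def sample_slice(volume, rotation_xyz_deg, offset_vox, size=160):
        """Extract a 2D slice from a 3D volume for a plane defined by Euler
        angles (degrees) and an offset (voxels) relative to the volume centre.
        The (rotation, offset) pair is the label the network learns to recover."""
        centre = (np.array(volume.shape) - 1) / 2.0
        R = Rotation.from_euler("xyz", rotation_xyz_deg, degrees=True).as_matrix()

        # Build an in-plane grid of sampling coordinates centred at the origin.
        u, v = np.meshgrid(np.arange(size) - size / 2,
                           np.arange(size) - size / 2, indexing="ij")
        plane_coords = np.stack([u, v, np.zeros_like(u)], axis=0).reshape(3, -1)

        # Rotate the canonical plane and shift it to the requested offset.
        world_coords = R @ plane_coords + (centre + np.asarray(offset_vox))[:, None]

        # Trilinear interpolation of the volume at the plane coordinates.
        return map_coordinates(volume, world_coords, order=1).reshape(size, size)

    # During training, plane parameters are drawn at random and used as labels.
    rng = np.random.default_rng(0)
    volume = rng.random((160, 160, 160)).astype(np.float32)  # stand-in for an aligned brain volume
    angles = rng.uniform(-30, 30, size=3)
    offset = rng.uniform(-20, 20, size=3)
    image = sample_slice(volume, angles, offset)
    target = np.concatenate([angles, offset])                # regression target for this slice

Similarly, the following PyTorch sketch illustrates one possible form of the attention-weighted pairwise comparison over a set of per-image features described above. The layer sizes, the concatenation-based comparison and the six-parameter plane output are assumptions made for the example, not the published architecture.

    import torch
    import torch.nn as nn

    class PairwiseAttentionHead(nn.Module):
        """Aggregate a set of per-image features by attention-weighted pairwise
        comparison, then regress plane parameters (3 angles + 3 offsets)."""

        def __init__(self, feat_dim=128, out_dim=6):
            super().__init__()
            self.compare = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())
            self.attend = nn.Linear(feat_dim, 1)            # scalar score per pair
            self.predict = nn.Linear(2 * feat_dim, out_dim)

        def forward(self, feats):                           # feats: (N, feat_dim)
            n = feats.size(0)
            # All ordered pairs (i, j): concatenate feature i with feature j.
            fi = feats.unsqueeze(1).expand(n, n, -1)
            fj = feats.unsqueeze(0).expand(n, n, -1)
            pair = self.compare(torch.cat([fi, fj], dim=-1))    # (N, N, feat_dim)

            # Attention over the comparisons made by each image i.
            w = torch.softmax(self.attend(pair), dim=1)         # (N, N, 1)
            context = (w * pair).sum(dim=1)                     # (N, feat_dim)

            # Each image keeps its own feature plus the attended context.
            return self.predict(torch.cat([feats, context], dim=-1))  # (N, out_dim)

    # Example: predict plane parameters for a set of 5 slice features.
    head = PairwiseAttentionHead()
    print(head(torch.randn(5, 128)).shape)  # torch.Size([5, 6])
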


Pipeline



Qualitative Results

(Left) Input 2D frames; (middle) slices sampled from the atlas using the predicted 3D location; and (right) predicted 3D plane location in atlas space.


Standard Plane Localization


Examples of successful cases (1-3) and unsatisfactory cases (4-6); the unsatisfactory cases are caused by misalignment between the testing volumes and the atlas.



Bibtex

   
@article{yeung2021learning,
  title={Learning to map 2D ultrasound images into 3D space with minimal human annotation},
  author={Yeung, Pak-Hei and Aliasi, Moska and Papageorghiou, Aris T and Haak, Monique and Xie, Weidi and Namburete, Ana IL},
  journal={Medical Image Analysis},
  volume={70},
  pages={101998},
  year={2021},
  publisher={Elsevier}
}
    

Acknowledgements

P.H. Yeung is grateful for support from the RC Lee Centenary Scholarship. A. Papageorghiou is supported by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC). W. Xie is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Programme Grant Seebibyte (EP/M013774/1) and Grant Visual AI (EP/T028572/1). A. Namburete is funded by the UK Royal Academy of Engineering under its Engineering for Development Research Fellowship scheme. We thank Lior Drukker for his valuable suggestions and comments on this work.

The template for this project webpage was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.