Adaptive 3D Localization of 2D Freehand
Ultrasound Brain Images


MICCAI 2022

Pak Hei Yeung1
Moska Aliasi2
Monique Haak2
the INTERGROWTH-21st Consortium3
Weidi Xie4,5
Ana I.L. Namburete1

1Ultrasound NeuroImage Analysis Group, University of Oxford
2Division of Fetal Medicine, Leiden University Medical Center
3Nuffield Department of Women’s and Reproductive Health, University of Oxford
4Shanghai Jiao Tong University
5Visual Geometry Group, University of Oxford

[Paper]
[Code]
[Video]
[Bibtex]


Video


*subtitles available

Abstract


Two-dimensional (2D) freehand ultrasound is the mainstay in prenatal care and fetal growth monitoring. The task of matching corresponding cross-sectional planes in the 3D anatomy for a given 2D ultrasound brain scan is essential in freehand scanning, but challenging. We propose AdLocUI, a framework that Adaptively Localizes 2D Ultrasound Images in the 3D anatomical atlas without using any external tracking sensor. We first train a convolutional neural network with 2D slices sampled from co-aligned 3D ultrasound volumes to predict their locations in the 3D anatomical atlas. Next, we fine-tune it with 2D freehand ultrasound images using a novel unsupervised cycle consistency, which utilizes the fact that the overall displacement of a sequence of images in the 3D anatomical atlas is equal to the displacement from the first image to the last in that sequence. We demonstrate that AdLocUI can adapt to three different ultrasound datasets, acquired with different machines and protocols, and achieves significantly better localization accuracy than the baselines. AdLocUI can be used for sensorless 2D freehand ultrasound guidance at the bedside.
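To make the cycle consistency described above concrete, here is a minimal PyTorch sketch. It assumes a hypothetical displacement head, disp_net, that takes a pair of 2D images and predicts their relative displacement in the 3D atlas; the paper's exact network interface and loss form (MSE here) are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def cycle_consistency_loss(disp_net, frames):
        # `disp_net` (hypothetical) predicts the displacement D_ik in the
        # 3D anatomical atlas between a pair of 2D images. `frames` is a
        # list of consecutive 2D freehand images from one sweep.
        step_disps = torch.stack(
            [disp_net(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
        )
        accumulated = step_disps.sum(dim=0)       # sum of frame-to-frame displacements
        direct = disp_net(frames[0], frames[-1])  # directly predicted first-to-last displacement
        # Cycle consistency: the accumulated displacement along the sweep
        # should equal the displacement from the first image to the last.
        return F.mse_loss(accumulated, direct)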


Pipeline


Pipeline of our proposed framework, AdLocUI. During training, 2D slices, Si, sampled from co-aligned 3D volumes are used to train a regression ConvNet to predict the locations, Li, and displacements, Dik, of the 2D slices in the 3D anatomical atlas. The ConvNet is then fine-tuned in an unsupervised manner with 2D freehand ultrasound images, Ii, using the proposed cycle consistency. We can then use the fine-tuned ConvNet to localize images Ii from the same domain (i.e. acquired with the same machine and protocol) in the predefined 3D anatomical atlas.
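A minimal sketch of the supervised pre-training stage follows. All names (PlaneRegressor, loc_dim, the backbone layout) are illustrative assumptions rather than the paper's architecture, and the loss on ground-truth atlas locations is likewise assumed to be a simple MSE.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PlaneRegressor(nn.Module):
        # Illustrative ConvNet that regresses a 2D slice's location L_i in
        # atlas coordinates (here, e.g., three 3D anchor points -> 9 values).
        def __init__(self, loc_dim=9):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.loc_head = nn.Linear(64, loc_dim)

        def forward(self, x):
            return self.loc_head(self.backbone(x))

    def supervised_step(model, optimizer, slices, gt_locations):
        # One training step on slices S_i sampled from co-aligned 3D
        # volumes, whose atlas locations L_i are known by construction.
        optimizer.zero_grad()
        loss = F.mse_loss(model(slices), gt_locations)
        loss.backward()
        optimizer.step()
        return loss.item()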


Datasets

Summary of the datasets used in our experiments.


Results

Evaluation results (mean ± standard deviation) on volume-sampled 2D images in two settings, (a) offline analysis and (b) online prediction, evaluated by Euclidean distance (ED) and dihedral angle (DA). The voxel size is 0.6 mm. ↓ indicates that lower values are more accurate. * indicates that manual annotation was used.
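For clarity, the sketch below shows how the two reported metrics could be computed, assuming each plane is represented by three non-collinear points in atlas voxel coordinates; this parameterization is an assumption for illustration. Distances in voxels convert to millimetres via the 0.6 mm voxel size.

    import numpy as np

    def euclidean_distance(pred_pts, gt_pts):
        # Mean Euclidean distance (ED), in voxels, between corresponding
        # points of the predicted and ground-truth planes (shape: (3, 3)).
        return np.linalg.norm(pred_pts - gt_pts, axis=-1).mean()

    def dihedral_angle(pred_pts, gt_pts):
        # Dihedral angle (DA), in degrees, between the two plane normals.
        def unit_normal(p):
            n = np.cross(p[1] - p[0], p[2] - p[0])
            return n / np.linalg.norm(n)
        cos = np.clip(abs(np.dot(unit_normal(pred_pts), unit_normal(gt_pts))), 0.0, 1.0)
        return np.degrees(np.arccos(cos))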


Qualitative Results


Localization of 2D freehand ultrasound images in the 3D anatomical atlas (i.e. the fetal brain). 2D slices sampled from the 3D atlas at the image locations predicted by the baseline (Yeung et al.) and by AdLocUI are presented; our predictions show better correspondence with the ultrasound images (emphasized by the red arrows), suggesting more accurate 3D localization by AdLocUI.
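The atlas slices shown in the figure can be resampled from the volume at a predicted plane location. A minimal sketch follows, assuming the plane is given by an origin and two orthonormal in-plane axes in atlas voxel coordinates (a hypothetical parameterization, not the paper's):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def sample_slice(atlas, origin, u, v, size=160):
        # Resample a size x size 2D slice from the 3D atlas volume.
        # `origin` (3,) is the slice's corner; `u`, `v` (3,) are orthonormal
        # in-plane axes, all in atlas voxel coordinates (illustrative only).
        r, c = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
        pts = origin[:, None, None] + u[:, None, None] * r + v[:, None, None] * c
        return map_coordinates(atlas, pts, order=1)  # trilinear interpolation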



Bibtex

   
      @inproceedings{yeung2022adaptive,
        title={Adaptive 3D Localization of 2D Freehand Ultrasound Brain Images},
        author={Yeung, Pak-Hei and Aliasi, Moska and Haak, Monique and the INTERGROWTH-21st Consortium and Xie, Weidi and Namburete, Ana IL},
        booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
        pages={},
        year={2022}
      }
    

Acknowledgements

P.H. Yeung is grateful for support from the RC Lee Centenary Scholarship. A. Namburete is funded by the UK Royal Academy of Engineering under its Engineering for Development Research Fellowship scheme. W. Xie is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Programme Grant Seebibyte (EP/M013774/1) and Grant Visual AI (EP/T028572/1).

The template for this project webpage was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.