ImplicitVol: Sensorless 3D Ultrasound Reconstruction
with Deep Implicit Representation


arXiv preprint

Pak Hei Yeung1
Linde Hesse1
Moska Aliasi2
Monique Haak2
The INTERGROWTH-21st Consortium3
Weidi Xie4*
Ana I.L. Namburete1*

1Oxford Machine Learning in NeuroImaging Lab, University of Oxford
2Department of Obstetrics, Leiden University Medical Center
3Nuffield Department of Women's and Reproductive Health, University of Oxford
4Visual Geometry Group, University of Oxford
* These authors jointly supervised this work

[Paper]
[Code (coming soon)]
[Video]
[Bibtex]



Demonstration



Abstract


The objective of this work is to achieve sensorless reconstruction of a 3D volume from a set of 2D freehand ultrasound images with a deep implicit representation. In contrast to the conventional approach of representing a 3D volume as a discrete voxel grid, we parameterize it as the zero level-set of a continuous function, i.e. implicitly representing the 3D volume as a mapping from spatial coordinates to the corresponding intensity values. Our proposed model, termed ImplicitVol, takes a set of 2D scans and their estimated 3D locations as input, jointly refining the estimated 3D locations and learning a full reconstruction of the 3D volume. When tested on real 2D ultrasound images, novel cross-sectional views sampled from ImplicitVol show significantly better visual quality than those sampled from existing reconstruction approaches, outperforming them by over 30% in NCC and SSIM between the output and the ground truth on the 3D volume test data. The code will be made publicly available.
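The core idea above, representing a volume as a continuous mapping from 3D coordinates to intensities, can be sketched with a small coordinate-based network. This is a minimal illustrative example, not the authors' architecture: the two-layer numpy MLP, the sinusoidal coordinate encoding, and all layer sizes are assumptions for the sketch, and training (as well as the joint refinement of slice locations) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(coords, n_freq=4):
    """Sin/cos encoding of coordinates, as is common for coordinate-based
    networks (illustrative choice, not the paper's exact encoding)."""
    freqs = (2.0 ** np.arange(n_freq)) * np.pi          # (n_freq,)
    angles = coords[..., None] * freqs                  # (N, 3, n_freq)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(coords.shape[0], -1)             # (N, 3 * 2 * n_freq)

# Randomly initialised two-layer MLP; in practice the weights would be
# fitted to the observed 2D scans.
d_in, d_hidden = 3 * 2 * 4, 64
W1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
W2 = rng.normal(0.0, 0.1, (d_hidden, 1))

def implicit_volume(coords):
    """Evaluate the implicit volume at arbitrary 3D points (N, 3)."""
    h = np.maximum(fourier_features(coords) @ W1, 0.0)  # ReLU hidden layer
    return (h @ W2).squeeze(-1)                         # one intensity per point

# Because the representation is continuous, a novel cross-sectional view is
# obtained simply by evaluating a grid of points on any plane, e.g. z = 0.5,
# without that plane ever having been acquired as a 2D scan.
u, v = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
plane = np.stack([u.ravel(), v.ravel(), np.full(u.size, 0.5)], axis=-1)
view = implicit_volume(plane).reshape(32, 32)
print(view.shape)  # (32, 32)
```

Sampling a different view only changes the coordinates passed in, which is what makes arbitrary novel cross-sections cheap to render from the learned function.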


Pipeline




Qualitative Results (Volume-Sampled Images)



Visualization of 3D reconstruction from volume-sampled testing images by different approaches. Images sampled at different novel cross-sectional views are presented and compared with the ground-truth.


Qualitative Results (Native Freehand Images)



Results of 3D reconstruction from native freehand 2D ultrasound. Novel-view images sampled from different planes of the volumes reconstructed by different approaches are presented. ImplicitVol shows better visual quality in under-sampled regions (yellow boxes) and is more robust against inaccurate position estimation (red boxes).


Quantitative Results



Evaluation results (mean ± standard deviation) of different approaches on volume-sampled 2D images. ↑ indicates that higher values are more accurate, and vice versa.
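The evaluation relies on standard image-similarity metrics such as NCC and SSIM. As a point of reference for how NCC compares a sampled view against the ground truth, here is a minimal global normalized cross-correlation in numpy; this is a generic sketch of the metric, not the paper's evaluation code, and the epsilon value is an arbitrary choice for numerical safety.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation between two images.
    Values near 1 indicate high similarity; near 0, no correlation."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
print(ncc(img, img))                    # close to 1 for identical images
print(ncc(img, rng.random((64, 64))))   # near 0 for unrelated noise
```

SSIM additionally accounts for local structure, luminance, and contrast, so the two metrics together give a more complete picture of reconstruction quality than either alone.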



Bibtex

   
      @article{yeung2021implicitvol,
        title={ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation},
        author={Yeung, Pak-Hei and Hesse, Linde and Aliasi, Moska and Haak, Monique 
                and Xie, Weidi and Namburete, Ana IL and others},
        journal={arXiv preprint arXiv:2109.12108},
        year={2021}
      }
    

Acknowledgements

PH. Yeung is grateful for support from the RC Lee Centenary Scholarship. A. Namburete is funded by the UK Royal Academy of Engineering under its Engineering for Development Research Fellowship scheme. W. Xie is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Programme Grant Seebibyte (EP/M013774/1) and Grant Visual AI (EP/T028572/1).

The template of this project webpage was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.