The objective of this work is to achieve sensorless reconstruction of a 3D volume from a set of 2D freehand ultrasound images with a deep implicit representation. In contrast to the conventional approach of representing a 3D volume as a discrete voxel grid, we parameterize it as the zero level-set of a continuous function, i.e. we implicitly represent the 3D volume as a mapping from spatial coordinates to the corresponding intensity values. Our proposed model, termed ImplicitVol, takes a set of 2D scans and their estimated locations in 3D as input, jointly refining the estimated 3D locations and learning a full reconstruction of the 3D volume. When tested on real 2D ultrasound images, novel cross-sectional views sampled from ImplicitVol show significantly better visual quality than those sampled from existing reconstruction approaches, outperforming them by over 30% in NCC and SSIM between the output and the ground-truth 3D volumes on the testing data. The code will be made publicly available.
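To make the idea concrete, below is a minimal sketch (not the authors' released code) of an implicit volume representation as described above: an MLP that maps a 3D spatial coordinate to an intensity value, fitted to pixels sampled from 2D planes. The network width and depth, the Fourier positional encoding, and the toy training loop are illustrative assumptions; in the paper the 3D plane locations are additionally refined jointly with the network.

import math
import torch
import torch.nn as nn

class ImplicitVolume(nn.Module):
    """Maps 3D coordinates (x, y, z) to intensity values in [0, 1]."""

    def __init__(self, num_freqs: int = 8, hidden: int = 256):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs  # raw coords + sin/cos Fourier features
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def positional_encoding(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) in [-1, 1]; Fourier features help fit high-frequency detail.
        feats = [coords]
        for i in range(self.num_freqs):
            freq = (2.0 ** i) * math.pi
            feats.append(torch.sin(freq * coords))
            feats.append(torch.cos(freq * coords))
        return torch.cat(feats, dim=-1)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) -> intensities: (N, 1)
        return self.mlp(self.positional_encoding(coords))


if __name__ == "__main__":
    model = ImplicitVolume()
    # Toy example: random pixel locations and intensities stand in for pixels of
    # 2D ultrasound planes placed at their (estimated) 3D locations.
    coords = torch.rand(1024, 3) * 2 - 1       # 3D locations of 2D-plane pixels
    intensities = torch.rand(1024, 1)          # observed ultrasound intensities
    optim = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(10):
        pred = model(coords)
        loss = nn.functional.mse_loss(pred, intensities)
        optim.zero_grad()
        loss.backward()
        optim.step()

Once fitted, the same network can be queried at the coordinates of any novel cutting plane to render cross-sectional views that were never acquired.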
Bibtex

@article{yeung2021implicitvol,
  title={ImplicitVol: Sensorless 3D Ultrasound Reconstruction with Deep Implicit Representation},
  author={Yeung, Pak-Hei and Hesse, Linde and Aliasi, Moska and Haak, Monique and Xie, Weidi and Namburete, Ana IL and others},
  journal={arXiv preprint arXiv:2109.12108},
  year={2021}
}
Acknowledgements

The template of this project webpage was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.