Sli2Vol: Annotate a 3D Volume from a Single Slice
with Self-Supervised Learning


Pak Hei Yeung1
Ana I.L. Namburete1
Weidi Xie2

1Ultrasound NeuroImage Analysis Group, University of Oxford
2Visual Geometry Group, University of Oxford

[Code (coming soon)]


The objective of this work is to segment any arbitrary structures of interest (SOI) in 3D volumes by annotating only a single slice (i.e., semi-automatic 3D segmentation). We show that high accuracy can be achieved by simply propagating the 2D slice segmentation with an affinity matrix between consecutive slices, which can be learnt in a self-supervised manner, namely slice reconstruction. Specifically, we compare the proposed framework, termed Sli2Vol, with supervised approaches and two other unsupervised/self-supervised slice registration approaches, on 8 public datasets (both CT and MRI scans), spanning 9 different SOIs. Without any parameter tuning, the same model achieves superior performance, with Dice scores (0-100 scale) of over 80 on most of the benchmarks, including ones unseen during training. Our results show the generalizability of the proposed approach across data from different machines and with different SOIs: a major use case of semi-automatic segmentation methods, where fully supervised approaches would normally struggle. The source code will be made publicly available.
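To make the propagation idea concrete, below is a minimal sketch (not the paper's implementation) of affinity-based mask propagation: for each pixel of the next slice, an affinity distribution over a local window of the current slice is computed (here a softmax over raw intensity similarity, whereas Sli2Vol learns features via slice reconstruction), and the mask is carried forward as an affinity-weighted average. All names and parameters (`propagate_mask`, `window`, `temperature`) are illustrative assumptions.

```python
import numpy as np

def propagate_mask(volume, mask0, window=3, temperature=0.1):
    """Propagate a 2D annotation through a 3D volume, slice by slice.

    volume: (D, H, W) array; mask0: (H, W) annotation of slice 0.
    For each target pixel, affinity = softmax of negative squared
    intensity difference over a local window of the previous slice;
    the mask value is the affinity-weighted sum of the previous mask.
    NOTE: a toy stand-in for learned features, for illustration only.
    """
    D, H, W = volume.shape
    masks = np.zeros((D, H, W), dtype=float)
    masks[0] = mask0
    r = window // 2
    for z in range(1, D):
        # pad the source slice and mask so border windows stay valid
        src = np.pad(volume[z - 1], r, mode="edge")
        msk = np.pad(masks[z - 1], r, mode="edge")
        tgt = volume[z]
        new_mask = np.zeros((H, W))
        for i in range(H):
            for j in range(W):
                patch = src[i:i + window, j:j + window]
                mpatch = msk[i:i + window, j:j + window]
                # affinity: softmax over intensity similarity
                sim = -(patch - tgt[i, j]) ** 2 / temperature
                w = np.exp(sim - sim.max())
                w /= w.sum()
                new_mask[i, j] = (w * mpatch).sum()
        masks[z] = new_mask
    return masks > 0.5
```

In Sli2Vol the affinity is computed between learned per-pixel feature embeddings rather than raw intensities, which is what allows a single trained model to transfer across scanners and SOIs.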



Summary of the datasets used in our experiments.


Evaluation results of different approaches across datasets and SOIs. Mean Dice score ± standard deviation across volumes is reported; higher is better. In row (a), results from both state-of-the-art methods and 3D UNets trained by us (values in brackets) are reported.

Some Qualitative Results

Examples of segmentation results generated by Sli2Vol. The middle slice carries the initial annotation. Red contours represent the ground-truth segmentation; blue contours represent the segmentation generated by Sli2Vol.


PH. Yeung is grateful for support from the RC Lee Centenary Scholarship. A. Namburete is funded by the UK Royal Academy of Engineering under its Engineering for Development Research Fellowship scheme. W. Xie is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Programme Grant Seebibyte (EP/M013774/1) and Grant Visual AI (EP/T028572/1).

The template of this project webpage was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.