This notebook provides a minimal working example of the liver and liver cancer segmentation nnU-Net models, a set of tools for segmenting these anatomies from contrast-enhanced CT images. The models were trained on 201 contrast-enhanced CT images from several clinical sites. The training dataset included a variety of primary cancers and both pre- and post-therapy images, with variable volume size, field of view, slice thickness, and presence of metal artefacts.
We test the models by implementing an end-to-end (cloud-based) pipeline on publicly available CT scans hosted on the Imaging Data Commons (IDC), starting from raw DICOM CT data and ending with a DICOM SEG object storing the segmentation masks generated by the AI pipeline. The testing dataset is external and independent from the data used in the development phase of the models (training and validation), and covers a wide variety of image types, differing in acquisition settings, contrast-agent phase, and the presence, location, and size of tumor masses.
All operations - from pulling the data to postprocessing and standardising the results - are executed in a way that promotes transparency and reproducibility.
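The pipeline described above can be sketched as a sequence of command-line stages. The following is a minimal, hedged sketch (not the notebook's actual code): it assumes `dcm2niix` for DICOM-to-NIfTI conversion, the nnU-Net v1 `nnUNet_predict` CLI, and dcmqi's `itkimage2segimage` for producing the DICOM SEG object; the task name and all paths are placeholders.

```python
from pathlib import Path


def build_pipeline_commands(series_dir, nifti_dir, seg_dir, meta_json):
    """Return the shell commands for each stage of the sketched pipeline.

    All tool choices and the task name are assumptions for illustration;
    the notebook may use different converters and identifiers.
    """
    return [
        # 1. Convert the raw DICOM CT series to compressed NIfTI.
        ["dcm2niix", "-z", "y", "-o", str(nifti_dir), str(series_dir)],
        # 2. Run nnU-Net inference (v1 CLI); "Task_Liver" is a placeholder.
        ["nnUNet_predict", "-i", str(nifti_dir), "-o", str(seg_dir),
         "-t", "Task_Liver", "-m", "3d_fullres"],
        # 3. Encode the predicted label map as a standard DICOM SEG object,
        #    referencing the source DICOM series for spatial metadata.
        ["itkimage2segimage",
         "--inputImageList", str(Path(seg_dir) / "pred.nii.gz"),
         "--inputDICOMDirectory", str(series_dir),
         "--inputMetadata", str(meta_json),
         "--outputDICOM", str(Path(seg_dir) / "liver_seg.dcm")],
    ]


# The commands could then be executed in order, e.g. with
# subprocess.run(cmd, check=True) for each cmd.
```

Building the commands as plain lists keeps the pipeline inspectable and reproducible before anything is executed, which matches the notebook's emphasis on transparency.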
You can find the notebook here.
Please cite the following article if you use this code or pre-trained models:
Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J. and Maier-Hein, K.H., 2021. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), pp.203-211, https://doi.org/10.1038/s41592-020-01008-z.
The original code is published on GitHub under the Apache-2.0 license, and the pretrained networks can be found here.