A framework for urinary bladder segmentation in CT images using deep learning.
Contains code to train and test two different deep neural network architectures for semantic segmentation using training and testing data obtained from combined PET/CT scans.
To use the framework, you need:
- Python 3.5 with the packages specified in the requirements.txt file
- TensorFlow 1.3
- TensorFlow-Slim library
Our networks were trained and tested on the publicly available RIDER Lung PET CT Dataset.
The data was preprocessed and prepared using the MeVisLab network described in *Exploit 18F-FDG enhanced urinary bladder in PET data for Deep Learning Ground Truth Generation in CT scans*.
This software produces ground-truth segmentations of the urinary bladder in CT using the co-registered PET data: the PET radiotracer 18F-FDG accumulates in the urinary bladder, so this organ can be distinguished using simple thresholding. Furthermore, data augmentation is applied using the MeVisLab software. For further information, please refer to the corresponding paper:
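The thresholding idea can be sketched in a few lines. This is a minimal NumPy illustration only; the threshold value below is made up, and the actual value and all preprocessing are handled inside the MeVisLab network:

```python
import numpy as np

# Hypothetical uptake threshold; the real value is chosen inside the
# MeVisLab preprocessing network, not in this framework.
PET_THRESHOLD = 4.0

def bladder_mask_from_pet(pet_volume, threshold=PET_THRESHOLD):
    """Return a binary ground-truth mask: voxels whose PET uptake
    exceeds the threshold are labeled as urinary bladder."""
    return (pet_volume > threshold).astype(np.uint8)

# Toy example: a 2x2x2 "volume" with one high-uptake voxel.
pet = np.zeros((2, 2, 2), dtype=np.float32)
pet[0, 0, 0] = 10.0
mask = bladder_mask_from_pet(pet)
# mask[0, 0, 0] == 1, all other voxels == 0
```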
- Creating TFRecords files for training and testing data. The script `make_tfrecords_dataset.py` contains code to convert a directory of image files to the TensorFlow-recommended TFRecords file format. TFRecords files are easy and fast to process in TensorFlow.
- Training networks. The scripts `FCN_training.py` and `ResNet_training.py` contain code for training two different neural network architectures for semantic segmentation. FCN is based on FCN-8s by Long et al. using pre-trained VGG; ResNet is based on DeepLab by Chen et al. using pre-trained ResNet V2.
- Testing networks. The scripts `FCN_testing.py` and `ResNet_testing.py` contain code for testing the previously trained networks.
- Evaluation metrics. The file `metrics.py` contains functions to calculate the following metrics for evaluating segmentation results:
  - True Positive Rate (TPR)
  - True Negative Rate (TNR)
  - Intersection over Union (Jaccard index, IoU)
  - Dice-Sørensen coefficient (DSC)
  - Hausdorff distance (HD)
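As a rough illustration of how the overlap-based metrics relate to confusion-matrix counts (a minimal NumPy sketch, not the actual implementation in `metrics.py`):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute TPR, TNR, IoU and DSC for two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "TPR": tp / (tp + fn),
        "TNR": tn / (tn + fp),
        "IoU": tp / (tp + fp + fn),
        "DSC": 2 * tp / (2 * tp + fp + fn),
    }

# Toy example: one true positive, one false positive, two true negatives.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
m = segmentation_metrics(pred, truth)
# TPR = 1.0, TNR = 2/3, IoU = 0.5, DSC = 2/3
```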
To use the framework for creating a TFRecords file:
- Place your images and ground truth labels in folders called `Images` and `Labels`, respectively.
- Specify the path to your data, the desired filename and the desired image size in `make_tfrecords_dataset.py`.
- Run the script!
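Conceptually, such a conversion serializes each image/label pair as a `tf.train.Example` proto and writes it with a `TFRecordWriter`. A hedged sketch, assuming raw image bytes as input; the feature keys here are illustrative, not the ones `make_tfrecords_dataset.py` actually uses:

```python
import os
import tempfile

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def write_tfrecords(image_label_pairs, out_path):
    # TF 1.x exposes the writer under tf.python_io; newer releases under tf.io.
    writer_mod = getattr(tf, "python_io", None) or tf.io
    with writer_mod.TFRecordWriter(out_path) as writer:
        for image_bytes, label_bytes in image_label_pairs:
            example = tf.train.Example(features=tf.train.Features(feature={
                # Illustrative feature keys; the real script defines its own.
                "image_raw": _bytes_feature(image_bytes),
                "label_raw": _bytes_feature(label_bytes),
            }))
            writer.write(example.SerializeToString())

# Usage: write a single toy record to a temporary file.
out = os.path.join(tempfile.mkdtemp(), "toy.tfrecords")
write_tfrecords([(b"image-bytes", b"label-bytes")], out)
```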
To use the framework for training:
- Download the pre-trained model checkpoint you want to use from TensorFlow-Slim and place it in a `Checkpoints` folder in your project repository.
- Specify your paths in the top section of `FCN_training.py` or `ResNet_training.py`.
- Run the script!
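The paths to edit typically point at your TFRecords file, the downloaded checkpoint and a log directory, along these lines (hypothetical variable names; check the top of the script you use for the actual ones):

```python
# Hypothetical configuration block; the real names live at the top of
# FCN_training.py / ResNet_training.py.
TFRECORDS_FILE = "Data/training.tfrecords"   # output of make_tfrecords_dataset.py
CHECKPOINT_PATH = "Checkpoints/vgg_16.ckpt"  # pre-trained TF-Slim checkpoint
LOG_DIR = "Logs/"                            # where training summaries are written
```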
To use the framework for testing:
- Specify your paths in the top section of `FCN_testing.py` or `ResNet_testing.py`.
- Run the script!
Parts of the code are based on tf-image-segmentation. If you use it, please cite the corresponding paper:
@article{pakhomov2017deep,
title={Deep Residual Learning for Instrument Segmentation in Robotic Surgery},
author={Pakhomov, Daniil and Premachandran, Vittal and Allan, Max and Azizian, Mahdi and Navab, Nassir},
journal={arXiv preprint arXiv:1703.08580},
year={2017}
}
This project is licensed under the MIT License - see the LICENSE.md file for details.
If you use the framework, please cite the following paper:
Gsaxner, Christina et al. Exploit 18F-FDG Enhanced Urinary Bladder in PET Data for Deep Learning Ground Truth Generation in CT Scans. SPIE Medical Imaging 2018.
@inproceedings{gsaxner2018exploit,
title={Exploit 18 F-FDG enhanced urinary bladder in PET data for deep learning ground truth generation in CT scans},
author={Gsaxner, Christina and Pfarrkirchner, Birgit and Lindner, Lydia and Jakse, Norbert and Wallner, J{\"u}rgen and Schmalstieg, Dieter and Egger, Jan},
booktitle={Medical Imaging 2018: Biomedical Applications in Molecular, Structural, and Functional Imaging},
volume={10578},
pages={105781Z},
year={2018},
organization={International Society for Optics and Photonics}
}