This repository contains the code used to run the experiments on Virtual Contrast Enhancement in Contrast-Enhanced Spectral Mammography (CESM).
- Clone the repository and move into its folder:
```bash
git clone <repository-url>
cd VCE_CESM
```
- Create a Python virtual environment (optional):
```bash
python -m venv ve-vce_cesm
```
- Activate the virtual environment:
```bash
source ve-vce_cesm/bin/activate
```
- Install the dependencies:
```bash
pip install -r requirements.txt
```
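Note that the activation command above is for Linux/macOS; on Windows, the virtual environment is activated with the equivalent command:

```bash
ve-vce_cesm\Scripts\activate
```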
The code is organized into three folders:
- "configs": contains YAML files used to set experiment parameters.
- "data": contains "raw" and "processed" folders. The "processed" folder contains TXT files with paths to dataset images, while the "raw" folder should contain images without preprocessing.
- "src": contains "model" and "utils" folders. The "model" folder includes subfolders for Autoencoder, CycleGAN, and PixPix, each containing specific model code. The "utils" folder includes the "utils.py" generic utility code and the "util_data.py" code for image processing.
For each model (Autoencoder, CycleGAN, and Pix2Pix), you can run the training, transfer learning, or testing code. These scripts are located in the "src/model/Autoencoder", "src/model/CycleGAN", and "src/model/Pix2Pix" folders.
- Running the training code trains the model on the public dataset. Experiment parameters can be set using the configuration files "autoencoder_train.yaml", "cyclegan_train.yaml", or "pix2pix_train.yaml".
- Running the transfer learning code loads a model pre-trained on the public dataset and retrains it on the FPUCBM dataset. Experiment parameters, including which pre-trained model to load, can be set using the configuration files "autoencoder_tran_learn.yaml", "cyclegan_tran_learn.yaml", or "pix2pix_tran_learn.yaml".
- Running the test code evaluates the trained model on the desired dataset. Experiment parameters, including the test dataset, can be set using the configuration files "autoencoder_test.yaml", "cyclegan_test.yaml", or "pix2pix_test.yaml". A hypothetical sketch of one of these configuration files is given after this list.
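The actual keys accepted by these YAML files are defined by the code in each model folder. Purely as a hypothetical sketch, a training configuration such as "pix2pix_train.yaml" might look like the following, where every key name and value is an assumption:

```yaml
# Hypothetical sketch of a training configuration.
# All key names and values below are assumptions, not the repository's actual schema.
experiment_name: pix2pix_public_train  # assumed tag for logs and checkpoints
data:
  split_dir: data/processed            # assumed location of the TXT path lists
  image_size: 256                      # assumed input resolution
training:
  epochs: 200                          # assumed
  batch_size: 8                        # assumed
  learning_rate: 0.0002                # assumed
  checkpoint_dir: checkpoints/pix2pix  # assumed output folder for model weights
```

The entry-point script names are likewise not given in this README; assuming each model folder contains a hypothetical "train.py" that accepts a configuration path, a run could look like:

```bash
python src/model/Pix2Pix/train.py --config configs/pix2pix_train.yaml
```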
For questions and comments, feel free to contact: aurora.rofena@unicampus.it, valerio.guarrasi@unicampus.it
If you use our code in your study, please cite as:
```bibtex
@article{rofena2024deep,
  title={A deep learning approach for virtual contrast enhancement in contrast enhanced spectral mammography},
  author={Rofena, Aurora and Guarrasi, Valerio and Sarli, Marina and Piccolo, Claudia Lucia and Sammarra, Matteo and Zobel, Bruno Beomonte and Soda, Paolo},
  journal={Computerized Medical Imaging and Graphics},
  pages={102398},
  year={2024},
  publisher={Elsevier}
}
```
You can download the article, *A Deep Learning Approach for Virtual Contrast Enhancement in Contrast Enhanced Spectral Mammography*, from the journal's website.
The code is distributed under an MIT License.