TorchRadon: Fast Differentiable Routines for Computed Tomography

TorchRadon is a PyTorch extension written in CUDA that implements differentiable routines for solving computed tomography (CT) reconstruction problems.

The library is designed to help researchers working on CT problems combine deep learning and model-based approaches.

Main features:

  • Forward projections, back projections and shearlet transforms are differentiable and integrated with PyTorch .backward() (see the usage sketch after the lists below).
  • Up to 125x faster than the Astra Toolbox.
  • Batch operations: fully exploit the power of modern GPUs by processing multiple images in parallel.
  • Transparent API: all operations are seamlessly integrated with PyTorch; gradients can be computed with .backward() and half precision can be used with Nvidia AMP.
  • Half precision: storing data in half precision gives a significant speedup in Radon forward and back projections, with a very small loss of accuracy.

Implemented operations:

  • Parallel Beam projections
  • Fan Beam projections
  • Shearlet transform
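
As a quick illustration of the points above, here is a minimal usage sketch. It is not taken from this README: the constructor signature Radon(resolution, angles) and the forward/backprojection methods are assumptions based on the documented API, and details may differ in your installed version.

import numpy as np
import torch
from torch_radon import Radon

batch_size, image_size, n_angles = 8, 128, 64

# A whole batch of images is projected in a single call on the GPU.
x = torch.randn(batch_size, image_size, image_size, device="cuda", requires_grad=True)
angles = np.linspace(0, np.pi, n_angles, endpoint=False)

radon = Radon(image_size, angles)                # parallel-beam geometry (assumed signature)
sinogram = radon.forward(x)                      # differentiable forward projection
reconstruction = radon.backprojection(sinogram)  # differentiable back projection

# Gradients flow through both operators, so the transform can sit inside a network.
reconstruction.mean().backward()
print(x.grad.shape)  # same shape as x

Per the feature list above, the same calls can also be made on half-precision CUDA tensors or under Nvidia AMP to obtain the half-precision speedups.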

Installation

Currently only Linux is supported; if you are running a different OS, please use Google Colab or the Docker image.

Precompiled packages

If you are running Linux you can install Torch Radon by running:

wget -qO- https://raw.githubusercontent.com/matteo-ronchetti/torch-radon/master/auto_install.py | python -

Google Colab

You can try the library from your browser using Google Colab; an example notebook is available here.

Docker Image

Docker images with CUDA, PyTorch and Torch Radon preinstalled are available here.

docker pull matteoronchetti/torch-radon

To use the GPU in Docker you need to use nvidia-docker.
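
For example, with Docker 19.03+ and the NVIDIA container toolkit installed, the image can typically be started with GPU access like this (the exact invocation depends on your setup):

docker run --gpus all -it matteoronchetti/torch-radon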

Build from source

You need to have CUDA and PyTorch installed; then run:

git clone https://github.com/matteo-ronchetti/torch-radon.git
cd torch-radon
python setup.py install

If you encounter any problems, please contact the author or open an issue.

Benchmarks

The library is noticeably faster than the Astra Toolbox, especially when data is already on the GPU. The main disadvantage of Astra is that it only accepts inputs that are on the CPU, which makes training end-to-end neural networks very inefficient. The following benchmark compares the speed of the Astra Toolbox and Torch Radon on an Nvidia V100 GPU (figure: V100 benchmark).

If we set clip_to_circle=True (considering only the part of the image inside the inscribed circle), the speed difference is even larger (figure: V100 benchmark with clip_to_circle=True).
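
For reference, the flag mentioned above would be passed at construction time, roughly as follows. The clip_to_circle keyword name is taken from the benchmark text; the rest of the setup is an assumed sketch, not code from this README.

import numpy as np
import torch
from torch_radon import Radon

image_size, n_angles = 128, 180
angles = np.linspace(0, np.pi, n_angles, endpoint=False)
x = torch.randn(4, image_size, image_size, device="cuda")

# Consider only the part of the image inside the inscribed circle.
radon = Radon(image_size, angles, clip_to_circle=True)
sinogram = radon.forward(x)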

These results also hold on a cheap laptop GPU (figure: GTX 1650 benchmark).

Cite

If you are using TorchRadon in your research, please cite the following paper:

@article{torch_radon,
  author  = {Matteo Ronchetti},
  title   = {TorchRadon: Fast Differentiable Routines for Computed Tomography},
  year    = {2020},
  eprint  = {arXiv:2009.14788},
  journal = {arXiv preprint arXiv:2009.14788},
}

Testing

Install the testing dependencies with pip install -r test_requirements.txt, then run the tests with:

nosetests tests/