
DeepTrack2 - A comprehensive deep learning framework for digital microscopy.


Installation · Getting Started · Examples · Advanced Tutorials · Developer Tutorials · Cite us · License

DeepTrack2 is a modular Python library for generating, manipulating, and analyzing image data pipelines for machine learning and experimental imaging.

TensorFlow Compatibility Notice: DeepTrack2 versions 2.0 and later do not support TensorFlow. If you need TensorFlow support, please install the legacy version 1.7.
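For example, assuming the legacy 1.x releases remain published on PyPI under the same package name, you can pin the version at install time:

pip install "deeptrack<2"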

This quick start guide walks complete beginners through using DeepTrack2, from installation to training your first model. Let's get started!

Installation

DeepTrack2 2.0 requires Python 3.9 or later.

To install DeepTrack2, open a terminal or command prompt and run:

pip install deeptrack

or

python -m pip install deeptrack

This will automatically install the required dependencies.
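To verify the installation, you can import the package and print its version from Python (a minimal check; the __version__ attribute is assumed to be exposed, as is common for packages on PyPI):

import deeptrack as dt
print(dt.__version__)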

Getting Started

Here you will find a series of notebooks that give you an overview of the core features of DeepTrack2 and how to use them.
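As a quick taste of what these notebooks cover, the sketch below assembles a toy image pipeline: a point-like particle imaged through simulated fluorescence optics. This is a minimal sketch based on the dt.PointParticle and dt.Fluorescence features described in the documentation; treat the exact parameters as illustrative assumptions rather than a definitive recipe.

import deeptrack as dt

# A point scatterer placed near the centre of the field of view (pixel coordinates).
particle = dt.PointParticle(position=(32, 32))

# A simulated fluorescence microscope that images whatever it wraps.
optics = dt.Fluorescence(output_region=(0, 0, 64, 64))

# Chaining optics and particle yields an image pipeline;
# update() resamples any random properties before the call resolves the image.
pipeline = optics(particle)
image = pipeline.update()()
print(image.shape)  # e.g. (64, 64, 1)

Each time the pipeline is updated and called, a fresh image is generated, which is how synthetic training data is typically produced in DeepTrack2.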

Examples

These are examples of how DeepTrack2 can be used on real datasets:

Specific examples for label-free particle tracking using LodeSTAR:

Specific examples for graph-neural-network-based particle linking and trace characterization using MAGIK:

Advanced Tutorials

Developer Tutorials

Here you will find a series of notebooks tailored for DeepTrack2's developers:

Documentation

The detailed documentation of DeepTrack2 is available at the following link: https://deeptrackai.github.io/DeepTrack2

Cite us!

If you use DeepTrack2 in your project, please cite us:

https://pubs.aip.org/aip/apr/article/8/1/011310/238663

"Quantitative Digital Microscopy with Deep Learning."
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).

See also:

https://nostarch.com/deep-learning-crash-course

Deep Learning Crash Course
Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, Carlo Manzo & Giovanni Volpe.
2025, No Starch Press (San Francisco, CA)
ISBN-13: 9781718503922

https://www.nature.com/articles/s41467-022-35004-y

"Single-shot self-supervised object detection in microscopy." 
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt & Giovanni Volpe
Nature Communications, volume 13, article number 7492 (2022).

https://www.nature.com/articles/s42256-022-00595-0

"Geometric deep learning reveals the spatiotemporal fingerprint ofmicroscopic motion."
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe & Carlo Manzo
Nature Machine Intelligence volume 5, pages 71–82 (2023).

https://doi.org/10.1364/OPTICA.6.000506

"Digital video microscopy enhanced by deep learning."
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506–513 (2019).

Funding

This work was supported by the ERC Starting Grant ComplexSwimmers (Grant No. 677511), the ERC Starting Grant MAPEI (Grant No. 101001267), and the Knut and Alice Wallenberg Foundation.