Installation • Getting Started • Examples • Advanced Tutorials • Developer Tutorials • Cite us • License
DeepTrack2 is a modular Python library for building pipelines that generate, manipulate, and analyze image data for machine learning and experimental imaging.
TensorFlow Compatibility Notice: DeepTrack2 versions 2.0 and later do not support TensorFlow. If you need TensorFlow support, please install the legacy version 1.7.
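To stay on the legacy TensorFlow-compatible series, you can, for example, pin the version with pip (this assumes the 1.x releases remain available on PyPI under the same package name):

pip install "deeptrack<2"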
The following quick-start guide walks complete beginners through DeepTrack2, from installation to training a first model. Let's get started!
DeepTrack2 2.0 requires Python 3.9 or later.
To install DeepTrack2, open a terminal or command prompt and run:
pip install deeptrack
or
python -m pip install deeptrack
This will automatically install the required dependencies.
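You can verify the installation by importing the package and printing its version. A minimal check (the PyPI package "deeptrack" is imported under the same name; importlib.metadata is part of the standard library):

```python
# Minimal check that DeepTrack2 installed correctly.
from importlib.metadata import version

import deeptrack as dt  # Will raise ImportError if the installation failed.

print(version("deeptrack"))  # Prints the installed DeepTrack2 version.
```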
Here you find a series of notebooks that give you an overview of the core features of DeepTrack2 and how to use them:
- DTGS101 Introduction to DeepTrack2: overview of how to use DeepTrack2, creating images by combining DeepTrack2 features, extracting properties, and using them to train a neural network (a minimal pipeline sketch follows this list).
- DTGS111 Loading Image Files Using Sources: using sources to load image files and to train a neural network.
- DTGS121 Tracking a Point Particle with a CNN: tracking a point particle with a convolutional neural network (CNN) trained on simulated images.
- DTGS131 Tracking Multiple Particles with a U-Net: tracking multiple particles using a U-Net trained on simulated images.
- DTGS141 Distinguishing Particles of Different Sizes with a U-Net: tracking and distinguishing particles of different sizes in brightfield microscopy using a U-Net trained on simulated images.
- DTGS151 Unsupervised Object Detection: single-shot unsupervised object detection using LodeSTAR.
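As a preview of what these notebooks cover, below is a minimal sketch of a DeepTrack2 simulation pipeline: a fluorescent point particle imaged by a simulated microscope, with its position read back as a training label. The class names (PointParticle, Fluorescence) follow DeepTrack2's public API; the specific parameter values are illustrative assumptions.

```python
import numpy as np
import deeptrack as dt

# A point scatterer at a random position inside a 64x64 field of view.
particle = dt.PointParticle(
    position=lambda: np.random.uniform(10, 54, size=2),
    intensity=100,
)

# A simulated fluorescence microscope (illustrative optical parameters).
optics = dt.Fluorescence(
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)

# Imaging the particle through the optics yields a feature pipeline.
pipeline = optics(particle)

# update() resamples all random properties, so each call produces a new
# training image; the ground-truth position can be read back as a label.
image = pipeline.update()()
position = image.get_property("position")
```

Because every call to update() draws fresh property values, the same pipeline serves as an endless generator of labeled training data.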
These are examples of how DeepTrack2 can be used on real datasets:
- DTEx201 MNIST: training a fully connected neural network to identify handwritten digits using the MNIST dataset.
- DTEx202 Single Particle Tracking: tracking experimental videos of a single particle (requires opencv-python compiled with FFmpeg).
- DTEx203 Multi-Particle Tracking: detecting quantum dots in a low-SNR image.
- DTEx204 Particle Feature Extraction: extracting the radius and refractive index of particles (see the sketch after this list).
- DTEx205 Cell Counting: counting the number of cells in fluorescence images.
- DTEx206 3D Multi-Particle Tracking: tracking multiple particles in 3D for holography.
- DTEx207 GAN Image Generation: using a GAN to create cell images from masks.
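For instance, the idea behind DTEx204 can be sketched as follows: simulate Mie scatterers whose radius and refractive index are random properties, and read those properties back as regression targets. MieSphere and Brightfield are part of DeepTrack2's public API; the parameter ranges here are illustrative assumptions.

```python
import numpy as np
import deeptrack as dt

# A Mie scatterer with randomly sampled radius and refractive index.
particle = dt.MieSphere(
    position=(32, 32),
    radius=lambda: np.random.uniform(50e-9, 200e-9),
    refractive_index=lambda: np.random.uniform(1.40, 1.60),
)

# A simulated brightfield microscope (illustrative optical parameters).
optics = dt.Brightfield(
    NA=0.7,
    wavelength=660e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)

pipeline = optics(particle)

# The sampled properties become the regression targets for a network.
image = pipeline.update()()
radius = image.get_property("radius")
refractive_index = image.get_property("refractive_index")
```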
Specific examples for label-free particle tracking using LodeSTAR:
- DTEx231A LodeSTAR Autotracker Template
- DTEx231C LodeSTAR Measuring the Mass of Particles in Holography
- DTEx231D LodeSTAR Detecting the Cells in the BF-C2DL-HSC Dataset
- DTEx231E LodeSTAR Detecting the Cells in the Fluo-C2DL-Huh7 Dataset
- DTEx231F LodeSTAR Detecting the Cells in the PhC-C2DL-PSC Dataset
- DTEx231G LodeSTAR Detecting Plankton
- DTEx231H LodeSTAR Detecting in 3D Holography
- DTEx231J LodeSTAR Measuring the Mass of Cells
Specific examples for graph-neural-network-based particle linking and trace characterization using MAGIK:
- DTEx241A MAGIK Tracing Migrating Cells
- DTEx241B MAGIK to Track HeLa Cells
Here you find a series of notebooks describing the individual DeepTrack2 modules in detail (a short sketch combining several of them follows this list):
- DTAT301 deeptrack.features
- DTAT306 deeptrack.properties
- DTAT311 deeptrack.image
- DTAT321 deeptrack.scatterers
- DTAT323 deeptrack.optics
- DTAT324 deeptrack.holography
- DTAT325 deeptrack.aberrations
- DTAT327 deeptrack.noises
- DTAT329 deeptrack.augmentations
- DTAT341 deeptrack.sequences
- DTAT381 deeptrack.math
- DTAT383 deeptrack.utils
- DTAT385 deeptrack.statistics
- DTAT387 deeptrack.types
- DTAT389 deeptrack.elementwise
- DTAT391A deeptrack.sources.base
- DTAT391B deeptrack.sources.folder
- DTAT391C deeptrack.sources.rng
- DTAT393A deeptrack.pytorch.data
- DTAT393B deeptrack.pytorch.features
- DTAT395 deeptrack.extras.radialcenter
- DTAT399A deeptrack.backend.core
- DTAT399B deeptrack.backend.pint_definition
- DTAT399C deeptrack.backend.units
- DTAT399D deeptrack.backend.polynomials
- DTAT399E deeptrack.backend.mie
- DTAT399F deeptrack.backend._config
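As a hint of how these modules compose, the sketch below chains a scatterer, optics, a noise, and a normalization into one pipeline with the >> operator. Gaussian (from deeptrack.noises) and NormalizeMinMax (from deeptrack.math) are part of the public API; the parameter values are illustrative assumptions.

```python
import deeptrack as dt

particle = dt.PointParticle(position=(32, 32), intensity=100)  # deeptrack.scatterers

optics = dt.Fluorescence(  # deeptrack.optics
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)

# Features chain with ">>": image the particle, add Gaussian noise,
# then rescale the result to [0, 1].
pipeline = (
    optics(particle)
    >> dt.Gaussian(sigma=0.05)      # deeptrack.noises
    >> dt.NormalizeMinMax(0, 1)     # deeptrack.math
)

image = pipeline.update()()
```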
Here you find a series of notebooks tailored for DeepTrack2's developers:
- DTDV401 Overview of Code Base
- DTDV411 Style Guide
The detailed documentation of DeepTrack2 is available at the following link: https://deeptrackai.github.io/DeepTrack2
If you use DeepTrack2 in your project, please cite us:
https://pubs.aip.org/aip/apr/article/8/1/011310/238663
"Quantitative Digital Microscopy with Deep Learning."
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).
See also:
https://nostarch.com/deep-learning-crash-course
Deep Learning Crash Course
Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, Carlo Manzo & Giovanni Volpe.
No Starch Press, San Francisco, CA (2025).
ISBN-13: 9781718503922
https://www.nature.com/articles/s41467-022-35004-y
"Single-shot self-supervised object detection in microscopy."
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt & Giovanni Volpe.
Nature Communications, volume 13, article number 7492 (2022).
https://www.nature.com/articles/s42256-022-00595-0
"Geometric deep learning reveals the spatiotemporal fingerprint ofmicroscopic motion."
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe & Carlo Manzo
Nature Machine Intelligence volume 5, pages 71–82 (2023).
https://doi.org/10.1364/OPTICA.6.000506
"Digital video microscopy enhanced by deep learning."
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506–513 (2019).
This work was supported by the ERC Starting Grant ComplexSwimmers (Grant No. 677511), the ERC Starting Grant MAPEI (101001267), and the Knut and Alice Wallenberg Foundation.