I'm a career software engineer with a computer science background, but I don't work with AI/ML in my daily duties. This repository takes concepts, models, and code from papers and around the web and re-implements them using the techniques and best practices I've picked up over the years; hopefully I'll learn some things in the process. If you're like me and learned to code long before you learned about ML, this repo could help you, too.
All training and measurements were performed on a single desktop with an RTX 3090 Ti.
| Model | Purpose | Paper |
|---|---|---|
| Rectified Flow | Image Synthesis | Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, Esser et al. (2024) |
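The core idea in the paper is to learn a velocity field along straight paths between noise and data, then integrate that field at sampling time. Below is a minimal sketch of the objective and an Euler sampler, not this repository's actual code; it assumes a PyTorch model with signature `model(x_t, t)` that predicts velocity for NCHW image batches, and the convention that t = 0 is noise and t = 1 is data (implementations differ on this direction).

```python
# Minimal sketch of the rectified flow objective and Euler sampler.
# Assumptions (not from this repo): model(x_t, t) predicts velocity for
# NCHW image batches; t = 0 is pure noise, t = 1 is data.
import torch


def rectified_flow_loss(model, x1):
    """MSE between predicted velocity and the straight-line velocity x1 - x0."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.shape[0], device=x1.device)  # one timestep per sample
    t_ = t.view(-1, 1, 1, 1)
    xt = (1 - t_) * x0 + t_ * x1                   # linear interpolation
    v_target = x1 - x0                             # constant along a straight path
    return torch.mean((model(xt, t) - v_target) ** 2)


@torch.no_grad()
def sample(model, shape, steps=50, device="cuda"):
    """Integrate dx/dt = v(x, t) from noise (t=0) to data (t=1) with Euler steps."""
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + model(x, t) * dt
    return x
```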
Assuming Python and CUDA are installed, you can install the necessary packages via `pip install -r requirements.txt`. You will also need ffmpeg installed.
Models and optimizers are available on HuggingFace.
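The exact checkpoint locations aren't given here, so the snippet below is only a hedged sketch of fetching one with the `huggingface_hub` client; `REPO_ID` and the filename are hypothetical placeholders, not the real paths.

```python
# Hypothetical sketch: REPO_ID and the filename are placeholders, not this
# repository's actual HuggingFace locations.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(repo_id="REPO_ID", filename="rectified_flow.pt")
state_dict = torch.load(ckpt_path, map_location="cpu")
```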
This should be run from the root of the repository:

```
python -m yarr.trainers.rectified_flow
```
Note: Weights and Biases is a freemium service for monitoring AI/ML training runs; using it will let you see details and samples throughout training. Pass your team name via `--wandb-entity` to enable it.
```
Usage: python -m yarr.trainers.rectified_flow [OPTIONS]

  Train a Rectified Flow model on either MNIST, Fashion-MNIST, CIFAR-10, or
  CIFAR-100.

Options:
  -lr, --learning-rate FLOAT  Learning rate for the optimizer.  [default: 0.001]
  -e, --num-epochs INTEGER    Number of epochs to train the model.  [default: 100]
  -w, --num-workers INTEGER   Number of workers for the data loader.  [default: 4]
  -b, --batch-size INTEGER    Batch size for training.  [default: 250]
  --wandb-entity TEXT         Weights and Biases entity.
  --resume                    Resume training from the latest checkpoint.
  --fashion-mnist             Use Fashion MNIST dataset instead of MNIST.
  --cifar-10                  Use CIFAR-10 dataset instead of MNIST.
  --cifar-100                 Use CIFAR-100 dataset instead of MNIST.
  --help                      Show this message and exit.
```
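As an illustrative combination of the flags above (the epoch count and `my-team` entity are placeholders), training on CIFAR-10 for 200 epochs with Weights and Biases logging would look like:

```
python -m yarr.trainers.rectified_flow --cifar-10 -e 200 --wandb-entity my-team
```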
All code in this repository is released into the public domain, with no guarantees or warranty, under the Unlicense.
While this repository is primarily for my own use and likely won't be useful as a library, I welcome any recommendations for other experiments to conduct or ways my implementations could be improved.
Thanks to Simo Ryu (cloneofsimo) and minRF for the inspiration and reference implementation.
```bibtex
@misc{ryu2024minrf,
  author    = {Simo Ryu},
  title     = {minRF: Minimal Implementation of Scalable Rectified Flow Transformers},
  year      = {2024},
  publisher = {Github},
  url       = {https://github.com/cloneofsimo/minRF},
}
```