Lumina-T2X is a unified framework for Text to Any Modality Generation
SOTA Text-to-music (TTM) Generation (OpenMusic)
[ICCV 2023] Efficient Diffusion Training via Min-SNR Weighting Strategy
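The Min-SNR strategy weights each training sample's diffusion loss by its timestep's signal-to-noise ratio, clamped at a threshold γ, so that easy low-noise steps don't dominate training. A minimal sketch for epsilon-prediction, assuming a precomputed `alphas_cumprod` schedule and the paper's default γ = 5 (function name is illustrative, not the repo's API):

```python
import torch

def min_snr_weights(alphas_cumprod, timesteps, gamma=5.0):
    """Per-sample loss weights from the Min-SNR weighting strategy.

    SNR(t) = alpha_bar_t / (1 - alpha_bar_t). For epsilon-prediction the
    weight is min(SNR, gamma) / SNR, which equals 1 at noisy timesteps
    (SNR <= gamma) and down-weights easy, low-noise ones (SNR > gamma).
    """
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    return torch.minimum(snr, torch.full_like(snr, gamma)) / snr
```

The returned weights multiply the per-sample MSE before averaging, leaving the rest of the training loop unchanged.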
🔥🔥🔥Official Codebase of "DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation"
The official implementation of "CAME: Confidence-guided Adaptive Memory Optimization"
Implementation of the Diffusion Transformer model in PyTorch
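The core unit these implementations share is the DiT block: a standard transformer block whose LayerNorms are modulated by the timestep/class embedding via adaLN-Zero, with zero-initialized gates so each block starts as the identity. A minimal sketch (class and layer names are illustrative, not any listed repo's API):

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Minimal DiT block sketch with adaLN-Zero conditioning.

    The conditioning vector `c` (timestep + class embedding) is mapped to
    per-block shift, scale, and gate parameters; the gate projections are
    zero-initialized so the block is an identity map at initialization.
    """
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.adaLN = nn.Linear(dim, 6 * dim)
        nn.init.zeros_(self.adaLN.weight)
        nn.init.zeros_(self.adaLN.bias)

    def forward(self, x, c):
        # One projection of c yields all six modulation tensors.
        s1, b1, g1, s2, b2, g2 = self.adaLN(c).unsqueeze(1).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + s1) + b1
        x = x + g1 * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + s2) + b2
        x = x + g2 * self.mlp(h)
        return x
```

Because the gates start at zero, a freshly initialized stack of these blocks passes inputs through unchanged, which stabilizes early training.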
FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling.
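The idea behind this kind of caching is that transformer-block activations change slowly between adjacent denoising steps, so a block's output can be recomputed only every few steps and reused in between. A minimal wrapper sketch of the concept (assumed interface, not FORA's actual code):

```python
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Sketch of step-interval caching around a transformer sub-block.

    The wrapped block is recomputed only when `step` is a multiple of
    `interval`; intermediate steps reuse the cached residual update,
    trading a small approximation error for fewer forward passes.
    """
    def __init__(self, block, interval=3):
        super().__init__()
        self.block = block
        self.interval = interval
        self._cache = None

    def forward(self, x, step):
        if self._cache is None or step % self.interval == 0:
            self._cache = self.block(x)  # refresh the cached update
        return x + self._cache  # residual connection around the sub-block
```

In practice the cache interval is a quality/speed knob: larger intervals skip more forward passes per sampling trajectory at the cost of fidelity.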
Implementation of the Latent Diffusion Transformer model in TensorFlow / Keras
This repo implements Diffusion Transformers (DiT) in PyTorch and provides training and inference code on the CelebHQ dataset
A repo containing a modified version of the Diffusion Transformer
PyTorch and JAX implementations of Scalable Diffusion Models with Transformers (DiT)
A diffusion transformer implementation in Flax