This project implements a Conditional DCGAN using PyTorch to generate synthetic images conditioned on class labels. The implementation is tested on the BreastMNIST dataset, part of the MedMNIST collection.
- Uses a conditional GAN architecture to generate images conditioned on class labels (a generator sketch follows this list).
- Efficient training with GPU acceleration.
- Integration with MedMNIST for medical image synthesis.
- Visualization of generated images and training loss metrics.
- Customizable training parameters and model architecture.
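As a rough illustration of the conditional architecture, the sketch below shows one common way a conditional DCGAN generator can combine a latent noise vector with an embedded class label. The class and parameter names (`Generator`, `latent_dim`, `feature_maps`) are illustrative and may differ from the code in this repository.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Illustrative conditional DCGAN generator for 28x28 grayscale images."""
    def __init__(self, latent_dim=100, num_classes=2, feature_maps=64):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            # project (noise + label embedding) to a 7x7 feature map
            nn.ConvTranspose2d(latent_dim + num_classes, feature_maps * 2, 7, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # 7x7 -> 14x14
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # 14x14 -> 28x28, pixel values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, 1, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, noise, labels):
        # concatenate the noise vector with the label embedding, then reshape to (B, C, 1, 1)
        cond = self.label_emb(labels)
        x = torch.cat([noise, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)
```

Concatenating the label embedding with the noise vector before the first transposed convolution is one standard conditioning strategy; the discriminator can condition on labels in a similar way.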
- Clone the repository from GitHub.
- Navigate to the project directory.
- Install the required dependencies listed in the `requirements.txt` file.
The project uses the BreastMNIST dataset, part of the MedMNIST collection. It consists of breast ultrasound images with binary labels (malignant vs. normal/benign).
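A minimal sketch of loading BreastMNIST with the `medmnist` package and scaling pixels to [-1, 1] to match a Tanh generator output; the batch size and transform are illustrative, and the repository's data pipeline may differ.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import transforms
from medmnist import BreastMNIST  # pip install medmnist

# Scale pixels to [-1, 1], the usual range for a DCGAN generator with Tanh output.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])

train_dataset = BreastMNIST(split="train", transform=transform, download=True)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # e.g. torch.Size([64, 1, 28, 28]) torch.Size([64, 1])
```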
Run the training script to start training. It trains the generator and discriminator jointly, reports the loss for each epoch, and saves model checkpoints.
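For orientation, here is a minimal sketch of one conditional GAN training loop using the standard binary cross-entropy objective. `G`, `D`, `opt_g`, `opt_d`, `latent_dim`, `num_epochs`, and `device` are assumed to come from the project's setup, and the actual script may differ in details such as checkpoint naming.

```python
import torch
import torch.nn as nn

# Assumes D(images, labels) returns a sigmoid probability of shape (batch_size, 1).
criterion = nn.BCELoss()

for epoch in range(num_epochs):
    for real_images, labels in train_loader:
        real_images = real_images.to(device)
        labels = labels.squeeze().long().to(device)   # MedMNIST labels arrive as (B, 1)
        batch_size = real_images.size(0)
        real_targets = torch.ones(batch_size, 1, device=device)
        fake_targets = torch.zeros(batch_size, 1, device=device)

        # --- Discriminator step: real images vs. generated images ---
        noise = torch.randn(batch_size, latent_dim, device=device)
        fake_images = G(noise, labels)
        d_loss = criterion(D(real_images, labels), real_targets) + \
                 criterion(D(fake_images.detach(), labels), fake_targets)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # --- Generator step: try to fool the discriminator ---
        g_loss = criterion(D(fake_images, labels), real_targets)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    print(f"Epoch {epoch + 1}: D loss {d_loss.item():.4f}, G loss {g_loss.item():.4f}")
    torch.save(G.state_dict(), f"generator_epoch_{epoch + 1}.pt")
```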
After training, use the generator model to synthesize new images conditioned on specific class labels. This demonstrates the model's ability to generate diverse and realistic images based on the learned data distribution.
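A sketch of sampling class-conditioned images from a saved generator; the checkpoint filename and the `Generator` signature follow the illustrative sketches above and are not necessarily those produced by this repository.

```python
import torch

# Load an assumed checkpoint and sample 16 images of a chosen class.
G = Generator(latent_dim=100, num_classes=2)
G.load_state_dict(torch.load("generator_epoch_50.pt", map_location="cpu"))
G.eval()

target_class = 1                                   # e.g. the "normal/benign" label
noise = torch.randn(16, 100)
labels = torch.full((16,), target_class, dtype=torch.long)
with torch.no_grad():
    samples = G(noise, labels)                     # shape (16, 1, 28, 28), values in [-1, 1]
```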
The generated images can be evaluated qualitatively by visual inspection and compared with real images to assess the generative model's performance.
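One simple way to do this side-by-side inspection with matplotlib; the variable names (`samples`, `real_images`) follow the sketches above and should be adapted to the repository's outputs.

```python
import matplotlib.pyplot as plt

# Show real images (top row) next to generated samples (bottom row).
fig, axes = plt.subplots(2, 8, figsize=(12, 3))
for i in range(8):
    axes[0, i].imshow((real_images[i, 0] * 0.5 + 0.5).cpu(), cmap="gray")  # undo [-1, 1] scaling
    axes[0, i].set_title("real")
    axes[1, i].imshow((samples[i, 0] * 0.5 + 0.5).cpu(), cmap="gray")
    axes[1, i].set_title("generated")
    axes[0, i].axis("off")
    axes[1, i].axis("off")
plt.tight_layout()
plt.show()
```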
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to the creators of the MedMNIST dataset for providing the medical images used in training and testing the model.
For more information and to contribute, please refer to the official repository.