An attempt at a PyTorch implementation of Unsupervised Attention-guided Image-to-Image Translation.
The architecture uses an attention module to identify the foreground (salient) regions of an image, so that translation is applied only to those regions while the background is left unchanged.
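The idea above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the repo's actual modules: the attention network predicts a soft mask a(s) in [0, 1], and the output blends the generator's translation with the original input, o = a(s) * G(s) + (1 - a(s)) * s. The class and the toy stand-in networks below are hypothetical.

```python
import torch
import torch.nn as nn

class AttentionGuidedTranslator(nn.Module):
    """Sketch of attention-guided translation: translate only the masked foreground."""

    def __init__(self, generator: nn.Module, attention: nn.Module):
        super().__init__()
        self.generator = generator
        self.attention = attention

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        a = self.attention(s)           # soft foreground mask, shape (N, 1, H, W)
        translated = self.generator(s)  # full-image translation
        # Blend: translated foreground + untouched background.
        return a * translated + (1 - a) * s

# Toy stand-ins just to show the data flow (real networks are deeper).
gen = nn.Conv2d(3, 3, kernel_size=3, padding=1)
attn = nn.Sequential(nn.Conv2d(3, 1, kernel_size=3, padding=1), nn.Sigmoid())
model = AttentionGuidedTranslator(gen, attn)

out = model(torch.randn(2, 3, 64, 64))
print(out.shape)  # same shape as the input batch
```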
Some of the results shown in the paper:
Download dataset
bash datasets/download_datasets.sh <cyclegan dataset argument>
Train
python train.py
optional arguments
--resume <checkpoint path to resume from>
--dataroot <root directory of the dataset images>
--LRgen <learning rate of the generator>
--LRdis <learning rate of the discriminator>
--LRattn <learning rate of the attention module>
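A possible invocation combining the flags above (the dataset path and learning-rate values here are illustrative, not the repo's defaults):

```shell
python train.py --dataroot ./datasets/horse2zebra \
    --LRgen 1e-4 --LRdis 1e-4 --LRattn 1e-4
```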