
Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation

MMLab@NTU affiliated with S-Lab, Nanyang Technological University
In ICCV 2023.

📃Paper | 🌐Project Page | 📂Anime Scene Dataset | 🤗Demo



Updates

  • [11/2023] Training code is available.
  • [08/2023] Integrated to Hugging Face. Enjoy the web demo!
  • [08/2023] Inference code and dataset are released.
  • [08/2023] Project page is built.
  • [07/2023] The paper is accepted to ICCV 2023!

🔧 Installation

  1. Clone this repo:
    git clone https://github.com/Yuxinn-J/Scenimefy.git
    cd Scenimefy
  2. Install dependent packages: After installing Anaconda, create a new Conda environment using conda env create -f Semi_translation/environment.yml.
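If you are new to Conda, a minimal setup sketch is shown below. The environment name used here is a placeholder; check the name: field in Semi_translation/environment.yml for the actual value.

    # Create the environment from the provided spec, then activate it.
    # NOTE: "scenimefy" is an assumed name — replace it with the one defined in environment.yml.
    conda env create -f Semi_translation/environment.yml
    conda activate scenimefy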

⚡ Quick Inference

Two options are provided: a Python script and a Gradio demo.

Python script

  • Download the pre-trained model Shinkai_net_G.pth:

    wget https://github.com/Yuxinn-J/Scenimefy/releases/download/v0.1.0/Shinkai_net_G.pth -P Semi_translation/pretrained_models/shinkai-test/
  • Inference! Simply run the following command, or refer to ./Semi_translation/script/test.sh for detailed usage:

    cd Semi_translation
    
    python test.py --dataroot ./datasets/Sample --name shinkai-test --CUT_mode CUT  --model cut --phase test --epoch Shinkai --preprocess none
    • Results will be saved in ./Semi_translation/results/shinkai-test/ by default.
    • To prepare your own test images, follow the data folder structure in ./Semi_translation/datasets/Sample and place your test images in testA (a minimal sketch follows).
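A minimal sketch for preparing a custom test set is shown below. The folder name MyPhotos is hypothetical; only the testA subfolder is expected by the sample layout.

    # Mirror the ./Semi_translation/datasets/Sample structure with your own images.
    mkdir -p ./datasets/MyPhotos/testA
    cp /path/to/your/photos/*.jpg ./datasets/MyPhotos/testA/
    # Point the test script at the new data root.
    python test.py --dataroot ./datasets/MyPhotos --name shinkai-test --CUT_mode CUT \
      --model cut --phase test --epoch Shinkai --preprocess none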

Gradio demo

  • We provide a UI for testing Scenimefy, built with Gradio. To launch the demo, run the following commands in your terminal:
    git clone https://huggingface.co/spaces/YuxinJ/Scenimefy
    cd Scenimefy
    pip install -r requirements.txt
    pip install gradio
    python app.py
    
  • This demo is also hosted on Hugging Face🤗.

🚋 Quick I2I Train

Dataset Preparation

  • LHQ dataset: a dataset of 90,000 nature landscape images [download link]. Place it in ./datasets/unpaired_s2a and rename the folder trainA (see the directory sketch after this list).
  • Anime dataset: 5,958 Shinkai-style anime scene images. Please follow the instructions in Anime_dataset/README.md. Place it in ./datasets/unpaired_s2a and rename the folder trainB.
  • Pseudo-paired dataset: 30,000 synthetic pseudo-paired images generated from StyleGAN with the same seed. You may fine-tune your own StyleGAN or use our provided data [download link] for a quick start. Place them in ./datasets/pair_s2a.
  • Create your own dataset
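Assuming the datasets above have already been downloaded, the expected layout can be sketched as follows. Paths are relative to Semi_translation/, and any subfolder structure inside pair_s2a should follow the provided download.

    # Sketch of the training data layout expected by train.py.
    mkdir -p ./datasets/unpaired_s2a/trainA   # LHQ nature landscape photos
    mkdir -p ./datasets/unpaired_s2a/trainB   # Shinkai-style anime scene images
    mkdir -p ./datasets/pair_s2a              # StyleGAN-generated pseudo-paired data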

Training

Refer to the ./Semi_translation/script/train.sh file, or use the following command:

python train.py --name exp_shinkai --CUT_mode CUT --model semi_cut \
  --dataroot ./datasets/unpaired_s2a --paired_dataroot ./datasets/pair_s2a \
  --checkpoints_dir ./pretrained_models \
  --dce_idt --lambda_VGG -1 --lambda_NCE_s 0.05 \
  --use_curriculum --gpu_ids 0
  • If the anime dataset quality is low, consider adding a global perceptual loss to maintain content consistency, e.g., set --lambda_VGG 0.2, as shown below.
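For example, a run with the global perceptual loss enabled might look like the following; it is identical to the command above except for the --lambda_VGG value:

    python train.py --name exp_shinkai --CUT_mode CUT --model semi_cut \
      --dataroot ./datasets/unpaired_s2a --paired_dataroot ./datasets/pair_s2a \
      --checkpoints_dir ./pretrained_models \
      --dce_idt --lambda_VGG 0.2 --lambda_NCE_s 0.05 \
      --use_curriculum --gpu_ids 0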

🏁 Start From Scratch

StyleGAN Finetuning [TODO]

Segmentation Selection

📂 Anime Scene Dataset

The Anime Scene Dataset is a high-quality anime scene dataset comprising 5,958 images with the following features:

  • High-resolution (1080×1080)
  • Shinkai-style (from 9 Makoto Shinkai films)
  • Pure anime scenes: manually curated to eliminate irrelevant and low-quality images

In compliance with copyright regulations, we cannot directly release the anime images. However, you can conveniently prepare the dataset following instructions here.

🤟 Citation

If you find this work useful for your research, please consider citing our paper:

@inproceedings{jiang2023scenimefy,
  title={Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation},
  author={Jiang, Yuxin and Jiang, Liming and Yang, Shuai and Loy, Chen Change},
  booktitle={ICCV},
  year={2023}
}

🤗 Acknowledgments

Our code is mainly developed based on Cartoon-StyleGAN and Hneg_SRC. We also thank Facebook Research for their contribution of Mask2Former.

🗞️ License

Distributed under the S-Lab License. See LICENSE.md for more information.
