PyTorch implementation of Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision
Figure 1: Illustration of the proposed joint learning framework.
Figure 2: Example data pairs from the ZRR and SR-RAW datasets, where clear spatial misalignment can be observed along the reference line. With such inaccurately aligned training data, PyNet [22] and Zhang et al. [62] are prone to producing blurry results with spatial misalignment, while our results are well aligned with the input.
Prerequisites
- Python 3.x and PyTorch 1.6.
- OpenCV, NumPy, Pillow, CuPy, colour_demosaicing, tqdm, lpips, scikit-image and tensorboardX.
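A minimal setup sketch, assuming the usual PyPI package names for the prerequisites above (in particular, CuPy often requires a CUDA-matched wheel such as `cupy-cuda102` rather than plain `cupy`; adjust to your environment):

```bash
# Assumed PyPI package names for the listed prerequisites.
# CuPy may need a CUDA-specific wheel (e.g., cupy-cuda102) instead of "cupy".
pip install torch==1.6.0 opencv-python numpy Pillow cupy \
    colour-demosaicing tqdm lpips scikit-image tensorboardX
```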
Dataset
- Zurich RAW to RGB dataset. It can also be downloaded from Baidu Netdisk.
- Preprocessed SR-RAW dataset. Note that we preprocessed the original SR-RAW dataset according to the code. You can also download the original SR-RAW dataset here.
- The pre-trained models can be downloaded. You need to put them in the `RAW-to-sRGB/ckpt/` folder.
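For example, a minimal sketch of the expected layout (the checkpoint file names themselves are not specified in this README):

```bash
# Create the checkpoint folder and unpack the downloaded models into it.
mkdir -p RAW-to-sRGB/ckpt
# e.g., RAW-to-sRGB/ckpt/<experiment_name>/...  (hypothetical subfolder names)
```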
Training

- Zurich RAW to RGB dataset
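The original run command is not preserved in this README, so the following is only a hedged sketch: it assumes a CycleGAN-style `train.py` entry point (this repo is built upon the CycleGAN framework) and hypothetical `--dataroot`/`--name` arguments; only `--gpu_ids` is documented below. Check the options for the actual flags.

```bash
# Hypothetical training invocation for the Zurich RAW to RGB dataset.
# train.py, --dataroot, and --name are assumptions borrowed from the
# CycleGAN framework; --gpu_ids is documented in this README.
python train.py \
    --dataroot /path/to/Zurich-RAW-to-DSLR \
    --name zrr_joint \
    --gpu_ids 0,1
```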
- SR-RAW Dataset
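Similarly, a hedged sketch for the preprocessed SR-RAW dataset, under the same assumptions as above:

```bash
# Hypothetical training invocation for the preprocessed SR-RAW dataset.
python train.py \
    --dataroot /path/to/SR-RAW \
    --name srraw_joint \
    --gpu_ids 0,1
```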
Testing

- Zurich RAW to RGB dataset
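Again only a sketch, assuming a CycleGAN-style `test.py` entry point and that the pre-trained models sit in `RAW-to-sRGB/ckpt/` as described above:

```bash
# Hypothetical evaluation invocation for the Zurich RAW to RGB dataset,
# loading a pre-trained model from ckpt/ by its (assumed) experiment name.
python test.py \
    --dataroot /path/to/Zurich-RAW-to-DSLR \
    --name zrr_joint \
    --gpu_ids 0
```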
- SR-RAW Dataset
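And the corresponding sketch for the SR-RAW dataset, under the same assumptions:

```bash
# Hypothetical evaluation invocation for the preprocessed SR-RAW dataset.
python test.py \
    --dataroot /path/to/SR-RAW \
    --name srraw_joint \
    --gpu_ids 0
```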
- You can specify which GPUs to use with `--gpu_ids`, e.g., `--gpu_ids 0,1`, `--gpu_ids 3`, or `--gpu_ids -1` (for CPU mode). In the default setting, all GPUs are used.
- You can refer to the options for more arguments.
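For example, to run the hypothetical evaluation command sketched above on the CPU:

```bash
# --gpu_ids -1 selects CPU mode; omit --gpu_ids entirely to use all GPUs.
python test.py --dataroot /path/to/Zurich-RAW-to-DSLR --name zrr_joint --gpu_ids -1
```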
If you find this work useful in your research, please consider citing:
@inproceedings{RAW-to-sRGB,
title={Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision},
author={Zhang, Zhilu and Wang, Haolin and Liu, Ming and Wang, Ruohao and Zuo, Wangmeng and Zhang, Jiawei},
booktitle={ICCV},
year={2021}
}
This repo is built upon the framework of CycleGAN, and we borrow some code from PyNet, Zoom-Learn-Zoom, PWC-Net, and AdaDSR. Thanks for their excellent work!