Video Restoration Processing Pipeline
We've released a public Colab notebook! Use the link below to try it:
As the name suggests, this is a video restoration pipeline that pulls from various cutting-edge technologies and merges them into one processing pipeline, for videos, to rule them all. The pipeline borrows AI techniques from multiple contributors; these techniques are listed on our releases page. If you like our project, please give us a star, and don't forget to star the other projects used by the video restoration pipeline 🤠
NOTE: Only one video at a time!
Setting up the environment
# Make sure you have git installed
git clone https://github.com/cliffordkleinsr/DE-SRFREN.git
cd DE-SRFREN/v0.0.3
# Make sure you have Python and PyTorch installed -.-"
# Install basicsr
pip install basicsr
# Install facexlib
# We use face detection and face restoration helper in the facexlib package
pip install facexlib # face parsing net and ResNet-based face detection
pip install realesrgan
pip install gfpgan
pip install -r requirements.txt
As a side note, make sure you have PyTorch compiled with CUDA binaries installed; otherwise inference speed will be greatly impacted.
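A minimal sketch (plain PyTorch calls, not part of the pipeline itself) to check that CUDA is actually available:

```python
import torch

# If this prints False, PyTorch was installed without CUDA support and
# inference will fall back to the much slower CPU path.
print(torch.cuda.is_available())
# CUDA version PyTorch was built against (None for CPU-only builds).
print(torch.version.cuda)
```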
- Basic argument structure:
-i or --input, your input video directory
-o or --video_output, your video output
-n, model name
--ffmpeg_bin, path to ffmpeg.exe
--ffprobe_bin, path to ffprobe.exe
--batch, ability to batch images
--batches, number of batches; default is 4
-h or --help, for help with arguments
Note: The arguments --ffmpeg_bin and --ffprobe_bin should only be used if you have not added the ffmpeg binaries to your environment variables (PATH). Batched inference is controlled by the --batch and --batches parameters (default is 4); lower values are better, but not <= 1.
- For quick inference on Windows
Use this if ffmpeg is not added to your PATH:
python inference.py -i inputs/your_video.mp4 --ffmpeg_bin ffmpeg/bin/ffmpeg.exe --ffprobe_bin ffmpeg/bin/ffprobe.exe --face_enhance --suffix outx2
Note: --face_enhance only works with videos of real people. If you are working with anime/animation (cartoon) characters, use:
python inference.py -i inputs/your_anime_video.mp4 --ffmpeg_bin ffmpeg/bin/ffmpeg.exe --ffprobe_bin ffmpeg/bin/ffprobe.exe -n realesr-animevideov3 --suffix outx2
Use this if ffmpeg is already on your Windows environment PATH:
python inference.py -i inputs/your_video.mp4 --face_enhance --suffix outx2
Again, --face_enhance only works with videos of real people; for anime/animation use:
python inference.py -i inputs/your_anime_video.mp4 -n realesr-animevideov3 --suffix outx2
- Quick inference on Colab/Linux is similar to Windows, but omit the --ffmpeg_bin and --ffprobe_bin flags when the binaries are already installed.
- The Vector-Quantized codebook (VQFR) is deprecated and can only be used with v0.0.1:
Original | Processed |
---|---|
sher10s.1.mp4 | processed.mp4 |
- Moved the final scaling and uint8 quantization to the GPU, reducing CPU and main-memory bandwidth consumption; roughly a 2.5x speed-up (see the sketch after this list).
- Instruct FFmpeg to use RGB frames instead of BGR, so there is no need to swap channels.
- Batched inference (controlled by the --batch and --batches parameters; default is 4).
- Instruct torch to make tensors contiguous after the BCHW -> BHWC transform on the GPU, so there is no need to copy the buffer before writing to FFmpeg; this reduced output I/O time by 10x.
- Use the NVENC pipeline, when available, to decode and encode images when piping inputs.
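As an illustration of the GPU-side post-processing described above, here is a minimal sketch; the function name write_frames_to_ffmpeg and the surrounding FFmpeg setup are hypothetical, not the pipeline's actual code:

```python
import subprocess
import torch

def write_frames_to_ffmpeg(frames_bchw: torch.Tensor, ffmpeg: subprocess.Popen) -> None:
    """Illustrative only: scale, quantize and reorder a batch of frames on the GPU,
    then pipe the raw RGB bytes to an already running FFmpeg process."""
    # Scale [0, 1] floats to [0, 255] and quantize to uint8 while still on the GPU,
    # so the CPU never touches full-precision frame data.
    frames = frames_bchw.clamp(0, 1).mul(255).to(torch.uint8)
    # BCHW -> BHWC, then make the tensor contiguous on the GPU so the buffer can be
    # written out without an extra copy.
    frames = frames.permute(0, 2, 3, 1).contiguous()
    # FFmpeg is assumed to have been started with `-f rawvideo -pix_fmt rgb24 -i -`,
    # so raw RGB24 bytes can be written straight to its stdin.
    ffmpeg.stdin.write(frames.cpu().numpy().tobytes())
```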
- Take video frames and turn them into images (the overall loop is outlined in the sketch after this list)
- Super-resolve each image
- Restore the faces in each frame
- Merge the frames back into an MP4 using the H.264 codec
- Speed up inference using the NVENC pipe
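A rough outline of that loop, assuming OpenCV for frame I/O and with placeholder functions standing in for the Real-ESRGAN and GFPGAN calls (a sketch, not the shipped implementation):

```python
import cv2  # OpenCV is used here only for simple frame I/O

def upscale_frame(frame):
    # Placeholder for the Real-ESRGAN super-resolution step.
    return frame

def restore_faces(frame):
    # Placeholder for the GFPGAN face-restoration step.
    return frame

def restore_video(input_path: str, output_path: str) -> None:
    reader = cv2.VideoCapture(input_path)
    fps = reader.get(cv2.CAP_PROP_FPS)
    writer = None
    while True:
        ok, frame = reader.read()          # 1. turn the video into frames
        if not ok:
            break
        frame = upscale_frame(frame)       # 2. super-resolve the frame
        frame = restore_faces(frame)       # 3. restore faces in the frame
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(
                output_path, cv2.VideoWriter_fourcc(*"avc1"), fps, (w, h)
            )                              # 4. merge frames into an H.264 MP4
        writer.write(frame)
    reader.release()
    if writer is not None:
        writer.release()
```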
- Old Video Scratch Detection
- Global Scene restoration
- Frame Generation 24-60 FPS
- More support for different video formats
- Colorize Black and White Images
- Lossless Decoding and encoding
- Sound restoration
@InProceedings{clifford2023desrfren,
author = {Clifford Njoroge},
title = {DE-SRFREN: Video Restoration Processing Pipeline},
year = {2023}
}
Real-ESRGAN
@InProceedings{wang2021realesrgan,
author = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
title = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
year = {2021}
}
VQFR
@inproceedings{gu2022vqfr,
title={VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder},
author={Gu, Yuchao and Wang, Xintao and Xie, Liangbin and Dong, Chao and Li, Gen and Shan, Ying and Cheng, Ming-Ming},
year={2022},
booktitle={ECCV}
}
GFPGAN
@InProceedings{wang2021gfpgan,
author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2021}
}
IMAGEIO