Educational repository for applying the main video data curation techniques presented in the Stable Video Diffusion paper. The goal is to provide an interactive resource for the community to dig deeper into the curation techniques presented in the paper. As such, it does not guarantee an exact reproduction of the paper's pipeline.
The curation techniques used in this repository are discussed in detail in Appendix C of the paper. These include:
- Cascaded Cut Detection
- Keyframe-Aware Clipping
- Optical Flow
- Synthetic Captioning
- Caption similarities and Aesthetics
- Text Detection
The notebooks present these techniques on a single video file sourced from the UCF-101 dataset.
Readers are advised to go through Appendix C of the Stable Video Diffusion paper before working through the notebooks.
Below are the primary dependencies:
- PyTorch (follow the installation instructions from the official site)
- transformers
- opencv-python
- numpy
- ffmpeg
Other dependencies will be detailed below.
Clip extraction
Refer to the video_preprocessing_clip_extraction.ipynb notebook for this. You will need to install the scenedetect library from here: https://github.com/Breakthrough/PySceneDetect. The notebook shows both cascaded cut detection and keyframe-aware clipping. At the end of the notebook, you should expect to see the different clips extracted from the provided video.
Captioning
The video_preprocessing_captioning.ipynb notebook presents synthetic captioning of a single video clip.
This uses three models:
- CoCa (relies on open_clip)
- V-BLIP (relies on EILEV)
- Zephyr-7B (relies on transformers)
We had to apply some corrections to eilev to make it work. The correction patch can be found here.
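The three models play different roles: CoCa captions at the frame level, V-BLIP captions at the video level, and Zephyr-7B summarizes both into one caption. A hedged sketch of how the two captions could be fused into a summarization prompt is below; the prompt template and function name are our own assumptions, not the repository's exact code.

```python
# Hedged sketch: fuse a frame-level caption (CoCa) and a video-level
# caption (V-BLIP) into one prompt for an LLM summarizer (e.g. Zephyr-7B).
# The prompt wording below is an assumption, not the paper's template.

def build_summarization_prompt(coca_caption: str, vblip_caption: str) -> str:
    return (
        "Summarize the following two descriptions of the same video "
        "into a single concise caption.\n"
        f"Image-level description: {coca_caption}\n"
        f"Video-level description: {vblip_caption}\n"
        "Summary:"
    )


# With transformers, the summary could then be generated roughly like this
# (the model download is large, so this part is left as a comment):
#
# from transformers import pipeline
# pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
# prompt = build_summarization_prompt(
#     "a man swings a bat", "a man plays baseball outdoors"
# )
# print(pipe(prompt, max_new_tokens=60)[0]["generated_text"])
```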
Optical Flow
The video_preprocessing_optical_flow_score.ipynb notebook shows the optical flow score computation using only the Farneback algorithm. It doesn't, however, show RAFT.
Caption similarities and Aesthetics
This is straightforward and is implemented in the video_preprocessing_similarity_aesthetics.ipynb notebook.
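The underlying signal is a CLIP-style cosine similarity between the caption embedding and the embeddings of frames sampled from the clip. The sketch below shows just that scoring logic with placeholder random embeddings; in the notebook the real embeddings come from a CLIP model, and an aesthetic score is computed alongside.

```python
import numpy as np


def cosine_similarity(a, b):
    """Row-wise cosine similarity between two 2-D arrays of embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T


def caption_frame_similarity(text_emb, frame_embs):
    """Average cosine similarity between one caption embedding and the
    embeddings of frames sampled from the clip."""
    return float(cosine_similarity(text_emb[None, :], frame_embs).mean())


# Placeholder embeddings stand in for real CLIP features.
rng = np.random.default_rng(0)
text = rng.normal(size=512)
frames = rng.normal(size=(4, 512))
score = caption_frame_similarity(text, frames)
```

Clips whose score falls below a chosen threshold would then be dropped from the training set.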
Text Detection
Refer to the video_preprocessing_text_detection.ipynb notebook for this. We use a wrapper library called craft_text_detector (repository) for this, as it provides a handy package around the CRAFT text detection model. However, to make it work, we had to make some changes. The patch can be found here.
Thanks to ChatGPT for all the help.
Thanks to Dhruv Nair for his reviews.