The Styled Video Generator builds a single output video from a given set of short video clips. It can also apply a style translation to the result (e.g. to produce painting-like videos).
- A CUDA-compatible GPU (highly recommended)
- Python 3
- Python packages: PyTorch, Torchvision, pandas, Pillow, NumPy, Decord, and MoviePy
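Whether the required packages are importable can be checked with a short stand-alone snippet; this is a convenience sketch, not part of the repository:

```python
import importlib.util

# Required packages and the module name each one is imported under.
REQUIRED = {
    "torch": "torch",
    "torchvision": "torchvision",
    "pandas": "pandas",
    "Pillow": "PIL",
    "numpy": "numpy",
    "decord": "decord",
    "moviepy": "moviepy",
}

def missing_packages():
    """Return the required packages that are not importable in this environment."""
    return [pkg for pkg, mod in REQUIRED.items()
            if importlib.util.find_spec(mod) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All required packages are installed.")
```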
- Clone this repo:
git clone https://github.com/williamhxy/styled_video_generator
- Initialize and update the submodule "pytorch-CycleGAN-and-pix2pix":
git submodule init
git submodule update
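Equivalently, the clone and submodule steps can be combined with git's `--recurse-submodules` flag:

```shell
# Clone the repository and fetch its submodules in a single step
git clone --recurse-submodules https://github.com/williamhxy/styled_video_generator
```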
Run the "initialize.py" script from the repository root to generate the classification label record.
- {path_to_the_video_dataset} is the path to the video dataset folder.
python initialize.py {path_to_the_video_dataset}
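The exact format of the label record is defined by "initialize.py". Purely as an illustration, the sketch below assumes clips are grouped in one subfolder per class and that the record is a CSV file; both are hypothetical conventions, not confirmed by the repository:

```python
import csv
from pathlib import Path

# Common video file extensions (assumed; the script may accept others).
VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def build_label_record(dataset_dir, out_csv):
    """Write a CSV mapping each clip to the name of its parent folder.

    Hypothetical sketch: assumes one subfolder per class, which may not
    match the actual behavior of initialize.py.
    """
    dataset_dir = Path(dataset_dir)
    rows = [(str(p.relative_to(dataset_dir)), p.parent.name)
            for p in sorted(dataset_dir.rglob("*"))
            if p.suffix.lower() in VIDEO_EXTS]
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["clip", "label"])
        writer.writerows(rows)
    return len(rows)
```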
Run the "generate.py" script from the repository root to generate the translated output video.
- {path_to_the_video_dataset} is the path to the video dataset folder.
- {length_of_output_video_in_second} is the output video length in seconds, as an integer.
- {video_frame_rate} is the output frame rate in frames per second, as an integer.
python generate.py {path_to_the_video_dataset} {length_of_output_video_in_second} {video_frame_rate}
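The two integer arguments together determine how many frames the generator must assemble (duration times frames per second). A minimal sketch of that relationship; the parser below mirrors the command line shown above but is not the script's actual code:

```python
import argparse

def parse_args(argv):
    """Hypothetical parser mirroring the generate.py command-line interface."""
    parser = argparse.ArgumentParser(description="Generate a styled video.")
    parser.add_argument("dataset_path", help="path to the video dataset folder")
    parser.add_argument("length", type=int, help="output video length in seconds")
    parser.add_argument("frame_rate", type=int, help="output frames per second")
    return parser.parse_args(argv)

args = parse_args(["./videos", "10", "30"])
# Total frames to produce: duration x fps.
print(args.length * args.frame_rate)  # -> 300
```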
- data: Temporary data and metadata.
- extern: All submodule repositories live here.
- models: Pre-trained CycleGAN models used for image translation.
- results: Output videos produced by the generator.
- scripts: All supporting scripts live here.
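A quick sanity check that the expected top-level folders exist after cloning; a convenience sketch, not part of the repository:

```python
from pathlib import Path

# Top-level folders listed in the project layout above.
EXPECTED_FOLDERS = ["data", "extern", "models", "results", "scripts"]

def missing_folders(repo_root="."):
    """Return the expected top-level folders absent under repo_root."""
    root = Path(repo_root)
    return [name for name in EXPECTED_FOLDERS if not (root / name).is_dir()]
```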