[CSUR] A Survey on Video Diffusion Models
A Python tool that uses GPT-4, FFmpeg, and OpenCV to automatically analyze videos, extract the most interesting sections, and crop them for an improved viewing experience.
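A minimal sketch of such a highlight-extraction pipeline, under assumptions not taken from the repository: frames are sampled with OpenCV, each segment is scored by a placeholder `score_segment()` standing in for the GPT-4 analysis step, and the chosen span is trimmed and cropped by invoking the ffmpeg CLI. The sampling rate, variance proxy, and 9:16 crop are illustrative choices only.

```python
import subprocess
import cv2


def sample_frames(path: str, every_n: int = 30):
    """Yield (frame_index, frame) pairs, keeping one frame out of every_n."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            yield idx, frame
        idx += 1
    cap.release()


def score_segment(frames) -> float:
    """Placeholder for the 'interestingness' model (e.g. a GPT-4 vision call).
    Here: mean per-frame pixel variance as a crude proxy for visual activity."""
    frames = list(frames)
    return float(sum(f.var() for f in frames) / max(len(frames), 1))


def cut_and_crop(src: str, dst: str, start: float, duration: float,
                 crop: str = "ih*9/16:ih"):
    """Trim [start, start+duration] and center-crop via the ffmpeg CLI."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(duration), "-i", src,
         "-vf", f"crop={crop}", dst],
        check=True,
    )
```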
Code for ID-Specific Video Customized Diffusion
Generate videos from text prompts using AI
Video Diffusion Alignment via Reward Gradients. We improve a variety of video diffusion models, such as VideoCrafter, OpenSora, ModelScope, and StableVideoDiffusion, by fine-tuning them with a range of reward models, including HPS, PickScore, VideoMAE, VJEPA, YOLO, and aesthetic scorers.
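The core idea, in broad strokes, is to backpropagate the gradient of a differentiable reward through a (truncated) denoising chain into the diffusion model's weights. A minimal sketch follows, assuming hypothetical `model.sample` and `reward_model` interfaces that stand in for whatever the repository actually exposes; this is not the repository's API.

```python
import torch


def reward_finetune_step(model, reward_model, prompts, optimizer,
                         num_denoise_steps: int = 10):
    """One fine-tuning step: generate videos differentiably, then maximize the reward."""
    # Differentiable sampling: keep the autograd graph through the
    # (truncated) denoising chain so reward gradients reach the weights.
    videos = model.sample(prompts, steps=num_denoise_steps)  # (B, T, C, H, W)
    # Score the generated videos with a frozen, differentiable reward model.
    rewards = reward_model(videos, prompts)                  # (B,)
    loss = -rewards.mean()                                    # gradient ascent on reward
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    return loss.item()
```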
[CVPR 2024 Highlight] ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models
[NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation
[3DV 2025] MotionDreamer: Exploring Semantic Video Diffusion features for Zero-Shot 3D Mesh Animation
Documentation for a text-to-video generation API