Official implementation of Bootstrap3D: Improving Multi-view Diffusion Model with Synthetic Data

Bootstrap3D

Bootstrap3D: Improving Multi-view Diffusion Model with Synthetic Data
Zeyi Sun, Tong Wu, Pan Zhang, Yuhang Zang, Xiaoyi Dong, Yuanjun Xiong, Dahua Lin, Jiaqi Wang

📜 News

🚀 [2024/6/4] The paper and project page are released!

💡 Highlights

  • 🔥 A new Multi-View Diffusion model trained on high-quality synthetic data, capable of generating multi-view images that closely follow text prompts.
  • 🔥 A densely captioned Objaverse dataset, produced by the fine-tuned 3D-aware MV-LLaVA powered by GPT-4V.
  • 🔥 A high-quality synthetic dataset for highly aesthetic 3D content creation.

👨‍💻 Todo

  • Training code of the MV-Diffusion model based on PixArt.
  • BS-Synthetic3D HQ 3D-object dataset.
  • Release of the MV-PixArt-alpha and MV-PixArt-sigma models.
  • BS-Objaverse dataset card launched on Hugging Face.
  • MV-LLaVA model and web demo.
  • Paper and project page.

⚡ Quick Start

MV-LLaVA

📜 News

MV-LLaVA is trained on 30K GPT-4V-generated instructive conversation pairs, enabling LLaVA to process multi-view images rendered from 3D content, chat about them, and generate dense descriptive captions or quality estimations.

Its 7B model is available on Hugging Face.

We use this model to provide quality estimations for Objaverse and to rewrite dense descriptive captions. We call this caption dataset BS-Objaverse (BootStrap-Objaverse); it is now available on Hugging Face.

We also use this model to process synthetic multi-view images generated by SV3D and Zero123++. The full version of 1M high-quality multi-view images will be made publicly available soon.

🛠️ Usage

Installation (Infer only)

Our MV-LLaVA is based on ShareGPT-4V, thanks for their awesome work! You can clone our repo and run `cd MV_LLaVA && pip install -e .` to install the share4v package.

  • Launch our demo through `python app.py`.
  • Batch-infer your multi-view images using the batch scripts in `tools/`; a loading sketch follows below.
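
For orientation, here is a minimal Python loading sketch. It assumes the `share4v` package mirrors the upstream LLaVA loader API; the module paths and the model repo id below are assumptions, not confirmed by this repo, so prefer the maintained scripts in `tools/`.

```python
# Hedged sketch: loading MV-LLaVA for captioning multi-view renders.
# ASSUMPTIONS: share4v mirrors the upstream LLaVA loader API, and the
# repo id below is a placeholder -- use the Hugging Face link above and
# the batch scripts in tools/ as the real reference.
from share4v.model.builder import load_pretrained_model  # assumed module path
from share4v.mm_utils import get_model_name_from_path    # assumed module path

model_path = "SunzeY/MV-LLaVA-7B"  # hypothetical repo id
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)

# Preprocess a grid of multi-view renders with image_processor and call
# model.generate() with a captioning prompt, as the tools/ scripts do.
```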

Installation (Training)

For a training demo, clone our repo and run `cd MV_LLaVA && pip install -e .` to install the share4v package. First use `bash scripts/slurm_pretrain_7b_mv.sh` to align CLIP with LLaMA, then run `bash scripts/slrum_finetune_7b_mv.sh` to do instruct tuning.

We have uploaded demo Objaverse multi-view data (10 images only) in `data/obj_demo`; the JSON files for pretraining and instruct tuning are available in `data/demo_obj_pretrain.json` and `data/demo_obj_instruct.json`. You can generate your own data following the same format; a sketch of the record layout is shown below. Note that pretraining data only supports single-turn conversations.
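
As a reference for generating your own files, here is a minimal Python sketch of a single pretraining record. The field names follow the common LLaVA-style layout and are assumptions; treat `data/demo_obj_pretrain.json` as the authoritative schema.

```python
# Hedged sketch of a single pretraining record. The field names follow
# the common LLaVA-style layout and are ASSUMPTIONS -- treat
# data/demo_obj_pretrain.json as the authoritative schema.
import json

record = {
    "id": "obj_000000",                  # assumed: unique sample id
    "image": "obj_demo/obj_000000.png",  # assumed: path to the multi-view render
    "conversations": [                   # pretraining supports single-turn only
        {"from": "human", "value": "<image>\nDescribe this 3D object."},
        {"from": "gpt", "value": "A dense descriptive caption goes here."},
    ],
}

with open("data/my_obj_pretrain.json", "w") as f:
    json.dump([record], f, indent=2)
```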

You can review the modifications we made to Share4V for MV-LLaVA here. For your own custom usage, you only need to focus on these lines of code.

If you only need to change the training data, focus on the lines of code with the `modify` tag (search for this tag in your IDE).

Full data preparation (Objaverse)

  1. Download the full Cap3D dataset of Objaverse rendered images.
  2. Download the BS-Objaverse dataset's GPT-4V-generated annotations `obj_descript_gpt_10k.json` and convert them into the same format as the demo (see the sketch after this list).
  3. Prepare the Share4V dataset (optional, to mitigate overfitting).
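
For step 2, a hedged conversion sketch is below. The key/value layout of `obj_descript_gpt_10k.json` and the render path convention are assumptions; inspect the downloaded file and `data/demo_obj_instruct.json` before running.

```python
# Hedged conversion sketch for step 2. The annotation layout and the
# render path convention are ASSUMPTIONS -- inspect the downloaded file
# and data/demo_obj_instruct.json before running.
import json

with open("obj_descript_gpt_10k.json") as f:
    annotations = json.load(f)  # assumed layout: {object_id: caption, ...}

records = []
for obj_id, caption in annotations.items():
    records.append({
        "id": obj_id,
        "image": f"cap3d_renders/{obj_id}.png",  # hypothetical render path
        "conversations": [
            {"from": "human", "value": "<image>\nDescribe this 3D object in detail."},
            {"from": "gpt", "value": caption},
        ],
    })

with open("data/obj_instruct_10k.json", "w") as f:
    json.dump(records, f, indent=2)
```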

✒️ Citation

If you find our work helpful for your research, please consider giving a star ⭐ and citation 📝

@misc{sun2024bootstrap3dimprovingmultiviewdiffusion,
      title={Bootstrap3D: Improving Multi-view Diffusion Model with Synthetic Data}, 
      author={Zeyi Sun and Tong Wu and Pan Zhang and Yuhang Zang and Xiaoyi Dong and Yuanjun Xiong and Dahua Lin and Jiaqi Wang},
      year={2024},
      eprint={2406.00093},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.00093}, 
}
