DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus
[Project Page | arXiv] (NeurIPS 2024)
Install the conda environment of DOGS:
conda create -n dogs python=3.9
conda activate dogs
cd DOGS
./scripts/env/install.sh
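To verify the installation (a quick sanity check, assuming the install script sets up PyTorch with CUDA support):

```bash
# Should print the PyTorch version and `True` if CUDA is available.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```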
Our method accelerates the training of 3DGS by more than 6x on large-scale scenes while achieving state-of-the-art rendering quality.
- [x] Release evaluation code
- [ ] Release pre-trained models on Mill19, UrbanScene3D, and MatrixCity
- [x] Release web-viewer
- [x] Release training code
  - [x] Gaussian Splatting trainer
  - [x] Decoupled Appearance Embedding of VastGaussian
  - [x] Gaussian Pruning of LightGaussian
  - [x] Scaffold-GS trainer
  - [x] Support Taming-3DGS
- [ ] ADMM Gaussian Splatting trainer
  - [x] Gaussian Splatting trainer
- [ ] Test on street-view scenes
- [ ] Support distributed training of Scaffold-GS and Octree-GS
Follow the instructions for Mill 19 and UrbanScene 3D in Mega-NeRF to download the Mill19 dataset and the UrbanScene3D dataset. We provide scripts to convert the Mega-NeRF camera poses to the COLMAP format.
cd DOGS
# Replace the `data_dir` at Line 202 and Line 205 with your own.
python -m scripts.preprocess.meganerf_to_colmap
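After conversion, each scene should follow the standard COLMAP layout (a sketch; the actual folder names depend on the dataset, and `points3D.bin` may be empty or produced by a later triangulation step):

```
<data_dir>/<scene>/
├── images/            # RGB images
└── sparse/0/
    ├── cameras.bin    # camera intrinsics
    ├── images.bin     # camera poses
    └── points3D.bin   # sparse point cloud
```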
We also provide a script to convert the camera poses of the MatrixCity dataset into the COLMAP format:
cd DOGS
# Replace the `data_dir_list` at Line 31 with your own;
# also set the scenes you want to convert at Lines 23-27.
python -m scripts.preprocess.matrix_city_to_colmap
We first run the provided script to pre-process a large-scale scene into several blocks:
cd scripts/preprocess
./preprocess_large_scale_data.sh 0 urban3d gaussian_splatting
Visualize scene splitting
Please check out and compile my modified version of COLMAP. After installation, launch COLMAP's GUI. I extended the original COLMAP model files with an additional cluster.txt file; once COLMAP's GUI finds this file, it renders each image in a color corresponding to its cluster ID, which visualizes the scene splitting (some examples are shown below). Each line of cluster.txt follows the format [image_id, cluster_id].
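For illustration, a cluster.txt grouping five images into three clusters might look like this (the IDs are made up, and the delimiter is assumed to be a space):

```
1 0
2 0
3 1
4 1
5 2
```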
Additionally, we provide scripts to preprocess your own dataset: they take a .MOV video as input and output the camera poses in the COLMAP format:
VIDEO_DIR=x
INPUT_FILE=xx
OUTPUT_FOLDER=xxx
FRAMERATE=3
VOC_TREE_PATH=xxxx
cd scripts/preprocess
# (1) Convert video to image sequence
./video_to_sequence.sh $VIDEO_DIR $INPUT_FILE $OUTPUT_FOLDER $FRAMERATE
# (2) Compute camera poses with COLMAP
./colmap_mapping.sh $VIDEO_DIR $VIDEO_DIR $VOC_TREE_PATH 100 0
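For example, with hypothetical values filled in (the vocabulary tree is a standard COLMAP asset; replace all paths with your own):

```bash
VIDEO_DIR=/data/my_scene
INPUT_FILE=IMG_0001.MOV
OUTPUT_FOLDER=images
FRAMERATE=3
VOC_TREE_PATH=/data/vocab_tree_flickr100K_words256K.bin
cd scripts/preprocess
./video_to_sequence.sh $VIDEO_DIR $INPUT_FILE $OUTPUT_FOLDER $FRAMERATE
./colmap_mapping.sh $VIDEO_DIR $VIDEO_DIR $VOC_TREE_PATH 100 0
```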
To train a model on one of the supported datasets, run:
cd scripts/train
DATASET=mipnerf360
./train_nvs.sh 0 $EXP_SUFFIX $DATASET gaussian_splatting
We provide configuration files for training on the blender, llff, matrix_city, mipnerf360, tanks_and_temples, and urban3d datasets. We can also train on our own dataset by setting the correct dataset path and scenes in config/gaussian_splatting/custom.yaml.
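For example, assuming the dataset name selects the matching config file (as with the presets above), training on a custom dataset might look like:

```bash
cd scripts/train
DATASET=custom   # assumed to pick up config/gaussian_splatting/custom.yaml
./train_nvs.sh 0 $EXP_SUFFIX $DATASET gaussian_splatting
```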
We are still polishing the distributed training code due to refactoring and testing. You can try the admm branch for a quick test: git checkout admm. Note that since the current code and CUDA rasterizer for training 3DGS differ from those used for the camera-ready experiments, we suggest evaluating performance once our pretrained models are released.
Here we provide scripts and an example of how to run DOGS on three compute nodes with 9 GPUs in total (1 GPU on the master node and 4 GPUs on each of two slave nodes).
Before running the program, we may need to modify the parameters in the provided scripts:
(1) scripts/train/train_admm_master.sh:
- set NUM_TOTAL_NODES to the correct total number of GPUs (9 in this example, as described above)
- set ETHERNET_INTERFACE to the Ethernet interface of your machine (you can find the correct interface by running ifconfig in a terminal on a Linux machine)
- set DATASET to the dataset you want to reconstruct
- set the correct IP address of the master node: --master_addr=xx.xx.xx.xx

(2) Modify the above-mentioned parameters accordingly in scripts/train/train_admm_worker1.sh and scripts/train/train_admm_worker2.sh.
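For concreteness, the edited variables in train_admm_master.sh might look like the following (all values are hypothetical; substitute your own GPU count, interface name, and master IP):

```bash
NUM_TOTAL_NODES=9               # 1 GPU on the master + 4 GPUs on each of two slave nodes
ETHERNET_INTERFACE=eno1         # found via `ifconfig` on this machine
DATASET=urban3d_admm            # the dataset/config to reconstruct
# ...and in the launch command inside the script:
#   --master_addr=192.168.1.10  # IP address of the master node
```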
First, in a terminal on the master node, we run:
cd scripts/train
./train_admm_master.sh $EXP_SUFFIX urban3d_admm
Then, we launch a worker in a terminal on each of the two slave nodes:
cd scripts/train
./train_admm_worker1.sh $EXP_SUFFIX urban3d_admm
cd scripts/train
./train_admm_worker2.sh $EXP_SUFFIX urban3d_admm
After that, we can have a cup of coffee and wait for the master node to connect to the slave nodes and finish the training.

Once training is done, evaluate the results:
cd scripts/eval
./eval_nvs.sh 0 $EXP_SUFFIX urban3d gaussian_splatting
If you find this project useful for your research, please consider citing our paper:
@inproceedings{yuchen2024dogaussian,
  title={DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus},
  author={Chen, Yu and Lee, Gim Hee},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2024},
}
This work is built upon 3d-gaussian-splatting. We sincerely thank the authors for releasing their code. Yu Chen was partially supported by a Google PhD Fellowship while finishing this project.
Copyright © 2024, Chen Yu. All rights reserved. Please see the license file for terms.