
Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception

Cheng-Chun Hsu, Zhenyu Jiang, Yuke Zhu

ICRA 2023

Project | arXiv

(Teaser figure)

Introduction

Our approach, named Ditto in the House, discovers possible articulated objects through affordance prediction, interacts with these objects to produce articulated motions, and infers the articulation properties from the visual observations before and after each interaction. The approach consists of two stages: affordance prediction and articulation inference. During affordance prediction, we pass the static scene point cloud into the affordance network to predict a scene-level affordance map, and the robot then interacts with the scene at high-affordance contact points. During articulation inference, we feed the point cloud observations before and after each interaction into the articulation model network to obtain per-object articulation estimates. By aggregating these estimates, we build the articulation model of the entire scene.

(Overview figure: the two-stage pipeline)
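
The following is a minimal sketch of this two-stage loop. All names below (predict_affordance, infer_articulation) and data shapes are hypothetical stand-ins for illustration, not the repository's actual API.

import numpy as np

# Stage-1 stand-in: per-point affordance scores in [0, 1].
def predict_affordance(scene_pcd):
    return np.random.rand(len(scene_pcd))  # placeholder for the affordance network

# Stage-2 stand-in: articulation estimate from before/after observations.
def infer_articulation(pcd_before, pcd_after):
    return {"joint_type": "revolute", "axis": (0, 0, 1), "pivot": (0, 0, 0)}

scene_pcd = np.random.rand(4096, 3)           # static scene point cloud
affordance = predict_affordance(scene_pcd)    # scene-level affordance map
contact_points = scene_pcd[affordance > 0.9]  # candidate interaction points

scene_model = []
for point in contact_points[:5]:              # the robot would push/pull at `point`
    pcd_before = scene_pcd                    # observation before the interaction
    pcd_after = scene_pcd                     # stand-in for the post-interaction observation
    scene_model.append(infer_articulation(pcd_before, pcd_after))
# scene_model aggregates per-interaction articulation estimates for the scene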

If you find our work useful in your research, please consider citing our paper (see the Citing section below).

Installation

The codebase consists of three modules:

  • DITH-igibson: interaction and observation collection in the iGibson simulator
  • DITH-pointnet: affordance prediction
  • DITH-ditto: articulation inference

Create conda environments and install required packages by running

cd DITH-igibson
conda env create -f conda_env.yaml -n DITH-igibson

cd ../DITH-pointnet
conda env create -f conda_env.yaml -n DITH-pointnet

cd ../DITH-ditto
conda env create -f conda_env.yaml -n DITH-ditto

Build Ditto's dependencies by running

cd DITH-ditto && conda activate DITH-ditto
python scripts/convonet_setup.py build_ext --inplace

Data Collection

  1. Run cd DITH-igibson && conda activate DITH-igibson
  2. Follow these instructions to import CubiCasa5k scenes into the iGibson simulator.
  3. Generate training and testing data by running
python room_dataset_generate.py
python room_dataset_split.py
python room_dataset_preprocess.py

The generated data can be found under dataset/cubicasa5k_rooms_processed.
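
As a quick sanity check, you can count and peek at the generated files. The .npz extension and loading code below are assumptions for illustration; the actual on-disk format may differ.

from pathlib import Path
import numpy as np

root = Path("dataset/cubicasa5k_rooms_processed")
files = sorted(root.rglob("*.npz"))  # assumed extension; adjust to the actual format
print(f"{len(files)} processed samples found")

if files:
    sample = np.load(files[0], allow_pickle=True)
    print("keys:", list(sample.keys()))  # inspect whichever fields are stored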

Affordance Prediction

  1. Run cd DITH-pointnet && conda activate DITH-pointnet

  2. Set datadir in configs/train_pointnet2.yaml and configs/test_pointnet2.yaml.

  3. Train the model

python train.py
  4. Set ckpt_path in configs/test_pointnet2.yaml

  5. Test the model

python test.py
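
To connect this step to the interaction stage below, the sketch shows one way to turn a per-point affordance prediction into contact candidates. The arrays here are synthetic; the test script's real output format may differ.

import numpy as np

points = np.random.rand(4096, 3)   # scene point cloud (N, 3)
scores = np.random.rand(4096)      # predicted per-point affordance in [0, 1]

k = 10
top_idx = np.argsort(scores)[-k:]  # indices of the k highest-affordance points
contact_points = points[top_idx]   # (k, 3) candidate interaction points
print(contact_points.shape)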

Interaction

  1. Run cd DITH-igibson && conda activate DITH-igibson
  2. Interact with the scene and save the results
python affordance_prediction_generate.py
  3. Collect novel scene observations
# generate articulation observations for training
python object_dataset_generate_train_set.py
# for testing
python object_dataset_generate_test_set.py
  4. Preprocess the collected data for training
python object_dataset_preprocess.py

The generated data can be found under dataset/cubicasa5k_objects_processed.

Articulation Inference

  1. Run cd DITH-ditto && conda activate DITH-ditto

  2. Set data_dir in configs/config.yaml.

  3. Train the model

python run.py experiment=Ditto
  4. Set resume_from_checkpoint in configs/experiment/Ditto_test.yaml

  5. Test the model

python run_test.py experiment=Ditto_test
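
After testing, the per-interaction estimates can be merged into a scene-level articulation model, as described in the introduction. The sketch below assumes illustrative fields (joint_type, axis, pivot), not Ditto's exact output schema.

from dataclasses import dataclass, field

@dataclass
class ArticulationEstimate:
    object_id: str
    joint_type: str  # "revolute" or "prismatic"
    axis: tuple      # joint axis direction
    pivot: tuple     # a point on the axis (meaningful for revolute joints)

@dataclass
class SceneArticulationModel:
    objects: dict = field(default_factory=dict)

    def add(self, est):
        # keep one estimate per object; a later interaction overwrites an earlier one
        self.objects[est.object_id] = est

scene = SceneArticulationModel()
scene.add(ArticulationEstimate("cabinet_door_3", "revolute", (0, 0, 1), (1.2, 0.4, 0.0)))
scene.add(ArticulationEstimate("drawer_1", "prismatic", (0, 1, 0), (0.0, 0.0, 0.0)))
print(len(scene.objects), "articulated objects in the scene model")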

Related Repositories

  1. The codebase is based on the amazing Lightning-Hydra-Template.

  2. We use Ditto and PointNet++ as our backbones.

Citing

@inproceedings{Hsu2023DittoITH,
  title={Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception},
  author={Cheng-Chun Hsu and Zhenyu Jiang and Yuke Zhu},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023}
}
