
Markerless Camera-to-Robot Pose Estimation via Self-supervised Sim-to-Real Transfer

PyTorch implementation of CtRNet (https://arxiv.org/abs/2302.14332).

Dependencies

We recommend setting up the environment using Anaconda. The code is developed and tested on Ubuntu 20.04.

  • Python (3.8)
  • NumPy (1.22.4)
  • PyTorch (1.10.0)
  • torchvision (0.11.1)
  • PyTorch3D (0.6.2)
  • Kornia (0.6.3)
  • Transforms3d (0.3.1)

See environment.yml for more details.
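
The environment can typically be created directly from that file, e.g.:

conda env create -f environment.yml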

Usage

  • See inference_single_frame.ipynb for an example of single-frame inference.
  • We provide a ROS node for CtRNet, which subscribes to image and joint state topics and publishes the robot pose (a minimal sketch follows below):
python ros_node/panda_pose.py
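
Below is a minimal sketch of such a node, assuming rospy and message_filters; the topic names, the output message type, and the CtRNet call are placeholders rather than the repository's actual interface (see ros_node/panda_pose.py for that).

import rospy
import message_filters
from sensor_msgs.msg import Image, JointState
from geometry_msgs.msg import PoseStamped

def callback(image_msg, joint_msg):
    # Placeholder: convert image_msg to a tensor, run CtRNet with the joint
    # angles, and fill in the estimated camera-to-robot pose.
    pose = PoseStamped()
    pose.header.stamp = image_msg.header.stamp
    pose.header.frame_id = "camera_frame"  # assumed frame name
    # pose.pose = ctrnet_estimate(image_msg, joint_msg.position)  # hypothetical call
    pose_pub.publish(pose)

if __name__ == "__main__":
    rospy.init_node("ctrnet_pose_node")
    pose_pub = rospy.Publisher("/robot_pose", PoseStamped, queue_size=1)
    image_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
    joint_sub = message_filters.Subscriber("/joint_states", JointState)
    # Approximately synchronize the image and joint-state streams before
    # passing them to the pose-estimation callback.
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, joint_sub], queue_size=10, slop=0.05)
    sync.registerCallback(callback)
    rospy.spin()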

Dataset

  1. DREAM dataset
  2. Baxter dataset

Weights

Weights for Panda and Baxter can be found here.

Videos

Using CtRNet for a visual servoing experiment with a moving camera.

video_1.mp4
video_2.mp4