Interactive Visual Grounding of Referring Expressions for Human Robot Interaction
Mohit Shridhar, David Hsu
RSS 2018
This is a Docker image (~9.2GB) of my demo setup for grounding referring expressions. You can treat it as a black box; input: image & expression, output: bounding boxes and question captions. See Architecture for more details.
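For orientation, the output can be pictured as a small record. Below is a minimal sketch; the field names are illustrative assumptions, not the actual ingress_msgs definitions:
from collections import namedtuple

# Illustrative sketch only: field names are assumptions, not the
# actual ingress_msgs definitions.
GroundingResult = namedtuple("GroundingResult", [
    "boxes",                      # one (x, y, width, height) box per candidate object
    "self_referential_captions",  # e.g. "a red cup on the table"
    "relational_captions",        # e.g. "the red cup on the left."
])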
If you find the code useful, please cite:
@inproceedings{Shridhar-RSS-18,
author = {Mohit Shridhar AND David Hsu},
title = {Interactive Visual Grounding of Referring Expressions for Human-Robot Interaction},
booktitle = {Proceedings of Robotics: Science and Systems},
year = {2018}
}
Please also cite the works listed in the acknowledgements.
- Ubuntu 16.04
- Docker 18.03.1+
- NVIDIA Docker
- ROS Kinetic
- OpenCV 2 (Optional)
- Tested on NVIDIA GTX 1080 (needs about 2.5 GB RAM)
The docker image contains: ROS (kinetic), Torch, Caffe, and Ingress (source code). To run and test Ingress inside the docker image, you don't need to install any dependencies other than nvidia-docker itself.
Follow the instructions to install NVIDIA Docker. If everything is installed properly, you should be able to run this inside docker:
$ nvidia-smi
Clone the repo and build ROS workspace:
$ git clone https://github.com/AdaCompNUS/ingress-proj.git
$ cd ingress-proj/ingress_ros_ws
$ catkin_make
$ source devel/setup.bash
Run the script. The first time you run this command, Docker downloads a 9.2GB image, which could take a while:
$ cd <ingress_dir>
$ sh start_ingress.sh
Inside docker, install the Lua Torch CUDA libraries:
$ luarocks install cutorch
$ luarocks install cunn
$ luarocks install cudnn
In the demo, the ingress docker image is used as a grounding server and the host system acts as a client.
Go inside the docker container and edit the ~/ingress_server.sh script with your network settings:
...
export ROS_MASTER_URI=http://<roscore_ip_addr>:11311
export ROS_IP=<ingress_system_ip_addr>
...
or manually export ROS_MASTER_URI and ROS_IP:
export ROS_MASTER_URI="http://<roscore_ip_addr>:11311"
export ROS_IP=<ingress_system_ip_addr>
Start roscore on your robot or client PC:
root@pc:/# roscore
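Optionally, verify from either machine that the master is reachable before starting the server. A minimal check using rosgraph, which ships with ROS:
# Verifies that the ROS master named by ROS_MASTER_URI is reachable.
import rosgraph

if rosgraph.is_master_online():
    print("roscore is reachable")
else:
    print("cannot reach roscore; check ROS_MASTER_URI and ROS_IP")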
Then start ingress inside the docker image:
$ sh start_ingress.sh
root@docker-container:/# ingress
Wait until you see METEOR initialized. That means the grounding server is ready. Now you can send images and expressions to the server, and receive grounded bounding boxes and question captions as output.
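The reference client is examples/interactive_grounding_example.py, used below. As a rough sketch of what such a client looks like, the snippet here publishes an image and an expression; the topic names are placeholders, not the actual ingress_msgs interface, so treat it as orientation only:
#!/usr/bin/env python
# Sketch of a grounding client. The topic names below are PLACEHOLDERS,
# not the real ingress_msgs interface; see
# examples/interactive_grounding_example.py for the actual client.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import String

rospy.init_node("grounding_client_sketch")
bridge = CvBridge()

image_pub = rospy.Publisher("/ingress/image", Image, queue_size=1)         # placeholder topic
query_pub = rospy.Publisher("/ingress/expression", String, queue_size=1)   # placeholder topic
rospy.sleep(1.0)  # let the publishers register with the master

frame = cv2.imread("scene.png")  # path to any RGB image of your scene (assumption)
image_pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
query_pub.publish(String(data="the red cup"))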
Now you can run the example on the robot or client PC:
root@pc:/# cd <ingress-repo-root>/examples/
root@pc:/# python interactive_grounding_example.py
Type "the red cup" into the query. This outputs grounding_result.png
and prints out self-referrential and relational question captions:
[INFO] [WallTime: 1532576914.160205] Self Referential Captions:
['a red cup on the table', 'red cup on the table', 'red cup on the table']
[INFO] [WallTime: 1532576914.160599] Relational Captions:
['the red cup in the middle.', 'the red cup on the left.', 'the red cup on the right.']
In docker, to shut down the ingress server, use Ctrl + c or Ctrl + \.
You can make changes to the ingress ROS client and test it normally. The source code for the ingress server, however, is stored in /docker_root. You can make changes there, but to test them you have to copy them into the docker container. If you change the ingress ROS interfaces (such as ingress_msgs), copy them in as well:
docker cp docker_root/. <container-id>:/root/
docker cp ingress_ros_ws <container-id>:/root/
By default, disambiguation is enabled. It can be disabled by setting DISAMBIGUATE=false in ~/ingress_server.sh for fast grounding without disambiguation. In docker:
root@docker-container:/# sed -i 's/DISAMBIGUATE=true/DISAMBIGUATE=false/g' ~/ingress_server.sh
root@docker-container:/# ingress
- Make sure the input image is well-lit, and the scene is uncluttered
- Crop the image to exclude irrelevant parts of the scene (e.g., the backdrop of the table) to reduce mis-detections
- roscore should be up and running before you start the ingress server
- Use tmux to multiplex roscore, ingress, and python interactive_grounding_example.py
- This demo code doesn't contain the interactive (robot-pointing) question asking interface.
- For grounding perspectives (e.g., 'my left', 'your right'), see the perspective correction guide.
If Lua complains that certain CUDA functions were not found during execution, run start_ingress.sh again and reinstall the rocks:
$ luarocks install cutorch
$ luarocks install cunn
$ luarocks install cudnn
Exit and docker commit the changes to the image.
Johnson et al., DenseCap
@inproceedings{densecap,
title={DenseCap: Fully Convolutional Localization Networks for Dense Captioning},
author={Johnson, Justin and Karpathy, Andrej and Fei-Fei, Li},
booktitle={Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition},
year={2016}
}
Nagaraja et al., Referring Expressions
@inproceedings{nagaraja16refexp,
title={Modeling Context Between Objects for Referring Expression Understanding},
author={Varun K. Nagaraja and Vlad I. Morariu and Larry S. Davis},
booktitle={ECCV},
year={2016}
}