
ROS topic as capture input #20

Open
awesomebytes opened this issue Apr 4, 2014 · 10 comments

Comments

@awesomebytes
Member

Right now you can only capture from a plugged-in sensor; it would be handy to be able to train from a topic (say, a rosbag, a running robot, whatever).

Also, the first image that comes out of my Kinect/Xtion is always super dark because the device is warming up, and that may be bad for the training to be done later with the captured data.
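
A cheap workaround until topic capture exists would be to drop the first few frames before they reach capture. Below is a minimal rospy sketch of that idea; the topic names and the frame count are placeholders, not anything ORK defines:

```python
#!/usr/bin/env python
# Minimal warm-up filter sketch, assuming placeholder topic names: it
# republishes the camera stream only after the first few frames, so
# downstream capture never sees the dark warm-up image.
import rospy
from sensor_msgs.msg import Image

WARMUP_FRAMES = 5  # how many initial frames to discard (an assumption)


class WarmupFilter(object):
    def __init__(self):
        self.seen = 0
        self.pub = rospy.Publisher('/camera/rgb/image_filtered', Image,
                                   queue_size=1)
        rospy.Subscriber('/camera/rgb/image_color', Image, self.callback,
                         queue_size=1)

    def callback(self, msg):
        self.seen += 1
        if self.seen > WARMUP_FRAMES:
            self.pub.publish(msg)  # forward only once the sensor has settled


if __name__ == '__main__':
    rospy.init_node('warmup_filter')
    WarmupFilter()
    rospy.spin()
```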

@corot

corot commented Mar 22, 2015

I'm trying to do so right now, in order to use my Creative Senz3D. It looks feasible (I managed to use topics instead of the openni driver), but there are still problems to solve. I will commit my changes to a fork if it ever works, but I don't think I can find the time to merge it into the main repo and create all the alternative operation modes and documentation required.

@xMutzelx

@corot Can you please publish your changes to the source code? I would like to use topics instead of the openni driver, and I could continue your work. Thank you very much.

@corot

corot commented Feb 8, 2017

Hi @xMutzelx, I haven't worked on this for ages, as I'm not capturing or training objects. But looking back at my local ORK workspace, I see that at the time I created versions of the files you need to modify, renamed xxx_ros:

ork_capture.zip

As you can see, they are very similar to the ones using OpenNI, so you should have no problems using them.

@fivef

fivef commented Apr 4, 2017

Based on @corot's code, I have capture working with the Asus Xtion via topics. I had to change the image encoding for the depth image from mono16 to 32FC1 (it has to be the same as the encoding set in the depth image messages you receive from your 3D camera). I haven't tried it on the Kinect 2 yet. I used the dot pattern for capture.

Here is the capture command with all the remaps:

rosrun object_recognition_capture capture -n 40 --seg_z_min 0.01 -o swiss_cup.bag \
    /camera/rgb/image_color:=/marvin/camera/rgb/image_raw \
    /camera/depth_registered/camera_info:=/marvin/camera/depth_registered/camera_info \
    /camera/rgb/camera_info:=/marvin/camera/rgb/camera_info \
    /camera/depth_registered/image_raw:=/marvin/camera/depth_registered/image_raw

https://github.com/iki-wgt/capture/tree/topic_capture

Stuff which needs to be done before this can be merged:

  • Move the origin of the objects to the center (needed by linemod; currently it's at the bottom and has to be altered manually, e.g. via MeshLab)

  • Rename the changed files to e.g. capture_ros

  • Test with the orb template

  • Find a way to set the depth image encoding based on the input image, or convert it to e.g. mono16 (see the conversion sketch after this list)

  • Test with Kinect 2
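
For the encoding item, here is a minimal sketch of one possible conversion relay, assuming the camera publishes 32FC1 depth in meters and that mono16 depth in millimeters is wanted downstream; the topic names are placeholders, not ORK defaults:

```python
#!/usr/bin/env python
# Relay sketch: subscribe to a 32FC1 depth stream (meters, float32) and
# republish it as mono16 (millimeters, uint16) using cv_bridge.
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
pub = None


def callback(msg):
    depth_m = bridge.imgmsg_to_cv2(msg, desired_encoding='32FC1')
    # NaNs (no reading) become 0; meters -> millimeters as 16-bit unsigned.
    depth_mm = np.nan_to_num(depth_m * 1000.0).astype(np.uint16)
    out = bridge.cv2_to_imgmsg(depth_mm, encoding='mono16')
    out.header = msg.header  # keep the original timestamp and frame_id
    pub.publish(out)


if __name__ == '__main__':
    rospy.init_node('depth_to_mono16')
    pub = rospy.Publisher('/camera/depth_registered/image_mono16', Image,
                          queue_size=1)
    rospy.Subscriber('/camera/depth_registered/image_raw', Image, callback,
                     queue_size=1)
    rospy.spin()
```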

@chyphen777

I am also trying to run ork_capture with an Asus Xtion Pro (openni2), and I get errors related to openni_wrapper (openni). I am new to ORK and not sure how to modify ork_capture to work with openni2, even after reading the comments above. Is there any new version of ork_capture supporting openni2, or are there any more detailed instructions for how to make it work with the Asus Xtion Pro? Thanks.

@EmilyJrxx

@fivef @corot Hi! Thanks for your hints. I tried using a topic as input, but I failed. I'm using a RealSense D435 depth camera, and I also have a Kinect v2; unfortunately neither can be launched with openni2.launch. Anyway, I added ecto_ros.init(sys.argv, 'orb_template', anonymous=False) to my local src, changed the OpenNISource to OpenNISubscriber, and remapped the topics to the corresponding topics published by my camera.

But when I ran rosrun object_recognition_capture orb_template.py -o my_textured_plane, it seemed that neither of these two cameras has a 'mask' property:

[diag_msg] = no inputs or outputs found
[cell_name] = Source
[tendril_key] = mask_depth

Maybe I should modify the name/tag used in 'orb_template'? If you have any advice or thoughts, please tell me. Thanks for your help!

@corot

corot commented Oct 15, 2019

Wow... I got rid of these problems ages ago by switching to an ASUS Xtion.

That said, if I remember properly, yes, I tweaked keys in the code, and I remember mask_depth in particular. I think I changed it to just depth somewhere, but I won't have the code to verify that until the end of the month, sorry.

@EmilyJrxx

@corot Thanks for replying so soon! I am new to RGBD cameras like those mentioned above. I had assumed that whatever RGBD camera it is, it should publish similar topics (image, depth & camera_info), and that if this code works with one camera based on OpenNISubscriber and ROS, it should work with the others too, so the problem would just be the different names of the 'properties' published by each camera, like different standards. Maybe I misunderstood that; I will try to modify the code and also try other ways to get 3D models for ORK. I would appreciate the chance to see your code. Thanks!
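
For anyone else comparing cameras, a quick sanity check along those lines: the sketch below waits for one message on each expected topic and prints the image encoding, which is where drivers like the D435's and openni2 differ most. The topic names are typical RealSense defaults and only assumptions; substitute whatever your driver publishes:

```python
#!/usr/bin/env python
# Sanity-check sketch: confirm the camera publishes the expected
# image/depth/camera_info topics and report each image encoding.
import rospy
from sensor_msgs.msg import CameraInfo, Image

TOPICS = [
    ('/camera/color/image_raw', Image),
    ('/camera/depth/image_rect_raw', Image),
    ('/camera/color/camera_info', CameraInfo),
]

if __name__ == '__main__':
    rospy.init_node('topic_check', anonymous=True)
    for name, msg_type in TOPICS:
        msg = rospy.wait_for_message(name, msg_type, timeout=5.0)
        # Only Image messages carry an encoding field.
        encoding = getattr(msg, 'encoding', 'n/a')
        print('%s: OK (encoding: %s)' % (name, encoding))
```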

@corot

corot commented Oct 29, 2019

Hi @EmilyJrxx, unfortunately I lost my local changes. But as far as I can remember, the change was only about changing tendril keys, as I explained before. Good luck!

@EmilyJrxx

@corot Thanks for remembering this! I already managed to launch the capture process by changing tendril keys as you suggested. Details:

  • System: Ubuntu 16.04 LTS + ROS Kinetic
  • Camera: RealSense D435
  • Changes: 'mask_depth' -> 'depth'

But now I've run into another problem, as presented in #33; maybe it's about the system itself, I don't know.
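
In case it helps the next person applying the same rename, here is a small helper sketch that lists every Python file still referencing the old key; the search root is an assumption, so point it at your own ORK workspace:

```python
#!/usr/bin/env python
# List every Python file under the capture sources that still references
# the old 'mask_depth' tendril key, with file, line number, and content.
import os

ROOT = 'src/capture'  # assumed location; adjust to your workspace

for dirpath, _, filenames in os.walk(ROOT):
    for filename in filenames:
        if not filename.endswith('.py'):
            continue
        path = os.path.join(dirpath, filename)
        with open(path) as source:
            for lineno, line in enumerate(source, 1):
                if 'mask_depth' in line:
                    print('%s:%d: %s' % (path, lineno, line.strip()))
```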
