
Perpendicular Pattern in calibration and open cv Error #966

Open
upaltamentept opened this issue Jun 28, 2024 · 28 comments

@upaltamentept

Good afternoon.

I was using ATOM for a project involving the calibration of a camera (OAK-D) and a single LiDAR (Livox). After running the calibration command, it gave very high error values for the camera, and RViz looks like this:

[image: RViz view of the miscalibrated result]

After a suggestion from my professor, I changed the tfs in both the urdf.xacro and the set_initial_estimate. After confirming they were correct and calibrating again, it gave this error:

[image: calibration error output]

As an additional comment, when examining the labeling in the dataset_playback phase, the labels were always very wrong, as in the next photo, which forced me to correct the entire dataset collection by collection:

[image: example of wrong labeling]

I was wondering if I could get some help debugging these errors; I have spent a long time trying to figure out how to fix them. Thanks in advance!

@miguelriemoliveira (Member)

Hello @upaltamentept ,

Read the documentation and try to follow every step. It sounds to me like a problem with the labelling. Did you confirm that the labels are correct using the dataset playback?

@upaltamentept (Author)

I read and confirmed. I tried collecting the data once again, and noticed this warning:

[image: warning during data collection]

I'm using an OAK-D S2 stereo camera, but only the RGB part of the image. Could that be the origin of the problem?

@miguelriemoliveira (Member)

Please paste text instead of images. It's easier to search and read.

Yes, that model is unsupported, but I am not sure that is the problem.

Can you paste here a camera_info message?
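
For example, a single message can be captured in the terminal like this (assuming the topic name that appears later in this thread):

rostopic echo -n 1 /oakd/color/camera_info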

@upaltamentept (Author)

Sure, here's the camera_info message of the camera in text:
"
header:
stamp:
sec: 1716911888
nanosec: 293393600
frame_id: oakd_link_color_camera_optical_frame
height: 468
width: 624
distortion_model: rational_polynomial
d:

  • 9.699882507324219
  • -86.14419555664062
  • -0.00038348272209987044
  • 0.0012107902439311147
  • 251.8818359375
  • 9.480454444885254
  • -84.89832305908203
  • 248.2621307373047
    k:
  • 500.65032958984375
  • 0.0
  • 315.5879211425781
  • 0.0
  • 500.65032958984375
  • 233.938720703125
  • 0.0
  • 0.0
  • 1.0
    r:
  • 1.0
  • 0.0
  • 0.0
  • 0.0
  • 1.0
  • 0.0
  • 0.0
  • 0.0
  • 1.0
    p:
  • 500.65032958984375
  • 0.0
  • 315.5879211425781
  • 0.0
  • 0.0
  • 500.65032958984375
  • 233.938720703125
  • 0.0
  • 0.0
  • 0.0
  • 1.0
  • 0.0
    binning_x: 0
    binning_y: 0
    roi:
    x_offset: 0
    y_offset: 0
    height: 0
    width: 0
    do_rectify: false
    "

I was unsure whether I could switch from rational_polynomial to a pinhole model by republishing this topic with the distortion model and coefficients changed to the plumb_bob format, which carries less information. Thanks in advance.
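
For reference, the workaround I had in mind would look roughly like this (an untested sketch; the file name is just a placeholder): undistort each frame with the full set of coefficients, after which a plumb_bob camera_info with zero distortion would describe the image.

import numpy as np
import cv2

# Intrinsics and distortion taken from the camera_info above
K = np.array([[500.65032958984375, 0.0, 315.5879211425781],
              [0.0, 500.65032958984375, 233.938720703125],
              [0.0, 0.0, 1.0]])
# cv2.undistort accepts an 8-element distortion vector directly
D = np.array([9.699882507324219, -86.14419555664062, -0.00038348272209987044,
              0.0012107902439311147, 251.8818359375, 9.480454444885254,
              -84.89832305908203, 248.2621307373047])

image = cv2.imread('frame.png')           # placeholder: one raw RGB frame
undistorted = cv2.undistort(image, K, D)  # now a pinhole image, d = [0, 0, 0, 0, 0]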

@miguelriemoliveira (Member) commented Jul 13, 2024

To help you further I need you to

  1. paste the command you ran when having the error
  2. paste the text of the error and not the image

Thanks

@upaltamentept (Author)

Good evening. Here's the command and the result it gives in its entirety, to leave as little ambiguity as possible. If you need any other information, feel free to ask.

Command:

roslaunch segunda_estufa collect_data.launch output_folder:=/home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_8 overwrite:=true bag_rate:=0.5 bag_start:=0

Message:
'''

 logging to /home/pedro/.ros/log/ddfea06a-4154-11ef-bb67-c7372614b833/roslaunch-pedro-Legion-5-15ACH6H-8999.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://pedro-Legion-5-15ACH6H:45741/

SUMMARY
========

PARAMETERS
 * /robot_description: <?xml version="1....
 * /rosdistro: noetic
 * /rosversion: 1.16.0
 * /use_sim_time: True

NODES
  /
    automatic_data_collector (atom_calibration/automatic_data_collector)
    collect_data (atom_calibration/collect_data)
    hand_camera_labeler (atom_calibration/rgb_labeler)
    lidar_labeler (atom_calibration/lidar3d_labeler)
    robot_state_publisher (robot_state_publisher/robot_state_publisher)
    rosbag_play (rosbag/play)
    rviz (rviz/rviz)
    throttler_hand_camera (topic_tools/throttle)
    throttler_lidar (topic_tools/throttle)

auto-starting new master
process[master]: started with pid [9024]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to ddfea06a-4154-11ef-bb67-c7372614b833
process[rosout-1]: started with pid [9049]
started core service [/rosout]
process[throttler_hand_camera-2]: started with pid [9052]
process[throttler_lidar-3]: started with pid [9057]
process[robot_state_publisher-4]: started with pid [9059]
process[rosbag_play-5]: started with pid [9068]
process[rviz-6]: started with pid [9071]
[ WARN] [1720901823.731282500]: The root link base_link has an inertia specified in the URDF, but KDL does not support a root link with an inertia.  As a workaround, you can add an extra dummy link to your URDF.
process[collect_data-7]: started with pid [9080]
[ INFO] [1720901823.736090906]: Opening /home/pedro/ros/workspaces/atom/src/data/estufa_test/ros1/2.bag
process[hand_camera_labeler-8]: started with pid [9083]
process[lidar_labeler-9]: started with pid [9084]
process[automatic_data_collector-10]: started with pid [9085]

Waiting 2 seconds after advertising topics...Creating a lidar labeler for sensor lidar
[INFO] [1720901824.434437, 0.000000]: Waiting for first message on topic /livox/lidar_mod ...
Starting RGB labeler for sensor hand_camera
[INFO] [1720901824.471021, 0.000000]: Waiting for first message on topic /oakd/color/image ...


Warning: Dataset /home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_8 exists.
Moving it to temporary folder: /tmp/estufa_2_8_2024-07-13-21-17-04
This will be deleted after a system reboot! If you want to keep it, copy the folder to some other location.


Executing command: mv /home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_8 /tmp/estufa_2_8_2024-07-13-21-17-04
Command executed in 0.00839 secs.
Parsing description file /home/pedro/ros/workspaces/atom/src/Agrob4Simulation-ros-melodic-AgRob_simulator-launch-weta_v2/AgRob_simulator/launch/weta_v2/weta.urdf.xacro
Executing command: xacro /home/pedro/ros/workspaces/atom/src/Agrob4Simulation-ros-melodic-AgRob_simulator-launch-weta_v2/AgRob_simulator/launch/weta_v2/weta.urdf.xacro -o /tmp/description.urdf

Command executed in 0.07219 secs.
Initializing patterns ... [OK]
Sensors:
Number of sensors: 2
Configuring sensor hand_camera
Waiting for message /oakd/color/image ...
 done.

Hit space to toggle paused, or 's' to step.
Created interactive marker for point clouds.ration: 0.046134 / 50.926012               
... received!ag Time: 1717594981.500368   Duration: 0.171897 / 50.926012               
Topic /oakd/color/image has type Image
Waiting for camera_info message on topic /oakd/color/camera_info ...
/oakd/color/image
Waiting for camera_info message on topic /oakd/color/camera_info ... 
... received!ag Time: 1717594981.616403   Duration: 0.287932 / 50.926012               
Waiting for transformation from oakd_link_color_camera_optical_frame to base_link
... received!
Configuration for sensor hand_camera is complete.
Configuring sensor lidar
Waiting for message /livox/lidar_mod ...
received!
ATOM Warn: Unsupported distortion model: rational_polynomial
... received!ag Time: 1717594981.837969   Duration: 0.509498 / 50.926012               
Topic /livox/lidar_mod has type PointCloud2
Waiting for transformation from livox_frame to base_link
... received!
Configuration for sensor lidar is complete.
Waiting for transformation from BWL_Link to base_link (max 3 secs)926012               
Waiting for transformation from left_body_Link to base_link (max 3 secs)
Waiting for transformation from BWR_Link to base_link (max 3 secs)
Waiting for transformation from right_body_Link to base_link (max 3 secs)
Waiting for transformation from FWL_Link to base_link (max 3 secs)
Waiting for transformation from FWR_Link to base_link (max 3 secs)
Waiting for transformation from VLP-16_base_link to base_link (max 3 secs)
Waiting for transformation from visual_lid_Link to base_link (max 3 secs)
Waiting for transformation from VLP-16 to base_link (max 3 secs)
Waiting for transformation from camera_link to base_link (max 3 secs)
Waiting for transformation from oakd_link_color_camera_optical_frame to base_link (max 3 secs)
Waiting for transformation from sprayer_nozzle_Link to base_link (max 3 secs)
Waiting for transformation from visual_Lidar_Link to base_link (max 3 secs)
Waiting for transformation from visual_lidar_support_Link to base_link (max 3 secs)
Waiting for transformation from visual_Rim_BWL_Link to base_link (max 3 secs)
Waiting for transformation from visual_Rim_BWR_Link to base_link (max 3 secs)
Waiting for transformation from visual_Rim_FWL_Link to base_link (max 3 secs)
Waiting for transformation from visual_Rim_FWR_Link to base_link (max 3 secs)
Waiting for transformation from visual_antenna_bullet_Link to base_link (max 3 secs)
Waiting for transformation from visual_antenna_support_Link to base_link (max 3 secs)
Waiting for transformation from visual_sprayer_tower_Link to base_link (max 3 secs)
Waiting for transformation from livox_frame to base_link (max 3 secs)
Waiting for transformation from visual_gps_antenas_Link to base_link (max 3 secs)
Waiting for transformation from visual_green_button_Link to base_link (max 3 secs)
Waiting for transformation from visual_sprayer_front_body_Link to base_link (max 3 secs)
Waiting for transformation from visual_nozzle_black_Link to base_link (max 3 secs)
Waiting for transformation from visual_nozzle_parts_Link to base_link (max 3 secs)
Waiting for transformation from visual_sprayer_back_body_Link to base_link (max 3 secs)
[ WARN] [1720901827.147832264, 1717594981.983966623]: Interactive marker 'lidar' contains unnormalized quaternions. This warning will only be output once but may be true for others; enable DEBUG messages for ros.rviz.quaternions to see more details.
Waiting for first message on topic hand_camera/labels ... received!26012               
Waiting for first message on topic lidar/labels ... received!/ 50.926012               
Setting up subscriber for hand_camera label msgs on topic hand_camera/labels
Setting up subscriber for lidar label msgs on topic lidar/labels
Changes applied ...'''

@miguelriemoliveira (Member)

Wait, but there is no error in the paste above? It should abort with the error you mentioned in your first post, right?

Also, there is no need to paste all the output; search for the place where the error is and paste only a few lines around it.

@upaltamentept (Author)

No, it doesn't abort; it simply issues the following ATOM warning:


... received!ag Time: 1717594981.500824   Duration: 0.172353 / 50.926012               
Topic /oakd/color/image has type Image
Waiting for camera_info message on topic /oakd/color/camera_info ...
/oakd/color/image
Waiting for camera_info message on topic /oakd/color/camera_info ... 
received!]  Bag Time: 1717594981.616598   Duration: 0.288126 / 50.926012               
ATOM Warn: Unsupported distortion model: rational_polynomial
... received!
Waiting for transformation from oakd_link_color_camera_optical_frame to base_link
... received!
Configuration for sensor hand_camera is complete.
Configuring sensor lidar
Waiting for message /livox/lidar_mod ...

The collections seem to work fine for the camera, as shown in the picture in the first message of this issue.

I am able to run everything, even with this error. However, I still posted it because it might be the reason why the lidar labeling is not working properly and/or why the target appears sideways in the final step.

Just to make sure I'm doing this right: I'm not using the data labeler shown in the next picture, as my supervisor, André Aguiar, said there was no need since there is a "cloud" effect around the pattern. Is that statement correct?

[image: lidar labeler marker in RViz]

There is also an example of one dataset, similar to the others, where I could not run the calibration step after finishing the correction of the point cloud labels; it gives the following error message:

Traceback (most recent call last):
  File "/home/pedro/ros/workspaces/atom/devel/lib/atom_calibration/calibrate", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_calibration/scripts/calibrate", line 792, in <module>
    main()
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_calibration/scripts/calibrate", line 739, in main
    opt.startOptimization(optimization_options=options)
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_core/src/atom_core/optimization_utils.py", line 372, in startOptimization
    self.getNumberOfFunctionCallsPerIteration(optimization_options)
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_core/src/atom_core/optimization_utils.py", line 412, in getNumberOfFunctionCallsPerIteration
    _ = least_squares(self.internalObjectiveFunction, self.x, verbose=0, jac_sparsity=self.sparse_matrix,
  File "/home/pedro/.local/lib/python3.8/site-packages/scipy/optimize/_lsq/least_squares.py", line 830, in least_squares
    f0 = fun_wrapped(x0)
  File "/home/pedro/.local/lib/python3.8/site-packages/scipy/optimize/_lsq/least_squares.py", line 825, in fun_wrapped
    return np.atleast_1d(fun(x, *args, **kwargs))
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_core/src/atom_core/optimization_utils.py", line 284, in internalObjectiveFunction
    self.vis_function_handle(self.data_models)  # call visualization function
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_calibration/src/atom_calibration/calibration/visualization.py", line 720, in visualizationFunction
    cv2.line(image, (x, y), (x, y), color, int(6E-3 * diagonal))
cv2.error: OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'line'
> Overload resolution failed:
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type

I don't know if this error has any prior precedent.

Thank you in advance

@miguelriemoliveira (Member)

I am able to run everything, even with this error.

Is it an error or a warning?

However, I still posted it because it might be the reason why the lidar labeling is not working properly and/or why the target appears sideways in the final step.

It could be the reason...

I'm not using the data labeler shown in the next picture, as my supervisor, André Aguiar, said there was no need since there is a "cloud" effect around the pattern. Is that statement correct?

Not sure what this means. We do need to have the lidar data correctly annotated.

Can you run the dataset_playback and post an image of the annotations of the lidar?

There is also an example of one dataset, similar to the others, where I could not run the calibration step after finishing the correction of the point cloud labels; it gives the following error message:

This error occurs when you run which command?

@upaltamentept (Author)

Is it an error or a warning?

A warning, my mistake.

Not sure what this means. We do need to have the lidar data correctly annotated.

In this section, there is a placeable marker in RViz saying "lidar_labeler, place on pattern points", as in this image:
[image: lidar_labeler interactive marker in RViz]

What I said was that my supervisor said there was no need to use this marker, since the pattern is already being detected. I therefore asked whether that statement is true: do I really not need to use the marker?

Can you run the dataset_playback and post an image of the annotations of the lidar?

Sure, here's the dataset_playback of a single collection of one dataset (each dataset has around 15 collections), before and after manual labeling

Before:
[image: labels before manual correction]
After:
[image: labels after manual correction]

This error occurs when you run which command?

This error occurs when I execute the following command:
roslaunch segunda_estufa calibrate.launch

rosrun atom_calibration calibrate -json /home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_5/dataset_corrected.json -v -rv -si

@miguelriemoliveira (Member)

What I said was that my supervisor said there was no need to use this marker, since the pattern is already being detected. I therefore asked whether that statement is true: do I really not need to use the marker?

OK, now I get it. The idea is to move the interactive marker until the yellow spheres are on top of the points that belong to the pattern. André said there is nothing to do because the points are already in the pattern.

Sure, here's the dataset_playback of a single collection of one dataset (each dataset has around 15 collections), before and after manual labeling

The labelling seems fine.

This error occurs when I execute the following command

OK, got it. Can you please upload or send me the corrected dataset?
I will also need your calibration package to try to calibrate from my side.
Much better if you share the GitHub link for it.

I think the problem may be in the RGB camera model, as you suggested.
But that error is in the visualization, so I think the actual failure happens earlier.

@miguelriemoliveira (Member)

The problem occurs here:

for idx, point in enumerate(collection['labels'][pattern_key][sensor_key]['idxs_projected']):

And I think that, as you suggested, it has to do with the camera model, because these points are obtained in the projection here:

# Required by the visualization function to publish annotated images
idxs_projected = []
for idx in range(0, pts_in_image.shape[1]):
    idxs_projected.append({'x': pts_in_image[0][idx], 'y': pts_in_image[1][idx]})
collection['labels'][pattern_key][sensor_key]['idxs_projected'] = idxs_projected  # store projections

Which come from this projection function:

def projectToCamera(intrinsic_matrix, distortion, width, height, pts):
    """
    Projects a list of points to the camera defined transform, intrinsics and distortion
    :param intrinsic_matrix: 3x3 intrinsic camera matrix
    :param distortion: should be as follows: (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]])
    :param width: the image width
    :param height: the image height
    :param pts: a list of point coordinates (in the camera frame) with the following format: np array 4xn or 3xn
    :return: a list of pixel coordinates with the same length as pts
    """
    _, n_pts = pts.shape

    # Project the 3D points in the camera's frame to image pixels
    # From https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
    pixs = np.zeros((2, n_pts), dtype=float)
    k1, k2, p1, p2, k3 = distortion

    fx = intrinsic_matrix[0, 0]
    fy = intrinsic_matrix[1, 1]
    cx = intrinsic_matrix[0, 2]
    cy = intrinsic_matrix[1, 2]

    x = pts[0, :]
    y = pts[1, :]
    z = pts[2, :]

    dists = norm(pts[0:3, :], axis=0)  # compute distances from point to camera
    xl = np.divide(x, z)  # compute homogeneous coordinates
    yl = np.divide(y, z)  # compute homogeneous coordinates
    r2 = xl ** 2 + yl ** 2  # r squared (used multiple times below)
    xll = xl * (1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3) + 2 * p1 * xl * yl + p2 * (r2 + 2 * xl ** 2)
    yll = yl * (1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3) + p1 * (r2 + 2 * yl ** 2) + 2 * p2 * xl * yl
    pixs[0, :] = fx * xll + cx
    pixs[1, :] = fy * yll + cy

    # Compute mask of valid projections
    valid_z = z > 0
    valid_xpix = np.logical_and(pixs[0, :] >= 0, pixs[0, :] < width)
    valid_ypix = np.logical_and(pixs[1, :] >= 0, pixs[1, :] < height)
    valid_pixs = np.logical_and(valid_z, np.logical_and(valid_xpix, valid_ypix))
    return pixs, valid_pixs, dists

So yes, I think it has something to do with the model.
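
To make the failure concrete: cv2.line only accepts integer point coordinates, so if the projection hands it floats (or NaNs, which a mismatched distortion model can produce), the overload resolution fails exactly like in your traceback. A minimal repro sketch with made-up values:

import numpy as np
import cv2

image = np.zeros((468, 624, 3), dtype=np.uint8)

x, y = np.float64(100.0), np.float64(50.0)  # floats, as projectToCamera returns
try:
    cv2.line(image, (x, y), (x, y), (0, 0, 255), 3)
except cv2.error as e:
    print(e)  # Overload resolution failed: Can't parse 'pt1' ...

# Casting fixes the type issue for finite values; a NaN (a sign the
# distortion model diverged) would still fail, since int(round(nan)) raises.
cv2.line(image, (int(round(x)), int(round(y))),
         (int(round(x)), int(round(y))), (0, 0, 255), 3)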

@miguelriemoliveira (Member) commented Jul 14, 2024

Why do you have this strange model for the OAK camera? Is it a wide-angle lens?
Did you run the OpenCV intrinsic calibration? That should produce a pinhole model...

In my datasets I typically have plumb_bob:

[image: camera_info with plumb_bob]

@miguelriemoliveira (Member) commented Jul 14, 2024

Ok, so I think I have good news.

The distortion_model is defined in the camera info messages

https://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/CameraInfo.html

and there are only two types of distortion.

Here's how in opencv we use the plumb_bob distortion model.

And here's how to implement the rational_polynomial distortion model.

So this function only tackles plumb_bob, but I think I can change it to accommodate the rational_polynomial model as well.

I will try to make the adaptation in this branch

https://github.com/lardemua/atom/tree/rational_polynomial_distortion_model

@miguelriemoliveira (Member) commented Jul 14, 2024

Sure, here's the camera_info message of the camera in text:

This format is not really helpful. Look at my camera_info message below:

header: 
  seq: 5
  stamp: 
    secs: 1910
    nsecs: 188000000
  frame_id: "rgb_left_optical_frame"
height: 800
width: 1200
distortion_model: "plumb_bob"
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K: [1039.2275459834757, 0.0, 600.5, 0.0, 1039.2275459834757, 400.5, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [1039.2275459834757, 0.0, 600.5, -0.0, 0.0, 1039.2275459834757, 400.5, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False

Looks better, right? The easier it is for me to read your posts, the better I can help you.
Can you please post your camera info in this format?

@miguelriemoliveira (Member)

Well, I am not really sure how to advance, because I cannot find anywhere in the documentation how the distortion parameters are organized in the distortion vector.

Here:

https://docs.ros.org/en/api/sensor_msgs/html/msg/CameraInfo.html

we get this comment:

# The distortion parameters, size depending on the distortion model.
# For "plumb_bob", the 5 parameters are: (k1, k2, t1, t2, k3).
float64[] D

but for the rational_polynomial I do not know how the parameters are arranged.

@miguelriemoliveira (Member)

ChatGPT says:

[image: ChatGPT answer on the parameter ordering]
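
For the record, with that ordering the only change to the projection is that the radial term gains a denominator. A sketch of what the adaptation looks like (not necessarily the exact committed code):

import numpy as np

def rationalRadialFactor(r2, distortion):
    # distortion ordered as (k1, k2, p1, p2, k3, k4, k5, k6)
    k1, k2, _p1, _p2, k3, k4, k5, k6 = distortion
    num = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    den = 1 + k4 * r2 + k5 * r2 ** 2 + k6 * r2 ** 3
    return num / den

# In projectToCamera, the plumb_bob radial factor (1 + k1*r2 + k2*r2**2 + k3*r2**3)
# would be replaced by this one; the tangential terms keep the same form:
#   f = rationalRadialFactor(r2, distortion)
#   xll = xl * f + 2 * p1 * xl * yl + p2 * (r2 + 2 * xl ** 2)
#   yll = yl * f + p1 * (r2 + 2 * yl ** 2) + 2 * p2 * xl * yl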

@miguelriemoliveira (Member) commented Jul 15, 2024

Hi there. Just committed a change to support the rational_polynomial model.

69831fb

Can you try it and see if it works? Don't forget to use the correct branch.
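
Assuming you are working from a clone of the main repository, switching is just:

git checkout rational_polynomial_distortion_model

followed by rebuilding your workspace.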

@upaltamentept (Author)

Hello, many updates, so I'll try to reply to all of them.

Can you please post your camera info in this format?

header:
  stamp:
    sec: 1716911888
    nanosec: 293393600
  frame_id: oakd_link_color_camera_optical_frame
height: 468
width: 624
distortion_model: rational_polynomial
d: [9.699882507324219, -86.14419555664062, -0.00038348272209987044, 0.0012107902439311147, 251.8818359375, 9.480454444885254, -84.89832305908203, 248.2621307373047]
k: [500.65032958984375, 0.0, 315.5879211425781, 0.0, 500.65032958984375, 233.938720703125, 0.0, 0.0, 1.0]
r: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
p: [500.65032958984375, 0.0, 315.5879211425781, 0.0, 0.0, 500.65032958984375, 233.938720703125, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: false
---

OK, got it. Can you please upload or send me the corrected dataset?

Sure, here is the dataset and the calibration package

"estufa2_9" is the dataset and "segunda_estufa" is the calibration package

About the model conversion, what changes did you make? I was experimenting with other calibration algorithms and they all have the same problem with the distortion model, which led me to consider changing the model within the message itself. For that, I would need to transform those values into plumb_bob, which seems to amount to just omitting the last values? I'm not sure about that, so I'm asking for confirmation.
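
In the meantime, I sketched a quick comparison of the radial factors using my own coefficients, to see whether truncating is safe (just a sanity check on my side, not part of any package):

import numpy as np

# My coefficients, ordered (k1, k2, p1, p2, k3, k4, k5, k6)
d = [9.699882507324219, -86.14419555664062, -0.00038348272209987044,
     0.0012107902439311147, 251.8818359375, 9.480454444885254,
     -84.89832305908203, 248.2621307373047]
k1, k2, p1, p2, k3, k4, k5, k6 = d

r2 = np.linspace(0.0, 0.5, 6)  # sample squared radii in normalized coordinates
num = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
den = 1 + k4 * r2 + k5 * r2 ** 2 + k6 * r2 ** 3

print('rational :', num / den)  # full model
print('truncated:', num)        # plumb_bob keeping only the first five values

With these values the numerator and denominator nearly cancel, so the rational factor stays close to 1 while the truncated polynomial grows past 15, which suggests that simply dropping the last three coefficients is not a valid conversion.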

Hi there. Just commited a change to support the rational_polynomial model.

Will download and try it, and will get back to you.

@upaltamentept (Author)

OK, I created a new workspace to host the new branch, as it was conflicting with the previous one.

When I was running this command:
roslaunch terceira_estufa collect_data.launch output_folder:=/home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_10 overwrite:=true bag_rate:=0.5 bag_start:=0

It gave me this error repeatedly:

[ERROR] [1721045433.947114, 1717594981.641234]: bad callback: <bound method RGBLabeler.labelData of <__main__.RGBLabeler object at 0x7f3238331af0>>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/pedro/ros/workspaces/rational/src/atom/atom_calibration/src/atom_calibration/collect/rgb_labeler", line 154, in labelData
    detector.drawKeypoints(image_gui, result, K=self.K, D=self.D, pattern_name=detector_key, color=color,
AttributeError: 'RGBLabeler' object has no attribute 'D'

@miguelriemoliveira (Member)

When I was running this command:
roslaunch terceira_estufa collect_data.launch output_folder:=/home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_10 overwrite:=true bag_rate:=0.5 bag_start:=0

It gave me this error repeatedly

But why are you collecting data again? You should run the same calibrate command as before.

@upaltamentept (Author)

But why are you collecting data again? You should run the same calibrate command as before.

I was collecting data again because that step was the first to flag the distortion model error. I will now proceed with the calibration only. Still, it gives that error.

@miguelriemoliveira (Member)

Still, it gives that error

The same error? And is there a print just before?

What does this instruction print

print('idx = ' + str(idx) + ' x = ' + str(x) + ' y = ' + str(y))

@upaltamentept (Author)

I meant this error; I understand that I was not supposed to collect data again, but the error still happens in case someone wants to try this branch from scratch.

[ERROR] [1721045433.947114, 1717594981.641234]: bad callback: <bound method RGBLabeler.labelData of <__main__.RGBLabeler object at 0x7f3238331af0>>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/pedro/ros/workspaces/rational/src/atom/atom_calibration/src/atom_calibration/collect/rgb_labeler", line 154, in labelData
    detector.drawKeypoints(image_gui, result, K=self.K, D=self.D, pattern_name=detector_key, color=color,
AttributeError: 'RGBLabeler' object has no attribute 'D'

The comment I had made did not go through, I don't know why. Here's what it said when running the calibration command:
rosrun atom_calibration calibrate -json /home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_9/dataset_corrected.json -v -rv -si -uic

Starting calibration
_______________________________________________________

Deleted collections: ['001']: at least one detection by a camera should be present.
After filtering, will use 10 collections: ['000', '002', '003', '004', '005', '006', '007', '008', '009', '010']
Loaded dataset containing 2 sensors and 10 collections.
Selected collection key is 000
No joints are being optimized

Initializing optimizer...
Configuring visualization ... 
INFO - 2024-07-15 14:10:02,704 - visualization - Reading description file /home/pedro/ros/workspaces/atom/src/Agrob4Simulation-ros-melodic-AgRob_simulator-launch-weta_v2/AgRob_simulator/launch/weta_v2/weta.urdf.xacro...
immovable links are: ['VLP-16_base_link', 'VLP-16', 'base_link', 'left_body_Link', 'BWL_Link', 'FWL_Link', 'right_body_Link', 'BWR_Link', 'FWR_Link', 'oakd_link_color_camera_optical_frame', 'sprayer_nozzle_Link', 'visual_Rim_FWR_Link', 'visual_Rim_BWL_Link', 'visual_Rim_FWL_Link', 'visual_Rim_BWR_Link', 'visual_sprayer_front_body_Link', 'visual_lidar_support_Link', 'visual_Lidar_Link', 'visual_sprayer_back_body_Link', 'visual_sprayer_tower_Link', 'visual_green_button_Link', 'visual_antenna_support_Link', 'visual_antenna_bullet_Link', 'camera_link', 'livox_frame', 'visual_lid_Link', 'visual_gps_antenas_Link', 'visual_nozzle_black_Link', 'visual_nozzle_parts_Link']
movable links are: []
Robot has all joints fixed. Will render only collection 000
Creating sensor transformation parameters ...
Creating additional_tfs parameters ...
Creating residuals ... 
Computing sparse matrix ... 

ATOM Error: Rational_polynomial distortion model requires 8 distortion parameters

I don't understand why it is not recognizing the 8 distortion parameters.

@miguelriemoliveira (Member)

I meant this error; I understand that I was not supposed to collect data again, but the error still happens in case someone wants to try this branch from scratch.

I agree we have to solve this as well, but it's another issue and I think we should focus on the other one first.

I don't understand why it is not recognizing the 8 distortion parameters.

Your camera info has a D with a list of 8 params:

https://github.com/upaltamentept/Dataset-and-package/blob/a8be36024fe9777fb01f2bcdd12f383a7fe8f961/estufa_2_9/dataset_corrected.json#L44848-L44858

I have to try to run this on my side, but that will take a while longer. I will try to do it by the end of the week.

@miguelriemoliveira (Member)

Just found the problem with the 8 parameters. Can you try again please?

@upaltamentept (Author)

Just found the problem with the 8 parameters. Can you try again please?

It gives this error this time:

Traceback (most recent call last):
  File "/home/pedro/ros/workspaces/rational/devel/lib/atom_calibration/calibrate", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/pedro/ros/workspaces/rational/src/atom/atom_calibration/scripts/calibrate", line 792, in <module>
    main()
  File "/home/pedro/ros/workspaces/rational/src/atom/atom_calibration/scripts/calibrate", line 712, in main
    residuals = objectiveFunction(opt.data_models)
  File "/home/pedro/ros/workspaces/rational/src/atom/atom_calibration/src/atom_calibration/calibration/objective_function.py", line 445, in objectiveFunction
    D = np.asarray(
ValueError: cannot reshape array of size 8 into shape (5,1)
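
It looks like objective_function.py is still forcing D into the plumb_bob shape. My guess (hypothetical, I have not checked the actual variable names) is that the reshape needs to follow the native length of the vector:

# Hypothetical fix: keep D at its native length, 5 for plumb_bob or 8 for
# rational_polynomial, instead of hard-coding the (5, 1) shape.
D = np.asarray(distortion_values, dtype=float).reshape(-1, 1)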

@upaltamentept (Author)

Not an update, but I was trying to run a calibration again and caught the following error:

Initializing optimization ...
Traceback (most recent call last):
  File "/home/pedro/ros/workspaces/atom/devel/lib/atom_calibration/calibrate", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_calibration/scripts/calibrate", line 792, in <module>
    main()
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_calibration/scripts/calibrate", line 739, in main
    opt.startOptimization(optimization_options=options)
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_core/src/atom_core/optimization_utils.py", line 372, in startOptimization
    self.getNumberOfFunctionCallsPerIteration(optimization_options)
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_core/src/atom_core/optimization_utils.py", line 412, in getNumberOfFunctionCallsPerIteration
    _ = least_squares(self.internalObjectiveFunction, self.x, verbose=0, jac_sparsity=self.sparse_matrix,
  File "/home/pedro/.local/lib/python3.8/site-packages/scipy/optimize/_lsq/least_squares.py", line 830, in least_squares
    f0 = fun_wrapped(x0)
  File "/home/pedro/.local/lib/python3.8/site-packages/scipy/optimize/_lsq/least_squares.py", line 825, in fun_wrapped
    return np.atleast_1d(fun(x, *args, **kwargs))
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_core/src/atom_core/optimization_utils.py", line 284, in internalObjectiveFunction
    self.vis_function_handle(self.data_models)  # call visualization function
  File "/home/pedro/ros/workspaces/atom/src/atom/atom_calibration/src/atom_calibration/calibration/visualization.py", line 819, in visualizationFunction
    cv2.line(image, (x, y), (x, y), color, int(6E-3 * diagonal))
cv2.error: OpenCV(4.6.0) :-1: error: (-5:Bad argument) in function 'line'
> Overload resolution failed:
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type
>  - Can't parse 'pt1'. Sequence item with index 0 has a wrong type

I saw this error happen in issue #850; I'm not sure if there's a fix. It's somewhat critical, as it happens at the final step and crashes.

The command I'm running is the following:

rosrun atom_calibration calibrate -json /home/pedro/ros/workspaces/atom/src/data/datasets/estufa_2_9/dataset_corrected.json -v -rv -si -uic
