This repository has been archived by the owner on Jul 26, 2024. It is now read-only.

Add confidence values for body tracking joint data #87

Open
RoseFlunder opened this issue Oct 14, 2019 · 12 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@RoseFlunder (Contributor) commented Oct 14, 2019

With the new Body Tracking SDK 0.9.4, each joint has a confidence value which indicates whether the joint is out of range, not observed, or observed:

/** k4abt_joint_confidence_level_t
 *
 * \remarks
 * This enumeration specifies the joint confidence level.
 */
typedef enum
{
    K4ABT_JOINT_CONFIDENCE_NONE = 0,          /**< The joint is out of range (too far from depth camera) */
    K4ABT_JOINT_CONFIDENCE_LOW = 1,           /**< The joint is not observed (likely due to occlusion), predicted joint pose */
    K4ABT_JOINT_CONFIDENCE_MEDIUM = 2,        /**< Medium confidence in joint pose. Current SDK will only provide joints up to this confidence level */
    K4ABT_JOINT_CONFIDENCE_HIGH = 3,          /**< High confidence in joint pose. Placeholder for future SDK */
    K4ABT_JOINT_CONFIDENCE_LEVELS_COUNT = 4,  /**< The total number of confidence levels. */
} k4abt_joint_confidence_level_t;
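
For reference, a minimal sketch of how the per-joint confidence level could be read from a body frame, assuming the 0.9.4 C API (the tracker setup is omitted and the helper name print_joint_confidences is made up):

#include <cstdio>
#include <k4abt.h>

// Walk every tracked body in a frame and read the per-joint confidence.
void print_joint_confidences(k4abt_frame_t body_frame)
{
    size_t num_bodies = k4abt_frame_get_num_bodies(body_frame);
    for (size_t i = 0; i < num_bodies; i++)
    {
        k4abt_skeleton_t skeleton;
        if (k4abt_frame_get_body_skeleton(body_frame, i, &skeleton) != K4A_RESULT_SUCCEEDED)
        {
            continue;
        }
        uint32_t body_id = k4abt_frame_get_body_id(body_frame, i);

        for (int j = 0; j < (int)K4ABT_JOINT_COUNT; j++)
        {
            const k4abt_joint_t& joint = skeleton.joints[j];
            // joint.position is what we already publish as the marker position;
            // joint.confidence_level is the new value we want to expose.
            printf("body %u joint %d confidence %d\n",
                   body_id, j, (int)joint.confidence_level);
        }
    }
}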

We should publish this information together with the position of the joints.
Currently we use a marker array which can be displayed easily in RViz:
http://docs.ros.org/melodic/api/visualization_msgs/html/msg/MarkerArray.html

Each Marker has an ID, which is a combination of body ID & joint ID, and a position:
http://docs.ros.org/melodic/api/visualization_msgs/html/msg/Marker.html

The question is:
Where do we put in a confidence value?
I guess it would be nice to stick with ROS standard messages, but I don't see a field that would fit this use case.
We could put it in the "text" field because it's unused for non-text markers and therefore free in our case.
But this will show a warning in RViz: "Non empty marker text is ignored"
So it's not so nice.
Any other ideas?

@RoseFlunder added the enhancement (New feature or request) and triage needed (The Issue still needs to be reviewed by the Azure Kinect ROS Driver Team) labels on Oct 14, 2019
@bearpaw commented Oct 15, 2019

@RoseFlunder (Contributor, Author)

Hmm, that would mean that "CONFIDENCE_NONE" joints would be fully transparent in RViz (alpha = 0).
All other levels would be fully opaque because their alpha value would be >= 1.
Is this better than using the text field or even more confusing?
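
Just to make that mapping explicit, a hypothetical sketch (the helper name set_marker_alpha is made up; confidence_level is the per-joint value from the SDK):

#include <algorithm>
#include <visualization_msgs/Marker.h>

// confidence_level is the per-joint value from the SDK (0..3).
void set_marker_alpha(visualization_msgs::Marker& marker, int confidence_level)
{
    // NONE (0) becomes fully transparent; every other level clamps to fully opaque.
    marker.color.a = std::min(1.0f, static_cast<float>(confidence_level));
}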

@RoseFlunder (Contributor, Author) commented Oct 15, 2019

I guess I could extend the ID, which is already a combination of body id and joint id, with the confidence level as well.

For example an ID of 1021 would mean:
body id = 1
joint id = 02
confidence = 1

ID of 12223
body id = 12
joint id = 22
confidence level = 3

The level is only 0 to 3, so one decimal digit is enough.
The joint id gets two digits as before and the rest is for the body id.

The publisher calculates it this way:
marker_id = body_id * 1000 + joint_id * 10 + confidence_level

Clients could calculate it like this (integer division):
body_id = marker_id / 1000
joint_id = (marker_id % 1000) / 10
confidence = marker_id % 10

Are there any flaws with this?
It would still be kind of human-readable compared to bit-shifting things.
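
In code the scheme would look roughly like this (the helper names encode_marker_id / decode_marker_id are made up, just to illustrate the arithmetic):

// Body id in the thousands and above, joint id in the tens and hundreds,
// confidence level in the last digit.
int encode_marker_id(int body_id, int joint_id, int confidence_level)
{
    return body_id * 1000 + joint_id * 10 + confidence_level;
}

// Clients decode with integer division and modulo.
void decode_marker_id(int marker_id, int& body_id, int& joint_id, int& confidence_level)
{
    body_id = marker_id / 1000;
    joint_id = (marker_id % 1000) / 10;
    confidence_level = marker_id % 10;
}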

EDIT:
Never mind, marker_id can't have the confidence in it. It must only contain body_id & joint_id.
If we added confidence, the old markers for the same body id & joint id would not be replaced automatically in RViz when the confidence level changes.
Or we would need to send a delete Marker message in between two messages with filled data, but that's not pretty I think.

@d-walsh (Contributor) commented Oct 17, 2019

Perhaps it would be good to publish two different topics: one topic for visualization and a separate topic for other nodes that interpret the body tracking data.

  1. Visualization = MarkerArray
     • Can be viewed in RViz
     • Different colors for the confidence values (e.g. green, red, yellow)
  2. Other nodes = Custom message
     • Everything clearly defined
     • Enum for confidence values

@RoseFlunder (Contributor, Author)

That would be the optimal way if we want to introduce custom messages.
@skalldri, what's your opinion about standard vs. custom messages?

About 1: We already use different colors depending on the body ID, like the simple viewer from the SDK. For example, body 1 = green, body 2 = red, etc.

@ooeygui (Member) commented Oct 17, 2019

I think having both options would be a good idea. If you need confidence values, subscribe to the new message; otherwise use the standard.

If we go down that path, please do it in two different pull requests.

The first pull request would introduce the custom message infrastructure (including moving the current codebase down a level and adding a new node); the second would add the custom message and its implementation.

Make sense?

@d-walsh (Contributor) commented Oct 17, 2019

About 1: We already use different colors depending on the body ID like the simple viewer from SDK. For example body 1 = green, body 2 = red etc.

You could also use different namespaces in the Marker message (the "ns" field) so that a subset of the markers can be enabled/disabled in RViz: http://docs.ros.org/melodic/api/visualization_msgs/html/msg/Marker.html

Either separating by person or by confidence value?
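
For example, a rough sketch (the helper name and the namespace strings are made up):

#include <cstdint>
#include <string>
#include <visualization_msgs/Marker.h>

// body_id and confidence_level come from the tracking result.
void set_marker_namespace(visualization_msgs::Marker& marker, uint32_t body_id, int confidence_level)
{
    // Group markers per person so each body can be toggled in RViz ...
    marker.ns = "body_" + std::to_string(body_id);
    // ... or group them per confidence level instead:
    marker.ns = "confidence_" + std::to_string(confidence_level);
}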

@AswinkarthikeyenAK

Hi guys,
Is it possible to obtain the pose and perform joint tracking in RViz using the Azure Kinect camera?
Is there any ROS package that can do this? I am looking for something like the openpose package.

Thanks

@ooeygui added the help wanted (Extra attention is needed) label and removed the triage needed (The Issue still needs to be reviewed by the Azure Kinect ROS Driver Team) label on Aug 3, 2020
@ooeygui (Member) commented Aug 3, 2020

@AswinkarthikeyenAK Yes, the data is available in the SDK, but it hasn't been plumbed through the ROS node. We had a discussion about how it could be done. I added the help wanted tag as this work hasn't made it to the top of the team's priority queue.

@AswinkarthikeyenAK

@ooeygui,
I noticed the body tracking SDK shows the TF frames of the joints in the k4abt_simple_3d_viewer, but the ROS driver publishes the body joint information as marker arrays. Is there a way to visualize the TF frames in RViz as shown in the k4abt_simple_3d_viewer?

Thanks

@ravijo commented Aug 3, 2022

Is it possible to obtain the pose and perform joint tracking in RViz using the Azure Kinect camera?
Is there any ROS package that can do this? I am looking for something like the openpose package.

It is an old discussion, yet still open, so I will quickly provide a reference for future readers. Please check out the following URL: https://github.com/ravijo/ros_openpose

@ooeygui (Member) commented Aug 3, 2022

Thanks for the ping.

There is a ROS REP for Human-Robot Interaction which includes a definition of how people and skeletons are represented: ros-infrastructure/rep#338.

We intend to converge Azure Kinect body tracking onto this REP when it is accepted (and I'm reviewing it to see what changes need to be made so we can align with the Kinect body tracking).

I am also going to state that new features like this will be for ROS2 only, not ROS1.
