Pose estimation draws lines connecting some pairs of the 17 keypoints to form a human stick figure, and rings are drawn on some of the keypoints. It seems that keypoints identified with higher confidence get a ring, while lower-confidence ones do not, e.g. when the body part is obstructed or on the far side of the body.
How can I access whether a ring is drawn (true/false), or the confidence level of each keypoint, in app_callback() of pose_estimation.py? I want to find frames where the confidence is too low to continue processing the coordinates.
“It seems that the keypoints identified with higher confidence get a ring, the ones with lower confidence don’t get a ring”: The demo app draws the same circle for all landmarks; there is no per-landmark, confidence-based drawing logic.
Thanks for the links, I’ve read them. Predictably, I do need your further assistance, please, with the link between the Python app and the C++ post-process. Specifically, how can the Python app get the 17 confidence values from the C++ post-process function?
If I read the .cpp code right, the confidence value (score(i, 0)) is “pushed back” as the last argument. Does this mean the confidence value becomes a field in keypoints? Is the confidence value then accessible in the Python app_callback()? If so, how do I do that in Python?
for (uint i = 0; i < score.shape(0); i++)
{
    if (score(i, 0) > joint_threshold)
    {
        keypoints.push_back(KeyPt({coordinates(i, 0) / network_dims[0], coordinates(i, 1) / network_dims[1], score(i, 0)}));
    }
}
C++ side: The landmarks array is filled with 3 columns per keypoint:
landmarks(i, 0) = scaled_keypoints[i].xs; // normalized x
landmarks(i, 1) = scaled_keypoints[i].ys; // normalized y
landmarks(i, 2) = scaled_keypoints[i].joints_scores; // confidence
This is then attached to the detection via hailo_common::add_landmarks_to_detection(), which creates a HailoLandmarks object containing HailoPoint entries - each with x, y, and confidence.
Python side: Each HailoPoint exposes a .confidence() method. In your callback you’re already getting the points:
landmarks = detection.get_objects_typed(hailo.HAILO_LANDMARKS)
if landmarks:
    points = landmarks[0].get_points()
    for i, point in enumerate(points):
        x = point.x()
        y = point.y()
        conf = point.confidence()  # <-- this is the joint score from the C++ side
So to get all 17 confidence values:
keypoint_confidences = [point.confidence() for point in points]
The existing code in pose_estimation.py:83-88 already calls point.x() and point.y() — just add point.confidence() alongside them. For example, you could filter out low-confidence keypoints or print them:
for idx, point in enumerate(points):
    kpt_conf = point.confidence()
    if kpt_conf > 0.5:
        x = int((point.x() * bbox.width() + bbox.xmin()) * width)
        y = int((point.y() * bbox.height() + bbox.ymin()) * height)
        print(f"Keypoint {idx}: ({x}, {y}) conf={kpt_conf:.2f}")
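Since the original goal was to find frames where confidence is too low to continue processing, here is a minimal sketch of a frame-level gate you could call from the callback. It assumes you have already built a list of per-keypoint confidences (e.g. `[point.confidence() for point in points]`); the function name and the threshold values are illustrative, not part of the Hailo API, and should be tuned for your model.

```python
# Sketch: decide whether a frame has enough reliable keypoints to keep
# processing. `confidences` is the list built from point.confidence().
# MIN_CONF and MIN_VALID are illustrative thresholds, not library values.

MIN_CONF = 0.3   # per-keypoint confidence floor
MIN_VALID = 13   # require at least 13 of the 17 keypoints above the floor

def frame_is_usable(confidences, min_conf=MIN_CONF, min_valid=MIN_VALID):
    """Return True when enough keypoints are confident to trust the pose."""
    valid = sum(1 for c in confidences if c > min_conf)
    return valid >= min_valid

# Example with synthetic scores: 14 strong keypoints, 3 occluded ones.
scores = [0.9] * 14 + [0.05] * 3
print(frame_is_usable(scores))  # True: 14 keypoints pass the floor
```

In the callback you would then skip coordinate processing (or return early) whenever `frame_is_usable(...)` is False, rather than filtering individual keypoints.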