How to get the quantitative results as in the paper? #6
Since the bag files already store the ground truth camera trajectory captured from the Vicon system, you can run with the options
That's good to hear. Replaying the bag files in parallel should give you similar results. However, due to the non-determinism of having the producer (bag file) and consumer (ROS node) in separate processes, the results may vary slightly. Also, we do not process all 30 frames per second, so the results may change with faster or slower processing times.
The true trajectory of the object path on the conveyor belt is recorded separately with Vicon markers. The raw bag files do not contain the true Vicon tracking of the objects, since the markers would interfere with the visual tracking. Can you check if you can reproduce the results with these? As with the camera trajectory, you can use the exported pose files for the evaluation.
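For example, the check could look roughly like this (the file locations and the index of the exported object pose file are assumptions; adjust them to your export directory):
# hypothetical object-track evaluation, analogous to the camera evaluation further below;
# the true object trajectory file and the "poses-1.txt" export index are placeholders
./evaluate_ate.py --max_difference=20000000 \
    /path/to/true_object_trajectory.txt \
    /tmp/eval/rotation/kpinit/poses-1.txt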
Thanks for your prompt reply. Following your advice, I have estimated the ATE and RPE for the two estimation bag files, and the ATE of object tracking for the segmentation bag files. For deterministic behaviour, I read from the bag files frame-by-frame using the “-l” argument. I didn't get exactly the same results, but the trend is clear: in terms of performance, MMF(S+D) > MMF(S) > CF.
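Concretely, a frame-by-frame run looked roughly like this (the bag path and export directory are placeholders, and the exact combination of options may differ):
# deterministic processing: read the recorded sequence directly via "-l"
# instead of subscribing to live ROS topics (paths are placeholders)
MultiMotionFusion \
  -l /path/to/rotation.bag \
  -dim 640x480 \
  -run -q -em -ep \
  -exportdir /tmp/eval/rotation/kpinit \
  -model [workspace]/install/super_point_inference/share/weights/SuperPointNet.pt \
  -init kp -icp_refine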
Thanks in advance for your help! It's really appreciated.
I was using an "Intel Core i9-9900KF" and a "Nvidia GeForce RTX 2080 SUPER", and did not optimise for this specific hardware. You should get similar performance results across different GPUs when you run in the deterministic frame-by-frame mode.
Just skimming over the scripts and the parameters provided: for reference, the parameters I used for the camera pose estimation experiments were:
# CoFusion baseline (ICP)
MultiMotionFusion \
-ros -dim 640x480 \
colour:=/rgb/image_raw \
depth:=/depth_to_rgb/image_raw/filtered \
camera_info:=/rgb/camera_info \
_image_transport:=compressed \
-run -q \
-em -ep \
-exportdir /tmp/eval/$logname/icp
# MMF with keypoint tracking only (no refinement)
MultiMotionFusion \
-ros -dim 640x480 \
colour:=/rgb/image_raw \
depth:=/depth_to_rgb/image_raw/filtered \
camera_info:=/rgb/camera_info \
_image_transport:=compressed \
-run -q \
-em -ep \
-exportdir /tmp/eval/$logname/norefine \
-model [workspace]/install/super_point_inference/share/weights/SuperPointNet.pt \
-init kp
# MMF with keypoint tracking and dense refinement
MultiMotionFusion \
-ros -dim 640x480 \
colour:=/rgb/image_raw \
depth:=/depth_to_rgb/image_raw/filtered \
camera_info:=/rgb/camera_info \
_image_transport:=compressed \
-run -q \
-em -ep \
-exportdir /tmp/eval/$logname/kpinit \
-model [workspace]/install/super_point_inference/share/weights/SuperPointNet.pt \
-init kp -icp_refine
And for the evaluation, I used something like:
ATE=`${tool_path}/evaluate_ate.py --max_difference=20000000 "${eval_path}/${SEQ}/true/poses-0.txt" "${eval_path}/${SEQ}/${METH}"`
RPE=`${tool_path}/evaluate_rpe.py "${eval_path}/${SEQ}/true/poses-0.txt" "${eval_path}/${SEQ}/${METH}"`
echo "ATE RMSE: ${ATE} m"
echo "RPE RMSE: ${RPE}"
I think this was done because the original true trajectory was centred such that it starts at the position where the object is spawned by the segmentation method. If you plot the trajectories from both folders in a sequence and centre their starting points, I think you should get the same trajectory.
They are without redetection. That means that the tracking would create a new model with a new ID once the method loses track.
All the evaluations in the tables were done in the live / real-time mode, i.e. with the bag file played back in parallel to the running ROS node.
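For reference, that live setup looks roughly like this (the bag name is a placeholder):
# terminal 1: play back the recorded sequence in real time
rosbag play rotation.bag
# terminal 2: run MultiMotionFusion as a ROS node, with the topic remappings and options listed above
MultiMotionFusion -ros -dim 640x480 ...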
As far as I can remember, I did not specifically record the "redetection and pick & place" experiment where the robot was pick-and-placing the object from the conveyor belt to the table (Section IV.D and Figure 9).
Hi,
Thanks again for publishing your work!
I am trying to reproduce the camera and object tracking results in Table I and Table II of the RA-L paper. For camera tracking of the manipulation and rotation sequences, I used "-init tf -init_frame camera_true" as arguments and took the logged poses-0.txt as the ground truth camera poses. I then evaluated the ATE using the evaluate_ate.py script provided with the TUM RGB-D dataset and got results comparable to those in the paper.
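Concretely, the ground-truth logging run looked roughly like this (the export directory and the exact remaining options are assumptions):
# hypothetical example: initialise from the Vicon "camera_true" tf frame and
# export the resulting camera trajectory as poses-0.txt
MultiMotionFusion \
  -ros -dim 640x480 \
  colour:=/rgb/image_raw \
  depth:=/depth_to_rgb/image_raw/filtered \
  camera_info:=/rgb/camera_info \
  _image_transport:=compressed \
  -run -q -ep \
  -exportdir /tmp/eval/rotation/true \
  -init tf -init_frame camera_true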
But for object tracking, it seems that I cannot use -init tf to get the ground truth poses. How should I evaluate the object tracking trajectory quantitatively? Could you provide some more details on the steps for obtaining the results in the tables? Thanks!