How to get extrinsics? #6
Hi @haradatm, thanks for your interest in our dataset! You are right that the sensor extrinsics have not been provided yet. We are working on it and will update you in the next few days on the status!
Hello, thanks for your amazing dataset. However, during my exploration of the dataset, I found that I need the camera poses too. I would like to know whether you have made any progress on this. Also, I have tried to use the extrinsic parameters of the cameras in config/sensors.yaml to align the point clouds generated from the depth images of different views, but there seems to be a small error in the translation part. For example, I get the extrinsic translation of "left_45" as [0.145, 0.1, 1.6] rather than [0.356, -0.356, 1.6]. I don't know why this happens. Maybe you know the reason? Thanks for your dataset again!
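(Editor's note: as a concrete illustration of the alignment procedure described in the comment above, here is a minimal sketch. It is not the dataset's API; it assumes a pinhole camera with intrinsics fx, fy, cx, cy and a camera-to-vehicle extrinsic given as a rotation matrix R and translation t. The function name is hypothetical.)

```python
import numpy as np

def depth_to_vehicle_points(depth, fx, fy, cx, cy, R, t):
    """Back-project a depth image into a point cloud and move it to the vehicle frame.

    depth : (H, W) array of metric depths
    R, t  : camera-to-vehicle rotation (3x3) and translation (3,)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection into the camera frame.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # p_vehicle = R @ p_cam + t, vectorized over all points.
    return pts_cam @ R.T + t
```

Running this per camera with each camera's own (R, t) should place all point clouds in a common vehicle frame; a residual offset like the one reported above would then point at the translation entries themselves.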
Hi, may I ask how you obtained the inconsistent extrinsic parameters?
Hey @GANWANSHUI @haradatm @yjsx, thanks for the questions! We have recently fixed the label files and uploaded the LiDAR point cloud data as well as the LiDAR sensor extrinsics. For the coordinate system details, you are welcome to find more information in the annotations section of Get Started. For the problem @yjsx mentioned, could you please recheck it with the new label files? Would you mind telling us the exact sequence number if the problem persists?
Hi @suniique, thanks for the good information. Actually, what I want are the camera poses. Is there any update on those?
Hi @haradatm, I also want to obtain the camera pose relative to the vehicle frame. I tried to transform the rotation angle into a rotation matrix about the Z axis with:

```python
import math
import numpy as np

# theta is the yaw angle (rotation about the Z axis) read from the extrinsics
rot_z = np.array([
    [math.cos(theta), -math.sin(theta), 0],
    [math.sin(theta),  math.cos(theta), 0],
    [0,                0,               1],
])
```

But the result seems incorrect. It would be great if the authors could provide the camera pose in rotation matrix format.
Hey @GANWANSHUI, thanks for the question! From the screenshot you posted, I suspect that the initial pose of your camera is incorrect. All sensors have an initial pose at the origin, heading toward positive x in the world coordinate system. You could get the rotation matrix via scipy:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

rot = np.array(frame.extrinsics.rotation)
rot_matrix = R.from_euler("xyz", rot, degrees=False).as_matrix()
```

To make things easier, I have also created a script for camera pose visualization at shift_dev/vis/sensor_pose.py for reference. FYI, here is an expected result from the script:

[screenshot: expected camera pose visualization]

@haradatm @yjsx, you can also check this. Let me know if there is still something unclear! 😄
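(Editor's note: building on the snippet above, here is a hedged sketch of composing a full 4x4 camera-to-vehicle pose from the extrinsics. It assumes `frame.extrinsics.location` holds the translation in meters, by analogy with the `rotation` field used above; this field name is not confirmed in the thread.)

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def extrinsics_to_pose(frame):
    """Compose a 4x4 homogeneous camera-to-vehicle pose matrix.

    Assumes frame.extrinsics.rotation are XYZ Euler angles in radians and
    frame.extrinsics.location is the translation in meters (assumption).
    """
    pose = np.eye(4)
    rot = np.array(frame.extrinsics.rotation)
    pose[:3, :3] = R.from_euler("xyz", rot, degrees=False).as_matrix()
    pose[:3, 3] = np.array(frame.extrinsics.location)
    return pose
```

With such a matrix per camera, a point in the camera frame maps to the vehicle frame via `pose @ [x, y, z, 1]`, which is what the pose-visualization script referenced above would need as well.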
Thank you for the great dataset; it looks suitable for training DROID-SLAM. How can I get the extrinsic parameters or camera poses? I believe they are not included in {det_2d,det_3d,det_insseg_2d}.json.