This package provides a method for automatically calibrating the extrinsic parameters between point cloud sensors (especially Livox LiDAR) and a camera.
(The following calibration process assumes Ubuntu 18.04 64-bit and ROS Melodic.)
Install the Livox SDK and livox_ros_driver. You can skip this step if they are already installed.
# install Livox_SDK
git clone https://github.com/Livox-SDK/Livox-SDK.git
cd Livox-SDK
sudo ./third_party/apr/apr_build.sh
cd build && cmake ..
make
sudo make install
# install livox_ros_driver
cd ..
git clone https://github.com/Livox-SDK/livox_ros_driver.git ws_livox/src
cd ws_livox
catkin_make
# install calibration package
mkdir catkin_ws
cd catkin_ws
mkdir src
cd src
# copy the folder livox_camera_lidar_calibration into src, or clone the repository:
# git clone https://github.com/GAfieldCN/automatic-camera-pointcloud-calibration.git
cd ..
catkin_make
source devel/setup.bash
This project includes the following nodes:
- feature_extraction_camera - obtain the corners from the photos
- feature_extraction_pointcloud - obtain the corners from the LiDAR point cloud
- feature_manual_camera - obtain the photo corners manually
- feature_manual_lidar - obtain the LiDAR corners manually
- GetExtrinsic - optimization to get the extrinsic parameters
- colorLidar - colorized visualization of the LiDAR point cloud
The corresponding launch files can be found in the launch folder if you need to modify them.
Use MATLAB, OpenCV, Kalibr, or a similar tool to obtain the camera intrinsic parameters.
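For reference, below is a minimal OpenCV sketch of checkerboard-based intrinsic calibration (the checkerboard used here is separate from the hole-board described below for the extrinsic step; the pattern size, square size, and image paths are placeholders). MATLAB's Camera Calibrator or Kalibr will produce equivalent results.

```python
# Minimal intrinsic calibration sketch with OpenCV (checkerboard pattern).
# Pattern dimensions and image paths are placeholders; adapt to your setup.
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner corners per row/column of the checkerboard
square = 0.025     # square edge length in meters

# Object points of one board view: (0,0,0), (1,0,0), ... scaled by square size
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("data/photo/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 camera matrix, dist holds (k1, k2, p1, p2, k3)
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("camera matrix:\n", K)
print("distortion (k1 k2 p1 p2 k3):", dist.ravel())
```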
The recommended calibration board is an acrylic or foam plate with 4 square holes placed symmetrically. You can take the following as an example:
Make sure the boards keep some distance from background objects, and place the LiDAR roughly 3-5 m from the calibration boards. Each board provides 16 corner points (4 holes x 4 corners each), but it is recommended to use 2-3 boards arranged uniformly in the space at different positions.
- During steps 3.2 and 3.3, make sure the scene is unchanged and the sensor platform remains still.
Take a Livox LiDAR as an example (the authors used a Livox Mid-70).
Connect the LiDAR and verify the point cloud:
roslaunch livox_ros_driver livox_lidar_rviz.launch
Shut down the terminal, then start the driver with the Livox custom message format:
roslaunch livox_ros_driver livox_lidar_msg.launch
Record a rosbag for about 20 seconds (a very large bag may take a while to process):
rosbag record -o lidar /livox/lidar
Connect and turn on the camera, then take the photos.
After collecting all the data, put the photos in the data/photo folder and the LiDAR rosbag in the data/lidar folder.
First, save the intrinsic parameters and distortion correction parameters in data/parameters/intrinsic.txt. There are five distortion correction parameters, in order: k1, k2 (radial distortion), p1, p2 (tangential distortion), and k3, which is normally set to 0 by default. An example is given in intrinsic.txt.
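As a quick sanity check that the saved values are in the right order (a standalone sketch, not part of the package; the numbers and the image path are placeholders), you can undistort one of the photos with OpenCV and inspect the result:

```python
# Sketch: verify the intrinsic/distortion values by undistorting a photo.
# Replace the numbers with the values stored in data/parameters/intrinsic.txt.
import cv2
import numpy as np

K = np.array([[1385.6, 0.0,    960.5],
              [0.0,    1385.3, 540.2],
              [0.0,    0.0,    1.0]])               # fx, fy, cx, cy (placeholders)
dist = np.array([0.12, -0.25, 0.001, 0.0005, 0.0])  # k1 k2 p1 p2 k3

img = cv2.imread("data/photo/0.jpg")                # placeholder path
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("undistorted_check.jpg", undistorted)   # inspect this image
```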
Configure the launch file feature_extraction_camera.launch: make sure the plane size matches the number of boards, and adjust binary_threshold if the binarization does not work well. The min_area and max_area parameters bound the contour area to be extracted.
roslaunch automatic-camera-pointcloud-calibration feature_extraction_camera.launch
The program will open the corresponding photo and show the rectified image. Press any key to enter the debug process, where you can remove falsely detected contours. For example, to remove contours 5, 6, 7 and 8, enter
5 6 7 8 0
If the contours are correct, type 0 and press Enter to finish the process; the program will then refine the corner features to floating-point precision and save them to the default output path data/feature/corner_photo.txt. Finally, press any key to end the whole process.
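For intuition, binary_threshold and the contour-area bounds correspond to the standard OpenCV threshold-and-filter-contours idea. The following standalone sketch (an illustration only, not the node's actual code; the image path and values are placeholders) can help you pick sensible values for a new scene:

```python
# Sketch of the thresholding/contour filtering that binary_threshold,
# min_area and max_area control (illustration only, not the node's code).
import cv2

binary_threshold = 120             # placeholder; adjust if holes are missed
min_area, max_area = 500, 50000    # placeholder contour-area bounds in pixels

gray = cv2.imread("data/photo/0.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, binary_threshold, 255, cv2.THRESH_BINARY)
# OpenCV 4 return signature: (contours, hierarchy)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

kept = [c for c in contours if min_area < cv2.contourArea(c) < max_area]
print(f"{len(kept)} candidate contours within the area bounds")

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, kept, -1, (0, 0, 255), 2)
cv2.imwrite("contour_check.jpg", vis)
```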
Configure the launch file feature_extraction_pointcloud.launch: make sure the plane size matches the number of boards; board_length is the side length of the board (m) and square_length is the side length of the holes (m). Adjust the FOV to fit your LiDAR; you can decrease the value to speed up the process as long as the boards remain visible.
Then, run the command
roslaunch automatic-camera-pointcloud-calibration feature_extraction_pointcloud.launch
The detected corners will be saved in the default output path data/feature/corner_pointcloud.txt.
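As a rough intuition for the FOV parameter (one plausible interpretation, not the node's actual code): restricting the horizontal angle shrinks the region of the cloud that has to be searched, which is why a smaller value speeds up extraction. A numpy sketch of such a crop, with placeholder file name and angle:

```python
# Sketch: cropping a point cloud to a horizontal FOV before plane/corner
# extraction (illustration of why a smaller FOV speeds things up).
import numpy as np

points = np.loadtxt("cloud.txt")   # placeholder Nx3 array of x, y, z
fov_deg = 38.0                     # keep points within +/- fov_deg of the x axis

yaw = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
mask = np.abs(yaw) < fov_deg
cropped = points[mask]
print(f"kept {mask.sum()} of {len(points)} points inside the {fov_deg} deg FOV")
```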
The extrinsic calculation node reads the calibration data from data/corner_photo.txt and data/corner_lidar.txt to compute the extrinsic parameters. The program stops reading data and starts calculating when it encounters a blank line.
Before the calculation, check the calibration data to make sure that each line in the two files corresponds to the same corner and that both files contain the same number of entries.
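A small script like the following (a sketch; it assumes one corner per line and that a blank line ends the data, as described above) can catch mismatched counts before you start the optimization:

```python
# Sketch: verify that the photo and LiDAR corner files contain the same
# number of corners before running GetExtrinsic.
def count_corners(path):
    n = 0
    with open(path) as f:
        for line in f:
            if not line.strip():   # blank line ends the data, as the node expects
                break
            n += 1
    return n

photo = count_corners("data/corner_photo.txt")
lidar = count_corners("data/corner_lidar.txt")
print(f"photo corners: {photo}, lidar corners: {lidar}")
assert photo == lidar, "corner counts differ - fix the files before optimizing"
```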
Configure the launch file GetExtrinsic.launch, and run
roslaunch automatic-camera-pointcloud-calibration GetExtrinsic.launch
The cost of each iteration will be printed on the terminal, and the result will be saved to data/parameters/extrinsic.txt as a homogeneous transformation matrix.
Note: if the calibration result seems incorrect (e.g., the colorization below does not work well) but no error occurs during the optimization, the optimizer has likely fallen into a local minimum. In that case, add some points manually with the following nodes.
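Once extrinsic.txt is written, one quick way to judge the result (a standalone sketch, independent of the package; the paths and values are placeholders) is to project some LiDAR points into a photo using the homogeneous matrix together with the intrinsics and check whether they land on the right objects:

```python
# Sketch: project LiDAR points into a photo using the estimated extrinsics.
# T is assumed to be the 4x4 homogeneous LiDAR-to-camera matrix from
# data/parameters/extrinsic.txt; K and dist come from intrinsic.txt.
import cv2
import numpy as np

T = np.eye(4)                        # replace with the saved homogeneous matrix
K = np.array([[1385.6, 0.0, 960.5],
              [0.0, 1385.3, 540.2],
              [0.0, 0.0, 1.0]])      # placeholder intrinsics
dist = np.zeros(5)                   # k1 k2 p1 p2 k3

points_lidar = np.loadtxt("cloud.txt")            # placeholder Nx3 LiDAR points
points_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
points_cam = (T @ points_h.T).T[:, :3]            # transform into the camera frame
points_cam = points_cam[points_cam[:, 2] > 0]     # keep points in front of the camera

pixels, _ = cv2.projectPoints(points_cam, np.zeros(3), np.zeros(3), K, dist)

img = cv2.imread("data/photo/0.jpg")              # placeholder photo
for u, v in pixels.reshape(-1, 2).astype(int):
    if 0 <= u < img.shape[1] and 0 <= v < img.shape[0]:
        cv2.circle(img, (u, v), 1, (0, 255, 0), -1)
cv2.imwrite("projection_check.jpg", img)          # inspect the overlay
```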
Configure the launch file feature_manual_camera.launch, and run
roslaunch automatic-camera-pointcloud-calibration feature_manual_camera.launch
The result will be saved in the data folder.
Then configure the launch file feature_manual_lidar.launch, make sure selected_number equals the number of photo corners you have just picked, and run
roslaunch automatic-camera-pointcloud-calibration feature_manual_lidar.launch
Use the Publish Point tool in RViz to click the corresponding points; the process terminates when the total number of points reaches selected_number.
Configure the launch file color_lidar_display.launch, and run
roslaunch automatic-camera-pointcloud-calibration colorLidar.launch