The occupancy grid mapping algorithm below loops through all grid cells $i$ and updates those that were measured. The function `inverse_sensor_model` implements the inverse measurement model $p(m_i|z_{1:t},x_{1:t})$ in its log-odds form: if the measured range is smaller than the maximum laser range, the cells under the laser beam are marked as free and the last one as occupied; if the measurement is infinite, it is truncated to the maximum laser range and all cells under the beam are marked as free. To accomplish this, an implementation of the [Bresenham line drawing algorithm](https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm) is used.
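As an illustration, here is a minimal Python sketch of such a log-odds update along a single beam. The constants `L_FREE`, `L_OCC`, and `Z_MAX`, the helper names `bresenham` and `update_beam`, and the cell-indexing convention are illustrative assumptions, not the package's actual implementation.

```python
import numpy as np

L_FREE = -0.4   # log-odds decrement for cells traversed by the beam (assumed value)
L_OCC = 0.85    # log-odds increment for the cell where the beam ends (assumed value)
Z_MAX = 3.5     # maximum laser range in meters (assumed value)


def bresenham(x0, y0, x1, y1):
    """Return the integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy


def update_beam(log_odds, robot_cell, end_cell, measured_range):
    """Apply the inverse sensor model along one laser beam.

    `end_cell` is the grid cell at min(measured_range, Z_MAX) along the beam;
    an infinite reading is truncated to Z_MAX and marks every cell as free.
    """
    cells = bresenham(*robot_cell, *end_cell)
    hit = np.isfinite(measured_range) and measured_range < Z_MAX
    for cx, cy in cells[:-1]:
        log_odds[cy, cx] += L_FREE                 # cells under the beam: free
    ex, ey = cells[-1]
    log_odds[ey, ex] += L_OCC if hit else L_FREE   # last cell: occupied only on a real hit
```

The occupancy probability of a cell can then be recovered as $p = 1 - \frac{1}{1 + e^{\ell}}$, where $\ell$ is the accumulated log-odds value.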
Obstacle/object detection on the occupancy grid map is done by the node `turtlebot3_object_detector`, which subscribes to the topic `/custom_map`. For each message received, the map is segmented with a threshold so that only cells with an occupancy probability greater than the threshold are set to 1. Then, [OpenCV's connected components](https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#ga107a78bf7cd25dec05fb4dfc5c9e765f) approach is used to determine the occupied regions. Below you can see what the result of a connected components algorithm on a binary image looks like.
If a region's area is between a minimum and a maximum value, the node publishes a [BoundingBox2DArray](http://docs.ros.org/en/api/vision_msgs/html/msg/BoundingBox2DArray.html) and an [Image](http://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html) with everything rendered, to visualize the results.
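The sketch below shows roughly how this thresholding and area-filtering step could look with OpenCV. The threshold `OCC_THRESHOLD`, the area limits, and the function name `detect_objects` are placeholder assumptions rather than the node's actual values.

```python
import numpy as np
import cv2

OCC_THRESHOLD = 0.65          # occupancy probability above which a cell counts as occupied (assumed)
MIN_AREA, MAX_AREA = 4, 400   # area limits in grid cells (placeholder values)


def detect_objects(prob_grid):
    """Segment the occupancy grid and return (x, y, w, h) boxes for every
    connected blob whose area lies between MIN_AREA and MAX_AREA."""
    binary = (prob_grid > OCC_THRESHOLD).astype(np.uint8)
    # Default 8-connectivity; stats columns are [left, top, width, height, area].
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    boxes = []
    for label in range(1, num_labels):          # label 0 is the background
        x, y, w, h, area = stats[label]
        if MIN_AREA <= area <= MAX_AREA:
            boxes.append((int(x), int(y), int(w), int(h)))
    # In the node, these boxes would be packed into a BoundingBox2DArray and
    # drawn onto an Image message for visualization.
    return boxes
```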
To determine when the exploration should stop, the concept of frontiers (the intersection between unknown and free regions) in the occupancy grid map is used. The node `turtlebot3_mission_controller` subscribes to the topics `/custom_map` and `/detections`. It also provides an action server: through the goal sent to it, a user can specify the number of remaining frontier points, and the node will enable `turtlebot3_explorer` to explore the environment until the occupancy grid reaches the specified number of frontier points. It then returns the number of remaining frontier points and the bounding boxes from `turtlebot3_object_detector`.
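A minimal sketch of how frontier points could be counted on a `nav_msgs/OccupancyGrid`-style array is shown below; the free-space threshold and the 4-connectivity choice are assumptions, not necessarily what `turtlebot3_mission_controller` does.

```python
import numpy as np

FREE_MAX = 25    # cells with occupancy value below this are considered free (assumed)
UNKNOWN = -1     # value used for unexplored cells in nav_msgs/OccupancyGrid


def count_frontier_cells(grid):
    """Count free cells that touch at least one unknown cell (4-connectivity).

    `grid` is a 2D int array in the nav_msgs/OccupancyGrid convention:
    -1 for unknown, 0-100 for the occupancy probability in percent.
    """
    free = (grid >= 0) & (grid <= FREE_MAX)
    unknown = grid == UNKNOWN

    # Shift the unknown mask in the four cardinal directions and OR the results,
    # so touches_unknown[i, j] is True if any 4-neighbour of (i, j) is unknown.
    touches_unknown = np.zeros_like(unknown)
    touches_unknown[1:, :] |= unknown[:-1, :]
    touches_unknown[:-1, :] |= unknown[1:, :]
    touches_unknown[:, 1:] |= unknown[:, :-1]
    touches_unknown[:, :-1] |= unknown[:, 1:]

    return int(np.count_nonzero(free & touches_unknown))
```

Exploration stops once this count drops to or below the number of remaining frontier points requested in the action goal.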
An action client node `turtlebot3_mission_client` is also available, responsible for sending the exploration task to the action server in `turtlebot3_mission_controller` and saving the results to a file.
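For reference, a bare-bones `rclpy` action client along these lines might look as follows. The action type `Explore`, its package `turtlebot3_custom_interfaces`, and all goal/result field names are hypothetical placeholders, since the actual action interface is not shown in this section.

```python
import json

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node

# NOTE: `Explore` and its fields are hypothetical placeholders for the
# package's actual action definition.
from turtlebot3_custom_interfaces.action import Explore


class MissionClient(Node):
    def __init__(self):
        super().__init__('turtlebot3_mission_client')
        self._client = ActionClient(self, Explore, 'explore')

    def run(self, remaining_frontier_points, output_path):
        self._client.wait_for_server()
        goal = Explore.Goal()
        goal.remaining_frontier_points = remaining_frontier_points

        # Send the goal, wait for acceptance, then wait for the result.
        goal_future = self._client.send_goal_async(goal)
        rclpy.spin_until_future_complete(self, goal_future)
        result_future = goal_future.result().get_result_async()
        rclpy.spin_until_future_complete(self, result_future)
        result = result_future.result().result

        # Save a summary of the mission result to a file.
        with open(output_path, 'w') as f:
            json.dump({'frontier_points': result.remaining_frontier_points,
                       'num_detections': len(result.detections.boxes)}, f)


def main():
    rclpy.init()
    node = MissionClient()
    node.run(remaining_frontier_points=20, output_path='results.json')
    rclpy.shutdown()
```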
## Results
Several worlds were created to test our solution. For the first tested world, we obtained the following results.
The ROS2 nodes proposed here solved the problem with interesting results, as long as the obstacles are not connected to each other or too close to the walls. Furthermore, the solution is not limited to the `turtlebot3_world` world; any closed environment should work fine.
## Building
You can build all packages needed to run this assignment with docker: