Commit c55bf59: Merge pull request #3 from autonomous-robots/develop

Release 0.1.0. Signed-off-by: Luiz Carlos Cosmi Filho <[email protected]>

2 parents: 3accce7 + df3b7b7

12 files changed: +176 −5 lines

README.md (+174 −1)
# Assignment 1

The goal of this assignment is to explore perception concepts in a robotic system in order to accomplish a task. Given a mobile robot with a set of sensors in a partially known environment, objects/obstacles must be detected and counted. In addition, the robot must start from a random position and must not rely on any teleoperated commands.

## Tools

<img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/python/python-original.svg" height='40' width='40'/> <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/docker/docker-original.svg" height='40' width='40'/> <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/ubuntu/ubuntu-plain.svg" height='40' width='40'/> <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/opencv/opencv-original-wordmark.svg" height='40' width='40'/> <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/numpy/numpy-original-wordmark.svg" height='40' width='40'/> <img src="https://cdn.jsdelivr.net/gh/devicons/devicon/icons/git/git-original-wordmark.svg" height='40' width='40'/>
## Methodology

The task to be solved here has been divided into several sub-tasks, which together make up the complete solution to the assignment.

### Random start

To initialize the robot in a random position, the [`worlds_gazebo`](https://github.com/autonomous-robots/worlds_gazebo) repository was built. At launch, one of the worlds is chosen at random, as is the position of the robot. The [python-sdformat](https://pypi.org/project/python-sdformat/0.1.0/) library is used to read the [SDFormat XML](http://sdformat.org/) file. Thus, the positions of the cylinders are collected, and a check ensures that the robot never starts in a place already occupied by an obstacle.
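The collision check described above can be sketched as rejection sampling over the cylinder positions read from the SDFormat file. The function name, radii, clearance, and world bounds below are illustrative assumptions, not the actual implementation:

```python
import random

def sample_start_pose(cylinders, robot_radius=0.15, clearance=0.2,
                      bounds=(-1.8, 1.8)):
    """Draw random (x, y) poses until one clears every cylinder.

    `cylinders` is a list of (x, y, radius) tuples collected from the
    SDFormat file; `bounds` is the square extent of the world (assumed).
    """
    while True:
        x = random.uniform(*bounds)
        y = random.uniform(*bounds)
        # Accept the pose only if it is outside every cylinder plus a margin.
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (r + robot_radius + clearance) ** 2
               for cx, cy, r in cylinders):
            return x, y
```

A pose drawn this way can then be passed to the Gazebo launch as the robot's spawn position.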

### Environment exploration

Exploration is done with a simple controller, `turtlebot3_explorer`, based on the [`turtlebot3_examples`](https://github.com/ROBOTIS-GIT/turtlebot3/tree/humble-devel) package. This ROS2 node subscribes to the laser sensor messages and publishes velocity commands. If an obstacle is detected in front of the robot, the robot rotates until it finds a free path again. The node also provides a service to enable or disable this behavior.
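The reactive behavior can be sketched as a pure function from one laser scan to one velocity command. The frontal arc width, clearance threshold, and speeds below are illustrative assumptions, not the node's actual parameters:

```python
def explore_step(ranges, front_arc=30, min_clear=0.4,
                 forward_speed=0.2, turn_speed=0.5):
    """Map a 360-sample laser scan (one range per degree, index 0 straight
    ahead) to (linear, angular) velocities: rotate in place while anything
    in the frontal arc is closer than `min_clear`, otherwise drive forward."""
    front = ranges[:front_arc] + ranges[-front_arc:]
    if min(front) < min_clear:
        return 0.0, turn_speed   # obstacle ahead: rotate until clear
    return forward_speed, 0.0    # path clear: drive forward
```

In the real node this function would be called from the laser subscription callback and its result published as a `geometry_msgs/Twist`.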

<p align="center">
<img src="etc/images/turtlebot3_explorer.png" alt="turtlebot3_explorer" width="400"/>
</p>
### Occupancy grid

The entire solution proposed here for counting obstacles is based on an occupancy grid map. To generate this map, a ROS2 node, `turtlebot3_occupancy_grid`, was developed; it subscribes to the laser sensor messages and updates the occupancy grid for each message received. Initially, every cell of the map has a probability of exactly 50%. As laser messages arrive, occupied cells rise above 50% and free cells fall below 50%. This probabilistic occupancy grid is published at a fixed rate on the `/custom_map` topic.

<p align="center">
<img src="etc/images/turtlebot3_occupancy_grid.png" alt="turtlebot3_occupancy_grid" width="400"/>
</p>
The occupancy grid mapping algorithm uses the log-odds representation of occupancy:

$$l_{t,i} = \log\left(\frac{p(m_i|z_{1:t},x_{1:t})}{1 - p(m_i|z_{1:t},x_{1:t})}\right)$$

where:

- $m_i$ : grid cell $i$

- $z_{1:t}$ : collection of measurements up to time $t$

- $x_{1:t}$ : collection of the robot's poses up to time $t$

Using this representation, we avoid numerical instabilities for probabilities near zero or one, and the update is cheaper to compute. The probabilities are easily recovered from the log-odds ratio:

$$p(m_i|z_{1:t},x_{1:t}) = 1 - \frac{1}{1 + \exp(l_{t,i})}$$
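A minimal sketch of this log-odds bookkeeping for one laser beam; the increment constants are illustrative assumptions, not the node's actual values:

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.85  # assumed log-odds increments per observation

def update_cells(logodds, free_cells, hit_cell=None):
    """Apply one beam: lower the log-odds of the traversed (free) cells,
    raise the log-odds of the cell where the beam ended (if any)."""
    for ij in free_cells:
        logodds[ij] += L_FREE
    if hit_cell is not None:
        logodds[hit_cell] += L_OCC
    return logodds

def to_probability(logodds):
    """Recover p(m_i | z, x) from the log-odds grid."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))
```

Untouched cells keep log-odds 0, which maps back to the 50% prior.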

The occupancy grid mapping algorithm below loops over all grid cells $i$ and updates those that were measured. The function `inverse_sensor_model` implements the inverse measurement model $p(m_i|z_{1:t},x_{1:t})$ in its log-odds form: if the measured range is smaller than the maximum laser range, the cells on the map that lie under the laser beam are marked free and the last one occupied; if the measurement is infinite, it is truncated to the maximum laser range and all cells under the beam are marked free. To find the cells under a beam, an implementation of the [Bresenham line drawing algorithm](https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm) is used.
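A self-contained integer-grid sketch of the Bresenham traversal (written here independently of the node's actual implementation), returning the cells between the robot and the beam endpoint:

```python
def bresenham(x0, y0, x1, y1):
    """Return the integer grid cells on the line from (x0, y0) to
    (x1, y1), inclusive, using the classic error-accumulation scheme."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells
```

All cells except the last would then receive the free-space update, and the last cell the occupied update.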

<p align="center">
<img src="etc/images/occupancy_grid_algo.png" alt="occupancy_grid" width="400"/>
</p>
### Detect obstacles/objects

Obstacle/object detection is performed on the occupancy grid map by the node `turtlebot3_object_detector`, which subscribes to the `/custom_map` topic. For each message received, the map is segmented with a threshold, so that only cells whose occupancy probability exceeds the threshold are set to 1. Then, [OpenCV's connected components](https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#ga107a78bf7cd25dec05fb4dfc5c9e765f) approach is used to determine the occupied regions. Below you can see what the result of a connected components algorithm on a binary image looks like.

<p align="center">
<img src="https://scipy-lectures.org/_images/sphx_glr_plot_labels_001.png" alt="connected_components" width="400"/>
</p>
If a region's area is between a minimum and a maximum value, the node publishes a [BoundingBox2DArray](http://docs.ros.org/en/api/vision_msgs/html/msg/BoundingBox2DArray.html) and an [Image](http://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html) with everything rendered, to make the results easy to visualize.
<p align="center">
<img src="etc/images/turtlebot3_object_detector.png" alt="turtlebot3_object_detector" width="400"/>
</p>
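The detection step can be sketched as follows. To keep the example dependency-free, this is a pure-Python stand-in for what `cv2.connectedComponentsWithStats` computes on the thresholded map; the threshold and area limits are illustrative assumptions:

```python
from collections import deque

import numpy as np

def detect_obstacles(prob_map, thresh=0.65, min_area=2, max_area=50):
    """Threshold an occupancy probability grid, label 4-connected blobs,
    and return (x, y, w, h) bounding boxes in cell units."""
    binary = np.asarray(prob_map) > thresh
    seen = np.zeros(binary.shape, dtype=bool)
    rows, cols = binary.shape
    boxes = []
    for si in range(rows):
        for sj in range(cols):
            if not binary[si, sj] or seen[si, sj]:
                continue
            # Flood-fill one connected component starting at (si, sj).
            queue, cells = deque([(si, sj)]), []
            seen[si, sj] = True
            while queue:
                i, j = queue.popleft()
                cells.append((i, j))
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if (0 <= ni < rows and 0 <= nj < cols
                            and binary[ni, nj] and not seen[ni, nj]):
                        seen[ni, nj] = True
                        queue.append((ni, nj))
            # Keep only components whose area is plausible for an obstacle.
            if min_area <= len(cells) <= max_area:
                ys, xs = zip(*cells)
                boxes.append((min(xs), min(ys),
                              max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes
```

The area filter is what discards walls (too large) and single-cell noise (too small).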
### Exploration end

To determine when the exploration should stop, the concept of frontiers (the boundary between unknown and free regions of the occupancy grid map) is used. The node `turtlebot3_mission_controller` subscribes to the `/custom_map` and `/detections` topics. It also provides an action server: through the goal, a user specifies the number of remaining frontier points, and the node enables `turtlebot3_explorer` to explore the environment until the occupancy grid reaches that number of frontier points. It then returns the number of remaining frontier points and the bounding boxes from `turtlebot3_object_detector`.
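The stopping criterion can be sketched as counting frontier cells, i.e. free cells with at least one unknown neighbor. The probability bands below are illustrative assumptions, not the node's actual thresholds:

```python
import numpy as np

def count_frontier_cells(prob_map, free_below=0.4, unknown_band=(0.4, 0.6)):
    """Count free cells (p < free_below) that are 4-adjacent to at least
    one unknown cell (p inside unknown_band, i.e. near the 50% prior)."""
    p = np.asarray(prob_map)
    free = p < free_below
    unknown = (p >= unknown_band[0]) & (p <= unknown_band[1])
    rows, cols = p.shape
    count = 0
    for i in range(rows):
        for j in range(cols):
            if not free[i, j]:
                continue
            if any(0 <= ni < rows and 0 <= nj < cols and unknown[ni, nj]
                   for ni, nj in ((i - 1, j), (i + 1, j),
                                  (i, j - 1), (i, j + 1))):
                count += 1
    return count
```

Exploration can be disabled once this count drops to the goal value sent to the action server.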

<p align="center">
<img src="etc/images/turtlebot3_mission_controller.png" alt="turtlebot3_mission_controller" width="400"/>
</p>
An action client node, `turtlebot3_mission_client`, is also available; it is responsible for sending the exploration task to the action server in `turtlebot3_mission_controller` and saving the results to a file.
## Results

Several worlds were created to test our solution. For the first world tested, we obtained the following results.

<p align="center">
<img src="etc/images/result-1.png" alt="result-1"/>
</p>
$$ result = \begin{Bmatrix} x & y & w & h \\\ -1.09 & -1.12 & 0.36 & 0.33 \\\ -0.01 & -1.12 & 0.33& 0.33 \\\ 1.1 & -1.12 & 0.33 & 0.33 \\\ -1.09 & -0.01 & 0.36 & 0.3 \\\ -0.01 & -0.01 & 0.33 & 0.33 \\\ 1.1 & -0.01 & 0.33 & 0.3 \\\ -1.12& 1.07 & 0.33 & 0.3 \\\ -0.01 & 1.07 & 0.33 & 0.3 \\\ 1.1 & 1.07 & 0.33 & 0.3 \end{Bmatrix}$$

Then, for the second tested world,

<p align="center">
<img src="etc/images/result-2.png" alt="result-2"/>
</p>
$$ result = \begin{Bmatrix} x & y & w & h \\\ -1.09 & -1.81 & 0.33 & 0.33 \\\ 1.1 & -1.12 & 0.33 & 0.33 \\\ -0.01 & -0.82 & 0.33 & 0.33 \\\ -1.09 & -0.01 & 0.3 & 0.33 \\\ -0.01 & -0.01 & 0.33 & 0.33 \\\ 1.07 & -0.01 & 0.33 & 0.33 \\\ -0.01 & 0.8 & 0.3 & 0.33 \\\ 1.07 & 1.1 & 0.33 & 0.3 \\\ -1.09 & 1.79 & 0.33 & 0.33\end{Bmatrix}$$

Then, for the third tested world,

<p align="center">
<img src="etc/images/result-3.png" alt="result-3"/>
</p>
$$ result = \begin{Bmatrix} x & y & w & h \\\ -1.12 & -1.12 & 0.33 & 0.33 \\\ -0.01 & -0.67 & 0.36 & 0.33 \\\ -1.12 & -0.01 & 0.33 & 0.33 \\\ 1.07 & 0.02 & 0.33 & 0.3 \\\ -0.01 & 0.65 & 0.33 & 0.36 \\\ -1.12 & 1.07 & 0.33 & 0.33\end{Bmatrix}$$

Finally, for the last tested world,

<p align="center">
<img src="etc/images/result-4.png" alt="result-4"/>
</p>
$$ result = \begin{Bmatrix} x & y & w & h \\\ -1.66 & -1.66 & 0.3 & 0.36 \\\ 0.02 & -0.52 & 0.36 & 0.33 \\\ 0.5 &-0.49 & 0.36 & 0.36 \\\ -0.01 & -0.13 & 0.36 & 0.33 \\\ 0.5 & -0.1 & 0.36 & 0.33 \\\ -0.01 & 0.5 & 0.33 & 0.3 \\\ 0.5 & 0.5 & 0.36 & 0.33 \\\ -0.01 & 0.89 & 0.3 & 0.3 \\\ 0.47 & 0.89 & 0.33 & 0.33 \\\ 1.61 & 1.64 & 0.33 & 0.33\end{Bmatrix}$$

The ROS2 nodes proposed here solved the problem with good results, as long as the obstacles are neither connected to each other nor too close to the walls. Furthermore, the solution is not limited to the `turtlebot3_world` world; any closed environment should work fine.

## Building

Get this project:

```bash
git clone --recursive https://github.com/autonomous-robots/assignment-1.git
cd assignment-1/
```

You can build all the packages needed to run this assignment with Docker:

```bash
docker build -t assignment-1:latest -f etc/docker/Dockerfile .
```

## Running

First, allow local containers to access the X server. Not really safe, but it works:

```bash
sudo xhost +local:root
```

Then, create a network to run all the containers in:

```bash
docker network create my-net
```

Next, open a terminal in a container with support for Qt applications:

```bash
docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --device /dev/dri/ --net=my-net assignment-1:latest /bin/bash
```

The first thing you'll need to do is run Gazebo inside the container:

```bash
gazebo
```

We haven't figured out why yet, but the first launch of Gazebo inside the container takes a long time. After Gazebo has opened for the first time, you can close it and run our launch file:

```bash
ros2 launch turtlebot3_mapper turtlebot3_mapper_launch.py
```

If you want to explore the environment, just open a new container on the same network and run the action client node:

```bash
docker run -it --net=my-net assignment-1:latest /bin/bash
ros2 run turtlebot3_mapper turtlebot3_mission_client -f 200
```

After the task has finished, you can view the results in the generated `results.txt` file.

## Other sources of information

- THRUN, Sebastian; BURGARD, Wolfram; FOX, Dieter. Probabilistic Robotics. MIT Press, 2005. p. 221-243.

- SAKAI, Atsushi. PythonRobotics: Python sample codes for robotics algorithms. <https://github.com/AtsushiSakai/PythonRobotics>.

- ROBOTIS. ROS packages for Turtlebot3. <https://github.com/ROBOTIS-GIT/turtlebot3>.

- ROBOTIS. Simulations for Turtlebot3. <https://github.com/ROBOTIS-GIT/turtlebot3_simulations>.

- ROS PLANNING. ROS2 Navigation Framework and System. <https://github.com/ros-planning/navigation2>.

etc/docker/Dockerfile (+1 −3)

    @@ -25,9 +25,7 @@ RUN python3 -m pip install --upgrade \
        setuptools==58.2.0 \
        python-sdformat==0.4.0

    -COPY src/worlds_gazebo/worlds/autonomous_robots_world_1.world /opt/ros/humble/share/turtlebot3_gazebo/worlds/
    -COPY src/worlds_gazebo/worlds/autonomous_robots_world_2.world /opt/ros/humble/share/turtlebot3_gazebo/worlds/
    +COPY src/worlds_gazebo/worlds/*.world /opt/ros/humble/share/turtlebot3_gazebo/worlds/
     COPY src/worlds_gazebo/models/ /opt/ros/humble/share/turtlebot3_gazebo/models/
     COPY src/worlds_gazebo/launch/turtlebot3_world.launch.py /opt/ros/humble/share/turtlebot3_gazebo/launch/
