6、gmapping mapping algorithm
6.1、Introduction
6.2、Use
6.2.1、Pre-use configuration
6.2.2、Specific use
6.3、Node parsing
6.3.1、Display the computation graph
6.3.2、Details of the gmapping node
6.3.3、TF transformation
wiki: http://wiki.ros.org/gmapping/
ros2 gmapping: https://github.com/Project-MANAS/slam_gmapping
The operating environment and software and hardware reference configurations are as follows:
REFERENCE MODEL: ROSMASTER X3
Robot hardware configuration: Arm series main control, Silan A1 lidar, AstraPro Plus depth camera
Robot system: Ubuntu (version not required) + docker (version 20.10.21 and above)
PC Virtual Machine: Ubuntu (20.04) + ROS2 (Foxy)
Usage scenario: Use on a relatively clean 2D plane
gmapping is only suitable for lidars that produce at most 1440 points per scan frame. If a single frame contains more than 1440 laser points, the node will crash with an error such as [mapping-4] process has died.
gmapping is a commonly used open-source SLAM algorithm based on the filtering SLAM framework. Built on the Rao-Blackwellized particle filter (RBPF), gmapping separates real-time localization from mapping: it first estimates the robot pose, then updates the map from that pose. gmapping makes two major improvements to the RBPF algorithm: an improved proposal distribution and selective resampling.
Advantages: gmapping can build indoor maps in real time, with low computational cost and good accuracy for small scenes.
Disadvantages: the number of particles required grows with the size of the scene, and because each particle carries its own copy of the map, the memory and computation needed to build a large map grow as well. gmapping is therefore not suitable for large scenes. It also has no loop-closure detection, so the map may be misaligned when a loop is closed; increasing the number of particles can help close the map, but at the cost of more computation and memory.
Note: The ROSMASTER series covers several robot models and device combinations. The factory system ships with routines for all of them, but since the product cannot identify its own hardware, the robot model and lidar model must be set manually.
After entering the container, make the following modifications according to your robot model, lidar type, and camera type:
root@ubuntu:/# cd
root@ubuntu:~# vim .bashrc
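The edit typically consists of a few environment variable exports at the end of .bashrc. The variable names below are taken from the status banner this image prints (ROS_DOMAIN_ID, my_robot_type, my_lidar, my_camera); the exact export syntax and the set of accepted values are assumptions, so check the comments in your own .bashrc:

```shell
# Hypothetical .bashrc additions for a ROSMASTER X3 with A1 lidar and
# AstraPro Plus camera; adjust the values to match your hardware.
export ROS_DOMAIN_ID=12          # must match the ID used on the PC/VM side
export my_robot_type=x3          # robot model, e.g. x3
export my_lidar=a1               # lidar model, e.g. a1
export my_camera=astraplus       # camera model, e.g. astraplus
```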
After the modification is complete, save and exit vim, and then execute:
root@ubuntu:~# source .bashrc
--------------------------------------------------------
ROS_DOMAIN_ID: 12
my_robot_type: x3 | my_lidar: a1 | my_camera: astraplus
--------------------------------------------------------
root@ubuntu:~#
The banner shows the currently configured robot model, lidar type, and camera type.
Note: When building the map, drive the robot slowly (especially when rotating); moving too fast will degrade the quality of the map.
First, perform the port-binding operation on the host [the robot] (see the port-binding tutorial chapter); two devices are involved here, the lidar and the serial port.
Then check on the host [the robot] whether the lidar and the serial device are bound; when binding has succeeded, both devices appear in the bound state.
If the lidar or serial device does not show as bound, unplug and replug its USB cable and check again.
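A minimal binding check might look like the following. The symlink names /dev/rplidar and /dev/myserial are assumptions based on typical udev rules from the port-binding chapter and may differ on your system:

```shell
# Check whether the udev symlinks created by port binding exist
# (names /dev/rplidar and /dev/myserial are assumed).
ls -l /dev/rplidar /dev/myserial 2>/dev/null \
  && echo "port binding OK" \
  || echo "device not bound - replug the USB cable and check again"
```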
Enter the docker container (for steps, see [Docker course chapter ----- 5. Enter the docker container of the robot]), and execute the following launch file in the terminal:
1、Start mapping
ros2 launch yahboomcar_nav map_gmapping_launch.py
2、Start rviz to display the map. It is recommended to run this step on the virtual machine, with multi-machine communication configured on the virtual machine.
ros2 launch yahboomcar_nav display_map_launch.py
3、Start the keyboard control node. It is recommended to run this step on the virtual machine, with multi-machine communication configured there. Alternatively, use the remote controller. Drive the robot slowly until the complete map is built.
ros2 run yahboomcar_ctrl yahboom_keyboard
4、Save the map
ros2 launch yahboomcar_nav save_map_launch.py
The save path is as follows:
/root/yahboomcar_ros2_ws/yahboomcar_ws/src/yahboomcar_nav/maps/
Two files are generated: a PGM image, yahboomcar.pgm, and a YAML file, yahboomcar.yaml:
image: /root/yahboomcar_ros2_ws/yahboomcar_ws/src/yahboomcar_nav/maps/yahboomcar.pgm
mode: trinary
resolution: 0.05
origin: [-10, -10, 0]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.25
Parameter parsing:
image: the path to the map image file; it can be absolute or relative
mode: one of trinary, scale, or raw; trinary is the default mode
resolution: the resolution of the map, in meters per pixel
origin: the 2D pose (x, y, yaw) of the lower-left corner of the map, where yaw is a counterclockwise rotation (yaw = 0 means no rotation). Many parts of the system currently ignore the yaw value.
negate: whether to reverse the white/black, free/occupied semantics (the interpretation of the thresholds is not affected)
occupied_thresh: pixels with an occupancy probability greater than this threshold are considered fully occupied
free_thresh: pixels with an occupancy probability less than this threshold are considered completely free
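The thresholds above can be illustrated with a small arithmetic sketch. With negate: 0, each pixel value is converted to an occupancy probability p = (255 - pixel) / 255 and then compared against the two thresholds; the sample pixel values below (black, white, mid-grey) are chosen for illustration:

```shell
# How a PGM pixel maps to occupancy with negate=0 and the thresholds
# from yahboomcar.yaml (occupied_thresh=0.65, free_thresh=0.25):
#   p = (255 - pixel) / 255
#   p > 0.65 -> occupied;  p < 0.25 -> free;  otherwise -> unknown
for px in 0 254 128; do
  awk -v px="$px" -v occ=0.65 -v free=0.25 'BEGIN {
    p = (255 - px) / 255
    if (p > occ)       state = "occupied"
    else if (p < free) state = "free"
    else               state = "unknown"
    printf "pixel %3d -> p=%.3f -> %s\n", px, p, state
  }'
done
# pixel   0 -> p=1.000 -> occupied
# pixel 254 -> p=0.004 -> free
# pixel 128 -> p=0.498 -> unknown
```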
Display the computation graph:
rqt_graph
View the TF transformation tree:
ros2 run tf2_tools view_frames.py