8、RTAB-Map 3D mapping navigation
8.1、Introduction
8.2、Configure before use
8.3、Mapping
8.4、Navigation
8.4.1、Start camera
8.4.2、Start rviz to display the map [virtual machine start]
8.4.3、Display rtabmap_viz [virtual machine start]
8.4.4、Start navigation node
8.4.5、Single point navigation
8.4.6、Multipoint navigation
8.5、Node resolution
8.5.1、Show Computational Graph
8.5.2、Rtabmap navigation related node details
8.5.3、TF transform
Official website of rtabmap: http://introlab.github.io/rtabmap/
rtabmap ros-foxy: https://github.com/introlab/rtabmap/tree/foxy-devel
The operating environment, software and hardware reference configurations are as follows:
Reference model: ROSMASTER X3
Robot hardware configuration: ARM-series main controller, Silan A1 lidar, AstraPro Plus depth camera
Robot system: Ubuntu (no version requirement) + docker (20.10.21 and above)
PC virtual machine: Ubuntu (20.04) + ROS2 (Foxy)
Use scene: a relatively clean 2D plane
rtabmap is a SLAM method based on RGB-D images, which uses a bag-of-words-based global Bayesian loop closure detector to build maps in real-time in large-scale environments.
The characteristics of rtabmap are as follows:
It can use a handheld RGBD camera for 6 degrees of freedom RGB-D mapping, or use a robot equipped with lidar for 3 degrees of freedom (2D laser) or 6 degrees of freedom (3D laser) mapping.
It uses a memory management mechanism that divides the map into working memory (WM), short-term memory (STM) and long-term memory (LTM), limiting the number of locations considered for loop-closure detection and graph optimization so that the system stays real-time.
It uses a discrete Bayesian filter to estimate the probability of a loop closure, and selects which locations to transfer or retrieve based on weight and time.
It uses an incremental dictionary creation method instead of a pre-trained dictionary, which can be adapted to different environments.
It can run under ROS or as a standalone library and application.
Note: The ROSMASTER series covers several robot models and device options. The factory system ships with routines for all of them, but because the product cannot identify its own hardware automatically, the machine type and lidar model must be set manually.
After entering the container, make the following modifications according to the car model, lidar type and camera type:
root@ubuntu:/# cd
root@ubuntu:~# vim .bashrc
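As an illustration, the lines to edit in `.bashrc` look roughly like the following. The variable names here are hypothetical — the actual names may differ in your factory image, so edit the values of whichever variables your `.bashrc` already defines for the car model, lidar and camera:

```shell
# Hypothetical example of the lines to edit in ~/.bashrc.
# The variable names are illustrative -- match the names your factory
# image actually uses; only the values should be changed.
export ROS_DOMAIN_ID=12       # must match on every machine that should communicate
export MY_ROBOT_TYPE=x3       # car model (hypothetical variable name)
export MY_LIDAR=a1            # lidar model (hypothetical variable name)
export MY_CAMERA=astraplus    # camera model (hypothetical variable name)
```

After editing, `source .bashrc` (next step) makes the values take effect in the current shell.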
After the modification is complete, save and exit vim, then execute:
root@ubuntu:~# source .bashrc
--------------------------------------------------------
ROS_DOMAIN_ID: 12
my_robot_type: x3 | my_lidar: a1 | my_camera: astraplus
--------------------------------------------------------
root@ubuntu:~#
The currently configured car model, lidar type and camera type are displayed.
Note: Connect the car's depth camera directly to the main controller rather than through a USB hub. A direct connection improves the camera's transmission efficiency and the quality of mapping and navigation; going through a hub may make mapping and navigation fail.
Note: When building a map, drive slowly. The slower the car moves (and in particular the slower it rotates), the better the mapping result; moving too fast degrades the map.
First, perform the port-binding operation on the host machine [i.e. the Jetson on the car; see the port-binding tutorial chapter]. Three devices are used here: the lidar, the serial port and the camera.
Then check whether the lidar and serial devices are bound. On the host machine [i.e. on the Jetson of the car], run the check with the following command; a successful binding looks like this:
Check if the camera is in port binding state.
If the lidar, serial-port or camera device is shown as not bound, re-plug its USB cable and check again.
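The check above can be scripted. The sketch below only tests whether each udev alias exists under `/dev`; the alias names used here are examples — substitute the names that your own port-binding udev rules actually create:

```shell
# Sketch of a binding check. The /dev alias names are examples only --
# use the alias names defined by your port-binding udev rules.
check_dev() {
    if [ -e "$1" ]; then
        echo "$1: bound"
    else
        echo "$1: not bound (re-plug the USB cable and check again)"
    fi
}

for dev in /dev/rplidar /dev/myserial /dev/astradepth; do
    check_dev "$dev"
done
```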
Enter the docker container, see [in the docker course ----- 5, enter the robot's docker container], and execute the following launch file in the terminal:
1、Start the Astra camera
ros2 launch astra_camera astro_pro_plus.launch.xml
2、Start mapping
ros2 launch yahboomcar_nav map_rtabmap_launch.py
3、Start rviz to display the map. It is recommended to run this step in the virtual machine; multi-machine communication must be configured in the virtual machine first.
ros2 launch yahboomcar_nav display_rtabmap_map_launch.py
4、Start the keyboard control node. It is recommended to run this step in the virtual machine (multi-machine communication must be configured there), or use the remote control instead. Move the car slowly to build the map until the complete map is built.
ros2 run yahboomcar_ctrl yahboom_keyboard
5、Map save
When the map is complete, press [Ctrl+C] in the mapping terminal to exit the mapping node, and the system will automatically save the map. Default save path: 【~/.ros/rtabmap.db】
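Because the next mapping run overwrites ~/.ros/rtabmap.db, it is worth copying the database somewhere safe first. A minimal sketch (the backup directory name is arbitrary):

```shell
# Copy the RTAB-Map database to a timestamped backup file before the
# next mapping run overwrites it.
backup_map() {
    local src="$1" dest_dir="$2"
    mkdir -p "$dest_dir"
    cp "$src" "$dest_dir/rtabmap_$(date +%Y%m%d_%H%M%S).db"
}

# Usage after a mapping session:
#   backup_map ~/.ros/rtabmap.db ~/map_backups
```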
Enter the docker container (see [docker course chapter ----- 5, enter the robot's docker container] for the steps), and execute in a terminal:
ros2 launch astra_camera astro_pro_plus.launch.xml
Configure multi-machine communication between the Ubuntu virtual machine and the docker container. Running this step in the virtual machine is recommended in order to keep time synchronized and reduce resource consumption on the robot; VNC depends heavily on the network and may cause navigation to fail.
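Concretely, ROS2 multi-machine communication only requires that all machines are on the same network and share the same DDS domain; clocks should also stay synchronized, otherwise TF lookups can fail with extrapolation errors. A sketch of the required settings (the NTP server below is just an example):

```shell
# Run on BOTH the virtual machine and inside the docker container.
# The value 12 matches the ROS_DOMAIN_ID banner printed by .bashrc earlier.
export ROS_DOMAIN_ID=12

# Keep clocks synchronized between robot and VM, e.g. with ntpdate
# (example NTP server -- any reachable one works):
#   sudo ntpdate -u ntp.ubuntu.com
```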
【Note: you must start the map-display node first, and only then start the navigation node in step 3. The navigation2 map topic is published only once; if the navigation node starts first and the rviz display afterwards, rviz may fail to receive that single publication, and the map will not be displayed.】
ros2 launch yahboomcar_nav display_rtabmap_nav_launch.py
rtabmap_viz is the visualization interface of RTAB-Map, a wrapper around the RTAB-Map GUI library. It is similar to rviz but offers RTAB-Map-specific options. It can subscribe to different topics, such as odom, rgb/image, depth/image and scan, to display the SLAM process and results, and it also loads the 3D map into rviz. It is recommended to run this step in a virtual machine.
Note that the map is not displayed immediately after this step; it appears only after the navigation node is started in the next step.
ros2 launch yahboomcar_nav rtabmap_viz_launch.py
Navigation can be divided into single-point navigation and multi-point navigation, which are introduced below.
1、Start the navigation node, enter the docker container, and execute in a terminal:
ros2 launch yahboomcar_nav navigation_rtabmap_launch.py
2、Click [2D Pose Estimate] on rviz, then compare the pose of the car and mark an initial pose for the car on the map.
The display after marking is as follows:
3、Compare the overlap between the lidar scan points and the obstacles; you can set the initial pose of the car several times until the scan points roughly coincide with the obstacles.
After the initial pose is set, click [2D Goal Pose] to set a navigation target point, and the car will start single-point navigation.
1、After the initial pose is set, you can click [Panels] in the upper left corner of rviz --- select [Navigation 2], and the [Navigation 2] panel will be displayed.
2、Click [Waypoint mode] in the panel shown above, then click [Navigation2 Goal] in rviz to mark a target point on the map; click [Navigation2 Goal] again to mark a second target point. Multiple target points can be marked at one time.
3、After marking multiple target points, click [Start Navigation] to start multi-point navigation. After the multi-point navigation is completed, the car will stay at the pose of the last target point.
4、Warnings or errors may appear during the navigation process. This is due to navigation2 itself in the ros-foxy version and has been fixed in later ROS2 releases.
5、rviz may exit during navigation, possibly because of insufficient resources. After the navigation node has started and the 3D map is fully displayed, you can close the rtabmap_viz display on the virtual machine to free resources.
rqt_graph
/rgbd_sync: synchronizes the RGB image, depth image and camera info into a single rgbd_image stream
/rtabmap: the core SLAM node; performs loop-closure detection, graph optimization and map publishing
/rtabmap_viz: the RTAB-Map visualization GUI
/bt_navigator: the navigation2 behavior-tree navigator that orchestrates planning, control and recovery
/controller_server: computes the velocity commands that follow the local plan
/global_costmap/global_costmap: maintains the global costmap used by the planner
/local_costmap/local_costmap: maintains the local costmap used by the controller
/planner_server: computes the global path to the goal
/recoveries_server: executes recovery behaviors (e.g. spin, back up) when navigation gets stuck
/waypoint_follower: executes multi-point navigation by sending the waypoints to the navigator one by one
ros2 run tf2_tools view_frames.py
This command listens to the TF tree for a few seconds and then writes frames.pdf (and frames.gv) to the current directory, showing the full transform tree.