5、ORB_SLAM2 basics

 

Official website: http://webdiis.unizar.es/~raulmur/orbslam/

TUM Dataset: http://vision.in.tum.de/data/datasets/rgbd-dataset/download

KITTI Dataset: http://www.cvlibs.net/datasets/kitti/eval_odometry.php

EuRoC Dataset: http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

orb_slam2_ros: http://wiki.ros.org/orb_slam2_ros

ORB-SLAM: https://github.com/raulmur/ORB_SLAM

ORB-SLAM2: https://github.com/raulmur/ORB_SLAM2

ORB-SLAM3: https://github.com/UZ-SLAMLab/ORB_SLAM3

 

The operating environment and reference configurations for software and hardware are as follows:

 

5.1、Introduction

ORB-SLAM supports monocular SLAM only;

ORB-SLAM2 adds stereo (binocular) and RGBD interfaces;

ORB-SLAM3 adds IMU coupling (visual-inertial SLAM) and supports fisheye cameras.

All stages of ORB-SLAM use the same ORB features of the image. ORB is a very fast feature extractor that is rotation invariant and gains scale invariance through an image pyramid. Using unified ORB features keeps the SLAM algorithm consistent across feature extraction and tracking, keyframe selection, 3D reconstruction, and loop-closure detection. The system is also robust to vigorous motion, supports wide-baseline loop detection and relocalization, and includes fully automatic initialization. Since ORB-SLAM is a feature-based SLAM system, it can compute the camera trajectory in real time and produce a sparse 3D reconstruction of the scene.

Compared with ORB-SLAM, ORB-SLAM2's main contributions are:

1) The first open-source SLAM system for monocular, stereo and RGBD cameras, including loop closing, relocalization and map reuse.

2) The RGBD results show that bundle adjustment (BA) yields higher accuracy than ICP or minimization based on photometric and depth errors.

3) By using both near and far stereo points, as well as monocular observations, the stereo results are more accurate than those of direct stereo SLAM algorithms.

4) A lightweight localization mode allows the map to be reused effectively.

ORB-SLAM2 contains the modules common to all SLAM systems: tracking, mapping, relocalization, and loop closing. The following figure shows the overall flow of ORB-SLAM2.

[Figure 12.5.1: ORB-SLAM2 system flow]

5.2、Official examples

Note: Unless otherwise specified, the commands and paths mentioned below refer to commands and paths inside the Docker container.

 

5.2.1、Dataset location

The EuRoC MH01 dataset and the TUM rgbd_dataset_freiburg1_xyz dataset have already been downloaded.

[Figure: location of the downloaded datasets]

If you need other datasets, you can download them from the dataset links listed at the beginning of this chapter.

 

5.2.2、Entering Docker Container

For the steps to enter the Docker container, please refer to 【Docker Course - 5. Entering the Docker Container for Robots】.

 

5.2.3、Monocular testing

Taking the EuRoC dataset as an example, first enter the Docker container, and then enter the ORB_SLAM2 directory.

Run the following command:
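The exact command depends on where the dataset is mounted in the container. A sketch following the official ORB_SLAM2 README, assuming the MH01 sequence lives under ~/dataset and the source tree under ~/ORB_SLAM2 (both paths are assumptions):

```bash
# Monocular EuRoC example: vocabulary, settings, image folder, timestamp file
# (~/ORB_SLAM2 and ~/dataset/MH01 are assumed locations; adjust to your container)
cd ~/ORB_SLAM2
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.txt \
    Examples/Monocular/EuRoC.yaml \
    ~/dataset/MH01/mav0/cam0/data \
    Examples/Monocular/EuRoC_TimeStamps/MH01.txt
```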

If you encounter the following problem:

[Figure: error output]

 

This is caused by the Docker display issue; simply run the command again until it succeeds. The interface of a successful run is shown in the following figure, displayed on the VNC or the car's screen.

[Figure: successful monocular run interface]

The blue boxes are keyframes, the green box is the current camera pose, the black dots are map points already saved, and the red dots are the points currently observed by the camera.

 

After the test completes, the keyframe trajectory is saved to the KeyFrameTrajectory.txt file in the current directory:
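ORB_SLAM2 writes the keyframe trajectory in TUM format, one pose per line. A quick way to check the result:

```bash
# Each line: timestamp tx ty tz qx qy qz qw (TUM trajectory format)
head -n 3 KeyFrameTrajectory.txt
```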

 

5.2.4、Stereo testing

Taking the EuRoC dataset as an example, enter the Docker container, enter the ORB_SLAM2 directory, and run the following command:
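A sketch following the official README; as before, the dataset location is an assumption:

```bash
# Stereo EuRoC example: vocabulary, settings, left images, right images, timestamps
cd ~/ORB_SLAM2
./Examples/Stereo/stereo_euroc Vocabulary/ORBvoc.txt \
    Examples/Stereo/EuRoC.yaml \
    ~/dataset/MH01/mav0/cam0/data \
    ~/dataset/MH01/mav0/cam1/data \
    Examples/Stereo/EuRoC_TimeStamps/MH01.txt
```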

The successful operation interface is shown in the following figure:

[Figure: successful stereo run interface]

The color coding is the same as in the monocular test: blue boxes are keyframes, the green box is the current camera pose, black dots are saved map points, and red dots are the points currently observed by the camera.

After the test completes, the camera trajectory is saved to the CameraTrajectory.txt file in the current directory.

 

5.2.5、RGBD testing

This time we use the TUM dataset, which adds depth information. The RGB and depth images must first be associated by timestamp, so that the color image data and depth data are merged into RGBD data.

TUM officially provides the associate.py script for this: https://svncvpr.in.tum.de/cvpr-ros-pkg/trunk/rgbd_benchmark/rgbd_benchmark_tools/src/rgbd_benchmark_tools/associate.py

Download the associate.py file:
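For example:

```bash
# Fetch the TUM association script into the current directory
wget "https://svncvpr.in.tum.de/cvpr-ros-pkg/trunk/rgbd_benchmark/rgbd_benchmark_tools/src/rgbd_benchmark_tools/associate.py"
```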

Run associate.py with Python; the associations.txt file is then generated at the specified path. 【This step has already been done before the product leaves the factory.】
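A typical invocation, assuming the dataset sits under ~/dataset (the path is an assumption):

```bash
# Pair RGB and depth images by timestamp and write the result to associations.txt
# (the script was written for Python 2; under Python 3 it may need minor edits)
cd ~/dataset/rgbd_dataset_freiburg1_xyz
python associate.py rgb.txt depth.txt > associations.txt
```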

Then run the test: enter the Docker container, enter the ORB_SLAM2 directory, and enter the following command:
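A sketch following the official README (the dataset path is an assumption; TUM1.yaml matches the freiburg1 sequences):

```bash
# RGB-D TUM example: vocabulary, settings, sequence folder, association file
cd ~/ORB_SLAM2
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt \
    Examples/RGB-D/TUM1.yaml \
    ~/dataset/rgbd_dataset_freiburg1_xyz \
    ~/dataset/rgbd_dataset_freiburg1_xyz/associations.txt
```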

The successful operation interface is shown in the following figure:

[Figure: successful RGBD run interface]

After the run, CameraTrajectory.txt and KeyFrameTrajectory.txt are also saved:

[Figure: saved CameraTrajectory.txt and KeyFrameTrajectory.txt files]

 

 

5.3、ORB_SLAM2 ROS2 camera testing

 

The camera's intrinsic parameters are already configured before the product leaves the factory, so testing can start directly from 5.3.2. To learn how to do this yourself, see section [5.3.1、Internal parameter modification].

5.3.1、Internal parameter modification

Before running ORB_SLAM2, the camera's intrinsic parameters are required, so camera calibration must be performed first. The specific method can be found in the Astra camera calibration lesson.

After calibration, move the [calibrationdata.tar.gz] file to the [home] directory.

After decompressing it, open [ost.yaml] in the extracted folder and find the camera's intrinsic parameter matrix, for example:

Camera intrinsic parameter matrix:
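A typical camera_matrix block written by the ROS calibration tool looks like this (the numbers below are placeholders, not your camera's values):

```yaml
# Excerpt from ost.yaml; data is laid out as [fx, 0, cx, 0, fy, cy, 0, 0, 1]
camera_matrix:
  rows: 3
  cols: 3
  data: [517.3, 0.0, 318.6, 0.0, 516.5, 255.3, 0.0, 0.0, 1.0]
```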

Copy the values in its data field into the corresponding entries of [mono.yaml] and [rgbd.yaml] in the [params] folder of the [yahboomcar_slam] package, as sketched below.
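Assuming mono.yaml and rgbd.yaml use the standard ORB_SLAM2 settings keys, the entries to update would be (values correspond to the placeholder matrix above):

```yaml
# ORB_SLAM2-style intrinsics; replace with your calibrated values
Camera.fx: 517.3
Camera.fy: 516.5
Camera.cx: 318.6
Camera.cy: 255.3
# Distortion coefficients come from the distortion_coefficients block of ost.yaml
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
```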

Params folder path:

 

5.3.2、Monocular testing

Enter the Docker container:
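See 5.2.2 for the product-specific steps; generically, attaching a shell to a running container looks like this (the container name is a placeholder):

```bash
# Open an interactive shell in the running container
docker exec -it <container_name> /bin/bash
```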

The robot can serve as a mobile carrier for mobile testing. If you hold the camera by hand, there is no need to execute the next command; otherwise, execute it.

Start the camera ORB_SLAM2 test:
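The exact command is product-specific. A sketch in the style of common ROS2 ORB_SLAM2 wrappers, where the package name, executable name, and paths are all assumptions:

```bash
# Monocular node: vocabulary file + the mono.yaml tuned in 5.3.1
# (ros2_orbslam, the workspace path, and the vocabulary path are assumed names)
ros2 run ros2_orbslam mono ~/ORB_SLAM2/Vocabulary/ORBvoc.txt \
    ~/yahboomcar_ws/src/yahboomcar_slam/params/mono.yaml
```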

[Figure: camera ORB_SLAM2 startup interface]

When the command runs, the 【ORB_SLAM2: Map Viewer】 window shows only a green box, and the 【ORB_SLAM2: Current Frame】 window shows that initialization is being attempted. At this point, slowly move the camera up, down, left and right so that it finds enough feature points in the image to initialize SLAM.

After the test is completed, the keyframes are saved in the KeyFrameTrajectory.txt file in the following directory:

 

[Figure: Map Viewer in SLAM Mode]

As shown in the figure above, the camera is now in [SLAM Mode] and must continuously acquire every frame of the image to localize itself. If the pure localization mode [Localization Mode] on the left of the window is selected instead, the camera will not be able to find its own position, and the keyframes must be obtained again from scratch.

5.3.3、Stereo testing

Since there is no stereo camera on the car, there is no demonstration here. Users with a stereo camera can test by following these steps:

Enter the Docker container:

1、Launch the stereo camera node and view the topic names published by the camera, for example:
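With the camera node running, a quick way to list the published image topics:

```bash
# Show all topics and filter for image streams
ros2 topic list | grep image
```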

2、Modify the stereo image topics subscribed by orbslam to the topics your stereo camera actually publishes:

[Figure: stereo subscription topics in the orbslam source]

3、Recompile the ros2_orbslam package, as sketched below:
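In a ROS2 workspace this is done with colcon; the workspace path and package name below are assumptions:

```bash
# Rebuild only the orbslam package, then refresh the shell environment
cd ~/yahboomcar_ws
colcon build --packages-select ros2_orbslam
source install/setup.bash
```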

4、Restart the stereo camera node and then run the orbslam stereo node, for example:
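A sketch with the same assumed package and executable names as in 5.3.2 (the stereo settings file is also an assumption; some wrappers additionally take a rectification flag):

```bash
# Stereo node: vocabulary file + a stereo settings file with your calibration
ros2 run ros2_orbslam stereo ~/ORB_SLAM2/Vocabulary/ORBvoc.txt \
    ~/yahboomcar_ws/src/yahboomcar_slam/params/stereo.yaml
```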

After the test is completed, the keyframes are saved in the KeyFrameTrajectory.txt file in the following directory:

 

5.3.4、RGBD testing

Enter the Docker container:
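Then start the camera and run the RGBD node; a sketch with the same assumed package and executable names as in 5.3.2:

```bash
# RGBD node: vocabulary file + the rgbd.yaml tuned in 5.3.1
ros2 run ros2_orbslam rgbd ~/ORB_SLAM2/Vocabulary/ORBvoc.txt \
    ~/yahboomcar_ws/src/yahboomcar_slam/params/rgbd.yaml
```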

[Figure: RGBD camera ORB_SLAM2 interface]

Unlike the monocular camera, RGBD does not need to continuously acquire every frame of the image. If you select the pure 【Localization Mode】 in the upper left of the window, the system can localize itself against the keyframes just obtained.

 

After the test is completed, the keyframes are saved in the KeyFrameTrajectory.txt file in the following directory: