6. Voice control multi-point navigation

 

6. Voice control multi-point navigation
6.1. Functional description
6.2. Preparation
6.2.1. Bind the voice control device ports on the host computer
6.2.2. Mount the voice control device in the docker container
6.3. Configure the navigation points
6.4. Using voice multi-point navigation
6.5. Node analysis
6.5.1. Displaying a computational graph
6.5.2. Voice control node details

 

The operating environment and the reference hardware/software configuration are as follows:

 

6.1. Functional description

By interacting with the voice recognition module on the ROSMASTER, voice commands can drive multi-point navigation within a previously built map.

Note: Before using the functions in this subsection, please first learn the [----- map building and navigation functions in the LiDAR series course].

6.2. Preparation

This lesson requires a voice control device. Make the following preparations before running it:

6.2.1. Bind the voice control device ports on the host computer

Please refer to this section [1. Introduction to the module and the use of port binding].

6.2.2. Mount the voice control device in the docker container

Before entering the docker container, the script that starts the container must be modified to mount the voice control device.

Add the device mapping line to the [run_docker.sh] script; make other modifications as you see fit:
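As a sketch only (the device alias and image name below are assumptions, not the course's actual values; use the alias created by the port binding in section 6.2.1):

```shell
# Excerpt from run_docker.sh: pass the bound voice device into the
# container so the ROS nodes inside it can open its serial port.
# /dev/myspeech is an assumed udev alias and yahboom_ros_image an
# assumed image name; replace both with your actual values.
docker run -it --device=/dev/myspeech yahboom_ros_image
```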

 

6.3. Configure the navigation points

1. Enter the container (see [docker course in ----- 5, enter the docker container of the robot]) and execute the following command in a sub-terminal:
2. Open the virtual machine, configure multi-machine communication, and then execute the following command to display the rviz node:
3. Execute the navigation node in the docker container:

4. In the rviz interface on the virtual machine, click [2D Pose Estimate], then, comparing against the car's actual position in the map, mark an initial pose for the car.

After marking the display is as follows:


 

5. Compare the overlap between the LiDAR scan points and the obstacles on the map; you can set the initial pose several times until the scan points roughly coincide with the obstacles.

 

6. Open another terminal, enter the docker container, and execute the following command:

7. Click [2D Goal Pose] and set the first navigation target point. The car starts navigating, and the listening terminal from step 6 receives the topic data:

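In ROS 2, the [2D Goal Pose] tool in rviz publishes a geometry_msgs/PoseStamped message, commonly on the /goal_pose topic. Assuming that is the topic used by this course, the listener started in step 6 could simply be:

```shell
# Print each goal pose clicked in rviz; copy the position and
# orientation values for use in step 8 below.
ros2 topic echo /goal_pose
```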

 

8. Open the code at the following location:

Modify the pose (position and orientation) of the first navigation point to the values printed in step 7:


 

9. Modify the poses of the other four navigation points in the same way.
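As a sketch of what the edited code might hold (the variable names and structure are assumptions, not the course's actual source), the five goals can be kept as poses copied from the topic printouts in step 7:

```python
# Hypothetical table of the five navigation goals. Each entry mirrors
# the fields printed by the goal-pose topic echo: position (x, y) and
# the planar orientation quaternion (z, w). The numbers here are
# placeholders; replace them with the values captured in step 7.
GOALS = {
    1: {"x": 0.50, "y": 0.20, "oz": 0.0, "ow": 1.0},
    2: {"x": 1.30, "y": -0.40, "oz": 0.707, "ow": 0.707},
    3: {"x": 2.10, "y": 0.80, "oz": 1.0, "ow": 0.0},
    4: {"x": 0.90, "y": 1.50, "oz": -0.707, "ow": 0.707},
    0: {"x": 0.00, "y": 0.00, "oz": 0.0, "ow": 1.0},  # starting point
}

def goal_pose(index):
    """Return the stored pose for a goal index, defaulting to the start."""
    return GOALS.get(index, GOALS[0])
```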

 

 

6.4. Using voice multi-point navigation

1. Enter the container (see [5. Enter the docker container of the robot]) and execute the following command in a sub-terminal:
2. Open the virtual machine, configure multi-machine communication, and then execute the following command to display the rviz node:
3. Execute the navigation node in the docker container:

4. In the rviz interface on the virtual machine, click [2D Pose Estimate], then, comparing against the car's actual position in the map, mark an initial pose for the car.

After marking the display is as follows:


 

5. Compare the overlap between the LiDAR scan points and the obstacles on the map; you can set the initial pose several times until the scan points roughly coincide with the obstacles.

 

6. Open another terminal, enter the docker container, and execute the command that starts the voice control navigation node:

7. Say "Hello, Xiaoya" to the voice module on the car to wake it up. When the module answers ("I'm here"), say "navigate to position one"; the module announces "Okay, going to position one" and the car starts navigating to the first position. Navigate to the other positions in the same way. Refer to the following table for the voice control function words:

| Function word | Speech recognition module result | Voice announcement |
| --- | --- | --- |
| Navigate to position one | 19 | Okay, going to position one |
| Navigate to position two | 20 | Okay, going to position two |
| Navigate to position three | 21 | Okay, going to position three |
| Navigate to position four | 32 | Okay, going to position four |
| Return to the starting point | 33 | Okay, returning to the starting point |

 

6.5. Node analysis

 

6.5.1. Displaying a computational graph

(computational graph image: voice_nav node)
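Assuming a standard ROS 2 installation, the computational graph can be displayed from the virtual machine with rqt_graph while the voice control navigation nodes are running:

```shell
# Draw the node/topic graph of everything currently running.
ros2 run rqt_graph rqt_graph
```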

 

6.5.2. Voice control node details

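The course's actual node source is not reproduced here. As an illustrative sketch only, the core of such a node maps the speech recognition module's result codes (see the table in section 6.4) to navigation goals; all names below are assumptions, and a real node would send the selected goal through the ROS 2 navigation stack rather than return it:

```python
# Hypothetical dispatch logic for a voice control navigation node.
# The recognition result codes come from the table in section 6.4:
# 19-21 and 32 select positions one to four; 33 returns to the start.
CODE_TO_GOAL = {
    19: "position one",
    20: "position two",
    21: "position three",
    32: "position four",
    33: "starting point",
}

def handle_recognition_result(code):
    """Map a speech recognition result code to a navigation goal name.

    Returns None for codes the node does not handle, mirroring a real
    callback that would simply ignore unrelated recognition results.
    """
    return CODE_TO_GOAL.get(code)
```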