6. YOLOv4-Tiny

Official YOLOv4-Tiny repository: https://github.com/AlexeyAB/darknet

Source code: https://github.com/bubbliiiiing/yolov4-tiny-tf2

6.1. Introduction

Release: YOLOv4-Tiny was published in 2020, shortly after YOLOv4, as its lightweight variant.

YOLOv4-Tiny performance on COCO: 40.2% AP50 at 371 FPS (on a GTX 1080 Ti). In both AP and FPS, this is a large improvement over YOLOv3-Tiny, Pelee, and CSP. As shown below:

img

Comparison of YOLOv4 and YOLOv4-Tiny detection results (images from the Internet)

YOLOv4 test results

img

YOLOv4-Tiny test results

img

We can see that YOLOv4-Tiny's detection accuracy has declined, but it has a clear advantage in speed: YOLOv4-Tiny takes only 2.6 ms per detection, while YOLOv4 takes 27 ms, making YOLOv4-Tiny more than 10 times faster.

6.2. Use

Detection results can be monitored in real time from a web page, for example:

View node information


Print detection information

The detection information is printed as follows:

6.3. Folder structure

The concept of the anchor box was introduced in YOLOv2, and it greatly improved object-detection performance. In essence, the anchor is the reverse of the SPP (spatial pyramid pooling) idea. What does SPP itself do? It resizes inputs of different sizes into outputs of the same size. The reverse of SPP, then, is to map outputs of the same size back to inputs of different sizes.
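The anchor mechanism can be illustrated with the standard YOLO decoding formula: the network predicts offsets inside a grid cell plus log-scale factors, and the anchor's width and height scale those factors back into real box sizes. A minimal sketch (the anchor values and stride below are illustrative, not necessarily the ones this repository uses):

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, stride):
    """Decode one YOLO prediction into a pixel-space box (center x, center y, w, h).

    (tx, ty)             : predicted offsets within the grid cell (after sigmoid)
    (cx, cy)             : grid cell indices
    (anchor_w, anchor_h) : anchor size in pixels
    stride               : input_size / grid_size
    """
    bx = (cx + tx) * stride        # box center x in pixels
    by = (cy + ty) * stride        # box center y in pixels
    bw = anchor_w * math.exp(tw)   # anchor width scaled by exp(tw)
    bh = anchor_h * math.exp(th)   # anchor height scaled by exp(th)
    return bx, by, bw, bh

# With tw = th = 0 the decoded box is exactly the anchor size:
print(decode_box(0.5, 0.5, 0.0, 0.0, 6, 6, 81, 82, 32))  # -> (208.0, 208.0, 81.0, 82.0)
```

This is why too few (or badly chosen) anchors hurt accuracy: every predicted box is a scaled version of some anchor, so the anchors must roughly cover the size range of the objects in the data set.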

6.4. Environmental requirements

The factory image comes preconfigured, so no installation is required.

Installation example
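The installation example itself did not survive extraction. If you do need to set up the environment manually, a typical TensorFlow 2 setup for a repository of this kind might look like the following (the package list and versions are assumptions, not taken from the original text; match them to your hardware and the repository's requirements):

```shell
# Assumed dependencies for a TF2 YOLOv4-Tiny project; adjust versions as needed.
pip install tensorflow==2.4.0
pip install pillow numpy opencv-python matplotlib
```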

6.5. Customized training data set

6.5.1. Create data set

Method 1: take some photos, use an annotation tool to mark the targets in each photo, create a new [train.txt] file under the [garbage_data] folder, and write the target information into it in a specific format.

Method 2: put background images (as many as possible) into the [garbage_data/texture] folder, modify the [GetData.py] code as needed, and run [GetData.py] to generate a data set (again, as large as possible).
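The idea behind method 2 is simple copy-paste synthesis: paste a target image onto a random position of a background and record the resulting bounding box. A minimal NumPy sketch of that idea (array shapes and the helper name are illustrative assumptions; the real logic lives in [GetData.py]):

```python
import numpy as np

rng = np.random.default_rng(0)

def paste_target(background, target):
    """Paste `target` (h, w, 3) onto `background` (H, W, 3) at a random
    position; return the composite image and the box (x1, y1, x2, y2)."""
    bh, bw, _ = background.shape
    th, tw, _ = target.shape
    x1 = int(rng.integers(0, bw - tw))   # random top-left corner
    y1 = int(rng.integers(0, bh - th))
    out = background.copy()
    out[y1:y1 + th, x1:x1 + tw] = target
    return out, (x1, y1, x1 + tw, y1 + th)

background = np.zeros((240, 320, 3), dtype=np.uint8)   # stand-in texture image
target = np.full((40, 60, 3), 255, dtype=np.uint8)     # stand-in target crop
composite, box = paste_target(background, target)
```

Each composite image plus its box (and a class id) then becomes one line of the generated [train.txt].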

The names of the images and their label entries must correspond. The label format in the [train.txt] file is as follows:
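The concrete example line did not survive extraction. In many YOLO TF2 repositories of this style, each line of [train.txt] has the form `image_path x1,y1,x2,y2,class_id [x1,y1,x2,y2,class_id ...]`; treat that as an assumption and check it against your own [train.txt]. A small parser for that assumed format:

```python
def parse_annotation_line(line):
    """Parse one assumed-format line, e.g. 'img_001.jpg 48,240,195,371,0 8,12,352,498,1'."""
    parts = line.strip().split()
    image_path, boxes = parts[0], []
    for chunk in parts[1:]:
        x1, y1, x2, y2, cls = (int(v) for v in chunk.split(","))
        boxes.append((x1, y1, x2, y2, cls))
    return image_path, boxes

path, boxes = parse_annotation_line("img_001.jpg 48,240,195,371,0 8,12,352,498,1")
print(path, boxes)  # -> img_001.jpg [(48, 240, 195, 371, 0), (8, 12, 352, 498, 1)]
```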

Take method 2 as an example.

Open the [GetData.py] file

Modify the total number of samples to generate and fill in a value as needed; more is better, since too few samples will lead to unsatisfactory training results.

Run the [GetData.py] file to generate the data set.

6.5.2. Add weight file

Two pretrained weight files, [yolov4_tiny_weights_coco.h5] and [yolov4_tiny_weights_voc.h5], are provided under the [model_data] folder. Choose one of the two; the COCO weights are recommended.

If you need the latest weight file, search for it online (e.g. on Baidu) and download it.

6.5.3. Create label file

Be careful not to use Chinese characters in the labels, and make sure there are no spaces in the folder or file names!

For example: garbage.txt
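The contents of the label file were not preserved here. Such a file usually lists one class name per line; the names below are purely illustrative:

```
bottle
can
paper
battery
```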

6.5.4. Modify the train.py file

Modify it according to your own needs by referring to the comments in the file.

After completing the steps above, run the [train.py] file directly to start training.

6.5.5. Model detection

During detection, you need to manually enter the path of each image to be detected, as shown below:
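The prompt-driven detection step usually amounts to the control flow sketched below: read a filename, try to open it, and pass the image to the detector, skipping files that fail to open. The function and argument names are assumptions for illustration, so compare them with the repository's own prediction script:

```python
def run_detection(filenames, detect_fn, load_fn):
    """Feed each entered filename to the detector; skip files that fail to open."""
    results = []
    for name in filenames:
        try:
            image = load_fn(name)   # e.g. PIL's Image.open in the real script
        except OSError:
            print(f"Open error! Cannot open {name}, try again.")
            continue
        results.append(detect_fn(image))
    return results

# Dummy loader/detector to show the control flow without any model:
def fake_load(name):
    if not name.endswith(".jpg"):
        raise OSError(name)
    return name

print(run_detection(["a.jpg", "bad.txt", "b.jpg"], lambda im: f"boxes({im})", fake_load))
```

In the real script the filenames come from `input()` in a loop rather than a list, and the detector draws the predicted boxes on the image before displaying it.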

image-20220302102751259