awesome-yolo 🚀 ⭐

Object Detection DNN Algorithms

Most DNN object detection algorithms can:

  • classify objects
  • localize objects (find the coordinates of the bounding box enclosing each object)

YOLO - You Only Look Once

Object Detection DNN Algorithms Benchmark

Comparison of Small Models

| No. | Name | Year | Parameters (M) | FLOPs (G) | Speed V100 b1 (FPS) | mAP 50-95 COCO (%) | License |
|-----|------|------|----------------|-----------|---------------------|--------------------|---------|
| 1 | YOLOv5n | 2020 | 1.9 | 4.5 | 159 | 28.0 | AGPL-3.0 |
| 2 | YOLOX-Nano | 2021 | 0.91 | 1.08 | - | 25.8 | Apache 2.0 |
| 3 | YOLOv6-N | 2022 | 4.7 | 11.4 | 779 (TRT FP16) | 37.5 | GPL-3.0 |
| 4 | YOLOv7-Tiny | 2022 | 6.2 | 13.8 | 286 | 38.7 | GPL-3.0 |
| 5 | EdgeYOLO-Tiny | 2023 | 5.8 | - | 136/67 (AGX Xavier) | 41.4 | Apache 2.0 |
| 6 | YOLOv10-N | 2024 | 2.3 | 6.7 | 543 | 38.5 | AGPL-3.0 |
| 7 | YOLO11-N | 2024 | 2.6 | 6.5 | 667 | 39.5 | AGPL-3.0 |
| 8 | YOLOv12-N | 2025 | 2.6 | 6.5 | 610 | 40.6 | AGPL-3.0 |

Performance Metrics

Accuracy (A)

A = (Number of Correct Predictions) / (Total Number of Predictions)

Accuracy measures the overall correctness of the algorithm's predictions.
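As a minimal sketch, the formula translates directly to Python, assuming paired lists of predicted and ground-truth class labels (the function name and labels below are illustrative):

```python
def accuracy(predictions, ground_truth):
    """Fraction of predictions that match the ground truth."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(predictions)

# 3 of 4 predictions are correct -> 0.75
print(accuracy(["car", "dog", "cat", "car"], ["car", "dog", "dog", "car"]))
```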

Precision (P)

P = (True Positives) / (True Positives + False Positives)

It quantifies the algorithm's ability not to label negative instances as positive. It measures the fraction of correctly predicted positive instances among all predicted positive instances.

Recall (R)

R = (True Positives) / (True Positives + False Negatives)

It quantifies the algorithm's ability to find all positive instances.

F1 Score (F1)

F1 = 2 × (Precision × Recall) / (Precision + Recall)

The F1 score provides a balanced measure of the algorithm's performance, as it is the harmonic mean of precision and recall.
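The three formulas above can be computed directly from raw detection counts; the helper below is an illustrative sketch, with guards against empty denominators:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute P, R, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: 80 true positives, 20 false positives, 40 false negatives
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=40)
print(f"P={p:.2f} R={r:.2f} F1={f1:.2f}")  # P=0.80 R=0.67 F1=0.73
```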

Average Precision (AP)

AP = ∑(Pi × ΔRi)

  • Pi is the precision value at the i-th recall point.
  • ΔRi is the change in recall from the (i-1)-th to the i-th recall point.

AP summarizes the performance of an algorithm across different confidence thresholds. It quantifies the precision-recall trade-off for a given class.
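A direct translation of the summation above into Python (a simplification: real evaluators such as the COCO toolkit interpolate the precision-recall curve before summing):

```python
def average_precision(precisions, recalls):
    """AP = sum over i of P_i * (R_i - R_{i-1}), with recall in increasing order."""
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)  # P_i * delta-R_i
        prev_recall = r
    return ap

# Toy three-point precision-recall curve
print(average_precision([1.0, 0.8, 0.6], [0.2, 0.5, 1.0]))
# 1.0*0.2 + 0.8*0.3 + 0.6*0.5 = 0.74
```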

Mean Average Precision (mAP)

mAP = (AP1 + AP2 + ... + APn) / n

  • AP1, AP2, ..., APn are the Average Precision values for each class.
  • n is the total number of classes.
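In code this is just an unweighted mean over the per-class AP values (the numbers below are hypothetical):

```python
def mean_average_precision(per_class_ap):
    """Unweighted mean of per-class AP values."""
    return sum(per_class_ap) / len(per_class_ap)

# APs for three hypothetical classes
print(mean_average_precision([0.74, 0.61, 0.85]))  # 0.7333...
```

Note that the "mAP 50-95" column in the benchmark table above goes one step further: it additionally averages this value over IoU thresholds from 0.50 to 0.95 in steps of 0.05, as in the COCO evaluation protocol.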

Intersection over Union (IoU)

IoU = (Area of Intersection) / (Area of Union)

It is used to determine the accuracy of localization, measuring the overlap between predicted bounding boxes and ground truth bounding boxes.
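A minimal sketch for two axis-aligned boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union else 0.0

# Two 100x100 boxes overlapping in a 50x50 region: 2500 / 17500 ≈ 0.14
print(iou((0, 0, 100, 100), (50, 50, 150, 150)))
```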

Inference Time

Time taken to make predictions on a single input image. It measures the time it takes for the algorithm to process the input and produce the output (bounding boxes, class predictions) without considering any external factors.
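As a sketch, it can be estimated by averaging repeated forward passes of any callable model (`model` and `image` below are hypothetical placeholders); GPU pipelines would also need a device synchronization, e.g. torch.cuda.synchronize(), before reading the clock:

```python
import time

def mean_inference_time(model, image, warmup=10, runs=100):
    """Average wall-clock time of a single forward pass, in seconds.

    A few warm-up calls are made first so that one-off setup costs
    (JIT compilation, cache warming) do not skew the measurement.
    """
    for _ in range(warmup):
        model(image)
    start = time.perf_counter()
    for _ in range(runs):
        model(image)
    return (time.perf_counter() - start) / runs
```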

Processing Speed (FPS)

The rate at which the algorithm processes images, usually reported as FPS (Frames Per Second): the number of frames (or images) the algorithm can process per second.

Unlike raw inference time, it accounts for data loading, pre-processing, and post-processing steps in addition to the forward pass.
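A minimal sketch, assuming a hypothetical `pipeline` callable that performs the full pre-process/infer/post-process cycle on each frame:

```python
import time

def frames_per_second(pipeline, frames):
    """End-to-end throughput over a batch of frames, in FPS."""
    start = time.perf_counter()
    for frame in frames:
        pipeline(frame)  # pre-process, run inference, post-process
    return len(frames) / (time.perf_counter() - start)
```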

Number of Parameters

The number of model parameters indicates the model's complexity and memory requirements.
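For instance, assuming a PyTorch model, the counts can be read directly from its parameter tensors:

```python
def count_parameters(model):
    """Total and trainable parameter counts of a PyTorch model."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable
```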

Memory Usage

It measures the amount of memory consumed by the algorithm during inference.
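As a sketch for a CUDA-backed PyTorch model (`model` and `image` are hypothetical placeholders), the peak allocation during one forward pass can be read from PyTorch's allocator statistics:

```python
import torch

def peak_gpu_memory_mb(model, image):
    """Peak CUDA memory allocated during one inference, in MiB."""
    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        model(image)
    return torch.cuda.max_memory_allocated() / 1024**2
```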

Tests and comparisons of models

  • Yolo v10
  • Yolo v6 vs Yolo v8
  • Yolo v8
  • Yolo v7
  • YoloR vs YoloX
  • Yolo_v5 vs YoloX
  • YoloX

Practical application examples

  • Detecting pumpkins from drone video
  • Counting Tree Logs by Size
  • Pipe Counting