MEDAS: A Multi-Dimensional Framework for Vehicle Detection Model Selection

2026-01-0025

To be published on April 7, 2026

Abstract
Recent years have seen a rapid rise in edge-oriented object detection models, including new YOLO variants and the transformer-based RT-DETR. Choosing an appropriate model for vehicle detection, however, remains challenging because common metrics such as precision, recall, and mAP capture only part of the trade-off between accuracy and computational cost. To better support model selection, we introduce the Multi-dimensional Equilibrium Detection Assessment Score (MEDAS), which evaluates detectors across four practical dimensions: performance, balance, efficiency, and adaptability. The framework includes a normalization strategy and adjustable weighting so that evaluations can reflect specific deployment needs, especially in resource-limited settings. Experiments on the MS-COCO vehicle dataset show that while RT-DETR models offer competitive accuracy, they require substantially more computation. In contrast, lightweight YOLO variants provide a stronger balance between accuracy and efficiency. Among all evaluated models, YOLOv11s achieves the highest MEDAS score, suggesting it is well suited for applications such as ADAS and embedded autonomous systems. MEDAS offers a practical way to compare modern detectors and helps connect offline accuracy metrics with real deployment constraints in intelligent transportation systems.
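The abstract describes MEDAS as a normalized, adjustably weighted combination of four dimensions. As a rough illustration only (this page does not give the paper's actual formulation), the sketch below assumes min-max normalization across the candidate pool and convex weights that sum to 1; all model names, raw dimension scores, and weights are fabricated for the example.

```python
# Hypothetical sketch of a MEDAS-style composite score.
# Assumptions (not from the paper): min-max normalization per dimension,
# higher-is-better raw scores, and non-negative weights summing to 1.

def min_max_normalize(values):
    """Scale raw metric values to [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def medas_score(models, weights):
    """
    models:  dict name -> dict of raw dimension scores
             (performance, balance, efficiency, adaptability)
    weights: dict dimension -> weight, reflecting deployment priorities
    Returns dict name -> composite score in [0, 1].
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    dims = list(weights)
    # Normalize each dimension across all candidate models.
    normalized = {}
    for d in dims:
        col = min_max_normalize([models[m][d] for m in models])
        for m, v in zip(models, col):
            normalized.setdefault(m, {})[d] = v
    # Weighted sum per model.
    return {m: sum(weights[d] * normalized[m][d] for d in dims) for m in models}

# Fabricated raw scores for three candidate detectors (illustration only).
models = {
    "yolov11s": {"performance": 0.52, "balance": 0.90, "efficiency": 0.95, "adaptability": 0.80},
    "rtdetr-l": {"performance": 0.56, "balance": 0.70, "efficiency": 0.30, "adaptability": 0.60},
    "yolov8n":  {"performance": 0.45, "balance": 0.85, "efficiency": 0.99, "adaptability": 0.75},
}
# An edge-deployment weighting that emphasizes efficiency alongside accuracy.
weights = {"performance": 0.3, "balance": 0.2, "efficiency": 0.3, "adaptability": 0.2}

scores = medas_score(models, weights)
best = max(scores, key=scores.get)
```

Because each dimension is normalized across the candidate pool before weighting, no single raw metric scale (e.g., mAP in percent vs. GFLOPs) can dominate the composite; shifting the weights re-ranks the pool to match a given deployment priority.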
Citation
Guo, B., "MEDAS: A Multi-Dimensional Framework for Vehicle Detection Model Selection," WCX SAE World Congress Experience, Detroit, Michigan, United States, April 14, 2026.
Additional Details
Content Type: Technical Paper
Language: English