An Image-Derived Multimodal Framework for Automotive Tire Exterior Noise Prediction
2026-01-0220
To be published on 04/07/2026
- Content
- This study presents an image-derived multimodal AI framework for early-stage tire noise evaluation. The proposed model requires only multi-angle photographs captured by a standard smartphone and basic tire specifications. From these images, scaled three-dimensional (3D) meshes and fixed-view depth maps are reconstructed and combined with numerical parameters within a neural network architecture. Three input branches—a point-cloud–gradient branch, a depth-map convolutional neural network (CNN) branch, and a specification multi-layer perceptron (MLP) branch—are jointly trained using a composite loss that integrates frequency-weighted mean squared error (MSE), spectral cosine similarity, FFT-domain consistency, and A-weighted sound-level terms. A dataset of 28 tires, spanning passenger, SUV, and pickup applications for both battery electric vehicles (BEVs) and internal combustion engine (ICE) vehicles, was evaluated using leave-one-out (LOO) cross-validation. The model achieved a mean absolute error of 1.52 dB for overall A-weighted near-field noise, with a maximum error of 4.55 dB observed for a high-performance tread. Predicted 1/3-octave spectra preserved dominant peaks, valleys, and overall energy distribution, supporting engineering-level interpretation. The workflow avoids dedicated 3D scanning hardware and CAD models, enabling low-cost virtual screening prior to prototype manufacturing. In addition, the study treats near-field noise as the primary modeling target and describes a minimal, data-driven calibration pathway from near-field predictions to other exterior-noise metrics, which can be extended in future work.
- Citation
- Shao, Guangxin, Scott Shopoff, and Nicholas Franklin, "An Image-Derived Multimodal Framework for Automotive Tire Exterior Noise Prediction," SAE Technical Paper 2026-01-0220, 2026.
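- Illustrative loss sketch
- The composite loss described in the abstract combines frequency-weighted MSE, spectral cosine similarity, FFT-domain consistency, and an A-weighted sound-level term. The snippet below is a minimal, hypothetical PyTorch sketch of such a loss, not the authors' implementation: the band-center frequencies, the use of the A-weighting curve as the per-band weighting, the FFT-consistency formulation, and the term weights (w_mse, w_cos, w_fft, w_level) are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's code) of a composite spectral loss of the kind
# described in the abstract. Shapes, weights, and band frequencies are assumed.
import torch
import torch.nn.functional as F


def a_weighting_db(freq_hz: torch.Tensor) -> torch.Tensor:
    """Standard A-weighting curve in dB for the given frequencies (Hz)."""
    f2 = freq_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * torch.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * torch.log10(ra) + 2.00


def composite_loss(pred_db, target_db, band_freqs_hz,
                   w_mse=1.0, w_cos=0.1, w_fft=0.1, w_level=0.05):
    """pred_db, target_db: (batch, n_bands) 1/3-octave band levels in dB."""
    # 1) Frequency-weighted MSE: per-band weights derived here from the
    #    A-weighting curve (an assumed choice of weighting).
    band_w = torch.softmax(a_weighting_db(band_freqs_hz) / 10.0, dim=0)
    loss_mse = torch.mean(band_w * (pred_db - target_db) ** 2)

    # 2) Spectral cosine similarity: penalizes mismatch in spectral shape.
    loss_cos = 1.0 - F.cosine_similarity(pred_db, target_db, dim=-1).mean()

    # 3) FFT-domain consistency: one plausible reading, comparing magnitude
    #    spectra of the band-level sequences themselves.
    loss_fft = F.mse_loss(torch.fft.rfft(pred_db, dim=-1).abs(),
                          torch.fft.rfft(target_db, dim=-1).abs())

    # 4) A-weighted overall-level term: error of the energy-summed dBA value.
    aw = a_weighting_db(band_freqs_hz)

    def overall_dba(x_db):
        return 10.0 * torch.log10(torch.sum(10.0 ** ((x_db + aw) / 10.0), dim=-1))

    loss_level = F.l1_loss(overall_dba(pred_db), overall_dba(target_db))

    return (w_mse * loss_mse + w_cos * loss_cos
            + w_fft * loss_fft + w_level * loss_level)


if __name__ == "__main__":
    # Dummy data: 4 tires x 8 one-third-octave bands, levels in dB.
    freqs = torch.tensor([100., 125., 160., 200., 250., 315., 400., 500.])
    pred = 70.0 + 3.0 * torch.randn(4, 8)
    target = 70.0 + 3.0 * torch.randn(4, 8)
    print(float(composite_loss(pred, target, freqs)))
```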