Predictive maintenance is critical to improving the reliability, safety, and operational efficiency of connected vehicles. However, classic supervised learning methods for fault prediction rely heavily on large-scale labeled failure data, which are difficult to obtain and costly to curate manually in real automotive settings. In this paper, we present a novel self-supervised anomaly detection model that predicts faults without the need for labeled failures, using only operational data recorded while the vehicle systems are healthy.
The method relies on self-supervised pretext tasks, such as masked signal reconstruction and future telemetry prediction, to learn nominal multi-sensor dynamics (e.g., temperature, pressure, current, vibration) while jointly minimizing the deviation between encoded/decoded signals and normal patterns in the latent space. An unsupervised anomaly detection model is then used to flag when the learned patterns are violated. Combined with data-driven prognostics, this enables early fault detection in key subsystems such as batteries, electric motors, brake systems, and cooling systems.
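As a concrete illustration of the masked-reconstruction pretext task and reconstruction-error anomaly scoring described above, a minimal sketch follows. The architecture, layer sizes, masking ratio, and names (e.g., MaskedTelemetryAE) are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class MaskedTelemetryAE(nn.Module):
    """Small autoencoder over per-timestep multi-sensor readings (assumed setup)."""
    def __init__(self, n_sensors: int = 4, hidden: int = 32, latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_sensors, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_sensors))

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Zero out masked sensor channels and reconstruct the full reading.
        return self.decoder(self.encoder(x * mask))

def train_step(model, optimizer, batch, mask_ratio=0.25):
    # Pretext task: randomly hide sensor channels and reconstruct them
    # from the remaining healthy telemetry.
    mask = (torch.rand_like(batch) > mask_ratio).float()
    recon = model(batch, mask)
    loss = ((recon - batch) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def anomaly_score(model, x):
    # At inference, large reconstruction error signals deviation from
    # the nominal dynamics learned on healthy data.
    with torch.no_grad():
        recon = model(x, torch.ones_like(x))
        return ((recon - x) ** 2).mean(dim=-1)
```

In this sketch, training uses only healthy telemetry, and a threshold on anomaly_score (calibrated on a held-out healthy set) would trigger a maintenance alert.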
We evaluate the framework on public benchmark datasets, where it detects early anomalies with high accuracy and recall, outperforming conventional threshold-based methods. The results highlight the value of leveraging data from normal, healthy operation to build maintenance strategies that scale well, adapt easily, and reduce costs, particularly for connected vehicle fleets. The model also supports interpretability by identifying which telemetry signals contribute most to each detected anomaly, enabling timely and actionable maintenance decisions.
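One simple way to realize the per-signal attribution mentioned above is to rank sensors by their per-channel reconstruction error; the sketch below builds on the autoencoder above, and both the scheme and the sensor names are assumptions rather than the paper's actual attribution mechanism.

```python
import torch

SENSOR_NAMES = ("temperature", "pressure", "current", "vibration")  # assumed channel order

def attribute_anomaly(model, x, sensor_names=SENSOR_NAMES):
    # Per-channel reconstruction error, averaged over the batch: channels that
    # the model can no longer reconstruct are the likely drivers of the anomaly.
    with torch.no_grad():
        recon = model(x, torch.ones_like(x))
        per_channel_err = ((recon - x) ** 2).mean(dim=0)
    return sorted(zip(sensor_names, per_channel_err.tolist()),
                  key=lambda kv: kv[1], reverse=True)
```

The ranked output (highest-error signal first) gives maintenance staff a starting point, e.g., elevated vibration error pointing toward a motor bearing inspection.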
This work offers a practical, proactive approach to monitoring vehicle health, helping fleets maximize uptime while reducing unexpected breakdowns and costly repairs.