Letter from the Guest Editors
The transportation industry is transforming with the integration of advanced data technologies, edge devices, and artificial intelligence (AI). Intelligent transportation systems (ITS) are pivotal in optimizing traffic flow and safety. Central to this are transportation management centers, which manage transportation systems, traffic flow, and incident responses. Leveraging Advanced Data Technologies for Smart Traffic Management explores emerging trends in transportation data, focusing on data collection, aggregation, and sharing. Effective data management, AI application, and secure data sharing are crucial for optimizing operations. Integrating edge devices with existing systems presents challenges impacting security, cost, and efficiency. Ultimately, AI in transportation offers significant opportunities to predict and manage traffic conditions. AI-driven tools analyze historical data and current conditions to forecast future events. The importance of multidisciplinary approaches and
Regarding the development of automated driving, manufacturers, technology startups, and systems developers have taken different approaches. Some are on the path toward stand-alone vehicles, relying mostly on onboard sensors and intelligence. On the other hand, the connected, cooperative, and automated mobility (CCAM) approach relies on additional communication and information exchange to ensure safe and secure operation. CCAM holds great potential to improve traffic management, road safety, equity, and convenience. In both approaches, increasingly large amounts of data are generated and used by functions in perception, situational awareness, path prediction, and decision-making. The use of artificial intelligence is instrumental in processing such data, and in that context, "edge AI" is a more recent type of implementation. Edge Artificial Intelligence in Cooperative, Connected, and Automated Mobility explores perspectives on edge AI for CCAM, examines primary applications, and
While working with deaf students for more than a decade and a half, Bader Alsharif, Ph.D. candidate in the Florida Atlantic University Department of Electrical Engineering and Computer Science, saw firsthand the communication struggles that his students faced daily.
The implementation of active sound design models in vehicles requires precise tuning of synthetic sounds to harmonize with existing interior noise, driving conditions, and driver preferences. This tuning process is often time-consuming and intricate, especially when facing the various driving styles and preferences of target customers. Incorporating user feedback into the tuning process of Electric Vehicle Sound Enhancement (EVSE) offers a solution. A user-focused empirical test drive approach can be applied, providing a comprehensive understanding of the EVSE characteristics and highlighting areas for improvement. Although effective, the process includes many manual tasks, such as transcribing driver comments, classifying feedback, and identifying clusters. By integrating driving simulator technology into the test drive assessment method and employing machine learning algorithms for evaluation, the EVSE workflow can be more seamlessly integrated. But do the simulated test drive results
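The manual steps named above (transcribing comments, classifying feedback, identifying clusters) are exactly the kind of work that off-the-shelf machine learning can automate. As a hedged illustration only, not the authors' pipeline, the sketch below clusters a handful of invented driver comments with TF-IDF features and k-means; the comment texts and cluster count are assumptions.

```python
# Illustrative sketch: grouping transcribed driver feedback from EVSE
# test drives. The comments and the two-cluster choice are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "motor sound too loud at highway speed",
    "sound is too loud when accelerating",
    "I like the futuristic tone at low speed",
    "pleasant futuristic tone in the city",
    "whine is annoying during acceleration",
    "low-speed tone feels pleasant and calm",
]

# Convert free text to TF-IDF vectors, then group similar comments.
vectors = TfidfVectorizer().fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for comment, label in zip(comments, labels):
    print(label, comment)
```

In a real workflow the cluster labels would feed back into the tuning loop, flagging which sound characteristics draw consistent praise or complaints.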
In the highly competitive automotive industry, optimizing vehicle components for superior performance and customer satisfaction is paramount. Hydrobushes play an integral role within vehicle suspension systems by absorbing vibrations and improving ride comfort. However, the traditional methods for tuning these components are time-consuming and heavily reliant on extensive empirical testing. This paper explores the advancing field of artificial intelligence (AI) and machine learning (ML) in the hydrobush tuning process, utilizing algorithms such as random forest, artificial neural networks, and logistic regression to efficiently analyze large datasets, uncover patterns, and predict optimal configurations. The study focuses on comparing these three AI/ML-based approaches to assess their effectiveness in improving the tuning process. A case study is presented, evaluating their performance and validating the most effective method through physical application, highlighting the potential
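The comparison described above can be sketched in a few lines. The example below is not the paper's dataset or result: it trains the three named model families on a synthetic stand-in for hydrobush configurations, where the label marks whether a configuration meets an assumed comfort target.

```python
# Hedged sketch: comparing random forest, neural network, and logistic
# regression on a synthetic hydrobush-tuning classification task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assumed features: stiffness, damping, fluid viscosity (normalized).
X = rng.uniform(0, 1, size=(400, 3))
# Synthetic rule standing in for "configuration passes ride-comfort test".
y = ((0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.4 * X[:, 2]) > 0.25).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "random forest": RandomForestClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
    "logistic regression": LogisticRegression(),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, score in scores.items():
    print(f"{name}: test accuracy {score:.2f}")
```

The real study's validation step, applying the best model's predicted configuration physically, has no analogue here; this only shows the model-comparison scaffolding.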
In the era of Industry 4.0, the maintenance of factory equipment is evolving with new systems using predictive or prescriptive methods. These methods leverage condition monitoring through digital twins, artificial intelligence, and machine learning techniques to detect early signs of faults, along with their types and locations. Bearings and gears are among the most common components, and cracking, misalignment, rubbing, and bowing are the most common failure modes in high-speed rotating machinery. In the present work, an end-to-end automated machine learning-based condition monitoring algorithm is developed for predicting and classifying internal gear and bearing faults using external vibration sensors. A digital twin model of the entire rotating system, consisting of the gears, bearings, shafts, and housing, was developed as a co-simulation between MSC ADAMS (dynamic simulation tool) and MATLAB (mathematical computation tool). The gear and bearing models were developed mathematically, while
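The core idea of classifying internal faults from external vibration sensors usually rests on spectral features. As a minimal sketch under stated assumptions, not the paper's co-simulation, the example below builds two synthetic vibration signals, one with an extra tone at an assumed bearing-defect frequency, and shows the kind of band-energy feature a fault classifier would consume.

```python
# Minimal sketch with synthetic signals in place of the ADAMS/MATLAB
# co-simulation outputs. Frequencies and amplitudes are illustrative.
import numpy as np

fs = 1000                                   # sample rate, Hz (assumed)
t = np.arange(0, 1, 1 / fs)
mesh = np.sin(2 * np.pi * 50 * t)           # stand-in gear-mesh tone
defect = 0.5 * np.sin(2 * np.pi * 120 * t)  # stand-in bearing-defect tone

healthy = mesh + 0.05 * np.random.default_rng(0).normal(size=t.size)
faulty = healthy + defect

def band_energy(signal, f_lo, f_hi):
    """Spectral energy in [f_lo, f_hi] Hz -- a typical hand-crafted
    feature fed to a fault classifier."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(spectrum[mask] ** 2))

# Energy around the assumed defect frequency separates the two states.
print(band_energy(healthy, 115, 125), band_energy(faulty, 115, 125))
```

An automated pipeline would extract such features for many frequency bands and signals, then hand them to a classifier rather than thresholding a single band by eye.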
High-frequency whine noise in electric vehicles (EVs) is a significant issue that impacts customer perception and alters their overall view of the vehicle. This undesirable acoustic environment arises from the interaction between motor polar resonance and the resonance of the engine mount rubber. To address this challenge, the proposal introduces an innovative approach to predicting and tuning the frequency response by precisely adjusting the shape of rubber flaps, specifically their length and width. The approach combines two solutions: a precise adjustment of rubber flap dimensions and the integration of machine learning (ML). The ML model is trained on historical data, derived from a mixture of physical testing conducted over the years and CAE simulations, to predict the effects of different flap dimensions on frequency response, providing a data-driven basis for optimization. This predictive capability is further enhanced by a Python program that automates the optimization of flap
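The automated optimization step described above can be sketched without the real model. In the hedged example below, a simple quadratic function stands in for the trained ML frequency-response predictor, and a grid search stands in for the Python optimizer; every dimension, range, and number is an assumption for illustration.

```python
# Hedged sketch of automated flap-dimension optimization. The surrogate
# below is NOT the paper's ML model; it is an assumed quadratic bowl.
import itertools

def predicted_whine_level(length_mm, width_mm):
    """Stand-in for the trained ML model: predicted whine response (dB)
    as a function of rubber-flap length and width, with an assumed
    optimum at 40 mm x 12 mm."""
    return 60 + 0.02 * (length_mm - 40) ** 2 + 0.1 * (width_mm - 12) ** 2

lengths = range(20, 61, 5)   # candidate flap lengths, mm (assumed)
widths = range(6, 19, 2)     # candidate flap widths, mm (assumed)

# Exhaustively score the candidate grid and keep the quietest design.
best = min(itertools.product(lengths, widths),
           key=lambda dims: predicted_whine_level(*dims))
print("best (length, width):", best,
      "-> predicted level:", predicted_whine_level(*best))
```

With a real trained model in place of the surrogate, the same loop, or a gradient-free optimizer, would propose flap dimensions for physical validation.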
Artificial intelligence (AI) systems promise transformative advancements, yet their growth has been limited by energy inefficiencies and bottlenecks in data transfer. Researchers at Columbia Engineering have unveiled a groundbreaking solution: a 3D photonic-electronic platform that achieves unprecedented energy efficiency and bandwidth density, paving the way for next-generation AI hardware.
As artificial intelligence (AI) and high-performance computing (HPC) workloads continue to surge, traditional semiconductor technology is reaching its limits. In addition to needing more pure computing power, AI requires more electricity than the world can readily supply. AI data centers alone are expected to consume up to 17 percent of U.S. electricity by 2030,(1) more than triple the amount used in 2023, largely due to generative AI. A query to ChatGPT requires nearly 10 times as much electricity as a regular Google search.(2) This raises urgent concerns about sustainability, especially as Goldman Sachs has forecasted a 160 percent increase in data center electricity usage by 2030.(2)