Browse Topic: Automation
Avoiding and mitigating a potential collision depends on (1) a road user's ability to avoid entering a conflict (conflict avoidance effect) and (2) the road user's response once a conflict has been entered (collision avoidance effect). Reliable performance benchmarking methodologies for assessing ADS performance are an essential component of determining system readiness. This study examined the collision avoidance effect of the Waymo Driver, a currently deployed SAE Level 4 automated driving system (ADS), using a human behavior reference model representing a non-impaired driver with eyes on the conflict (NIEON), a consistently performing, always-attentive driver that does not exist in the human population. Counterfactual simulations were run on responder collision scenarios reconstructed from a 10-year record of fatal human crashes within the Operational Design Domain of the Waymo ADS in Chandler, Arizona. Of 16 simulated conflicts entered, 12
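The counterfactual framing above can be illustrated with a toy responder model: given an initial gap to the conflict, does a driver with a given reaction time and braking capability stop in time? Every parameter below is hypothetical and not drawn from the study; the sketch only shows the kind of question each counterfactual simulation answers.

```python
def stops_in_time(gap_m, speed_mps, reaction_s, decel_mps2):
    """Return True if the responder halts before closing the gap.

    Stopping distance = reaction distance (v * t_r) plus braking
    distance (v^2 / 2a), compared against the initial gap.
    """
    stopping_dist = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
    return stopping_dist < gap_m

# Hypothetical numbers: an always-attentive (NIEON-like) responder reacts
# sooner than a distracted one, so the same conflict may become avoidable.
attentive = stops_in_time(gap_m=40.0, speed_mps=15.0, reaction_s=0.75, decel_mps2=7.0)
delayed = stops_in_time(gap_m=40.0, speed_mps=15.0, reaction_s=2.5, decel_mps2=7.0)
```

In the study itself, the NIEON response is replayed inside reconstructed crash scenarios rather than a closed-form stopping model; this sketch is only for intuition.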
The concept of the vehicle has changed as a result of many innovations over the last decade in connected, autonomous/automated, shared, and electric (CASE) technologies. At the same time, labor shortages in Japan are becoming more serious due to a declining working population. To help resolve these issues, a remote-controlled autonomous driving system called Telemotion has been developed to automate the movement of vehicles in production plants. In this system, the recognition, judgment, and operation functions of driving are handled by a control system outside the vehicle that communicates with it wirelessly. The system uses artificial intelligence (AI) and other advanced technologies to realize safe unmanned autonomous driving, and is already in operation in production plants. Currently, efforts are underway to build a digital twin environment and conduct AI learning using computer
Precision control in SAE Level 4 automated vehicles is essential for enhancing operational efficiency, accuracy, and safety. This work, conducted as part of ARPA-E’s NEXTCAR program, focuses on developing a robust hardware and software control solution to enable drive-by-wire functionality. A previous publication by the authors presented the hardware solutions for overriding the stock vehicle controls. This paper focuses on a model-based and data-driven control algorithm that enables drive-by-wire longitudinal and lateral motion control for a 2021 Honda Clarity Plug-In Hybrid Electric Vehicle. The vehicle was equipped with a set of sensors and an onboard processing unit to enable Level 4 automation. For lateral control, an algorithm was developed to command steering torque to the electronic power steering module, ensuring the vehicle could attain the desired steering angle at varying speeds. The system leveraged feedforward and feedback mechanisms. Feedback controller
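As a rough sketch of the feedforward-plus-feedback structure described above, assuming a PI feedback term and a linear feedforward map; the gains, torque limit, and controller structure here are hypothetical, not the paper's tuned, model-based design for the Clarity's EPS module.

```python
class SteeringTorqueController:
    """Illustrative feedforward + PI feedback law for steering-angle tracking.

    Feedforward anticipates the torque needed for the commanded angle;
    feedback corrects the residual tracking error. All gains are made up.
    """

    def __init__(self, kp=2.0, ki=0.5, kff=0.8, torque_limit=5.0):
        self.kp, self.ki, self.kff = kp, ki, kff
        self.torque_limit = torque_limit  # N*m, saturation of EPS command
        self.integral = 0.0

    def update(self, desired_angle, measured_angle, dt):
        error = desired_angle - measured_angle
        self.integral += error * dt
        torque = (self.kff * desired_angle
                  + self.kp * error
                  + self.ki * self.integral)
        # Clamp to the actuator's torque authority.
        return max(-self.torque_limit, min(self.torque_limit, torque))

# Example call at a 100 Hz control rate:
torque = SteeringTorqueController().update(desired_angle=0.1, measured_angle=0.0, dt=0.01)
```

A real implementation would also schedule gains with vehicle speed, since the torque needed to hold a steering angle changes with speed.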
As the adoption of electric vehicles continues to accelerate, the demand for their development and testing using chassis dynamometers has also increased significantly. Compared with internal combustion engine vehicles, chassis dynamometer testing for electric vehicles typically requires test durations several to several dozen times longer, resulting in substantially increased labor requirements. In addition, low-temperature testing is often required, further intensifying the workload associated with vehicle testing. To address these challenges, this study developed and evaluated a pedal robot designed to enable unmanned and automated testing. The pedal robot developed in this study weighs only 12 kg and can be installed within a few minutes. It is, to the authors’ knowledge, the world’s first pedal robot that mimics human driving behavior by using a single foot to operate both the accelerator and brake pedals. Unlike conventional driving robots, the actuators of the proposed system do
During the 2025 Association of the United States Army (AUSA) annual meeting and exhibition, Forterra announced several major defense industry vehicle partnerships and introduced four new integrated modules designed to enable autonomy for military vehicles, communications, and more. Headquartered in Clarksburg, Maryland, Forterra develops autonomous mission systems for specific defense applications, including robotics and self-driving vehicles. The company has a new partnership with BAE Systems that will rapidly prototype an autonomous Armored Multi-Purpose Vehicle (AMPV). Separately, Forterra has also collaborated with Oshkosh Defense and Raytheon to develop the “DeepFires” autonomous vehicle launcher technology.
Multimodal sensors, capable of simultaneously acquiring multiple physical or chemical signals, have shown broad application potential in fields such as health monitoring, soft robotics, and energy systems. However, current multimodal sensors often suffer from complex fabrication processes and signal decoupling challenges, which limit their practical deployment. To address these issues, this work presents a thin-film temperature–strain multimodal sensor (FTSMS) fabricated via laser processing. The temperature-sensing unit, based on the Seebeck effect, achieves a sensitivity of 9.08 μV/°C, while the strain-sensing unit, using BaTiO₃/AlN@PDMS as the sensitive layer, exhibits a gauge factor (GF) of 43.2. By integrating distinct sensing mechanisms (thermovoltage for temperature and capacitance change for strain), the FTSMS enables self-decoupled measurements across the 20–90 °C range. Applied to lithium-ion battery (LIB) monitoring, it successfully captures real-time temperature and strain variations during charge
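Because each quantity maps to its own readout channel (temperature to thermovoltage, strain to relative capacitance change), self-decoupling reduces to two independent inversions. A minimal sketch using the sensitivities quoted above, with an assumed reference temperature and an assumed linear response (both are illustrative, not from the paper):

```python
SEEBECK_UV_PER_C = 9.08  # temperature sensitivity from the abstract (uV/degC)
GAUGE_FACTOR = 43.2      # strain sensitivity: (dC/C0) per unit strain

def decouple(thermovoltage_uv, delta_c_over_c0, t_ref_c=20.0):
    """Recover (temperature_C, strain) from the two independent channels.

    Temperature affects only the thermovoltage (Seebeck effect) and strain
    affects only the capacitance, so each inversion is a one-line division.
    The reference temperature and linearity here are assumptions.
    """
    temperature_c = t_ref_c + thermovoltage_uv / SEEBECK_UV_PER_C
    strain = delta_c_over_c0 / GAUGE_FACTOR
    return temperature_c, strain

temperature_c, strain = decouple(thermovoltage_uv=272.4, delta_c_over_c0=0.0432)
```

This is the sense in which the two mechanisms give "self-decoupled" measurements: no matrix inversion or calibration cross-terms are needed as long as the channels stay independent.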
This SAE Recommended Practice defines requirements for equipment and supplies to be used in measuring shot peening intensity and other surface enhancement processes. It is intended as a guide toward standard practice and is subject to change to keep pace with experience and technical advances. Guidelines for use of these items can be found in SAE J443 and SAE J2597.
Treat foundational AV safety like seatbelts: make it non-proprietary and universal. An open safety stack, shared scenarios, benchmarks, and core validation tools can speed certification, reduce duplicated V&V, and build public trust while preserving vendor differentiation. The bottleneck isn't compute; it's verification. Autonomous features are shipping in more vehicles and markets, but the gating factor is no longer raw compute. It's whether developers and regulators can verify systems against requirements and validate them against real-world operational design domains (ODDs) with confidence and repeatability. Today, many safety-critical components, from scenario libraries to pass/fail criteria, live in proprietary silos. That fragmentation slows regression testing, complicates regulator audits across regions, and duplicates effort across the industry. The result is an expensive, bespoke path to certification for every program and geography.
The automotive industry is rapidly advancing toward autonomous vehicles, making sensors such as cameras, LiDAR, and radar critical components for ensuring constant information exchange between the vehicle and its surrounding environment. However, these sensors are vulnerable to harsh environmental conditions such as rain, dirt, snow, and bird droppings, which can impair their functionality and disrupt accurate vehicle maneuvers. To ensure all sensors operate effectively, dedicated cleaning is implemented, particularly for Level 3 and higher autonomous vehicles. It is important to test sensor cleaning mechanisms across different weather conditions and vehicle operating scenarios to ensure reliability and performance. One crucial aspect of testing is tracking the trajectory of the cleaning fluid to ensure it does not cause self-soiling of the vehicle or affect the field of view or visibility zones of other components such as the windshield. While wind tunnel tests are valuable, digitalizing
Digital instrument clusters and modern infotainment systems are now crucial parts of cars that improve the user experience and present vital information. It is essential to guarantee the quality and dependability of these systems, particularly in light of safety standards such as ISO 26262. Nevertheless, current testing approaches frequently depend on manual effort, which is time-consuming, prone to mistakes, and challenging to scale, particularly in agile development settings. This study presents a two-phase framework that uses machine learning (ML), computer vision (CV), and image processing techniques to automate the testing of infotainment and digital cluster systems. The NVIDIA Jetson Orin Nano Developer Kit and high-resolution cameras are used in Phase 1's open-loop testing setup to record visual data from infotainment and instrument cluster displays. Without requiring input from the system under test, this phase concentrates on both static and dynamic user interface analysis
The rapid introduction of new Automated Driving Systems (ADS) in recent years has created an urgent need for robust methodologies for the type approval of vehicles equipped with such technologies. As a result, several Regulations addressing this field have been adopted. These Regulations are mainly based on the New Assessment/Test Method (NATM) developed within the World Forum for Harmonization of Vehicle Regulations (WP.29). However, the complexity of the regulatory ecosystem extends beyond type approval and requires thorough analysis to avoid any gap that may jeopardise the feasibility of deploying Automated Driving Vehicles. This paper analyses possible mismatches among the different regulations currently in place or under development and proposes a holistic approach in which the concept of the Operational Design Domain (ODD) plays a central role.
This paper elucidates the implementation of software-controlled synchronous rectification and dead time configuration for bi-directionally controlled DC motors, which are extensively used in applications such as robotics and automotive systems. Synchronous rectification mitigates large current spikes in the H-bridge, reducing conduction losses and improving efficiency [1]. Dead time configuration prevents shoot-through conditions, enhancing motor efficiency and longevity. Experimental results demonstrate significant improvements in motor performance, including reduced thermal stress, decreased power consumption, and increased reliability [2]. The reduction in power consumption in turn minimizes thermal stress, further enhancing the overall efficiency and lifespan of the motor.
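A minimal software sketch of the two ideas, generating one PWM period of gate commands for a single H-bridge leg with dead time blanked around each switching edge. Tick counts and the edge-aligned PWM are illustrative assumptions; production firmware configures dead time in the MCU's timer peripheral rather than in a per-tick loop.

```python
def hbridge_gate_sequence(duty, period_ticks=100, dead_ticks=2):
    """Return (high_side, low_side) gate commands for one PWM period.

    Synchronous rectification: the low-side FET conducts (instead of its
    body diode) whenever the high side is off, cutting conduction loss.
    Dead time: both gates are blanked for dead_ticks around each switching
    edge so the two FETs can never be on simultaneously (no shoot-through).
    """
    on_ticks = int(duty * period_ticks)
    high, low = [], []
    for t in range(period_ticks):
        near_edge = abs(t - on_ticks) < dead_ticks or t < dead_ticks
        high.append(t < on_ticks and not near_edge)
        low.append(t >= on_ticks and not near_edge)
    return high, low

high, low = hbridge_gate_sequence(duty=0.5)
assert not any(h and l for h, l in zip(high, low))  # never both on
```

The dead-time length trades off safety margin against the brief interval where the body diode (with its higher voltage drop) must carry the current, which is exactly the loss that synchronous rectification otherwise removes.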
It all started when Owen Kent and Todd Roberts became roommates at the University of California, Berkeley. Owen has muscular dystrophy and had recently acquired a robotic arm, which he noticed he was using for range-of-motion exercises. Todd had come to Berkeley to study mechanical engineering with a focus on biomechanics, and both were enrolled in Designing for the Human Body, a biomechanics course taught by Mechanical Engineering Professor Grace O’Connell.