Browse Topic: Automation

While semi-autonomous driving (SAE Level 3 and 4) is already partially a reality, the driver still needs to take over driving upon notice. Hence, the cockpit cannot be designed freely to accommodate spaces for non-driving-related activities. In the following use case, a mobile workplace is created by integrating a translucent acrylic glass pane into the cockpit and introducing joystick steering of the car. Using the Virtual Desktop 1 technology, a software layer, any desktop application can be displayed, freely transformable, on arbitrary physical and virtual surfaces. Thus, a complete Windows environment can be distributed across all curved and flat surfaces of an interior. The concept is further enhanced by a voice-driven generative AI that helps to summarize documents. A physical and a virtual demonstrator are created to experience and assess the mobile workspace, the well-being of the driver, external influences, and psychological aspects. The physical demonstrator is a
Beutenmüller, Frank; Reining, Nine; Rosenstiel, Reto; Schmidt, Maximilian; Layer, Selina; Bues, Matthias; Mendonca, Daisy
Computer-aided synthesis and development tools are essential for discovering and optimizing innovative concepts. Evaluating different concepts and making informed decisions relies heavily on accurate assessments of system properties. Estimating these properties in the early stages of vehicle development is challenging due to the depth of modelling required. To enable a cost prognosis for driving assistance and automated driving functions, including software and hardware properties, a cost model was developed at the Institute of Automotive Engineering. The methodology and cost model focus on multiple combined approaches. This includes a bottom-up approach for the hardware. The costs of the software components are integrated into the model with the help of existing literature data and an exponential regression. For a comprehensive view of the total costs, the model is also supplemented by a top-down approach for estimating the costs of other hardware components. The
Sturm, Axel; Hichri, Bassem; Rohde García, Álvaro; Henze, Roman
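As an illustration of the exponential-regression step described in the abstract above, the sketch below fits a simple exponential cost curve to a handful of made-up literature data points; the variable names, values, and use of SciPy's curve_fit are assumptions for illustration, not the institute's actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical literature data: software scope (e.g., automation level or a
# size proxy) vs. reported development cost. Values are illustrative only.
scope = np.array([1.0, 2.0, 3.0, 4.0])
cost = np.array([5.0, 12.0, 30.0, 80.0])   # e.g., million EUR

def exp_model(x, a, b):
    """Exponential cost growth: cost = a * exp(b * x)."""
    return a * np.exp(b * x)

params, _ = curve_fit(exp_model, scope, cost, p0=(1.0, 1.0))
a_hat, b_hat = params
print(f"fitted software cost model: {a_hat:.2f} * exp({b_hat:.2f} * x)")

# A bottom-up hardware estimate (sum of component costs) could then be added
# to the regression output to approximate the total system cost.
```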
This paper deals with autonomous vehicle trajectory planning for avoidance maneuver. It introduces a trajectory planning approach that allows for static obstacle avoidance maneuvers. Specifically, this study proposes a generalized geometric formulation based on Sigmoid functions in order to generate a smooth path that guides the vehicle on a lateral deviation and returns to the initial lane. In addition, the method considers various geometrical and dynamic constraints to ensure vehicle stability while taking into account a safety area around the obstacle. The algorithm validation is conducted on the professional CarMaker simulator by associating the path generation module with a robust lateral tracking controller. The results demonstrate the effectiveness of the proposed planning method in the field of autonomous driving vehicle control.
Vigne, BenoitGiuliani, Pio MicheleOrjuela, RodolfoBasset, Michel
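The sigmoid-based lateral deviation described in the abstract above can be illustrated with a minimal sketch: two shifted sigmoids generate a smooth lane departure and a return to the initial lane. Parameter names and values below are assumed for illustration and are not the authors' exact formulation.

```python
import numpy as np

def sigmoid_offset(x, d_lat=3.5, x_center=20.0, steepness=0.3):
    """Lateral offset y(x) rising smoothly from 0 to d_lat.

    x         : longitudinal positions along the lane [m]
    d_lat     : lateral deviation needed to clear the obstacle [m]
    x_center  : longitudinal position where the transition is centered [m]
    steepness : controls how sharply the path deviates (curvature proxy)
    """
    return d_lat / (1.0 + np.exp(-steepness * (x - x_center)))

def avoidance_path(x, d_lat=3.5, x_out=20.0, x_back=60.0, steepness=0.3):
    """Full avoidance profile: deviate around the obstacle, then return to
    the initial lane by subtracting a second, delayed sigmoid."""
    return (sigmoid_offset(x, d_lat, x_out, steepness)
            - sigmoid_offset(x, d_lat, x_back, steepness))

x = np.linspace(0.0, 100.0, 500)
y = avoidance_path(x)
# The resulting curvature can be checked against dynamic limits and a safety
# margin around the obstacle before handing the path to a tracking controller.
```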
We present DISRUPT, a research project to develop a cooperative traffic perception and prediction system based on networked infrastructure and vehicle sensors. Decentralized tracking and prediction algorithms are used to estimate the dynamic state of road users and predict their state in the near future. Compared to centralized approaches, which currently dominate traffic perception, decentralized algorithms offer advantages such as greater flexibility, robustness and scalability. Mobile sensor boxes are used as infrastructure sensors and the locally calculated state estimates are communicated in such a way that they can augment local estimates from other sensor boxes and/or vehicles. In addition, the information is transferred to a cloud that collects the local estimates and provides traffic visualization functionalities. The prediction module then calculates the future dynamic state based on neurocognitive behavior models and a measure of a road user's risk of being involved in
Beutenmüller, Frank; Brostek, Lukas; Doberstein, Christian; Han, Longfei; Kefferpütz, Klaus; Obstbaum, Martin; Pawlowski, Antonia; Rössert, Christian; Sas-Brunschier, Lucas; Schön, Thilo; Sichermann, Jörg
Human driver errors, such as distracted driving, inattention, and aggressive driving, are the leading causes of road accidents. Understanding the underlying factors that contribute to these behaviors is critical for improving road safety. Previous studies have shown that physiological states, like raised heart rates due to stress and anxiety, can influence driving behavior, leading to erratic driving and an increased risk of accidents. In this study, we conducted on-road tests using a measurement system based on the Driver-Driven vehicle-Driving environment (3D) method. We collected physiological signals, specifically electrocardiography (ECG) data, from human drivers to examine the relationship between physiological states and driving behaviors. The aim was to determine whether ECG can serve as an indicator of potential risky driving behaviors, such as sudden acceleration and frequent steering adjustments. This information enables automated driving (AD) systems to intervene in dangerous
Ji, Dejie; Flormann, Maximilian; Bollmann, Julian; Henze, Roman; Deserno, Thomas M.
Trajectory planning is a major challenge in robotics and autonomous vehicles, ensuring both efficient and safe navigation. The primary objective of this work is to generate an optimal trajectory connecting a starting point to a destination while meeting specific requirements, such as minimizing travel distance and adhering to the vehicle’s kinematic and dynamic constraints. The developed algorithms for trajectory design, defined as a sequence of arcs and straight segments, offer a significant advantage due to their low computational complexity, making them well-suited for real-time applications in autonomous navigation. The proposed trajectory model serves as a benchmark for comparing actual vehicle paths in trajectory control studies. Simulation results demonstrate the robustness of the proposed method across various scenarios.
Soundouss, Halima; Msaaf, Mohammed; Belmajdoub, Fouad
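To illustrate the arc-and-straight-segment representation mentioned in the abstract above, here is a minimal sketch of such a path model with a total-length computation and a minimum-turning-radius check; the data structures and example values are assumptions for illustration, not the authors' algorithm.

```python
import math
from dataclasses import dataclass

@dataclass
class Straight:
    length: float            # segment length [m]

@dataclass
class Arc:
    radius: float            # turning radius [m]
    angle: float             # swept angle [rad]; sign gives turn direction

    @property
    def length(self) -> float:
        return abs(self.radius * self.angle)

def path_length(segments) -> float:
    """Total travel distance of a path given as a sequence of primitives."""
    return sum(s.length for s in segments)

def respects_min_radius(segments, r_min: float) -> bool:
    """Kinematic constraint check: every arc must use a radius >= r_min."""
    return all(s.radius >= r_min for s in segments if isinstance(s, Arc))

# Example: right arc, straight run, left arc (a lane-change-like maneuver)
path = [Arc(radius=8.0, angle=-math.pi / 6), Straight(25.0),
        Arc(radius=8.0, angle=math.pi / 6)]
print(path_length(path), respects_min_radius(path, r_min=6.0))
```

Because such paths are closed-form sequences of primitives, evaluating length and constraints is cheap, which is consistent with the low computational complexity the abstract highlights.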
Swimming robots play a crucial role in mapping pollution, studying aquatic ecosystems, and monitoring water quality in sensitive areas such as coral reefs or lake shores. However, many devices rely on noisy propellers, which can disturb or harm wildlife. The natural clutter in these environments — including plants, animals, and debris — also poses a challenge to robotic swimmers.
The global medical device manufacturing industry is undergoing a rapid transformation driven by technological innovation, automation, and increasing demands for customized, high-quality care. For engineers at the heart of medtech manufacturing, understanding the latest technologies is crucial not only for maintaining competitiveness but also for ensuring regulatory compliance, improving time to market, and optimizing production workflows.
Low-cost jelly-like materials, developed by researchers at the University of Cambridge, can sense strain, temperature, and humidity. And unlike earlier self-healing robots, they can also partially repair themselves at room temperature.
Repartly, a startup based in Guetersloh, Germany, is using ABB's collaborative robots to repair and refurbish electronic circuit boards in household appliances. Three GoFa cobots handle sorting, visual inspection, and precise soldering tasks, enabling the company to enhance efficiency and maintain high quality standards.
Innovators at NASA Johnson Space Center have developed a robotic system whose primary structural platform, or “orb,” can be injected into a pipe network and perform reconnaissance of piping infrastructure and other interior volumes. When deployed, this technology uses throttled fluid flow from a companion device for passive propulsion. A tethered line facilitates directional control by the orb’s operator, allowing it to navigate through various piping configurations, including 90° junctions.
It’s a game a lot of us played as children — and maybe even later in life: unspooling measuring tape to see how far it would extend before bending. But to engineers at the University of California San Diego, this game was an inspiration, suggesting that measuring tape could become a great material for a robotic gripper.
For the team at SmartCap, building top-notch gear for outdoor adventurers isn’t just a business — it’s a passion driven by their own love for the wild. But as demand for their rugged, modular truck caps soared after their move to North America in 2022, they hit a snag: How do you ramp up production without sacrificing the meticulous quality you are known for, all while navigating a tough labor market? Their answer? A bold step into the world of intelligent automation, teaming up with GrayMatter Robotics, and employing the company’s innovative Scan&Sand™ system.
Researchers have developed a tiny magnetic robot that can take 3D scans from deep within the body and could revolutionize early cancer detection.
A team of UCLA engineers and their colleagues have developed a new design strategy and 3D printing technique to build robots in one single step. The breakthrough enabled the entire mechanical and electronic systems needed to operate a robot to be manufactured all at once by a new type of 3D printing process for engineered active materials with multiple functions (also known as metamaterials). Once 3D printed, a “meta-bot” will be capable of propulsion, movement, sensing, and decision-making.
Engineers have designed robots that crawl, swim, fly, and even slither like a snake, but no robot can hold a candle to a squirrel, which can parkour through a thicket of branches, leap across perilous gaps and execute pinpoint landings on the flimsiest of branches.
When we last heard from MELD Manufacturing, the large-scale 3D printer supplier was taking first place in the Robotics/Automation/Manufacturing category at the 2018 .
Letter from the Guest Editors
Liang, Ci; Törngren, Martin
Industrial bearings are critical components in aerospace, industrial, and automotive manufacturing, where their failures can result in costly downtime. Traditional fault diagnosis typically depends on time-consuming on-site inspections conducted by specialized field engineers. This study introduces an automated Artificial Intelligence virtual agent system that functions as a maintenance technician, empowering on-site personnel to perform preliminary diagnoses. By reducing the dependence on specialized engineers, this technology aims to minimize downtime. The agentic Artificial Intelligence system leverages agents backed by Computer Vision and Large Language Models to guide the inspection process, answer queries from a comprehensive knowledge base, analyze defect images, and generate detailed reports with actionable recommendations. Multiple deep learning algorithms are provisioned as backend API tools to support the agentic workflow. This study details the
Chandrasekaran, Balaji
Industries that require high-accuracy automation in the creation of high-mix/low-volume parts, such as aerospace, often face cost constraints with traditional robotics and machine tools due to the need for many pre-programmed tool paths, dedicated part fixtures, and rigid production flow. This paper presents a new machine learning (ML) based vision mapping and planning technique, created to enhance flexibility and efficiency in robotic operations, while reducing overall costs. The system is capable of mapping discrete process targets in the robot work envelope that the ML algorithms have been trained to identify, without requiring knowledge of the overall assembly. Using a 2D camera, images are taken from multiple robot positions across the work area and are used in the ML algorithm to detect, identify, and predict the 6D pose of each target. The algorithm uses the poses and target identifications to automatically develop a part program with efficient tool paths, including
Langan, Daniel; Hall, Michael; Goldberg, Emily; Schrandt, Sasha
Additive manufacturing has been a game-changer in helping to create parts and equipment for the Department of Defense's (DoD's) industrial base. A naval facility in Washington state has become a leader in implementing additive manufacturing and repair technologies using various processes and materials to quickly create much-needed parts for submarines and ships. One of the many industrial buildings at the Naval Undersea Warfare Center Division, Keyport, in Washington, is the Manufacturing, Automation, Repair and Integration Networking Area Center, a large development center housing various additive manufacturing systems.
Abdul Hamid, Umar Zakir; Eastman, Brittany
In the automobile industry, ensuring the safety of automated vehicles equipped with an automated driving system (ADS) is becoming a significant focus due to the increasing development and deployment of automated driving. Automated driving depends on sensing both the external and internal environments of a vehicle, utilizing perception sensors and algorithms, and electrical/electronic (E/E) systems for situational awareness and response. ISO 21448 is the standard for Safety of the Intended Functionality (SOTIF), which aims to ensure that an ADS operates safely within its intended functionality. SOTIF focuses on preventing or mitigating potential hazards that may arise from the limitations or failures of the ADS, including hazards due to insufficiencies of specification or performance insufficiencies, as well as foreseeable misuse of the intended functionality. However, the challenge lies in ensuring the safety of vehicles despite the limited availability of extensive and systematic
Patel, Milin; Jung, Rolf; Khatun, Marzana
Aiming at the insufficient cross-scene detection performance of current traffic target detection and recognition algorithms in automated driving, we propose an improved cross-scene traffic target detection and recognition algorithm based on YOLOv5s. First, the CIoU loss function of YOLOv5s, whose penalty term is insufficient, is replaced with the more effective EIoU. Then, a context enhancement module (CAM) replaces the original SPPF module to improve feature detection and extraction. Finally, the global attention mechanism GCB is integrated with the traditional C3 module to form a new C3GCB module, which works cooperatively with the CAM module. The improved YOLOv5s algorithm was verified on KITTI, BDD100K, and self-built datasets. The results show that mAP@0.5 reaches 95.1%, 72.2%, and 97.5%, respectively, which is 0.6%, 2.1%, and 0.6% higher than that of YOLOv5s. Therefore, the improved algorithm has better detection
Ning, Qianjia; Zhang, Huanhuan; Cheng, Kehan
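For readers unfamiliar with the CIoU-to-EIoU change mentioned in the abstract above, the following is a minimal PyTorch-style sketch of the commonly published EIoU loss for axis-aligned boxes; it illustrates the standard formulation rather than the authors' exact implementation.

```python
import torch

def eiou_loss(pred, target, eps=1e-7):
    """EIoU loss for (N, 4) boxes given as (x1, y1, x2, y2).

    Compared with CIoU, EIoU penalizes width and height differences directly
    instead of using an aspect-ratio term. Minimal illustrative sketch.
    """
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)

    # Union and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])

    # Center, width, and height distance penalties
    center_dist = ((pred[:, 0] + pred[:, 2]) - (target[:, 0] + target[:, 2])) ** 2 / 4 \
                + ((pred[:, 1] + pred[:, 3]) - (target[:, 1] + target[:, 3])) ** 2 / 4
    w_dist = ((pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])) ** 2
    h_dist = ((pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])) ** 2

    return (1 - iou
            + center_dist / (cw ** 2 + ch ** 2 + eps)
            + w_dist / (cw ** 2 + eps)
            + h_dist / (ch ** 2 + eps))
```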
Dedicated lanes provide a simpler operating environment for ADS-equipped vehicles than lanes shared with other roadway users, including human drivers, pedestrians, and bicyclists. This final report in the Automation and Infrastructure series discusses how and when various types of lanes, whether general purpose, managed, or specialty lanes, might be temporarily or permanently reserved for ADS-equipped vehicles. Though simulations and economic analysis suggest that widespread use of dedicated lanes will not be warranted until market penetration is much higher, some US states and cities are developing such dedicated lanes now for limited use cases, and other countries are planning more extensive deployment of dedicated lanes. Automated Vehicles and Infrastructure: Dedicated Lanes includes a review of practices across the US as well as case studies from the EU and UK, the Near East, Japan, Singapore, and Canada.
Coyner, Kelley; Bittner, Jason
Visual object tracking technology is a core foundation of intelligent driving, video surveillance, human–computer interaction, and the like. Inspired by the mechanism of human eye gaze, a new correlation filter (CF) tracking algorithm, named the human eye gaze (HEG) tracking algorithm, was proposed in this study. The HEG tracking algorithm expanded the tracking pipeline from the traditional detection-tracking scheme to a detection-judging-tracking scheme by adding a judging module that checks the initial tracking result and re-tracks unreliable results. In addition, the detection module further integrated an edge contour feature with the HOG (histogram of oriented gradients) feature and the color histogram to reduce the algorithm's sensitivity to factors such as deformation and illumination changes. The comparison conducted on the OTB-2015 dataset showed that the overall overlap precision, distance precision, and center location error of the HEG tracking algorithm were
Jiang, Yejie; Jiang, Binhui; Chou, Clifford C.
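The "judging" step described above can be illustrated with one common confidence check for correlation-filter trackers, the peak-to-sidelobe ratio (PSR) of the response map; the window size and threshold below are assumptions, and the paper's actual criterion may differ.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5, eps=1e-7):
    """Confidence measure for a CF response map: a low PSR suggests the
    tracking result is unreliable and re-detection should be triggered.
    Minimal sketch of a common judging criterion (assumed, not the paper's)."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0, c0 = peak_idx
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + eps)

response = np.random.rand(64, 64)           # placeholder response map
if peak_to_sidelobe_ratio(response) < 8.0:  # threshold is an assumption
    pass  # fall back to the detection module and re-track the target
```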
This document describes machine-to-machine (M2M) communication to enable cooperation between two or more traffic participants or CDA devices hosted or controlled by said traffic participants. The cooperation supports or enables performance of the dynamic driving task (DDT) for a subject vehicle equipped with an engaged driving automation system feature and a CDA device. Other participants may include other vehicles with driving automation feature(s) engaged, shared road users (e.g., drivers of conventional vehicles or pedestrians or cyclists carrying compatible personal devices), or compatible road operator devices (e.g., those used by personnel who maintain or operate traffic signals or work zones). Cooperative driving automation (CDA) aims to improve the safety and flow of traffic and/or facilitate road operations by supporting the safer and more efficient movement of multiple vehicles in proximity to one another. This is accomplished, for example, by sharing information that can be
Cooperative Driving Automation (CDA) Committee
This document provides definitions, terminology, and classifications for automated truck and bus vehicle applications. Vehicles covered by this document are those with a GVWR of more than 10,000 pounds and where each vehicle utilizes driving automation systems that perform part or all of the driving task on a sustained basis and that range in level from some driving automation to full driving automation. The document also provides levels of driving automation that apply to the driving automation feature engaged in any given instance of operation of an equipped vehicle. A vehicle may be equipped with a driving automation system that is capable of delivering multiple driving automation features that perform at different levels; the level of driving automation exhibited in any given instance is determined by the feature(s) that are engaged. This document provides guidance for the elements of the dynamic driving task (DDT) for a truck or bus equipped with an Automated Driving System (ADS).
Truck and Bus Automation Safety Committee
Drone show accidents highlight the challenges of maintaining safety in what engineers call “multiagent systems” — systems of multiple coordinated, collaborative, and computer-programmed agents, such as robots, drones, and self-driving cars.
Los Angeles-based plastics contract manufacturer Kal Plastics deployed a UR10e trimming cobot for a fraction of the cost and lead time of a CNC machine, cut trimming time nearly in half, and reduced late shipments to under one percent — all while improving employee safety and growth opportunities.
Advances in artificial intelligence (AI), machine learning (ML), and sensor fusion drive robotics functionality across many applications, including healthcare. Ongoing innovations in high-speed connectivity, edge computing, network redundancy, and fail-safe procedures are crucial to optimizing robotics opportunities. The emergence of natural language processing and emotional AI functionality is poised to propel more intuitive, responsive, and adaptive human-machine interaction.
Researchers at Universidad Carlos III de Madrid (UC3M) have developed a new soft joint model for robots with an asymmetrical triangular structure and an extremely thin central column. This breakthrough, recently patented, allows for versatility of movement, adaptability and safety, and will have a major impact in the field of robotics.
While some developers of autonomous technology for commercial trucks have stalled out, there's renewed energy to deliver augmented ADAS and automated driving systems to mass production. After a tumultuous 2023 that saw several autonomous trucking startups pivot out of or exit the arena entirely, there has been a recent resurgence of investment and efforts to bring the vision of driverless freight fleets to reality. In the wake of firms like Embark, TuSimple and Waymo scaling back or rolling up operations, Aurora, Continental and Knorr-Bremse have all announced continued development of SAE Level 4 systems with the intention to deploy trucks using these systems at scale. OEMs such as Volvo Trucks have also announced updates to existing technologies that will augment current advanced driver-assistance systems (ADAS) to help human drivers become safer behind the wheel.
Wolfe, Matt
Accurate object pose estimation refers to the ability of a robot to determine both the position and orientation of an object. It is essential for robotics, especially in pick-and-place tasks, which are crucial in industries such as manufacturing and logistics. As robots are increasingly tasked with complex operations, their ability to precisely determine an object's six degrees of freedom (6D pose), that is, its position and orientation, becomes critical. This ability ensures that robots can interact with objects in a reliable and safe manner. However, despite advancements in deep learning, the performance of 6D pose estimation algorithms largely depends on the quality of the data they are trained on.
The unicycle self-balancing mobility system offers superior maneuverability and flexibility due to its unique single-wheel grounding feature, which allows it to autonomously perform exploration and delivery tasks in narrow and rough terrains. In this paper, a unicycle self-balancing robot traveling on the lunar terrain is proposed for autonomous exploration on the lunar surface. First, a multi-body dynamics model of the robot is derived based on quasi-Hamilton equations. A three-dimensional terramechanics model is used to describe the interaction between the robot wheels and the lunar soil. To achieve stable control of the robot's attitude, series PID controllers are used for pitch and roll attitude self-balancing control as well as velocity control. The whole robot model and control strategy were built in MATLAB, and the robot's traveling stability was analyzed on the lunar terrain.
Shi, Junwei; Zhang, Kaidi; Duan, Yupeng; Wu, Jinglai; Zhang, Yunqing
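As a rough illustration of the series PID structure described above (a velocity loop feeding the pitch loop, plus a roll-balancing loop), here is a minimal discrete PID sketch; the gains, loop layout, and actuator names are placeholders, not values from the paper's MATLAB model.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 0.001  # control period [s]
pitch_ctrl = PID(kp=120.0, ki=5.0, kd=8.0, dt=dt)   # pitch self-balancing loop
roll_ctrl  = PID(kp=90.0,  ki=2.0, kd=6.0, dt=dt)   # roll self-balancing loop
speed_ctrl = PID(kp=10.0,  ki=1.0, kd=0.0, dt=dt)   # longitudinal velocity loop

# Inside the simulation loop (series structure: the velocity loop shifts the
# pitch setpoint, and the pitch loop produces the drive-wheel command; the
# roll loop commands a hypothetical balancing actuator):
# pitch_ref       = speed_ctrl.update(v_ref, v_meas)
# wheel_command   = pitch_ctrl.update(pitch_ref, pitch_meas)
# balance_command = roll_ctrl.update(0.0, roll_meas)
```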
About 32% of registered vehicles in the U.S. are equipped with automatic emergency braking or forward collision warning (FCW) systems [1]. Retrofitting vehicles with aftermarket devices can accelerate the adoption of FCW, but it is unclear if aftermarket systems perform similarly to original equipment manufacturer (OEM) systems. The performance of four low-cost, user-installable aftermarket windshield-mounted FCW systems was evaluated in various Insurance Institute for Highway Safety (IIHS) rear-end and pedestrian crash avoidance tests and compared with previously tested OEM systems. The presence and timing of FCWs were measured when vehicles approached a stationary passenger car at 20, 40, 50, 60, and 70 km/h, a motorcycle and a dry van trailer at 50, 60, and 70 km/h, an adult pedestrian at 40 and 60 km/h, and a child pedestrian crossing the road at 20 and 40 km/h. Equivalence testing was used to determine if FCW performance was similar for aftermarket and OEM systems. OEM systems provided a
Kidd, David; Floyd, Philip; Aylor, David
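Equivalence testing of the kind mentioned above is commonly carried out with two one-sided tests (TOST); the sketch below applies statsmodels' ttost_ind to made-up warning-time samples with an assumed ±0.5 s margin, purely to show the mechanics, not IIHS data or thresholds.

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

# Hypothetical FCW warning times (time-to-collision at warning, in seconds)
# for an aftermarket system and OEM systems in one test condition.
aftermarket_ttc = np.array([2.1, 2.3, 2.0, 2.2, 2.4])
oem_ttc = np.array([2.4, 2.5, 2.3, 2.6, 2.5])

# TOST: the systems are treated as equivalent if the mean difference lies
# within a pre-specified margin, here +/- 0.5 s (an assumed value).
margin = 0.5
p_overall, lower_test, upper_test = ttost_ind(aftermarket_ttc, oem_ttc,
                                              low=-margin, upp=margin)
verdict = "equivalent" if p_overall < 0.05 else "not shown equivalent"
print(f"TOST p-value: {p_overall:.3f} ({verdict} within +/-{margin} s)")
```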
Towards the goal of real-time navigation of autonomous robots, Iterative Closest Point (ICP) based LiDAR odometry methods are a favorable class of Simultaneous Localization and Mapping (SLAM) algorithms for their robustness under any light conditions. However, even with recent methods, the traditional SLAM challenges persist: odometry drifts under adversarial conditions such as featureless or dynamic environments and rapid robot motion. In this paper, we present a motion-aware continuous-time LiDAR-inertial SLAM framework. We introduce an efficient EKF-ICP sensor fusion solution by loosely coupling poses from the continuous-time ICP with IMU data, designed to improve convergence speed and robustness over existing methods while incorporating a sophisticated motion constraint to maintain accurate localization during rapid motion changes. Our framework is evaluated on the KITTI datasets and artificially motion-induced dataset sequences, demonstrating
Kokenoz, Cigdem; Shaik, Toukheer; Sharma, Abhishek; Pisu, Pierluigi; Li, Bing
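To make the loose-coupling idea concrete, here is a minimal planar EKF sketch in which IMU/odometry increments drive the prediction step and ICP poses enter as direct pose measurements; the state layout, noise values, and simplifications are assumptions that only illustrate the predict/update structure, not the paper's continuous-time formulation.

```python
import numpy as np

class LooselyCoupledEKF:
    """Planar pose EKF, state = [x, y, yaw]; illustrative sketch only."""
    def __init__(self):
        self.x = np.zeros(3)          # pose estimate
        self.P = np.eye(3) * 0.1      # covariance

    def predict(self, dx, dy, dyaw, Q):
        """Propagate with an IMU/odometry increment expressed in the body frame."""
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dyaw])
        F = np.array([[1, 0, -s * dx - c * dy],
                      [0, 1,  c * dx - s * dy],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + Q

    def update(self, icp_pose, R):
        """Correct with an ICP pose treated as a direct pose measurement."""
        H = np.eye(3)
        y = icp_pose - self.x                          # innovation
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi    # wrap yaw error
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = LooselyCoupledEKF()
ekf.predict(0.5, 0.0, 0.02, Q=np.diag([1e-3, 1e-3, 1e-4]))
ekf.update(np.array([0.48, 0.01, 0.025]), R=np.diag([1e-2, 1e-2, 1e-3]))
```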
One of the major issues facing the automated driving system (ADS)-equipped vehicle (AV) industry is how to evaluate the performance of an AV as it navigates a given scenario. The development and validation of a sound, consistent, and transparent dynamic driving task (DDT) assessment (DA) methodology is a key component of the safety case framework (SCF) of the Automated Vehicle – Test and Evaluation Process (AV-TEP) Mission, a collaboration between Science Foundation Arizona and Arizona State University. The DA methodology was presented in earlier work and includes the DA metrics from the recently published SAE J3237 Recommended Practice. This work extends and implements the methodology with an AV developed by OEM May Mobility in four diverse, real-world scenarios: (1) an oncoming vehicle entering the AV’s lane, (2) a vulnerable road user (VRU) crossing in front of the AV’s path, (3) a vehicle executing a three-point turn encroaching into the AV’s path, and (4) the AV exhibiting aggressive
Wishart, Jeffrey; Rahimi, Shujauddin; Swaminathan, Sunder; Zhao, Junfeng; Frantz, Matt; Singh, Satvir; Como, Steven Gerard