Letter from the Guest Editors
Additive manufacturing has been a game-changer in creating parts and equipment for the Department of Defense's (DoD's) industrial base. A naval facility in Washington state has become a leader in implementing additive manufacturing and repair technologies, using various processes and materials to quickly create much-needed parts for submarines and ships. Among the many industrial buildings at the Naval Undersea Warfare Center Division, Keyport, is the Manufacturing, Automation, Repair and Integration Networking Area Center, a large development center housing various additive manufacturing systems.
This document describes machine-to-machine (M2M) communication to enable cooperation between two or more traffic participants or cooperative driving automation (CDA) devices hosted or controlled by those traffic participants. The cooperation supports or enables performance of the dynamic driving task (DDT) for a subject vehicle equipped with an engaged driving automation system feature and a CDA device. Other participants may include other vehicles with driving automation feature(s) engaged, shared road users (e.g., drivers of conventional vehicles, or pedestrians or cyclists carrying compatible personal devices), or compatible road operator devices (e.g., those used by personnel who maintain or operate traffic signals or work zones). CDA aims to improve the safety and flow of traffic and/or facilitate road operations by supporting the safer and more efficient movement of multiple vehicles in proximity to one another. This is accomplished, for example, by sharing information that can be…
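As a rough illustration of the kind of status sharing such M2M cooperation involves, the sketch below defines a hypothetical message record in Python. The field names, units, and JSON encoding are assumptions for illustration, not the message set this document specifies.

```python
# Hypothetical sketch of a status-sharing message between CDA devices.
# Field names and structure are illustrative assumptions, not the
# message set defined by this document.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class StatusShareMessage:
    participant_id: str    # sender (vehicle, personal device, or road operator device)
    latitude: float        # WGS-84 position of the sender
    longitude: float
    speed_mps: float       # current speed, meters per second
    heading_deg: float     # course over ground, degrees clockwise from north
    feature_engaged: bool  # whether a driving automation feature is engaged
    timestamp: float       # Unix time the status was sampled

    def to_json(self) -> str:
        """Serialize for broadcast over an M2M link (e.g., a V2X radio)."""
        return json.dumps(asdict(self))

msg = StatusShareMessage("vehicle-42", 47.70, -122.62, 13.4, 90.0, True, time.time())
print(msg.to_json())
```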
Los Angeles-based plastics contract manufacturer Kal Plastics deployed a UR10e trimming cobot for a fraction of the cost and lead time of a CNC machine, cut trimming time nearly in half, and reduced late shipments to under one percent, all while improving employee safety and growth opportunities.
The rapid development of autonomous vehicles necessitates rigorous testing under diverse environmental conditions to ensure their reliability and safety. One of the most challenging scenarios for both human and machine vision is navigating through rain. This study introduces the Digitrans Rain Testbed, an innovative outdoor rain facility specifically designed to test and evaluate automotive sensors under realistic and controlled rain conditions. The rain plant features a wetted area of 600 square meters and a sprinkled rain volume of 600 cubic meters, providing a comprehensive environment to rigorously assess the performance of autonomous vehicle sensors. Rain poses a significant challenge due to the complex interaction of light with raindrops, leading to phenomena such as scattering, absorption, and reflection, which can severely impair sensor performance. Our facility replicates various rain intensities and conditions, enabling comprehensive testing of radar, lidar, and camera…
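To first order, the scattering and absorption effects mentioned above are often summarized by an exponential attenuation law whose coefficient grows with rain rate. The power-law form below is a standard empirical model (familiar from radar link budgets), given here only as background; the coefficients are sensor- and wavelength-dependent and are not results from this facility.

```latex
% Received power P(R) after a round trip over range R through rain:
%   P_0   reference (clear-air) power
%   \alpha  specific attenuation of the rainy medium
%   R_r   rain rate in mm/h; a, b empirical coefficients
P(R) = P_0 \, e^{-2\alpha R}, \qquad \alpha = a \, R_r^{\,b}
```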
The rapid development of open-source Automated Driving System (ADS) stacks has created a pressing need for clear guidance on their evaluation and selection for specific use cases. This paper introduces a scenario-based evaluation framework combined with a modular simulation framework, offering a scalable methodology for assessing and benchmarking ADS solutions, including but not limited to off-the-shelf designs. The study highlights the lack of clear Operational Design Domain (ODD) descriptions in such systems. Without a common understanding, users must rely on subjective assumptions, which hinders accurate system selection. To address this gap, the study proposes adopting a standardised ISO 34503 ODD description format within the ADS stacks. The application of the proposed framework is showcased through a case study evaluating two open-source systems, Autoware and Apollo. By first defining the assumed system’s ODD, then selecting a relevant scenario, and establishing…
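For context, ISO 34503 organizes ODD attributes into scenery, environmental conditions, and dynamic elements. The sketch below shows what a machine-readable ODD description along those lines might look like; the specific attribute names, values, and the toy in-ODD check are assumptions for illustration, not an excerpt from the standard or from either ADS stack.

```python
# Illustrative ODD description keyed to the three top-level attribute
# categories used by ISO 34503. Concrete attribute names and values are
# assumptions for illustration.
odd_description = {
    "scenery": {
        "drivable_area_type": ["motorway"],
        "lane_count": {"min": 2, "max": 4},
    },
    "environmental_conditions": {
        "rainfall": {"max_mm_per_hour": 10},
        "illumination": ["day"],
    },
    "dynamic_elements": {
        "traffic_agents": ["passenger_car", "truck"],
        "max_speed_kph": 120,
    },
}

def in_odd(scenario: dict, odd: dict) -> bool:
    """Toy check: a scenario is in-ODD if its rainfall and speed stay
    within the declared bounds (a real check would cover every attribute)."""
    env = odd["environmental_conditions"]
    return (scenario["rain_mm_per_hour"] <= env["rainfall"]["max_mm_per_hour"]
            and scenario["speed_kph"] <= odd["dynamic_elements"]["max_speed_kph"])

print(in_odd({"rain_mm_per_hour": 5, "speed_kph": 100}, odd_description))  # True
```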
Over the decades, robotics deployments have been driven by rapid, parallel research advances in sensing, actuation, simulation, algorithmic control, communication, and high-performance computing, among others. Collectively, their integration within a cyber-physical-systems framework has supercharged the increasingly complex realization of the real-time ‘sense-think-act’ robotics paradigm. Successful functioning of modern-day robots relies on seamless integration of increasingly complex systems (coming together at the component, subsystem, system, and system-of-systems levels) as well as their systematic treatment throughout the life cycle (from cradle to grave). As a consequence, managing the physical and algorithmic interdependencies among the multiple system elements is crucial for enabling synergistic (or managing adversarial) outcomes. Furthermore, the steep learning curve for customizing the technology for platform-specific deployment discourages domain…
Accurate object pose estimation refers to the ability of a robot to determine both the position and orientation of an object. It is essential in robotics, especially for pick-and-place tasks, which are crucial in industries such as manufacturing and logistics. As robots are increasingly tasked with complex operations, their ability to precisely determine an object's six-degrees-of-freedom (6D) pose, its position and orientation, becomes critical. This ability ensures that robots can interact with objects reliably and safely. However, despite advancements in deep learning, the performance of 6D pose estimation algorithms largely depends on the quality of the data they are trained on.
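A common way to work with a 6D pose is as a 4x4 homogeneous transform combining a rotation and a translation. The sketch below builds one from roll/pitch/yaw angles and uses it to map a grasp point from the object frame into the robot frame; the ZYX angle convention and the example numbers are assumptions for illustration.

```python
# A 6D pose combines a 3-D position with a 3-D orientation. One common
# representation is a 4x4 homogeneous transform.
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    """Return the 4x4 homogeneous transform for a 6D pose (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # ZYX (yaw-pitch-roll) rotation convention, an assumption for this sketch.
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr               ],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [x, y, z]
    return T

# Map a grasp point from the object frame into the robot (world) frame.
T_obj = pose_to_matrix(0.5, 0.2, 0.1, 0.0, 0.0, np.pi / 2)
grasp_in_object = np.array([0.0, 0.05, 0.0, 1.0])  # homogeneous coordinates
print(T_obj @ grasp_in_object)
```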
Drone show accidents highlight the challenges of maintaining safety in what engineers call “multiagent systems” — systems of multiple coordinated, collaborative, and computer-programmed agents, such as robots, drones, and self-driving cars.
Reproducing driving scenarios involving near-collisions and collisions in a simulator can be useful in the development and testing of autonomous vehicles, as it provides a safe environment to explore detailed vehicular behavior during these critical events. CARLA, an open-source driving simulator, has been widely used for reproducing driving scenarios. CARLA allows for both manual control and traffic manager control (the module that controls vehicles in an autopilot manner in the simulation). However, current versions of CARLA are limited to setting the start and destination points for vehicles controlled by the traffic manager, and are unable to replay precise waypoint paths collected from real-world collision and near-collision scenarios, because collision-free pathfinding modules are built into the system. This paper presents an extension to CARLA’s source code, enabling the replay of exact vehicle trajectories, irrespective of safety implications…
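While the paper's approach modifies CARLA's source code, the stock Python API can coarsely approximate trajectory replay by teleporting a vehicle to each recorded waypoint per tick, which bypasses physics and collision avoidance. The sketch below illustrates the idea; the host/port, the blueprint choice, and the recorded trajectory format are assumptions.

```python
# Coarse approximation of trajectory replay with the stock CARLA Python
# API: teleport the vehicle to each recorded waypoint, one per tick.
# (The paper modifies CARLA's source to do this properly; this sketch only
# illustrates the idea. Host/port and the trajectory format are assumptions.)
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Run the simulator in synchronous mode so one tick == one recorded sample.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

blueprint = world.get_blueprint_library().filter("vehicle.*")[0]

# Hypothetical recorded trajectory: (x, y, z, yaw_deg) per 50 ms sample.
trajectory = [(10.0 + 0.5 * i, 5.0, 0.3, 0.0) for i in range(100)]

x, y, z, yaw = trajectory[0]
spawn = carla.Transform(carla.Location(x, y, z), carla.Rotation(yaw=yaw))
vehicle = world.spawn_actor(blueprint, spawn)

try:
    for x, y, z, yaw in trajectory[1:]:
        # set_transform bypasses physics and collision avoidance entirely,
        # so the recorded path is followed irrespective of safety.
        vehicle.set_transform(carla.Transform(carla.Location(x, y, z),
                                              carla.Rotation(yaw=yaw)))
        world.tick()
finally:
    vehicle.destroy()
```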