ABSTRACT Future autonomous combat vehicles will need to travel off-road through poorly mapped environments. Three-dimensional topography may be known only to a limited extent (e.g. coarse height), and what is known will likely be noisy and of limited resolution. For ground vehicles, 3D topography determines how far ahead the vehicle can “see”: higher vantage points and clear views provide much more useful path planning data than lower vantage points and views occluded by trees and structures. The challenge is incorporating this knowledge into a path planning solution. When should the robot climb higher to get a better view, and when should it continue along the shortest path predicted by current information? We investigated the use of Deep Q-Networks (DQN) to reason over this decision space, comparing performance to conventional methods. In the presence of significant sensor noise, the DQN was more successful in finding a path to the target than A* for all but one type of terrain. Citation: E
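The climb-versus-advance decision above can be illustrated with a minimal tabular Q-learning sketch. All states, actions, rewards, and hyperparameters here are illustrative assumptions; the paper itself uses a Deep Q-Network rather than a lookup table, and this shows only the underlying Q update.

```python
import random

# Hypothetical two-action decision: "climb" (gain a better view at extra
# cost) vs. "advance" (follow the current shortest-path estimate).
ACTIONS = ["climb", "advance"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed learning parameters

q = {}  # (state, action) -> estimated return

def choose(state):
    """Epsilon-greedy action selection over the Q table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """One-step Q-learning backup: Q <- Q + alpha * (target - Q)."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

A DQN replaces the `q` dictionary with a neural network that generalizes across states, which is what makes the approach viable when the state space (noisy terrain maps) is too large to enumerate.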
ABSTRACT Active thermography has been demonstrated to be an effective tool for detection of near-surface corrosion hidden under paint, as well as hidden material loss due to corrosion. Compared to established point inspection techniques (e.g. ultrasound, eddy current), thermography offers fast, wide-area inspection of flat or curved surfaces that does not require direct contact or coupling. In its simplest form, it can be used to perform qualitative inspection using a heat gun or lamp and an uncooled IR camera. Recent developments in thermographic signal processing, coupled with improved IR camera and thermal excitation technology, have resulted in significant advances in resolution, sensitivity, and probability of detection of near- and far-surface corrosion, and the ability to perform quantitative characterization of corrosion
ABSTRACT The effective and safe use of Rough Terrain Cargo Handlers is severely hampered by obstructions to the operator’s view. This results in the inability to see a) in front of the vehicle while driving, b) where to set a carried container, and c) where to maneuver the vehicle’s top handler in order to engage with cargo containers. We present an analysis of these difficulties along with specific solutions that go beyond the non-technical workarounds currently used, including the placement of sensors and the use of image analysis. These solutions address the use of perception to support autonomy, drive assist, active safety, and logistics
ABSTRACT In this study, a styrene butadiene rubber, similar to the rubber used in road wheel backer pads of tracked vehicles, was investigated experimentally under monotonic and fatigue loading conditions. The monotonic loading response of the material was obtained under different stress states (compression and tension), strain rates (0.001/s to 3000/s), and temperatures (-5 °C to 50 °C). The experimental data showed that the material exhibited stress state, strain rate, and temperature dependence. Fatigue loading behavior of the rubber was determined using a strain-life approach for R=0.5 loading conditions with varying strain amplitudes (25 to 43.75 percent) at a frequency of 2 Hz. Microstructural analysis of specimen fracture surfaces was performed using scanning electron microscopy and energy dispersive x-ray spectroscopy to determine the failure mechanisms of the material. The primary failure mechanisms for both loading conditions were found to be the debonding of particles on
ABSTRACT Currently, fielded ground robotic platforms are controlled by a human operator via constant, direct input from a controller. This approach requires constant attention on the part of the operator, decreasing situational awareness (SA). In scenarios where the robotic asset is non-line-of-sight (non-LOS), the operator must monitor visual feedback, typically in the form of a video feed and/or visualization. With the increasing use of personal radios, smart devices/wearable computers, and network connectivity by individual warfighters, an unobtrusive means of robotic control and feedback is becoming increasingly necessary. A proposed intuitive robotic operator control (IROC) involving a heads-up display (HUD), instrumented gesture recognition glove, and ground robotic asset is described in this paper. Under the direction of the Marine Corps Warfighting Laboratory (MCWL) Futures Directorate, AnthroTronix, Inc. (ATinc) is implementing the described integration for
ABSTRACT This paper presents two techniques for autonomous convoy operations, one based on the Ranger localization system and the other a path planning technique within the Robotic Technology Kernel called Vaquerito. The first solution, Ranger, is a high-precision localization system developed by Southwest Research Institute® (SwRI®) that uses an inexpensive downward-facing camera and a simple lighting and electronics package. It is easily integrated onto vehicle platforms of almost any size, making it ideal for heterogeneous convoys. The second solution, Vaquerito, is a human-centered path planning technique that takes a hand-drawn map of a route and matches it to the perceived environment in real time to follow a route known to the operator, but not to the vehicle. Citation: N. Alton, M. Bries, J. Hernandez, “Autonomous Convoy Operations in the Robotic Technology Kernel (RTK)”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI
ABSTRACT Optical distortion measurements for transparent armor (TA) solutions are critical to ensure occupants can see what is happening outside a vehicle. Unfortunately, optically transparent materials often have poorer mechanical properties than their opaque counterparts, which usually results in much thicker layups to provide the same level of protection. Current standards still call for the use of a double exposure method to manually compare the distortion of grid lines. This report presents a similar method of analysis requiring less user input, using items typically available in many mechanics labs: machine vision cameras and digital image correlation software. Citation: J. M. Gorman, “An Easier Approach to Measuring Optical Distortion in Transparent Armor”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 11-13, 2020. The views presented are those of the author and do not necessarily represent the views of DoD or
ABSTRACT Simulation is a critical step in the development of autonomous systems. This paper outlines the development and use of a dynamically linked library for the Mississippi State University Autonomous Vehicle Simulator (MAVS). The MAVS is a library of simulation tools designed to allow for real-time, high performance, ray traced simulation capabilities for off-road autonomous vehicles. It includes features such as automated off-road terrain generation, automatic data labeling for camera and LIDAR, and swappable vehicle dynamics models. Many machine learning tools today leverage Python for development. To use these tools and provide an easy-to-use interface, Python bindings were developed for the MAVS. The need for these bindings and their implementation is described. Citation: C. Hudson, C. Goodin, Z. Miller, W. Wheeler, D. Carruth, “Mississippi State University Autonomous Vehicle Simulation Library”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium
ABSTRACT This paper presents a new terrain traversability mapping method integrated into the Robotic Technology Kernel (RTK) that produces ground slope traversability cost information from LiDAR height maps. These ground slope maps are robust to a variety of off-road scenarios including areas of sparse or dense vegetation. A few simple and computationally efficient heuristics are applied to the ground slope maps to produce cost data that can be directly consumed by existing path planners in RTK, improving the navigation performance in the presence of steep terrain. Citation: J. Ramsey, R. Brothers, J. Hernandez, “Creation of a Ground Slope Mapping Methodology Within the Robotic Technology Kernel for Improved Navigation Performance,” In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 16-18, 2022
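The core idea of deriving slope cost from a height map can be sketched as follows. This is not the RTK implementation: the cell size, the maximum climbable slope, and the linear cost mapping are all assumed parameters for illustration.

```python
import math

CELL_M = 0.5        # grid resolution in meters (assumed)
MAX_SLOPE_DEG = 30  # slope treated as untraversable beyond this (assumed)

def slope_deg(height, r, c):
    """Slope angle at interior cell (r, c) via central differences."""
    dzdx = (height[r][c + 1] - height[r][c - 1]) / (2 * CELL_M)
    dzdy = (height[r + 1][c] - height[r - 1][c]) / (2 * CELL_M)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

def slope_cost(height):
    """Traversability cost in [0, 1] per interior cell; 1.0 marks
    slopes at or beyond the assumed untraversable threshold."""
    rows, cols = len(height), len(height[0])
    cost = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            cost[r][c] = min(slope_deg(height, r, c) / MAX_SLOPE_DEG, 1.0)
    return cost
```

A cost grid of this shape is exactly what a grid-based path planner can consume directly, which is the integration point the abstract describes.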
ABSTRACT Recent advances in neuroscience, signal processing, machine learning, and related technologies have made it possible to reliably detect brain signatures specific to visual target recognition in real time. Utilizing these technologies together has shown an increase in the speed and accuracy of visual target identification over traditional visual scanning techniques. Images containing a target of interest elicit a unique neural signature in the brain (e.g. P300 event-related potential) when detected by the human observer. Computer vision exploits the P300-based signal to identify specific features in the target image that are different from other non-target images. Coupling the brain and computer in this way along with using rapid serial visual presentation (RSVP) of the images enables large image datasets to be accurately interrogated in a short amount of time. Together this technology allows for potential military applications ranging from image triaging for the image analyst
ABSTRACT Semi-autonomous behaviors, such as leader-following and “point-and-go” navigation, have the potential to significantly increase the value of squad-level UGVs by freeing operators to perform other tasks. A variety of technologies have been designed in recent years to enable such semi-autonomous behaviors on board mobile robots; however, most current solutions use custom payloads comprising sensors such as stereo cameras, LIDAR, GPS, or active transmitters. While effective, these approaches tend to be restricted to UGV platforms capable of supporting the payload’s space, weight, and power (SWaP), and may be cost-prohibitive to large-scale deployment. Charles River has developed a system that enables both leader-following and “point-and-go” navigation behaviors using only a single monocular camera. The system allows a user to control a mobile robot by leading the way and issuing commands through arm/hand gestures, and is capable of following an operator both on foot and aboard a
ABSTRACT Autonomous driving is rapidly emerging as the future of transportation. For autonomous driving to be safe and reliable, the perception sensors need sufficient vision in sometimes challenging operating conditions, including dust, dirt, and moisture, or during inclement weather. LiDAR perception sensors used in certain autonomous driving solutions require both a clean and dry sensor screen to effectively operate in a safe manner. In this paper, UV durable Hydrophobic (UVH) coatings were developed to improve LiDAR sensing performance. A lab testbed was successfully constructed to evaluate UVH coatings and uncoated control samples for the LiDAR sensor under simulated weathering conditions, including fog, rain, mud, and bugs. In addition, a mobile testbed was developed in partnership with North Dakota State University (NDSU) to evaluate the UVH coatings in an autonomous moving vehicle under different weathering conditions. These UV-durable easy-to-clean coatings with high optical
ABSTRACT Parametric analysis is an essential step in optimizing the performance of any system. In robotic systems, however, its usability is often limited by the lack of complex yet repeatable experiments required to gather meaningful data. We propose using the Robotics Interactive Visualization and Experimentation Toolbox (RIVET) in order to perform parametric analysis of robotic systems
ABSTRACT This paper will discuss the systematic operations of utilizing the BOXARR platform as the ‘Digital Thread’ to overcome the inherent and hidden complexities in massive-scale interdependent systems; with particular emphasis on future applications in Military Ground Vehicles (MGVs). It will discuss how BOXARR can enable significantly improved capabilities in requirements-capture, optimized risk management, enhanced collaborative relationships between engineering and project/program management teams, operational analysis, trade studies, capability analysis, adaptability, resilience, and overall architecture design; all within a unified framework of BOXARR’s customizable modeling, visualization and analysis applications
ABSTRACT This research proposes a human-multirobot system with semi-autonomous ground robots and a UAV view for contaminant localization tasks. A novel Augmented Reality-based operator interface has been developed. The interface uses an over-watch camera view of the robotic environment and allows the operator to direct each robot individually or in groups. It uses an A* path planning algorithm to ensure obstacles are avoided and frees the operator for higher-level tasks. It also displays sensor information from each individual robot directly on the robot in the video view. In addition, a combined sensor view can also be displayed, which helps the user pinpoint source information. The sensors on each robot monitor the contaminant levels, and a virtual display of the levels allows the user to direct the multiple ground robots towards the hidden target. This paper reviews the user interface and describes several initial usability tests that were performed. This research
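A minimal grid version of the A* planner named above can be sketched as follows. The occupancy grid, unit move costs, and Manhattan heuristic are assumptions for illustration; the actual interface plans around obstacles observed in the over-watch camera view.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path from start to goal as a list of (row, col)
    cells, or None if unreachable. grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: admissible for 4-connected unit-cost moves.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(
                    open_set,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None
```

Because the heuristic never overestimates the remaining distance, the first time the goal is popped from the priority queue the returned path is optimal, which is what lets the interface hand obstacle avoidance to the planner and free the operator for higher-level tasks.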
ABSTRACT Raytheon is in the final stages of production of three high performance thermal imaging / fire control systems being integrated on existing USMC and US Army armored vehicles. A goal in the design of these systems was to provide integration into the host vehicle that, when viewed by the customer and user, delivers the enhanced capabilities of today’s latest thermal imaging and image processing technology while operating in concert with the vehicle as originally designed. This paper will summarize the technical solutions for each of these programs, emphasizing the thermal imaging, fire control, image processing, and vehicle integration technologies. It will also outline guiding philosophies and lessons learned used to focus the design team in achieving the successful integration. The programs to be reviewed are: the USMC 2nd Gen Thermal Imaging System, the USMC LAV-25 Improved Thermal Sight System (ITSS), and the USMC / US Army M1A1 50 Cal Thermal Sight / DayTV System
ABSTRACT Modern military forces need an alternative to radio-frequency (RF) based communications between tactical vehicles. Free Space Optics (FSO) can provide that alternative but, to date, the design and form-factor of the equipment precluded considering it as a viable solution. Recent advances in FSO technologies are changing that and systems suitable for use in tactical field operations are currently being introduced into the battlefield by the special operations community. This paper explores some of the issues associated with adapting FSO to mobile vehicular applications and provides an overview of the current maturity and capabilities of these technologies
ABSTRACT To address the need for rapid capture of terrain profiles, and changes in terrain, researchers from Michigan Tech demonstrated a UAS collection system during a live exercise supported by the North Atlantic Treaty Organization’s (NATO) Science and Technology Organization (STO). The UAS collection system was deployed to provide high resolution topography (resolution less than 1 cm) with a terrain collection rate greater than 1 meter per second, and results were processed within minutes. The resulting topography is of sufficient quality to demonstrate that the technique can be applied to update mobility models, as well as to detect traverse by ground vehicles
ABSTRACT As today’s Cyber Physical Systems (CPS) become more and more complex, they present both incredible opportunity and risk. In fact, rapidly growing complexity is a significant impediment to the successful development, integration, and innovation of systems. Over the years, methods to manage system complexity have taken many forms. Model Based Systems Engineering (MBSE) provides organizations a timely opportunity to address the complexities of Cyber Physical Systems. MBSE tools, languages, and methods are having a very positive impact but are still in a formative stage and continue to evolve. Moreover, the Systems Modeling Language (SysML) has proven to be a significant enabler of MBSE methods, given its flexibility and expressiveness. While the strengths of SysML provide clarity and consistency, unfortunately the number of people who know SysML well is relatively small. To bring the full power of MBSE to the larger community, system models represented in SysML can be
ABSTRACT Off-road autonomous navigation poses a challenging problem, as the surrounding terrain is usually unknown, the support surface the vehicle must traverse cannot be considered flat, and environmental features (such as vegetation and water) make it difficult to estimate the support surface elevation. This paper will focus on Robotic Research’s suite of off-road autonomous planning and obstacle avoidance tools. Specifically, this paper will provide an overview of our terrain detection system, which utilizes advanced LADAR processing techniques to provide an estimate of the surface. Additionally, it will describe the kino-dynamic off-road planner which can, in real-time, calculate the optimal route, taking into account the support surface, obstacles sensed in the environment, and more. Finally, the paper will explore how these technologies have been applied to a wide variety of different robotic applications
ABSTRACT The Advanced Systems Engineering Capability (ASEC) developed by the TARDEC Systems Engineering & Integration (SE&I) group is an integrated Systems Engineering (SE) knowledge creation and capture framework. It is built on a decision centric method, high quality data visualizations, intuitive navigation, and systems information management that enable continuous data traceability, real time collaboration, and knowledge pattern leverage to support the entire system lifecycle. The ASEC framework has evolved significantly over the past year. New tools have been added for capturing lessons learned from warfighter experiences in theater and for analyzing and validating the needs of ground domain platforms/systems. These stakeholder needs analysis tools may be used to refine the ground domain capability model (functional decomposition) and to help identify opportunities for common solutions across platforms. On-going development of ASEC will migrate all tools to a single virtual desktop to promote