Browse Topic: Artificial intelligence (AI)
ABSTRACT Future autonomous combat vehicles will need to travel off-road through poorly mapped environments. Three-dimensional topography may be known only to a limited extent (e.g., coarse height), and this information will likely be noisy and of limited resolution. For ground vehicles, 3D topography will affect how far ahead the vehicle can “see”. Higher vantage points and clear views provide much more useful path planning data than lower vantage points and views occluded by trees and structures. The challenge is incorporating this knowledge into a path planning solution. When should the robot climb higher to get a better view, and when should it continue along the shortest path predicted by current information? We investigated the use of Deep Q-Networks (DQN) to reason over this decision space, comparing performance to conventional methods. In the presence of significant sensor noise, the DQN was more successful in finding a path to the target than A* for all but one type of terrain. Citation: E
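As a rough illustration of the decision space described in this abstract, and not the authors' implementation, the sketch below shows a small Q-network scoring a hypothetical discrete action set (cardinal moves plus a "climb for a better view" action) from a local terrain observation. The action names, observation layout, and network sizes are all assumptions.

```python
# Minimal DQN-style sketch (illustrative only): a small Q-network maps a local
# terrain observation (position, height, estimated visibility, vector to goal)
# to Q-values over discrete moves, with epsilon-greedy action selection.
import random
import torch
import torch.nn as nn

ACTIONS = ["N", "S", "E", "W", "climb"]  # hypothetical discrete action set

class QNetwork(nn.Module):
    def __init__(self, obs_dim: int = 8, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def select_action(qnet: QNetwork, obs: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy: explore with probability epsilon, else pick argmax Q."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(obs).argmax().item())

if __name__ == "__main__":
    qnet = QNetwork()
    # Example observation: [x, y, height, visibility, goal_dx, goal_dy, noise_est, slope]
    obs = torch.tensor([0.1, 0.2, 0.5, 0.3, 0.8, -0.4, 0.2, 0.1])
    print(ACTIONS[select_action(qnet, obs, epsilon=0.1)])
```

In training, such a network would be updated against the standard DQN temporal-difference target; only the action-selection step is shown here.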
ABSTRACT The effective and safe use of Rough Terrain Cargo Handlers is severely hampered by the operator’s view being obstructed. This results in the inability to see a) in front of the vehicle while driving, b) where to set a carried container, and c) where to maneuver the vehicle’s top handler in order to engage with cargo containers. We present an analysis of these difficulties along with specific solutions that go beyond the non-technical solution currently used, including the placement of sensors and the use of image analysis. These solutions address the use of perception to support autonomy, drive assist, active safety, and logistics
ABSTRACT An increasing pace of technology advancements and recent heavy investment by potential adversaries have eroded the Army’s overmatch and spurred significant changes to the modernization enterprise. Commercial ground vehicle industry solutions are not directly applicable to Army acquisitions because of differences in volume, usage, and life cycle requirements. In order to meet increasingly aggressive schedule goals while ensuring high-quality materiel, the Army acquisition and test and evaluation communities need to retain flexibility and continue to pursue novel analytic methods. Fully utilizing test and field data and incorporating advanced techniques such as big data analytics and machine learning can lead to smarter, more rapid acquisition and a better overall product for the Soldier. In particular, logistics data collected during operationally relevant events, originally intended for the development of condition-based maintenance procedures, has been shown to provide
ABSTRACT Optical distortion measurements for transparent armor (TA) solutions are critical to ensure occupants can see what is happening outside a vehicle. Unfortunately, optically transparent materials often have poorer mechanical properties than their opaque counterparts, which usually results in much thicker layups to provide the same level of protection. Current standards still call for the use of a double exposure method to manually compare the distortion of grid lines. This report presents a similar method of analysis that requires less user input, using items typically available in many mechanics labs: machine vision cameras and digital image correlation software. Citation: J. M. Gorman, “An Easier Approach to Measuring Optical Distortion in Transparent Armor”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 11-13, 2020. The views presented are those of the author and do not necessarily represent the views of DoD or
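The report's exact procedure is not reproduced here; as a hedged sketch of the core computation behind image-correlation distortion measurement, the snippet below recovers the displacement between a reference image and a synthetically displaced "through-armor" view using phase correlation. The images, sizes, and the distortion metric are placeholders.

```python
# Illustrative phase-correlation displacement estimate between a reference view
# and the same view seen through transparent armor (synthetic stand-in images).
import numpy as np

def estimate_shift(ref, moved):
    """Translation between two equally sized images via phase correlation."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
    cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase term
    corr = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # Peaks past the half-size wrap around to negative shifts.
    for i, n in enumerate(corr.shape):
        if peak[i] > n // 2:
            peak[i] -= n
    return peak  # shift to apply to `moved` to register it back onto `ref`

rng = np.random.default_rng(1)
reference = rng.normal(size=(128, 128))                         # stand-in for the bare grid image
through_armor = np.roll(reference, shift=(2, -3), axis=(0, 1))  # displaced view through the armor

shift = estimate_shift(reference, through_armor)
print("registration shift:", shift, "distortion magnitude:", np.hypot(*shift))
```

In practice this estimate would be computed per patch over the image to produce a local distortion map rather than a single offset.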
ABSTRACT Simulation is a critical step in the development of autonomous systems. This paper outlines the development and use of a dynamically linked library for the Mississippi State University Autonomous Vehicle Simulator (MAVS). The MAVS is a library of simulation tools designed to provide real-time, high-performance, ray-traced simulation capabilities for off-road autonomous vehicles. It includes features such as automated off-road terrain generation, automatic data labeling for camera and LIDAR, and swappable vehicle dynamics models. Many machine learning tools today leverage Python for development. To use these tools and provide an easy-to-use interface, Python bindings were developed for the MAVS. The need for these bindings and their implementation is described. Citation: C. Hudson, C. Goodin, Z. Miller, W. Wheeler, D. Carruth, “Mississippi State University Autonomous Vehicle Simulation Library”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium
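The MAVS bindings themselves are not shown in this listing. Purely as an illustration of one common way a native simulation library can be exposed to Python ML tooling, the sketch below wraps a hypothetical shared library with ctypes; the library name, entry points, and signatures are invented placeholders and do not reflect the actual MAVS API, whose bindings may use a different mechanism entirely.

```python
# Generic ctypes wrapping pattern for a native simulation library (hypothetical
# names throughout; this is NOT the MAVS interface).
import ctypes
import ctypes.util

LIB_NAME = "mavs_sim"  # hypothetical shared-library name

def load_simulator_library():
    """Locate and load the (hypothetical) native library and declare C signatures."""
    path = ctypes.util.find_library(LIB_NAME)
    if path is None:
        raise OSError(f"shared library '{LIB_NAME}' not found on this system")
    lib = ctypes.CDLL(path)
    # Argument/return types must be declared so ctypes marshals data correctly.
    lib.sim_create.restype = ctypes.c_void_p
    lib.sim_step.argtypes = [ctypes.c_void_p, ctypes.c_double]
    lib.sim_destroy.argtypes = [ctypes.c_void_p]
    return lib

class Simulator:
    """Thin Python wrapper that owns the native simulator handle."""

    def __init__(self):
        self._lib = load_simulator_library()
        self._handle = self._lib.sim_create()

    def step(self, dt: float) -> None:
        self._lib.sim_step(self._handle, dt)

    def close(self) -> None:
        self._lib.sim_destroy(self._handle)
```

A pybind11 or Cython layer is an equally common choice for this kind of binding; the wrapper-class shape on the Python side is similar either way.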
ABSTRACT Model Based Systems Engineering (MBSE) has been a dominant methodology for defining and developing complex systems; however, it has not yet been paired with cutting-edge digital engineering transformation. MBSE is limited to representing a whole system and lacks other capabilities, such as dynamic simulation and optimization, as well as integration of hardware and software functions. This paper provides the key elements for developing a Smart MBSE (SMBSE) modeling approach that integrates Systems Engineering (SE) functionality with the full suite of other development tools utilized to create today’s complex products. SMBSE connects hardware and software with a set of customer needs, design requirements, program targets, simulations, and optimization functionalities. The SMBSE modeling approach is still under development, with significant challenges remaining in bridging conventional Systems Engineering methodology with additional capabilities to reuse, automate
ABSTRACT The IGVC offers a design experience that is at the very cutting edge of engineering education, with a particular focus on developing engineering control/sensor integration experience for the college student participants. A main challenge area for teams is the proper processing of all the vehicle sensor feeds, optimal integration of the sensor feeds into a world map, and the vehicle leveraging that world map to plot a safe course using robust control algorithms. This has been an ongoing challenge throughout the 26-year history of the competition and is a challenge shared with the growing autonomous vehicle industry. High consistency, reliability, and redundancy of sensor feeds, accurate sensor fusion, and fault-tolerant vehicle controls are critical, as even small misinterpretations can cause catastrophic results, as evidenced by the recent serious vehicle crashes experienced by self-driving companies including Tesla and Uber. Optimal control techniques and sensor selection
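As a minimal, generic illustration of fusing noisy sensor feeds into a world map (not any team's code), the sketch below accumulates hit/miss evidence in a log-odds occupancy grid; the grid size, sensor probabilities, and cell lists are assumptions.

```python
# Log-odds occupancy-grid fusion of noisy range-sensor observations.
import numpy as np

class OccupancyGrid:
    def __init__(self, size=(100, 100), p_hit=0.7, p_miss=0.4):
        self.log_odds = np.zeros(size)
        self.l_hit = np.log(p_hit / (1 - p_hit))     # positive evidence: "occupied"
        self.l_miss = np.log(p_miss / (1 - p_miss))  # negative evidence: "free"

    def update(self, hit_cells, miss_cells):
        """Accumulate evidence from one sensor sweep."""
        for r, c in hit_cells:
            self.log_odds[r, c] += self.l_hit
        for r, c in miss_cells:
            self.log_odds[r, c] += self.l_miss

    def probability(self):
        """Convert accumulated log-odds back to occupancy probabilities."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))

grid = OccupancyGrid()
grid.update(hit_cells=[(50, 52), (50, 53)], miss_cells=[(50, 50), (50, 51)])
print(grid.probability()[50, 50:54].round(2))
```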
ABSTRACT Recent advances in neuroscience, signal processing, machine learning, and related technologies have made it possible to reliably detect brain signatures specific to visual target recognition in real time. Utilizing these technologies together has been shown to increase the speed and accuracy of visual target identification over traditional visual scanning techniques. Images containing a target of interest elicit a unique neural signature in the brain (e.g., the P300 event-related potential) when detected by the human observer. Computer vision exploits this P300-based signal to identify specific features in the target image that differ from those in non-target images. Coupling the brain and computer in this way, along with using rapid serial visual presentation (RSVP) of the images, enables large image datasets to be accurately interrogated in a short amount of time. Together, this technology allows for potential military applications ranging from image triaging for the image analyst
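As an illustrative sketch of the generic P300/RSVP classification idea (not the authors' system), the snippet below scores flattened EEG epochs with a linear discriminant and flags the highest-scoring epochs as likely target images; the data are synthetic and the epoch shapes are assumptions.

```python
# Generic P300 epoch classification sketch on synthetic stand-in data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 8, 64

# Synthetic epochs: "target" epochs get a small added P300-like bump mid-window.
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)
X[y == 1, :, 30:40] += 0.5

# Flatten each epoch (channels x samples) into one feature vector and fit LDA.
clf = LinearDiscriminantAnalysis()
clf.fit(X.reshape(n_epochs, -1), y)

# Rank epochs by classifier score; the top-scoring images would be triaged first.
scores = clf.decision_function(X.reshape(n_epochs, -1))
likely_targets = np.argsort(scores)[-10:]
print(likely_targets)
```

A deployed system would of course train and score on separate data and feed the ranked images back to the analyst in RSVP order.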
ABSTRACT To realize the full potential of simulation-based evaluation and validation of autonomous ground vehicle systems, the next generation of modeling and simulation (M&S) solutions must provide real-time closed-loop environments that feature the latest physics-based modeling approaches and simulation solvers. Real-time capabilities enable seamless integration of human-in/on-the-loop training and hardware-in-the-loop evaluation and validation studies. Using an open modular architecture to close the loop between the physics-based solvers and autonomy stack components allows for full simulation of unmanned ground vehicles (UGVs) for comprehensive development, training, and testing of artificial intelligence vehicle-based agents and their human team members. This paper presents an introduction to a Proof of Concept for such a UGV M&S solution for severe terrain environments with a discussion of simulation results and future research directions. This conceptual approach features: 1
ABSTRACT This paper describes the use of neural networks to enhance simulations for subsequent training of anomaly-detection systems. Simulations can provide edge conditions for anomaly detection which may be sparse or non-existent in real-world data. Simulations suffer, however, from producing data that is “too clean”, resulting in anomaly detection systems that cannot transition from simulated data to actual conditions. Our approach enhances simulations using neural networks trained on real-world data to create outputs that are more realistic and variable than traditional simulations. Citation: P. Feldman, “Training robust anomaly detection using ML-Enhanced simulations”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 11-13, 2020
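The paper's architecture is not given in this abstract; as a hedged sketch of the general idea, the snippet below trains a small residual network to push "too clean" simulated samples toward paired real-world samples so that downstream anomaly detectors see more realistic inputs. The network shape, loss, and paired-data setup are all placeholder assumptions.

```python
# Illustrative "realism" network: learns a residual correction that makes clean
# simulated samples look statistically closer to real-world measurements.
import torch
import torch.nn as nn

class RealismNet(nn.Module):
    """Adds a learned residual on top of the clean simulated signal."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(),
            nn.Linear(64, dim),
        )

    def forward(self, sim: torch.Tensor) -> torch.Tensor:
        return sim + self.residual(sim)

def train_step(model, optimizer, sim_batch, real_batch):
    """One supervised step: push the enhanced simulation toward the real sample."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(sim_batch), real_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = RealismNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sim = torch.randn(32, 16)                # placeholder simulated batch
    real = sim + 0.1 * torch.randn(32, 16)   # placeholder paired "real" batch
    print(train_step(model, opt, sim, real))
```

Adversarial or distribution-matching losses are common alternatives when paired simulated/real samples are not available.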
ABSTRACT This paper will explore the opportunities for artificial intelligence (AI) in the system engineering domain, particularly in ways that unite the unique capabilities of the systems engineer with the AI. This collaboration of human and machine intelligence is known as Augmented Intelligence (AuI). There is little doubt that systems engineering productivity could be improved with effective utilization of well-established AI techniques, such as machine learning, natural language processing, and statistical models. However, human engineers excel at many tasks that remain difficult for AIs, such as visual interpretation, abstract pattern matching, and drawing broad inferences based on experience. Combining the best of AI and human capabilities, along with effective human/machine interactions and data visualization, offers the potential for orders-of-magnitude improvements in the speed and quality of delivered
ABSTRACT Main Battle Tanks (MBTs) remain a key component of most modern militaries. While the best way to ‘kill a tank’ is via the employment of another tank, matching enemy armor formations one-for-one is not always possible. Light infantry lack organic armor, and their shoulder-launched anti-tank capabilities do not defeat the latest generation of MBTs. To compensate for this capability gap, the U.S. Army has employed precision guided anti-tank munitions, such as the “Javelin.” However, these are expensive to produce in quantity and require risking the forward presence of dismounted Soldiers to employ. Minefields offer another option but are immobile once employed. The ‘Guillotine’ Attack System proposes to change the equation by pairing an AI-enabled, adaptive unmanned delivery system with a shaped charge payload. Guillotine can loiter for hours, reposition itself to hunt for targets, and, when ready, deliver a precision shaped charge strike from the air. Citation: “The ‘Guillotine