Browse Topic: Human machine interface (HMI)

Items (917)
ABSTRACT This paper proposes that within the Land domain, there is not only a need to define an approach to open architectures, but also to mandate their use, in order to provide an agile framework for our fighting forces going forward. The paper sets out to explain such an approach: that taken by UK MOD and industry to produce the Generic Vehicle Architecture (GVA) defence standard. It will discuss how the GVA standard was formed, how it is currently being used, and how it contributes to the wider MOD initiative for Open Systems Architecture for the Land domain. Finally, the paper considers how the UK GVA relates to the US VICTORY standard and how interoperability may be achieved.
White, Antony
ABSTRACT This paper presents developmental and experimental work beyond the initial presentation of the predictive display technology. Developmental work consisted of the addition of features to the predictive display such as image subsampling, camera stabilization, void filling and image overlay graphics. The paper then describes two experiments consisting of twelve subjects each in which the predictive displays were compared to both the zero latency case (baseline) and the unmitigated high-latency cases (worst case). The predictive display was compared using four objective performance and activity measures of mean speed, lateral deviation, heading deviation and steering activity. The predictive display was also assessed using subjective measures of workload and usability. Citation: M.J. Brudnak, “Predictive Displays for High Latency Teleoperation: Extensions and Experiments”, In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI
Brudnak, Mark
ABSTRACT Currently, fielded ground robotic platforms are controlled by a human operator via constant, direct input from a controller. This approach requires constant attention on the part of the operator, decreasing situational awareness (SA). In scenarios where the robotic asset is non-line-of-sight (non-LOS), the operator must monitor visual feedback, which is typically in the form of a video feed and/or visualization. With the increasing use of personal radios, smart devices/wearable computers, and network connectivity by individual warfighters, an unobtrusive means of robotic control and feedback is becoming increasingly necessary. A proposed intuitive robotic operator control (IROC) involving a heads-up display (HUD), instrumented gesture recognition glove, and ground robotic asset is described in this paper. Under the direction of the Marine Corps Warfighting Laboratory (MCWL) Futures Directorate, AnthroTronix, Inc. (ATinc) is implementing the described integration for
Baraniecki, Lisa; Vice, Jack; Brown, Jonathan; Nichols, Josh; Stone, Dave; Dahn, Dawn
ABSTRACT Research is currently underway to improve controllability of high degree-of-freedom manipulators under a Phase II SBIR contract sponsored by the U.S. Army Tank Automotive Research, Development, and Engineering Center (TARDEC). As part of this program, the authors have created new control methods and adapted tool-changing technology onto a dexterous arm to examine the controllability of various manipulator functions. In this paper, the authors describe the work completed under this program and present their findings in terms of how these technologies can be used to extend the capabilities of existing and newly developed robotic manipulators.
Peters, Douglas; Gunnett, Keith; Gray, Jeremy
ABSTRACT This paper describes work to develop a hands-free, heads-up control system for Unmanned Ground Vehicles (UGVs) under an SBIR Phase I contract. Industry is building upon pioneering work that it has done in creating a speech recognition system that works well in noisy environments, by developing a robust key word spotting algorithm enabling UGV Operators to give speech commands to the UGV completely hands-free. Industry will also research and develop two sub-vocal control modes: whisper speech and teeth clicks. Industry is also developing a system that will enable the Operator to drive a UGV, with a high level of fidelity, to a location selected by the Operator using hands-free commands in conjunction with image segmentation and video overlays. This Phase I effort will culminate in a proof-of-concept demonstration of a hands-free, heads-up system, implemented on a small UGV, that will enable the Operator to have a high level of fidelity for control of the system.
Brown, Jonathan; Gray, Jeremy P.; Blanco, Chris; Juneja, Amit; Alberts, Joel; Reinerman, Lauren
ABSTRACT Model-Based Systems Engineering (MBSE) has grown in popularity since the introduction of SysML a decade ago. Pockets of modeling excellence have developed within many government, industrial, and educational organizations. Few, if any, have achieved “wall-to-wall” adoption. This paper will focus on a key component of successful system modeling efforts: the individuals who must translate sound systems engineering into robust, useful system models. The author routinely teaches systems architecture, systems engineering, and system modeling and will share methods and techniques for identifying and growing modeling talent. Success depends as much upon mindset and approach as it does upon understanding tool user interfaces and modeling conventions. Published texts, class exercises, videos, and case studies can be used to shape engineers’ problem-solving methods. In addition, a craft system (with apprentice, journeyman, and master modelers engaged in interlocking skill development
Vinarcik, Michael J.
ABSTRACT This paper presents a method to mitigate high latency in the teleoperation of unmanned ground systems through display prediction and state estimation. Specifically, it presents a simulation environment which models both sides of the teleoperation system in the laboratory. The simulation includes a teleoperated vehicle model to represent the dynamics in high fidelity. The sensors and actuators are modeled as well as the communication channel. The latency mitigation approach is implemented in this simulation environment, which consists of a feed-forward vehicle model as a state estimator which drives a predictive display algorithm. These components work together to help the operator receive immediate feedback regarding his/her control actions. The paper contains a technical discussion of the design as well as specific implementation. It concludes with the presentation of some experimental data which demonstrate significant improvement over the unmitigated case
Brudnak, Mark J.
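A minimal Python sketch of the predictive-display idea described in the abstract above, assuming a kinematic bicycle model and a known latency estimate; the paper itself uses a higher-fidelity vehicle model and state estimator inside its simulation environment, so this is illustrative only.

# Illustrative sketch, not the paper's implementation: propagate the last
# received vehicle state forward by the estimated latency using the operator's
# current commands, so the display can render a predicted pose immediately.
import math

def predict_pose(x, y, yaw, speed, steer, wheelbase, latency, dt=0.01):
    """Forward-integrate a kinematic bicycle model for `latency` seconds."""
    t = 0.0
    while t < latency:
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        yaw += (speed / wheelbase) * math.tan(steer) * dt
        t += dt
    return x, y, yaw

# Example: 500 ms round-trip latency, 5 m/s, slight left steer (assumed values).
print(predict_pose(0.0, 0.0, 0.0, 5.0, 0.05, 2.8, 0.5))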
ABSTRACT Semi-autonomous vehicles are intended to give drivers multitasking flexibility and to improve driving safety. Yet, drivers have to trust the vehicle’s autonomy to fully leverage the vehicle’s capability. Prior research on driver’s trust in a vehicle’s autonomy has normally assumed that the autonomy was without error. Unfortunately, this may be at times an unrealistic assumption. To address this shortcoming, we seek to examine the impacts of automation errors on the relationship between drivers’ trust in automation and their performance on a non-driving secondary task. More specifically, we plan to investigate false alarms and misses in both low and high risk conditions. To accomplish this, we plan to utilize a 2 (risk conditions) × 4 (alarm conditions) mixed design. The findings of this study are intended to inform Autonomous Driving Systems (ADS) designers by permitting them to appropriately tune the sensitivity of alert systems by understanding the impacts of error type and
Zhao, Huajing; Azevedo-Sa, Hebert; Esterwood, Connor; Yang, X. Jessie; Robert, Lionel; Tilbury, Dawn
ABSTRACT Recent advances in neuroscience, signal processing, machine learning, and related technologies have made it possible to reliably detect brain signatures specific to visual target recognition in real time. Utilizing these technologies together has shown an increase in the speed and accuracy of visual target identification over traditional visual scanning techniques. Images containing a target of interest elicit a unique neural signature in the brain (e.g. P300 event-related potential) when detected by the human observer. Computer vision exploits the P300-based signal to identify specific features in the target image that are different from other non-target images. Coupling the brain and computer in this way along with using rapid serial visual presentation (RSVP) of the images enables large image datasets to be accurately interrogated in a short amount of time. Together this technology allows for potential military applications ranging from image triaging for the image analyst
Ries, Anthony J.; Lance, Brent; Sajda, Paul
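A minimal Python sketch of single-trial target/non-target classification in an RSVP setting, assuming synthetic EEG epochs, a 256 Hz sampling rate, and a plain linear discriminant classifier; the actual signal-processing pipeline behind the work above is not reproduced here.

# Illustrative sketch only: classify image-onset epochs as target vs. non-target.
# Sampling rate, epoch window, and synthetic data are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 256                        # assumed sampling rate (Hz)
win = int(0.8 * fs)             # 0-800 ms epoch after each image onset
n_channels, n_trials = 32, 400

rng = np.random.default_rng(0)
X = rng.normal(size=(n_trials, n_channels, win))   # stand-in for EEG epochs
y = rng.integers(0, 2, size=n_trials)              # 1 = target image shown

# Synthetic P300-like deflection ~300-500 ms after onset on target trials.
X[y == 1, :, int(0.3 * fs):int(0.5 * fs)] += 0.5

clf = LinearDiscriminantAnalysis()
clf.fit(X[:300].reshape(300, -1), y[:300])
print("held-out accuracy:", clf.score(X[300:].reshape(100, -1), y[300:]))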
ABSTRACT Tradespace exploration (TSE) is a key component of conceptual design or materiel solution phases that revolves around multi-stakeholder decision making. The TSE process as presented in literature is discussed, including the various stages, tools, and decision making approaches. The decision-making process, summarized herein, can be aided in various ways; one key intervention is the use of visualizations. Characteristics of good visualizations are presented before discussion of a promising avenue for visualization: immersive reality. Immersive reality includes virtual reality representations as well as tactile feedback; however, there are aspects of immersive reality that must be considered as well, such as cognitive loads and accessibility. From the literature, major trends were identified, including that TSE focuses on value but can suffer when not framed as a group decision, the need for testing of proposed TSE support systems, and the need to consider user populations and
Sutton, Meredith; Turner, Cameron; Wagner, John; Gorsich, David; Rizzo, Denise; Hartman, Greg; Agusti, Rachel; Skowronska, Annette; Castanier, Matthew
ABSTRACT This paper discusses the design and implementation of an interactive mixed reality cockpit that enhances Soldier-vehicle interaction by providing a 360-degree situational awareness system. The cockpit uses indirect vision, where cameras outside the vehicle provide a video feed of the surroundings to the cockpit. The cockpit also includes a virtual information dashboard that displays real-time information about the vehicle, mission, and crew status. The visualization of the dashboard is based on past research in information visualization, allowing Soldiers to quickly assess their operational state. The paper presents the results of a usability study on the effectiveness of the mixed reality cockpit, which compared the Vitreous interface, a Soldier-centered mixed reality head-mounted display, with two other interface and display technologies. The study found that the Vitreous UI resulted in better driving performance and better subjective evaluation of the ability to actively
Hansberger, Jeffrey T.; Wood, Ryan; Conner, Ty; Hansen, Jayse; Nix, Jacob; Torres, Marco
ABSTRACT A crucial part of facilitating the cooperation of multi-robot and human-robot teams is a Common World Model – a shared knowledge base with both physical information (e.g., ground map) and semantic information (e.g., locations of threats and goals) – that can be used to provide high-level guidance to heterogeneous robot teams. Past work performed by the Johns Hopkins University Applied Physics Laboratory (JHU/APL) has shown that the Advanced Explosive Ordnance Disposal Robotic System (AEODRS) architecture – a Modular Open Systems Approach (MOSA) architecture leveraging the JAUS (Joint Architecture for Unmanned Systems) standard for definition of its logical interfaces – can be effectively used to develop and integrate the subsystems of a teleoperated ground vehicle for use in a complex environment. This demonstration tackles the next challenge, which is to extend the AEODRS architecture to facilitate multi-robot and human-robot teams.
Hinton, Mark; Vallabha, Gautam; Cooke, Chris; Piatko, Christine; Zeher, Michael; Gayler, Peter; Osier, Geoffrey
ABSTRACT This paper is a technology update on the continued leveraging of the newest vehicle diagnostics system, the Smart Wireless Internal Combustion Engine (SWICE) interface, as the Mini-VCS (Vehicle Computer System). The objective is to further enhance Condition Based Maintenance Plus (CBM+) secure diagnostics, data logging, prognostics, and sensor integration to support improvement of the US military ground vehicle fleet’s uptime to enhance operational readiness. Evolving advancements of the SWICE initiative will be presented, including how the SWICE “At Platform” Test System can readily be deployed as a multiple-use Mini-VCS. The application of the Mini-VCS integrates the best practices of diagnostics and prognostics, coupled with specialized sensor integration, into a solution that optimally benefits the military ground vehicle fleet. These benefits include increased readiness and operational availability, reduced maintenance costs, lower repair part inventory levels
Zachos, Mark; DeGrant, Kenneth
ABSTRACT The objective is to develop a human-multiple robot system that is optimized for the control of teams of heterogeneous robots. A new human-robot system eases the execution of remote tasks. An operator can efficiently control multiple physical robots using the high-level Drag-to-Move command on the virtual interface. The innovative virtual interface has been integrated with Augmented Reality and is able to track the location and sensory information from the video feed of ground and aerial robots in the virtual and real environments. An advanced feature of the virtual interface is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects.
Lee, Sam; Hunt, Shawn; Cao, Alex; Pandya, Abhilash
ABSTRACT This paper presents Neya’s efforts in developing autonomous depot assembly and parking behaviors for the Ground Vehicle Systems Center’s (GVSC) Autonomous Ground Re-supply (AGR) program. Convoys are a prime target for the enemy, and therefore GVSC is making efforts to remove the human operators and make the convoys autonomous. However, humans still have to manually drive multiple convoy vehicles to and from their depot parking locations before and after autonomous convoy operations – a time-consuming and laborious process. Neya Systems was responsible for the design, development, and testing of the autonomous depot assembly and disassembly behaviors, enabling end-to-end autonomy for convoy operations. Our solution to the problem, including the concept of operations and design, as well as our approaches to testing and validation, is described in detail.
Mattes, Rich; Bruck, Kurt; Cascone, Anthony; Martin, Dave
ABSTRACT The objective of this effort is to create parametric Computer-Aided Design (CAD) accommodation models for crew and dismount workstations with specific tasks. The CAD accommodation models are statistical models that have been created utilizing data from the Seated Soldier Study and follow-on studies. The final products are parametric CAD models that provide geometric boundaries indicating the required space and adjustments needed for the equipped Soldiers’ helmet, eyes, torso, knees, boots, controls, and seat travel. Clearances between the Soldier and surrounding interior surfaces and direct field of view have been added per MIL-STD-1472H. The CAD models can be applied early in the vehicle design process to ensure accommodation requirements are met and help explore possible design tradeoffs when conflicts with other design parameters exist. The CAD models are available to government and industry partners and via the GVSC public website once they have undergone Verification
Huston, Frank J.; Zielinski, Gale L.; Reed, Matthew P.
ABSTRACT The concept of handheld control systems with modular and/or integrated display provides the flexibility of operator use that supports the needs of today’s warfighters. A human machine interface control system that easily integrates with vehicle systems through common architecture and can transition to support dismounted operations provides warfighters with functional mobility they do not have today. With Size, Weight and Power along with reliability, maintainability and availability driving the needs of most platforms for both upgrade and development, moving to convertible (mounted to handheld) and transferrable control systems supports these needs as well as the need for the warfighter to maintain continuous control and command connectivity in uncertain mission conditions
Roy, Monica V.
ABSTRACT This paper describes an approach to aid the many military unmanned ground vehicles which are still teleoperated using a wireless Operator Control Unit (OCU). Our approach provides reliable control over long-distance, highly-latent, low-bandwidth communication links. The innovation in our approach allows refinement of the vehicle’s planned trajectory at any point in time along the path. Our approach uses hand gestures to provide intuitive, fast path-editing options, avoiding traditional keyboard/mouse inputs which can be cumbersome for this application. Our local reactive planner is used for vehicle safeguarding. Using this approach, we have performed successful teleoperation from nearly 1,500 miles away over a cellular-based communications channel. We also discuss results from our user tests, which evaluated our innovative controller approach against more traditional teleoperation over highly-latent communication links.
Baker, Chris L.; Batavia, Parag
ABSTRACT To optimize the use of partially autonomous vehicles, it is necessary to develop an understanding of the interactions between these vehicles and their operators. This research investigates the relationship between level of partial autonomy and operator abilities using a web-based virtual reality study. In this study participants took part in a virtual drive where they were required to perform all or part of the driving task in one of five possible autonomy conditions while responding to sudden emergency road events. Participants also took part in a simultaneous communications console task to include an element of multitasking. Situation awareness was measured using real-time probes based on the Situation Awareness Global Assessment Technique (SAGAT) as well as the Situation Awareness Rating Technique (SART). Cognitive Load was measured using the NASA Task Load Index (NASA-TLX) and an adapted version of the SOS Scale. Other measured factors included multiple indicators of
Cossitt, Jessie E.; Patel, Viraj R.; Carruth, Daniel W.; Paul, Victor J.; Bethel, Cindy L.
ABSTRACT This research proposes a human-multirobot system with semi-autonomous ground robots and a UAV view for contaminant localization tasks. A novel Augmented Reality based operator interface has been developed. The interface uses an over-watch camera view of the robotic environment and allows the operator to direct each robot individually or in groups. It uses an A* path planning algorithm to ensure obstacles are avoided and frees the operator for higher-level tasks. It also displays sensor information from each individual robot directly on the robot in the video view. In addition, a combined sensor view can also be displayed, which helps the user pinpoint source information. The sensors on each robot monitor the contaminant levels, and a virtual display of the levels is given to the user, allowing the user to direct the multiple ground robots toward the hidden target. This paper reviews the user interface and describes several initial usability tests that were performed. This research
Lee, Sam; Lucas, Nathan P.; Cao, Alex; Pandya, Abhilash; Ellis, R. Darin
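A minimal Python sketch of the grid-based A* planning step the interface relies on, assuming a 4-connected occupancy grid and a Manhattan heuristic; the paper's actual planner and map representation may differ.

# Illustrative sketch only: route a ground robot around known obstacles.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                          # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                                  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))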
ABSTRACT The use and operation of unmanned systems are becoming more commonplace and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data driven and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best practice human factors engineering in the form of human machine interfaces and user-centered design for robotic/unmanned control systems integrated early in platform concept and design phases can significantly impact platform
MacDonald, Brian
ABSTRACT Acceptance testing is considered a final stage of validation, and performing acceptance tests of an actual UGV system can be expensive and time-consuming. Therefore, this paper discusses simulation-based acceptance testing for UGVs, which can significantly reduce the time and cost of the acceptance test. In this paper, both dynamic and static simulation models are developed, and the results from these simulations show that the static simulation can be used, rather than the more complex dynamic simulation, because of the slow operating speed of UGVs. This finding improves development efficiency at the simulation model development phase. In addition, the developed simulation models provide a better understanding of the UGV failure modes. The static simulations can determine the required joint motor torques for various UGV loadings and maneuvers and provide data for the full range of operating motion. Specifically, given a threshold joint torque value, the safe operating range
Lee, Hyo Jong; Jin, Jionghua (Judy); Ulsoy, A. Galip
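A minimal Python sketch of the kind of static joint-torque calculation such a static simulation can report, assuming a planar two-link arm with made-up link masses, lengths, and payload rather than the actual UGV manipulator.

# Illustrative sketch only: gravity-load joint torques needed to hold a pose.
import math

def static_torques(q1, q2, m1=4.0, m2=3.0, payload=2.0, l1=0.5, l2=0.4, g=9.81):
    """q1, q2: joint angles (rad) from horizontal; returns (shoulder, elbow) N*m.
    Links modeled as point masses at their midpoints, payload at the tip."""
    x1 = 0.5 * l1 * math.cos(q1)                            # link-1 CG about joint 1
    x2 = l1 * math.cos(q1) + 0.5 * l2 * math.cos(q1 + q2)   # link-2 CG about joint 1
    xp = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)         # payload about joint 1
    tau1 = g * (m1 * x1 + m2 * x2 + payload * xp)
    tau2 = g * (m2 * 0.5 * l2 + payload * l2) * math.cos(q1 + q2)
    return tau1, tau2

# Worst case for gravity load: arm fully extended horizontally.
print(static_torques(0.0, 0.0))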
ABSTRACT This paper surveys the state of autonomous systems and outlines a novel command and control (C2) paradigm that seeks to accommodate the environmental challenges facing warfighters and their robotic counterparts in the future. New interface techniques will be necessary to reinforce the paradigm that supports the C2 of multiple human-machine teams completing diverse missions as part of the Third Offset Strategy. Realizing this future will require a new approach to teaming and interfaces that fully enable the potential of independent and cooperative decision-making abilities of fully autonomous machines while maximizing the effectiveness of human operators on the battlefield
Michelson, W. Stuart
ABSTRACT Can convolutional neural networks (CNNs) recognize gestures from a camera for robotic control? We examine this question using a small set of vehicle control gestures (move forward, grab control, no gesture, release control, stop, turn left, and turn right). Deep learning methods typically require large amounts of training data. For image recognition, the ImageNet data set is a widely used data set that consists of millions of labeled images. We do not expect to be able to collect a similar volume of training data for vehicle control gestures. Our method applies transfer learning to initialize the weights of the convolutional layers of the CNN to values obtained through training on the ImageNet data set. The fully connected layers of our network are then trained on a smaller set of gesture data that we collected and labeled. Our data set consists of about 50,000 images recorded at ten frames per second, collected and labeled in less than 15 man-hours. Images contain multiple
Kawatsu, Chris; Koss, Frank; Gillies, Andy; Zhao, Aaron; Crossman, Jacob; Purman, Ben; Stone, Dave; Dahn, Dawn
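A minimal Python sketch of the transfer-learning recipe described above, assuming a ResNet-18 backbone from torchvision (the weights argument requires torchvision 0.13 or later) and the seven gesture classes named in the abstract; the authors' actual network and training setup are not specified here.

# Illustrative sketch only: reuse ImageNet-trained convolutional weights and
# train only a new classifier head on a small gesture data set.
import torch
import torch.nn as nn
from torchvision import models

GESTURES = ["move_forward", "grab_control", "no_gesture",
            "release_control", "stop", "turn_left", "turn_right"]

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():            # freeze ImageNet-trained layers
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(GESTURES))   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (N, 3, 224, 224) float tensor; labels: (N,) gesture indices."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on a dummy batch to show the call pattern.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, len(GESTURES), (4,))
print(train_step(x, y))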
ABSTRACT Over the past several years, the rate of advancements in modern computer hardware and graphics computing capabilities has increased exponentially and provided unprecedented opportunities within the Modeling and Simulation community to increase the visual fidelity and quality in new Image Generators (IGs). As a result, IG vendors are continuously reevaluating the best way to make use of these new performance improvements. Some vendors have chosen to increase the resolution of the environment by displaying higher resolution imagery from disk, while other vendors have chosen to increase the number of polygons that are capable of being presented in the scene while maintaining 60 Hz. While all of these approaches use the latest hardware technology to improve the quality of the simulated environment in the IG, the authors of this paper have chosen to focus on a different approach: to improve the accuracy and realism of the simulated environment. To accomplish this, the authors have
Kuehne, Bob; Hebert, Kenny; Chladny, Brett
ABSTRACT The confluence of intra-vehicle networks, Vehicular Integration for C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, Reconnaissance)/EW (Electronic Warfare) Interoperability (VICTORY) standards, and onboard general-purpose processors creates an opportunity to implement Army combat ground vehicle intercommunications (intercom) capability in software. The benefits of such an implementation include 1) SWAP savings, 2) cost savings, 3) a simplified path to future upgrades and 4) enabling of potential new capabilities such as voice-activated mission command. The VICTORY Standards Support Office (VSSO), working at the direction of its Executive Steering Group (ESG) members (Program Executive Office (PEO) Ground Combat Systems (GCS), PEO Combat Support and Combat Service Support (CS&CSS), PEO Command Control Communications-Tactical (C3T) and PEO Intelligence, Electronic Warfare and Sensors (IEW&S)), has developed and demonstrated a software intercom
Kelsch, Geoffrey; Serafinko, Robert; Frissora, Anthony
ABSTRACT As the number of robotic systems on the battlefield increases, the number of operators grows with it, leading to a significant cost burden. Autonomous robots are already capable of task execution with limited supervision, and the capabilities of autonomous robots continue to advance rapidly. Because these autonomous systems have the ability to assist and augment human soldiers, commanders need advanced methods for assigning tasks to the systems, monitoring their status and using them to achieve desirable results. Mission Command for Autonomous Systems (MCAS) aims to enable natural interaction between commanders and their autonomous assets without requiring dedicated operators or significantly increasing the commanders’ cognitive burden. This paper discusses the approach, design and challenges of MCAS and presents opportunities for future collaboration with industry and academia.
Martin, Jeremy; Korfiatis, Peter; Silva, Udam
ABSTRACT Many rollover prevention algorithms rely on vehicle models which are difficult to develop and require extensive knowledge of the vehicle. The Zero-Moment Point (ZMP) combines a simple vehicle model with IMU-only sensor measurements. When used in conjunction with haptic feedback, ground vehicle rollover can be prevented. This paper investigates IMU grade requirements for an accurate rollover prediction. This paper also discusses a haptic feedback design that delivers operator alerts to prevent rollover. An experiment was conducted using a Gazebo simulation to assess the capabilities of the ZMP method to predict vehicle wheel lift-off and demonstrate the potential for haptic communication of the ZMP index to prevent rollover. Citation: K. Steadman, C. Stubbs, A. Baskaran, C. G. Rose, D. Bevly, “Teleoperated Ground Vehicle Rollover Prevention via Haptic Feedback of the Zero-Moment Point Index,” In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium
Steadman, Kathleen; Stubbs, Chandler; Baskaran, Avinash; Rose, Chad G.; Bevly, David
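A minimal Python sketch of a ZMP-style rollover index computed from IMU measurements, assuming a heavily simplified quasi-static model with made-up CG height, track width, and haptic threshold; the paper's actual ZMP formulation and IMU-grade analysis are not reproduced here.

# Illustrative sketch only: normalized lateral zero-moment-point index.
import math

def rollover_index(a_y, roll, h=0.9, track=1.8, g=9.81):
    """Return an index roughly in [-1, 1]; |index| -> 1 means the ZMP is
    approaching the outer wheel line (impending wheel lift-off).
    a_y: measured lateral acceleration (m/s^2); roll: roll angle (rad)."""
    y_zmp = h * (a_y + g * math.sin(roll)) / (g * math.cos(roll))
    return y_zmp / (track / 2.0)

def haptic_alert(index, threshold=0.7):
    """Scale a simple haptic feedback command once the index nears lift-off."""
    return min(1.0, abs(index) / 1.0) if abs(index) >= threshold else 0.0

idx = rollover_index(a_y=4.0, roll=0.05)   # e.g. a hard turn on a slight slope
print(idx, haptic_alert(idx))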
ABSTRACT This presentation will review the ongoing lessons learned from a joint Industry/DoD collaborative program to explore this area over the past 5 years. The discussion will review the effectiveness of integrating multiple new technologies (combined with select COTS elements) to provide a complete solution designed to reduce spares stockpiles, maximize available manpower, reduce maintenance downtime and reduce vehicle lifecycle costs. A number of new and emerging technology case studies involving diagnostic sensors (such as battery health monitors), knowledge management data accessibility, remote support-based Telematics, secure communication, condition-based software algorithms, browser-based user interfaces and web portal data delivery will be presented
Fortson, Rick; Johnson, Ken
ABSTRACT Application of human figure modeling tools and techniques has proven to be a valuable asset in the effort to examine man-machine interface problems through the evaluation of 3D CAD models of workspace designs. Digital human figure modeling has also become a key tool to help ensure that Human Systems Integration (HSI) requirements are met for US Army weapon systems and platforms. However, challenges still exist to the effective application of human figure modeling especially with regard to military platforms. For example, any accommodation analysis of these systems must not only account for the physical dimensions of the target Soldier population but also the specialized mission clothing and equipment such as body armor, hydration packs, extreme cold weather gear and chemical protective equipment to name just a few. Other design aspects such as seating, blast mitigation components, controls and communication equipment are often unique to military platforms and present special
Burns, Cheryl; Kozycki, Richard
ABSTRACT We have developed techniques for a robot to compute its expected myopic gain in performance from asking its operator specific questions, such as questions about how risky a particular movement action is around pedestrians. Coupled with a model of the operator’s costs for responding to inquiries, these techniques form the core of a new algorithm that iteratively allows the robot to decide what questions are in expectation most valuable to ask the operator and whether their value justifies potentially interrupting the operator. We have performed experiments in simple simulated robotic domains that illustrate the effectiveness of our approach
Durfee, Edmund; Karmol, David; Maxim, Michael; Singh, Satinder
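A minimal Python sketch of a myopic value-of-information test of the kind described above, assuming made-up priors, utilities, and an interruption cost; the paper's operator-cost model and question-selection loop are richer than this.

# Illustrative sketch only: should the robot ask the operator whether a
# movement action is risky, or decide on its own?

def expected_value_of_asking(p_risky, u_act_safe, u_act_risky, u_hold, ask_cost):
    """Myopic VOI: expected utility given an answer, minus the best the robot
    can do without asking, minus the cost of interrupting the operator."""
    # Best action without asking (act vs. hold position), in expectation.
    eu_no_ask = max(p_risky * u_act_risky + (1 - p_risky) * u_act_safe, u_hold)
    # With a (assumed truthful) answer, the robot acts only when it is safe.
    eu_ask = p_risky * u_hold + (1 - p_risky) * u_act_safe
    return eu_ask - eu_no_ask - ask_cost

voi = expected_value_of_asking(p_risky=0.3, u_act_safe=10.0,
                               u_act_risky=-50.0, u_hold=0.0, ask_cost=1.0)
print("ask the operator" if voi > 0 else "decide autonomously", voi)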
ABSTRACT As U.S. Army leadership continues to invest in novel technological systems to give warfighters a decisive edge for mounted and dismounted operations, the Integrated Visual Augmentation System (IVAS) and other similar systems are in the spotlight. Continuing to put capable systems that integrate fighting, rehearsing, and training operations into the hands of warfighters will be a key delineator for the future force to achieve and maintain overmatch in an all-domain operational environment populated by near-peer threats. The utility and effectiveness of these new systems will depend on the degree to which the capabilities and limitations of humans are considered in context during development and testing. This manuscript will survey how formal and informal Human Systems Integration planning can positively impact system development and will describe a Helmet Mounted Display (HMD) case study
Michelson, Stuart; Ray, Jerry
ABSTRACT The U.S. Army Combat Capabilities Development Command (DEVCOM) Ground Vehicle Systems Center (GVSC) has been developing next generation crew stations over the last several decades. In this paper, the problem space that impacts design development and decisions is discussed. This is followed by a historical overview of crewstation development activities that have evolved over the last 30 years, as well as key lessons learned that must be considered for successful ground vehicle Soldier-vehicle interactions. Lastly, the direction and critical technological focus areas are identified to exploit advancements and meet future combat vehicle system needs. Citation: T. Tierney, “A Perspective on GVSC Crewstation Development and Addressing Future Ground Combat Vehicle Needs,” In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 15-17, 2023
Tierney, Terrance M.
ABSTRACT Automated systems can have a hard time completing complex tasks in a timely manner. When controlling a robot outside of autonomous mode, a good control device needs to give the user full control of the system while enabling the mission to be completed in a quick, accurate and efficient manner. This paper outlines the potential features of a puppet style control device and the lessons learned while implementing such a device
Rusbarsky, David; Gray, Jeremy P.; Grebinoski, Jim; Mor, Andrew B.
ABSTRACT Converting vehicles from conventional manned operations to unmanned supervised operations has seen slow adoption in many industries due to cost, complexity (requiring more highly skilled personnel) and perceived lower productivity. Indeed, hazardous operations (military, nuclear cleanup, etc.) have seen the most significant implementations of robotics based solely on personnel safety. Starting in 2005, the U.S. Army Corps of Engineers (USACE) has assumed a leading role in promoting the use of robotics in unexploded ordnance (UXO) range remediation. Although personnel safety is the primary component of the USACE mission, increasing productivity while reducing overall cost is an extremely important driver behind their program. Achieving this goal demands that robotic range clearance equipment be affordable, easy to install on rental equipment, durable and reliable (to minimize downtime), low or no maintenance, and easy to learn and operate by the same individuals who would
Selfridge, Bob; Hewitt, Gregory
ABSTRACT There is a need to better understand how operators and autonomous vehicle control systems can work together in order to provide the best-case scenario for utilization of autonomous capabilities in military missions to reduce crew sizes and thus reduce labor costs. The goal of this research is to determine how different levels of autonomous capabilities in vehicles affect the operator’s situational awareness, cognitive load, and ability to respond to road events while also responding to other auditory and visual tasks. Understanding these interactions is a crucial step to eventually determining the best way to allocate tasks to crew members in missions where crew size has been reduced due to the utilization of autonomous vehicles. Citation: J. E. Cossitt, C. R. Hudson, D. W. Carruth, C. L. Bethel, “Dynamic Task Allocation and Understanding of Situation Awareness Under Different Levels of Autonomy in Closed-Hatch Military Vehicles”, In Proceedings of the Ground Vehicle Systems
Cossitt, Jessie E.; Hudson, Christopher R.; Carruth, Daniel W.; Bethel, Cindy L.
ABSTRACT Commercial OEMs are fast realizing the long-awaited dream of self-driving trucks and cars. The technology continues to improve, with major implications for the Army. In the near term, the impact may be most profound for military installations. Many believe, however, that the major limiting factor to widespread automated vehicle usage will not be technology but the human element. What happens when humans, through no choice of their own, are compelled to interact with self-driving vehicles? We propose a mixed-methods research study that examines the complex transportation system from both a technical and social perspective. This study will inform environmental controls (rules of the road and infrastructure modifications) and increase understanding of the social dynamics involved with vehicle acceptance. Findings may pave the way for a reduction in the over $400M the Army spends annually on non-tactical vehicles and the technical improvements, grounded in dual-use use cases will be
Straub, Edward
ABSTRACT This paper presents a practical and easy to implement method for tracking the position of tele-operated Unmanned Ground Vehicles (UGVs) inside buildings, where GPS is unavailable. In conventional dead-reckoning systems, which typically use odometry combined with a single-axis gyro or an Inertial Measurement Unit (IMU), heading errors grow without bound. For that reason, tracking the position of tele-operated UGVs for more than a few minutes becomes unfeasible. Our method, called Heuristics-Enhanced Dead-reckoning (HEDR), overcomes this problem by completely eliminating heading errors at steady state in tele-operated missions of unlimited duration. As a result, HEDR allows the plotting of very accurate trajectories on the Operator Console Unit (OCU). When overlaid over an aerial photo of a building, the real-time trajectory display gives the operator crucial information about position and heading of the UGV relative to the building. This feature offers the operator much
Borenstein, Johann; Borrell, Adam; Miller, Russ; Thomas, David
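A minimal Python sketch of plain odometry-plus-gyro dead reckoning, included only to show where a heading-correction heuristic like HEDR's would act; the heuristic itself is not reproduced, and all numbers are assumptions.

# Illustrative sketch only: track a tele-operated UGV's pose indoors from
# track odometry and a z-axis gyro.
import math

class DeadReckoner:
    def __init__(self):
        self.x = self.y = self.heading = 0.0

    def update(self, d_left, d_right, gyro_yaw_rate, dt):
        """d_left/d_right: track travel since the last update (m);
        gyro_yaw_rate: z-axis rate from the IMU (rad/s)."""
        distance = 0.5 * (d_left + d_right)
        self.heading += gyro_yaw_rate * dt
        # <-- An HEDR-style heuristic heading correction would be applied here,
        #     e.g. when odometry indicates the vehicle is driving straight.
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)
        return self.x, self.y, self.heading

dr = DeadReckoner()
for _ in range(100):                       # ~1 s of driving straight at 1 m/s
    dr.update(0.01, 0.01, 0.0005, 0.01)    # a small gyro bias drifts the heading
print(dr.update(0.01, 0.01, 0.0005, 0.01))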
Crew Station design in the physical realm is complex and expensive due to the cost of fabrication and the time required to reconfigure the necessary hardware for human factors studies and optimization of space claim. However, recent advances in Virtual Reality (VR) and hand tracking technologies have enabled a paradigm shift in the process. The Ground Vehicle Systems Center has developed an innovative approach using VR technologies to enable a trade space exploration capability. It provides crews the ability to place touchscreens and switch panels as desired and then lock them into place to perform a fully recorded simulation of operating the vehicle through virtual terrain, maneuvering through firing points and engaging moving and static targets during virtual night and day missions with simulated sensor effects for infrared and night vision. Human factors are explored and studied using hand tracking, which enables operators to check reach by interacting with virtual components
Agusti, Rachel S.; Brown, David; Kovacin, Kyle; Smith, Aaron; Hackenbruch, Rachel N.; Hess, David; Simmons, Caleb B.; Stewart, Colin
Today’s intelligent robots can accurately recognize many objects through vision and touch. Tactile information, obtained through sensors, along with machine learning algorithms, enables robots to identify objects previously handled
Semi-automated computational design methods involving physics-based simulation, optimization, machine learning, and generative artificial intelligence (AI) already allow greatly enhanced performance alongside reduced cost in both design and manufacturing. As we progress, developments in user interfaces, AI integration, and automation of workflows will increasingly reduce the human inputs required to achieve this. With this, engineering teams must change their mindset from designing products to specifying requirements, focusing their efforts on testing and analysis to provide accurate specifications. Generative Design in Aerospace and Automotive Structures discusses generative design in its broadest sense, including the challenges and recommendations regarding multi-stage optimizations.
Muelaner, Jody Emlyn
Homologation is an important process in vehicle development, and aerodynamics is a main data contributor. The process is heavily interconnected: Production planning defines the available assemblies. Construction defines their parts and features. Sales defines the assemblies offered in different markets, while Legislation defines the rules applicable to homologation. Control engineers define the behavior of active, aerodynamically relevant components. Wind tunnels are the main test tool for homologation, accompanied by surface-area measurement systems. Mechanics support these test operations. Prototype management provides test vehicles, while parts come from various production and prototyping sources and are stored and commissioned by logistics. Several phases of this complex process share the same context: Production timelines for assemblies and parts for each chassis-engine package define which drag coefficients or drag coefficient contributions shall be determined. Absolute and
Jacob, Jan D.
Using electrical impedance tomography (EIT), researchers have developed a system using a flexible tactile sensor for objective evaluation of fine finger movements. Demonstrating high accuracy in classifying diverse pinching motions, with discrimination rates surpassing 90 percent, this innovation holds potential in cognitive development and automated medical research
The lane departure warning (LDW) system is a warning system that alerts drivers if they are drifting (or have drifted) out of their lane or from the roadway. This warning system is designed to reduce the likelihood of crashes resulting from unintentional lane departures (e.g., run-off-road, side collisions, etc.). This system will not take control of the vehicle; it will only let the driver know that he/she needs to steer back into the lane. An LDW is not a lane-change monitor, which addresses intentional lane changes, or a blind spot monitoring system, which warns of other vehicles in adjacent lanes. This informational report applies to original equipment manufacturer and aftermarket LDW systems for light-duty vehicles (gross vehicle weight rating of no more than 8500 pounds) on relatively straight roads with a radius of curvature of 500 m or more and under good weather conditions
Advanced Driver Assistance Systems (ADAS) Committee
Computer modelling, virtual prototyping and simulation are widely used in the automotive industry to optimize the development process. While the use of CAE is widespread, on its own it lacks the ability to provide observable acoustics or tactile vibrations for decision makers to assess, and hence optimize, the customer experience. Subjective assessment using Driver-in-the-Loop simulators to experience data has been shown to improve the quality of vehicles and reduce development time and uncertainty. Efficient development processes require a seamless interface from detailed CAE simulation to subjective evaluations suitable for high-level decision makers. In the context of perceived vehicle vibration, the need for a bridge between complex CAE data and realistic subjective evaluation of tactile response is most compelling. A suite of VI-grade noise and vibration simulators has been developed to meet this challenge. In the process of developing these solutions, VI-grade has identified the need
Franks, Graham; Tcherniak, Dmitri; Kennings, Paul; Allman-Ward, Mark; Kuhmann, Marvin