ABSTRACT This study investigated the effect of an innovative chilling device intended to make subjects more alert and less sleepy. Tests were conducted using a variety of methods including electroencephalography (EEG) brain topography. A series of behavioral tests showed an increase in alertness, changes in body temperature, and changes in performance indicators after use of the device. The device chills specific areas of the body and disrupts the body’s ability to self-regulate core body temperature. The induced temperature shifts may reduce the body’s ability to fall asleep. Physiological changes and brain-wave indicators of alertness are also reviewed in this paper. A full study of alertness indicators in expanded driver simulations is recommended. As for future Human Factors applications, this device may have the potential to enhance alertness in the human dimension of machine operation of manned and unmanned assets with further improvement
ABSTRACT This paper discusses the design and implementation of an interactive mixed reality cockpit that enhances Soldier-vehicle interaction by providing a 360-degree situational awareness system. The cockpit uses indirect vision, where cameras outside the vehicle provide a video feed of the surroundings to the cockpit. The cockpit also includes a virtual information dashboard that displays real-time information about the vehicle, mission, and crew status. The visualization of the dashboard is based on past research in information visualization, allowing Soldiers to quickly assess their operational state. The paper presents the results of a usability study on the effectiveness of the mixed reality cockpit, which compared the Vitreous interface, a Soldier-centered mixed reality head-mounted display, with two other interface and display technologies. The study found that the Vitreous UI resulted in better driving performance and better subjective evaluation of the ability to actively
ABSTRACT The goal of the human factors engineer is to work within the systems engineering process to ensure that a Crew Centric Design approach is utilized throughout system design, development, fielding, sustainment, and retirement. To evaluate the human interface, human factors engineers must often start with a low fidelity mockup, or virtual model, of the intended design until a higher fidelity physical representation or the working hardware is available. Testing the Warrior-Machine Interface needs to begin early and continue throughout the Crew Centric Design process to ensure optimal soldier performance. This paper describes a Four Step Process to achieve this goal and how it has been applied to the ground combat vehicle programs. Using these four steps in the ground combat vehicle design process improved design decisions by including the user throughout the process either in virtual or real form, and applying the user’s operational requirements to drive the design
ABSTRACT The study describes the development of a plug-in module for the realistic 3D Digital Human Modeling (DHM) tool RAMSIS that is used to optimize product development of military vehicle systems. The use of DHM in product development has been established for years. DHM for the development of military vehicles requires not only the representation of the vehicle occupants, but also the representation of equipment and simulation of the impact of such equipment on the Warfighter. To realistically simulate occupants in military vehicles, whether land- or air-based, equipment must become an integral part of the extended human model. Simply attaching CAD geometry to one of the manikin’s elements is not sufficient. Equipment size needs to be scalable with respect to anthropometry, and its impact on joint mobility needs to be considered with respect to anatomy. These aspects must be integrated into posture prediction algorithms to generate objective, reliable, and reproducible results to help design engineers
ABSTRACT The complexity of the current and future security environment will impose new and ever-changing challenges to Warfighter capabilities. Given the critical nature of Soldier cognitive performance in meeting these increased demands, systems should be designed to work in ways that are consistent with human cognitive function. Here, we argue that traditional approaches to understanding the human and cognitive dimensions of systems development cannot always provide an adequate understanding of human cognitive performance. We suggest that integrating neuroscience approaches and knowledge provides unique opportunities for understanding human cognitive function. Such an approach has the potential to enable more effective systems design – that is, neuroergonomic design – and that it is necessary to obtain these understandings within complex, dynamic environments. Ongoing research efforts utilizing large-scale ride motion simulations that allow researchers to systematically constrain
Abstract A necessary, but not sufficient, condition for innovation is that it be different. Given this, a technique is proposed to develop innovative solutions at each step of a systems engineering based product development effort. This technique, while not guaranteeing results, allows ventures into innovation which can be planned, scheduled, and measured
ABSTRACT This research proposes a human-multirobot system with semi-autonomous ground robots and a UAV view for contaminant localization tasks. A novel Augmented Reality based operator interface has been developed. The interface uses an over-watch camera view of the robotic environment and allows the operator to direct each robot individually or in groups. It uses an A* path planning algorithm to ensure obstacles are avoided and frees the operator for higher-level tasks. It also displays sensor information from each individual robot directly on the robot in the video view. In addition, a combined sensor view can also be displayed, which helps the user pinpoint source information. The sensors on each robot monitor the contaminant levels, and a virtual display of the levels is given to the user, allowing them to direct the multiple ground robots toward the hidden target. This paper reviews the user interface and describes several initial usability tests that were performed. This research
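The A* planner mentioned in this abstract can be illustrated with a minimal grid-based sketch (not the paper's implementation; the occupancy grid, unit step costs, and Manhattan heuristic are assumptions for illustration):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Uses the admissible Manhattan-distance heuristic; returns the path
    as a list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # must detour around the obstacle row
```

A planner of this kind is what frees the operator for higher-level tasks: the interface only needs a goal per robot, not continuous steering around each obstacle.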
ABSTRACT The use and operation of unmanned systems are becoming more commonplace and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data driven and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best practice human factors engineering in the form of human machine interfaces and user-centered design for robotic/unmanned control systems integrated early in platform concept and design phases can significantly impact platform
ABSTRACT Military personnel involved in convoy operations are often required to complete multiple tasks within tightly constrained timeframes, based on limited or time-sensitive information. Current simulations are often lacking in fidelity with regard to team interaction and automated agent behavior; particularly problematic areas include responses to obstacles, threats, and other changes in conditions. More flexible simulations are needed to support decision making and train military personnel to adapt to the dynamic environments in which convoys regularly operate. A hierarchical task analysis approach is currently being used to identify and describe the many tasks required for effective convoy operations. The task decomposition resulting from the task analysis provides greater opportunity for determining decision points and potential errors. The results of the task analysis will provide guidance for the development of more targeted simulations for training and model evaluation from
ABSTRACT Latencies as small as 170 msec significantly degrade ground vehicle teleoperation performance and latencies greater than a second usually lead to a “move and wait” style of control. TORIS (Teleoperation Of Robots Improvement System) mitigates the effects of latency by providing the operator with a predictive display showing a synthetic latency-corrected view of the robot’s relationship to the local environment and control primitives that remove the operator from the high-frequency parts of the robot control loops. TORIS uses operator joystick inputs to specify relative robot orientations and forward travel distances rather than rotational and translational velocities, with control loops on the robot making the robot achieve the commanded sequence of poses. Because teleoperated ground vehicles vary in sensor suite and on-board computation, TORIS supports multiple predictive display methods. Future work includes providing obstacle detection and avoidance capabilities to support
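The pose-command approach described for TORIS, mapping joystick inputs to relative orientations and travel distances rather than velocities, can be sketched as follows (a toy illustration; the scaling constants and the dead-reckoning update are assumptions, not TORIS code):

```python
import math

def joystick_to_pose_command(jx, jy, max_turn_deg=45.0, max_dist_m=2.0):
    """Map joystick deflection (jx, jy in [-1, 1]) to a relative pose goal:
    a heading change and a forward travel distance, instead of rotational
    and translational velocities (illustrative scaling constants)."""
    return jx * max_turn_deg, max(0.0, jy) * max_dist_m

def predict_pose(x, y, heading_deg, turn_deg, dist_m):
    """Dead-reckon the commanded pose so a predictive display can render
    the robot there immediately, before delayed video confirms it."""
    new_heading = (heading_deg + turn_deg) % 360.0
    rad = math.radians(new_heading)
    return (x + dist_m * math.cos(rad),
            y + dist_m * math.sin(rad),
            new_heading)

turn, dist = joystick_to_pose_command(0.5, 1.0)   # half-right, full forward
x, y, heading = predict_pose(0.0, 0.0, 0.0, turn, dist)
```

The design point is that a pose command is latency-tolerant: a late-arriving command does not keep the robot driving, because the on-board control loops stop at the commanded pose rather than integrating a stale velocity.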
Summary Combat vehicle designers have made great progress in improving crew survivability against large blast mines and improvised explosive devices. Current vehicles are very resistant to hull failure from large blasts, protecting the crew from overpressure and behind-armor debris. However, the crew is still vulnerable to shock injuries arising from the blast and its after-effects. One of these injury modes is spinal compression resulting from the shock loading of the crew seat. This can be ameliorated by installing energy-absorbing seats, which reduce the intensity of the spinal loading while spreading it out over a longer time. The key question for energy-absorbing seats concerns the effect of various design factors on spinal compression and injury. These include the stiffness and stroking distance of the seat’s energy-absorption mechanism, the size of the blast, the vehicle shape and mass, and the weight of the seat occupant. All of these
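The trade-off described above, a longer stroking distance lowering the spinal load while lengthening the pulse, can be illustrated with an idealized constant-force absorber (a back-of-envelope sketch, not a validated injury model; the occupant mass, impulse velocity, and stroke values are assumed for illustration):

```python
def stroking_seat_load(occupant_mass_kg, seat_velocity_ms, stroke_m):
    """Idealized constant-force energy absorber: the occupant's velocity
    change is dissipated uniformly over the stroking distance.
    Returns (force_N, deceleration_g, pulse_duration_s)."""
    g = 9.81
    a = seat_velocity_ms ** 2 / (2.0 * stroke_m)   # from v^2 = 2*a*s
    return occupant_mass_kg * a, a / g, seat_velocity_ms / a

# Doubling the stroke halves the force and doubles the pulse duration:
short = stroking_seat_load(80.0, 6.0, 0.15)   # 80 kg occupant, 6 m/s impulse
long_ = stroking_seat_load(80.0, 6.0, 0.30)
```

This captures the summary's point in miniature: stroke distance and absorber force trade directly against pulse duration, which is why occupant weight and blast size both matter to the design.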
ABSTRACT This paper surveys the state of autonomous systems and outlines a novel command and control (C2) paradigm that seeks to accommodate the environmental challenges facing warfighters and their robotic counterparts in the future. New interface techniques will be necessary to reinforce the paradigm that supports the C2 of multiple human-machine teams completing diverse missions as part of the Third Offset Strategy. Realizing this future will require a new approach to teaming and interfaces that fully enable the potential of independent and cooperative decision-making abilities of fully autonomous machines while maximizing the effectiveness of human operators on the battlefield
ABSTRACT Imagine Soldiers reacting to an unpredictable, dynamic, stressful situation on the battlefield. How those Soldiers think about the information presented to them by the system or other Soldiers during this situation – and how well they translate that thinking into effective behaviors – is critical to how well they perform. Importantly, those thought processes (i.e., cognition) interact with both external (e.g., the size of the enemy force, weather) and internal (e.g., ability to communicate, personality, fatigue level) factors. The complicated nature of these interactions can have dramatic and unexpected consequences, as is seen in the analysis of military and industrial disasters, such as the shooting down of Iran Air flight 655, or the partial core meltdown on Three Mile Island. In both cases, decision makers needed to interact with equipment and personnel in a stressful, dynamic, and uncertain environment. Similarly, the complex and dynamic nature of the contemporary
ABSTRACT We have developed techniques for a robot to compute its expected myopic gain in performance from asking its operator specific questions, such as questions about how risky a particular movement action is around pedestrians. Coupled with a model of the operator’s costs for responding to inquiries, these techniques form the core of a new algorithm that iteratively allows the robot to decide what questions are in expectation most valuable to ask the operator and whether their value justifies potentially interrupting the operator. We have performed experiments in simple simulated robotic domains that illustrate the effectiveness of our approach
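The core decision in this abstract, asking only when the expected myopic gain justifies interrupting the operator, can be sketched as follows (a simplified illustration of the idea; the query structure, probabilities, and costs are assumptions, not the authors' algorithm):

```python
def expected_value_of_query(answer_probs, gains, interruption_cost):
    """Myopic value of information: expected performance gain from the
    operator's answer, minus the cost of interrupting the operator.
    answer_probs: {answer: probability}; gains: {answer: gain if received}."""
    expected_gain = sum(p * gains[a] for a, p in answer_probs.items())
    return expected_gain - interruption_cost

def best_query(queries, interruption_cost):
    """Return the name of the highest-value query, or None if no query's
    expected gain justifies the interruption."""
    scored = [(expected_value_of_query(q["p"], q["gain"], interruption_cost),
               q["name"]) for q in queries]
    voi, name = max(scored)
    return name if voi > 0 else None

# Hypothetical query: "how risky is this movement around pedestrians?"
risk_query = {"name": "pedestrian_risk",
              "p":    {"risky": 0.3, "safe": 0.7},
              "gain": {"risky": 10.0, "safe": 1.0}}   # expected gain = 3.7
```

With an interruption cost below the 3.7 expected gain the robot asks; above it, the robot stays silent and acts on its current model.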
ABSTRACT BAE Systems Combat Simulation and Integration Labs (CSIL) are a culmination of more than 14 years of operational experience at our SIL facility in Santa Clara. The SIL provides primary integration and test functions over the entire life cycle of a combat vehicle’s development. The backbone of the SIL operation is the Simulation-Emulation-Stimulation (SES) process. The SES process has successfully supported BAE Systems US Combat Systems (USCS) SIL activities for many government vehicle development programs. The process enables SIL activities in vehicle design review, 3D virtual prototyping, human factor engineering, and system & subsystem integration and test. This paper describes how CSIL applies the models, software, and hardware components in a hardware-in-the-loop environment to support USCS combat vehicle development in the system integration lab
ABSTRACT Time lags are known to reduce performance in human-in-the-loop control systems. Performance decrements for human-in-the-loop control systems as a result of time lags are generally associated with the operator’s inability to predict the outcome of their control input and are dependent upon the characteristics of the lag (e.g., magnitude and variability). Further, the effects of variable time lags are not well studied or understood, but may exacerbate the effects on human control actions observed with fixed lags. Several studies have demonstrated mechanisms that can help combat the effects of lag including adaptation, mathematical predictors (e.g., filters), and predictive displays. This experiment examined the effects of lag and lag variability on a simulated driving task, as well as a possible mitigation (predictive display) for the effects of lag. Results indicated that lag variability significantly reduced driving performance, and that the predictive display significantly
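The effect of feedback lag, and its mitigation by a predictive display, can be illustrated with a toy one-dimensional pursuit task (purely illustrative; the plant, operator gain, and drift model are assumptions, not the experiment's driving simulator):

```python
import random

def simulate_tracking(lag_steps, use_predictor, steps=200, seed=0):
    """Toy 1-D pursuit task: the operator steers toward a drifting target
    but sees the vehicle's position lag_steps samples late. With the
    predictor enabled, the display dead-reckons the stale position forward
    using the operator's own not-yet-displayed commands, as a predictive
    display would. Returns mean absolute tracking error (lower is better)."""
    rng = random.Random(seed)
    target = pos = 0.0
    delayed = [0.0] * (lag_steps + 1)   # stale position feed
    commands, err_sum = [], 0.0
    for _ in range(steps):
        target += rng.uniform(-0.5, 0.5)          # target drifts
        seen = delayed[0]
        if use_predictor and lag_steps > 0:
            seen += sum(commands[-lag_steps:])    # replay pending commands
        cmd = 0.2 * (target - seen)               # proportional correction
        commands.append(cmd)
        pos += cmd
        delayed = delayed[1:] + [pos]
        err_sum += abs(target - pos)
    return err_sum / steps

no_lag    = simulate_tracking(0, False)
lagged    = simulate_tracking(5, False)
predicted = simulate_tracking(5, True)
```

In this noise-free plant the dead-reckoning predictor exactly recovers the lag-free behavior, which overstates what a real predictive display achieves but shows the mechanism: the operator corrects against a current estimate rather than a stale view.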
ABSTRACT As U.S. Army leadership continues to invest in novel technological systems to give warfighters a decisive edge for mounted and dismounted operations, the Integrated Visual Augmentation System (IVAS) and other similar systems are in the spotlight. Continuing to put capable systems that integrate fighting, rehearsing, and training operations into the hands of warfighters will be a key delineator for the future force to achieve and maintain overmatch in an all-domain operational environment populated by near-peer threats. The utility and effectiveness of these new systems will depend on the degree to which the capabilities and limitations of humans are considered in context during development and testing. This manuscript will survey how formal and informal Human Systems Integration planning can positively impact system development and will describe a Helmet Mounted Display (HMD) case study
ABSTRACT In this paper, we propose a new approach to developing advanced simulation environments for use in performing human-subject experiments. We call this approach the mission-based scenario. The mission-based scenario aims to: 1) Situate experiments within a realistic mission context; 2) Incorporate tasks, task loadings, and environmental interactions that are consistent with the mission’s operational context; and 3) Permit multiple sequences of actions/tasks to complete mission objectives. This approach will move us beyond more traditional, tightly-scripted experimental scenarios, and will employ concepts from interactive narrative as well as nonlinear game play approaches to video game design to enhance the richness and realism of Soldier-task-environment interactions. In this paper, we will detail the rationale for adopting such an approach and present a discussion of significant concepts that have guided a proof-of-concept test program of the mission-based scenario, which we
ABSTRACT The objective of this effort is to create a parametric Computer-Aided Design (CAD) accommodation model for the Fixed Heel Point (FHP) driver and crew workstations with specific tasks. The FHP model is a statistical model that was created utilizing data from the Seated Soldier Study (Reed and Ebert, 2013). The final product is a stand-alone CAD model that provides geometric boundaries indicating the required space and adjustments needed for the equipped Soldiers’ helmet, eyes, torso, knees, and seat travel. Clearances between the Soldier and surrounding interior surfaces and direct field of view have been added per MIL-STD-1472G. This CAD model can be applied early in the vehicle design process to ensure accommodation requirements are met and help explore possible design tradeoffs when conflicts with other design parameters exist. The CAD model will be available once it has undergone Verification, Validation, and Accreditation (VV&A) and a user guide has been written
ABSTRACT Designing robots for military applications requires a greater understanding between the engineer and the Soldier. Soldier considerations result from experiences not common to the engineer in the lab and, when understood, can minimize the design time and provide a more capable product that is more readily deployed into the unit
ABSTRACT Lay error, defined as “the gunner’s inability to lay the sight crosshairs exactly on the center of the target,” is a primary source of error in fire control. To evaluate the potential implementation of computer vision and artificial intelligence algorithms for improving gunners’ performance or enabling autonomous targeting, it is crucial for the US Army to establish a benchmark of human performance as a reference point. In this study, we present preliminary results of a human subject study conducted to establish such a baseline. Using the Unreal Engine [1], we developed a photorealistic simulation environment with various targets. Fifteen individuals meeting the military applicant criteria in terms of age were assigned the task of aligning crosshairs on targets at multiple ranges and under different motion conditions. Each participant fired at 240 targets, resulting in a total of 3600 shots fired. We collected and analyzed data including lay error and time to fire. The
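Lay error as defined above is an angular quantity; a small helper can convert a linear miss in the target plane to an angle (an illustrative metric assuming the NATO convention of 6400 mils per circle; this is not the study's scoring code):

```python
import math

def lay_error_mils(aim_x, aim_y, tgt_x, tgt_y, range_m):
    """Angular offset between the crosshair lay point and the target
    center, in NATO mils (6400 per full circle). Aim and target
    coordinates are meters in the target plane at the given range."""
    miss_m = math.hypot(aim_x - tgt_x, aim_y - tgt_y)
    return math.atan2(miss_m, range_m) * 6400.0 / (2.0 * math.pi)

err = lay_error_mils(0.5, 0.0, 0.0, 0.0, 1000.0)   # 0.5 m miss at 1 km
```

Expressing the miss angularly lets performance at multiple ranges be compared on one scale, which is what a human-performance baseline across ranges requires.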
Crew station design in the physical realm is complex and expensive due to the cost of fabrication and the time required to reconfigure hardware for human factors studies and optimization of space claim. However, recent advances in Virtual Reality (VR) and hand-tracking technologies have enabled a paradigm shift in the process. The Ground Vehicle Systems Center has developed an innovative approach using VR technologies to enable a trade-space exploration capability. It gives crews the ability to place touchscreens and switch panels as desired, then lock them into place and perform a fully recorded simulation of operating the vehicle through virtual terrain, maneuvering through firing points and engaging moving and static targets during virtual night and day missions with simulated sensor effects for infrared and night vision. Human factors are explored and studied using hand tracking, which enables operators to check reach by interacting with virtual components
At the InCabin USA vehicle technology expo in Detroit, Ford customer research lead Susan Shaw said that the sea of letters around ADAS features, and the control and indicator icons that vary between vehicles, are often confusing to drivers. Shaw pointed out that the following all represent features related to driving lanes: LDW, LKA, LKS, LFA, LCA. These initialisms (abbreviations read letter by letter) are not the only ways the industry refers to these technologies, as some OEMs have their own names for similar things. It all contributes to what can be dangerous assumptions on the part of a driver. “It's shocking how many people think their vehicle will apply the brakes in an emergency, when the car has no such system,” she said. As an overview to the subject of control and indicator iconography, Shaw began with an introduction to user experience research by way of a classic example from Don Norman, author of “The Design of Everyday Things.” A so-called Norman door is any door that is
This SAE Systems Management Standard specifies the Habitability processes throughout planning, design, development, test, production, use, and disposal of a system. Depending on contract phase and/or complexity of the program, tailoring of this standard may be applied. Appendix C provides guidance on tailoring standard requirements to fit the various DoD acquisition pathways. The primary goals of a contractor Habitability program include: ensuring that the system design complies with the customer Habitability requirements and that discrepancies are reported to management and the customer; and identifying, coordinating, tracking, prioritizing, and resolving Habitability risks and issues and ensuring that they are:
◦ Reflected in the contractor proposal, budgets, and plans.
◦ Raised at design, management, and program reviews.
◦ Debated in working group meetings.
◦ Coordinated with training, logistics, and the other HSI disciplines.
◦ Included appropriately in documentation and deliverable
Artificial intelligence (AI)-based solutions are steadily making their way into mobile devices and other parts of our daily lives. By integrating AI into vehicles, many manufacturers are working toward developing autonomous cars. However, as of today, no consumer-ready autonomous vehicle (AV) has reached SAE Level 5 automation. To develop a consumer-ready AV, numerous problems need to be addressed. In this chapter we present a few of these unaddressed issues related to human-machine interaction design. They include interface implementation, speech interaction, emotion regulation, emotion detection, and driver trust. For each of these aspects, we present the subject in detail, including the area’s current state of research and development, its current challenges, and proposed solutions worth exploring
Connected and autonomous vehicles (CAVs) and their productization are a major focus of the automotive and mobility industries as a whole. However, despite significant investments in this technology, CAVs are still at risk of collisions, particularly in unforeseen circumstances or “edge cases.” It is also critical to ensure that redundant environmental data are available to provide additional information for the autonomous driving software stack in case of emergencies. Additionally, vehicle-to-everything (V2X) technologies can be included in discussions on safer autonomous driving design. Recently, there has been a slight increase in interest in the use of responder-to-vehicle (R2V) technology for emergency vehicles, such as ambulances, fire trucks, and police cars. R2V technology allows for the exchange of information between different types of responder vehicles, including CAVs. It can be used in collision avoidance or emergency situations involving CAV responder vehicles. The