Browse Topic: Human factors
This SAE Recommended Practice describes two-dimensional, 95th percentile truck driver, side view, seated shin-knee contours for both the accelerator operating leg and the clutch operating leg for horizontally adjustable seats (see Figure 1). There is one contour for the clutch shin-knee and one contour for the accelerator shin-knee. There are three locating equations for each curve to accommodate male-to-female ratios of 50:50, 75:25, and 90:10 to 95:5.
This Recommended Practice provides procedures for defining the Accelerator Heel Point and the Accommodation Tool Reference Point, a point on the seat H-point travel path which is used for locating various driver workspace accommodation tools in Class B vehicles (heavy trucks and buses). Three accommodation tool reference points are available depending on the percentages of males and females in the expected driver population (50:50, 75:25, and 90:10 to 95:5). These procedures are applicable to both the SAE J826 HPM and the SAE J4002 HPM-II.
This SAE Recommended Practice describes two-dimensional, 95th percentile truck driver, side view, seated stomach contours for horizontally adjustable seats (see Figure 1). There is one contour and three locating lines to accommodate male-to-female ratios of 50:50, 75:25, and 90:10 to 95:5.
This Recommended Practice provides a procedure to locate driver seat tracks, establish seat track length, and define the SgRP in Class B vehicles (heavy trucks and buses). Three sets of equations that describe where drivers position horizontally adjustable seats are available for use in Class B vehicles depending on the percentages of males to females in the expected driver population (50:50, 75:25, and 90:10 to 95:5). The equations can also be used as a checking tool to estimate the level of accommodation provided by a given length of horizontally adjustable seat track. These procedures are applicable for both the SAE J826 HPM and the SAE J4002 HPM-II.
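The selection equations themselves are not reproduced in this listing, but the checking-tool idea can be illustrated with a generic sketch. The snippet below assumes drivers' selected horizontal seat positions follow a normal distribution; the mean and standard deviation are placeholder values, not figures from the Recommended Practice.

```python
# Sketch only: the actual SAE seat-position equations are not reproduced here.
# Assumes drivers' selected horizontal seat positions are normally distributed;
# MEAN_X and SD_X are placeholder values, not SAE figures.
from math import erf, sqrt

MEAN_X = 0.0    # hypothetical mean selected seat position, mm (track midpoint)
SD_X = 30.0     # hypothetical standard deviation of selected positions, mm

def normal_cdf(x, mu, sigma):
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def accommodation(track_front_x, track_rear_x):
    """Fraction of the driver population whose preferred seat position
    falls within the available horizontal track travel."""
    lo, hi = sorted((track_front_x, track_rear_x))
    return normal_cdf(hi, MEAN_X, SD_X) - normal_cdf(lo, MEAN_X, SD_X)

# Example: a 140 mm track centered on the hypothetical mean position.
print(f"Estimated accommodation: {accommodation(-70.0, 70.0):.1%}")
```

Swapping in population-specific distribution parameters for a 50:50, 75:25, or 90:10 to 95:5 male-to-female mix would yield the corresponding accommodation estimate for a candidate track length.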
This SAE Recommended Practice establishes three alternate methods for describing and evaluating the truck driver's viewing environment: the Target Evaluation, the Polar Plot and the Horizontal Planar Projection. The Target Evaluation describes the field of view volume around a vehicle, allowing for ray projections, or other geometrically accurate simulations, that demonstrate areas visible or non-visible to the driver. The Target Evaluation method may also be conducted manually, with appropriate physical layouts, in lieu of CAD methods. The Polar Plot presents the entire available field of view in an angular format, onto which items of interest may be plotted, whereas the Horizontal Planar Projection presents the field of view at a given elevation chosen for evaluation. These methods are based on the Three Dimensional Reference System described in SAE J182a. This document relates to the driver's exterior visibility environment and was developed for the heavy truck industry (Class B).
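As a rough illustration of the angular (Polar Plot) format, the sketch below converts a world-space item of interest into azimuth and elevation angles about a driver eye point. The eye-point and target coordinates are made-up values, and the document's actual construction details are not reproduced here.

```python
# Minimal sketch: project a world-space point of interest into the angular
# (azimuth/elevation) format used by a polar-plot style field-of-view chart.
# The eye point and target coordinates below are illustrative only.
from math import atan2, degrees, hypot

def azimuth_elevation(eye, point):
    """Return (azimuth, elevation) in degrees of `point` as seen from `eye`.
    Azimuth is measured in the horizontal (X-Y) plane; elevation from it."""
    dx = point[0] - eye[0]
    dy = point[1] - eye[1]
    dz = point[2] - eye[2]
    azimuth = degrees(atan2(dy, dx))
    elevation = degrees(atan2(dz, hypot(dx, dy)))
    return azimuth, elevation

eye_point = (0.0, 0.0, 1.6)   # hypothetical driver eye point, metres
target = (10.0, -3.0, 0.5)    # hypothetical item of interest near the vehicle
print(azimuth_elevation(eye_point, target))
```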
Innovators at NASA Johnson Space Center have developed an adjustable thermal control ball valve (TCBV) assembly that utilizes a unique geometric ball valve design to facilitate precise thermal control within a spacesuit. The technology meters the coolant flow going to the cooling and ventilation garment, worn by an astronaut in the next-generation spacesuit, which expels waste heat during extravehicular activities (EVAs), or spacewalks.
This Recommended Practice shall apply to all on-highway trucks and truck-tractors equipped with air brake systems and having a GVW rating of 26 000 lb or more.
ABSTRACT In this paper, we propose a new approach to developing advanced simulation environments for use in performing human-subject experiments. We call this approach the mission-based scenario. The mission-based scenario aims to: 1) Situate experiments within a realistic mission context; 2) Incorporate tasks, task loadings, and environmental interactions that are consistent with the mission’s operational context; and 3) Permit multiple sequences of actions/tasks to complete mission objectives. This approach will move us beyond more traditional, tightly-scripted experimental scenarios, and will employ concepts from interactive narrative as well as nonlinear game play approaches to video game design to enhance the richness and realism of Soldier-task-environment interactions. In this paper, we will detail the rationale for adopting such an approach and present a discussion of significant concepts that have guided a proof-of-concept test program of the mission-based scenario, which we …
ABSTRACT Imagine Soldiers reacting to an unpredictable, dynamic, stressful situation on the battlefield. How those Soldiers think about the information presented to them by the system or other Soldiers during this situation – and how well they translate that thinking into effective behaviors – is critical to how well they perform. Importantly, those thought processes (i.e., cognition) interact with both external (e.g., the size of the enemy force, weather) and internal (e.g., ability to communicate, personality, fatigue level) factors. The complicated nature of these interactions can have dramatic and unexpected consequences, as is seen in the analysis of military and industrial disasters, such as the shooting down of Iran Air Flight 655, or the partial core meltdown on Three Mile Island. In both cases, decision makers needed to interact with equipment and personnel in a stressful, dynamic, and uncertain environment. Similarly, the complex and dynamic nature of the contemporary …
ABSTRACT We have developed techniques for a robot to compute its expected myopic gain in performance from asking its operator specific questions, such as questions about how risky a particular movement action is around pedestrians. Coupled with a model of the operator’s costs for responding to inquiries, these techniques form the core of a new algorithm that iteratively allows the robot to decide what questions are in expectation most valuable to ask the operator and whether their value justifies potentially interrupting the operator. We have performed experiments in simple simulated robotic domains that illustrate the effectiveness of our approach.
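The paper's algorithm is not reproduced here, but the core decision it describes can be sketched as a value-of-information test: ask the question with the largest expected myopic gain only if that gain outweighs the modeled cost of interrupting the operator. The gain estimates and cost below are illustrative numbers, not values from the paper.

```python
# Sketch of the core decision only (not the paper's algorithm): pick the
# candidate question with the highest expected myopic gain and ask it only
# if that gain exceeds the modeled cost of interrupting the operator.

def choose_question(candidate_gains, interruption_cost):
    """candidate_gains: dict mapping question -> expected performance gain.
    Returns the question worth asking, or None if no question justifies
    interrupting the operator."""
    if not candidate_gains:
        return None
    best_question = max(candidate_gains, key=candidate_gains.get)
    if candidate_gains[best_question] > interruption_cost:
        return best_question
    return None

# Illustrative numbers only.
gains = {
    "How risky is moving through this crowd of pedestrians?": 0.42,
    "Should I treat this doorway as traversable?": 0.15,
}
print(choose_question(gains, interruption_cost=0.25))
```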
ABSTRACT Lay error, defined as “the gunner’s inability to lay the sight crosshairs exactly on the center of the target,” is a primary source of error in fire control. To evaluate the potential implementation of computer vision and artificial intelligence algorithms for improving gunners’ performance or enabling autonomous targeting, it is crucial for the US Army to establish a benchmark of human performance as a reference point. In this study, we present preliminary results of a human subject study conducted to establish such a baseline. Using the Unreal Engine [1], we developed a photorealistic simulation environment with various targets. Fifteen individuals meeting the military applicant criteria in terms of age were assigned the task of aligning crosshairs on targets at multiple ranges and under different motion conditions. Each participant fired at 240 targets, resulting in a total of 3600 shots fired. We collected and analyzed data including lay error and time to fire. …
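As a hedged illustration of the quantity being measured, the sketch below computes lay error as the angular offset, in milliradians, between the crosshair aim direction and the direction to the target center. The vectors are sample values, not data from the study.

```python
# Illustrative only: one way to quantify lay error as the angle between the
# crosshair aim direction and the direction to the target center, reported
# in milliradians. The vectors below are made-up sample data.
from math import acos, sqrt

def lay_error_mrad(aim_dir, target_dir):
    """Angular offset between two 3-D direction vectors, in milliradians."""
    dot = sum(a * t for a, t in zip(aim_dir, target_dir))
    norm = sqrt(sum(a * a for a in aim_dir)) * sqrt(sum(t * t for t in target_dir))
    cos_angle = max(-1.0, min(1.0, dot / norm))  # guard against rounding error
    return acos(cos_angle) * 1000.0

aim = (1.0, 0.002, -0.001)    # hypothetical sight line
to_target = (1.0, 0.0, 0.0)   # hypothetical direction to the target center
print(f"{lay_error_mrad(aim, to_target):.2f} mrad")
```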
Summary Combat vehicle designers have made great progress in improving crew survivability against large blast mines and improvised explosive devices. Current vehicles are very resistant to hull failure from large blasts, protecting the crew from overpressure and behind-armor debris. However, the crew is still vulnerable to shock injuries arising from the blast and its after-effects. One of these injury modes is spinal compression resulting from the shock loading of the crew seat. This can be ameliorated by installing energy-absorbing seats, which reduce the intensity of the spinal loading while spreading it out over a longer time. The key question for energy-absorbing seats is how various design factors affect spinal compression and injury. These include the stiffness and stroking distance of the seat’s energy absorption mechanism, the size of the blast, the vehicle shape and mass, and the weight of the seat occupant. All of these …
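A back-of-envelope energy balance, not taken from the paper, shows how some of these factors trade against one another for an idealized constant-force energy absorber: the stroke needed to arrest the seat and occupant grows with occupant mass and the square of the seat velocity, and shrinks as the absorber force level rises. All numbers below are placeholders.

```python
# Back-of-envelope sketch (not from the paper): for an idealized constant-force
# energy absorber, the stroke needed to arrest the seat/occupant equals the
# kinetic energy divided by the absorber force. All values are placeholders.

def required_stroke_m(occupant_mass_kg, seat_velocity_ms, absorber_force_n):
    """Stroke (m) needed to absorb the occupant's kinetic energy at a
    constant absorber force, ignoring gravity and seat cushion effects."""
    kinetic_energy = 0.5 * occupant_mass_kg * seat_velocity_ms ** 2
    return kinetic_energy / absorber_force_n

# A heavier occupant or a lower force setting requires more stroking distance.
for mass in (60.0, 100.0):  # kg, light vs. heavy occupant
    stroke = required_stroke_m(mass, seat_velocity_ms=4.0, absorber_force_n=6000.0)
    print(f"{mass:.0f} kg occupant -> {stroke * 1000:.0f} mm stroke")
```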
ABSTRACT This paper surveys the state of autonomous systems and outlines a novel command and control (C2) paradigm that seeks to accommodate the environmental challenges facing warfighters and their robotic counterparts in the future. New interface techniques will be necessary to reinforce the paradigm that supports the C2 of multiple human-machine teams completing diverse missions as part of the Third Offset Strategy. Realizing this future will require a new approach to teaming and interfaces that fully enable the potential of independent and cooperative decision-making abilities of fully autonomous machines while maximizing the effectiveness of human operators on the battlefield.
ABSTRACT Latencies as small as 170 msec significantly degrade ground vehicle teleoperation performance and latencies greater than a second usually lead to a “move and wait” style of control. TORIS (Teleoperation Of Robots Improvement System) mitigates the effects of latency by providing the operator with a predictive display showing a synthetic latency-corrected view of the robot’s relationship to the local environment and control primitives that remove the operator from the high-frequency parts of the robot control loops. TORIS uses operator joystick inputs to specify relative robot orientations and forward travel distances rather than rotational and translational velocities, with control loops on the robot making the robot achieve the commanded sequence of poses. Because teleoperated ground vehicles vary in sensor suite and on-board computation, TORIS supports multiple predictive display methods. Future work includes providing obstacle detection and avoidance capabilities to support …
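The following is a simplified sketch of the pose-command mapping described above: joystick deflection is translated into a relative heading change and a forward travel distance rather than velocities, with a dead-reckoned preview of the commanded pose. The gain constants and function names are assumptions, not TORIS internals.

```python
# Simplified sketch of the pose-command idea: joystick deflection maps to a
# relative heading change and a forward travel distance, not velocities.
# Gains and the dead-reckoned preview are illustrative, not TORIS internals.
from dataclasses import dataclass
from math import cos, sin

MAX_TURN_RAD = 0.5   # assumed heading change at full lateral deflection
MAX_STEP_M = 1.0     # assumed travel distance at full forward deflection

@dataclass
class PoseCommand:
    delta_heading_rad: float
    forward_distance_m: float

def joystick_to_pose_command(lateral, forward):
    """Map joystick axes in [-1, 1] to a relative pose command."""
    return PoseCommand(lateral * MAX_TURN_RAD, max(0.0, forward) * MAX_STEP_M)

def predicted_pose(x, y, heading, cmd):
    """Dead-reckoned pose for a latency-corrected preview of the command."""
    new_heading = heading + cmd.delta_heading_rad
    return (x + cmd.forward_distance_m * cos(new_heading),
            y + cmd.forward_distance_m * sin(new_heading),
            new_heading)

cmd = joystick_to_pose_command(lateral=0.2, forward=0.8)
print(predicted_pose(0.0, 0.0, 0.0, cmd))
```

Because the robot's own control loops close the loop on the commanded pose, the operator is removed from the high-frequency parts of the control problem, which is the property the abstract highlights.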
ABSTRACT Designing robots for military applications requires a greater understanding between the engineer and the Soldier. Soldier considerations result from experiences not common to the engineer in the lab and, when understood, can minimize the design time and provide a more capable product that is more readily deployed into the unit.
ABSTRACT BAE Systems Combat Simulation and Integration Labs (CSIL) are the culmination of more than 14 years of operational experience at our SIL facility in Santa Clara. The SIL provides primary integration and test functions over the entire life cycle of a combat vehicle’s development. The backbone of the SIL operation is the Simulation-Emulation-Stimulation (SES) process. The SES process has successfully supported BAE Systems US Combat Systems (USCS) SIL activities for many government vehicle development programs. The process enables SIL activities in vehicle design review, 3D virtual prototyping, human factors engineering, and system and subsystem integration and test. This paper describes how CSIL applies the models, software, and hardware components in a hardware-in-the-loop environment to support USCS combat vehicle development in the system integration lab.
ABSTRACT As U.S. Army leadership continues to invest in novel technological systems to give warfighters a decisive edge for mounted and dismounted operations, the Integrated Visual Augmentation System (IVAS) and other similar systems are in the spotlight. Continuing to put capable systems that integrate fighting, rehearsing, and training operations into the hands of warfighters will be a key differentiator for the future force to achieve and maintain overmatch in an all-domain operational environment populated by near-peer threats. The utility and effectiveness of these new systems will depend on the degree to which the capabilities and limitations of humans are considered in context during development and testing. This manuscript will survey how formal and informal Human Systems Integration planning can positively impact system development and will describe a Helmet Mounted Display (HMD) case study.
ABSTRACT The objective of this effort is to create a parametric Computer-Aided Design (CAD) accommodation model for the Fixed Heel Point (FHP) driver and crew workstations with specific tasks. The FHP model is a statistical model that was created utilizing data from the Seated Soldier Study (Reed and Ebert, 2013). The final product is a stand-alone CAD model that provides geometric boundaries indicating the required space and adjustments needed for the equipped Soldiers’ helmet, eyes, torso, knees, and seat travel. Clearances between the Soldier and surrounding interior surfaces and direct field of view have been added per MIL-STD-1472G. This CAD model can be applied early in the vehicle design process to ensure accommodation requirements are met and help explore possible design tradeoffs when conflicts with other design parameters exist. The CAD model will be available once it has undergone Verification, Validation, and Accreditation (VV&A) and a user guide has been written.
ABSTRACT The goal of the human factors engineer is to work within the systems engineering process to ensure that a Crew Centric Design approach is utilized throughout system design, development, fielding, sustainment, and retirement. To evaluate the human interface, human factors engineers must often start with a low fidelity mockup, or virtual model, of the intended design until a higher fidelity physical representation or the working hardware is available. Testing the Warrior-Machine Interface needs to begin early and continue throughout the Crew Centric Design process to ensure optimal Soldier performance. This paper describes a Four Step Process to achieve this goal and how it has been applied to the ground combat vehicle programs. Using these four steps in the ground combat vehicle design process improved design decisions by including the user throughout the process either in virtual or real form, and applying the user’s operational requirements to drive the design.
ABSTRACT The study describes the development of a plug-in module of the realistic 3D Digital Human Modeling (DHM) tool RAMSIS that is used to optimize product development of military vehicle systems. The use of DHM in product development has been established for years. DHM for the development of military vehicles requires not only the representation of the vehicle occupants, but also the representation of equipment and simulation of the impact of such equipment on the Warfighter. To realistically simulate occupants in military vehicles, whether land- or air-based, equipment must become an integral part of the extended human model. Simply attaching CAD geometry to a single manikin element is not sufficient. Equipment size needs to be scalable with respect to anthropometry, and its impact on joint mobility needs to be considered with respect to anatomy. Those aspects must be integrated into posture prediction algorithms to generate objective, reliable, and reproducible results to help design engineers.
ABSTRACT Time lags are known to reduce performance in human-in-the-loop control systems. Performance decrements for human-in-the-loop control systems as a result of time lags are generally associated with the operator’s inability to predict the outcome of their control input and are dependent upon the characteristics of the lag (e.g., magnitude and variability). Further, the effects of variable time lags are not well studied or understood, but may exacerbate the effects on human control actions observed with fixed lags. Several studies have demonstrated mechanisms that can help combat the effects of lag, including adaptation, mathematical predictors (e.g., filters), and predictive displays. This experiment examined the effects of lag and lag variability on a simulated driving task, as well as a possible mitigation (predictive display) for the effects of lag. Results indicated that lag variability significantly reduced driving performance, and that the predictive display significantly …
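A minimal sketch of the predictor idea behind such displays, not the study's implementation: extrapolate the last-received vehicle state forward over the measured lag so the operator sees an estimate of the present rather than a delayed view. Constant speed and yaw rate are assumed, and the values are made up.

```python
# Minimal sketch of a predictive display's mathematical predictor: extrapolate
# the last known vehicle state forward over the measured lag so the operator
# sees an estimate of "now" rather than a delayed view. Constant speed and
# yaw rate are assumed; all values below are illustrative.
from math import cos, sin

def extrapolate_state(x, y, heading, speed, yaw_rate, lag_s):
    """Constant-velocity/yaw-rate extrapolation of pose over the lag."""
    heading_pred = heading + yaw_rate * lag_s
    x_pred = x + speed * lag_s * cos(heading)
    y_pred = y + speed * lag_s * sin(heading)
    return x_pred, y_pred, heading_pred

# Delayed telemetry: 2 m/s forward, gentle left turn, 0.17 s of lag.
print(extrapolate_state(x=5.0, y=1.0, heading=0.0, speed=2.0,
                        yaw_rate=0.1, lag_s=0.17))
```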
ABSTRACT The use and operation of unmanned systems are becoming more commonplace, and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data-driven, and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best practice human factors engineering in the form of human-machine interfaces and user-centered design for robotic/unmanned control systems integrated early in platform concept and design phases can significantly impact platform …
ABSTRACT This paper discusses the design and implementation of an interactive mixed reality cockpit that enhances Soldier-vehicle interaction by providing a 360-degree situational awareness system. The cockpit uses indirect vision, where cameras outside the vehicle provide a video feed of the surroundings to the cockpit. The cockpit also includes a virtual information dashboard that displays real-time information about the vehicle, mission, and crew status. The visualization of the dashboard is based on past research in information visualization, allowing Soldiers to quickly assess their operational state. The paper presents the results of a usability study on the effectiveness of the mixed reality cockpit, which compared the Vitreous interface, a Soldier-centered mixed reality head-mounted display, with two other interface and display technologies. The study found that the Vitreous UI resulted in better driving performance and better subjective evaluation of the ability to actively …
ABSTRACT The complexity of the current and future security environment will impose new and ever-changing challenges to Warfighter capabilities. Given the critical nature of Soldier cognitive performance in meeting these increased demands, systems should be designed to work in ways that are consistent with human cognitive function. Here, we argue that traditional approaches to understanding the human and cognitive dimensions of systems development cannot always provide an adequate understanding of human cognitive performance. We suggest that integrating neuroscience approaches and knowledge provides unique opportunities for understanding human cognitive function. Such an approach has the potential to enable more effective systems design – that is, neuroergonomic design – and that it is necessary to obtain these understandings within complex, dynamic environments. Ongoing research efforts utilizing large-scale ride motion simulations that allow researchers to systematically constrain …
Abstract A necessary, but not sufficient, condition for innovation is that it be different. Given this, a technique is proposed to develop innovative solutions at each step of a systems engineering based product development effort. This technique, while not guaranteeing results, allows ventures into innovation which can be planned, scheduled, and measured.