Browse Topic: Human factors

Items (1,082)
ABSTRACT BAE Systems Combat Simulation and Integration Labs (CSIL) are a culmination of more than 14 years of operational experience at our SIL facility in Santa Clara. The SIL provides primary integration and test functions over the entire life cycle of a combat vehicle’s development. The backbone of the SIL operation is the Simulation-Emulation-Stimulation (SES) process. The SES process has successfully supported BAE Systems US Combat Systems (USCS) SIL activities for many government vehicle development programs. The process enables SIL activities in vehicle design review, 3D virtual prototyping, human factor engineering, and system & subsystem integration and test. This paper describes how CSIL applies the models, software, and hardware components in a hardware-in-the-loop environment to support USCS combat vehicle development in the system integration lab
Lin, TC; Chang, Kevin; Johnson, Christopher; Naghshineh, Kasra; Kwon, Sung; Li, Hsi Shang
ABSTRACT The goal of the human factors engineer is to work within the systems engineering process to ensure that a Crew Centric Design approach is utilized throughout system design, development, fielding, sustainment, and retirement. To evaluate the human interface, human factors engineers must often start with a low fidelity mockup, or virtual model, of the intended design until a higher fidelity physical representation or the working hardware is available. Testing the Warrior-Machine Interface needs to begin early and continue throughout the Crew Centric Design process to ensure optimal soldier performance. This paper describes a Four Step Process to achieve this goal and how it has been applied to the ground combat vehicle programs. Using these four steps in the ground combat vehicle design process improved design decisions by including the user throughout the process either in virtual or real form, and applying the user’s operational requirements to drive the design
Vala, Marilyn; Navarre, Russell; Kempf, Peter; Smist, Thomas
ABSTRACT Maintenance of local security is essential for lethality and survivability in modern urban conflicts. Among solutions the Army is developing is an indirect-vision display (IVD) based sensor system supporting full-spectrum, 360° local area awareness. Unfortunately, such display solutions only address part of the challenge, with remaining issues spawned by the properties of human perceptual-cognitive function. The current study examined the influence of threat properties (e.g. threat type, distance, etc.) on detection performance while participants conducted a patrol through a simulated urban area. Participants scanned a virtual environment comprised of static and dynamic entities and reported those that were deemed potential threats. Results showed that the most influential variables were the characteristics of the targets; threats that appeared far away, behind the vehicle, and for short periods of time were the most likely to be missed. Thus, if an IVD system is to be effective, it
Metcalfe, Jason S.; Cosenzo, Keryl A.; Johnson, Tony; Brumm, Bradley; Manteuffel, Christopher; Evans, A. William; Tierney, Terrance
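A minimal illustration of the kind of analysis the abstract above describes: a logistic regression relating detection outcome to threat distance, exposure time, and bearing. The variable names and data below are synthetic stand-ins for illustration, not the study's materials.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: one row per presented threat (columns are illustrative)
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "distance_m": rng.uniform(10, 200, n),
    "exposure_s": rng.uniform(1, 10, n),
    "bearing": rng.choice(["front", "side", "rear"], n),
})
# Detection odds fall with distance and rear bearing, rise with exposure time
logit = 2.0 - 0.02 * df.distance_m + 0.3 * df.exposure_s - 1.0 * (df.bearing == "rear")
df["detected"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("detected ~ distance_m + exposure_s + C(bearing)", data=df).fit()
print(model.summary())  # negative distance and rear-bearing coefficients mirror the
                        # reported pattern of missed threats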
ABSTRACT The United States military stands to greatly benefit from perpetual advances in vehicle-borne 360-degree Situational Awareness (SA) systems. However, in recent years, a gap has emerged that hinders development of vehicle-borne 360 SA. At a fundamental level, military ground vehicle designers require unambiguous requirements to build effective 360-degree SA systems; and, critical decision-makers must define requirements that offer substantial operational value. To ensure that 360-degree SA systems effectively address Warfighter requirements, the military ground vehicle research and development communities must better understand vehicle-borne 360 SA evaluation parameters and their relevance to current military operations. This paper will therefore describe a set of evaluation parameters across five broad categories that are vital to effective 360-degree SA: namely, vehicle-mounted visual sensors, data transmission systems, in-vehicle displays, intelligent cuing technologies, and
Mikulski, Thomas; Berman, David
ABSTRACT The use and operation of unmanned systems are becoming more commonplace, and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data driven and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best practice human factors engineering in the form of human machine interfaces and user-centered design for robotic/unmanned control systems integrated early in platform concept and design phases can significantly impact platform
MacDonald, Brian
Crew Station design in the physical realm is complex and expensive due to the cost of fabrication and the time required to reconfigure necessary hardware to conduct studies for human factors and optimization of space claim. However, recent advances in Virtual Reality (VR) and hand tracking technologies have enabled a paradigm shift in the process. The Ground Vehicle System Center has developed an innovative approach using VR technologies to enable a trade space exploration capability: crews can place touchscreens and switch panels as desired, then lock them into place to perform a fully recorded simulation of operating the vehicle through virtual terrain, maneuvering through firing points and engaging moving and static targets during virtual night and day missions with simulated sensor effects for infrared and night vision. Human factors are explored and studied using hand tracking, which enables operators to check reach by interacting with virtual components
Agusti, Rachel S.; Brown, David; Kovacin, Kyle; Smith, Aaron; Hackenbruch, Rachel N.; Hess, David; Simmons, Caleb B.; Stewart, Colin
The increased use of computational human models in evaluation of safety systems demands greater attention to the methods selected for coupling the model to its seated environment. This study assessed the THUMS v4.0.1 in an upright driver posture and a reclined occupant posture. Each posture was gravity settled into an NCAC vehicle model to assess model quality and HBM-to-seat coupling. HBM-to-seat contact friction and seat stiffness were varied over a range of potential inputs. Gravity settling was also performed with and without constraints on the pelvis to move it towards the target H-Point. These combinations resulted in 18 simulations per posture, run for 800 ms. In addition, 5 crash pulse simulations (51.5 km/h delta V) were run to assess the effect of settling time on driver kinematics. HBM mesh quality and HBM-to-seat coupling metrics were compared at kinetically identical time points during the simulation to an end state where kinetic
Wade von Kleeck, B.; Caffrey, Juliette; Weaver, Ashley A.; Gayzik, F. Scott; Hallman, Jason
This research aims to understand how the driver interacts with the steering wheel in order to detect driving strategies. Such driving strategies will, in the future, allow accurate holistic driver models to be derived for enhancing both the safety and comfort of vehicles. An original instrumented steering wheel (ISW) allows three forces, three moments, and the grip force to be measured at each hand. Experiments were performed with 10 nonprofessional drivers in a high-end dynamic driving simulator. Three aspects of driving strategy were analyzed, namely the amplitudes of the forces and moments applied to the steering wheel, the correlations among the different signals of forces and moments, and the order of activation of the forces and moments. The results obtained in a road test were compared with those from the driving simulator, with satisfactory results. Two different strategies for actuating the steering wheel have been identified. In the first strategy, the
Previati, Giorgio; Mastinu, Gianpiero; Gobbi, Massimiliano
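A short sketch of the three analyses the abstract above lists (signal amplitudes, correlations among channels, and activation order), using synthetic stand-ins for the instrumented-steering-wheel recordings; the channel names and values are illustrative only.

import numpy as np
import pandas as pd

# Synthetic ISW-like signals: in practice the six forces/moments per hand would be
# loaded from the acquisition system
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 500)
sig = pd.DataFrame({
    "Mz_L": np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.normal(size=t.size),
    "Mz_R": 0.8 * np.sin(2 * np.pi * 0.5 * (t - 0.2)) + 0.05 * rng.normal(size=t.size),
    "Fx_L": 0.3 * np.sin(2 * np.pi * 0.5 * (t - 0.4)) + 0.05 * rng.normal(size=t.size),
})

amplitude = sig.abs().max()                      # peak amplitude per channel
corr = sig.corr()                                # correlations among the signals
onset = (sig.abs() > 0.10 * amplitude).idxmax()  # first sample above 10% of own peak
print(amplitude.round(2), corr.round(2), onset.sort_values(), sep="\n\n")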
Driving safety in mixed traffic of autonomous vehicles and conventional vehicles has always been an important research topic, especially on highways where autonomous driving technology is being more widely adopted. The merging scenario at highway ramps poses high risks with frequent vehicle conflicts, often stemming from misperceived intentions [1]. This study focuses on autonomous and conventional vehicles in merging scenarios, where timely recognition of lane-changing intentions can enhance merging efficiency and reduce accidents. First, trajectory data of merging vehicles and their conflicting vehicles were extracted from the I-80 section of the NGSIM open-source database. Segmented cubic polynomial interpolation and Savitzky–Golay filtering were used for data outlier removal and noise reduction. Second, the processed trajectory data were used as input to a hybrid Gaussian mixture-hidden Markov model (GMM-HMM) for driving intention classification, specifically lane
Ren, You; Wang, Xiyao; Song, Jiaqi; Lu, Wenyang; Li, Penglong; Li, Shangke
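A hedged sketch of the GMM-HMM classification step mentioned above, using hmmlearn and SciPy's Savitzky-Golay filter as stand-ins for the authors' pipeline; the features and synthetic trajectories are assumptions for illustration.

import numpy as np
from scipy.signal import savgol_filter
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)

def smooth(track):
    # track: (T, 2) array, e.g. lateral offset and speed per frame (illustrative features)
    return savgol_filter(track, window_length=11, polyorder=3, axis=0)

def fit_class_model(sequences, n_states=3, n_mix=2):
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    return GMMHMM(n_components=n_states, n_mix=n_mix,
                  covariance_type="diag", n_iter=50).fit(X, lengths)

# Synthetic stand-ins for processed trajectories: one model per intention class,
# classification by maximum log-likelihood
training_tracks = {
    "merge":     [rng.normal(1.0, 0.3, (60, 2)).cumsum(axis=0) for _ in range(20)],
    "keep_lane": [rng.normal(0.0, 0.3, (60, 2)) for _ in range(20)],
}
models = {label: fit_class_model([smooth(t) for t in tracks])
          for label, tracks in training_tracks.items()}

def classify(track):
    obs = smooth(track)
    return max(models, key=lambda label: models[label].score(obs))

print(classify(rng.normal(1.0, 0.3, (60, 2)).cumsum(axis=0)))  # expected: "merge"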
To identify the influences of various built environment factors on ridership at urban rail transit stations, a case study was conducted on the Changsha Metro. First, spatial and temporal distributions of the station-level AM peak and PM peak boarding ridership are analyzed. The Moran’s I test indicates that both of them show significant spatial correlations. Then, the pedestrian catchment area of each metro station is delineated using the Thiessen polygon method with an 800-m radius. The built environment factors within each pedestrian catchment area, involving population and employment, land use, accessibility, and station attributes, are collected. Finally, the mixed geographically weighted regression models are constructed to quantitatively identify the effects of these built environment factors on the AM and PM peak ridership, respectively. The estimation results indicate that population density and employment density have significant but opposite influences on the AM and PM peak
Su, Meiling; Liu, Ling; Chen, Xiyang; Long, Rongxian; Liu, Chenhui
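For reference, the global Moran's I statistic used above to test spatial correlation of station-level ridership can be computed in a few lines; the weights and ridership values below are toy inputs, not the Changsha data.

import numpy as np

def morans_i(y, W):
    # y: ridership per station (n,), W: row-standardized spatial weights (n, n)
    z = y - y.mean()
    return (len(y) / W.sum()) * (z @ W @ z) / (z @ z)

rng = np.random.default_rng(0)
y = rng.poisson(5000, size=20).astype(float)  # stand-in AM-peak boardings
D = np.abs(np.subtract.outer(np.arange(20), np.arange(20)))
W = (D == 1).astype(float)                    # neighbors along a toy metro line
W = W / W.sum(axis=1, keepdims=True)
print("Moran's I:", round(morans_i(y, W), 3))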
Artificial intelligence (AI)-based solutions are slowly making their way into mobile devices and other parts of our lives on a daily basis. By integrating AI into vehicles, many manufacturers are looking forward to developing autonomous cars. However, as of today, no existing autonomous vehicles (AVs) that are consumer ready have reached SAE Level 5 automation. To develop a consumer-ready AV, numerous problems need to be addressed. In this chapter we present a few of these unaddressed issues related to human-machine interaction design. They include interface implementation, speech interaction, emotion regulation, emotion detection, and driver trust. For each of these aspects, we present the subject in detail—including the area’s current state of research and development, its current challenges, and proposed solutions worth exploring
Fang, Chen; Razdan, Rahul; Beiker, Sven; Taleb-Bendiab, Amine
Connected and autonomous vehicles (CAVs) and their productization are a major focus of the automotive and mobility industries as a whole. However, despite significant investments in this technology, CAVs are still at risk of collisions, particularly in unforeseen circumstances or “edge cases.” It is also critical to ensure that redundant environmental data are available to provide additional information for the autonomous driving software stack in case of emergencies. Additionally, vehicle-to-everything (V2X) technologies can be included in discussions on safer autonomous driving design. Recently, there has been a slight increase in interest in the use of responder-to-vehicle (R2V) technology for emergency vehicles, such as ambulances, fire trucks, and police cars. R2V technology allows for the exchange of information between different types of responder vehicles, including CAVs. It can be used in collision avoidance or emergency situations involving CAV responder vehicles. The
Abdul Hamid, Umar Zakir; Roth, Christian; Nickerson, Jeffrey; Lyytinen, Kalle; King, John Leslie
Walking around the SAE WCX conference in Detroit this April and reading through the topic listings for the hundreds of sessions and thousands of presentations, I remembered why I enjoyed this conference so much. I used to attend as a reporter for other outlets, but I haven't been back to WCX since before the pandemic. It was different to walk the halls as editor of this magazine. What happens at WCX - and at dozens of mobility and transportation conferences around the world - is fascinating. I would bet big money that our readers agree. Still, sometimes it's difficult to translate the deeply technical work that makes up our days into something that piques the interest of those who don't spend inordinate amounts of time thinking about the “future of mobility
Blanco, Sebastian
A University of Cambridge team used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90 percent accuracy
Given the rapid advancements in engineering and technology, it is anticipated that connected and automated vehicles (CAVs) will soon become prominent in our daily lives. This development has a vast potential to change the socio-technical perception of public, personal, and freight transportation. The potential benefits to society include reduced driving risks due to human errors, increased mobility, and overall productivity of autonomous vehicle consumers. On the other hand, the potential risks associated with CAV deployment related to technical vulnerabilities are safety and cybersecurity issues that may arise from flawed hardware and software. Cybersecurity and Digital Trust Issues in Connected and Automated Vehicles elaborates on these topics as unsettled cybersecurity and digital trust issues in CAVs and follows with recommendations to fill in the gaps in this evolving field. This report also highlights the importance of establishing robust cybersecurity protocols and fostering
Ahmed, Qadeer; Renganathan, Vishnu
To take countermeasures in advance and prevent accident risks, it is important to explore the causes and evolutionary mechanisms of ship collisions. This article collects 70 ship collision accidents in Zhejiang coastal waters, where 60 cases are used for modeling while 10 cases are used for verification (testing). By analyzing influencing factors (IFs) and causal chains of accidents, a Bayesian network (BN) model with 19 causal nodes and 1 consequential node is constructed. Parameters of the BN model, namely the conditional probability tables (CPTs), are determined by mathematical statistics methods and Bayesian formulas. Regarding each testing case, the BN model’s prediction on probability of occurrence is above 80% (approaching 100% indicates the certainty of occurrence), which verifies the availability of the model. Causal analysis based on the backward reasoning process shows that H (Human error) is the main IF resulting in ship collisions. The causal chain that maximizes
Tian, Yanfei; Qiao, Hui; Hua, Lin; Ai, Wanzheng
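A toy sketch of the Bayesian-network backward reasoning described above, built with pgmpy on three nodes rather than the paper's 19 causal nodes and one consequential node; all probabilities are invented for illustration.

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

bn = BayesianNetwork([("HumanError", "Collision"), ("PoorVisibility", "Collision")])
bn.add_cpds(
    TabularCPD("HumanError", 2, [[0.7], [0.3]]),
    TabularCPD("PoorVisibility", 2, [[0.8], [0.2]]),
    TabularCPD("Collision", 2,
               [[0.99, 0.7, 0.6, 0.2],   # P(no collision | HumanError, PoorVisibility)
                [0.01, 0.3, 0.4, 0.8]],  # P(collision    | HumanError, PoorVisibility)
               evidence=["HumanError", "PoorVisibility"], evidence_card=[2, 2]),
)
# Backward reasoning: given that a collision occurred, how likely is human error?
posterior = VariableElimination(bn).query(["HumanError"], evidence={"Collision": 1})
print(posterior)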
This paper compares the results from three human factors studies conducted in a motion-based simulator in 2008, 2014, and 2023 to highlight trends in driver response to Forward Collision Warning (FCW). The studies were motivated by the goal of developing an effective HMI (Human-Machine Interface) strategy that enables the required driver response to FCW while minimizing the level of annoyance of the feature. All three studies evaluated driver response under baseline-FCW and no-FCW conditions. Additionally, the 2023 study included two modified FCW chime variants: a softer FCW chime and a fading FCW chime. Sixteen (16) participants, balanced for gender and age, were tested for each group in all iterations of the studies. The participants drove in a high-fidelity simulator with a visual distraction task (number reading). After driving 15 minutes in a nighttime rural highway environment, a surprise forward collision threat arose during the distraction task. The response times from the
Nasir, Mansoor; Kurokawa, Ko; Singhal, Neha; Mayer, Ken; Chowanic, Andrea; Osafo Yeboah, Benjamin; Blommer, Michael
Automated driving has become a very promising research direction with many successful deployments and the potential to reduce car accidents caused by human error. Automated driving requires automated path planning and tracking with the ability to avoid collisions as its fundamental requirement. Thus, plenty of research has been performed to achieve safe and time-efficient path planning and to develop reliable collision avoidance algorithms. This paper uses a data-driven approach to solve the abovementioned fundamental requirement. Consequently, the aim of this paper is to develop Deep Reinforcement Learning (DRL) training pipelines which train end-to-end automated driving agents using raw sensor data. The raw sensor data is obtained from the Carla autonomous vehicle simulation environment. The proposed automated driving agent learns how to follow a pre-defined path with reasonable speed automatically. First, the A* path searching algorithm is applied to generate an optimal
Chen, Haochong; Aksun Guvenc, Bilin
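As a reference for the path-generation step mentioned above, here is a minimal A* search over a toy occupancy grid; the paper applies A* on Carla maps, so this grid and heuristic are illustrative assumptions only.

import heapq

def astar(grid, start, goal):
    # grid: 2D list, 0 = free, 1 = blocked; start/goal: (row, col)
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:  # reconstruct the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))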
This paper reports the development of an operation support system for production equipment using image processing with deep learning. Semi-automatic riveters are used to attach small parts to skin panels, and they involve manual positioning followed by automated drilling and fastening. The operator watches a monitor showing the processing area, and two types of failure may arise because of human error. First, the operator should locate the correct position on the skin panel by looking at markers painted thereon but may mistakenly cause the equipment to drill at an incorrect position. Second, the operator should prevent the equipment from fastening if they see chips around a hole after drilling but may overlook the chips; chips remaining around a drilled hole may cause the fastener to be inserted into the hole and fastened at an angle, which can result in the whole panel having to be scrapped. To prevent these operational errors that increase production costs by requiring repair work
Yamanouchi, Shiho; Aoki, Naofumi; Nagano, Yoya; Moritake, Daichi; Sakata, Tatsuhiko; Kato, Kunihito
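A hedged PyTorch sketch of the kind of image classifier such a system could use to flag chips around a drilled hole; the architecture, class labels, and input size are assumptions, not the authors' network.

import torch
import torch.nn as nn

class ChipDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)   # classes: OK hole vs. chips present

    def forward(self, x):                    # x: (N, 3, H, W) crops from the monitor image
        return self.classifier(self.features(x).flatten(1))

model = ChipDetector()
logits = model(torch.randn(4, 3, 128, 128))  # dummy batch of processing-area crops
print(logits.shape)                          # -> torch.Size([4, 2])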
This SAE Information Report relates to a special class of automotive adaptive equipment which consists of modifications to the power steering system provided as original equipment on personally licensed vehicles. These modifications are generically called “modified effort steering” or “reduced effort power steering.” The purpose of the modification is to alter the amount of driver effort required to steer the vehicle. Retention of reliability, ease of use for physically disabled drivers and maintainability are of primary concern. As an Information Report, the numerical values for performance measurements presented in this report and in the test procedure in the appendices, while based upon the best knowledge available at the time, have not been validated
Adaptive Devices Standards Committee
Air-Launched Effects (ALEs) are a concept for operating small, inexpensive, attritable, and highly autonomous unmanned aerial systems that can be tube launched from aircraft. Launch from ground vehicles is planned as well, although Ground-Launched Effects are not yet a requirement. ALEs are envisioned to provide “reconnaissance, surveillance, target acquisition (RSTA), and lethality with an advanced team of manned and unmanned aircraft as part of an ecosystem including Future Attack and Reconnaissance Aircraft (FARA) and ALE.” A primary purpose of ALEs is to extend “tactical and operational reach and lethality of manned assets, allowing them to remain outside of the range of enemy sensors and weapon systems while delivering kinetic and non-kinetic, lethal and non-lethal mission effects against multiple threats, as well as providing battle damage assessment data
Over the past few decades, aircraft automation has progressively increased. Advances in digital computing during the 1980s eliminated the need for onboard flight engineers. Avionics systems, exemplified by FADEC for engine control and Fly-By-Wire, handle lower-level functions, reducing human error. This shift allows pilots to focus on higher-level tasks like navigation and decision-making, enhancing overall safety. Full automation and autonomous flight operations are a logical continuation of this trend. Thanks to aerospace pioneers, most functions for full autonomy are achievable with legacy technologies. Machine learning (ML), especially neural networks (NNs), will enable what Daedalean terms Situational Intelligence: the ability not only to understand and make sense of the current environment and situation but also to anticipate and react to a future situation, including a future problem. By automating tasks traditionally limited to human pilots - like detecting airborne traffic and identifying
This report reviews human factors research on the supervision of multiple unmanned vehicles (UVs) as it affects human integration with Air-Launched Effects (ALE). U.S. Army Combat Capabilities Development Command Analysis Center, Fort Novosel, Alabama Air-Launched Effects (ALEs) are a concept for operating small, inexpensive, attritable, and highly autonomous unmanned aerial systems that can be tube launched from aircraft. Launch from ground vehicles is planned as well, although Ground-Launched Effects are not yet a requirement. ALEs are envisioned to provide “reconnaissance, surveillance, target acquisition (RSTA), and lethality with an advanced team of manned and unmanned aircraft as part of an ecosystem including Future Attack and Reconnaissance Aircraft (FARA) and ALE.” A primary purpose of ALEs is to extend “tactical and operational reach and lethality of manned assets, allowing them to remain outside of the range of enemy sensors and weapon systems while delivering kinetic and
E-mobility and low-noise IC engines have pushed product development teams to focus more on sound quality rather than just on reduced noise levels and legislative needs. Furthermore, qualification of products from a sound quality perspective as part of end-of-line testing requirements is also a major challenge. End-of-line (EOL) NVH testing is a key evaluation criterion for product quality with respect to NVH and warranty. Currently, for subsystem- or component-level evaluation, subjective assessment of the components is done by a person to segregate OK and NOK components. Because a human assessor is involved, the process is very subjective and time consuming. Components with different acceptance criteria will be present, and it is difficult to point out the root cause for NOK components. In this paper, machine learning is implemented for acoustic source detection at end-of-line testing. To improve fault detection, an automated intelligent tool has been developed for subjective to
Shukle, Srinidhi; Iyer, Ganesh; Faizan, Mohammed
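One plausible shape for the OK/NOK acoustic classification described above: coarse spectral band energies fed to a random-forest classifier. The sampling rate, feature set, and synthetic fault signature are assumptions, not the authors' tool.

import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

fs = 51200  # assumed sampling rate

def band_features(signal, n_bands=32):
    f, pxx = welch(signal, fs=fs, nperseg=4096)  # power spectral density
    return np.log10([b.sum() + 1e-12 for b in np.array_split(pxx, n_bands)])

# Synthetic stand-ins: "NOK" runs carry an extra tonal component (e.g., a gear whine)
rng = np.random.default_rng(1)
t = np.arange(fs) / fs
ok = [rng.normal(0, 1, fs) for _ in range(30)]
nok = [rng.normal(0, 1, fs) + 0.5 * np.sin(2 * np.pi * 3150 * t) for _ in range(30)]
X = np.array([band_features(s) for s in ok + nok])
y = np.array([0] * 30 + [1] * 30)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("bands driving NOK calls:", clf.feature_importances_.argsort()[::-1][:5])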
In this study, a novel assessment approach of in-vehicle speech intelligibility is presented using psychometric curves. Speech recognition performance scores were modeled at an individual listener level for a set of speech recognition data previously collected under a variety of in-vehicle listening scenarios. The model coupled an objective metric of binaural speech intelligibility (i.e., the acoustic factors) with a psychometric curve indicating the listener’s speech recognition efficiency (i.e., the listener factors). In separate analyses, two objective metrics were used with one designed to capture spatial release from masking and the other designed to capture binaural loudness. The proposed approach is in contrast to the traditional approach of relying on the speech recognition threshold, the speech level at 50% recognition performance averaged across listeners, as the metric for in-vehicle speech intelligibility. Results from the presented analyses suggest the importance of
Samardzic, Nikolina; Lavandier, Mathieu; Shen, Yi
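A small sketch of fitting a per-listener psychometric curve of the kind described above, using SciPy; the objective-metric values and recognition scores are synthetic, and the two-parameter logistic form is an assumption.

import numpy as np
from scipy.optimize import curve_fit

def psychometric(metric, midpoint, slope):
    # probability of correct recognition vs. an objective intelligibility metric
    return 1.0 / (1.0 + np.exp(-slope * (metric - midpoint)))

metric = np.array([-6, -4, -2, 0, 2, 4, 6], dtype=float)      # e.g., predicted benefit, dB
score = np.array([0.05, 0.15, 0.40, 0.55, 0.80, 0.93, 0.98])  # one listener's proportion correct

(midpoint, slope), _ = curve_fit(psychometric, metric, score, p0=[0.0, 1.0])
print(f"listener curve: midpoint={midpoint:.2f}, slope={slope:.2f}")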
This standard covers Manpower and Personnel (M&P) processes throughout planning, design, development, test, production, use, and disposal of a system. Depending on contract phase and/or complexity of the program, tailoring can be applied. The scope of this standard includes Prime and Subcontractor M&P activities; it does not include Government M&P activities. The primary goals of a contractor M&P program typically include: Ensuring that the system design complies with the latest customer Manpower estimates (numbers and mix of personnel, plus availability) and that discrepancies are reported to management and the customer. Ensuring that the system design is regularly compared to the latest customer personnel estimates (capabilities and limitations) and that discrepancies are reported to management and the customer. Identifying, coordinating, tracking, and resolving M&P risks and issues and ensuring that they are: ○ Reflected in the contractor proposal, budgets, and plans. ○ Raised at
G-45 Human Systems Integration
This SAE Standard identifies contractor activities for planning and conducting HSI as part of procurement activities on Department of Defense (DoD) system acquisition programs. This standard covers HSI processes throughout system design, development, test, production, use, and disposal. Depending on contract phase, type of the program and/or complexity of the program, tailoring of this standard should be applied. Appendix A lists the requirements (“shall” statements) in this standard along with unique numbers to facilitate tailoring. In addition, Appendix D provides tailoring guidance to better match requirements in this standard to the DoD’s Adaptive Acquisition Framework pathways. The scope of this standard includes prime and subcontractor HSI activities; it does not include Government HSI activities, which are covered by DoD and service-level regulations and guidelines. HSI programs should use the latest version of standards and handbooks listed below, unless a particular revision is
G-45 Human Systems Integration
In India, agriculture is a vital part of the country’s economy, and much depends on it. Removing the root vegetables and crops left in the soil after harvest takes the farmer considerable time and effort, and even after manual removal the leftover produce cannot be fully recovered. Due to human error, around 20-30% of crops and root crops are left in the field. Unfortunately, poor farmers cannot afford the equipment needed to remove these crops. Root crops are generally harvested by a root crop harvester using diggers mounted under the middle of the chassis, which operators can only observe intermittently, or with camera systems that are costly, hard to maintain, and difficult for farmers to operate. Hence, the aim is to design a plough machine to recover the leftover root crops in the field as well as to loosen/break up the
Deepan Kumar, Sadhasivam; M, Boopathi; Sridhar Raj, S; Karthick, K N; P, Vivek Kumar; R, Balamurugan; S, Iniya Mounika
This SAE Recommended Practice defines key terms used in the description and analysis of video-based driver eye glance behavior, as well as guidance in the analysis of that data. The information provided in this practice is intended to provide consistency for terms, definitions, and analysis techniques. This practice is to be used in laboratory, driving simulator, and on-road evaluations of how people drive, with particular emphasis on evaluating Driver Vehicle Interfaces (DVIs; e.g., in-vehicle multimedia systems, controls and displays). In terms of how such data are reduced, this version only concerns manual video-based techniques. However, even in its current form, the practice should be useful for describing the performance of automated sensors (eye trackers) and automated reduction (computer vision
This SAE Standard describes head position contours and procedures for locating the contours in a vehicle. Head position contours are useful in establishing accommodation requirements for head space and are required for several measures defined in SAE J1100. Separate contours are defined depending on occupant seat location and the desired percentage (95 and 99) of occupant accommodation. This document is primarily focused on application to Class A vehicles (see SAE J1100), which include most personal-use vehicles (passenger cars, sport utility vehicles, pick-up trucks). A procedure for use in Class B vehicles can be found in Appendix B
The purpose of this document is to establish air-conditioning design guidelines that will apply to most systems rather than the specific design of any particular system. Operating conditions and characteristics of the equipment will determine the design of any successful system; since these characteristics and conditions vary greatly from one application to another, the designer shall determine the goals expected to be reached under the conditions encountered. Determining the capacity of such items as blowers, condenser fans, condenser coils, evaporator coils, filters, compressors, etc., will require adherence to several guidelines, some of which are outlined in the following paragraphs
HFTC6, Operator Accommodation
Autonomous driving systems (ADS) have been widely tested in real-world environments with operators who must monitor and intervene due to remaining technical challenges. However, intervention methods that require operators to take over control of the vehicle involve many drawbacks related to human performance. ADS consist of recognition, decision, and control modules. The latter two phases are dependent on the recognition phase, which still struggles with tasks involving the prediction of human behavior, such as pedestrian risk prediction. As an alternative to full automation of the recognition task, cooperative recognition approaches utilize the human operator to assist the automated system in performing challenging recognition tasks, using a recognition assistance interface to realize human-machine cooperation. In this study, we propose a recognition assistance interface for cooperative recognition in order to achieve safer and more efficient driving through improved human-automation
Kuribayashi, Atsushi; Takeuchi, Eijiro; Carballo, Alexander; Ishiguro, Yoshio; Takeda, Kazuya
The analysis of lipid biomarkers has gained increasing importance within environmental and archaeological fields because biomarkers are representative of plant and animal sources. Proven gold standard laboratory techniques for lipid biomarker extraction are laborious, with many opportunities for human error. As a solution, NASA Ames Research Center has developed a novel technology that provides an autonomous, miniaturized fluidic system for lipid analysis. The technology, in a single instrument, can accept an unprocessed soil, rock, or ice sample, comminute the sample, extract lipids via sonication and blending, filter out mineral residue, concentrate the analyte, and deliver the aliquot to downstream analytical instruments for molecular characterization, without requiring intervention from a human operator
New forms of air transport are expected to arrive in the next decade: unmanned multi-rotor drones are under development and are expected to be used not only for observation purposes but also for postal package delivery. The impact of close-flying drones near communities is still not fully understood. One of the main concerns for public acceptability is noise impact, as it may negatively affect human health and well-being. Prior research shows that non-acoustical factors play an important role in the perception of noise. A laboratory study was conducted to evaluate different subjective factors and examine their influence on noise annoyance: education on useful applications of drones (positive framing), rural versus urban environments, different visually modelled sizes of drones, and the visual noticeability of drones. Participants of the study evaluated scripted drone events using a Virtual Reality headset with a sound simulation system. Results show that drones flying in a rural
Aalmoes, Roalt; de Bruijn, Bram; Sieben, Naomi
The sound produced by Unmanned Aerial Systems (known as UAS or Drones) is often considered to be one of the main barriers (alongside privacy and safety concerns) preventing the widespread use of these vehicles in environments where they may be in close proximity to the general public. To better understand the potential environmental noise impact of commercial UAS operations, work undertaken by the University of Salford has focused on two key areas. Firstly, how to characterise and measure the sound produced by UAS during outdoor flight conditions and secondly, better understanding of the dose response of UAS noise when the listener is in either an indoor or outdoor environment. The paper describes a field measurement campaign undertaken to measure several UAS performing flyovers at different speeds and take-off weights. The methodology of the measurement campaign was strongly influenced by emerging guidance and has been used to calculate the directivity of sound propagation which may
Green, Nathan; Ramos-Romero, Carlos; Torija Martinez, Antonio
The development of the autonomous applications for dismounted Soldier systems is paramount to defeating our adversaries, such as China and Russia, in future combat. A comprehensive literature review is necessary to assist in defining the best path forward. Army Research Laboratory, Aberdeen Proving Ground, MD The development of the artificial intelligence/machine learning (AI/ML) applications for dismounted Soldier systems is paramount to defeating our adversaries, such as China and Russia, in future combat. A comprehensive AI/ML literature review is a first step toward defining what exists and what can be applied and researched for our nation's defense in future warfare. There is a clear need to use the latest AI/ML technologies in threat identification and elimination without U.S. lives lost. A comprehensive literature review is necessary to assist in defining the best path forward. In theory, networked unmanned aerial vehicles (UAVs) using onboard cameras may assist in successful
Sequential turn signals are becoming more common, partly because of the availability of the detailed temporal and spatial control of light that is allowed by LED sources. They seem to be popular with drivers, and some human factors considerations suggest that they may more effectively convey information about intended maneuvers. This research was designed to investigate possible benefits by presenting experimental participants with a variety of sequential and static turn signals under realistic field conditions. The experimental tasks were based on possible encounters at four-way intersections. Passenger cars were statically positioned to represent such encounters. Participants were seated in one of the vehicles and were asked to make simple but meaningful judgments about intended turns by the other vehicles. Visual conditions were realistic in terms of the viewing geometry and photometry. Experiments were conducted in the day and at night. Three experiments were performed. In two of
Flannagan, Michael; Waragaya, Takeshi; Kita, Yasushi
Electro-hydraulic actuators, a type of soft actuator, can provide soft-touch vibrations due to their structural characteristics, but some problems need to be solved before they can be applied to vehicles. That is, it is necessary to increase excitation force, expand the frequency band, lower the driving voltage, and increase durability. This research aims to design a new type of electro-hydraulic actuator and resolve its performance problems in order to develop a product that generates emotional vibration in vehicles. First, a new mechanism and design of an electro-hydraulic actuator called a PVC-gel film actuator are proposed. This actuator uses a PVC-gel film covering a dielectric liquid and uses carbon nanotube as the cathode material. In addition, a method of manufacturing an actuator with improved performance has been proposed by creating and testing prototypes with different sizes and material properties. It has been verified that the proposed actuator improves excitation force, frequency
Chang, Kyoung-Jin; Kyung, Ki-Uk; Kim, Hyunwoo; Hong, Sangjin; Park, Dong Chul
The driver monitoring system (DMS) plays an essential role in reducing traffic accidents caused by human errors due to driver distraction and fatigue. The vision-based DMS has been the most widely used because of its advantages of non-contact operation and high recognition accuracy. However, the traditional RGB camera-based DMS has poor recognition accuracy under complex lighting conditions, while the IR-based DMS has a high cost. In order to improve the recognition accuracy of conventional RGB camera-based DMS under complicated illumination conditions, this paper proposes a lightweight low-illumination image enhancement network inspired by the Retinex theory. The lightweight aspect of the network structure is realized by introducing a pixel-wise adjustment function. In addition, the optimization bottleneck problem is solved by introducing a shortcut mechanism. Model performance comparison test results demonstrate that the Structural Similarity Index Measure of the proposed model is 7.04
Wu, Zhanqian
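A hedged PyTorch sketch of a lightweight enhancement network with a pixel-wise adjustment map and a shortcut connection, loosely in the spirit of the description above; the layer sizes and adjustment curve are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class PixelwiseEnhancer(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        alpha = self.body(x)              # per-pixel adjustment map in [-1, 1]
        return x + alpha * x * (1.0 - x)  # shortcut: curve-adjust around the input

low_light = torch.rand(1, 3, 256, 256)       # dummy RGB frame from an in-cabin camera
print(PixelwiseEnhancer()(low_light).shape)  # -> torch.Size([1, 3, 256, 256])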
The Lane Change Task (LCT) provides a simple, scorable simulation of driving, and serves as a primary task in studies of driver distraction. It is widely accepted, but somewhat limited in functionality, a problem this project partially overcomes. In the Lane Change Task, subjects drive along a road with 3 lanes in the same direction. Periodically, signs appear, indicating in which of the 3 lanes the subject should drive, which changes from sign to sign. The software is plug-and-play for a current Windows computer with a Logitech steering/pedal assembly, even though the software was written 18 years ago. For each timestamp in a trial, the software records the steering wheel angle, speed, and x and y coordinates of the subject. A limitation of the LCT is that few characteristics of this useful software can be readily modified as only the executable code is available (on the ISO 26022 website), not the source code. Therefore, a combination of vJoy, FreePIE, and Python scripts was used to
Zheng, Hongxiao; Hu, Fengyuan; Green, Paul
During 2020, 38,824 people in the United States and 1.35 million worldwide were killed in vehicle crashes. These statistics are tragic and indicative of an on-going public health crisis centered on automobiles and other ground transportation solutions. Although the long-term US vehicle fatality rate is slowly declining, it continues to be elevated compared to European countries. The introduction of vehicle safety systems and re-designed roadways has improved survivability and the driving environment, but driver behavior has not been fully addressed. A non-confrontational approach is the evaluation of driver behavior using onboard sensors and computer algorithms to determine the vehicle’s “mistrust” level of the given operator and the safety of the individual operating the vehicle. This is an inversion of the classic human-machine trust paradigm in which the human evaluates whether the machine can safely operate in an automated fashion. The impetus of the research is the recognition
Wang, Chengshi; Wang, Yue; Alexander, Kim; Wagner, John
Simulation plays a central role in almost every aspect of automotive product development. And as this month's cover story explains, ‘sim’ is extending its reach in automated-driving R&D, bringing efficiency to human factors and critical but tedious component-verification work. Some argue that most AV development should - and thanks to contemporary sim technology, can - be conducted in the virtual world. It's hard for me to imagine getting to consumer-ready SAE Level 4 and 5 driving automation without eventual heavy reliance on simulation-based validation. That notion comes hard against what's played out with Tesla, however. The EV leader effectively has leveraged its customers' on-the-road experiences to incrementally “harden” its automated-driving software. It's not an entirely off-the-ranch idea; many AV developers have relied on some sort of crowdsourcing data acquisition to help their systems learn. The difference, however, is that Tesla consigned this role - and its genuine risks
Visnic, Bill
Creating technologies that amplify human experience and endeavor to help solve society's biggest challenges is the mission of the Toyota Research Institute. Gill Pratt has a gift for explaining complex topics in simple terms. And as Toyota Motor Co.'s chief scientist and CEO of the Toyota Research Institute (TRI), he also speaks frankly about the promises and potential pitfalls of new technologies. Addressing a rare group of visitors - tech reporters and analysts, including SAE Media - recently at TRI's Silicon Valley headquarters, Pratt noted the heightened public discourse around artificial intelligence, a core area of focus for many of TRI's 200 scientists and engineers. “Everybody is worried ChatGPT is going to be writing term papers for college students,” Pratt said half-jokingly about the controversial “chat bot” introduced in late 2022 by OpenAI. “But even our humor reflects the anxiety we have about this technology and its dual nature of good and evil.” Society, he observed
Brooke, Lindsay
1 – 50 of 1082