Browse Topic: Human machine interface (HMI)

Items (882)
ABSTRACT This paper describes work to develop a hands-free, heads-up control system for Unmanned Ground Vehicles (UGVs) under an SBIR Phase I contract. Industry is building upon its pioneering work in creating a speech recognition system that works well in noisy environments by developing a robust keyword-spotting algorithm that enables UGV Operators to give speech commands to the UGV completely hands-free. Industry will also research and develop two sub-vocal control modes: whisper speech and teeth clicks. Industry is also developing a system that will enable the Operator to drive a UGV, with a high level of fidelity, to a location selected by the Operator using hands-free commands in conjunction with image segmentation and video overlays. This Phase I effort will culminate in a proof-of-concept demonstration of a hands-free, heads-up system, implemented on a small UGV, that will enable the Operator to control the system with a high level of fidelity.
Brown, Jonathan; Gray, Jeremy P.; Blanco, Chris; Juneja, Amit; Alberts, Joel; Reinerman, Lauren
ABSTRACT Research is currently underway to improve the controllability of high degree-of-freedom manipulators under a Phase II SBIR contract sponsored by the U.S. Army Tank Automotive Research, Development, and Engineering Center (TARDEC). As part of this program, the authors have created new control methods and adapted tool-changing technology onto a dexterous arm to examine the controllability of various manipulator functions. In this paper, the authors describe the work completed under this program and present its findings in terms of how these technologies can be used to extend the capabilities of existing and newly developed robotic manipulators.
Peters, Douglas; Gunnett, Keith; Gray, Jeremy
ABSTRACT The concept of handheld control systems with modular and/or integrated displays provides the flexibility of operator use that supports the needs of today’s warfighters. A human machine interface control system that easily integrates with vehicle systems through a common architecture and can transition to support dismounted operations provides warfighters with functional mobility they do not have today. With Size, Weight and Power, along with reliability, maintainability, and availability, driving the needs of most platforms for both upgrade and development, moving to convertible (mounted to handheld) and transferable control systems supports these needs as well as the need for the warfighter to maintain continuous control and command connectivity in uncertain mission conditions.
Roy, Monica V.
ABSTRACT As the number of robotic systems on the battlefield increases, the number of operators grows with it, leading to a significant cost burden. Autonomous robots are already capable of task execution with limited supervision, and their capabilities continue to advance rapidly. Because these autonomous systems can assist and augment human soldiers, commanders need advanced methods for assigning tasks to the systems, monitoring their status, and using them to achieve desirable results. Mission Command for Autonomous Systems (MCAS) aims to enable natural interaction between commanders and their autonomous assets without requiring dedicated operators or significantly increasing the commanders’ cognitive burden. This paper discusses the approach, design, and challenges of MCAS and presents opportunities for future collaboration with industry and academia.
Martin, Jeremy; Korfiatis, Peter; Silva, Udam
ABSTRACT This presentation will review the ongoing lessons learned from a joint Industry/DoD collaborative program that has explored this area over the past five years. The discussion will review the effectiveness of integrating multiple new technologies (combined with select COTS elements) to provide a complete solution designed to reduce spares stockpiles, maximize available manpower, reduce maintenance downtime, and reduce vehicle lifecycle costs. A number of new and emerging technology case studies involving diagnostic sensors (such as battery health monitors), knowledge management data accessibility, remote support-based telematics, secure communication, condition-based software algorithms, browser-based user interfaces, and web portal data delivery will be presented.
Fortson, Rick; Johnson, Ken
ABSTRACT The use and operation of unmanned systems are becoming more commonplace, and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data driven, and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best-practice human factors engineering, in the form of human machine interfaces and user-centered design for robotic/unmanned control systems integrated early in platform concept and design phases, can significantly impact platform …
MacDonald, Brian
ABSTRACT Recent advances in neuroscience, signal processing, machine learning, and related technologies have made it possible to reliably detect brain signatures specific to visual target recognition in real time. Utilizing these technologies together has shown an increase in the speed and accuracy of visual target identification over traditional visual scanning techniques. Images containing a target of interest elicit a unique neural signature in the brain (e.g., the P300 event-related potential) when detected by the human observer. Computer vision exploits the P300-based signal to identify specific features in the target image that differ from other, non-target images. Coupling the brain and computer in this way, along with using rapid serial visual presentation (RSVP) of the images, enables large image datasets to be accurately interrogated in a short amount of time. Together, these technologies allow for potential military applications ranging from image triaging for the image analyst …
Ries, Anthony J.; Lance, Brent; Sajda, Paul
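As a rough illustration of the classification step such systems share, the sketch below trains a linear classifier to separate target from non-target EEG epochs. The synthetic data, epoch shapes, and choice of shrinkage LDA are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: P300-style target detection from pre-epoched EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 128

# Synthetic epochs: target trials get a small positive deflection ~300 ms in.
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)      # 1 = target image shown
X[y == 1, :, 70:90] += 0.5                 # crude P300-like bump (assumption)

# Flatten channel-by-time features and score a shrinkage-LDA classifier.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
print("Cross-validated target-detection accuracy:", scores.mean())
```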
Crew station design in the physical realm is complex and expensive due to the cost of fabrication and the time required to reconfigure the necessary hardware to conduct human factors and space-claim optimization studies. However, recent advances in Virtual Reality (VR) and hand-tracking technologies have enabled a paradigm shift in the process. The Ground Vehicle Systems Center has developed an innovative approach using VR technologies to enable a trade-space exploration capability: crews can place touchscreens and switch panels as desired, lock them into place, and then perform a fully recorded simulation of operating the vehicle through virtual terrain, maneuvering through firing points and engaging moving and static targets during virtual night and day missions with simulated sensor effects for infrared and night vision. Human factors are explored and studied using hand tracking, which enables operators to check reach by interacting with virtual components.
Agusti, Rachel S.; Brown, David; Kovacin, Kyle; Smith, Aaron; Hackenbruch, Rachel N.; Hess, David; Simmons, Caleb B.; Stewart, Colin
Today’s intelligent robots can accurately recognize many objects through vision and touch. Tactile information obtained through sensors, along with machine learning algorithms, enables robots to identify objects they have previously handled.
Semi-automated computational design methods involving physics-based simulation, optimization, machine learning, and generative artificial intelligence (AI) already allow greatly enhanced performance alongside reduced cost in both design and manufacturing. As these methods progress, developments in user interfaces, AI integration, and workflow automation will increasingly reduce the human input required to achieve this. With this, engineering teams must change their mindset from designing products to specifying requirements, focusing their efforts on testing and analysis to provide accurate specifications. Generative Design in Aerospace and Automotive Structures discusses generative design in its broadest sense, including the challenges and recommendations regarding multi-stage optimizations.
Muelaner, Jody Emlyn
Homologation is an important process in vehicle development, and aerodynamics is a main data contributor. The process is heavily interconnected: production planning defines the available assemblies; construction defines their parts and features; sales defines the assemblies offered in different markets, while legislation defines the rules applicable to homologation. Control engineers define the behavior of active, aerodynamically relevant components. Wind tunnels are the main test tool for homologation, accompanied by surface-area measurement systems. Mechanics support these test operations. Prototype management provides test vehicles, while parts come from various production and prototyping sources and are stored and commissioned by logistics. Several phases of this complex process share the same context: production timelines for assemblies and parts for each chassis-engine package define which drag coefficients or drag-coefficient contributions shall be determined. Absolute and …
Jacob, Jan D.
Using electrical impedance tomography (EIT), researchers have developed a system using a flexible tactile sensor for the objective evaluation of fine finger movements. Demonstrating high accuracy in classifying diverse pinching motions, with discrimination rates surpassing 90 percent, this innovation holds potential in cognitive development and automated medical research.
The lane departure warning (LDW) system is a warning system that alerts drivers if they are drifting (or have drifted) out of their lane or from the roadway. This warning system is designed to reduce the likelihood of crashes resulting from unintentional lane departures (e.g., run-off-road and side collisions). The system will not take control of the vehicle; it will only let the driver know that he/she needs to steer back into the lane. An LDW is not a lane-change monitor, which addresses intentional lane changes, or a blind spot monitoring system, which warns of other vehicles in adjacent lanes. This informational report applies to original equipment manufacturer and aftermarket LDW systems for light-duty vehicles (gross vehicle weight rating of no more than 8500 pounds) on relatively straight roads with a radius of curvature of 500 m or more, under good weather conditions.
Advanced Driver Assistance Systems (ADAS) Committee
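To make the warning logic concrete, here is a minimal sketch of one common LDW trigger, time-to-line-crossing (TLC). The signal names, half-lane width, and 0.5 s threshold are illustrative assumptions, not values from the report.

```python
# Hedged sketch of a TLC-based lane departure warning trigger.
def tlc_seconds(lateral_offset_m: float, lateral_rate_mps: float,
                half_lane_m: float = 1.8) -> float:
    """Time until the vehicle reaches the lane boundary.

    lateral_offset_m: signed offset from lane center (+ toward boundary)
    lateral_rate_mps: signed lateral velocity (+ toward boundary)
    """
    if lateral_rate_mps <= 0.0:
        return float("inf")        # holding lane or correcting toward center
    return (half_lane_m - lateral_offset_m) / lateral_rate_mps

def ldw_warning(lateral_offset_m, lateral_rate_mps, tlc_threshold_s=0.5):
    return tlc_seconds(lateral_offset_m, lateral_rate_mps) < tlc_threshold_s

print(ldw_warning(1.4, 0.9))    # fast drift near the boundary -> True
print(ldw_warning(0.0, -0.2))   # correcting toward lane center -> False
```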
The high-frequency whining noise produced by motors in modern electric vehicles can cause significant passenger annoyance. This noise is even more noticeable because of the quiet nature of electric vehicles, which lack background noise sources to mask it. To improve motor-induced noise, it is essential to optimize various motor design parameters; however, this task requires expert knowledge and a considerable time investment. In this project, artificial intelligence was applied to optimize the NVH performance of motors during the design phase. First, three benchmark motor types were modelled using the Motor-CAD CAE tool. Machine learning models were trained using DoE methods to simulate batch runs of CAE inputs and outputs. Applying AI, a CatBoost-based regression model was developed to estimate motor performance, including NVH and torque, based on motor design parameters, achieving impressive R…
Noh, Kyoungjin; Lee, Dongchul; Jung, Insoo; Tate, Simon; Mullineux, James; Mohd Azmin, Farraen
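A minimal sketch of the CatBoost regression step, assuming a tabular DoE dataset of design parameters and a scalar NVH response; the feature set and synthetic target below are illustrative stand-ins, not the paper's CAE outputs.

```python
# Sketch: surrogate model from motor design parameters to an NVH response.
import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
# Hypothetical design parameters, e.g., slot opening, skew, magnet width, airgap.
X = rng.uniform(size=(n, 4))
y = 60.0 + 10.0 * X[:, 0] - 8.0 * X[:, 1] ** 2 + rng.normal(0.0, 0.5, n)  # dB proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = CatBoostRegressor(iterations=300, depth=4, verbose=False)
model.fit(X_tr, y_tr)
print("R^2 on held-out DoE points:", model.score(X_te, y_te))
```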
Computer modelling, virtual prototyping, and simulation are widely used in the automotive industry to optimize the development process. While the use of CAE is widespread, on its own it lacks the ability to provide observable acoustics or tactile vibrations for decision makers to assess, and hence to optimize, the customer experience. Subjective assessment using Driver-in-the-Loop simulators to experience data has been shown to improve the quality of vehicles and reduce development time and uncertainty. Efficient development processes require a seamless interface from detailed CAE simulation to subjective evaluations suitable for high-level decision makers. In the context of perceived vehicle vibration, the need for a bridge between complex CAE data and realistic subjective evaluation of tactile response is most compelling. A suite of VI-grade noise and vibration simulators has been developed to meet this challenge. In the process of developing these solutions, VI-grade has identified the need …
Franks, Graham; Tcherniak, Dmitri; Kennings, Paul; Allman-Ward, Mark; Kuhmann, Marvin
iMotions employs neuroscience and AI-powered analysis tools to enhance the tracking, assessment, and design of human-machine interfaces inside vehicles. The advancement of vehicles with enhanced safety and infotainment features has made evaluating human-machine interfaces (HMIs) in modern commercial and industrial vehicles crucial. Drivers face a steep learning curve due to the complexities of these new technologies. Additionally, interaction with advanced driver-assistance systems (ADAS) increases concerns about cognitive impact and driver distraction in both passenger and commercial vehicles. As vehicles incorporate more automation, many clients are turning to biosensor technology to monitor drivers’ attention and the effects of various systems and interfaces. Utilizing neuroscientific principles and AI, data from eye tracking, facial expressions, and heart rate are informing more effective system and interface design strategies. This approach ensures that automation advancements …
Nguyen, Nam
Automatically controlling equipment and providing users with visualization of the operation are two distinct but closely related functions. Specialized microcontrollers or commercial off-the-shelf (COTS) programmable logic controllers (PLCs) are the workhorses for implementing control, while a variety of dedicated or PC-based human-machine interface (HMI) options are available.
Game-like navigation visuals. Conversational-style voice commands. Contactless biometric sensing. A tidal wave of software code and sensing technologies is being prepped to alter in-vehicle activities. Two supplier companies, TomTom and Mitsubishi Electric Automotive America (MEAA), recently presented their concept cockpit demonstrators to media at TomTom’s North American corporate offices in Farmington Hills, Michigan. A few highlights …
Buchholz, Kami
In a new study, engineers from Korea and the United States have developed a wearable, stretchy patch that could help to bridge the divide between people and machines, with benefits for the health of humans around the world.
Speech enhancement extracts clean speech from noise interference, enhancing its perceptual quality and intelligibility. This technology has significant applications in in-car intelligent voice interaction. However, the complex noise environment inside the vehicle, especially human voice interference, poses great challenges to in-vehicle speech interaction systems. In this paper, we propose a speech enhancement method based on target speech features, which can better extract clean speech and improve the perceptual quality and intelligibility of enhanced speech in environments with human noise interference. To this end, we propose a design method for the middle layer of the U-Net architecture based on Long Short-Term Memory (LSTM), which can automatically extract target speech features that are highly distinguishable from the noise signal and human voice interference in noisy speech, realizing targeted extraction of clean speech. Then …
Pei, Kaikun; Zhang, Lijun; Meng, Dejian; He, Yinzhi
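The following PyTorch sketch shows the general shape of such an architecture: a small encoder/decoder over magnitude-spectrogram frames with an LSTM middle layer and a skip connection producing a time-frequency mask. All layer sizes are assumptions for illustration, not the paper's design.

```python
# Sketch: encoder -> LSTM bottleneck -> decoder predicting a mask in [0, 1].
import torch
import torch.nn as nn

class LSTMUNet(nn.Module):
    def __init__(self, n_freq=257, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(n_freq, hidden, 3, padding=1),
                                 nn.ReLU())
        self.bottleneck = nn.LSTM(hidden, hidden, batch_first=True)
        self.dec = nn.Sequential(nn.Conv1d(2 * hidden, n_freq, 3, padding=1),
                                 nn.Sigmoid())

    def forward(self, spec):                        # (B, n_freq, T)
        e = self.enc(spec)                          # (B, hidden, T)
        h, _ = self.bottleneck(e.transpose(1, 2))   # (B, T, hidden)
        h = h.transpose(1, 2)                       # (B, hidden, T)
        skip = torch.cat([e, h], dim=1)             # U-Net-style skip path
        return spec * self.dec(skip)                # masked spectrogram

x = torch.randn(2, 257, 100).abs()                  # batch of noisy spectra
print(LSTMUNet()(x).shape)                          # torch.Size([2, 257, 100])
```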
Temporal light modulation (TLM), colloquially known as “flicker,” is an issue in almost all lighting applications due to the widespread adoption of LED and OLED sources and their driving electronics. A subset of LED/OLED lighting systems delivers problematic TLM, often in specific types of residential, commercial, outdoor, and vehicular lighting. Dashboard displays, touchscreens, marker lights, taillights, daytime running lights (DRLs), interior lighting, etc., frequently use pulse width modulation (PWM) circuits to achieve different luminances for different times of day and users’ visual adaptation levels. The resulting TLM waveforms and viewing conditions can result in distraction and disorientation, nausea, cognitive effects, and serious health consequences in some populations, occurring whether or not the driver, passenger, or pedestrian consciously “sees” the flicker. There are three visual responses to TLM: direct flicker, the stroboscopic effect, and the phantom array effect (also …
Miller, Naomi; Irvin, Lia
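Two standard TLM metrics, modulation depth (percent flicker) and flicker index, can be computed directly from a light waveform. The sketch below evaluates them for an ideal PWM dimming waveform; the 400 Hz frequency and 30% duty cycle are arbitrary illustrative choices.

```python
# Worked example: TLM metrics for one period of an ideal PWM waveform.
import numpy as np

f_pwm, duty = 400.0, 0.30                   # PWM frequency (Hz), duty cycle
t = np.linspace(0.0, 1.0 / f_pwm, 1000, endpoint=False)
light = (t < duty / f_pwm).astype(float)    # one period: on, then off

# Modulation depth (percent flicker): (max - min) / (max + min).
mod_depth = (light.max() - light.min()) / (light.max() + light.min())

# Flicker index: area above the mean level divided by total area
# (means suffice here because the sampling is uniform).
mean = light.mean()
flicker_index = np.clip(light - mean, 0.0, None).mean() / mean

print(f"Modulation depth: {mod_depth:.0%}")      # 100% for full-off PWM
print(f"Flicker index:    {flicker_index:.2f}")  # 0.70 = 1 - duty cycle
```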
Level 2 (L2) partial driving automation systems are rapidly emerging in the marketplace. L2 systems provide sustained automatic longitudinal and lateral vehicle motion control, reducing the need for drivers to continuously brake, accelerate, and steer. Drivers, however, remain critically responsible for safely detecting and responding to objects and events. This paper summarizes variations of L2 systems (hands-on and/or hands-free) and considers human drivers’ roles both when using L2 systems and when designing Human-Machine Interfaces (HMIs), including Driver Monitoring Systems (DMSs). In addition, approaches for examining potential unintended consequences of L2 usage and for evaluating L2 HMIs, including field safety effect examination, are reviewed. The aim of this paper is to guide L2 system HMI development and L2 system evaluations, especially in the field, to support safe L2 deployment, promote L2 system improvements, and ensure well-informed L2 policy decision-making.
Glaser, Yi G.; Kiefer, Raymond; Glaser, Daniel; Landry, Steven; Owen, Susan; Llaneras, Robert; LeBlanc, David; Leslie, Andrew; Flannagan, Carol
This paper compares the results of three human factors studies conducted in a motion-based simulator in 2008, 2014, and 2023 to highlight trends in drivers’ responses to Forward Collision Warning (FCW). The studies were motivated by the goal of developing an effective HMI (Human-Machine Interface) strategy that elicits the required driver response to FCW while minimizing the annoyance of the feature. All three studies evaluated driver response under baseline-FCW and no-FCW conditions. Additionally, the 2023 study included two modified FCW chime variants: a softer FCW chime and a fading FCW chime. Sixteen (16) participants, balanced for gender and age, were tested in each group in all iterations of the studies. The participants drove in a high-fidelity simulator with a visual distraction task (number reading). After driving 15 minutes in a nighttime rural highway environment, a surprise forward collision threat arose during the distraction task. The response times from the …
Nasir, Mansoor; Kurokawa, Ko; Singhal, Neha; Mayer, Ken; Chowanic, Andrea; Osafo Yeboah, Benjamin; Blommer, Michael
The purpose of this document is to provide guidance for the implementation of DVIs for momentary intervention-type LKA systems, as defined by ISO 11270. LKA systems provide driver support for safe lane keeping operations via momentary interventions. LKA systems are SAE Level 0, according to SAE J3016. LKA systems do not automate any part of the dynamic driving task (DDT) on a sustained basis and are not classified as an integral component of a partial or conditional driving automation system per SAE J3016. The design intent (i.e., purpose) of an LKA system is to address crash scenarios resulting from inadvertent lane or road departures. Drivers can override an LKA system intervention at any time. LKA systems do not guarantee prevention of lane drifts or related crashes. Road and driving environment factors (e.g., lane line delineation, inclement weather, road curvature, road surface) as well as vehicle factors (e.g., speed, lateral acceleration, equipment condition) may affect the …
Advanced Driver Assistance Systems (ADAS) Committee
ChatGPT has entered the car. At CES 2024, Volkswagen and technology partner Cerence introduced an update to IDA, VW’s in-car voice assistant, so it can now use ChatGPT to expand what’s possible using voice commands in vehicles. VW said the ChatGPT bot will be available in Europe in current MEB and MQB evo models from VW Group brands that currently use the IDA voice assistant. That includes some members of the ID family (the ID.7, ID.4, ID.5, and ID.3) as well as the new Tiguan, Passat, and Golf models. VW brands Seat, Škoda, Cupra, and VW Commercial Vehicles also will get IDA integration. VW hopes to bring IDA to other markets, including North America, but did not make any timing announcements.
Blanco, Sebastian
This paper discusses the quantification of alertness for a vision-based Driver Drowsiness and Alertness Warning System (DDAWS). The quantification of alertness, per the Karolinska Sleepiness Scale (KSS), takes as its basic input the recognition of the driver’s facial features and behaviour in a standard manner. Although the quantification of alertness is inconclusive with respect to the true value, the paper emphasizes a systematic validation process covering various scenarios in order to evaluate the system’s functionality as close to reality as possible. The methodology depends on the definition of threshold values for blinks and head pose. The facial features are defined by the number of blinks, classified into heavy and light blinks, and head pose in the (x, y, z) directions. The Human Machine Interface (HMI) warnings are selected in the form of visual and acoustic signals. The frequency, amplitude, and illumination of HMI alerts are specified. The protocols and trigger functions are defined, and KSS …
Balasubrahmanyan, Chappagadda; Akbar Badusha, A; Viswanatham, Satish
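A minimal sketch of the blink-based scoring idea: eyelid closures are split into light and heavy blinks by duration, and the heavy-blink rate is mapped onto a KSS-like scale. All thresholds here are illustrative assumptions, not the validated values from the paper.

```python
# Hedged sketch of blink-based KSS-style alertness scoring.
HEAVY_BLINK_S = 0.4     # closure duration (s) separating heavy from light

def kss_estimate(blink_durations_s, window_s=60.0):
    """Map the heavy-blink rate in an observation window to a KSS-like score."""
    heavy = sum(1 for d in blink_durations_s if d >= HEAVY_BLINK_S)
    heavy_per_min = heavy * 60.0 / window_s
    if heavy_per_min < 1:       # alert
        return 3
    if heavy_per_min < 4:       # some signs of sleepiness
        return 6
    return 8                    # sleepy, some effort to stay awake

blinks = [0.15, 0.20, 0.55, 0.18, 0.60, 0.72]   # closures observed in 60 s
print("KSS estimate:", kss_estimate(blinks))     # 3 heavy blinks/min -> 6
```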
Wearing a helmet is a critical safety measure not only for riders but also for passengers. However, people often skip wearing this protective headgear, leading to an increased risk of injury or death in the event of an accident. There is a growing need for innovative methods that automatically monitor and prevent unsafe driving. To address this issue, we have developed a computer vision-based helmet detection system that can detect in real time whether a rider has a helmet on. We use state-of-the-art computer vision techniques for helmet detection. This paper covers various aspects of helmet detection, including image pre-processing, feature extraction, and classification. The system is evaluated on performance metrics such as accuracy, precision, and recall. Further enhancement of the system is proposed in potential directions for future research. The results demonstrate that computer vision-based helmet detection systems hold significant potential to …
D, Bhavanash Rai
Not wearing a helmet does not cause accidents, but helmets are critical for averting fatal and grievous injuries in the event of a road accident. Currently, traffic police use helmet detection on surveillance videos to identify the vehicle number plate of a person who is not wearing a helmet and issue a challan. On the vehicle side, however, this is not yet implemented. At present, vehicles are neither equipped to issue warnings, nor are any safety measures taken to minimize the risk when the rider is not wearing a helmet. This paper suggests a passive safety system for two-wheelers that uses an integrated camera to detect whether the rider is wearing a helmet by utilizing image processing techniques. Based on the result, if a helmet is not detected, the vehicle can send control frames to the vehicle HMI for alerts. This paper suggests two approaches to implement the solution: one is Machine Learning model deployment, and the other is OpenCV-based helmet detection. Each …
Kishor, Kaushal; Tarte, Malay; Joshi, Umita
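A sketch of the OpenCV-based approach: run a pretrained detector on a camera frame and alert the HMI when no helmet is found. The model file "helmet_detector.onnx" and the output decoding are hypothetical placeholders for whatever model such a system would deploy.

```python
# Hedged sketch: helmet check on one frame, alerting the vehicle HMI.
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("helmet_detector.onnx")   # hypothetical model

def helmet_detected(frame_bgr, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(frame_bgr, 1 / 255.0, (640, 640), swapRB=True)
    net.setInput(blob)
    out = net.forward()                  # output layout depends on the model
    # Assume rows of [x, y, w, h, confidence, ...] with class 0 = helmet.
    scores = out.reshape(-1, out.shape[-1])[:, 4]
    return bool(np.any(scores > conf_threshold))

frame = cv2.imread("rider.jpg")          # frame from the integrated camera
if frame is not None and not helmet_detected(frame):
    print("HMI alert: helmet not detected")   # i.e., send control frame to HMI
```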
In this study, a novel approach to assessing in-vehicle speech intelligibility is presented using psychometric curves. Speech recognition performance scores were modeled at the individual-listener level for a set of speech recognition data previously collected under a variety of in-vehicle listening scenarios. The model coupled an objective metric of binaural speech intelligibility (i.e., the acoustic factors) with a psychometric curve indicating the listener’s speech recognition efficiency (i.e., the listener factors). In separate analyses, two objective metrics were used, one designed to capture spatial release from masking and the other designed to capture binaural loudness. The proposed approach is in contrast to the traditional approach of relying on the speech recognition threshold (the speech level at 50% recognition performance averaged across listeners) as the metric for in-vehicle speech intelligibility. Results from the presented analyses suggest the importance of …
Samardzic, Nikolina; Lavandier, Mathieu; Shen, Yi
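A minimal sketch of the core modeling idea: fit a listener-level psychometric curve mapping an objective intelligibility metric to proportion correct. The two-parameter logistic form and the synthetic data are assumptions for illustration, not the study's model.

```python
# Sketch: listener-level psychometric curve fit.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(metric, midpoint, slope):
    """Proportion of words correct as a logistic function of the metric."""
    return 1.0 / (1.0 + np.exp(-slope * (metric - midpoint)))

metric = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])    # objective metric
p_correct = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97])

(mid, slope), _ = curve_fit(psychometric, metric, p_correct, p0=(0.0, 1.0))
print(f"Listener 50%-point: {mid:.2f}, slope (efficiency): {slope:.2f}")
```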
Traditional static gesture recognition algorithms are easily affected by the complex environment inside the cabin, resulting in low recognition rates. Compared with RGB photos captured by traditional cameras, the depth images captured by 3D TOF cameras can not only reduce the influence of the complex in-cabin environment but also protect occupant privacy. Therefore, this paper proposes a low-compute static gesture recognition method based on 3D TOF imaging in the cabin. A lightweight, low-parameter convolutional neural network (CNN) is trained on five gestures, and the trained gesture model is deployed on a low-compute embedded platform to detect passenger gestures in real time while maintaining recognition speed. The contributions of this paper mainly include: (1) using the TOF camera to collect 1000 depth images of five gestures inside the car cabin; these gesture depth maps are preprocessed and used to train a lightweight convolutional neural network to obtain the gesture …
Yi, Zhigang; Zhou, Mingyu; Xue, Dan; Peng, Shusheng
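A sketch of what a low-parameter CNN for five-class depth-image gesture recognition might look like; the 64x64 input resolution and layer widths are illustrative assumptions chosen for a low-compute target, not the paper's exact network.

```python
# Sketch: tiny CNN for single-channel TOF depth-map gesture classification.
import torch
import torch.nn as nn

class DepthGestureNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, n_classes)

    def forward(self, depth):                   # (B, 1, 64, 64) depth map
        z = self.features(depth).flatten(1)
        return self.classifier(z)

model = DepthGestureNet()
print(sum(p.numel() for p in model.parameters()), "parameters")  # ~22k
print(model(torch.randn(1, 1, 64, 64)).shape)                    # (1, 5)
```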
Engineers like to know what customers think about a vehicle. Now, drivers of the all-electric Ford F-150 Lightning and Mustang Mach-E can oblige via a new system that channels select customer comments to engineers. F-150 Lightning fullsize pickup truck and Mustang Mach-E SUV owners in the U.S. can pass along opinions via a 45-second voice message after selecting “record feedback” through the settings-general menu on the infotainment touchscreen. “We want to hear the customer’s voice. Ford does customer clinics and events, but this is a different way to capture customer feedback,” Donna Dickson, chief engineer of the Ford Mustang Mach-E, said in an interview with SAE Media.
Buchholz, Kami
Personal devices feed our sight and hearing virtually unlimited streams of information while leaving our sense of touch mostly … untouched.
Achieving human-level dexterity during manipulation and grasping has been a long-standing goal in robotics. To accomplish this, having a reliable sense of tactile information and force is essential for robots. A recent study, published in IEEE Robotics and Automation Letters, describes the L3 F-TOUCH sensor, which enhances the force-sensing capabilities of classic tactile sensors. The sensor is lightweight, low-cost, and wireless, making it an affordable option for retrofitting existing robot hands and graspers.
This SAE Recommended Practice defines key terms used in the description and analysis of video-based driver eye-glance behavior, as well as guidance in the analysis of those data. The information provided in this practice is intended to provide consistency for terms, definitions, and analysis techniques. This practice is to be used in laboratory, driving simulator, and on-road evaluations of how people drive, with particular emphasis on evaluating Driver Vehicle Interfaces (DVIs; e.g., in-vehicle multimedia systems, controls, and displays). In terms of how such data are reduced, this version only concerns manual video-based techniques. However, even in its current form, the practice should be useful for describing the performance of automated sensors (eye trackers) and automated reduction (computer vision).
This document describes System Theoretic Process Analysis (STPA) approaches for evaluating human-machine interaction (HMI) that were found effective when conducting STPA human factors and/or system safety evaluations.
Functional Safety Committee
I know nothing more about artificial intelligence (AI) than what I read and what learned people tell me. I know it’s supposed to bring new sophistication to all manner of processes and technologies, including automated driving. So, when a driverless robotaxi operated by GM’s Cruise plowed into a road section of freshly poured cement in San Francisco, it raised questions about the recently beleaguered Cruise. My mind wandered to AI, which many AV compute “stacks” are touted to leverage in abundance. Driving into wet cement isn’t intelligent. Did somebody need to train the vehicle’s AV stack specifically to recognize wet cement? If that’s how it works, I’d prefer not to bet my life on whether some fairly oddball happenstance (is the term ‘edge case’ not cool anymore?) had been accounted for in that particular version of the AD system’s algorithm running that particular day.
Visnic, Bill
Startups are famous for moving quickly. Vinfast may want to slow things down. It was only 2019 when the Vietnamese company built its first cars, rebodied versions of gasoline BMWs that became hits in its home market. Vinfast speedily developed four electric SUVs, including the inaugural VF8 that SAE Media drove in southern California. At the same time, a cargo ship docked near San Francisco, carrying nearly 2,000 VF8s for customers in California and Canada. The next day, Vinfast announced plans to go public via a SPAC merger. And Vinfast recently broke ground on a $4 billion factory in North Carolina, targeting 150,000 units of annual capacity and more than 7,000 jobs.
Ulrich, Lawrence
Extra-Vehicular Activity (EVA) spacesuits are both enabling and limiting. Because pressurization stiffens the pressure garment, an astronaut’s motions and mobility are significantly restricted during EVAs. Dexterity, in particular, is severely reduced. Astronauts are commonly on record identifying spacesuit gloves as a top-priority item in their EVA apparel needing significant improvement. Apollo 17 astronaut-geologist Harrison “Jack” Schmitt has singled out hand fatigue and dexterity as the top two problems to address in EVA spacesuit design for future Moon and Mars exploration. The NASA-STD-3000 standards document indeed states: “Space suit gloves degrade tactile proficiency compared to bare hand operations... Attention should be given to the design of manual interfaces to preclude or minimize hand fatigue or physical discomfort.”
The scope of this document is to describe system design guidelines for the use of haptic interfaces to manage system safety and functional aspects of designs applicable to OEM and aftermarket systems in light vehicles. The intent of these guidelines is to help system designers determine when to use haptic interfaces and how to ensure their effectiveness. These may be stand-alone interfaces or the haptic aspects of multi-modal (audio, video, speech, haptic) interfaces. The document excludes haptic systems designed for use by passengers, which may be addressed in a future version.
Autonomous driving systems (ADS) have been widely tested in real-world environments with operators who must monitor and intervene due to remaining technical challenges. However, intervention methods that require operators to take over control of the vehicle involve many drawbacks related to human performance. ADS consist of recognition, decision, and control modules. The latter two phases depend on the recognition phase, which still struggles with tasks involving the prediction of human behavior, such as pedestrian risk prediction. As an alternative to full automation of the recognition task, cooperative recognition approaches utilize the human operator to assist the automated system in performing challenging recognition tasks, using a recognition assistance interface to realize human-machine cooperation. In this study, we propose a recognition assistance interface for cooperative recognition in order to achieve safer and more efficient driving through improved human-automation …
Kuribayashi, Atsushi; Takeuchi, Eijiro; Carballo, Alexander; Ishiguro, Yoshio; Takeda, Kazuya
“Holy cats. What happens when this stuff goes wrong?” That’s how mechanical engineer and attorney Jennifer Dukarski framed her tech talk about developments in vehicular artificial intelligence (AI) and machine learning at the 2023 SAE WCX conference in Detroit. She linked the discussion to General Motors’ March announcement that it was exploring using ChatGPT as the driver interface in vehicles.
Clonts, Chris
Technology capable of replicating the sense of touch, also known as haptic feedback, can greatly enhance human-computer and human-robot interfaces for applications such as medical rehabilitation and virtual reality. A soft artificial skin was developed that provides haptic feedback and, using a self-sensing mechanism, has the potential to instantaneously adapt to a wearer’s movements.
Vehicles equipped with Level 4 and 5 autonomy will need to be tested according to the regulatory standards (or future revisions thereof) that vehicles with lower levels of autonomy are currently subject to. Today, dynamic Federal Motor Vehicle Safety Standards (FMVSS) tests are performed with human drivers and driving robots controlling the test vehicle’s steering wheel, throttle pedal, and brake pedal. However, many Level 4 and 5 vehicles will lack these traditional driver controls, so it will be impossible to control these vehicles using human drivers or traditional driving robots. Therefore, there is a need for an electronic interface that allows engineers to send dynamic steering, speed, and brake commands to a vehicle. This paper describes the design and implementation of a market-ready Automated Driving Systems (ADS) Test Data Interface (TDI), a secure electronic control interface that aims to solve the challenges outlined above. The interface consists of a communication port …
Zagorski, Scott; Nguyen, An; Heydinger, Gary; Abbey, Howard
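To illustrate the kind of interface described, here is a hedged sketch of a fixed-size steering/speed/brake command message sent over a network socket. The message layout, address, and field scaling are hypothetical; the real TDI defines its own secure protocol.

```python
# Hypothetical command frame: sequence counter + three float32 setpoints.
import socket
import struct

TDI_ADDR = ("192.0.2.10", 5005)    # documentation-range address, assumed port

def send_command(sock, seq, steer_deg, speed_mps, brake_pct):
    """Pack and send one little-endian command frame to the test vehicle."""
    msg = struct.pack("<Ifff", seq, steer_deg, speed_mps, brake_pct)
    sock.sendto(msg, TDI_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(sock, seq=1, steer_deg=15.0, speed_mps=8.3, brake_pct=0.0)
```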
For Automated Vehicles (AVs) to be successful, they must integrate into society in a way that makes everyone confident in how AVs work to serve people and their communities. This integration requires that AVs communicate effectively, not only with other vehicles but with all road users, including pedestrians and cyclists. One proposed method of AV communication is through an external human-machine interface (eHMI). While many studies have evaluated eHMI solutions, few have considered their compliance with relevant Federal Motor Vehicle Safety Standards (FMVSS) or their scalability. This study evaluated the effectiveness of a lightbar eHMI in communicating AV intent by measuring user comprehension of the eHMI and its impact on pedestrians’ trust and acceptance of AVs. In a virtual reality scene, 33 participants experienced one of three eHMI conditions (no lightbar, FMVSS-compliant lightbar, non-compliant lightbar) of an AV that communicated its intent when navigating a busy intersection.
Marulanda, Susana; Britten, Nicholas; Chang, Chun-Cheng; Shutko, John
Engaging in visual-manual tasks such as selecting a radio station, adjusting the interior temperature, or setting an automation function can be distracting to drivers. Additionally, if setting the automation fails, driver takeover can be delayed. Traditionally, assessing the usability of driver interfaces and determining whether they are unacceptably distracting (per the NHTSA driver distraction guidelines and SAE J2364) involves human subject testing, which is expensive and time-consuming. However, most vehicle engineering decisions are based on computational analyses, such as the task time predictions in SAE J2365. Unfortunately, J2365 was developed before touch screens were common in motor vehicles. To update J2365 and other task analyses, estimates were developed for (1) cognitive activities (mental, search, read), (2) low-level 2D elements (Press, Tap, Double Tap, Drag, Zoom, Press and Hold, Rotate, Turn Knob, Type and Keypress, and Flick), and (3) complex 2D elements (handwrite, menu use …
Green, Paul; Koca, Ekim; Brennan-Carey, Collin
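To show how such element-time estimates combine into a task-time prediction, here is a keystroke-level-style sketch: a task is a sequence of elements, each with a nominal time, and the prediction is their sum. The per-element times are hypothetical placeholders, not the values being developed for the updated practice.

```python
# Sketch: keystroke-level task-time prediction in the spirit of SAE J2365.
ELEMENT_TIME_S = {          # hypothetical per-element times (seconds)
    "search": 1.5, "read": 0.8, "reach": 0.45,
    "tap": 0.25, "drag": 0.65, "type_char": 0.30,
}

def predict_task_time(elements):
    """Sum the nominal times of the elements that make up a task."""
    return sum(ELEMENT_TIME_S[e] for e in elements)

# Example: find a preset station on a touchscreen and tap it.
task = ["search", "read", "reach", "tap"]
print(f"Predicted task time: {predict_task_time(task):.2f} s")  # 3.00 s
```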