Browse Topic: Human-Machine Interface (HMI)
The study analyzed data from on-road drives with a pre-production Level 2 (L2) partial automation system using a sample of 27 drivers ranging from 21 to 75 years of age. The system provides continuous automatic lateral and longitudinal control but requires the driver to remain attentive and intervene when necessary. The L2 system was equipped with a Driver Monitoring System (DMS) that issued escalating alerts to remind the driver to pay attention or take over when needed. During the 14-month study period, drivers completed 354,768 miles of travel with the L2 system engaged, totaling 5,913 trips. The results of the study showed that drivers were highly responsive to attention reminders and takeover alerts, with high compliance rates and quick response times. Importantly, there was no evidence of habituation to these alerts over time. These findings support the effectiveness of the system's DMS and alert HMI (Human-Machine Interface) strategy in promoting the proper use of the system.
This SAE Edge Research Report explores advancements in next-generation mobility, focusing on digitalized and smart cockpits and cabins. It offers a literature review examining current customer experiences with traditional vehicles and future mobility expectations. Key topics include integrating smart cockpit and cabin technologies, addressing challenges in customer and user experience (UX) in digital environments, and discussing strategies for transitioning from traditional vehicles to electric ones while educating customers. User Experience for Digitalized and Smart Cockpits and Cabins of Next-gen Mobility covers both on- and off-vehicle experiences, analyzing complexities in developing and deploying digital products and services with effective user interfaces. Emphasis is placed on meeting UX requirements, gaining user acceptance, and avoiding trust issues due to poor UX. Additionally, the report concludes with suggestions for improving UX in digital products and services for future
This paper presents the development of a cost-effective assistive headgear designed to address the navigation challenges faced by millions of visually impaired individuals in India. Existing solutions are often prohibitively expensive, leaving a significant portion of this population underserved. To address this gap, we propose a novel human-machine interface that utilizes a synergistic combination of computer vision, stereo imaging, and haptic feedback technologies. The focus of this project lies in the creation of a practical and affordable headgear that empowers visually impaired users with real-time obstacle detection and navigation capabilities. The solution leverages computer vision for environmental analysis and integrates haptic feedback for intuitive user guidance. This paper details the design intricacies of the headgear, along with the implementation methodologies employed. We present comprehensive testing results and discuss the project's potential to significantly enhance
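The pipeline the abstract describes — stereo imaging for depth, haptic feedback for guidance — can be sketched as a mapping from a depth map to a vibration command. In a minimal illustration (not the authors' implementation), the depth values would come from a stereo matcher such as OpenCV's StereoBM, and the `near`/`far` thresholds below are purely illustrative assumptions:

```python
import numpy as np

def haptic_intensity(depth_m: np.ndarray,
                     near: float = 0.5, far: float = 3.0) -> float:
    """Map the nearest obstacle in a depth map (meters) to a vibration
    duty cycle in [0, 1]. `near`/`far` are illustrative thresholds:
    anything closer than `near` gives full vibration, anything beyond
    `far` gives none, with a linear ramp in between."""
    d = float(np.nanmin(depth_m))      # closest point in the scene
    if d <= near:
        return 1.0
    if d >= far:
        return 0.0
    return (far - d) / (far - near)    # linear ramp toward full intensity

# Example: a synthetic 4x4 depth map with one obstacle at 1.75 m
depth = np.full((4, 4), 5.0)
depth[2, 1] = 1.75
print(haptic_intensity(depth))  # 0.5 (halfway between near and far)
```

A real system would run this per-frame on the stereo depth output and drive a vibration motor's PWM duty cycle with the result.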
Modal performance of a vehicle body often influences the tactile vibrations felt by passengers as well as their acoustic comfort inside the cabin at low frequencies. This paper focuses on a premium hatchback’s development program in which a design-intent initial batch of proto-cars was found to meet its targeted NVH performance. However, tactile vibrations in pre-production pilot-batch vehicles were found to be of higher intensity. As a resolution, a method of cascading full-vehicle-level performance down to the Body-In-White (BIW) component level was used to understand the dynamic behavior of the vehicle and, subsequently, to address the structural weakness of the body and achieve the targeted NVH performance. The cascaded modal performance indicated that the global bending stiffness of the pre-production bodies was lower relative to that of the design-intent body. To identify the root cause, design sensitivity of the number and footprint of weld spots, roof bows’ and headers’ attachment stiffness to BIW
Today’s intelligent robots can accurately recognize many objects through vision and touch. Tactile information, obtained through sensors, along with machine learning algorithms, enables robots to identify objects previously handled.
Semi-automated computational design methods involving physics-based simulation, optimization, machine learning, and generative artificial intelligence (AI) already allow greatly enhanced performance alongside reduced cost in both design and manufacturing. As we progress, developments in user interfaces, AI integration, and automation of workflows will increasingly reduce the human inputs required to achieve this. With this, engineering teams must change their mindset from designing products to specifying requirements, focusing their efforts on testing and analysis to provide accurate specifications. Generative Design in Aerospace and Automotive Structures discusses generative design in its broadest sense, including the challenges and recommendations regarding multi-stage optimizations.
Homologation is an important process in vehicle development, and aerodynamics is a main data contributor to it. The process is heavily interconnected: production planning defines the available assemblies; construction defines their parts and features; sales defines the assemblies offered in different markets, while legislation defines the rules applicable to homologation. Control engineers define the behavior of active, aerodynamically relevant components. Wind tunnels are the main test tool for homologation, accompanied by surface-area measurement systems. Mechanics support these test operations. Prototype management provides test vehicles, while parts come from various production and prototyping sources and are stored and commissioned by logistics. Several phases of this complex process share the same context: production timelines for assemblies and parts for each chassis-engine package define which drag coefficients or drag coefficient contributions shall be determined. Absolute and
Using electrical impedance tomography (EIT), researchers have developed a flexible tactile sensor system for the objective evaluation of fine finger movements. Demonstrating high accuracy in classifying diverse pinching motions, with discrimination rates surpassing 90 percent, this innovation holds potential in cognitive development and automated medical research.
The lane departure warning (LDW) system is a warning system that alerts drivers if they are drifting (or have drifted) out of their lane or from the roadway. This warning system is designed to reduce the likelihood of crashes resulting from unintentional lane departures (e.g., run-off-road, side collisions, etc.). This system will not take control of the vehicle; it will only let the driver know that he/she needs to steer back into the lane. An LDW is not a lane-change monitor, which addresses intentional lane changes, or a blind spot monitoring system, which warns of other vehicles in adjacent lanes. This informational report applies to original equipment manufacturer and aftermarket LDW systems for light-duty vehicles (gross vehicle weight rating of no more than 8500 pounds) on relatively straight roads with a radius of curvature of 500 m or more and under good weather conditions.
iMotions employs neuroscience and AI-powered analysis tools to enhance the tracking, assessment and design of human-machine interfaces inside vehicles. The advancement of vehicles with enhanced safety and infotainment features has made evaluating human-machine interfaces (HMI) in modern commercial and industrial vehicles crucial. Drivers face a steep learning curve due to the complexities of these new technologies. Additionally, the interaction with advanced driver-assistance systems (ADAS) increases concerns about cognitive impact and driver distraction in both passenger and commercial vehicles. As vehicles incorporate more automation, many clients are turning to biosensor technology to monitor drivers' attention and the effects of various systems and interfaces. Utilizing neuroscientific principles and AI, data from eye-tracking, facial expressions and heart rate are informing more effective system and interface design strategies. This approach ensures that automation advancements
Automatically controlling equipment, and providing users with visualization of the operation, are two distinct but closely related functions. Specialized microcontrollers or commercial off-the-shelf (COTS) programmable logic controllers (PLCs) are workhorses for implementing control, while a variety of dedicated or PC-based human-machine interface (HMI) options are available.
While there is a tendency for new vehicles to have a focus on ride, handling, performance and other dynamic elements, the model year 2024 Lincoln Nautilus team added another element to how the driver will experience the midsize SUV. Not that the ride, handling, etc. were ignored, but the global design and engineering team wanted to do something different with this two-row SUV. Recognize that this is a vehicle with a sumptuous interior that includes not only first-class seating (24-way adjustable front seats) and materials (Alpine Venetian leather available on the seats; cashmere for the headliner) but also an available high-end Revel Ultima 3D audio system with 28 speakers. What's more, there's “Lincoln Digital Scent,” small electronically activated pods containing various aromas (e.g., Mystic Forest, Ozonic Azure, Violet Cashmere). Across the top of the instrument panel there is a 48-inch backlit LCD screen and an 11.1-inch touchscreen in the center stack.
Game-like navigation visuals. Conversational-style voice commands. Contactless biometric sensing. A tidal wave of software code and sensing technologies is being prepped to alter in-vehicle activities. Two supplier companies, TomTom and Mitsubishi Electric Automotive America (MEAA), recently presented their concept cockpit demonstrators to media at TomTom's North American corporate offices in Farmington Hills, Michigan. A few highlights:
In a new study, engineers from Korea and the United States have developed a wearable, stretchy patch that could help to bridge the divide between people and machines — and with benefits for the health of humans around the world.
The purpose of this document is to provide guidance for the implementation of DVI for momentary intervention-type LKA systems, as defined by ISO 11270. LKA systems provide driver support for safe lane keeping operations via momentary interventions. LKA systems are SAE Level 0, according to SAE J3016. LKA systems do not automate any part of the dynamic driving task (DDT) on a sustained basis and are not classified as an integral component of a partial or conditional driving automation system per SAE J3016. The design intent (i.e., purpose) of an LKA system is to address crash scenarios resulting from inadvertent lane or road departures. Drivers can override an LKA system intervention at any time. LKA systems do not guarantee prevention of lane drifts or related crashes. Road and driving environment (e.g., lane line delineation, inclement weather, road curvature, road surface, etc.) as well as vehicle factors (e.g., speed, lateral acceleration, equipment condition, etc.) may affect the
ChatGPT has entered the car. At CES 2024, Volkswagen and technology partner Cerence introduced an update to IDA, VW's in-car voice assistant, so it can now use ChatGPT to expand what's possible using voice commands in vehicles. VW said the ChatGPT bot will be available in Europe in current MEB and MQB evo models from VW Group brands that currently use the IDA voice assistant. That includes some members of the ID family - the ID.7, ID.4, ID.5 and ID.3 - as well as the new Tiguan, Passat and Golf models. VW brands Seat, Škoda, Cupra and VW Commercial Vehicles also will get IDA integration. VW hopes to bring IDA to other markets, including North America, but did not make any timing announcements.
Wearing a helmet is a critical safety measure not only for riders but also for passengers. However, people often skip wearing these protective headgears, leading to an increased risk of injury or death in the event of an accident. There is a growing need for innovative methods that automatically monitor and prevent unsafe driving. To address this issue, we have developed a computer vision-based helmet detection system that can detect in real time whether a rider is wearing a helmet. We use state-of-the-art computer vision techniques for helmet detection. This paper covers various aspects of helmet detection, including image pre-processing, feature extraction, and classification. The system is evaluated on performance metrics such as accuracy, precision, and recall. Further enhancements to the system are proposed as potential directions for future research. The results demonstrate that computer vision-based helmet detection systems hold significant potential to
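The three-stage pipeline the abstract names — pre-processing, feature extraction, classification — might look like the following minimal sketch. This is not the authors' system: the downsampling, intensity-histogram features, and nearest-centroid classifier are toy stand-ins chosen for illustration; a real detector would use learned features (e.g., a CNN) on detected rider regions.

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 16) -> np.ndarray:
    """Toy pre-processing: crude downsample to size x size, scale to [0, 1]."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return img[np.ix_(ys, xs)].astype(float) / 255.0

def extract_features(img: np.ndarray) -> np.ndarray:
    """Toy feature: normalized 8-bin intensity histogram (a real system
    would use learned CNN features or HOG descriptors)."""
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0))
    return hist / hist.sum()

def classify(feat: np.ndarray, centroids: dict) -> str:
    """Nearest-centroid decision between class prototypes."""
    labels = list(centroids)
    dists = [np.linalg.norm(feat - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical class prototypes and synthetic test crops
centroids = {"helmet": np.eye(8)[7], "no_helmet": np.eye(8)[0]}
bright = np.full((64, 64), 230, dtype=np.uint8)  # bright, helmet-like crop
dark = np.zeros((64, 64), dtype=np.uint8)        # dark, no-helmet-like crop
print(classify(extract_features(preprocess(bright)), centroids))  # helmet
print(classify(extract_features(preprocess(dark)), centroids))    # no_helmet
```

The accuracy/precision/recall evaluation the abstract mentions would then be computed by running `classify` over a labeled test set and comparing predictions to ground truth.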
In this study, a novel assessment approach of in-vehicle speech intelligibility is presented using psychometric curves. Speech recognition performance scores were modeled at an individual listener level for a set of speech recognition data previously collected under a variety of in-vehicle listening scenarios. The model coupled an objective metric of binaural speech intelligibility (i.e., the acoustic factors) with a psychometric curve indicating the listener’s speech recognition efficiency (i.e., the listener factors). In separate analyses, two objective metrics were used with one designed to capture spatial release from masking and the other designed to capture binaural loudness. The proposed approach is in contrast to the traditional approach of relying on the speech recognition threshold, the speech level at 50% recognition performance averaged across listeners, as the metric for in-vehicle speech intelligibility. Results from the presented analyses suggest the importance of
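A psychometric curve of the kind described here is commonly modeled as a logistic function relating an acoustic quantity (such as SNR) to the probability of correct recognition, with per-listener parameters capturing individual efficiency. A minimal sketch follows; the midpoint and slope values are illustrative assumptions, not values from the study:

```python
import numpy as np

def psychometric(snr_db, midpoint=-6.0, slope=0.5):
    """Logistic psychometric function: probability of correct word
    recognition as a function of SNR in dB. `midpoint` is the speech
    recognition threshold (the 50% point, the traditional single-number
    metric); `slope` controls steepness and reflects listener efficiency.
    Both parameter values here are illustrative."""
    return 1.0 / (1.0 + np.exp(-slope * (np.asarray(snr_db) - midpoint)))

# At the midpoint the modeled listener scores exactly 50%:
print(float(psychometric(-6.0)))  # 0.5
```

Fitting `midpoint` and `slope` per listener, with the objective binaural intelligibility metric supplying the x-axis, gives the individual-level model the abstract contrasts with the group-averaged 50% threshold.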
Achieving human-level dexterity during manipulation and grasping has been a long-standing goal in robotics. To accomplish this, having a reliable sense of tactile information and force is essential for robots. A recent study, published in IEEE Robotics and Automation Letters, describes the L3 F-TOUCH sensor that enhances the force sensing capabilities of classic tactile sensors. The sensor is lightweight, low-cost, and wireless, making it an affordable option for retrofitting existing robot hands and graspers.
Personal devices feed our sight and hearing virtually unlimited streams of information while leaving our sense of touch mostly … untouched.
Engineers like to know what customers think about a vehicle. Now, drivers of the all-electric Ford F-150 Lightning and Mustang Mach-E can oblige via a new system that channels select customer comments to engineers. F-150 Lightning fullsize pickup truck and Mustang Mach-E SUV owners in the U.S. can pass along opinions via a 45-second voice message after selecting “record feedback” through the settings-general menu on the infotainment touchscreen. “We want to hear the customer's voice. Ford does customer clinics and events, but this is a different way to capture customer feedback,” Donna Dickson, chief engineer of the Ford Mustang Mach-E, said in an interview with SAE Media.
This SAE Recommended Practice defines key terms used in the description and analysis of video-based driver eye glance behavior, as well as guidance in the analysis of those data. The information provided in this practice is intended to provide consistency for terms, definitions, and analysis techniques. This practice is to be used in laboratory, driving simulator, and on-road evaluations of how people drive, with particular emphasis on evaluating Driver Vehicle Interfaces (DVIs; e.g., in-vehicle multimedia systems, controls and displays). In terms of how such data are reduced, this version only concerns manual video-based techniques. However, even in its current form, the practice should be useful for describing the performance of automated sensors (eye trackers) and automated reduction (computer vision).
I know nothing more about artificial intelligence (AI) than what I read and what learned people tell me. I know it's supposed to bring new sophistication to all manner of processes and technologies, including automated driving. So, when a driverless robotaxi operated by GM's Cruise plowed into a road section of freshly poured cement in San Francisco, it raised questions about recently beleaguered Cruise. My mind wandered to AI, which many AV compute “stacks” are touted to leverage in abundance. Driving into wet cement isn't intelligent. Did somebody need to train the vehicle's AV stack specifically to recognize wet cement? If that's how it works, I'd prefer not to bet my life on whether some fairly oddball happenstance (is the term ‘edge case’ not cool anymore?) had been accounted for in that particular version of the AD system's algorithm running that particular day.
Startups are famous for moving quickly. Vinfast may want to slow things down. It was only 2019 when the Vietnamese company built its first cars, rebodied versions of gasoline BMWs that became hits in its home market. Vinfast speedily developed four electric SUVs, including the inaugural VF8 that SAE Media drove in southern California. At the same time, a cargo ship docked near San Francisco, carrying nearly 2,000 VF8s for customers in California and Canada. The next day, Vinfast announced plans to go public via a SPAC merger. And Vinfast recently broke ground on a $4 billion factory in North Carolina, targeting 150,000 units of annual capacity and more than 7,000 jobs.
ABSTRACT This paper discusses the design and implementation of an interactive mixed reality cockpit that enhances Soldier-vehicle interaction by providing a 360-degree situational awareness system. The cockpit uses indirect vision, where cameras outside the vehicle provide a video feed of the surroundings to the cockpit. The cockpit also includes a virtual information dashboard that displays real-time information about the vehicle, mission, and crew status. The visualization of the dashboard is based on past research in information visualization, allowing Soldiers to quickly assess their operational state. The paper presents the results of a usability study on the effectiveness of the mixed reality cockpit, which compared the Vitreous interface, a Soldier-centered mixed reality head-mounted display, with two other interface and display technologies. The study found that the Vitreous UI resulted in better driving performance and better subjective evaluation of the ability to actively
Crew station design in the physical realm is complex and expensive due to the cost of fabrication and the time required to reconfigure the necessary hardware for human factors studies and optimization of space claim. However, recent advances in virtual reality (VR) and hand-tracking technologies have enabled a paradigm shift in the process. The Ground Vehicle Systems Center has developed an innovative approach using VR technologies to enable a trade-space exploration capability: crews can place touchscreens and switch panels as desired, then lock them into place to perform a fully recorded simulation of operating the vehicle through virtual terrain, maneuvering through firing points and engaging moving and static targets during virtual night and day missions with simulated infrared and night-vision sensor effects. Human factors are explored and studied using hand tracking, which enables operators to check reach by interacting with virtual components
ABSTRACT The U.S. Army Combat Capabilities Development Command (DEVCOM) Ground Vehicle Systems Center (GVSC) has been developing next generation crew stations over the last several decades. In this paper, the problem space that impacts design development and decisions is discussed. This is followed by a historical overview of crewstation development activities that have evolved over the last 30 years, as well as key lessons learned that must be considered for successful ground vehicle Soldier-vehicle interactions. Lastly, the direction and critical technological focus areas are identified to exploit advancements and meet future combat vehicle system needs. Citation: T. Tierney, “A Perspective on GVSC Crewstation Development and Addressing Future Ground Combat Vehicle Needs,” In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 15-17, 2023.
Extra-Vehicular Activity (EVA) spacesuits are both enabling and limiting. Because pressurization results in stiffening of the pressure garment, an astronaut’s motions and mobility are significantly restricted during EVAs. Dexterity, in particular, is severely reduced. Astronauts are commonly on record identifying spacesuit gloves as a top-priority item in their EVA apparel needing significant improvement. Apollo 17 astronaut-geologist Harrison “Jack” Schmitt has singled out hand fatigue and dexterity as the top two problems to address in EVA spacesuit design for future Moon and Mars exploration. The NASA-STD-3000 standards document indeed states: “Space suit gloves degrade tactile proficiency compared to bare hand operations... Attention should be given to the design of manual interfaces to preclude or minimize hand fatigue or physical discomfort.”