Browse Topic: Human machine interface (HMI)
ABSTRACT Research is currently underway to improve the controllability of high degree-of-freedom manipulators under a Phase II SBIR contract sponsored by the U.S. Army Tank Automotive Research, Development, and Engineering Center (TARDEC). As part of this program, the authors have created new control methods and adapted tool-changing technology onto a dexterous arm to examine the controllability of various manipulator functions. In this paper, the authors describe the work completed under this program and present its findings in terms of how these technologies can extend the capabilities of existing and newly developed robotic manipulators.
ABSTRACT The concept of handheld control systems with modular and/or integrated displays provides the flexibility of operator use that supports the needs of today’s warfighters. A human machine interface control system that easily integrates with vehicle systems through a common architecture and can transition to support dismounted operations provides warfighters with functional mobility they do not have today. With Size, Weight and Power along with reliability, maintainability and availability driving the needs of most platforms for both upgrade and development, moving to convertible (mounted to handheld) and transferable control systems supports these needs as well as the warfighter’s need to maintain continuous command and control connectivity in uncertain mission conditions.
ABSTRACT As the number of robotic systems on the battlefield increases, the number of operators grows with it, leading to a significant cost burden. Autonomous robots are already capable of task execution with limited supervision, and the capabilities of autonomous robots continue to advance rapidly. Because these autonomous systems have the ability to assist and augment human soldiers, commanders need advanced methods for assigning tasks to the systems, monitoring their status and using them to achieve desirable results. Mission Command for Autonomous Systems (MCAS) aims to enable natural interaction between commanders and their autonomous assets without requiring dedicated operators or significantly increasing the commanders’ cognitive burden. This paper discusses the approach, design and challenges of MCAS and presents opportunities for future collaboration with industry and academia.
ABSTRACT This presentation will review the ongoing lessons learned from a joint industry/DoD collaborative program to explore this area over the past five years. The discussion will review the effectiveness of integrating multiple new technologies (combined with select COTS elements) to provide a complete solution designed to reduce spares stockpiles, maximize available manpower, reduce maintenance downtime and reduce vehicle lifecycle costs. A number of new and emerging technology case studies will be presented, involving diagnostic sensors (such as battery health monitors), knowledge management data accessibility, remote support-based telematics, secure communication, condition-based software algorithms, browser-based user interfaces and web portal data delivery.
ABSTRACT The use and operation of unmanned systems are becoming more commonplace, and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data driven, and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best practice human factors engineering in the form of human machine interfaces and user-centered design for robotic/unmanned control systems, integrated early in platform concept and design phases, can significantly impact platform
ABSTRACT Recent advances in neuroscience, signal processing, machine learning, and related technologies have made it possible to reliably detect brain signatures specific to visual target recognition in real time. Utilizing these technologies together has shown an increase in the speed and accuracy of visual target identification over traditional visual scanning techniques. Images containing a target of interest elicit a unique neural signature in the brain (e.g. P300 event-related potential) when detected by the human observer. Computer vision exploits the P300-based signal to identify specific features in the target image that are different from other non-target images. Coupling the brain and computer in this way along with using rapid serial visual presentation (RSVP) of the images enables large image datasets to be accurately interrogated in a short amount of time. Together this technology allows for potential military applications ranging from image triaging for the image analyst
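The abstract above does not disclose the authors' algorithms; purely as a hedged illustration of the kind of single-trial analysis it describes, the sketch below flags a stimulus-locked EEG epoch whose mean amplitude in the typical P300 latency window (roughly 250-450 ms after stimulus onset) stands out from the pre-window baseline. The function name, window bounds, and z-score threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def detect_p300(epoch, fs=250, window=(0.25, 0.45), threshold=2.0):
    """Flag a single-channel, stimulus-locked EEG epoch as containing a
    P300-like deflection (illustrative sketch only).

    The mean amplitude inside the candidate P300 window is z-scored
    against the pre-window baseline; a large positive score suggests
    the observer detected a target image.
    """
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    baseline = epoch[:lo]
    score = (epoch[lo:hi].mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return score > threshold, score
```

In an RSVP setting a test of this kind would run once per presented image, and images whose epochs score highly would be queued for the analyst.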
Crew station design in the physical realm is complex and expensive due to the cost of fabrication and the time required to reconfigure hardware for human factors studies and optimization of space claim. However, recent advances in Virtual Reality (VR) and hand tracking technologies have enabled a paradigm shift in this process. The Ground Vehicle Systems Center has developed an innovative approach that uses VR technologies to enable a trade space exploration capability: crews can place touchscreens and switch panels as desired, then lock them into place to perform a fully recorded simulation of operating the vehicle through virtual terrain, maneuvering through firing points and engaging moving and static targets during virtual night and day missions with simulated sensor effects for infrared and night vision. Human factors are explored and studied using hand tracking, which enables operators to check reach by interacting with virtual components.
Today’s intelligent robots can accurately recognize many objects through vision and touch. Tactile information obtained through sensors, along with machine learning algorithms, enables robots to identify objects they have previously handled.
Semi-automated computational design methods involving physics-based simulation, optimization, machine learning, and generative artificial intelligence (AI) already allow greatly enhanced performance alongside reduced cost in both design and manufacturing. As these methods progress, developments in user interfaces, AI integration, and automation of workflows will increasingly reduce the human input required to achieve this. Accordingly, engineering teams must change their mindset from designing products to specifying requirements, focusing their efforts on testing and analysis to provide accurate specifications. Generative Design in Aerospace and Automotive Structures discusses generative design in its broadest sense, including the challenges and recommendations regarding multi-stage optimizations. It is part of the SAE EDGE™ Research Report portfolio.
Homologation is an important process in vehicle development, and aerodynamics is a main data contributor. The process is heavily interconnected: production planning defines the available assemblies; construction defines their parts and features; sales defines the assemblies offered in different markets, while legislation defines the rules applicable to homologation. Control engineers define the behavior of active, aerodynamically relevant components. Wind tunnels are the main test tool for homologation, accompanied by surface-area measurement systems. Mechanics support these test operations. Prototype management provides test vehicles, while parts come from various production and prototyping sources and are stored and commissioned by logistics. Several phases of this complex process share the same context: production timelines for assemblies and parts for each chassis-engine package define which drag coefficients or drag coefficient contributions shall be determined. Absolute and
Using electrical impedance tomography (EIT), researchers have developed a system based on a flexible tactile sensor for the objective evaluation of fine finger movements. Demonstrating high accuracy in classifying diverse pinching motions, with discrimination rates surpassing 90 percent, this innovation holds potential for cognitive development and automated medical research.
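The study's actual classifier is not specified in the blurb above. As a minimal sketch of how flattened EIT conductivity frames might be mapped to pinch-motion classes, a nearest-centroid classifier could look like the following; every name, shape, and the choice of classifier are assumptions for illustration only.

```python
import numpy as np

def fit_centroids(frames, labels):
    """Compute one mean 'template' frame per pinch-motion class.

    frames: (n_samples, n_features) flattened EIT conductivity maps.
    labels: (n_samples,) integer pinch-motion class per frame.
    """
    classes = np.unique(labels)
    centroids = np.stack([frames[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(frames, classes, centroids):
    """Assign each frame to the class with the nearest template
    (Euclidean distance in feature space)."""
    d = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

A published system would likely use a learned model rather than raw centroids, but the train/predict split over per-class tactile templates is the same general shape.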
The lane departure warning (LDW) system is a warning system that alerts drivers if they are drifting (or have drifted) out of their lane or off the roadway. This warning system is designed to reduce the likelihood of crashes resulting from unintentional lane departures (e.g., run-off-road, side collisions, etc.). This system will not take control of the vehicle; it will only let the driver know that he/she needs to steer back into the lane. An LDW is not a lane-change monitor, which addresses intentional lane changes, or a blind spot monitoring system, which warns of other vehicles in adjacent lanes. This informational report applies to original equipment manufacturer and aftermarket LDW systems for light-duty vehicles (gross vehicle weight rating of no more than 8500 pounds) on relatively straight roads with a radius of curvature of 500 m or more and under good weather conditions.
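The report above defines scope and terminology rather than algorithms. A warning criterion commonly used in the LDW literature is time-to-line-crossing (TLC): the lateral distance to the nearer lane boundary divided by the lateral drift rate. The sketch below is an illustrative sketch of that idea only; the signatures, the 1.75 m half-width, and the 1 s threshold are assumptions, not values from this report.

```python
def time_to_line_crossing(lateral_offset_m, lateral_rate_mps, lane_half_width_m=1.75):
    """TLC toward the boundary the vehicle is drifting toward.

    lateral_offset_m: signed offset from lane center (+ = right).
    lateral_rate_mps: signed lateral velocity (+ = drifting right).
    Returns float('inf') when the vehicle is not drifting outward.
    """
    if lateral_rate_mps == 0.0:
        return float("inf")
    boundary = lane_half_width_m if lateral_rate_mps > 0 else -lane_half_width_m
    tlc = (boundary - lateral_offset_m) / lateral_rate_mps
    return tlc if tlc >= 0.0 else float("inf")

def ldw_alert(lateral_offset_m, lateral_rate_mps,
              lane_half_width_m=1.75, tlc_threshold_s=1.0):
    """Warn only when the projected lane crossing is imminent,
    consistent with LDW being an alert, not a control intervention."""
    return time_to_line_crossing(
        lateral_offset_m, lateral_rate_mps, lane_half_width_m) < tlc_threshold_s
```

For example, a vehicle 1.4 m right of center drifting right at 0.5 m/s has a TLC of 0.7 s and would trigger the alert, while the same drift rate at 0.5 m offset (TLC 2.5 s) would not.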
iMotions employs neuroscience and AI-powered analysis tools to enhance the tracking, assessment and design of human-machine interfaces inside vehicles. The advancement of vehicles with enhanced safety and infotainment features has made evaluating human-machine interfaces (HMI) in modern commercial and industrial vehicles crucial. Drivers face a steep learning curve due to the complexities of these new technologies. Additionally, the interaction with advanced driver-assistance systems (ADAS) increases concerns about cognitive impact and driver distraction in both passenger and commercial vehicles. As vehicles incorporate more automation, many clients are turning to biosensor technology to monitor drivers' attention and the effects of various systems and interfaces. Utilizing neuroscientific principles and AI, data from eye-tracking, facial expressions and heart rate are informing more effective system and interface design strategies. This approach ensures that automation advancements
Automatically controlling equipment, and providing users with visualization of the operation, are two distinct but closely related functions. Specialized microcontrollers or commercial off-the-shelf (COTS) programmable logic controllers (PLCs) are workhorses for implementing control, while a variety of dedicated or PC-based human-machine interface (HMI) options are available.
Game-like navigation visuals. Conversational-style voice commands. Contactless biometric sensing. A tidal wave of software code and sensing technologies is being prepped to alter in-vehicle activities. Two supplier companies, TomTom and Mitsubishi Electric Automotive America (MEAA), recently presented their concept cockpit demonstrators to media at TomTom's North American corporate offices in Farmington Hills, Michigan. A few highlights follow.
In a new study, engineers from Korea and the United States have developed a wearable, stretchy patch that could help bridge the divide between people and machines, with benefits for the health of humans around the world.
The purpose of this document is to provide guidance for the implementation of DVI for momentary intervention-type LKA systems, as defined by ISO 11270. LKA systems provide driver support for safe lane keeping operations via momentary interventions. LKA systems are SAE Level 0, according to SAE J3016. LKA systems do not automate any part of the dynamic driving task (DDT) on a sustained basis and are not classified as an integral component of a partial or conditional driving automation system per SAE J3016. The design intent (i.e., purpose) of an LKA system is to address crash scenarios resulting from inadvertent lane or road departures. Drivers can override an LKA system intervention at any time. LKA systems do not guarantee prevention of lane drifts or related crashes. Road and driving environment (e.g., lane line delineation, inclement weather, road curvature, road surface, etc.) as well as vehicle factors (e.g., speed, lateral acceleration, equipment condition, etc.) may affect the
ChatGPT has entered the car. At CES 2024, Volkswagen and technology partner Cerence introduced an update to IDA, VW's in-car voice assistant, so it can now use ChatGPT to expand what's possible using voice commands in vehicles. VW said the ChatGPT bot will be available in Europe in current MEB and MQB evo models from VW Group brands that currently use the IDA voice assistant. That includes some members of the ID family - the ID.7, ID.4, ID.5 and ID.3 - as well as the new Tiguan, Passat and Golf models. VW brands Seat, Škoda, Cupra and VW Commercial Vehicles also will get IDA integration. VW hopes to bring IDA to other markets, including North America, but did not make any timing announcements.
Wearing a helmet is a critical safety measure not only for riders but also for passengers. However, people often skip wearing this protective headgear, leading to an increased risk of injury or death in the event of an accident. There is a growing need for innovative methods that automatically monitor and prevent unsafe driving. To address this issue, we have developed a computer vision-based helmet detection system that can detect in real time whether a rider is wearing a helmet. We use state-of-the-art computer vision techniques for helmet detection. This paper covers various aspects of helmet detection, including image pre-processing, feature extraction, and classification. The system is evaluated on performance metrics such as accuracy, precision, and recall. Potential directions for future research to further enhance the system are proposed. The results demonstrate that computer vision-based helmet detection systems hold significant potential to
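The paper's trained model is not reproduced here. Purely as a toy illustration of the three stages the abstract names (image pre-processing, feature extraction, classification), the sketch below matches an intensity histogram of the upper head region against a helmet template; a real system would use a learned detector, and every function, feature, and threshold here is an assumption.

```python
import numpy as np

def preprocess(frame):
    """Pre-processing: grayscale conversion and min-max normalization
    of a head-region crop (toy stand-in for the paper's pipeline)."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame.astype(float)
    span = gray.max() - gray.min()
    return (gray - gray.min()) / (span + 1e-9)

def extract_features(crop, bins=16):
    """Feature extraction: normalized intensity histogram of the upper
    half of the crop, where a helmet shell tends to be uniform."""
    upper = crop[: crop.shape[0] // 2]
    hist, _ = np.histogram(upper, bins=bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-9)

def classify(features, template, threshold=0.5):
    """Classification: cosine similarity against a helmet histogram
    template; a trained classifier would replace this in practice."""
    sim = features @ template / (
        np.linalg.norm(features) * np.linalg.norm(template) + 1e-9)
    return sim > threshold
```

The division into three small functions mirrors how such pipelines are usually evaluated: each stage can be swapped out (e.g., a CNN replacing both feature extraction and classification) while the accuracy/precision/recall harness stays the same.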
In this study, a novel approach to assessing in-vehicle speech intelligibility is presented using psychometric curves. Speech recognition performance scores were modeled at the individual listener level for a set of speech recognition data previously collected under a variety of in-vehicle listening scenarios. The model coupled an objective metric of binaural speech intelligibility (i.e., the acoustic factors) with a psychometric curve indicating the listener’s speech recognition efficiency (i.e., the listener factors). In separate analyses, two objective metrics were used: one designed to capture spatial release from masking and the other designed to capture binaural loudness. The proposed approach contrasts with the traditional approach of relying on the speech recognition threshold (the speech level at 50% recognition performance, averaged across listeners) as the metric for in-vehicle speech intelligibility. Results from the presented analyses suggest the importance of
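The paper's metrics and fitting procedure are its own; as a hedged sketch of the general idea (a logistic psychometric curve linking an objective intelligibility metric to a single listener's recognition probability, fit by a simple grid search), one might write:

```python
import numpy as np

def psychometric(x, midpoint, slope):
    """Logistic psychometric curve: recognition probability as a
    function of an objective metric x (e.g., a binaural SNR in dB).
    midpoint plays the role of the listener's threshold and slope the
    role of the listener's efficiency; this parameterization is an
    illustrative assumption, not the paper's exact model."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

def fit_listener(x, p_obs, midpoints, slopes):
    """Least-squares grid search for one listener's (midpoint, slope)."""
    best, best_err = None, np.inf
    for m in midpoints:
        for s in slopes:
            err = np.sum((psychometric(x, m, s) - p_obs) ** 2)
            if err < best_err:
                best, best_err = (m, s), err
    return best
```

Unlike a group-averaged speech recognition threshold, the fitted midpoint and slope are estimated per listener, which is the contrast the abstract draws.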
Engineers like to know what customers think about a vehicle. Now, drivers of the all-electric Ford F-150 Lightning and Mustang Mach-E can oblige via a new system that channels select customer comments to engineers. F-150 Lightning fullsize pickup truck and Mustang Mach-E SUV owners in the U.S. can pass along opinions via a 45-second voice message after selecting “record feedback” through the settings-general menu on the infotainment touchscreen. “We want to hear the customer's voice. Ford does customer clinics and events, but this is a different way to capture customer feedback,” Donna Dickson, chief engineer of the Ford Mustang Mach-E, said in an interview with SAE Media.
Personal devices feed our sight and hearing virtually unlimited streams of information while leaving our sense of touch mostly … untouched.
Achieving human-level dexterity during manipulation and grasping has been a long-standing goal in robotics. To accomplish this, having a reliable sense of tactile information and force is essential for robots. A recent study, published in IEEE Robotics and Automation Letters, describes the L3 F-TOUCH sensor that enhances the force sensing capabilities of classic tactile sensors. The sensor is lightweight, low-cost, and wireless, making it an affordable option for retrofitting existing robot hands and graspers.
This SAE Recommended Practice defines key terms used in the description and analysis of video-based driver eye glance behavior, as well as guidance for the analysis of such data. The information provided in this practice is intended to provide consistency of terms, definitions, and analysis techniques. This practice is to be used in laboratory, driving simulator, and on-road evaluations of how people drive, with particular emphasis on evaluating Driver Vehicle Interfaces (DVIs; e.g., in-vehicle multimedia systems, controls and displays). In terms of how such data are reduced, this version only concerns manual video-based techniques. However, even in its current form, the practice should be useful for describing the performance of automated sensors (eye trackers) and automated reduction (computer vision).
I know nothing more about artificial intelligence (AI) than what I read and what learned people tell me. I know it's supposed to bring new sophistication to all manner of processes and technologies, including automated driving. So, when a driverless robotaxi operated by GM's Cruise plowed into a road section of freshly poured cement in San Francisco, it raised questions about recently beleaguered Cruise. My mind wandered to AI, which many AV compute “stacks” are touted to leverage in abundance. Driving into wet cement isn't intelligent. Did somebody need to train the vehicle's AV stack specifically to recognize wet cement? If that's how it works, I'd prefer not to bet my life on whether some fairly oddball happenstance (is the term ‘edge case’ not cool anymore?) had been accounted for in that particular version of the AD system's algorithm running that particular day.
Startups are famous for moving quickly. Vinfast may want to slow things down. It was only 2019 when the Vietnamese company built its first cars, rebodied versions of gasoline BMWs that became hits in its home market. Vinfast speedily developed four electric SUVs, including the inaugural VF8 that SAE Media drove in southern California. At the same time, a cargo ship docked near San Francisco, carrying nearly 2,000 VF8s for customers in California and Canada. The next day, Vinfast announced plans to go public via a SPAC merger. And Vinfast recently broke ground on a $4 billion factory in North Carolina, targeting 150,000 units of annual capacity and more than 7,000 jobs.
Extra-Vehicular Activity (EVA) spacesuits are both enabling and limiting. Because pressurization results in stiffening of the pressure garment, an astronaut’s motions and mobility are significantly restricted during EVAs. Dexterity, in particular, is severely reduced. Astronauts are commonly on record identifying spacesuit gloves as a top-priority item in their EVA apparel needing significant improvement. Apollo 17 astronaut-geologist Harrison “Jack” Schmitt has singled out hand fatigue and dexterity as the top two problems to address in EVA spacesuit design for future Moon and Mars exploration. The NASA-STD-3000 standards document indeed states: “Space suit gloves degrade tactile proficiency compared to bare hand operations... Attention should be given to the design of manual interfaces to preclude or minimize hand fatigue or physical discomfort.”
“Holy cats. What happens when this stuff goes wrong?” That's how mechanical engineer and attorney Jennifer Dukarski framed her tech talk about developments in vehicular artificial intelligence (AI) and machine learning at the 2023 SAE WCX conference in Detroit. She linked the discussion to General Motors' March announcement that it was exploring using ChatGPT as the driver interface in vehicles.
Technology capable of replicating the sense of touch, also known as haptic feedback, can greatly enhance human-computer and human-robot interfaces for applications such as medical rehabilitation and virtual reality. A soft artificial skin was developed that provides haptic feedback and, using a self-sensing mechanism, has the potential to instantaneously adapt to a wearer’s movements.