Browse Topic: Mental processes
ABSTRACT Ground vehicle survivability and protection systems and subsystems increasingly employ sensors to augment and enhance overall platform survivability. These systems sense and measure select attributes of the operational environment and pass the measured data to a computational controller, which then produces a survivability or protective system response based on that data. The data collected is usually narrowly defined for that select system’s purpose and is seldom shared with or used by adjacent survivability and protection subsystems. The Army approach toward centralized protection system processing (the MAPS Modular APS Controller) provides promise that sensor data will be shared more judiciously between platform protection subsystems in the future. However, this system, in its current form, falls short of the full protective potential that could be realized from the cumulative sum of sensor data. Platform protection and survivability can be dramatically enhanced if…
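The data-sharing idea argued for here can be pictured as a publish-subscribe bus in which a central controller fans sensor measurements out to every interested protection subsystem. The sketch below is purely illustrative, with hypothetical topic and subsystem names; it is not the MAPS design.

```python
# Illustrative sketch of shared sensor data distribution among protection
# subsystems via a central controller (hypothetical names; not the MAPS design).
from collections import defaultdict
from typing import Callable, Dict, List


class SurvivabilityBus:
    """Central controller: subsystems publish measurements, others subscribe."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, measurement: dict) -> None:
        # Every subsystem interested in this sensor topic sees the same data.
        for handler in self._subscribers[topic]:
            handler(measurement)


bus = SurvivabilityBus()
# Two hypothetical consumers reuse the same radar track measurement.
bus.subscribe("radar.track", lambda m: print("countermeasure cue:", m))
bus.subscribe("radar.track", lambda m: print("situational display:", m))
bus.publish("radar.track", {"bearing_deg": 42.0, "range_m": 350.0})
```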
ABSTRACT Autonomous robots can maneuver into dangerous situations without endangering Soldiers. The Soldier tasked with supervising a route-clearing robot vehicle must be located beyond the physical effects of an exploding IED, but close enough to understand the environment in which the robot is operating. Additionally, mission duration requirements discourage the use of low-level, fatigue-inducing teleoperation. Techniques are needed to reduce the Soldier’s mental stress in this demanding situation, as well as to blend the high-level reasoning of a remote human supervisor with the local autonomous capability of a robot to provide effective, long-term mission performance. GDRS has developed an advanced supervised-autonomy version of its Robotics Kit (GDRK) under the Robotic Mounted Detection System (RMDS) program that provides a cost-effective, high-utility automation solution overcoming the limitations and burden of a purely teleoperated system. GDRK is a modular robotic…
ABSTRACT The complexity of the current and future security environment will impose new and ever-changing challenges on Warfighter capabilities. Given the critical nature of Soldier cognitive performance in meeting these increased demands, systems should be designed to work in ways that are consistent with human cognitive function. Here, we argue that traditional approaches to understanding the human and cognitive dimensions of systems development cannot always provide an adequate understanding of human cognitive performance. We suggest that integrating neuroscience approaches and knowledge provides unique opportunities for understanding human cognitive function. Such an approach has the potential to enable more effective systems design, that is, neuroergonomic design, and we argue that these understandings must be obtained within complex, dynamic environments. Ongoing research efforts utilizing large-scale ride motion simulations that allow researchers to systematically constrain…
ABSTRACT The use and operation of unmanned systems are becoming more commonplace, and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data-driven, and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best-practice human factors engineering, in the form of human-machine interfaces and user-centered design for robotic/unmanned control systems, integrated early in platform concept and design phases can significantly impact platform…
ABSTRACT As the number of robotic systems on the battlefield increases, the number of operators grows with it, leading to a significant cost burden. Autonomous robots are already capable of task execution with limited supervision, and the capabilities of autonomous robots continue to advance rapidly. Because these autonomous systems have the ability to assist and augment human soldiers, commanders need advanced methods for assigning tasks to the systems, monitoring their status, and using them to achieve desirable results. Mission Command for Autonomous Systems (MCAS) aims to enable natural interaction between commanders and their autonomous assets without requiring dedicated operators or significantly increasing the commanders’ cognitive burden. This paper discusses the approach, design, and challenges of MCAS and presents opportunities for future collaboration with industry and academia.
ABSTRACT Imagine Soldiers reacting to an unpredictable, dynamic, stressful situation on the battlefield. How those Soldiers think about the information presented to them by the system or other Soldiers during this situation, and how well they translate that thinking into effective behaviors, is critical to how well they perform. Importantly, those thought processes (i.e., cognition) interact with both external (e.g., the size of the enemy force, weather) and internal (e.g., ability to communicate, personality, fatigue level) factors. The complicated nature of these interactions can have dramatic and unexpected consequences, as seen in analyses of military and industrial disasters, such as the shooting down of Iran Air Flight 655 or the partial core meltdown at Three Mile Island. In both cases, decision makers needed to interact with equipment and personnel in a stressful, dynamic, and uncertain environment. Similarly, the complex and dynamic nature of the contemporary…
Engineers at the University of California San Diego, in collaboration with clinicians, people with mild cognitive impairment (MCI), and their care partners, have developed CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation: a small, tabletop robot designed to help people with MCI learn skills to improve memory, attention, and executive functioning at home.
Using electrical impedance tomography (EIT), researchers have developed a system with a flexible tactile sensor for objective evaluation of fine finger movements. Demonstrating high accuracy in classifying diverse pinching motions, with discrimination rates surpassing 90 percent, this innovation holds potential for cognitive development and automated medical research.
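As a rough illustration of the classification step described, the hypothetical sketch below trains a standard classifier on EIT-derived feature vectors labeled by pinch type; the data shapes, number of classes, and classifier choice are assumptions, not the researchers’ method.

```python
# Hypothetical sketch: classifying pinch types from EIT feature vectors.
# Shapes, labels, and the SVM choice are assumptions, not the study's method.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 300, 64               # e.g., 64 EIT boundary measurements
X = rng.normal(size=(n_trials, n_channels))  # stand-in for real EIT frames
y = rng.integers(0, 4, size=n_trials)        # four hypothetical pinch classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random data
```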
Advances in healthcare and medical treatments have led to longer life expectancies in many parts of the world. As people receive better healthcare and management of other health conditions, they are more likely to reach an age where neurodegenerative diseases become a greater risk. Neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and Huntington's disease (HD), are complex and can affect various aspects of a person's cognitive, motor, and sensory functions.
In this study, a novel assessment approach for in-vehicle speech intelligibility is presented using psychometric curves. Speech recognition performance scores were modeled at an individual listener level for a set of speech recognition data previously collected under a variety of in-vehicle listening scenarios. The model coupled an objective metric of binaural speech intelligibility (i.e., the acoustic factors) with a psychometric curve indicating the listener’s speech recognition efficiency (i.e., the listener factors). In separate analyses, two objective metrics were used: one designed to capture spatial release from masking and the other designed to capture binaural loudness. The proposed approach is in contrast to the traditional approach of relying on the speech recognition threshold, the speech level at 50% recognition performance averaged across listeners, as the metric for in-vehicle speech intelligibility. Results from the presented analyses suggest the importance of…
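As a rough illustration of the modeling approach described, a minimal sketch of fitting a per-listener psychometric curve is shown below; the logistic form, parameter names, and synthetic data are assumptions, not the study’s actual model or data.

```python
# Sketch: fitting a per-listener psychometric curve that maps an objective
# binaural intelligibility metric to recognition probability.
# The logistic form and synthetic data are assumptions, not the study's model.
import numpy as np
from scipy.optimize import curve_fit


def psychometric(metric, threshold, slope):
    """Logistic curve: P(correct) as a function of the objective metric."""
    return 1.0 / (1.0 + np.exp(-slope * (metric - threshold)))


# Synthetic example: objective metric values and one listener's scores.
metric = np.linspace(-10, 10, 21)                      # e.g., dB-like scale
p_true = psychometric(metric, threshold=1.5, slope=0.6)
rng = np.random.default_rng(1)
scores = rng.binomial(20, p_true) / 20                 # 20 trials per point

(th, sl), _ = curve_fit(psychometric, metric, scores, p0=(0.0, 1.0))
print(f"listener threshold={th:.2f}, slope={sl:.2f}")
```

Fitting the curve per listener, rather than averaging a 50% threshold across listeners, preserves the individual efficiency differences the abstract emphasizes.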
Artificial intelligence (AI) has become prevalent in many fields in the modern world, ranging from vacuum cleaners to lawn mowers and commercial automobiles. These capabilities are continuing to evolve and become a part of more products and systems every day, with numerous potential benefits to humans. AI is of particular interest in autonomous vehicles (AVs), where the benefits include reduced cognitive workload, increased efficiency, and improved safety for human operators. Numerous investments from academia and industry have been made recently with the intent of improving the enabling technologies for AVs. Google and Tesla are two of the more well-known examples in industry, with Google developing a self-driving car and Tesla providing its Full Self-Driving (FSD) autopilot system. Ford and BMW are also working on their own AVs.
Prior investigations of swarm robot control focus on optimizing communication and coordination between agents, with at most one human control scheme, or with discrete (rather than continuous) human control schemes. In these studies, the focus tends to be on human-robot interactions, including human-machine gesture interfaces, human-machine interaction during conversation, or evaluation of higher-level mental states such as comfort, happiness, and cognitive load. While there is early work on human control of Unmanned Aerial Vehicles (UAVs) and interface design, there are few systematic studies of how human operators perceive fundamental properties of small swarms of ground-based semi-autonomous robots. Therefore, the goal of this study is to better understand how humans perceive swarms of semi-autonomous agents across a range of conditions.
Modern in-vehicle experiences are brimming with functionalities and convenience driven by automation, digitalization, and electrification. While automotive manufacturers are competing to provide the best systems to their customers, there is no common ground to evaluate these in-vehicle experiences as they become increasingly complex. Existing automotive guidelines do not offer thresholds for cognitive distraction, or, more appropriately, “disengagement.” What can researchers do to change this? Evaluation of the In-vehicle Experience discusses acceptable levels of disengagement by evaluating the driving context and exploring how system reliability can translate to distraction and frustration. It also covers the need to test systems for their complexity and ease of use, and to prevent users from resorting to alternative systems while driving (e.g., smartphones). It highlights the value in naturalistic data generation using vehicles already sold to customers and the issues around…
A team of Cornell University researchers has laid the foundation for developing a new class of untethered soft robots that can achieve more complex motions with less reliance on explicit computation. By taking advantage of viscosity, the very thing that previously stymied the movement of soft robots, the new approach offloads a soft robot’s cognitive capability from the “brain” onto the body, using the robot’s mechanical reflexes and its ability to leverage its environment.
Automated driving is considered a key technology for reducing traffic accidents, improving road utilization, and enhancing transportation economy, and it has thus received extensive attention from academia and industry in recent years. Although recent improvements in artificial intelligence are beginning to be integrated into vehicles, current AD technology is still far from matching or exceeding the level of human driving ability. The key technologies that need to be developed include achieving a deep understanding and cognition of traffic scenarios and highly intelligent decision-making. Automated Vehicles, the Driving Brain, and Artificial Intelligence addresses brain-inspired driving and learning from the human brain's cognitive, thinking, reasoning, and memory abilities. This report presents a few unaddressed issues related to brain-inspired driving, including the cognitive mechanism, architecture implementation, scenario cognition, policy learning, testing, and validation.
Reliably operating electromagnetic (EM) systems including radar, communications, and navigation, while deceiving or disrupting the adversary, is critical to success on the battlefield. As threats evolve, electronic warfare (EW) systems must remain flexible and adaptable, with performance upgrades driven by the constant game of cat and mouse between opposing systems. This drives EW researchers and systems engineers to develop novel techniques and capabilities based on new waveforms and algorithms, multifunction RF systems, and cognitive and adaptive modes of operation.
How do different parts of the brain communicate with each other during learning and memory formation? A new study by researchers at the University of California San Diego takes a first step at answering this fundamental neuroscience question.
Today, as vehicles equipped with autonomous driving functions spread, accidents caused by autonomous vehicles are also increasing, and issues regarding the safety and reliability of autonomous vehicles are emerging. Various studies have been conducted to secure the safety and reliability of autonomous vehicles, and both the application of the International Organization for Standardization (ISO) 26262 standard for safety and reliability improvement and the importance of verifying the safety of autonomous vehicles are growing. Recently, Mobileye proposed Responsibility-Sensitive Safety (RSS), a mathematical model that standardizes the minimum safety guarantees that all autonomous vehicles must meet. In this article, the RSS model that ensures safety and reliability was derived to suit variable-focus cameras that can cover the cognitive regions of radar and lidar with a single camera. It is…
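For context, the published RSS longitudinal rule gives the minimum following distance such that the rear vehicle can always brake in time even if the lead vehicle brakes at its hardest. The sketch below uses illustrative parameter values and does not reproduce the article’s camera-specific derivation.

```python
# Sketch of the published RSS minimum safe longitudinal distance
# (Shalev-Shwartz et al.). Parameter values are illustrative only;
# the article's camera-specific derivation is not reproduced here.
def rss_safe_distance(v_rear: float, v_front: float,
                      rho: float = 0.5,          # response time [s]
                      a_accel_max: float = 3.0,  # rear max acceleration [m/s^2]
                      b_brake_min: float = 4.0,  # rear min braking [m/s^2]
                      b_brake_max: float = 8.0   # front max braking [m/s^2]
                      ) -> float:
    """Minimum following distance so the rear car can always stop in time."""
    v_after_rho = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_rho ** 2 / (2.0 * b_brake_min)
         - v_front ** 2 / (2.0 * b_brake_max))
    return max(0.0, d)


print(f"{rss_safe_distance(v_rear=20.0, v_front=15.0):.1f} m")  # ~54 m
```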
Acoustic range managers need a better system for identifying high-value decision points before conducting test events. When this research was conducted, a qualitative process model representing the acoustic range decision process did not exist.
The performance of persons who watch surveillance videos, either in real time or as recordings, can vary with their level of expertise. It is reasonable to suppose that some of the performance differences might be due, at least in part, to the way experts scan a visual scene versus the way novices might scan the same scene. For example, experts might be more systematic or efficient in the way they scan a scene compared to novices. Even within the same person, video surveillance performance can vary with factors such as fatigue. Again, differences in the way their eyes scan a scene might account for some of the differences. Full Motion Video (FMV) “Eyes-on” intelligence analysts, in particular, actively scan video scenes for items of interest for long periods of time.
As the automobile industry transitions from SAE Level 0-1 low-automation driving, through Level 2-4 human-in-the-loop operation, and ultimately to Level 5 fully autonomous driving, advanced driver monitoring systems are critical for understanding the status, performance, and behavior of drivers in next-generation intelligent vehicles. By issuing necessary warnings or making adjustments, such systems allow driver and vehicle to operate collaboratively to ensure a safe and efficient traffic environment. Driver performance and behavior can be viewed as a reflection of the driver’s cognitive workload, which in turn corresponds to the environment of the driving scenario. In this study, image features extracted from driving scenarios, along with additional environmental features, were utilized to classify driving workload levels for different driving scenario video clips. As a continuing study exploring transfer learning capability, two transfer learning approaches for feature extraction were examined: an image segmentation mask transfer approach and an image-fixation map…
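Although the study’s two transfer approaches are only named above, the general pattern of transfer learning for feature extraction can be sketched as follows; the backbone choice, input shapes, and workload labels here are assumptions, not the paper’s setup.

```python
# Sketch of transfer learning for workload classification: a pretrained CNN
# as a frozen feature extractor over driving-scene frames, followed by a
# small classification head. Backbone, shapes, and the three workload levels
# are assumptions; the paper's mask/fixation transfer methods are not shown.
import torch
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop the ImageNet head, keep features
backbone.eval()

frames = torch.rand(8, 3, 224, 224)    # stand-in batch, assumed preprocessed
with torch.no_grad():
    feats = backbone(frames)           # (8, 512) transferred feature vectors

# A small head maps features to hypothetical workload levels (low/med/high).
head = torch.nn.Linear(512, 3)
logits = head(feats)
print(logits.shape)                    # torch.Size([8, 3])
```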
Andrew Grove (Intel co-founder and former CEO) defines strategic inflection points as what happens to a business when a major event alters its fundamentals. The Covid-19 pandemic is one such historic event that is changing fundamental business assumptions in the oil industry. Companies with a hunter-gatherer mindset will ride this wave with the help of technologies that make their operations lean and efficient. Current developments in AI, specifically around cognitive sciences, are one such area that will empower early adopters to a many-fold improvement in engineering and research productivity. This paper explores how to augment human intelligence with insights from engineering literature, leveraging cognitive AI techniques. The key challenge of acquiring knowledge from engineering literature (patents, books, journals, articles, papers, etc.) is the sheer volume at which it grows annually (hundreds of millions of existing and new papers, growing at 40% year-on-year per IDC). More than 6 million patents are filed every…
This document addresses the operational safety and human factors aspects of unauthorized laser illumination events in navigable airspace. The topics addressed include operational procedures, training, and protocols that flight crew members should follow in the event of a laser exposure. Of particular emphasis, this document outlines coping strategies for use during critical phases of flight. Although lasers are capable of causing retinal damage, most laser cockpit illuminations to date have been relatively low in irradiance, causing primarily startle reactions, visual glare, flashblindness, and afterimages. Permanent eye injuries from unauthorized laser exposures have been extremely rare. This document describes pilot operational procedures in response to the visual disruptions associated with low to moderate laser exposures that pilots are most likely to encounter during flight operations. With education and training, pilots can take actions that safeguard both their vision and the…
In 2017, the US Army announced its modernization priorities as a means of maintaining its military strength. Six specific areas were targeted for focused improvement and development, with the first five being specific technologies or end products. The sixth was “Soldier Lethality”: a soldier’s ability to shoot, move, communicate, protect, and sustain, achieved by improving human performance and decision making. To support this priority area for those making clothing and individual equipment (CIE) acquisition and development decisions, there is a desire for an integrated, holistic, objective tool to measure soldier performance, specifically mobility, lethality, and survivability, incorporating underlying measures of human factors, biomechanics, and cognition.
Can one technical solution help prevent drowsy driving and detect a child left behind? Yes, using a single, maintenance-free, Non-Dispersive Infrared (NDIR) gas sensor integrated in the cabin ventilation system. Carbon dioxide (CO2) is an established proxy for ventilation needs in buildings. Recently, several studies have been published showing that a moderate elevation of the indoor carbon dioxide level affects cognitive performance, including information usage, activity, focus, and crisis response. A study of airplane pilots using 3-hour flight simulation tests showed that pilots made 50% more mistakes when exposed to 2,500 ppm carbon dioxide compared to 700 ppm. This has a direct impact on safety. All living animals and humans exhale carbon dioxide. In our investigations we have found that an unintentionally left-behind child, or pet, can easily be detected in a parked car by analyzing the carbon dioxide trends in the cabin. Even an 8-month-old baby acts as a carbon dioxide source, increasing…
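The cabin trend analysis described can be pictured as a sliding-window slope estimate over CO2 readings. The minimal sketch below uses an illustrative sampling period and rise threshold, not values from the study.

```python
# Sketch: detecting a likely occupant in a parked cabin from the CO2 trend.
# The window length and ppm/min threshold are illustrative assumptions.
import numpy as np


def co2_rising(ppm_readings: list[float], sample_period_s: float = 10.0,
               threshold_ppm_per_min: float = 5.0) -> bool:
    """True if the least-squares CO2 slope exceeds the rise threshold."""
    t = np.arange(len(ppm_readings)) * sample_period_s / 60.0  # minutes
    slope, _ = np.polyfit(t, ppm_readings, deg=1)              # ppm per minute
    return slope > threshold_ppm_per_min


# Simulated readings from a parked car with a small CO2 source inside.
readings = [420 + 8 * i / 6 for i in range(30)]  # ~8 ppm/min rise
print(co2_rising(readings))  # True -> alert: possible occupant left behind
```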
Today, vehicles are mostly owned by individuals, who configure personal settings and accessories according to their preferences. In shared mobility, features and controls are not personalized for each person who shares the vehicle, which hinders the usage of shared vehicles. For shared mobility and autonomous vehicles to be successful, they must play a significant role in customer engagement. To enhance customer engagement, we need to satisfy individual customers by customizing the vehicle for their needs, giving a cognitive feel of a personal vehicle in a shared environment. We need technologies and designs that improve vehicle interior and exterior systems to address personalization. We will employ a Design Thinking approach, using customer interactions in each zone of the vehicle, both interior and exterior, to identify personalization needs. The zones of study include the frunk and trunk compartment zone, interaction zones, and interior and exterior zones. We will rank the…