Browse Topic: Psychiatry and psychology
ABSTRACT: Ground vehicle survivability and protection systems and subsystems are increasingly employing sensors to augment and enhance overall platform survivability. These systems sense and measure select attributes of the operational environment and pass this measured “data” to a computational controller, which then produces a survivability or protective system response based on that data. The data collected is usually narrowly defined for that select system’s purpose and is seldom shared with or used by adjacent survivability and protection subsystems. The Army approach toward centralized protection system processing (the MAPS Modular APS Controller) offers promise that sensor data will be more judiciously shared between platform protection subsystems in the future. However, this system, in its current form, falls short of the full protective potential that could be realized from the cumulative sum of sensor data. Platform protection and survivability can be dramatically enhanced if
ABSTRACT Autonomous robots can maneuver into dangerous situations without endangering Soldiers. The Soldier tasked with supervising a route-clearing robot vehicle must be located beyond the physical effects of an exploding IED but close enough to understand the environment in which the robot is operating. Additionally, mission duration requirements discourage the use of low-level, fatigue-inducing teleoperation. Techniques are needed to reduce the Soldier’s mental stress in this demanding situation, as well as to blend the high-level reasoning of a remote human supervisor with the local autonomous capability of a robot to provide effective, long-term mission performance. GDRS has developed an advanced supervised-autonomy version of its Robotics Kit (GDRK) under the Robotic Mounted Detection System (RMDS) program that provides a cost-effective, high-utility automation solution that overcomes the limitations and burden of a purely teleoperated system. GDRK is a modular robotic
ABSTRACT The complexity of the current and future security environment will impose new and ever-changing challenges on Warfighter capabilities. Given the critical nature of Soldier cognitive performance in meeting these increased demands, systems should be designed to work in ways that are consistent with human cognitive function. Here, we argue that traditional approaches to understanding the human and cognitive dimensions of systems development cannot always provide an adequate understanding of human cognitive performance. We suggest that integrating neuroscience approaches and knowledge provides unique opportunities for understanding human cognitive function. Such an approach has the potential to enable more effective systems design – that is, neuroergonomic design – and we argue that these understandings must be obtained within complex, dynamic environments. Ongoing research efforts utilizing large-scale ride motion simulations that allow researchers to systematically constrain
ABSTRACT The use and operation of unmanned systems are becoming more commonplace, and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data driven, and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best-practice human factors engineering, in the form of human-machine interfaces and user-centered design for robotic/unmanned control systems integrated early in platform concept and design phases, can significantly impact platform
ABSTRACT As the number of robotic systems on the battlefield increases, the number of operators grows with it, leading to a significant cost burden. Autonomous robots are already capable of task execution with limited supervision, and the capabilities of autonomous robots continue to advance rapidly. Because these autonomous systems have the ability to assist and augment human soldiers, commanders need advanced methods for assigning tasks to the systems, monitoring their status, and using them to achieve desirable results. Mission Command for Autonomous Systems (MCAS) aims to enable natural interaction between commanders and their autonomous assets without requiring dedicated operators or significantly increasing the commanders’ cognitive burden. This paper discusses the approach, design, and challenges of MCAS and presents opportunities for future collaboration with industry and academia.
ABSTRACT Imagine Soldiers reacting to an unpredictable, dynamic, stressful situation on the battlefield. How those Soldiers think about the information presented to them by the system or other Soldiers during this situation – and how well they translate that thinking into effective behaviors – is critical to how well they perform. Importantly, those thought processes (i.e., cognition) interact with both external (e.g., the size of the enemy force, weather) and internal (e.g., ability to communicate, personality, fatigue level) factors. The complicated nature of these interactions can have dramatic and unexpected consequences, as is seen in the analysis of military and industrial disasters, such as the shooting down of Iran Air flight 655 or the partial core meltdown at Three Mile Island. In both cases, decision makers needed to interact with equipment and personnel in a stressful, dynamic, and uncertain environment. Similarly, the complex and dynamic nature of the contemporary
This study aims to explore the multifaceted factors influencing market acceptance and consumer behavior for low-altitude flight services through online surveys and advanced neuroscientific methods (such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS)) combined with artificial intelligence and quantitative analysis of video advertisements. We conducted an in-depth study of current trends in low-altitude flight vehicle development and customer acceptance of low-altitude services, focusing particularly on the survey methods used for market acceptance. To overcome the influence of strong opinion leaders in volunteer group experiments, we designed specialized surveys targeting broader online and social media groups. Utilizing specialized knowledge in aviation psychology, we designed a distinctive questionnaire and, within just 7 days of its launch, gathered a significant number of valid responses. The data was then
Neurostimulators, also known as brain pacemakers, send electrical impulses to specific areas of the brain via special electrodes. It is estimated that some 200,000 people worldwide are now benefiting from this technology, including those who suffer from Parkinson’s disease or from pathological muscle spasms. According to Mehmet Fatih Yanik, professor of neurotechnology at ETH Zurich, further research will greatly expand the potential applications: instead of using them exclusively to stimulate the brain, the electrodes can also be used to precisely record brain activity and analyze it for anomalies associated with neurological or psychiatric disorders. In a second step, it would be conceivable in the future to treat these anomalies and disorders using electrical impulses.
Engineers at the University of California San Diego in collaboration with clinicians, people with MCI, and their care partners have developed CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation — a small, tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home
Using electrical impedance tomography (EIT), researchers have developed a system using a flexible tactile sensor for objective evaluation of fine finger movements. Demonstrating high accuracy in classifying diverse pinching motions, with discrimination rates surpassing 90 percent, this innovation holds potential in cognitive development and automated medical research
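As a rough illustration of the kind of classification step such a sensor system might rely on (the data shapes, pinch labels, and SVM pipeline below are assumptions for demonstration, not the researchers' actual method), a minimal scikit-learn sketch:

```python
# Hypothetical sketch: classifying pinch types from EIT tactile-sensor frames.
# Data shapes, labels, and the SVM pipeline are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 300 trials, each a flattened 16x16 EIT conductivity image,
# labelled with one of three assumed pinch types (e.g., tip, pad, lateral).
X = rng.normal(size=(300, 256))
y = rng.integers(0, 3, size=300)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```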
Researchers have invented sensor-based noninvasive medical devices to make the monitoring and treatment of certain physiological and psychological conditions timelier and more precise
Advances in healthcare and medical treatments have led to longer life expectancies in many parts of the world. As people receive better healthcare and management of other health conditions, they are more likely to reach an age where neurodegenerative diseases become a greater risk. Neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and Huntington's disease (HD), are complex and can affect various aspects of a person's cognitive, motor, and sensory functions
In this study, a novel assessment approach of in-vehicle speech intelligibility is presented using psychometric curves. Speech recognition performance scores were modeled at an individual listener level for a set of speech recognition data previously collected under a variety of in-vehicle listening scenarios. The model coupled an objective metric of binaural speech intelligibility (i.e., the acoustic factors) with a psychometric curve indicating the listener’s speech recognition efficiency (i.e., the listener factors). In separate analyses, two objective metrics were used with one designed to capture spatial release from masking and the other designed to capture binaural loudness. The proposed approach is in contrast to the traditional approach of relying on the speech recognition threshold, the speech level at 50% recognition performance averaged across listeners, as the metric for in-vehicle speech intelligibility. Results from the presented analyses suggest the importance of
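A minimal sketch of the modeling idea, assuming a simple logistic psychometric function and stand-in data (the metric values, functional form, and parameter names below are illustrative, not the study's actual model or measurements):

```python
# Illustrative sketch (not the authors' code): fitting a per-listener
# psychometric curve that maps an objective binaural intelligibility metric
# to speech recognition probability. The logistic form and the
# threshold/slope parameterization are assumptions for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(metric, threshold, slope):
    """Probability correct as a logistic function of the objective metric."""
    return 1.0 / (1.0 + np.exp(-slope * (metric - threshold)))

# Stand-in data for one listener: objective metric values and the proportion
# of speech recognized in each in-vehicle listening condition.
metric = np.array([-6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
p_correct = np.array([0.08, 0.22, 0.55, 0.80, 0.93, 0.98])

params, _ = curve_fit(psychometric, metric, p_correct, p0=[0.0, 0.5])
threshold, slope = params
print(f"listener threshold = {threshold:.2f}, slope = {slope:.2f}")
```

The slope parameter here stands in for the listener-specific recognition efficiency described in the abstract, while the objective metric carries the acoustic factors.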
Artificial intelligence (AI) has become prevalent in many fields in the modern world, ranging from vacuum cleaners to lawn mowers and commercial automobiles. These capabilities are continuing to evolve and become a part of more products and systems every day, with numerous potential benefits to humans. AI is of particular interest in autonomous vehicles (AVs), where the benefits include reduced cognitive workload, increased efficiency, and improved safety for human operators. Numerous investments from academia and industry have been made recently with the intent of improving the enabling technologies for AVs. Google and Tesla are two of the more well-known examples in industry, with Google developing a self-driving car and Tesla providing its Full Self-Driving (FSD) autopilot system. Ford and BMW are also working on their own AVs
Prior investigations of swarm robot control focus on optimizing communication and coordination between agents, with at most one human control scheme, or with discrete (rather than continuous) human control schemes. In these studies, focus tends to be on human-robot interactions, including human-machine gesture interfaces, human-machine interaction during conversation, or evaluation of higher-level mental states like comfort, happiness, and cognitive load. While there is early work in human control of Unmanned Aerial Vehicles (UAVs) and interface design, there are few systematic studies of how human operators perceive fundamental properties of small swarms of ground-based semi-autonomous robots. Therefore, the goal of this study is to better understand how humans perceive swarms of semi-autonomous agents across a range of conditions
Modern in-vehicle experiences are brimming with functionalities and convenience driven by automation, digitalization, and electrification. While automotive manufacturers are competing to provide the best systems to their customers, there is no common ground to evaluate these in-vehicle experiences as they become increasingly complex. Existing automotive guidelines do not offer thresholds for cognitive distraction, or—more appropriately—“disengagement.” What can researchers do to change this? Evaluation of the In-vehicle Experience discusses acceptable levels of disengagement by evaluating the driving context and exploring how system reliability can translate to distraction and frustration. It also covers the need to test systems for their complexity and ease of use, and to prevent users from resorting to alternative systems while driving (e.g., smartphones). It highlights the value in naturalistic data generation using vehicles already sold to customers and the issues around
A team of Cornell University researchers has laid the foundation for developing a new class of untethered soft robots that can achieve more complex motions with less reliance on explicit computation. By taking advantage of viscosity — the very thing that previously stymied the movement of soft robots — the new approach offloads control of a soft robot’s cognitive capability from the “brain” onto the body using the robot’s mechanical reflexes and ability to leverage its environment
Automated driving is considered a key technology for reducing traffic accidents, improving road utilization, and enhancing transportation economy, and thus has received extensive attention from academia and industry in recent years. Although recent improvements in artificial intelligence are beginning to be integrated into vehicles, current AD technology is still far from matching or exceeding the level of human driving ability. The key technologies that need to be developed include achieving a deep understanding and cognition of traffic scenarios and highly intelligent decision-making. Automated Vehicles, the Driving Brain, and Artificial Intelligence addresses brain-inspired driving and learning from the human brain's cognitive, thinking, reasoning, and memory abilities. This report presents a few unaddressed issues related to brain-inspired driving, including the cognitive mechanism, architecture implementation, scenario cognition, policy learning, testing, and validation.
In electric vehicles (EVs), the ear-piercing acoustic noise contributed by the electric drive systems (EDSs) has become a critical issue in sensitive situations. This paper provides a comprehensive sound quality evaluation associated with objective psychological parameters in EDSs with multi-power levels and full-operational conditions. The experimental test sets, prototype categories and acoustic samples are firstly proposed to reveal the sound pressure level distributions. Then, the objective psychological parameters are introduced and divided into six dimensions. The principal component analysis (PCA) method has been employed to achieve dimensionality reduction, in which the original six dimensions can be reduced to two dimensions. The calculated and evaluated results show that the loudness and sharpness are the main contributing components with a cumulative contribution of 99.93%. All results are sensitive to the operational conditions. The proposed work and the related results can
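To make the dimensionality-reduction step concrete, a minimal PCA sketch with stand-in data follows (the six psychoacoustic dimension names and synthetic values are assumptions; only the generic PCA workflow is shown, and the 99.93% figure comes from the paper's own results, not this sketch):

```python
# Minimal PCA sketch, assuming six psychoacoustic metrics per operating point
# (e.g., loudness, sharpness, roughness, fluctuation strength, tonality,
# articulation index -- assumed names, not the paper's data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Stand-in data: 120 operating conditions x 6 psychoacoustic dimensions.
X = rng.normal(size=(120, 6))

X_std = StandardScaler().fit_transform(X)      # standardize each dimension
pca = PCA(n_components=2).fit(X_std)           # reduce 6 dimensions to 2

print("explained variance ratio:", pca.explained_variance_ratio_)
print("cumulative contribution:", pca.explained_variance_ratio_.sum())
```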
The continuous encouragement of lightweight design in modern vehicles demands a reliable and efficient method to predict and ameliorate the interior acoustic comfort for passengers. Due to considerable psychological effects on stress and concentration, the low frequency contribution plays a vital role in interior noise perception. Apart from other contributors, low frequency noise can be induced by transient aerodynamic excitation and the related structural vibrations. Assessing this disturbance requires the reliable simulation of the complex multi-physical mechanisms involved, such as transient aerodynamics, structural dynamics and acoustics. The domain of structural dynamics is particularly sensitive regarding the modelling of attachments restraining the vibrational behaviour of incorporated membrane-like structures. In a later development stage, when prototypes are available, it is therefore desirable to replace or update purely numerical models with experimental data. To this end
Reliably operating electromagnetic (EM) systems including radar, communications, and navigation, while deceiving or disrupting the adversary, is critical to success on the battlefield. As threats evolve, electronic warfare (EW) systems must remain flexible and adaptable, with performance upgrades driven by the constant game of cat and mouse between opposing systems. This drives EW researchers and systems engineers to develop novel techniques and capabilities, based on new waveforms and algorithms, multifunction RF systems, and cognitive and adaptive modes of operation
How do different parts of the brain communicate with each other during learning and memory formation? A new study by researchers at the University of California San Diego takes a first step at answering this fundamental neuroscience question
Today, as vehicles equipped with autonomous driving functions become more widespread, accidents caused by autonomous vehicles are also increasing. Therefore, issues regarding the safety and reliability of autonomous vehicles are emerging. Various studies have been conducted to secure the safety and reliability of autonomous vehicles, and the application of the International Organization for Standardization (ISO) 26262 standard for safety and reliability improvement and the importance of verifying the safety of autonomous vehicles are increasing. Recently, Mobileye proposed Responsibility-Sensitive Safety (RSS), a mathematical model that standardizes the minimum safety guarantees that all autonomous vehicles must meet. In this article, the RSS model that ensures safety and reliability was derived to be suitable for variable focus function cameras that can cover the cognitive regions of radar and lidar with a single camera. It is
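For context, the published RSS model expresses a minimum safe longitudinal following distance in closed form; a small sketch of that formula follows (parameter values are illustrative and are not taken from this article's camera-based adaptation):

```python
# Sketch of the RSS minimum safe longitudinal gap (after Shalev-Shwartz et al.).
# All parameter values below are illustrative assumptions.
def rss_min_longitudinal_gap(v_rear, v_front, rho=0.5,
                             a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum gap (m) so the following vehicle can always stop safely.

    v_rear, v_front : speeds (m/s) of the following and lead vehicles
    rho             : response time of the following vehicle (s)
    a_max_accel     : max acceleration of the follower during rho (m/s^2)
    b_min_brake     : minimum braking the follower is guaranteed to apply (m/s^2)
    b_max_brake     : maximum braking the lead vehicle might apply (m/s^2)
    """
    v_after_rho = v_rear + rho * a_max_accel
    gap = (v_rear * rho
           + 0.5 * a_max_accel * rho ** 2
           + v_after_rho ** 2 / (2.0 * b_min_brake)
           - v_front ** 2 / (2.0 * b_max_brake))
    return max(0.0, gap)

# Example: both vehicles traveling at 20 m/s (72 km/h).
print(f"{rss_min_longitudinal_gap(20.0, 20.0):.1f} m")
```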
Acoustic range managers need a better system for identifying high-value decision points before conducting test events. When this research was conducted, a qualitative process model that represents the acoustic range decision process did not exist
The performance of persons who watch surveillance videos, either in real-time or recordings, can vary with their level of expertise. It is reasonable to suppose that some of the performance differences might be due, at least in part, to the way experts scan a visual scene versus the way novices might scan the same scene. For example, experts might be more systematic or efficient in the way they scan a scene compared to novices. Even within the same person, video surveillance performance can vary with factors such as fatigue. Again, differences in the way their eyes scan a scene might account for some of the differences. Full Motion Video (FMV) “Eyes-on” intelligence analysts, in particular, actively scan video scenes for items of interest for long periods of time
As the automobile industry transitions from SAE Levels 0 and 1 (low autonomy), through Levels 2, 3, and 4 (human-in-the-loop), and ultimately to Level 5 (fully autonomous driving), an advanced driver monitoring system is critical for understanding the status, performance, and behavior of drivers in next-generation intelligent vehicles. By making necessary warnings or adjustments, such systems could operate collaboratively with the driver to ensure a safe and efficient traffic environment. Performance and behavior can be viewed as a reflection of the driver’s cognitive workload, which in turn corresponds to the environment of the driving scenario. In this study, image features extracted from driving scenarios, as well as additional environmental features, were utilized to classify driving workload levels for different driving scenario video clips. As a continuing study exploring transfer learning capability, two transfer learning approaches for feature extraction, an image segmentation mask transfer approach and an image-fixation map
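As an illustration of the general transfer-learning setup described here, a short sketch using a pretrained CNN as a frozen feature extractor for driving-scene frames (the ResNet-18 backbone, preprocessing, and three workload classes are assumptions for demonstration, not the study's actual pipeline):

```python
# Illustrative transfer-learning sketch (not the paper's method): reuse a
# pretrained CNN as a frozen feature extractor, then train a small head to
# predict workload level from the extracted features.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained backbone with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # output: 512-d feature vector per frame
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(frames):
    """frames: list of PIL images taken from a driving-scenario clip."""
    batch = torch.stack([preprocess(f) for f in frames])
    return backbone(batch)           # shape: (n_frames, 512)

# Lightweight head mapping frame features to assumed workload classes
# (e.g., low / medium / high); trained on the frozen features.
workload_head = nn.Linear(512, 3)
```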
Andrew Grove (founder and CEO of Intel) defines strategic inflection points as what happens to a business when a major event alters its fundamentals. The Covid-19 pandemic is one such historic event that is changing fundamental business assumptions in the Oil industry. Companies with a hunter-gatherer mindset will ride this wave with the help of technologies that make their operations lean and efficient. Current developments in AI, specifically around Cognitive Sciences, are one such area that will empower early adopters to a many-fold improvement in engineering and research productivity. This paper explores how to augment human intelligence with insights from engineering literature, leveraging Cognitive AI techniques. The key challenge of acquiring knowledge from engineering literature (patents, books, journals, articles, papers, etc.) is the sheer volume at which it grows annually (100s of millions of existing papers, with new papers growing at 40% year-on-year as per IDC). 6 million+ patents are filed every