Browse Topic: Mental processes

Items (268)
This paper presents a comprehensive implementation of Conduit frameworks designed to manage the hygiene of Simulink models in control systems and enhance them to meet industry standards such as MAB, MISRA, Polyspace, and CERT. The core challenge addressed is minimizing repetitive work and eliminating cognitive workload. Beginners often struggle to create Simulink models that adhere to industry standards, and keeping track of all the standards can be challenging. Given the complexity and size of these models, manual processing is time-consuming. The Conduit frameworks help users enhance their models for adherence to those standards, improving efficiency by up to 95% and using machine intelligence to process large amounts of code efficiently. The Conduit frameworks also automate non-value-added (NVA) activities, including updating variable properties, checking for unwanted data types that arise during internal calculations of Simulink blocks, and variable
Agrawal, Vipul; TE, Harikrishna; N, Prajitha; Kumar, Kosalaraman; Venkat, Harish; Shaji, Anish
ABSTRACT: Ground vehicle survivability and protection systems and subsystems are increasingly employing sensors to augment and enhance overall platform survivability. These systems sense and measure select attributes of the operational environment and pass this measured “data” to a computational controller which then produces a survivability or protective system response based on that computed data. The data collected is usually narrowly defined for that select system’s purpose and is seldom shared or used by adjacent survivability and protection subsystems. The Army approach toward centralized protection system processing (MAPS Modular APS Controller) provides promise that sensor data will be more judiciously shared between platform protection subsystems in the future. However, this system, in its current form, falls short of the full protective potential that could be realized from the cumulative sum of sensor data. Platform protection and survivability can be dramatically enhanced if
ABSTRACT Tradespace exploration (TSE) is a key component of conceptual design or materiel solution phases that revolves around multi-stakeholder decision making. The TSE process as presented in literature is discussed, including the various stages, tools, and decision making approaches. The decision-making process, summarized herein, can be aided in various ways; one key intervention is the use of visualizations. Characteristics of good visualizations are presented before discussion of a promising avenue for visualization: immersive reality. Immersive reality includes virtual reality representations as well as tactile feedback; however, there are aspects of immersive reality that must be considered as well, such as cognitive loads and accessibility. From the literature, major trends were identified, including that TSE focuses on value but can suffer when not framed as a group decision, the need for testing of proposed TSE support systems, and the need to consider user populations and
Sutton, Meredith; Turner, Cameron; Wagner, John; Gorsich, David; Rizzo, Denise; Hartman, Greg; Agusti, Rachel; Skowronska, Annette; Castanier, Matthew
ABSTRACT Autonomous robots can maneuver into dangerous situations without endangering Soldiers. The Soldier tasked with the supervision of a route clearing robot vehicle must be located beyond the physical effect of an exploding IED but close enough to understand the environment in which the robot is operating. Additionally, mission duration requirements discourage the use of low-level, fatigue-inducing teleoperation. Techniques are needed to reduce the Soldier’s mental stress in this demanding situation, as well as to blend the high-level reasoning of a remote human supervisor with the local autonomous capability of a robot to provide effective, long-term mission performance. GDRS has developed an advanced supervised autonomy version of its Robotics Kit (GDRK) under the Robotic Mounted Detection System (RMDS) program that provides a cost-effective, high-utility automation solution that overcomes the limitations and burden of a purely teleoperated system. GDRK is a modular robotic
Frederick, Brian; Rodgers, Daniel; Martin, John; Hutchison, John
ABSTRACT The complexity of the current and future security environment will impose new and ever-changing challenges to Warfighter capabilities. Given the critical nature of Soldier cognitive performance in meeting these increased demands, systems should be designed to work in ways that are consistent with human cognitive function. Here, we argue that traditional approaches to understanding the human and cognitive dimensions of systems development cannot always provide an adequate understanding of human cognitive performance. We suggest that integrating neuroscience approaches and knowledge provides unique opportunities for understanding human cognitive function. Such an approach has the potential to enable more effective systems design – that is, neuroergonomic design – and we argue that it is necessary to obtain these understandings within complex, dynamic environments. Ongoing research efforts utilizing large-scale ride motion simulations that allow researchers to systematically constrain
Oie, Kelvin S.; Paul, Victor
ABSTRACT To optimize the use of partially autonomous vehicles, it is necessary to develop an understanding of the interactions between these vehicles and their operators. This research investigates the relationship between level of partial autonomy and operator abilities using a web-based virtual reality study. In this study participants took part in a virtual drive where they were required to perform all or part of the driving task in one of five possible autonomy conditions while responding to sudden emergency road events. Participants also took part in a simultaneous communications console task to include an element of multitasking. Situation awareness was measured using real-time probes based on the Situation Awareness Global Assessment Technique (SAGAT) as well as the Situation Awareness Rating Technique (SART). Cognitive Load was measured using the NASA Task Load Index (NASA-TLX) and an adapted version of the SOS Scale. Other measured factors included multiple indicators of
Cossitt, Jessie E.; Patel, Viraj R.; Carruth, Daniel W.; Paul, Victor J.; Bethel, Cindy L.
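As context for the workload measure named in the item above, the weighted NASA-TLX score is conventionally a weighted average of six subscale ratings. The sketch below illustrates that standard calculation in Python; the ratings and pairwise-comparison weights are hypothetical examples, not data from this study.

```python
# Minimal sketch of the standard weighted NASA-TLX calculation
# (illustrative values only; not data from the study above).

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings: dict, weights: dict) -> float:
    """Weighted NASA-TLX score.

    ratings: 0-100 rating per subscale.
    weights: tally (0-5) of times each subscale was chosen in the
             15 pairwise comparisons; the tallies sum to 15.
    """
    assert sum(weights.values()) == 15, "pairwise-comparison tallies must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Example (hypothetical participant)
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(f"Weighted TLX: {nasa_tlx(ratings, weights):.1f}")
```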
ABSTRACT The use and operation of unmanned systems are becoming more commonplace and as missions gain complexity, our warfighters are demanding increasing levels of system functionality. At the same time, decision making is becoming increasingly data driven and operators must process large amounts of data while also controlling unmanned assets. Factors impacting robotic/unmanned asset control include mission task complexity, line-of-sight/non-line-of-sight operations, simultaneous UxV control, and communication bandwidth availability. It is critical that any unmanned system requiring human interaction is designed as a “human-in-the-loop” system from the beginning to ensure that operator cognitive load is minimized and operator effectiveness is optimized. Best practice human factors engineering in the form of human machine interfaces and user-centered design for robotic/unmanned control systems integrated early in platform concept and design phases can significantly impact platform
MacDonald, Brian
ABSTRACT As the number of robotic systems on the battlefield increases, the number of operators grows with it, leading to significant cost burden. Autonomous robots are already capable of task execution with limited supervision, and the capabilities of autonomous robots continue to advance rapidly. Because these autonomous systems have the ability to assist and augment human soldiers, commanders need advanced methods for assigning tasks to the systems, monitoring their status and using them to achieve desirable results. Mission Command for Autonomous Systems (MCAS) aims to enable natural interaction between commanders and their autonomous assets without requiring dedicated operators or significantly increasing the commanders’ cognitive burden. This paper discusses the approach, design and challenges of MCAS and presents opportunities for future collaboration with industry and academia.
Martin, Jeremy; Korfiatis, Peter; Silva, Udam
ABSTRACT Imagine Soldiers reacting to an unpredictable, dynamic, stressful situation on the battlefield. How those Soldiers think about the information presented to them by the system or other Soldiers during this situation – and how well they translate that thinking into effective behaviors – is critical to how well they perform. Importantly, those thought processes (i.e., cognition) interact with both external (e.g., the size of the enemy force, weather) and internal (e.g., ability to communicate, personality, fatigue level) factors. The complicated nature of these interactions can have dramatic and unexpected consequences, as is seen in the analysis of military and industrial disasters, such as the shooting down of Iran Air flight 655, or the partial core meltdown on Three Mile Island. In both cases, decision makers needed to interact with equipment and personnel in a stressful, dynamic, and uncertain environment. Similarly, the complex and dynamic nature of the contemporary
McDowell, Kaleb; Zywiol, Harry J.
ABSTRACT There is a need to better understand how operators and autonomous vehicle control systems can work together in order to provide the best-case scenario for utilization of autonomous capabilities in military missions to reduce crew sizes and thus reduce labor costs. The goal of this research is to determine how different levels of autonomous capabilities in vehicles affect the operator’s situational awareness, cognitive load, and ability to respond to road events while also responding to other auditory and visual tasks. Understanding these interactions is a crucial step to eventually determining the best way to allocate tasks to crew members in missions where crew size has been reduced due to the utilization of autonomous vehicles. Citation: J. E. Cossitt, C. R. Hudson, D. W. Carruth, C. L. Bethel, “Dynamic Task Allocation and Understanding of Situation Awareness Under Different Levels of Autonomy in Closed-Hatch Military Vehicles”, In Proceedings of the Ground Vehicle Systems
Cossitt, Jessie E.; Hudson, Christopher R.; Carruth, Daniel W.; Bethel, Cindy L.
Advancements toward autonomous driving have propelled the need for reference/ground-truth data for the development and validation of various functionalities. Traditional data labelling methods are time-consuming, skill-intensive, and have many drawbacks. These challenges are addressed through ALiVA (automatic lidar, image & video annotator), a semi-automated framework that assists with event detection and the generation of reference data through annotation/labelling of video and point-cloud data. ALiVA is capable of processing large volumes of camera and lidar sensor data. The main pillars of the framework are object detection-classification models, object tracking algorithms, cognitive algorithms, and annotation results review functionality. The automatic object detection functionality creates a precise bounding box around the area of interest and assigns class labels to annotated objects. The object tracking algorithm tracks detected objects across video frames, provides a unique object ID for each object, and
Mardhekar, Amogh; Pawar, Rushikesh; Mohod, Rucha; Shirudkar, Rohit; Hivarkar, Umesh N.
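The abstract above does not describe ALiVA's tracking internals; purely as an illustration of carrying a unique object ID across video frames, the following sketch implements a simple greedy IoU-based tracker. The class name, threshold, and example boxes are assumptions, not part of the ALiVA framework.

```python
# Simple greedy IoU tracker: assigns a persistent ID to each detected box
# across frames. Illustrative only; not the ALiVA implementation.
from itertools import count

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class GreedyIoUTracker:
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}              # id -> last known box
        self._next_id = count(1)

    def update(self, detections):
        """detections: list of boxes for one frame; returns list of (id, box)."""
        assigned, results = set(), []
        for box in detections:
            # Match to the unassigned existing track with the highest IoU.
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in self.tracks.items():
                if tid in assigned:
                    continue
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:          # unmatched detection -> start a new track
                best_id = next(self._next_id)
            self.tracks[best_id] = box
            assigned.add(best_id)
            results.append((best_id, box))
        return results

tracker = GreedyIoUTracker()
print(tracker.update([(10, 10, 50, 50)]))      # frame 1: new ID 1
print(tracker.update([(12, 11, 52, 51)]))      # frame 2: same object keeps ID 1
```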
Engineers at the University of California San Diego in collaboration with clinicians, people with MCI, and their care partners have developed CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation — a small, tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home
Using electrical impedance tomography (EIT), researchers have developed a system using a flexible tactile sensor for objective evaluation of fine finger movements. Demonstrating high accuracy in classifying diverse pinching motions, with discrimination rates surpassing 90 percent, this innovation holds potential in cognitive development and automated medical research
Temporal light modulation (TLM), colloquially known as “flicker,” is an issue in almost all lighting applications, due to widespread adoption of LED and OLED sources and their driving electronics. A subset of LED/OLED lighting systems delivers problematic TLM, often in specific types of residential, commercial, outdoor, and vehicular lighting. Dashboard displays, touchscreens, marker lights, taillights, daytime running lights (DRL), interior lighting, etc. frequently use pulse width modulation (PWM) circuits to achieve different luminances for different times of day and users’ visual adaptation levels. The resulting TLM waveforms and viewing conditions can result in distraction and disorientation, nausea, cognitive effects, and serious health consequences in some populations, occurring with or without the driver, passenger, or pedestrian consciously “seeing” the flicker. There are three visual responses to TLM: direct flicker, the stroboscopic effect, and phantom array effect (also
Miller, Naomi; Irvin, Lia
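Temporal light modulation of the kind discussed above is often quantified with simple waveform metrics such as percent flicker and flicker index. The sketch below computes both for a synthetic PWM waveform; the frequency and duty cycle are arbitrary examples, not values from the paper.

```python
# Percent flicker and flicker index for a synthetic PWM light waveform
# (uniform sampling assumed). Parameters are arbitrary examples.
import numpy as np

def pwm_waveform(freq_hz=200.0, duty=0.3, fs=100_000, periods=10):
    t = np.arange(0, periods / freq_hz, 1.0 / fs)
    return ((t * freq_hz) % 1.0 < duty).astype(float)   # 0/1 light output

def percent_flicker(y):
    return 100.0 * (y.max() - y.min()) / (y.max() + y.min() + 1e-12)

def flicker_index(y):
    avg = y.mean()
    area_above_mean = np.clip(y - avg, 0.0, None).sum()
    total_area = y.sum() + 1e-12
    return area_above_mean / total_area

y = pwm_waveform()
print(f"Percent flicker: {percent_flicker(y):.0f}%")   # 100% for a 0/1 PWM wave
print(f"Flicker index:   {flicker_index(y):.2f}")      # ~0.70 at 30% duty cycle
```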
Advances in healthcare and medical treatments have led to longer life expectancies in many parts of the world. As people receive better healthcare and management of other health conditions, they are more likely to reach an age where neurodegenerative diseases become a greater risk. Neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and Huntington's disease (HD), are complex and can affect various aspects of a person's cognitive, motor, and sensory functions
Effective smart cockpit interaction design can address the specific needs of children, offering ample entertainment and educational resources to enhance their on-board experience. Currently, substantial attention is focused on smart cockpit design to enrich the overall travel engagement for children. Recognizing the contrasts between children and adults in areas such as physical health, cognitive development, and emotional psychology, it becomes imperative to meticulously customize the design and optimization processes to cater explicitly to their individual requirements. However, a noticeable gap persists in both research methodologies and product offerings within this domain. This study employs a user survey to delve into children’s on-board experiences and their use of current child-centric in-cockpit interaction solutions (C-SI Solutions), finding that over 50% of the interviewed children travel on board at least several times per week and over half of the parents would pay for C-SI
Xu, Jinghan; Hui, Xinru; Wang, Yixiang; Jia, Qing
In this study, a novel assessment approach of in-vehicle speech intelligibility is presented using psychometric curves. Speech recognition performance scores were modeled at an individual listener level for a set of speech recognition data previously collected under a variety of in-vehicle listening scenarios. The model coupled an objective metric of binaural speech intelligibility (i.e., the acoustic factors) with a psychometric curve indicating the listener’s speech recognition efficiency (i.e., the listener factors). In separate analyses, two objective metrics were used with one designed to capture spatial release from masking and the other designed to capture binaural loudness. The proposed approach is in contrast to the traditional approach of relying on the speech recognition threshold, the speech level at 50% recognition performance averaged across listeners, as the metric for in-vehicle speech intelligibility. Results from the presented analyses suggest the importance of
Samardzic, Nikolina; Lavandier, Mathieu; Shen, Yi
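As a generic illustration of the psychometric-curve idea described above (not the authors' specific model), the sketch below fits a logistic psychometric function to hypothetical per-listener recognition scores and reads off the 50% recognition point.

```python
# Generic logistic psychometric curve fit (illustrative; not the authors' model).
# x could be an objective binaural intelligibility metric or SNR in dB.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x50, slope):
    """Proportion correct as a logistic function of the acoustic predictor x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

# Hypothetical per-listener data: predictor values and proportion correct.
x = np.array([-9.0, -6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
p_correct = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.98])

(x50, slope), _ = curve_fit(psychometric, x, p_correct, p0=[0.0, 0.5])
print(f"50% recognition point: {x50:.2f}, slope: {slope:.2f}")
print(f"Predicted score at x = 2: {psychometric(2.0, x50, slope):.2f}")
```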
Artificial intelligence (AI) has become prevalent in many fields in the modern world, ranging from vacuum cleaners to lawn mowers and commercial automobiles. These capabilities are continuing to evolve and become a part of more products and systems every day, with numerous potential benefits to humans. AI is of particular interest in autonomous vehicles (AVs), where the benefits include reduced cognitive workload, increased efficiency, and improved safety for human operators. Numerous investments from academia and industry have been made recently with the intent of improving the enabling technologies for AVs. Google and Tesla are two of the more well-known examples in industry, with Google developing a self-driving car and Tesla providing its Full Self-Driving (FSD) autopilot system. Ford and BMW are also working on their own AVs
Prior investigations of swarm robot control focus on optimizing communication and coordination between agents, with at most one human control scheme, or with discrete (rather than continuous) human control schemes. In these studies, focus tends to be on human-robot interactions, including human-machine gesture interfaces, human-machine interaction during conversation, or evaluation of higher-level mental states like comfort, happiness and cognitive load. While there is early work in human control of Unmanned Aerial Vehicles (UAVs) and interface design, there are few systematic studies of how human operators perceive fundamental properties of small swarms of ground-based semi-autonomous robots. Therefore, the goal of this study is to better understand how humans perceive swarms of semi-autonomous agents across a range of conditions
This article explores the value of simulation for autonomous-vehicle research and development. There is ample research that details the effectiveness of simulation for training humans to fly and drive. Unfortunately, the same is not true for simulations used to train and test artificial intelligence (AI) that enables autonomous vehicles to fly and drive without humans. Research has shown that simulation “fidelity” is the most influential factor affecting training yield, but the widely accepted definition of psychological fidelity does not apply to AI because it describes how well simulations engage various cognitive functions of human operators. Therefore, this investigation reviewed the literature that was published between January 2010 and May 2022 on the topic of simulation fidelity to understand how researchers are defining and measuring simulation fidelity as applied to training AI. The results reported herein illustrate that researchers are generally using agreed-upon terms
Johnson, Christopher; Graupe, Elan; Kassel, Maxfield
Modern in-vehicle experiences are brimming with functionalities and convenience driven by automation, digitalization, and electrification. While automotive manufacturers are competing to provide the best systems to their customers, there is no common ground to evaluate these in-vehicle experiences as they become increasingly complex. Existing automotive guidelines do not offer thresholds for cognitive distraction, or—more appropriately—“disengagement.” What can researchers do to change this? Evaluation of the In-vehicle Experience discusses acceptable levels of disengagement by evaluating the driving context and exploring how system reliability can translate to distraction and frustration. It also covers the need to test systems for their complexity and ease of use, and to prevent users from resorting to alternative systems while driving (e.g., smartphones). It highlights the value in naturalistic data generation using vehicles already sold to customers and the issues around
Roth, Christian
A team of Cornell University researchers has laid the foundation for developing a new class of untethered soft robots that can achieve more complex motions with less reliance on explicit computation. By taking advantage of viscosity — the very thing that previously stymied the movement of soft robots — the new approach offloads control of a soft robot’s cognitive capability from the “brain” onto the body using the robot’s mechanical reflexes and ability to leverage its environment
Engaging in visual-manual tasks such as selecting a radio station, adjusting the interior temperature, or setting an automation function can be distracting to drivers. Additionally, if setting the automation fails, driver takeover can be delayed. Traditionally, assessing the usability of driver interfaces and determining if they are unacceptably distracting (per the NHTSA driver distraction guidelines and SAE J2364) involves human subject testing, which is expensive and time-consuming. However, most vehicle engineering decisions are based on computational analyses, such as the task time predictions in SAE J2365. Unfortunately, J2365 was developed before touch screens were common in motor vehicles. To update J2365 and other task analyses, estimates were developed for (1) cognitive activities (mental, search, read), (2) low-level 2D elements (Press, Tap, Double Tap, Drag, Zoom, Press and Hold, Rotate, Turn Knob, Type and Keypress, and Flick), (3) complex 2D elements (handwrite, menu use
Green, Paul; Koca, Ekim; Brennan-Carey, Collin
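Task-time prediction in the SAE J2365 tradition sums per-element time estimates over a task sequence. The sketch below shows that bookkeeping; the element names and times are placeholders, not the estimates developed in the paper above.

```python
# Keystroke-level-style task time prediction: sum per-element time estimates.
# The element times below are placeholders, NOT the estimates from the paper.
ELEMENT_TIME_S = {
    "mental": 1.5, "search": 2.0, "read": 0.8,
    "tap": 0.4, "drag": 0.9, "type_char": 0.3,
}

def predict_task_time(steps):
    """steps: list of (element, count) tuples describing the task."""
    return sum(ELEMENT_TIME_S[name] * count for name, count in steps)

# Hypothetical task: find a radio preset on a touch screen and select it.
tune_radio = [("mental", 1), ("search", 1), ("read", 2), ("tap", 2)]
print(f"Predicted task time: {predict_task_time(tune_radio):.1f} s")
```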
Automated driving is considered a key technology for reducing traffic accidents, improving road utilization, and enhancing transportation economy and thus has received extensive attention from academia and industry in recent years. Although recent improvements in artificial intelligence are beginning to be integrated into vehicles, current AD technology is still far from matching or exceeding the level of human driving ability. The key technologies that need to be developed include achieving a deep understanding and cognition of traffic scenarios and highly intelligent decision-making. Automated Vehicles, the Driving Brain, and Artificial Intelligence addresses brain-inspired driving and learning from the human brain's cognitive, thinking, reasoning, and memory abilities. This report presents a few unaddressed issues related to brain-inspired driving, including the cognitive mechanism, architecture implementation, scenario cognition, policy learning, testing, and validation.
Zheng, Ling
Reliably operating electromagnetic (EM) systems including radar, communications, and navigation, while deceiving or disrupting the adversary, is critical to success on the battlefield. As threats evolve, electronic warfare (EW) systems must remain flexible and adaptable, with performance upgrades driven by the constant game of cat and mouse between opposing systems. This drives EW researchers and systems engineers to develop novel techniques and capabilities, based on new waveforms and algorithms, multifunction RF systems, and cognitive and adaptive modes of operation
Operator attention has been a significant focus of human factors research in recent years. This research has clarified how electronic devices and other stimuli can become distractions for vehicle operators. The research has defined a condition known as “distracted driving,” characterized by interruption of the sequence of cognitive processes essential for safe operation of a vehicle. Although “attention” has been the most often mentioned of these cognitive processes, they also include perception, memory, cognition, and planful behavior. These processes are the “cognitive demands” of safe vehicle operation. There is another issue, similar to distracted driving, that may hamper safe operation of a vehicle. That issue is the “cognitive load” of human-machine interface devices, including instrument clusters. The present paper explores the effects of cognitive load on operator response speed. It describes a novel method for displaying systems datums designed to manage cognitive load. The
Havins, William
Engineering practice routinely involves decision making under uncertainty. Much of this decision making entails reconciling multiple pieces of information to form a suitable model of uncertainty. As more information is collected, one is expected to make better and better decisions. However, the conditional probability assessments made by human decision makers as new information arrives do not always follow expected trends and instead exhibit inconsistencies. Understanding these inconsistencies is necessary for better modeling of the cognitive processes taking place in the decision maker's mind, whether that is the designer or the end user. Doing so can result in better products and product features. Quantum probability has been used in the literature to explain many commonly observed deviations from classical probability, such as the question order effect, the response replicability effect, the Machina and Ellsberg paradoxes, and the effect of positive and negative interference between events. In this work, we present results
Pandey, Vijitashwa; Basieva, Irina
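The question order effect mentioned above is modeled in quantum probability with non-commuting measurement operators, so the probability of answering yes to question A and then B can differ from the reverse order. The toy sketch below demonstrates this with arbitrary projectors and an arbitrary belief state; it is not the authors' model.

```python
# Toy quantum-probability order effect: sequential "yes" probabilities differ
# when the two question projectors do not commute. Arbitrary illustration only.
import numpy as np

def projector(theta):
    """Rank-1 projector onto the unit vector (cos theta, sin theta)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])            # initial belief state (unit norm)
P_A = projector(np.pi / 6)            # "yes" subspace for question A
P_B = projector(np.pi / 3)            # "yes" subspace for question B

def p_sequential(first, second, state):
    """P(yes to `first`, then yes to `second`) = ||second @ first @ state||^2."""
    return float(np.linalg.norm(second @ first @ state) ** 2)

print(f"P(A then B) = {p_sequential(P_A, P_B, psi):.3f}")
print(f"P(B then A) = {p_sequential(P_B, P_A, psi):.3f}")   # differs: order effect
```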
According to statistics from the National Highway Traffic Safety Administration, driver cognitive distraction, usually caused by drivers using mobile phones, has become one of the main causes of traffic accidents. To solve this problem and guarantee the safety of the man-vehicle-road system, the most critical task is to improve the accuracy of driver cognitive state detection. In this paper, a novel driver cognitive state detection method based on LightGBM (Light Gradient Boosting Machine) is proposed. First, cognitive distraction experiments involving phone calls are carried out on a driving simulator to collect vehicle state, eye tracking, and EEG (electroencephalogram) data simultaneously, and feature extraction is conducted. Then a classifier that considers road and individual characteristics is trained with the LightGBM algorithm to detect three predefined cognitive states: concentration, ordinary distraction, and extreme distraction. Finally
Li, Jingyuan; Liu, Yahui; Ji, Xuewu; Tao, Shuxin
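The paper above trains a LightGBM classifier over features extracted from vehicle state, eye tracking, and EEG signals. The sketch below shows the general shape of such a three-class pipeline on synthetic data; the feature dimensions, labels, and hyperparameters are placeholders, not the paper's.

```python
# Three-class cognitive-state classifier with LightGBM, on synthetic features.
# Feature layout, labels, and hyperparameters are placeholders.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features = 600, 12          # e.g., vehicle-state + gaze + EEG features
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 3, size=n_samples)   # 0=concentration, 1=ordinary, 2=extreme

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LGBMClassifier(n_estimators=200, learning_rate=0.05, num_leaves=31)
clf.fit(X_tr, y_tr)

# Chance-level accuracy is expected here because the synthetic labels are random.
print(f"Held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```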
How do different parts of the brain communicate with each other during learning and memory formation? A new study by researchers at the University of California San Diego takes a first step at answering this fundamental neuroscience question
Today, as the spread of vehicles equipped with autonomous driving functions increases, accidents caused by autonomous vehicles are also increasing. Therefore, issues regarding safety and reliability of autonomous vehicles are emerging. Various studies have been conducted to secure the safety and reliability of autonomous vehicles, and the application of the International Organization for Standardization (ISO) 26262 standard for safety and reliability improvement and the importance of verifying the safety of autonomous vehicles are increasing. Recently, Mobileye proposed the Responsibility Sensitive Safety (RSS) model, a mathematical model that standardizes the safety guarantees for the minimum requirements that all autonomous vehicles must meet. In this article, the RSS model that ensures safety and reliability was derived to be suitable for variable focus function cameras that can cover the cognitive regions of radar and lidar with a single camera. It is
Kim, Min Joong; Kim, Tong Hyun; Yu, Sung Hun; Kim, Young Min
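A central element of the RSS model referenced above is the minimum safe longitudinal following distance. The sketch below implements that published formula; the response time and acceleration bounds are illustrative assumptions, not values from this article.

```python
# RSS minimum safe longitudinal distance between a rear (ego) and front vehicle.
# Parameter values are illustrative assumptions, not from the article above.

def rss_min_longitudinal_gap(v_rear, v_front, rho=0.5,
                             a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """All speeds in m/s, accelerations in m/s^2, response time rho in s.

    d_min = v_r*rho + 0.5*a_max_accel*rho^2
            + (v_r + rho*a_max_accel)^2 / (2*b_min_brake)
            - v_f^2 / (2*b_max_brake), clamped at zero.
    """
    v_after_response = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_response ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(0.0, d)

# Ego at 25 m/s following a lead vehicle at 20 m/s:
print(f"Minimum safe gap: {rss_min_longitudinal_gap(25.0, 20.0):.1f} m")
```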
Acoustic range managers need a better system for identifying high-value decision points before conducting test events. When this research was conducted, a qualitative process model that represents the acoustic range decision process did not exist
The performance of persons who watch surveillance videos, either in real-time or recordings, can vary with their level of expertise. It is reasonable to suppose that some of the performance differences might be due, at least in part, to the way experts scan a visual scene versus the way novices might scan the same scene. For example, experts might be more systematic or efficient in the way they scan a scene compared to novices. Even within the same person, video surveillance performance can vary with factors such as fatigue. Again, differences in the way their eyes scan a scene might account for some of the differences. Full Motion Video (FMV) “Eyes-on” intelligence analysts, in particular, actively scan video scenes for items of interest for long periods of time
As the automobile industry transitions from SAE Levels 0 and 1 (low autonomy) through Levels 2, 3, and 4 (human-in-the-loop) to ultimately Level 5 (fully autonomous driving), an advanced driver monitoring system is critical to understand the status, performance, and behavior of drivers for next-generation intelligent vehicles. By making necessary warnings or adjustments, the system and driver could operate collaboratively to ensure a safe and efficient traffic environment. The performance and behavior can be viewed as a reflection of the driver’s cognitive workload, which corresponds as well to the environment of their driving scenarios. In this study, image features extracted from driving scenarios, as well as additional environmental features, were utilized to classify driving workload levels for different driving scenario video clips. As a continuing study of exploring transfer learning capability, two transfer learning approaches for feature extraction, an image segmentation mask transfer approach and an image-fixation map
Liu, Yongkang; Hansen, John
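As a generic illustration of transfer-learning feature extraction for workload classification (not the paper's segmentation-mask or fixation-map approaches), the sketch below uses a frozen pretrained ResNet-18 as a feature extractor feeding a small classifier. The frame tensor, labels, and classifier choice are placeholders.

```python
# Frozen pretrained CNN as a feature extractor for workload classification.
# Generic illustration; requires torchvision >= 0.13 and a weights download.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Pretrained ResNet-18 with its classification head removed.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_features(images):
    """images: float tensor of shape (N, 3, 224, 224), ImageNet-normalized."""
    with torch.no_grad():
        return backbone(images).numpy()          # (N, 512) feature vectors

# Placeholder batch of driving-scene frames and workload labels (0=low, 1=high).
frames = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,)).numpy()

features = extract_features(frames)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("Predicted workload levels:", clf.predict(features[:4]))
```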
Andrew Grove (founder and CEO of Intel) defines strategic inflection points as what happens to a business when a major event alters its fundamentals. The Covid-19 pandemic is one such historic event that is changing fundamental business assumptions in the Oil industry. Companies with a hunter-gatherer mindset will ride this wave with the help of technologies that make their operations lean and efficient. Current developments in AI, specifically around Cognitive Sciences, are one such area that will empower early adopters to a many-fold improvement in engineering and research productivity. This paper explores how to augment human intelligence with insights from engineering literature, leveraging Cognitive AI techniques. The key challenge of acquiring knowledge from engineering literature (patents, books, journals, articles, papers, etc.) is the sheer volume at which it grows annually (hundreds of millions of existing papers, with new papers growing at 40% year-on-year as per IDC). 6 million+ patents are filed every
Ghosh, Arnab
This document addresses the operational safety and human factors aspects of unauthorized laser illumination events in navigable airspace. The topics addressed include operational procedures, training, and protocols that flight crew members should follow in the event of a laser exposure. Of particular emphasis, this document outlines coping strategies for use during critical phases of flight. Although lasers are capable of causing retinal damage, most laser cockpit illuminations to date have been relatively low in irradiance, causing primarily startle reactions, visual glare, flashblindness, and afterimages. Permanent eye injuries from unauthorized laser exposures have been extremely rare. This document describes pilot operational procedures in response to the visual disruptions associated with low to moderate laser exposures that pilots are most likely to encounter during flight operations. With education and training, pilots can take actions that safeguard both their vision and the
G-10OL Operational Laser Committee
In 2017, the US Army announced their modernization priorities as a means of maintaining their military strength. Six specific areas were targeted for focused improvement and development, with the first five being specific technologies or end products. The sixth was “Soldier Lethality,” or a soldier’s ability to shoot, move, communicate, protect and sustain by improving human performance and decision making. In an effort to support this priority area for those trying to make clothing and individual equipment (CIE) acquisition and development decisions, there is a desire for an integrated or holistic objective tool to measure soldier performance, specifically mobility, lethality, and survivability, incorporating underlying measures of human factors, biomechanics, and cognition
Can one technical solution help prevent drowsy drivers and detect a child left behind? Yes, using a single, maintenance-free, Non-Dispersive Infrared (NDIR) gas sensor integrated in the cabin ventilation system. Carbon dioxide (CO2) is an established proxy for ventilation needs in buildings. Recently, several studies have been published showing that a moderate elevation of the indoor carbon dioxide level affects cognitive performance, such as information usage, activity, focus, and crisis response. A study of airplane pilots using 3-hour flight simulation tests showed pilots made 50% more mistakes when exposed to 2,500 ppm carbon dioxide compared to 700 ppm. This has a direct impact on safety. All living animals and humans exhale carbon dioxide. In our investigations we have found that an unintentionally left-behind child, or pet, can easily be detected in a parked car by analyzing the carbon dioxide trends in the cabin. Even an 8-month-old baby acts as a carbon dioxide source, increasing
Rödjegård, Henrik; Franchy, Michael; Ehde, Staffan; Zoubir, Yassine; Al-Khaldy, Sam; Olsson, Patrik; Bengtsson, Carl; Nowak, Tony; O'Brien, Don
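The occupant-detection idea described above rests on the CO2 trend in a parked cabin. The sketch below flags a rising trend with a simple linear-regression slope test; the readings, sample period, and threshold are made-up assumptions, not figures from the paper.

```python
# Detect a possible occupant in a parked cabin from the CO2 trend.
# Samples and thresholds are made-up assumptions, not figures from the paper.
import numpy as np

def co2_rising(ppm_samples, sample_period_s=30.0, slope_threshold_ppm_per_min=5.0):
    """Fit a line to recent CO2 readings; flag if the slope exceeds the threshold."""
    t_min = np.arange(len(ppm_samples)) * sample_period_s / 60.0
    slope, _ = np.polyfit(t_min, ppm_samples, 1)       # ppm per minute
    return slope > slope_threshold_ppm_per_min, slope

# Hypothetical readings from a parked car, sampled every 30 s (ppm).
readings = [420, 435, 452, 470, 486, 505, 522, 540]
alarm, slope = co2_rising(readings)
print(f"CO2 slope: {slope:.1f} ppm/min -> occupant suspected: {alarm}")
```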
Currently, vehicles are mostly owned by individuals, who configure personal settings and accessories according to their own preferences. In shared mobility, features and controls are not personalized for everyone who shares the vehicle, which hinders adoption of shared vehicles. For shared mobility and autonomous vehicles to be successful, they must play a significant role in customer engagement. To enhance customer engagement, we need to satisfy individual customers by customizing the vehicle for their needs. This will give a cognitive feel of a personal vehicle in a shared environment. We need technologies and designs that improve vehicle interior and exterior systems to address personalization. We apply a Design Thinking approach, using customer interactions in each zone of the vehicle, both interior and exterior, to identify personalization needs. The zones of study include the frunk and trunk compartment zone, interaction zones, and interior and exterior zones. We will rank the
Dayakar, Suresh; Subramanian, Vijayasarathy; Reddy, Keshava; Shiramgond, Vijaykumar
1 – 50 of 268