Browse Topic: Cameras

Items (601)
Computer vision has evolved from a supportive driver-assistance tool into a core technology for intelligent, non-intrusive occupant health monitoring in modern vehicles. Leveraging deep learning, edge optimization, and adaptive image processing, this work presents a dual-module Driver Health and Wellness Monitoring System that simultaneously performs fatigue detection and emotional wellbeing assessment using existing in-cabin RGB cameras, without requiring additional sensors or intrusive wearables. The fatigue module employs MediaPipe-based facial and skeletal landmark analysis to track Eye Aspect Ratio (EAR), Mouth Aspect Ratio (MAR), head posture, and gaze dynamics, detecting early drowsiness and postural deviations. Adaptive, driver-specific thresholds combined with CAN-bus data fusion minimize false positives, achieving over 92% detection accuracy even under variable lighting and across diverse demographics. The emotional wellbeing module analyzes micro-expressions and facial action units to
Iqbal, Shoaib; Imteyaz, Shahma
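The Eye Aspect Ratio tracked in the abstract above has a widely used geometric formulation: vertical eye-landmark gaps divided by the horizontal eye width. The sketch below uses illustrative coordinates, not the paper's actual MediaPipe landmark indices.

```python
import math

# Hypothetical six-point eye landmark layout (p1..p6, ordered around the
# eye), as in the common EAR formulation; the paper's exact MediaPipe
# indices are not given here.
def eye_aspect_ratio(p):
    """p: list of six (x, y) tuples ordered around the eye."""
    d = math.dist
    # Two vertical gaps averaged over the horizontal eye width.
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2.0 * d(p[0], p[3]))

# An open eye yields a larger ratio than a nearly closed one, which is
# how sustained low EAR flags drowsiness.
open_eye   = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]
assert eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye)
```

A fatigue detector would compare this ratio per frame against a driver-specific threshold, as the abstract describes.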
Vehicle door-related accidents, especially in urban environments, pose a significant safety risk to pedestrians, infrastructure, and vehicle occupants. Conventional rear-view systems fail to detect obstacles in blind spots directly below the Outside Rear View Mirror (ORVM), leading to unintended collisions during door opening. This paper presents a novel vision-based obstacle detection system integrated into the ORVM assembly. It utilizes a monocular camera and a projection-based reference image technique. The system captures real-time images of the ground surface near the door and compares them with calibrated reference projections to detect deviations caused by obstacles such as pavements, potholes, or curbs. Once such an obstacle is detected, the vehicle user is alerted with a chime.
Bhuyan, Anurag; Khandekar, Dhiraj; Jahagirdar, Shweta
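The reference-comparison idea in the ORVM abstract above can be sketched as simple frame differencing: flag an obstacle when the live ground image deviates too much from the calibrated reference. The function name, thresholds, and array sizes below are illustrative, not values from the paper.

```python
import numpy as np

# Minimal sketch of projection comparison (illustrative parameters):
# an obstacle is declared when enough pixels deviate from the reference.
def obstacle_present(reference, live, pixel_thresh=30, area_thresh=0.02):
    diff = np.abs(live.astype(np.int16) - reference.astype(np.int16))
    deviating = (diff > pixel_thresh).mean()   # fraction of changed pixels
    return bool(deviating > area_thresh)

ref = np.full((64, 64), 120, dtype=np.uint8)   # flat-ground reference image
live = ref.copy()
live[10:30, 10:30] = 200                       # simulated curb/obstacle patch
assert obstacle_present(ref, live) and not obstacle_present(ref, ref)
```

A production system would also compensate for lighting changes and camera pose before differencing; this sketch shows only the core deviation test.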
The automotive industry is rapidly advancing towards autonomous vehicles, making sensors such as cameras, LiDAR, and RADAR critical components for ensuring constant information exchange between the vehicle and its surrounding environment. However, these sensors are vulnerable to harsh environmental conditions such as rain, dirt, snow, and bird droppings, which can impair their functionality and disrupt accurate vehicle maneuvers. To ensure all sensors operate effectively, dedicated cleaning is implemented, particularly for Level 3 and higher autonomous vehicles. It is important to test sensor cleaning mechanisms across different weather conditions and vehicle operating scenarios to ensure reliability and performance. One crucial aspect of testing is tracking the trajectory of the cleaning fluid to ensure it does not cause self-soiling of the vehicle or affect the field of view or visibility zones of other components such as the windshield. While wind tunnel tests are valuable, digitalizing
Mane, Suvidya; Makam, Sri Lalith Madhav; Varghese, Rixson; Desu, Harsha
Accurate trajectory prediction of traffic agents is critical for enabling safer and more reliable autonomous driving, particularly in urban driving scenarios where close-range interactions are most safety critical. High-definition (HD) and standard-definition (SD) maps play a vital role in this process by providing lane topology and directional cues for forecasting agent movements. However, HD maps are expensive and resource-intensive to create, often requiring specialized sensors, while SD maps lack the precision needed for reliable autonomous navigation. To address this, we propose a novel framework for trajectory prediction that leverages online reconstruction of HD maps using vehicle-mounted cameras, offering a scalable and cost-effective alternative. Our method improves prediction accuracy, particularly in the close-range scenarios most crucial for urban driving, while also performing robustly in settings without pre-built maps. Furthermore, we introduce a new
Upreti, Minali; Girijal, Rahul; B A, NaveenKumar; Thontepu, Phani; Ghosh, Shankhanil; Chakraborty, Bodhisattwa
This paper presents a comprehensive survey and data collection study on the adaptability of Camera Monitoring Systems (CMS) for passenger vehicles. With the growing demand for enhanced safety, automation, and driver assistance technologies, CMS have emerged as a key component in modern automotive design. This study explores the current state of camera-based monitoring in passenger vehicles, focusing on adaptability through survey data collected from a varied driving population and its analysis. It evaluates the acceptance of CMS configurations as a replacement for conventional rear-view mirrors across criteria including monitor position, clarity, adaptiveness to the eyes, comfort while turning, merging into moving traffic, monitoring rear traffic, getting out of the car, overtaking, coverage area, and overall acceptance. The findings offer valuable insights for manufacturers, engineers, and researchers working toward the evolution of intelligent vehicle
Sinha, Ankit; Tambolkar, Sonali Ameya; Belavadi Venkataramaiah, Shamsundara; Kauffmann, Maximilian
In low-light driving scenarios, in-vehicle camera images face technical challenges, including severe brightness degradation and short exposure times. Conventional driving image enhancement algorithms are prone to loss of image features and significant color distortion. To address this, we propose a multi-scale attention fusion network (MAF-NET) for enhancing images captured in low-light driving conditions. The network has a simple structure: the model incorporates a carefully designed multi-scale attention fusion block (MAFB) along with the essential components for network connectivity. The MAFB is built on a parameter-heavy residual feature block design and incorporates a multi-scale channel attention mechanism to capture richer global and local features. Extensive experiments demonstrate that, compared with prevailing algorithms, MAF-NET exhibits superior performance in low
Pan, Deng; Chen, Yuhan; Shi, Yicui; Li, Jie; Li, Guofa
Perceiving the movement characteristics of specific body parts of a driver is crucial for determining their activity. Moreover, the driver’s body posture significantly impacts personnel safety during a collision. This study describes the creation of a dataset using a Kinect depth camera, covering acquisition, organization, annotation with skeleton tracking assistance, and interpolation optimization. The pose recognition method is enhanced through an anchor regression mechanism, leading to a refined lightweight anchor regression network capable of end-to-end learning from depth images. The improved backbone-neck-head structure offers the advantages of reduced model parameters and enhanced accuracy. This engineering optimization makes it better suited for practical in-vehicle applications with limited computational resources and high real-time demands.
Xu, Hailan; Li, Wuhuan; Lu, Jun; Wang, Xin; He, Wenhao; Chen, Zhenming; Liu, Yunjie
With the rapid development of autonomous driving technology, environmental perception, as its core module, has attracted much attention. Among these approaches, the purely visual bird's-eye-view (BEV) 3D detection scheme has become a research hotspot due to its high spatial resolution and excellent semantic recognition ability in specific scenarios. Existing methods mainly use a Transformer encoder structure to perform position encoding in the BEV domain to achieve 3D perspective transformation, but they often fail to fully exploit the potential value of multi-perspective image information. To address this challenge, this paper proposes an improved Transformer-based visual BEV vehicle perception method that enhances perception performance by deeply fusing BEV-domain and image-domain information: an innovative multi-perspective position encoding mechanism is designed, which decouples camera parameters to more efficiently learn the mapping from images to 3D space; at the same time, a cyclic
Chen, Pengyu; Wei, Xiaoxu; Chen, Zhenwei
Vehicle trajectories encapsulate critical spatial-temporal information essential for traffic state estimation, congestion analysis, and operational parameter optimization. In a Vehicle-to-Infrastructure (V2I) environment, connected automated vehicles (CAVs) not only continuously transmit their own real-time trajectory data but also utilize onboard sensors to perceive and estimate the motion states of surrounding regular vehicles (RVs) within a defined communication range. These multi-source data streams, when integrated with fixed infrastructure-based detectors such as speed cameras at intersections, create a robust foundation for reconstructing full-sample vehicle trajectories, thereby addressing data sparsity issues caused by incomplete CAV penetration. Building upon classical car-following (CF) theory, this study introduces a novel trajectory reconstruction framework that fuses CAV-generated trajectories and infrastructure-based speed detection data. The proposed method specifically
Bai, Wei; Fu, Chengxin; Yao, Zhihong
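The trajectory reconstruction abstract above builds on classical car-following theory. As one standard example of such a model (not necessarily the one used in the paper), the Intelligent Driver Model computes a follower's acceleration from its speed, the gap to the leader, and the closing speed; all parameter values below are illustrative defaults.

```python
import math

# Intelligent Driver Model (IDM), a classical car-following model.
# v: follower speed (m/s); gap: bumper-to-bumper distance (m);
# dv: closing speed v_follower - v_leader (m/s). Parameters: desired
# speed v0, time headway T, max accel a, comfortable decel b, jam gap s0.
def idm_accel(v, gap, dv, v0=15.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))  # desired gap
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Closing fast on a short gap should brake; a free road should accelerate.
assert idm_accel(v=10.0, gap=5.0, dv=5.0) < 0.0
assert idm_accel(v=5.0, gap=100.0, dv=0.0) > 0.0
```

Integrating such a model forward between sparse detector measurements is one way trajectories of unequipped vehicles can be filled in.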
Stoneridge displayed its vision for the future of commercial vehicle technology on the SAE COMVEC 2025 exhibit floor. The Innovation Truck showcases the Tier 1 supplier's next-generation vision and driver-assistance technologies designed to enhance driver safety and fleet optimization. Mario Gafencu, product design and evaluation specialist at Stoneridge, gave Truck & Off-Highway Engineering a tech truck walkaround at the event. The first technology Gafencu detailed was the second-generation MirrorEye camera monitor system that's designed to replace the glass mirrors on the sides of a truck.
Gehm, Ryan
Planetary and lunar rover exploration missions can encounter environments that do not allow navigation by typical stereo camera-based systems. Stereo cameras struggle in areas with low ambient light (even when lit by floodlights), direct sunlight, or washed-out environments. Improved sensors are required for safe and successful rover mobility in harsh conditions. NASA Goddard Space Flight Center has developed a Space Qualified Rover LiDAR (SQRLi) system that will improve rover sensing capabilities in a small, lightweight package. The new SQRLi package is designed to survive the hazardous space environment and provide valuable image data during planetary and lunar rover exploration.
As I'm wont to do come December, with work well underway on the first issue of the new year, I like to take stock of upcoming venues for innovative product reveals and thought-provoking presentations on emerging trends and technologies. Come the first week of January, that means CES in Las Vegas. Traditional equipment manufacturers have increasingly used the event to demonstrate to the broader public that they not only deal in metal but also the digital realm. For example, earlier this year at CES, John Deere revealed its second-generation tech stack featuring camera pods, Nvidia Orin purpose-built processors and Deere's VPUs (vision processing units), along with four new autonomous machines including the 9RX 640 tractor for open-field ag operations. The company is exhibiting again this coming year.
Gehm, Ryan
Waiting for a wound to heal is incredibly frustrating. First, it must clot; then an immune system response is needed; followed by scabbing and scarring — and that’s not even getting into the pain part.
This paper presents a novel approach to automated robot programming and robot integration in the manufacturing domain, minimizing the dependency on manual online/offline programming. Traditional industrial robot programming is typically done online via teach pendants or with offline programming tools. This presents a major challenge, as it requires skilled professionals and is a time-consuming process. In today’s competitive market, factories need to harness their full potential through smart and adaptive thinking to keep pace with evolving technology, customer demand, and manufacturing processes. This requires the ability to manufacture multiple products on the same production line, minimal changeover times, and robotic automation for efficiency enhancement. But each custom automation piece also demands significant human effort for development and maintenance. By integrating the Robot Operating System (ROS) with vision-based 3D model generation systems, we address
Hepat, Abhijeet
Measuring the volume of harvested material behind the machine can be beneficial for various agricultural operations, such as baling, dropping, material decomposition, cultivation, and seeding. This paper aims to investigate and determine the volume of material for use in various agricultural operations. This proposed methodology can help to predict the amount of residue available in the field, assess field readiness for the next production cycle, measure residue distribution, determine hay readiness for baling, and evaluate the quantity of hay present in the field, among other applications which would benefit the customer. Efficient post-harvest residue management is essential for sustainable agriculture. This paper presents an Automated Offboard System that leverages Remote Sensing, IoT, Image Processing, and Machine Learning/Deep Learning (ML/DL) to measure the volume of harvested material in real-time. The system integrates onboard cameras and satellite imagery to analyze the field
Singh, Rana Shakti; Stallin, Saravanan
This study focused on the effects of hydrogen on the flame propagation and combustion characteristics of a small spark-ignition engine. The combustion flame in the cylinder was observed using a side-valve engine that allowed optical access. The fundamental characteristics of hydrogen combustion were investigated based on combustion images photographed in the cylinder with a high-speed camera and measured cylinder pressure waveforms. Experiments were conducted under various ignition timings and equivalence ratios, and comparisons were made with the characteristics of an existing hydrocarbon liquid fuel. The hydrogen flame was successfully photographed, although it has been regarded as difficult to visualize, enabling calculation of the flame propagation speed. As a result, it was found that the flame propagation speed of hydrogen was much faster than that of the existing hydrocarbon fuel. On the other hand, it was difficult to photograph the hydrogen flame
Arai, Yuto; Ueno, Takamori; Suda, Ryosuke; Sato, Ryoichi; Nakao, Yoshinori; Ninomiya, Yoshinari; Matsushita, Koichiro; Kamio, Tomohiko; Iijima, Akira
Researchers have developed a prototype imaging system that could significantly improve doctors’ ability to detect cancerous tissue during endoscopic procedures. This approach combines light-emitting diodes (LEDs) with hyperspectral imaging technology to create detailed maps of tissue properties that are invisible to conventional endoscopic cameras.
In today’s digital age, the use of “Internet-of-Things” devices (embedded with software and sensors) has become widespread. These devices include wireless equipment, autonomous machinery, wearable sensors, and security systems. Because of their intricate structures and properties, there is a need to scrutinize them closely to assess their safety and utility and rule out any potential defects. But, at the same time, damage to the device during inspection must be avoided.
Image sensors built into every smartphone and digital camera distinguish colors much as the human eye does. In our retinas, individual cone cells recognize red, green, and blue (RGB). In image sensors, individual pixels absorb the corresponding wavelengths and convert them into electrical signals.
Northwestern engineers have developed a new system for full-body motion capture — and it doesn’t require specialized rooms, expensive equipment, bulky cameras, or an array of sensors. Instead, it requires a simple mobile device.
The emergence of SUAS as a threat vector introduces significant challenges in surveillance and defense due to their potential for low cross section and high speeds, defeating or evading many existing detection and tracking capabilities. This paper presents two algorithms—one for detection and one for tracking—developed for event cameras, which offer substantial improvements in temporal resolution, dynamic range, and low-light performance compared to traditional imaging systems, all of which are critical for effective UAS defense. These advancements address current limitations in using event cameras and pave the way for a new generation of robust robotic vision based on event cameras.
Anthony, David; Chambers, David; Towler, Jerry
Our research focuses on developing a novel loss function that significantly improves object matching accuracy in multi-robot systems, a critical capability for Safety, Security, and Rescue Robotics (SSRR) applications. By enhancing the consistency and reliability of object identification across multiple viewpoints, our approach ensures a comprehensive understanding of environments with complex layouts and interlinked infrastructure components. We utilize ZED 2i cameras to capture diverse scenarios, demonstrating that our proposed loss function, inspired by the DETR framework, outperforms traditional methods in both accuracy and efficiency. The function’s ability to adapt to dynamic and high-risk environments, such as disaster response and critical infrastructure inspection, is further validated through extensive experiments, showing superior performance in real-time decision-making and operational effectiveness. This work not only advances the state of the art in SSRR but also
Brown, Taylor J.; Vincent, Grace; Nakamoto, Kyle; Bhattacharya, Sambit
To further elucidate the extremely complex mechanism of wall heat transfer during diesel flame impingement, heat flux measurements based on two relatively new approaches, high-speed infrared thermography and a Micro-Electro-Mechanical Systems (MEMS) heat flux sensor, were compared. Both measurements were conducted on a chamber wall impinged by a diesel flame in constant-volume combustion vessels under similar experimental conditions. Infrared thermography was conducted using a high-speed infrared camera (TELOPS M3k, 13,000 fps, 128×128 pixels), allowing the capture of time-series temperature and heat flux distributions on the wall surface with a spatial resolution of 70 μm (9 mm / 128 pixels). This high-resolution imaging also enables detailed estimation of near-wall turbulent structures, which are considered to significantly influence the heat flux distributions. The MEMS sensor is composed of multiple closely aligned (separated by 520 μm) highly
Shimizu, Fumika; Morooka, Masato; Aizawa, Tetsuya; Dejima, Kazuhito; Nakabeppu, Osamu
Engineers have developed a smart capsule called PillTrek that can measure pH, temperature, and a variety of different biomarkers. It incorporates simple, inexpensive sensors into a miniature wireless electrochemical workstation that relies on low-power electronics. PillTrek measures 7 mm in diameter and 25 mm in length, making it smaller than commercially available capsule cameras used for endoscopy but capable of executing a range of electrochemical measurements.
In order to comply with increasingly stringent emission regulations and ensure clean air, wall-flow particulate filters are predominantly used in exhaust gas aftertreatment systems of combustion engines to remove reactive soot and inert ash particles from exhaust gases. These filters consist of parallel porous channels with alternately closed ends, effectively separating particles by forming a layer on the filter surface. However, the accumulated particulate layer increases the pressure drop across the filter, requiring periodic filter regeneration. During regeneration, soot oxidation breaks up the particulate layer, while resuspension and transport of individual agglomerates can occur. These phenomena are influenced by gas temperature and velocity, as well as by the dispersity and reactivity of the soot particles. Renewable and biomass based fuels can produce different types of soot with different reactivities and dispersities. Therefore, this study focuses on the influences of soot
Desens, Ole; Hagen, Fabian P.; Meyer, Jörg; Dittler, Achim
The U-Shift IV represents the latest evolution in modular urban mobility solutions, offering significant advancements over its predecessors. This innovative vehicle concept introduces a distinct separation between the drive module, known as the driveboard, and the transport capsules. The driveboard contains all the necessary components for autonomous driving, allowing it to operate independently. This separation not only enables versatile applications - such as easily swapping capsules for passenger or goods transportation - but also significantly improves the utilization of the driveboard. By allowing a single driveboard to be paired with different capsules, operational efficiency is maximized, enabling continuous deployment of driveboards while the individual capsules are in use. The primary focus of U-Shift IV was to obtain a permit for operating at the Federal Garden Show 2023. To achieve this goal, we built the vehicle around the specific requirements for semi-public road
Pohl, Eric; Scheibe, Sebastian; Münster, Marco; Osebek, Manuel; Kopp, Gerhard; Siefkes, Tjark
With 2D cameras and space robotics algorithms, astronautics engineers at Stanford have created a navigation system able to manage multiple satellites using visual data only. They recently tested it in space for the first time.

Stanford University, Stanford, CA

Someday, instead of large, expensive individual space satellites, teams of smaller satellites - known by scientists as a “swarm” - will work in collaboration, enabling greater accuracy, agility, and autonomy. Among the scientists working to make these teams a reality are researchers at Stanford University's Space Rendezvous Lab, who recently completed the first-ever in-orbit test of a prototype system able to navigate a swarm of satellites using only visual information shared through a wireless network. “It's a milestone paper and the culmination of 11 years of effort by my lab, which was founded with this goal of surpassing the current state of the art and practice in distributed autonomy in space,” said Simone D'Amico
In October 2024, Kongsberg NanoAvionics discovered damage to their MP42 satellite, and used the discovery as an opportunity to raise awareness of the need to reduce space debris generated by satellites.

Kongsberg NanoAvionics, Vilnius, Lithuania

Our MP42 satellite, which launched into low Earth orbit (LEO) two and a half years ago aboard the SpaceX Transporter-4 mission, recently took an unexpected hit from a small piece of space debris or micrometeoroid. The impact created a 6 mm hole, roughly the size of a chickpea, in one of its solar panels. Despite this damage, the satellite continued performing its mission without interruption, and we only discovered the impact thanks to an image taken by its onboard selfie camera in October of 2024. It is challenging to pinpoint exactly when the impact occurred because MP42's last selfie was taken a year and a half ago, in April of 2023.
In active noise control, the size of the control region (the zone of quiet) decreases as frequency increases, so even a small movement of the passenger's head can take the ear position out of the control region. Enlarging the control region generally requires many speakers and microphones, which is difficult to implement in a vehicle cabin due to space and cost constraints. In this study, we propose a moving zone-of-quiet active noise control technique. A 2D image-based head tracking system uses camera frames and a deep learning algorithm to generate the passenger's head coordinates in real time. In the controller, the control position is moved to the ear position using a multi-point virtual microphone algorithm according to the generated ear position. A multi-point adaptive filter training system then applies the optimal control filter at the current position and maintains the control performance. Through this study, it is possible to
Oh, ChiSung; Kang, Jonggyu; Kim, Joong-Kwan
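The adaptive-filter training step in the active noise control abstract above can be illustrated with a single-channel LMS loop. This is a deliberately simplified sketch: no multi-point virtual microphones and no secondary-path (FxLMS) filtering, both of which a real cabin controller needs; all parameters are illustrative.

```python
import math

# Single-channel LMS predictor used as a toy noise canceller: the filter
# learns to predict the tonal noise from its own past samples, and the
# prediction error plays the role of the residual at the (virtual) mic.
def lms_cancel(noise, taps=8, mu=0.05):
    w = [0.0] * taps            # adaptive filter weights
    buf = [0.0] * taps          # history of past reference samples
    residual = []
    for x in noise:
        y = sum(wi * xi for wi, xi in zip(w, buf))        # anti-noise estimate
        e = x - y                                         # residual error
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]  # LMS weight update
        buf = [x] + buf[:-1]                              # shift in new sample
        residual.append(e)
    return residual

tone = [math.sin(0.3 * n) for n in range(2000)]
res = lms_cancel(tone)
# After adaptation, the residual energy drops well below the early energy.
head = sum(e * e for e in res[:200])
tail = sum(e * e for e in res[-200:])
assert tail < 0.1 * head
```

For a pure tone the predictor converges quickly; tracking a moving head, as in the paper, additionally requires re-targeting the control point as the ear position changes.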
This study presents a novel methodology for optimizing the acoustic performance of rotating machinery by combining scattered 3D sound intensity data with numerical simulations. The method is demonstrated on the rear axle of a truck. Using Scan&Paint 3D, sound intensity data is rapidly acquired over a large spatial area with the assistance of a 3D sound intensity probe and infrared stereo camera. The experimental data is then integrated into far-field radiation simulations, enabling detailed analysis of the acoustic behavior and accurate predictions of far-field sound radiation. This hybrid approach offers a significant advantage for assessing complex acoustic sources, allowing for quick and reliable evaluation of noise mitigation solutions.
Fernandez Comesana, Daniel; Vael, Georges; Robin, Xavier; Orselli, Joseph; Schmal, Jared
Design verification and quality control of automotive components require analyzing the source location of ultra-short sound events, for instance the engagement of an electromechanical clutch or the clicking noise of the aluminium frame of a passenger car seat under vibration. State-of-the-art acoustic cameras allow a frame rate of about 100 acoustic images per second. Considering that most of the sound events introduced above can last far less than 10 ms, an acoustic image generated at this rate resembles a hard-to-interpret overlay of multiple sources on the structure under test, along with reflections from the surrounding test environment. This contribution introduces a novel method for visualizing impulse-like sound emissions from automotive components at 10x the frame rate of traditional acoustic cameras. A time resolution of less than 1 ms eventually allows for true localization of the initial and subsequent sound events as well as a clear separation of direct from
Rittenschober, Thomas
Industries that require high-accuracy automation in the creation of high-mix/low-volume parts, such as aerospace, often face cost constraints with traditional robotics and machine tools due to the need for many pre-programmed tool paths, dedicated part fixtures, and rigid production flow. This paper presents a new machine learning (ML) based vision mapping and planning technique, created to enhance flexibility and efficiency in robotic operations, while reducing overall costs. The system is capable of mapping discrete process targets in the robot work envelope that the ML algorithms have been trained to identify, without requiring knowledge of the overall assembly. Using a 2D camera, images are taken from multiple robot positions across the work area and are used in the ML algorithm to detect, identify, and predict the 6D pose of each target. The algorithm uses the poses and target identifications to automatically develop a part program with efficient tool paths, including
Langan, Daniel; Hall, Michael; Goldberg, Emily; Schrandt, Sasha
The segment manipulator machine, a large custom-built apparatus, is used for assembling and disassembling heavy tooling, specifically carbon fiber forms. This complex yet slow-moving machine had been in service for nineteen years, with many control components becoming obsolete and difficult to replace. The customer engaged Electroimpact to upgrade the machine using the latest state-of-the-art controls, aiming to extend the system's operational life by at least another two decades. The program from the previous control system could not be reused, necessitating a complete overhaul.
Luker, Zachary; Donahue, Michael
This study investigates the ignitability of hydrogen in an optical heavy-duty SI engine. While the ignition energy of hydrogen is exceptionally low, the high load and lean mixtures used in heavy-duty hydrogen engines lead to a high gas density, resulting in a much higher breakdown voltage than in light-duty SI engines. Spark plug wear is a concern, so there is a need to minimise the spark energy while maintaining combustion stability, even at challenging conditions for ignition. This work consists of a two-stage experimental study performed in an optical engine. In the first part, we mapped the combustion stability and frequency of misfires with two different ignition systems: a DC inductive discharge ignition system, and a closed-loop controlled capacitive AC system. The equivalence ratio and dwell time were varied for the inductive system while the capacitive system instead varied spark duration and spark current in addition to equivalence ratio. A key finding was that spark energy
Hallstadius, Peter; Saha, Anupam; Sridhara, Aravind; Andersson, Öivind
Autonomous ground navigation has advanced significantly in urban and structured environments, supported by the availability of comprehensive datasets. However, navigating complex and off-road terrains remains challenging due to limited datasets, diverse terrain types, adverse environmental conditions, and sensor limitations affecting vehicle perception. This study presents a comprehensive review of off-road datasets, integrating their applications with sensor technologies and terrain traversability analysis methods. It identifies critical gaps, including class imbalances, sensor performance under adverse conditions, and limitations in existing traversability estimation approaches. Key contributions include a novel classification of off-road datasets based on annotation methods, providing insights into scalability and applicability across diverse terrains. The study also evaluates sensor technologies under adverse conditions and proposes strategies for incorporating event-based and
Musau, Hannah; Ruganuza, Denis; Indah, Debbie; Mukwaya, Arthur; Gyimah, Nana Kankam; Patil, Ashish; Bhosale, Mayuresh; Gupta, Prakhar; Mwakalonge, Judith; Jia, Yunyi; Mikulski, Dariusz; Grabowsky, David; Hong, Jae Dong; Siuhi, Saidi
Accurate reconstruction of vehicle collisions is essential for understanding incident dynamics and informing safety improvements. Traditionally, vehicle speed from dashcam footage has been approximated by estimating the time and distance traveled as the vehicle passes between reference objects. This method limits the resolution of the speed profile to an average speed over given intervals and reduces the ability to identify moments of acceleration or deceleration. A more detailed speed profile can be calculated by solving for the vehicle’s position in each video frame; however, this method is time-consuming, can introduce spatial and temporal error, and is often constrained by the availability of external trackable features in the surrounding environment. Motion tracking software, widely used in the visual effects industry to track camera positions, has been adopted by some collision reconstructionists for determining vehicle speed from video. This study examines the
Perera, Nishan; Griffiths, Harrison; Prentice, Greg
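The per-frame approach described in the dashcam abstract above reduces to displacement per frame multiplied by the frame rate, which yields an instantaneous speed profile rather than one average between reference objects. The position values below are hypothetical numbers for illustration only.

```python
# Hypothetical per-frame vehicle positions along the travel path (meters),
# as would be recovered by solving the camera position in each frame.
fps = 30.0
positions_m = [0.00, 0.55, 1.12, 1.71, 2.32]

# Speed over each inter-frame interval: displacement * frame rate.
speeds_ms = [(b - a) * fps for a, b in zip(positions_m, positions_m[1:])]
speeds_kmh = [v * 3.6 for v in speeds_ms]

# These example positions describe an accelerating vehicle: each
# interval speed exceeds the previous one.
assert all(later > earlier for earlier, later in zip(speeds_kmh, speeds_kmh[1:]))
```

This is why frame-accurate tracking resolves acceleration and braking events that interval-average methods smear out.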
Deliberate modifications to infrastructure can significantly enhance machine vision recognition of road sections designed for Vulnerable Road Users, such as green bike lanes. This study evaluates how green bike lanes, compared to unpainted lanes, enhance machine vision recognition and vulnerable road user safety by keeping vehicles at a safe distance and preventing encroachment into designated bike lanes. Conducted at the American Center for Mobility, the study utilizes a vehicle equipped with a front-facing camera to assess green bike lane recognition capabilities across various environmental conditions, including dry daytime, dry nighttime, rain, fog, and snow. Data collection involved gathering a comprehensive dataset under diverse conditions and generating masks for lane markings to perform comparative analysis for training Advanced Driver Assistance Systems. Quality measurement and statistical analysis are used to evaluate the effectiveness of machine vision recognition using
Ponnuru, Venkata Naga Rithika; Das, Sushanta; Grant, Joseph; Naber, Jeffrey; Bahramgiri, Mojtaba
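Mask-based comparative analysis of the kind described above is typically scored with an overlap metric such as intersection-over-union (IoU) between a detected lane-marking mask and a reference mask. The snippet below is a minimal sketch of that metric on toy binary grids; it is an assumed illustration, not the study's actual quality measurement.

```python
# Minimal IoU sketch for comparing a predicted lane-marking mask
# against a reference mask. Masks are toy 0/1 grids, not study data.

def mask_iou(pred, truth):
    """Intersection-over-union of two equal-shaped binary masks."""
    inter = sum(p & t for rp, rt in zip(pred, truth)
                for p, t in zip(rp, rt))
    union = sum(p | t for rp, rt in zip(pred, truth)
                for p, t in zip(rp, rt))
    return inter / union if union else 1.0

pred  = [[1, 1, 0],
         [1, 1, 0]]
truth = [[1, 1, 1],
         [1, 1, 1]]
iou = mask_iou(pred, truth)  # 4 overlapping pixels / 6 in the union
```

Scores near 1.0 indicate the lane was recognized cleanly; the metric can be averaged per condition (rain, fog, snow, etc.) to compare recognition quality across environments.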
In this study, we introduce RGB2BEV-Net, an end-to-end pipeline that extends traditional BEV segmentation models by generating Bird’s Eye View (BEV) maps directly from raw RGB images. While previous work primarily relied on pre-segmented images to generate corresponding BEV maps, our approach expands this by collecting RGB images alongside their affiliated segmentation masks and BEV representations. This enables RGB camera streams to be fed directly into the pipeline, reflecting real-world autonomous driving scenarios where RGB cameras, rather than pre-segmented images, are the common sensor input. Our model processes four RGB images through a segmentation layer before converting them into a segmented BEV; it is implemented in the PyTorch framework, adapted from an original implementation that used a different framework. This adaptation was necessary to improve compatibility and ensure better integration of the entire system within autonomous vehicle applications. We
Hossain, Sabir; Lin, Xianke
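The geometric core of any camera-to-BEV conversion is a projection from image pixels to ground-plane coordinates. As a hedged illustration of that step only (RGB2BEV-Net itself is a learned pipeline, and the matrix below is a made-up scaling, not a calibrated camera model), a planar homography maps a pixel to BEV coordinates like this:

```python
# Hypothetical sketch: mapping an image pixel (u, v) to ground-plane
# BEV coordinates with a 3x3 homography H (row-major). The H below is
# a toy 0.1 units-per-pixel scaling, purely for illustration.

def warp_point(H, u, v):
    """Apply homography H to pixel (u, v); returns (x, y) after the
    perspective divide by the homogeneous coordinate w."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

H = [[0.1, 0.0, 0.0],
     [0.0, 0.1, 0.0],
     [0.0, 0.0, 1.0]]
bev_x, bev_y = warp_point(H, 320, 240)
```

In a real system H comes from camera calibration (intrinsics plus the camera-to-ground pose), and learned BEV models effectively absorb or refine this geometry inside the network.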
Vehicle-to-Infrastructure (V2I) cooperation has emerged as a fundamental technology for overcoming the limitations of individual ego-vehicle perception. Onboard perception suffers from insufficient information for understanding the environment, a lack of anticipation, performance degradation due to occlusions, and the physical limits of embedded sensors. Cooperative V2I perception extends the ego vehicle’s perception range by receiving information from the infrastructure, which offers a different point of view and is equipped with sensors such as cameras and LiDAR. This technical paper presents a perception pipeline developed for the infrastructure based on images from multiple viewpoints. It is designed to be scalable and has five main components: image acquisition, which configures camera settings and retrieves pixel data; object detection, for fast and accurate detection of four-wheelers, two-wheelers, and pedestrians; a data fusion module for robust
Picard, Quentin; Morice, Malo; Fadili, Maryem; Pechberti, Steve
This paper explores the integration of two deep learning models currently used for object detection, Mask R-CNN and YOLOX, across two distinct driving environments: urban cityscapes and highway settings. The hypothesis underlying this work is that different object detection methods will perform best in different driving environments, owing to each model’s unique strengths and the key differences between those environments. These differences include varying traffic densities, diverse object classes, and differing scene complexities, such as the types of signs present, the presence or absence of stoplights, and the limited-access nature of highways compared with city streets. As part of this work, a scene classifier has also been developed to categorize the driving context as highway or urban driving, in order to allow the overall object detection
Patel, Krunal; Peters, Diane
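The routing idea in the abstract above, a scene classifier selecting which detector handles each frame, can be sketched as a small dispatch layer. The classifier rule and detector names below are stubs invented for illustration; they stand in for, and are not, the actual scene classifier, Mask R-CNN, or YOLOX models.

```python
# Toy sketch of scene-conditioned detector dispatch: classify the
# driving context, then route the frame to the detector assumed to
# perform best there. All behaviour below is stubbed.

def classify_scene(frame):
    """Stub classifier: treat limited-access roads as highway scenes."""
    return "highway" if frame.get("limited_access") else "urban"

# Hypothetical pairing: one detector per scene class.
DETECTORS = {
    "highway": "yolox_stub",      # stand-in for a YOLOX model
    "urban":   "mask_rcnn_stub",  # stand-in for a Mask R-CNN model
}

def detect(frame):
    """Route the frame to the detector chosen for its scene class."""
    return DETECTORS[classify_scene(frame)]

choice = detect({"limited_access": True})
```

In a real system each dictionary value would be a loaded model whose inference method is called on the frame, and the classifier would be a trained network rather than a rule.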
Vehicle ADAS systems comprise two major functions: driving and parking. The most common form of vehicle damage that goes unnoticed, with an unidentified cause, is parking damage: a vehicle parked at a certain location may be damaged without the user’s knowledge. In this work, we developed a solution that not only pre-warns the driver but also prepares the vehicle beforehand when it suspects damage may occur. This eliminates the latency between damage and information capture, detects small damage such as scratches, classifies the type of damage, and informs the user beforehand. This solution differs from existing competitor solutions, which inform the user about scratches/damage but are expensive, have high response times, and capture damage information only after the damage has occurred. The solution consists of the following check blocks: the Precondition, Sensor Control, and Action modules. The Precondition Module observes the vehicle
Debnath, Sarnab; Patil, Prasad; Belur Subramanya, Sheshagiri; Govinda, Shiva Prasad