Navigation and Guidance Systems
This article introduces a comprehensive cooperative navigation algorithm to improve the safety and efficiency of vehicular systems. The algorithm employs surrogate optimization for collision avoidance, combined with cooperative cruise control and lane-keeping functionalities, to address real-world traffic challenges. A dynamic vehicle model supports precise prediction and optimization within a model predictive control (MPC) framework, enabling effective real-time decision-making for collision avoidance. The core of the algorithm incorporates parameters such as relative vehicle positions, velocities, and safety margins to ensure optimal and safe navigation. In the cybersecurity evaluation, four scenarios explore the system’s response to different types of cyberattacks, including data manipulation, signal interference, and spoofing. These scenarios test the algorithm’s ability to detect and mitigate malicious disruptions, and evaluate how well the system can maintain stability and avoid collisions while under attack.
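To make the MPC idea concrete, here is a minimal sketch of one step of such a controller, assuming a toy kinematic model with a single lead vehicle: the quadratic terms for speed tracking, control effort, and a soft safety-margin barrier are illustrative stand-ins for the paper's cost, and all constants (`V_DES`, `D_SAFE`, the weights) are invented for the example.

```python
# Illustrative sketch only: a toy finite-horizon MPC step for cooperative
# cruise control with a collision-avoidance safety margin. The model,
# parameters, and cost weights are assumptions, not the paper's design.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 20          # step [s], prediction horizon
V_DES, D_SAFE = 25.0, 10.0     # desired speed [m/s], safety margin [m]

def rollout(accels, x0, v0):
    """Kinematic rollout of ego position/velocity over the horizon."""
    xs, vs, x, v = [], [], x0, v0
    for a in accels:
        v = v + a * DT
        x = x + v * DT
        xs.append(x); vs.append(v)
    return np.array(xs), np.array(vs)

def cost(accels, x0, v0, lead_x0, lead_v):
    xs, vs = rollout(accels, x0, v0)
    lead_xs = lead_x0 + lead_v * DT * np.arange(1, HORIZON + 1)
    gap = lead_xs - xs                             # inter-vehicle distance
    track = np.sum((vs - V_DES) ** 2)              # cruise-control term
    effort = np.sum(np.asarray(accels) ** 2)       # comfort/effort term
    barrier = 1e3 * np.sum(np.maximum(0.0, D_SAFE - gap) ** 2)  # safety
    return track + 0.1 * effort + barrier

# Solve one MPC step: ego at 0 m doing 27 m/s, lead 40 m ahead at 22 m/s.
res = minimize(cost, np.zeros(HORIZON), args=(0.0, 27.0, 40.0, 22.0),
               bounds=[(-3.0, 2.0)] * HORIZON)
print("first commanded acceleration:", res.x[0])
```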
With 2D cameras and space robotics algorithms, astronautics engineers at Stanford have created a navigation system able to manage multiple satellites using only visual data. They recently tested it in space for the first time. Stanford University, Stanford, CA Someday, instead of large, expensive individual space satellites, teams of smaller satellites - known by scientists as a “swarm” - will work in collaboration, enabling greater accuracy, agility, and autonomy. Among the scientists working to make these teams a reality are researchers at Stanford University's Space Rendezvous Lab, who recently completed the first-ever in-orbit test of a prototype system able to navigate a swarm of satellites using only visual information shared through a wireless network. “It's a milestone paper and the culmination of 11 years of effort by my lab, which was founded with this goal of surpassing the current state of the art and practice in distributed autonomy in space,” said Simone D'Amico.
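The article does not spell out the estimator, but vision-only (“angles-only”) relative navigation ultimately reduces camera measurements to bearing rays. As a hedged illustration of that step - not the Stanford system itself - the toy below triangulates a neighbor's relative position from two bearing measurements taken at known observer positions:

```python
# Toy illustration (not the Stanford system): triangulating a neighbor
# spacecraft's relative position from camera bearing (unit-vector)
# measurements taken at two known observer positions.
import numpy as np

def triangulate(obs_positions, bearings):
    """Least-squares intersection of bearing rays: for a ray through p
    with direction u, the target x should satisfy (I - uu^T)(x - p) = 0."""
    A = np.zeros((3, 3)); b = np.zeros(3)
    for p, u in zip(obs_positions, bearings):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector onto plane normal to u
        A += P; b += P @ p
    return np.linalg.solve(A, b)

target = np.array([100.0, 50.0, 0.0])            # true relative position [m]
obs = [np.zeros(3), np.array([0.0, 30.0, 0.0])]  # two observer positions
rays = [target - p for p in obs]                 # ideal bearing directions
print(triangulate(obs, rays))                    # -> ~[100, 50, 0]
```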
Today, our mobile phones, computers, and GPS systems can give us highly accurate timing and positioning thanks to the over 400 atomic clocks in operation worldwide. All clocks - be they mechanical, atomic, or inside a smartwatch - are made of two parts: an oscillator and a counter. The oscillator provides a periodic variation of some known frequency over time, while the counter counts the number of cycles of the oscillator. Atomic clocks count the oscillations of vibrating atoms that switch between two energy states at a very precise frequency.
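The oscillator-counter split can be shown in one line of arithmetic: the SI second is defined as exactly 9,192,631,770 cycles of the cesium-133 hyperfine transition, so the counter's job is simply converting cycle counts into elapsed time.

```python
# The oscillator/counter split in code: the oscillator supplies cycles at a
# known frequency; the counter turns a cycle count into elapsed time.
F_CS = 9_192_631_770            # cesium-133 frequency [Hz], exact by definition

def elapsed_seconds(cycle_count: int) -> float:
    return cycle_count / F_CS   # counter: cycles -> seconds

print(elapsed_seconds(9_192_631_770))      # 1.0 s
print(elapsed_seconds(551_557_906_200))    # 60.0 s
```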
To meet the requirements of high-precision, stable positioning for autonomous driving vehicles in complex urban environments, this paper designs and develops a multi-sensor fusion intelligent driving hardware and software system based on BDS, IMU, and LiDAR. Although multi-sensor fusion positioning algorithms have made significant progress in recent years, their application and validation on real hardware platforms remain limited; this system aims to fill that gap in hardware platform construction and practical verification. The system integrates BDS dual antennas, an IMU, and LiDAR sensors, enhancing signal-reception stability through an optimized layout and a hardware structure designed for real-time data acquisition and processing in complex environments. The software design is based on factor graph optimization, which uses the global positioning data provided by BDS to constrain the accumulated drift of the IMU and LiDAR odometry.
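As a rough illustration of the factor-graph idea - the general technique, not the paper's implementation - the sketch below anchors a chain of relative odometry factors (standing in for IMU/LiDAR motion estimates) with sparse absolute BDS fixes in one weighted least-squares problem; in this linear 1D toy the graph solves in closed form.

```python
# Minimal factor-graph-style fusion sketch (assumed, not the paper's code):
# between-factors encode relative odometry, unary factors encode BDS fixes,
# and whitened least squares fuses them into a consistent trajectory.
import numpy as np

N = 5                                    # number of poses x_0..x_4
odom = [1.0, 1.1, 0.9, 1.0]              # relative "IMU/LiDAR" measurements
bds = {0: 0.0, 4: 4.2}                   # sparse absolute BDS fixes
s_odom, s_bds = 0.05, 0.5                # assumed noise std devs

rows, rhs, w = [], [], []
for i, d in enumerate(odom):             # between-factors: x_{i+1} - x_i = d
    r = np.zeros(N); r[i], r[i + 1] = -1.0, 1.0
    rows.append(r); rhs.append(d); w.append(1.0 / s_odom)
for i, z in bds.items():                 # unary factors: x_i = z
    r = np.zeros(N); r[i] = 1.0
    rows.append(r); rhs.append(z); w.append(1.0 / s_bds)

A = np.array(rows) * np.array(w)[:, None]  # whiten by measurement precision
b = np.array(rhs) * np.array(w)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # BDS constrains the absolute frame; odometry shapes the chain
```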
From your car’s navigation display to the screen you are reading this on, luminescent polymers — a class of flexible materials that contain light-emitting molecules — are used in a variety of today’s electronics. They stand out for their light-emitting capability coupled with remarkable flexibility and stretchability, giving them broad potential across diverse fields of application.
The escalation of road infrastructure anomalies, such as speed breakers and potholes, presents a formidable challenge to vehicular safety, efficient traffic management, and road maintenance strategies worldwide. In addressing these pervasive issues, this paper proposes an advanced, integrated approach for the detection and classification of speed breakers and potholes. Combining deep learning methodologies with enhanced image processing techniques, our solution leverages object detection to analyze and interpret real-time visual data captured through vehicle-mounted camera systems. This research details the comprehensive process involved in the development of this system, including the acquisition and preprocessing of a vast, varied dataset representative of numerous road types, conditions, and environmental factors. Through rigorous training, testing, and validation phases, the model demonstrates remarkable proficiency in recognizing and classifying speed breakers and potholes across diverse conditions.
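The paper names object detection but not a specific network, so the following is one common recipe rather than the authors' model: fine-tuning torchvision's Faster R-CNN by swapping its box predictor for the two road-anomaly classes (the class count and input size are assumptions).

```python
# Hedged sketch: fine-tuning a standard torchvision detector for two
# road-anomaly classes. Not the paper's architecture, just one common setup.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + pothole + speed breaker (assumed labels)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.eval()
frame = torch.rand(3, 480, 640)           # stand-in for a dashcam frame
with torch.no_grad():
    out = model([frame])[0]               # dict of boxes, labels, scores
print(out["boxes"].shape, out["scores"].shape)
```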
This paper presents the development of a cost-effective assistive headgear designed to address the navigation challenges faced by millions of visually impaired individuals in India. Existing solutions are often prohibitively expensive, leaving a significant portion of this population underserved. To address this gap, we propose a novel human-machine interface that utilizes a synergistic combination of computer vision, stereo imaging, and haptic feedback technologies. The focus of this project lies in the creation of a practical and affordable headgear that empowers visually impaired users with real-time obstacle detection and navigation capabilities. The solution leverages computer vision for environmental analysis and integrates haptic feedback for intuitive user guidance. This paper details the design intricacies of the headgear, along with the implementation methodologies employed. We present comprehensive testing results and discuss the project's potential to significantly enhance the independence and mobility of visually impaired users.
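One plausible core loop for such a device - a sketch under assumed camera intrinsics and PWM range, not the paper's firmware - computes stereo disparity, converts it to depth, and maps the nearest obstacle to a haptic vibration level:

```python
# Illustrative stereo-to-haptics loop: OpenCV block matching gives disparity,
# Z = f*B/d gives depth, and the nearest obstacle sets vibration strength.
# Focal length, baseline, and range limits are assumed values.
import cv2
import numpy as np

FOCAL_PX, BASELINE_M = 700.0, 0.06       # assumed camera intrinsics
MAX_RANGE_M = 4.0                        # beyond this, no vibration

def haptic_level(left_gray, right_gray):
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disp > 1.0                           # ignore unmatched pixels
    if not valid.any():
        return 0
    depth = FOCAL_PX * BASELINE_M / disp[valid]  # Z = f*B/d
    nearest = float(depth.min())
    if nearest >= MAX_RANGE_M:
        return 0
    return int(255 * (1.0 - nearest / MAX_RANGE_M))  # closer -> stronger

# e.g. feed rectified grayscale frames from the two headgear cameras:
# pwm = haptic_level(left_frame, right_frame)
```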
Golden packages on the Moon? Not exactly. This is no extraterrestrial gift depot, but a cutting-edge project in the LUNA hall. Here, the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) has been researching how payload boxes, sensors, rovers and astronauts can connect to form an integrated network. These participants, or nodes within the network, exchange signals that facilitate both communication and navigation.
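A hedged sketch of the navigation side of such a network (the ranging scheme and geometry are assumptions, not DLR's design): two-way ranging converts a signal's round-trip time into distance, and three or more anchor nodes let a rover multilaterate its position.

```python
# Toy two-way ranging + multilateration: round-trip time -> range, then a
# least-squares fix from several anchor nodes. All values are illustrative.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                         # signal propagation speed [m/s]

def range_from_rtt(rtt_s, processing_delay_s=0.0):
    return C * (rtt_s - processing_delay_s) / 2.0   # two-way ranging

def locate(anchors, ranges, guess):
    """Least-squares position fix from anchor positions and ranges."""
    residual = lambda p: np.linalg.norm(anchors - p, axis=1) - ranges
    return least_squares(residual, guess).x

anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])  # payload boxes
true_pos = np.array([20.0, 30.0])                           # rover
ranges = np.linalg.norm(anchors - true_pos, axis=1)         # ideal ranges
print(locate(anchors, ranges, guess=np.array([1.0, 1.0])))  # ~[20, 30]
```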
LIDAR-based autonomous mobile robots (AMRs) are increasingly used for gas detection in industry. They can detect tiny changes in the composition of indoor environments that are too risky for humans to enter, making them well suited to gas detection. This work focuses on detecting gas leaks and preventing accidents in industrial sectors using an AMR that combines a LIDAR sensor for autonomous navigation with an MQ2 sensor for identifying leaks of toxic and explosive gases, alerting the necessary personnel in real time. A gas distribution mapping (GDM) layer, built on a simultaneous localization and mapping (SLAM) algorithm, directs the robot toward the leakage point immediately, helping avert accidents. A Raspberry Pi 4 handles data processing, while PGM45775 DC motors drive the platform and a 2D LIDAR enables 360° mapping. The adoption of LIDAR-based AMRs promises safer, faster gas-leak response in industrial environments.
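A toy version of the GDM-plus-SLAM loop (grid size, update rule, and readings invented for illustration): MQ2 readings logged at SLAM-estimated cells build a concentration map, and the robot heads for the strongest estimated source.

```python
# Toy gas distribution mapping: running-average gas estimates per map cell,
# with the next waypoint chosen at the peak of the estimated concentration.
import numpy as np

GRID = np.zeros((20, 20))        # gas map over a 20x20-cell floor plan
COUNTS = np.zeros((20, 20))      # samples per cell, for running averages

def log_reading(cell, mq2_value):
    """Fuse a gas reading into the map at the robot's SLAM-estimated cell."""
    i, j = cell
    COUNTS[i, j] += 1
    GRID[i, j] += (mq2_value - GRID[i, j]) / COUNTS[i, j]

def next_waypoint():
    """Head toward the strongest estimated gas source."""
    return np.unravel_index(np.argmax(GRID), GRID.shape)

log_reading((5, 5), 120.0)       # ambient reading
log_reading((12, 8), 860.0)      # strong reading near a leak
print(next_waypoint())           # -> (12, 8)
```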
In certain situations, an Advanced Air Mobility (AAM) aircraft must land without assistance from GPS data. For example, tall buildings and narrow urban canyons may degrade an AAM aircraft's ability to use GPS effectively when accessing a landing area. Incorporating a vision-based navigation method, NASA Ames has developed a novel Alternative Position, Navigation, and Timing (APNT) solution for AAM aircraft in environments where GPS is not available.
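One plausible ingredient of such a vision-based solution - an assumption for illustration, not NASA's published method - is recovering the aircraft's pose relative to a landing pad of known geometry with a PnP solver (the pad size, pixel detections, and intrinsics below are made up):

```python
# Illustrative pad-relative pose from vision: known pad corner geometry plus
# detected image corners -> camera pose via OpenCV's PnP solver.
import cv2
import numpy as np

PAD_M = 2.0  # assumed pad size: corners of a 2 m square, pad frame, Z = 0
object_pts = np.array([[0, 0, 0], [PAD_M, 0, 0],
                       [PAD_M, PAD_M, 0], [0, PAD_M, 0]], dtype=np.float64)
image_pts = np.array([[310, 240], [420, 235],
                      [430, 350], [305, 355]], dtype=np.float64)  # detections
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])       # intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    # Camera position in the pad frame is -R^T t.
    print("camera position in pad frame [m]:",
          (-cv2.Rodrigues(rvec)[0].T @ tvec).ravel())
```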
A new scientific technique could significantly improve the reference frames that millions of people rely upon each day when using GPS navigation services, according to a recently published article in Radio Science.
Southwest Research Institute has developed off-road autonomous driving tools with a focus on stealth for the military and agility for space and agriculture clients. The vision-based system pairs stereo cameras with novel algorithms, eliminating the need for LiDAR and active sensors.