Browse Topic: Navigation and guidance systems
In contemporary society, where Global Navigation Satellite Systems (GNSS) are used extensively, their inherent fragility poses hazards to the safety of ship navigation. To address this issue, the present study develops an ASM signal delay measurement system based on software-defined radio peripherals. The system comprises two parts: a transmitting end and a receiving end. The transmitting end employs a signal generator, a first time-frequency synchronisation device, and a VHF transmitting antenna to transmit ASM signals containing dual Barker-13 training sequences. The receiving end comprises software-defined radio equipment, a second time-frequency synchronisation device, a computing host, and a VHF receiving antenna. A sliding correlation algorithm enables accurate time-delay estimation. The present study leverages the high-performance, low-cost advantages of the universal
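The sliding-correlation delay estimate described in this abstract can be sketched in a few lines of NumPy. The Barker-13 sequence is the standard one; the sample rate, noise level, and embedded offset below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Barker-13 code, the training sequence the ASM signal carries (doubled in the paper)
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def estimate_delay(received, template, fs):
    """Sliding correlation: slide the known template over the received
    samples and take the lag with the largest correlation magnitude as
    the arrival instant, converted to seconds via the sample rate fs."""
    corr = np.correlate(received, template, mode="valid")
    lag = int(np.argmax(np.abs(corr)))
    return lag / fs

# Self-test: bury the template at sample 200 in Gaussian noise
rng = np.random.default_rng(0)
fs = 1e6                                   # assumed 1 MHz sample rate
rx = 0.1 * rng.standard_normal(1000)
rx[200:200 + len(BARKER13)] += BARKER13
print(estimate_delay(rx, BARKER13, fs))    # 0.0002 (200 samples at 1 MHz)
```

Barker codes are used for exactly this purpose: their autocorrelation sidelobes never exceed 1, so the peak at the true lag stands out sharply even in noise.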
We present a novel processing framework for extracting ship traffic flow, designed to cope with the large volume, high noise levels, and complex spatio-temporal nature of AIS data. We preprocess AIS data with covariance-matrix-based abnormal-data filtering, develop an improved Douglas-Peucker (DP) algorithm for multi-granularity trajectory compression, identify navigation hotspots and intersections using density-based spatial clustering, and visualize chart overlays using the Mercator projection. In experiments with AIS data from the Laotieshan waters in Bohai Bay, we achieve compression rates of up to 97% while keeping the retention error of key trajectory features below 0.15 nautical miles. We identify critical areas such as waterway intersections and generate traffic flow heatmaps for maritime management, route planning, and related applications.
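The classic Douglas-Peucker step underlying this abstract's compression can be sketched as follows. This is the textbook algorithm on planar coordinates, not the paper's improved multi-granularity variant (which would, for example, tune the tolerance per segment):

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b
    (planar approximation; real AIS work would use projected coordinates)."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, eps):
    """Keep the endpoints; if the farthest intermediate point deviates
    more than eps from the chord, keep it and recurse on both halves,
    otherwise drop everything in between."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] > eps:
        left = douglas_peucker(points[:imax + 1], eps)
        right = douglas_peucker(points[imax:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]

# Near-straight track collapses to its endpoints; a real turn survives
print(douglas_peucker([(0, 0), (1, 0.01), (2, 0.0), (3, 0.01), (4, 0)], 0.5))
# → [(0, 0), (4, 0)]
```

The very high compression rates reported for AIS tracks arise because ships spend most of their time on straight legs, which DP reduces to two points each.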
The Vision for Off-road Autonomy (VORA) project used passive, vision-only sensors to generate a dense, robust world model for use in off-road navigation. The research resulted in vision-based algorithms applicable to defense and surveillance autonomy, intelligent agricultural applications, and planetary exploration. Passive perception for world modeling enables stealth operation (since lidars can alert observers) and does not require more expensive or specialized sensors (e.g., radar or lidar). Over the course of this three-phase program, SwRI built components of a vision-only navigation pipeline and tested the result on a vehicle platform in an off-road environment.
Despite all the technological evolution in navigation, waters just off coastal shores around the globe have remained a black box. That is, until researchers from The University of Texas at Austin and Oregon State University developed a new technology that uses satellites in space to map out these tricky areas.
As NASA’s Artemis missions build out infrastructure on and around the Moon in the coming years, CubeSats and other small satellites will likely play an important role in a communications network that will enable not only conversation with mission control but also navigation, direct scientific observations, and more, all enabled by an internet-like “LunaNet.” These little satellites are cheap to launch and can form constellations for relaying signals reliably. But their small size makes it hard for them to carry antennas large enough to communicate across vast distances.
This article introduces a comprehensive cooperative navigation algorithm to improve vehicular system safety and efficiency. The algorithm employs surrogate optimization to prevent collisions while providing cooperative cruise control and lane-keeping functionality. These strategies address real-world traffic challenges. The dynamic model supports precise prediction and optimization within the MPC framework, enabling effective real-time decision-making for collision avoidance. The critical component of the algorithm incorporates multiple parameters, such as relative vehicle positions, velocities, and safety margins, to ensure optimal and safe navigation. In the cybersecurity evaluation, four scenarios explore the system's response to different types of cyberattack, including data manipulation, signal interference, and spoofing. These scenarios test the algorithm's ability to detect and mitigate the effects of malicious disruptions and evaluate how well the system can maintain stability and avoid
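The constraints this controller enforces combine relative positions, velocities, and safety margins. As a much simpler illustration (not the paper's surrogate-optimization or MPC method), a basic safe-gap test of the kind such constraints encode might look like this; all parameter values are illustrative assumptions:

```python
def safe_gap(v_follower, v_leader, margin, t_react=1.0, a_max=6.0):
    """Minimum gap (m) that avoids collision if both vehicles brake at
    a_max: reaction distance, plus the difference in braking distances,
    plus a fixed safety margin. t_react and a_max are assumed values."""
    d_brake = max(v_follower ** 2 - v_leader ** 2, 0.0) / (2 * a_max)
    return v_follower * t_react + d_brake + margin

def needs_intervention(gap, v_follower, v_leader, margin=5.0):
    """True when the current gap is below the computed safe gap."""
    return gap < safe_gap(v_follower, v_leader, margin)

# Follower at 25 m/s, leader at 20 m/s, 40 m apart:
# safe gap = 25*1 + (625-400)/12 + 5 = 48.75 m, so intervene
print(needs_intervention(40.0, 25.0, 20.0))  # True
```

An MPC formulation would impose a constraint like this at every step of the prediction horizon rather than as a one-shot check.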
With 2D cameras and space robotics algorithms, astronautics engineers at Stanford have created a navigation system able to manage multiple satellites using visual data only. They recently tested it in space for the first time. Stanford University, Stanford, CA Someday, instead of large, expensive individual space satellites, teams of smaller satellites - known by scientists as a “swarm” - will work in collaboration, enabling greater accuracy, agility, and autonomy. Among the scientists working to make these teams a reality are researchers at Stanford University's Space Rendezvous Lab, who recently completed the first-ever in-orbit test of a prototype system able to navigate a swarm of satellites using only visual information shared through a wireless network. “It's a milestone paper and the culmination of 11 years of effort by my lab, which was founded with this goal of surpassing the current state of the art and practice in distributed autonomy in space,” said Simone D'Amico
Today, our mobile phones, computers, and GPS systems can give us very accurate time and positioning thanks to the over 400 atomic clocks worldwide. All sorts of clocks, be they mechanical, atomic, or inside a smartwatch, are made of two parts: an oscillator and a counter. The oscillator provides a periodic variation of some known frequency over time, while the counter counts the number of cycles of the oscillator. Atomic clocks count the oscillations of vibrating atoms that switch between two energy states with very precise frequency.
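The oscillator-plus-counter picture reduces to a single division: elapsed time is cycles counted over oscillator frequency. The caesium frequency below is the SI definition of the second; the quartz value is the standard watch-crystal frequency:

```python
CS133_HZ = 9_192_631_770   # SI second: cycles of the Cs-133 hyperfine transition
QUARTZ_HZ = 32_768         # typical watch crystal (2**15 Hz)

def elapsed_seconds(cycle_count, oscillator_hz):
    """A clock = oscillator + counter: the counter's running total
    divided by the oscillator frequency gives elapsed time."""
    return cycle_count / oscillator_hz

print(elapsed_seconds(CS133_HZ, CS133_HZ))    # 1.0 (one SI second)
print(elapsed_seconds(QUARTZ_HZ, QUARTZ_HZ))  # 1.0
```

The accuracy gap between the two is entirely in the oscillator: the counting step is identical, but a caesium transition holds its frequency vastly more stably than a quartz crystal.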
To meet the requirements of high-precision and stable positioning for autonomous driving vehicles in complex urban environments, this paper designs and develops a multi-sensor fusion intelligent driving hardware and software system based on BDS, IMU, and LiDAR. This system aims to fill the current gap in hardware platform construction and practical verification within multi-sensor fusion technology. Although multi-sensor fusion positioning algorithms have made significant progress in recent years, their application and validation on real hardware platforms remain limited. To address this issue, the system integrates BDS dual antennas, IMU, and LiDAR sensors, enhancing signal reception stability through an optimized layout design and improving hardware structure to accommodate real-time data acquisition and processing in complex environments. The system’s software design is based on factor graph optimization algorithms, which use the global positioning data provided by BDS to constrain
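The paper's factor-graph optimizer fusing BDS, IMU, and LiDAR is far richer than anything shown here. As a toy of the underlying idea, a single-variable factor graph with Gaussian factors reduces in closed form to inverse-variance weighting; all numbers below are made up for illustration:

```python
import numpy as np

def fuse(estimates, variances):
    """One-variable factor graph: each sensor contributes a quadratic
    factor (x - z_i)^2 / var_i; minimizing the sum yields the
    inverse-variance weighted mean and the fused variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    x = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    return x, float(1.0 / np.sum(w))

# Hypothetical 1-D position reads: BDS fix, IMU dead reckoning, LiDAR match
x, var = fuse([10.0, 10.4, 9.9], [0.25, 1.0, 0.09])
print(round(x, 3))  # 9.956, pulled toward the low-variance LiDAR estimate
```

A real factor graph extends this to many poses and landmarks linked by relative constraints, solved jointly by sparse nonlinear least squares rather than this closed form.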
From your car’s navigation display to the screen you are reading this on, luminescent polymers — a class of flexible materials that contain light-emitting molecules — are used in a variety of today’s electronics. Luminescent polymers stand out for their light-emitting capability, coupled with their remarkable flexibility and stretchability, showcasing vast potential across diverse fields of application.