Browse Topic: Radar

Items (894)
The rapid advancement in the autonomous vehicle industry has underscored the critical role of sensors in identifying and tracking traffic participants. Among these sensors, radar plays a pivotal role due to its ability to function reliably in various weather and lighting conditions. This paper presents a phenomenological radar sensor model designed to simulate the behavior of real radar systems under diverse scenarios, including noisy environments and accidental situations. As the complexity of autonomous systems increases, relying solely on on-road and bench testing becomes insufficient for meeting stringent safety and performance standards. These traditional testing methods may not encompass the wide range of potential scenarios that autonomous vehicles might encounter. As a result, virtual environment modeling has emerged as a crucial tool for validating driving functions, assistance systems, and the strategic placement of multiple sensors. In contrast to high-fidelity radar models
Hanumanthaiah, Manjunath; S, Girish; Durairaj, Priya
Object detection (OD) is one of the most important tasks in Autonomous Driving (AD) applications. It depends on strategic sensor selection and the placement of sensors around the vehicle. Sensors should be selected based on various constraints such as range, use case, and cost limitations. This paper introduces a systematic approach for identifying the optimal practices for selecting sensors for AD object detection, offering guidance for those looking to expand their expertise in this field and select the most suitable sensors accordingly. In general, object detection typically involves RADAR, LiDAR, and cameras. RADAR excels at accurately measuring longitudinal distances over both long and short ranges, but its accuracy in lateral distances is limited. LiDAR is known for its ability to provide accurate range data, but it struggles to identify objects in various weather conditions. Camera-based systems, on the other hand, offer superior recognition capabilities but lack
Maktedar, Asrarulhaq; Chatterjee, Mayurika
ABSTRACT Localization refers to the process of estimating one's location (and often orientation) within an environment. Ground vehicle automation, which offers the potential for substantial safety and logistical benefits, requires accurate, robust localization. Current localization solutions, including GPS/INS, LIDAR, and image registration, are all inherently limited in adverse conditions. This paper presents a method of localization that is robust to most conditions that hinder existing techniques. MIT Lincoln Laboratory has developed a new class of ground penetrating radar (GPR) with a novel antenna array design that allows mapping of the subsurface domain for the purpose of localization. A vehicle driving through the mapped area uses a novel real-time correlation-based registration algorithm to estimate the location and orientation of the vehicle with respect to the subsurface map. A demonstration system has achieved localization accuracy of 2 cm. We also discuss tracking results
Stanley, Byron; Cornick, Matthew; Koechling, Jeffrey
ABSTRACT The complex future battlefield will require the ability for quick identification of threats in chaotic environments followed by decisive and accurate threat mitigation by lethal force or countermeasure. Integration and synchronization of high bandwidth sensor capabilities into military vehicles is essential to identifying and mitigating the full range of threats. High bandwidth sensors including Radar, Lidar, and electro-optical sensors provide real-time information for active protection systems, advanced lethality capabilities, situational understanding and automation. The raw sensor data from Radar systems can exceed 10 gigabytes per second and high definition video is currently at 4 gigabytes per second with increased resolution standards emerging. The processing and memory management of the real time sensor data assimilated with terrain maps and external communication information requires a high performance electronic architecture with integrated data management. GDLS has
Silveri, Andrew
ABSTRACT For safe navigation through an environment, autonomous ground vehicles rely on sensory inputs such as cameras, LiDAR, and radar for detection and classification of obstacles and impassable terrain. These sensors provide data representing 3D space surrounding the vehicle. Often this data is obscured by dust, precipitation, objects, or terrain, producing gaps in the sensor field of view. These gaps, or occlusions, can indicate the presence of obstacles, negative obstacles, or rough terrain. Because sensors receive no data in these occlusions, sensor data provides no explicit information about what might be found in the occluded areas. To provide the navigation system with a more complete model of the environment, information about the occlusions must be inferred from sensor data. In this paper we show a probabilistic method for mapping point cloud occlusions in real-time and how knowledge of these occlusions can be integrated into an autonomous vehicle obstacle detection and
Bybee, Taylor C.; Ferrin, Jeffrey L.
Object detection is one of the core tasks in autonomous driving perception systems. Most perception algorithms commonly use cameras and LiDAR sensors, but their robustness is insufficient in harsh environments such as heavy rain and fog. Moreover, the velocity of objects is crucial for identifying motion states. The next generation of 4D millimeter-wave radar retains traditional radar's advantages in robustness and speed measurement, while also providing height information, higher resolution, and higher density. 4D radar has great potential in the field of 3D object detection. However, existing methods overlook the need for feature extraction modules specific to 4D millimeter-wave radar, which can lead to potential information loss. In this study, we propose RadarPillarDet, a novel approach for extracting features from 4D radar to achieve high-quality object detection. Specifically, our method introduces a dual-stream encoder (DSE) module, which combines a traditional multilayer perceptron and
Yang, Long; Zheng, Lianqing; Mo, Jingyue; Bai, Jie; Zhu, Xichan; Ma, Zhixiong
In this paper, a single-chip design for an automotive 4D millimeter-wave radar is proposed. Compared to conventional 3D millimeter-wave radar, this innovative scheme features a MIMO antenna array and advanced waveform design, significantly enhancing the radar's elevation measurement capabilities. The maximum measurement error is approximately ±0.3° for azimuth within ±50° and about ±0.4° for elevation within ±15°. Extensive road testing has demonstrated that the designed radar can routinely measure targets such as vehicles, pedestrians, and bicycles, while also accurately detecting additional objects such as overpasses and guide signs. The cost of this radar is comparable to that of traditional automotive 3D millimeter-wave radar, and it has been successfully integrated into a forward radar system for a specific vehicle model
Cai, Yongjun; Zhang, Xiansheng; Bai, Jie; Shen, Hui-Liang; Rao, Bing
RADAR antennae come in varying sizes and shapes. They are often employed in heterogeneous systems (i.e., systems that use multiple detection methods) to detect and visualize objects. Object identification in the context of automated vehicle behavior design can require extensive data sets to train algorithms that have the potential to make dynamic driving decisions. A widely available platform would increase the ability of researchers to learn about automated systems and to gather data, which may be necessary for training automated vehicle systems. This work describes the application of a 77 GHz portable antenna to the description of standard fleet vehicles as well as a suite of soft targets contextualized within polar plots. This work shows that object detection and identification are possible in off-the-shelf portable systems that combine readily available materials and software in a reproducible manner. The described system and algorithm create a visual correlate
Chen, Aaron; Hartman, Ethan; Lin, Vincent; Manahan, Taylor; Sidhu, Anmol; Eichaker, Lauren
Automotive radar plays a crucial role in object detection and tracking. While a standalone radar possesses ideal characteristics, integrating it within a vehicle introduces challenges. The presence of vehicle body, bumper, chassis, and cables in proximity influences the electromagnetic waves emitted by the radar, thereby impacting its performance. To address these challenges, electromagnetic simulations can guide early-stage design modifications. However, operating at very high frequencies around 77 GHz and dealing with the large electrical size of complex structures demand specialized simulation techniques to optimize radar integration scenarios. Thus, the primary challenge lies in achieving an optimal balance between accuracy and computational resources/simulation time. This paper outlines the process of radar vehicle integration from an electromagnetic perspective and demonstrates the derivation of optimal solutions through RF simulation
Rao, Sukumara; M K, Yadhu Krishnan
Southwest Research Institute has developed off-road autonomous driving tools with a focus on stealth for the military and agility for space and agriculture clients. The vision-based system pairs stereo cameras with novel algorithms, eliminating the need for LiDAR and active sensors
Researchers at the University of California, Davis, have developed a proof-of-concept sensor that may usher in a new era for millimeter wave radars. They call its design a “mission impossible” made possible
Phased array radar technology has been gaining popularity since its initial introduction in the 1960s and is now being used in a variety of applications, from military and defense to civilian sectors and even space exploration. This cutting-edge technology has revolutionized radar systems by offering unparalleled flexibility, precision, and speed. At the heart of phased array radar lies a sophisticated antenna system composed of numerous individual elements, each capable of independently emitting and receiving radio waves. Unlike traditional radar systems that rely on mechanically rotating antennas, phased array radars electronically steer their beams, enabling rapid and precise target acquisition. This breakthrough is made possible by meticulously controlling the phase of radio waves emitted from each antenna element
A potentially effective means for ground system radar cross section reduction (RCSR) involves a checkerboard-arranged applique (ACA) composed of artificial magnetic conductor (AMC) metasurfaces which can result in phase modification – and thus destructive interference – of the reflected radar energy. This effort focused on the development of such a concept through the following main tasks: (1) the development of performance goals; (2) the selection of the AMC topology pattern; (3) the development of various performance models based upon transmission line theory and antenna planar array theory, and the use of various computational electromagnetics (CEM) solvers; (4) model validation; (5) the optimization of the AMC pattern through a design of experiment (DOE) approach; and (6) the development of a genetic programming framework for more rigorous ACA optimization
Tison, Nathan; D’Archangel, Jeffrey
Radio frequency (RF) and microwave signals are integral carriers of information for technology that enriches our everyday life – cellular communication, automotive radar sensors, and GPS navigation, among others. At the heart of each system is a single-frequency RF or microwave source, the stability and spectral purity of which is critical. While these sources are designed to generate a signal at a precise frequency, in practice the exact frequency is blurred by phase noise, arising from component imperfections and environmental sensitivity, that compromises ultimate system-level performance
Metasurfaces, comprised of sub-wavelength structures, possess remarkable electromagnetic (EM) wave manipulation capabilities. Their application as radar absorbers has gained widespread recognition, particularly in modern stealth technology, where their main role is to minimize the radar cross-section (RCS) of military assets. Conventional radar absorber design is tedious because of its time-consuming, computationally intensive, iterative nature and its demand for a high level of expertise. In contrast, the emergence of machine/deep learning-based metasurface design for RCS reduction represents a rapidly evolving field. This approach offers an automated and computationally efficient means to generate radar absorber designs. In this article, an inverse approach using machine/deep learning methodology is presented for a multilayered broadband microwave absorber. The proposed method is primarily based on geometry and absorption characteristics. The proposed design is based on an in-depth
P K, Anjana; V, Abhilash P; Bisariya, Siddharth; Sutrakar, Vijay Kumar
Accurate and reliable localization in GNSS-denied environments is critical for autonomous driving. Nevertheless, LiDAR-based and camera-based methods are easily affected by adverse weather conditions such as rain, snow, and fog. The 4D Radar with all-weather performance and high resolution has attracted more interest. Currently, there are few localization algorithms based on 4D Radar, so there is an urgent need to develop reliable and accurate positioning solutions. This paper introduces RIO-Vehicle, a novel tightly coupled 4D Radar/IMU/vehicle dynamics within the factor graph framework. RIO-Vehicle aims to achieve reliable and accurate vehicle state estimation, encompassing position, velocity, and attitude. To enhance the accuracy of relative constraints, we introduce a new integrated IMU/Dynamics pre-integration model that combines a 2D vehicle dynamics model with a 3D kinematics model. Then, we employ a dynamic object removal process to filter out dynamic points from a single 4D
Zhu, Jiaqi; Zhuo, Guirong; Xiong, Lu; Zihang, He; Leng, Bo
Lane detection plays a critical role in autonomous vehicles for safe and reliable navigation. Lane detection is traditionally accomplished using a camera sensor and computer vision processing. The downside of this traditional technique is that it can be computationally intensive when high-quality images at a fast frame rate are used, and it has reliability issues from occlusion sources such as glare, shadows, active road construction, and more. This study addresses these issues by exploring alternative methods for lane detection in specific scenarios arising from road-construction-induced lane shift and sun glare. Specifically, a camera-based lane detection method using U-Net, a convolutional network for image segmentation, is compared with a radar-based approach using a new type of sensor previously unused in the autonomous vehicle space: radar retro-reflectors. This evaluation is performed using ground truth data, obtained by measuring the lane positions and transforming them into pixel
Brown, Nicolas Eric; Patil, Pritesh; Sharma, Sachin; Kadav, Parth; Fanas Rojas, Johan; Hong, Guan Yue; DaHan, Liao; Ekti, Ali; Wang, Ross; Meyer, Rick; Asher, Zachary
Traditional autonomous vehicle perception subsystems that use onboard sensors have the drawbacks of high computational load and data duplication. Infrastructure-based sensors, which can provide high quality information without the computational burden and data duplication, are an alternative to traditional autonomous vehicle perception subsystems. However, these technologies are still in the early stages of development and have not been extensively evaluated for lane detection system performance. Therefore, there is a lack of quantitative data on their performance relative to traditional perception methods, especially during hazardous scenarios, such as lane line occlusion, sensor failure, and environmental obstructions. We address this need by evaluating the influence of hazards on the resilience of three different lane detection methods in simulation: (1) traditional camera detection using a U-Net algorithm, (2) radar detections using infrastructure-based radar retro-reflectors (RRs
Patil, Pritesh; Fanas Rojas, Johan; Kadav, Parth; Sharma, Sachin; Masterson, Alexandra; Wang, Ross; Ekti, Ali; DaHan, Liao; Brown, Nicolas; Asher, Zachary
SLAM (Simultaneous Localization and Mapping) plays a key role in autonomous driving. Recently, 4D Radar has attracted widespread attention because it breaks through the limitations of 3D millimeter wave radar and can simultaneously detect the distance, velocity, horizontal azimuth and elevation azimuth of the target with high resolution. However, there are few studies on 4D Radar in SLAM. In this paper, RI-FGO, a 4D Radar-Inertial SLAM method based on Factor Graph Optimization, is proposed. The RANSAC (Random Sample Consensus) method is used to eliminate the dynamic obstacle points from a single scan, and the ego-motion velocity is estimated from the static point cloud. A 4D Radar velocity factor is constructed in GTSAM to receive the estimated velocity in a single scan as a measurement and directly integrated into the factor graph. The 4D Radar point clouds of consecutive frames are matched as the odometry factor. A modified scan context method, which is more suitable for 4D Radar’s
Zihang, He; Xiong, Lu; Zhuo, Guirong; Gao, Letian; Lu, Shouyi; Zhu, Jiaqi; Leng, Bo
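The ego-motion step the RI-FGO abstract describes — estimating the vehicle's velocity from the Doppler returns of static points while RANSAC rejects dynamic obstacles — can be sketched generically. This is not the paper's implementation; the function name, the 2D simplification, and the thresholds are all assumptions. For a static target at unit direction u, the measured radial velocity satisfies v_r = -u · v_ego, so two non-parallel static returns determine the ego velocity:

```python
import numpy as np

def estimate_ego_velocity(points, radial_velocities, n_iters=100, tol=0.2, rng=None):
    """RANSAC estimate of 2D ego velocity from radar Doppler returns.

    Static targets satisfy v_r = -u . v_ego (u = unit direction to the point);
    returns from moving objects violate this and are rejected as outliers.
    """
    rng = np.random.default_rng(rng)
    u = points / np.linalg.norm(points, axis=1, keepdims=True)  # unit directions
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=2, replace=False)    # minimal sample
        v, *_ = np.linalg.lstsq(-u[idx], radial_velocities[idx], rcond=None)
        inliers = np.abs(-u @ v - radial_velocities) < tol      # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the full static (inlier) point set
    v_ego, *_ = np.linalg.lstsq(-u[best_inliers],
                                radial_velocities[best_inliers], rcond=None)
    return v_ego, best_inliers
```

The inlier mask doubles as the dynamic-point filter: points outside the consensus set are the moving obstacles the pipeline removes before registration.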
Behrooz Rezvani, founder and CEO of Neural Propulsion Systems, cuts to the chase quickly. “We can improve the performance of any radar and help it see clearer, farther and sooner,” he said. Using a mathematical framework initially discussed in an MIT research paper 14 years ago, Rezvani says his company can take any manufacturer's radar unit and help it:
- Increase resolution by a factor of 10 for two-dimensional imaging
- Suppress 10 times the number of false positives
- Detect targets at twice the current distance with a lidar-like point-cloud density
- Differentiate notoriously difficult targets, such as pedestrians walking or standing next to parked vehicles
NPS Executive Consultant Lawrence Burns, the former head of GM research and development, has seen plenty of advancements during deep involvement with the development of night vision and adaptive cruise control. But he always knew existing radar systems were not yet the answer for the future needs of hands-free driving and other
Clonts, Chris
Northrop Grumman Corporation is developing AN/APG-85, an advanced Active Electronically Scanned Array (AESA) radar for the F-35 Lightning II. Northrop Grumman currently manufactures the AN/APG-81 active electronically scanned array (AESA) fire control radar, the cornerstone to the F-35 Lightning II’s sensor suite
Researchers have created a device that enables them to electronically steer and focus a beam of terahertz electromagnetic energy with extreme precision. This opens the door to high-resolution, real-time imaging devices that are hundredths the size of other radar systems and more robust than other optical systems
Driver safety has become an important concern, and RADAR is an essential part of addressing it in vehicles; hence RADAR has great significance in the automotive industry. The radar sensor collects data from its surroundings that may contain unwanted returns, which can lead to improper detection of intended objects, so clustering methods must be applied to the radar point cloud data to obtain proper object detections. Numerous unsupervised clustering methods are used for RADAR applications. In this paper, comparisons of different unsupervised algorithms such as K-Means clustering, hierarchical clustering, clustering using the Gaussian Mixture Model, and DBSCAN are presented. All these clustering algorithms are evaluated against various criteria such as the silhouette coefficient and the Davies-Bouldin index. Based on these evaluations and comparative studies, applications of the clustering algorithms are classified
Prajapati, Miit; Payghan, Vaibhav; Chauhan, Abhisha; Nidubrolu, Kranthi
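The comparison the abstract describes can be reproduced in miniature with scikit-learn: cluster a synthetic stand-in for a radar point cloud with each of the four algorithms and score every labeling with the silhouette coefficient and the Davies-Bouldin index. The data, parameter choices, and noise handling below are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Stand-in for a radar point cloud: three reflector clusters.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)

labelings = {
    "KMeans": KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
    "Hierarchical": AgglomerativeClustering(n_clusters=3).fit_predict(X),
    "GMM": GaussianMixture(n_components=3, random_state=0).fit_predict(X),
    "DBSCAN": DBSCAN(eps=0.5, min_samples=5).fit_predict(X),
}

for name, labels in labelings.items():
    core = labels != -1  # DBSCAN marks noise as -1; score clustered points only
    sil = silhouette_score(X[core], labels[core])    # higher is better
    dbi = davies_bouldin_score(X[core], labels[core])  # lower is better
    print(f"{name:12s} silhouette={sil:.2f} davies-bouldin={dbi:.2f}")
```

Note the asymmetry the paper exploits: the centroid-based methods need the cluster count up front, while DBSCAN discovers it from density, which suits radar scenes where the number of objects is unknown.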
As the automotive industry develops various ADAS solutions, RADAR is playing an important role, and there are many parameters of RADAR detections to consider. Unsupervised clustering methods are used for RADAR applications, and DBSCAN is the most widely used among them. Existing DBSCAN does not perform well without calibration of its hyperparameters: epsilon (the radius within which each data point checks the density) and minimum points (the minimum number of data points required within that radius for a core point). In this paper, different methods of choosing the hyperparameters of DBSCAN are compared and verified against different clustering evaluation criteria. A novel method to select the hyperparameters of the DBSCAN algorithm is presented in the paper. To test the given algorithm, ground truth data is collected, and the results are verified with MATLAB-Simulink
Payghan, Vaibhav Santosh; Prajapati, Miit; Chauhan, Abhisha
The fusion of multi-modal perception in autonomous driving plays a pivotal role in vehicle behavior decision-making. However, much of the previous research has predominantly focused on the fusion of Lidar and cameras. Although Lidar offers an ample supply of point cloud data, its high cost and the substantial volume of point cloud data can lead to computational delays. Consequently, investigating perception fusion in the context of 4D millimeter-wave radar is of paramount importance for cost reduction and enhanced safety. Nevertheless, 4D millimeter-wave radar faces challenges including sparse point clouds, limited information content, and a lack of fusion strategies. In this paper, we introduce, for the first time, an approach that leverages Graph Neural Networks to assist in expressing features from 4D millimeter-wave radar point clouds. This approach effectively extracts unstructured point cloud features, addressing the loss of object detection due to sparsity. Additionally, we
Fan, Lili; Zeng, Changxian; Li, Yunjie; Wang, Xu; Cao, Dongpu
Many learning-based methods estimate ego-motion using visual sensors. However, visual sensors are prone to intense lighting variations and textureless scenarios. 4D radar, an emerging automotive sensor, complements visual sensors effectively due to its robustness in adverse weather and lighting conditions. This paper presents an end-to-end 4D radar-visual odometry (4DRVO) approach that combines sparse point cloud data from 4D radar with image information from cameras. Using the Feature Pyramid, Pose Warping, and Cost Volume (PWC) network architecture, we extract 4D radar point features and image features at multiple scales. We then employ a hierarchical iterative refinement approach to supervise the estimated pose. We propose a novel Cross-Modal Transformer (CMT) module to effectively fuse the 4D radar point modality, image modality, and 4D radar point-image connection modality at multiple scales, achieving cross-modal feature interaction and multi-modal feature fusion. Additionally
Lu, Shouyi; Zhuo, Guirong; Xiong, Lu; Zhou, Mingyu; Lu, Xinfei
Calibration of automotive radar systems after installation plays a fundamental and crucial role in guaranteeing sensor performance. Commonly used methods rely on the environment, such as a specific test station for static calibration or a straight metal guardrail for dynamic calibration. In this paper, a sequential method for estimating radar angle misalignment, derived from the Lagrange multiplier method for solving an optimization problem, is proposed. The sequential method, which requires radar measurements and vehicle speed measurements as input, is largely environment-free and can yield a consistent estimation. A simulation study is conducted to validate the consistency and analyze the influence of noise. The results show that the radar azimuth measurement noise has little influence, that its bias can be compensated, and that the effect of non-Gaussianity is negligible. The radar velocity measurement noise bias and vehicle speed measurement noise bias have a linear effect whose coefficient
Pan, Song; Lu, Xinfei; Ren, Wenping; Xue, Dan
4D millimeter-wave radar is a high-resolution sensor with a strong ability to perceive the surrounding environment. This paper uses millimeter-wave radar point clouds to establish a static probabilistic occupancy grid map for static environment modeling. To obtain a clean occupancy grid map, we classify the point cloud according to the result of dynamic point clustering and project the classified point cloud into the grid map. Based on the distribution and category of the millimeter-wave radar point cloud, we propose a calculation model for grid occupancy probability. After obtaining the occupancy probability from this model, we compute the posterior occupancy probability using the ego vehicle's motion and Bayesian filtering, and construct a stable probabilistic occupancy grid map. We test the method on real roads, and the results show that the proposed method can effectively suppress the influence of noise points on the quality of the grid map, and
Liu, Chang; Lu, Xinfei; Xue, Dan; Wu, Li
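The Bayesian-filtered occupancy probability such a grid map maintains is commonly implemented as a per-cell log-odds update: repeated radar hits push a cell toward occupied, while an isolated noise point decays under subsequent misses, which is exactly the noise-suppression behavior the abstract reports. The increments below are assumed values for illustration, not the paper's calculation model:

```python
import numpy as np

# Log-odds increments for a hit, a miss, and the prior (assumed values).
L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0

def update_cell(l_prev, hit):
    """Bayesian log-odds update for one grid cell given one radar observation."""
    return l_prev + (L_OCC if hit else L_FREE) - L_PRIOR

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

# A cell repeatedly hit by static returns converges toward "occupied"...
l_static = 0.0
for _ in range(5):
    l_static = update_cell(l_static, hit=True)

# ...while a single noise hit is forgotten after a few misses.
l_noise = update_cell(0.0, hit=True)
for _ in range(5):
    l_noise = update_cell(l_noise, hit=False)
```

Working in log-odds keeps the update a cheap addition per cell and avoids numerical trouble as probabilities approach 0 or 1.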
Radar is playing an increasingly important role in multiple-object detection and tracking systems because it can determine velocity instantly and is less influenced by environmental conditions. However, radar produces considerable detection clutter and false alarms, and its detection results are easily affected by reflected echoes from road boundaries in traffic scenes. Moreover, as the number of targets and the number of effective echoes increase, the number of interconnection matrices in joint probabilistic data association grows exponentially, which seriously affects the real-time performance and accuracy of tracking algorithms in high-speed scenarios. A method of using millimeter-wave radar to detect and fit the boundary guardrail of a high-speed road is therefore proposed, and the fitting results are applied to the vehicle detection and tracking system to improve tracking accuracy. Through comparison and verification of ablation
Li, Fu-Xiang; Zhu, Yuan
The fusion of 4D millimeter-wave imaging radar and cameras is an important development trend in advanced driver assistance systems and autonomous driving. In multi-target tracking, tracks are easily lost due to mutual occlusion of targets in the camera view. Therefore, combining the advantages of visual sensors and 4D millimeter-wave radar, a multi-sensor information fusion association algorithm is proposed. First, the 4D millimeter-wave radar point cloud is preprocessed, outliers are removed, and target-related information in the image is detected; then the point cloud is projected onto the image, and the targets in the segmented region are filtered. The filtered point cloud is clustered, and the correlation between the region projected onto the image and the detection box is calculated. An unscented Kalman filter is then used for prediction, rules are designed to associate targets, and the innovation is updated by multi-point weighting. This paper integrates the information of 4D
Zhao, Dingjia; Peng, Shusheng; Xue, Dan; Lu, Xinfei
In this paper, we introduce an IMU-radar loosely coupled SLAM method based on our 4D millimeter-wave imaging radar, which outputs point clouds containing xyz position information and power information in our autonomous vehicles. Common point-cloud-based SLAM, such as LiDAR SLAM, usually adopts a tightly coupled IMU-LiDAR structure, in which the odometry output by the SLAM front end in turn affects IMU pre-integration. The SLAM system degrades when front-end odometry drift grows larger and larger or when one frame of point cloud fails to match. In our method, we therefore decouple the crossed relationship between the IMU and radar odometry, fusing IMU and wheel odometry to generate a rough pose trajectory as the initial guess for front-end registration, rather than taking it directly from the radar-estimated odometry pose; that is, front-end registration is independent of IMU pre-integration. Besides, we empirically propose an idea for judging the front-end registration result to identify match-poor environments and adopt the relative wheel odometry pose instead of
Zhao, Yingzhong; Lu, Xinfei; Ye, Tingfeng
The results of monocular depth estimation are not satisfactory in autonomous driving scenarios. Combining radar and camera for depth estimation is a feasible solution to the depth estimation problem in such scenes. The radar-camera pixel depth association (RC-PDA) model establishes a reliable correlation between radar depth and camera pixels. In this paper, a new depth estimation model named Deep-PDANet, based on RC-PDA, is proposed, which increases the depth and width of the network and alleviates network degradation through residual structures. Convolution kernels of different sizes are selected in the basic units to further improve the ability to extract global information while still extracting information from single pixels. The convergence speed and learning ability of the network are improved by a staged training strategy with a multi-weight loss function. In this paper, comparison experiments and an ablation study were performed on the
Ai, Wenjin; Ma, Zhixiong; Zheng, Lianqing
Provizio promises its 5D Perception stack can safely compete with expensive lidar sensors at a fraction of the cost. “Safety first” is more than a catchphrase. For sensing company Provizio, it's the only way the transportation industry should introduce autonomous vehicles. In Provizio's view, using AV building blocks - technology such as automatic emergency braking and lane-keep assist - can be valuable in ADAS systems, but they should not be used to drive vehicles until the perception problem has been solved. “It's not that we're skeptical about autonomous driving, it's just that we strongly believe that the industry has taken this wrong path,” Dane Mitrev, machine learning engineer at Provizio, told SAE Media at September 2023's AutoSens Brussels conference. “The industry has looked at things the other way around. They tried to solve autonomy first, without looking at accident prevention and simpler ADAS systems. We are building a perception technology which will first eliminate road
Blanco, Sebastian
A research team at the Illinois Institute of Technology has for the first time demonstrated the use of a novel control method in a tailless aircraft. The technology allows an aircraft to be as smooth and sleek as possible — making it safer to fly in dangerous areas where radar scans the sky for sharp edges
The closest in-path vehicle (CIPV) is recognized using road lane line detection results in most current ACC systems, which may not work well in poor conditions such as unclear lane lines, low light levels, and bad weather. To solve this problem, this article proposes a sensor-fusion-based CIPV recognition algorithm that is independent of road lane lines. First, a robust Kalman filter based on the global coordinate system is designed to fuse millimeter-wave radar and camera targets. The fusion algorithm can dynamically adjust the covariance matrix of sensor observations to avoid the influence of anomalous observations on the fusion results. Stable detection of targets by the fusion algorithm is the basis of the CIPV recognition algorithm. The CIPV recognition algorithm then generates virtual lane lines using the motion parameters of the ego vehicle or the driving trajectory of the vehicle target and develops a mode-switch strategy for virtual lane line generation
Yang, Yifei; Zhao, Zhiguo; Yu, Qin; Deng, Yunhong; Li, Wenchang
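Dynamically adjusting the observation covariance so that anomalous detections barely move the fused estimate, as the abstract describes, can be sketched as a Kalman update with a chi-square innovation gate that inflates the measurement covariance of outliers. The gate value and the inflation rule here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def kf_update(x, P, z, H, R, gate=9.21):
    """Kalman measurement update with a simple chi-square innovation gate.

    If the squared Mahalanobis distance of the innovation exceeds the gate,
    the measurement covariance R is inflated proportionally, so anomalous
    radar/camera observations barely move the fused estimate.
    """
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    d2 = float(y.T @ np.linalg.inv(S) @ y)     # squared Mahalanobis distance
    if d2 > gate:                              # anomalous observation: de-weight
        R = R * (d2 / gate)
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With a 2D position/velocity state and a position-only measurement, a plausible observation is absorbed almost fully, while a wild outlier is heavily discounted instead of dragging the track.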
Synthetic Aperture Radar (SAR) images are a powerful tool for studying the Earth’s surface. They are radar signals generated by an imaging system mounted on a platform such as an aircraft or satellite. As the platform moves, the system sequentially emits high-power electromagnetic waves through its antenna. The waves are then reflected by the Earth’s surface, re-captured by the antenna, and finally processed to create detailed images of the terrain below
It is hard to imagine an industry more reliant on seamless, resilient, and secure communication than aerospace and defense (A&D). Communication and electromagnetic signal processing are at the core of advanced systems, which is why the trend towards higher frequencies (and millimeter waves) makes optoelectronic signal transmission a critical topic in this sector as technology advances at a rapid pace and demands better performance. A&D communication networks use a mix of digital and analog transmission, with emphasis on the former, but given the industry's proclivity towards lower latency and higher bandwidth applications, analog transmission will play an even larger role in the future. Passive and active electromagnetic sensing (e.g., radar, radio telescopes, and other listening devices) requires high fidelity signal transport for “remote” processing. It brings transport of radio frequency signals over fiber (RFoF) to the forefront, which is an analog technique of converting radio
Radio is a well-established technology. For over 100 years, it has been widely used: in communication, radar, navigation, remote control, remote sensing, and other respects. It is popular because it works; it is reliable. And yet laser has shown itself to be a superior medium of communication. Indeed, the laser-vs-radio debate is already getting old. What is new - and what will truly change the debate - are the transformations currently taking place in laser telecommunications - transformations which will drive innovation in defense. It is perhaps worth pausing to remind ourselves of what laser's existing advantages over radio are. Laser communications offer faster data transfer, and greater data capacity. And by virtue of their structure and size, lasers are almost impossible to detect, intercept, or jam. Interference is also rare. Lasers do not ‘leak’ in the same way radio does, and, as against the broad transmission style of radio, they transfer information along a very narrow beam
An extensive evaluation of the Deep Image Prior (DIP) technique for image inpainting on Synthetic Aperture Radar (SAR) images. Air Force Research Laboratory, Wright Patterson Air Force Base, OH Synthetic Aperture Radar (SAR) images are a powerful tool for studying the Earth's surface. They are radar signals generated by an imaging system mounted on a platform such as an aircraft or satellite. As the platform moves, the system emits sequentially high-power electromagnetic waves through its antenna. The waves are then reflected by the Earth's surface, re-captured by the antenna, and finally processed to create detailed images of the terrain below. SAR images are employed in a wide variety of applications. Indeed, as the waves hit different objects, their phase and amplitude are modified according to the objects' characteristics (e.g., permittivity, roughness, geometry, etc.). The collected signal provides highly detailed information about the shape and elevation of the Earth's surface
Speckle noise degrades the visual appearance and the quality of a synthetic aperture radar (SAR) image. The reduction of speckle noise is the first step in any remote-sensing device. To improve the noisy SAR images, a variety of adaptive and nonadaptive noise reduction filters were used. In order to eliminate speckle noise present in SAR images, an adaptive cuckoo search optimization-based speckle reduction bilateral filter has been designed in this article. To test the ability to eliminate multiplicative noise, the suggested filter’s effectiveness was compared to that of several de-speckling approaches. It has been measured with different assessment metrics such as PSNR, EPI, SSIM, and ENL. When compared to conventional de-noising filters, the proposed filter shows promising results for lowering speckle noise and retaining edge properties. In addition, the PSNR value has increased as compared to the PMD method and this method has been shown to be efficient in reducing speckle noise in
Abdus Subhahan, D.; Kumar, C.N.S. Vinoth
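The de-speckling abstract above evaluates filters with metrics such as PSNR and ENL. As a hedged sketch (synthetic data only, and a trivial stand-in for the actual filter), the snippet below computes those two metrics on a simulated multiplicative-speckle image:

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def enl(region):
    # Equivalent Number of Looks over a homogeneous region: mean^2 / variance.
    # Higher ENL indicates stronger speckle suppression.
    region = region.astype(float)
    return region.mean() ** 2 / region.var()

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)                 # homogeneous reference patch
# Multiplicative gamma-distributed speckle (mean 1), as in a 4-look SAR image
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
# Stand-in "filter": average with the reference (NOT the paper's bilateral filter)
denoised = np.clip((speckled + clean) / 2, 0, 255)

print(round(psnr(clean, speckled), 1), round(psnr(clean, denoised), 1))
print(round(enl(speckled), 1), round(enl(denoised), 1))
```

Both metrics should improve after filtering: PSNR rises because the error is reduced, and ENL rises because the homogeneous region becomes smoother.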
Kongsberg Defence & Aerospace selected a radar test setup from Rohde & Schwarz based on the R&S SMW200A vector signal generator for multi-channel phase-coherent radar signal generation. Kongsberg is Norway's premier supplier of defense and aerospace-related technologies. The joint strike missile (JSM) is a fifth generation long range precision strike missile. Using advanced sensors, the JSM can locate targets based on their electronic signature. Qualification of the JSM is under way with the Royal Norwegian Air Force (RNoAF). Kongsberg's JSM must operate autonomously in highly contested environments. To increase mission success, the missile has a passive RF sensor that can locate and identify radio frequency emitters. To test and verify this RF direction finding capability in a laboratory, Kongsberg required a multi-channel phase coherent vector signal generator that could be linked to existing test environments
Boeing San Antonio, TX 572-522-7508
The use of personal light electric vehicles (PLEVs), such as electric scooters, has rapidly increased in recent years. However, their widespread use has raised concerns about rider safety due to their vulnerability in shared traffic spaces. To address this issue, this paper presents a radar-based rider assistance system aimed at enhancing the safety of PLEV riders. The system consists of an adaptive feedback system and a single-channel anti-lock braking system (ABS). The adaptive feedback system uses multiple-input multiple-output (MIMO) radar sensors to detect nearby objects and provide real-time warnings to the rider through haptic, visual, and acoustic signals. The system takes into account traffic density and uses online data to warn about obscured objects, thereby improving the rider’s situational awareness. Results from testing the feedback system show that it effectively detects potential collisions and provides warning signals, reducing the risk of accidents. The ABS is
Pyschny, Jan; Berger, Felix; Rothen, Samuel; Denker, Joachim; Frantzen, Michael; Roder, Felix; Kneiphof, Simon
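The rider-assistance abstract above describes escalating haptic, visual, and acoustic warnings from radar detections. A minimal sketch of the kind of logic such a system might use is shown below: time-to-collision (TTC) computed from a radar track's range and range rate, mapped to a warning channel by urgency. The thresholds and the channel mapping are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float         # distance to the detected object
    range_rate_mps: float  # radial velocity; negative = closing

def warning_level(track: RadarTrack) -> str:
    if track.range_rate_mps >= 0:
        return "none"                 # object is not closing
    ttc = track.range_m / -track.range_rate_mps
    if ttc < 1.5:
        return "haptic+acoustic"      # imminent: strongest feedback
    if ttc < 3.0:
        return "acoustic"
    if ttc < 5.0:
        return "visual"
    return "none"

print(warning_level(RadarTrack(6.0, -5.0)))   # TTC 1.2 s
print(warning_level(RadarTrack(20.0, -5.0)))  # TTC 4.0 s
```

A production system would also weight these thresholds by traffic density and occlusion information, as the abstract notes.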
The Current Icing Product (CIP; Bernstein et al. 2005) and Forecast Icing Product (FIP; Wolff et al. 2009) were originally developed by the United States' National Center for Atmospheric Research (NCAR) under sponsorship of the Federal Aviation Administration (FAA) in the mid-2000s and provide operational icing guidance to users through the NOAA Aviation Weather Center (AWC). The current operational version of FIP uses the Rapid Refresh (RAP; Benjamin et al. 2016) numerical weather prediction (NWP) model to provide hourly forecasts of Icing Probability, Icing Severity, and Supercooled Large Drop (SLD) Potential. Forecasts are provided out to 18 hours over the Contiguous United States (CONUS) at 15 flight levels between 1,000 ft and FL290, inclusive, and at a 13-km horizontal resolution. CIP provides similar hourly output on the same grid, but utilizes geostationary satellite data, ground-based radar data, Meteorological Terminal Air Reports (METARs), lightning data, and voice pilot
Rugg, Allyson; Haggerty, Julie; Adriaansen, Daniel; Serke, David; Ellis, Scott
Snow-condition measurements performed in the past were rare and best suited for pure, extremely detailed quantification of the series of microphysical parameters needed for accretion modelling. Within the European ICE GENESIS project, a considerable natural snow measurement effort was made during winter 2020/21. Instruments, both in-situ and remote sensing, were deployed on the ATR-42 aircraft as well as on the ground (ground station at 'Les Eplatures' airport in the Swiss Jura Mountains, with ATR-42 overflights). Snow clouds and precipitation in the atmospheric column were sampled with the aircraft, whereas ground-based and airborne radar systems allowed extending the observations of snow properties beyond the flight level chosen for the in-situ measurements. Overall, five flight missions were performed at numerous flight levels (related temperature range from -10°C to +2°C) around the 'Les Eplatures' airport
Jaffeux, Louis; Schwarzenboeck, Alfons; Coutris, Pierre; Febvre, Guy; Dezitter, Fabien; Aguilar, Boris; Billault-Roux, Anne-Claire; Grazioli, Jacopo; Berne, Alexis; Köbschall, Kilian; Jorquera, Susana; Delanoe, Julien