Topic: Cameras
The U-Shift IV represents the latest evolution in modular urban mobility, offering significant advances over its predecessors. This vehicle concept introduces a strict separation between the drive module, known as the driveboard, and the transport capsules. The driveboard contains all components needed for autonomous driving, allowing it to operate independently. This separation not only enables versatile applications - capsules can be swapped easily between passenger and goods transport - but also significantly improves driveboard utilization: a single driveboard can be paired with different capsules and remain in continuous service while the individual capsules are in use. The primary focus of U-Shift IV was to obtain a permit for operation at the Federal Garden Show 2023, so the vehicle was built around the specific requirements for semi-public road operation.
With 2D cameras and space robotics algorithms, astronautics engineers at Stanford have created a navigation system able to manage multiple satellites using visual data alone. They recently tested it in space for the first time. Stanford University, Stanford, CA. Someday, instead of large, expensive individual satellites, teams of smaller satellites - known to scientists as a “swarm” - will work in collaboration, enabling greater accuracy, agility, and autonomy. Among the scientists working to make these teams a reality are researchers at Stanford University's Space Rendezvous Lab, who recently completed the first-ever in-orbit test of a prototype system able to navigate a swarm of satellites using only visual information shared through a wireless network. “It's a milestone paper and the culmination of 11 years of effort by my lab, which was founded with this goal of surpassing the current state of the art and practice in distributed autonomy in space,” said Simone D'Amico.
In October 2024, Kongsberg NanoAvionics discovered damage to their MP42 satellite and used the discovery as an opportunity to raise awareness of the need to reduce space debris generated by satellites. Kongsberg NanoAvionics, Vilnius, Lithuania. Our MP42 satellite, which launched into low Earth orbit (LEO) two and a half years ago aboard the SpaceX Transporter-4 mission, recently took an unexpected hit from a small piece of space debris or a micrometeoroid. The impact created a 6 mm hole, roughly the size of a chickpea, in one of its solar panels. Despite this damage, the satellite continued performing its mission without interruption, and we only discovered the impact thanks to an image taken by its onboard selfie camera in October 2024. It is challenging to pinpoint exactly when the impact occurred because MP42's previous selfie was taken a year and a half earlier, in April 2023.
Design verification and quality control of automotive components require analysis of the source location of ultra-short sound events, for instance the engagement event of an electromechanical clutch or the clicking noise of the aluminium frame of a passenger car seat under vibration. State-of-the-art acoustic cameras allow for a frame rate of about 100 acoustic images per second. Considering that most of the sound events introduced above can last far less than 10 ms, an acoustic image generated at this rate resembles a hard-to-interpret overlay of multiple sources on the structure under test along with reflections from the surrounding test environment. This contribution introduces a novel method for visualizing impulse-like sound emissions from automotive components at 10x the frame rate of traditional acoustic cameras. A time resolution of less than 1 ms allows for true localization of the initial and subsequent sound events as well as a clear separation of direct sound from reflections.
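The contribution's exact algorithm is not reproduced in the abstract, but the core idea can be sketched as delay-and-sum beamforming evaluated on very short analysis windows: sliding a roughly 1 ms window through the recording yields one acoustic image per window, so the initial impulse and later reflections fall into separate frames. A minimal sketch, with all names, shapes, and parameters assumed for illustration:

```python
import numpy as np

def short_window_beamform(signals, mic_pos, grid, fs, t0, win_ms=1.0, c=343.0):
    """Delay-and-sum map for one short analysis window.

    signals : (n_mics, n_samples) time-aligned microphone recordings
    mic_pos : (n_mics, 3) microphone coordinates in metres
    grid    : (n_points, 3) candidate source positions on the structure
    fs      : sampling rate in Hz; t0 : window start time in seconds
    """
    n_win = int(win_ms * 1e-3 * fs)                # ~1 ms window -> <1 ms resolution
    energy = np.zeros(len(grid))
    for i, g in enumerate(grid):
        acc = np.zeros(n_win)
        for m, p in enumerate(mic_pos):
            delay = np.linalg.norm(g - p) / c       # propagation delay to mic m
            start = int(round((t0 + delay) * fs))   # assumes window fits in recording
            acc += signals[m, start:start + n_win]  # focus mic m onto grid point g
        energy[i] = float(np.sum(acc * acc))        # focused energy at grid point g
    return energy
```

Stepping t0 forward in 1 ms increments then produces the high-frame-rate image sequence described above.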
This study presents a novel methodology for optimizing the acoustic performance of rotating machinery by combining scattered 3D sound intensity data with numerical simulations. The method is demonstrated on the rear axle of a truck. With Scan&Paint 3D, sound intensity data are rapidly acquired over a large spatial area using a 3D sound intensity probe and an infrared stereo camera. The experimental data are then integrated into far-field radiation simulations, enabling detailed analysis of the acoustic behavior and accurate predictions of far-field sound radiation. This hybrid approach offers a significant advantage for assessing complex acoustic sources, allowing quick and reliable evaluation of noise mitigation solutions.
The segment manipulator machine, a large custom-built apparatus, is used for assembling and disassembling heavy tooling, specifically carbon fiber forms. This complex yet slow-moving machine had been in service for nineteen years, with many control components becoming obsolete and difficult to replace. The customer engaged Electroimpact to upgrade the machine using the latest state-of-the-art controls, aiming to extend the system's operational life by at least another two decades. The program from the previous control system could not be reused, necessitating a complete overhaul.
Video analysis plays a major role in many forensic fields. Many articles, publications, and presentations have covered the importance and difficulty of properly establishing frame timing. In many cases, the analyst is given video files that do not contain native metadata. In other cases, the files contain video recordings of the surveillance playback monitor, which eliminates all original metadata from the video recording. These “video of video” recordings prevent an analyst from determining frame timing using metadata from the original file. However, within many of these video files, timestamp information is visually imprinted onto each frame. Analyses that rely on the timing of events captured in video may benefit from these imprinted timestamps, but for forensic purposes it is important to establish their accuracy and reliability. The purpose of this research is to examine the accuracy of these timestamps and to establish whether they can be used to determine the timing of recorded events.
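One simple way to probe that accuracy, sketched below with entirely hypothetical numbers, is to compare the imprinted timestamps (transcribed manually or via OCR) against the timing implied by frame index and the container's nominal frame rate; systematic divergence points to dropped frames or a variable frame rate, bounding how far the overlay can be trusted:

```python
import numpy as np

nominal_fps = 29.97                                   # assumed container frame rate
frame_idx = np.array([0, 150, 300, 450, 600])         # sampled frame indices
imprinted_s = np.array([0.0, 5.0, 10.0, 15.0, 21.0])  # overlay times (hypothetical)

expected_s = frame_idx / nominal_fps       # timing implied by the nominal rate
error_s = imprinted_s - expected_s
print("timestamp error per sample (s):", np.round(error_s, 3))
# A steadily growing error suggests dropped frames or a variable frame rate,
# so the overlay should not be used for sub-second timing without calibration.
```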
Seoul National University College of Engineering announced that researchers from the Department of Electrical and Computer Engineering's Optical Engineering and Quantum Electronics Laboratory have developed an optical design technology that dramatically reduces the volume of cameras using a folded lens system built from “metasurfaces,” a next-generation nano-optical device. By arranging metasurfaces on a glass substrate so that light is reflected and routed within the substrate along a folded path, the researchers realized a lens system with a thickness of 0.7 mm, much thinner than existing refractive lens systems. The research, which was supported by the Samsung Future Technology Development Program and the Institute of Information & Communications Technology Planning & Evaluation (IITP), was published on October 30 in the journal Science Advances. Traditional cameras are designed to stack multiple glass lenses to refract light when capturing images. While effective, this stacking inevitably increases the thickness of the camera module, which the folded metasurface design avoids.
A team led by University of Maryland computer scientists invented a camera mechanism that improves how robots see and react to the world around them. Inspired by how the human eye works, their innovative camera system mimics the tiny involuntary movements used by the eye to maintain clear and stable vision over time. The team’s prototyping and testing of the camera — called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV) — was detailed in a paper published in the journal Science Robotics in May 2024.
Sometimes we try to capture a QR code with a good digital camera on a smartphone, but the read ultimately fails. This usually happens when the QR code itself is of poor image quality, or when it has been printed on surfaces that are not flat — deformed or with irregularities of unknown pattern — such as the wrapping of a courier package or a tray of prepared food. Now, a team from the University of Barcelona (UB) and the Universitat Oberta de Catalunya (UOC) has designed a methodology that facilitates the recognition of QR codes in these physical environments, where reading is more complicated.
The flow structure and unsteadiness of shock wave–boundary layer interaction (SWBLI) have been studied using rainbow schlieren deflectometry (RSD), ensemble averaging, the fast Fourier transform (FFT), and the snapshot proper orthogonal decomposition (POD) technique. Shock waves were generated in a test section by subjecting a Mach 3.1 free-stream flow to a 12° isosceles triangular prism. The RSD pictures, captured with a high-speed camera at 5000 frames/s, were used to determine the transverse ray deflections at each pixel. The structure of the interaction region is described statistically with the ensemble-average and root-mean-square deflections. The FFT technique was used to determine the frequency content of the flow field. Results indicate that dominant frequencies were in the range of 400 Hz–900 Hz. The Strouhal numbers calculated from the RSD data were in the range of 0.025–0.07. The snapshot POD technique was employed to analyze flow structures and their associated energy content.
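The snapshot POD step lends itself to a compact sketch: stack the mean-subtracted deflection images as columns of a snapshot matrix and take its singular value decomposition, whose left singular vectors are the spatial modes and whose squared singular values give each mode's share of the fluctuation energy (numerically equivalent to the classical snapshot formulation). Array shapes below are assumed for illustration:

```python
import numpy as np

def snapshot_pod(frames):
    """Snapshot POD of a stack of RSD deflection images.

    frames : (n_snapshots, ny, nx) deflection maps, e.g. sampled at 5000 frames/s
    Returns spatial modes, temporal coefficients, and per-mode energy fractions.
    """
    n, ny, nx = frames.shape
    X = frames.reshape(n, -1).T               # columns are individual snapshots
    X = X - X.mean(axis=1, keepdims=True)     # subtract the ensemble mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)              # fluctuation-energy fraction per mode
    modes = U.T.reshape(-1, ny, nx)           # spatial POD modes
    coeffs = np.diag(s) @ Vt                  # temporal mode coefficients
    return modes, coeffs, energy
```

For reference, the quoted Strouhal numbers follow from the dominant frequencies as St = f L / U∞, with L the characteristic length of the prism and U∞ the free-stream velocity.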
This project presents the development of an advanced Autonomous Mobile Robot (AMR) designed to lift and maneuver four-wheel drive vehicles into parking spaces without human intervention. The AMR integrates LIDAR for precise distance measurements and obstacle detection, high-resolution cameras for capturing detailed images of the parking environment, and object recognition algorithms for accurately identifying and selecting available parking spaces. These integrated technologies enable the AMR to navigate complex parking lots, optimize space utilization, and provide seamless automated parking: it autonomously detects a free space, lifts the vehicle, and parks it with high precision. The project pushes the boundaries of autonomous vehicle technology, aiming to contribute to smarter and more efficient urban mobility systems.
Researchers led by Professor Young Min Song from the Gwangju Institute of Science and Technology (GIST) have unveiled a vision system inspired by feline eyes to enhance object detection in various lighting conditions. Featuring a unique shape and reflective surface, the system reduces glare in bright environments and boosts sensitivity in low-light scenarios. By filtering unnecessary details, this technology significantly improves the performance of single-lens cameras, representing a notable advancement in robotic vision capabilities.
In certain situations, an Advanced Air Mobility (AAM) aircraft must land without assistance from GPS data. For example, tall buildings and narrow canyons in an urban environment may impair an AAM aircraft's ability to use GPS effectively near a landing area. Incorporating a vision-based navigation method, NASA Ames has developed a novel Alternative Position, Navigation, and Timing (APNT) solution for AAM aircraft in environments where GPS is not available.
In non-cooperative environments, unmanned aerial vehicles (UAVs) have to land without artificial markers, which is a key step towards achieving full autonomy. However, existing vision-based schemes share the problems of poor robustness and generalization, while LiDAR-based schemes suffer from low resolution, high power consumption, and high weight. In this paper, we propose a UAV landing system equipped with a binocular camera to perform 3D reconstruction and select a safe landing zone. The only sensor is the stereo camera, and the innovation of the solution is fusing a stereo matching algorithm with a monocular depth estimation (MDE) model to obtain a robust prediction of metric depth. The landing pipeline consists of a stereo matching module, an MDE module, a depth fusion module, and a safe-landing-zone selection module. The stereo matching module uses the Semi-Global Matching (SGM) algorithm to compute the disparity map.
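The fusion idea can be sketched compactly: convert the SGM disparity to metric depth via depth = f·B/d, use the valid stereo pixels to resolve the scale ambiguity of the MDE prediction, and fall back on the scaled MDE depth wherever stereo matching fails (e.g., on texture-poor terrain). The paper's actual network and fusion rule are not reproduced here; the parameters and the median-scaling step are illustrative:

```python
import cv2
import numpy as np

def fused_metric_depth(left_gray, right_gray, mde_depth, focal_px, baseline_m):
    # Stereo branch: SGM disparity -> metric depth via depth = f * B / d
    sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disp > 0.5                               # mask unmatched pixels
    stereo_depth = np.zeros_like(disp)
    stereo_depth[valid] = focal_px * baseline_m / disp[valid]

    # Align the scale-ambiguous MDE prediction to the stereo metric scale
    scale = np.median(stereo_depth[valid] / mde_depth[valid])

    # Fuse: trust stereo where it matched, fall back to scaled MDE elsewhere
    return np.where(valid, stereo_depth, mde_depth * scale)
```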