Browse Topic: Cameras
Planetary and lunar rover exploration missions can encounter environments that do not allow for navigation by typical stereo camera-based systems. Stereo cameras struggle in areas with low ambient light (even when lit by floodlights), direct sunlight, or washed-out scenes. Improved sensors are required for safe and successful rover mobility in harsh conditions. NASA Goddard Space Flight Center has developed a Space Qualified Rover LiDAR (SQRLi) system that will improve rover sensing capabilities in a small, lightweight package. The SQRLi package is designed to survive the hazardous space environment and provide valuable image data during planetary and lunar rover exploration.
Measuring the volume of harvested material behind the machine can be beneficial for various agricultural operations, such as baling, dropping, material decomposition, cultivation, and seeding. This paper aims to investigate and determine the volume of that material for use in such operations. The proposed methodology can help predict the amount of residue available in the field, assess field readiness for the next production cycle, measure residue distribution, determine hay readiness for baling, and evaluate the quantity of hay present in the field, among other applications that benefit the customer. Efficient post-harvest residue management is essential for sustainable agriculture. This paper presents an Automated Offboard System that leverages Remote Sensing, IoT, Image Processing, and Machine Learning/Deep Learning (ML/DL) to measure the volume of harvested material in real time. The system integrates onboard cameras and satellite imagery to analyze the field
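The teaser above stops short of implementation detail; as a minimal sketch of one way the volume-measurement step could work (the grid resolution, height values, and function name below are illustrative assumptions, not the authors' algorithm), a residue swath captured as a height map can be integrated over the ground plane:

```python
import numpy as np

def residue_volume(height_map_m, cell_size_m):
    """Approximate swath volume by summing per-cell height x cell area.

    height_map_m : 2D array of residue heights above ground (metres),
                   e.g. derived from stereo depth or onboard camera imagery.
    cell_size_m  : edge length of one grid cell on the ground plane (metres).
    """
    cell_area = cell_size_m ** 2
    return float(np.sum(np.clip(height_map_m, 0.0, None)) * cell_area)

# Hypothetical example: a 200 x 300 cell map at 5 cm ground resolution
rng = np.random.default_rng(0)
heights = rng.uniform(0.0, 0.15, size=(200, 300))   # 0-15 cm of residue
print(f"Estimated volume: {residue_volume(heights, 0.05):.2f} m^3")
```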
Image sensors built into every smartphone and digital camera distinguish colors much like the human eye. In our retinas, individual cone cells recognize red, green, and blue (RGB). In image sensors, individual pixels absorb the corresponding wavelengths and convert them into electrical signals.
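As a minimal illustration of that per-pixel RGB principle (a synthetic 8-bit array is assumed here, not any particular sensor's readout), the following snippet separates an image into the red, green, and blue signals each pixel records:

```python
import numpy as np

# A tiny synthetic 2 x 2 "image": each pixel stores the signal produced
# for the red, green, and blue wavelength bands as 8-bit values.
image = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [128, 128, 128]]], dtype=np.uint8)

red, green, blue = image[..., 0], image[..., 1], image[..., 2]
print("Red channel:\n", red)
print("Green channel:\n", green)
print("Blue channel:\n", blue)
```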
Researchers have developed a prototype imaging system that could significantly improve doctors’ ability to detect cancerous tissue during endoscopic procedures. This approach combines light-emitting diodes (LEDs) with hyperspectral imaging technology to create detailed maps of tissue properties that are invisible to conventional endoscopic cameras.
In today’s digital age, the use of “Internet-of-Things” devices (embedded with software and sensors) has become widespread. These devices include wireless equipment, autonomous machinery, wearable sensors, and security systems. Because of their intricate structures and properties, they need to be scrutinized closely to assess their safety and utility and to rule out any potential defects. At the same time, damage to the device during inspection must be avoided.
Northwestern engineers have developed a new system for full-body motion capture — and it doesn’t require specialized rooms, expensive equipment, bulky cameras, or an array of sensors. Instead, it requires a simple mobile device.
Engineers have developed a smart capsule called PillTrek that can measure pH, temperature, and a variety of biomarkers. It incorporates simple, inexpensive sensors into a miniature wireless electrochemical workstation that relies on low-power electronics. PillTrek measures 7 mm in diameter and 25 mm in length, making it smaller than commercially available capsule cameras used for endoscopy yet capable of executing a range of electrochemical measurements.
The U-Shift IV represents the latest evolution in modular urban mobility solutions, offering significant advancements over its predecessors. This innovative vehicle concept introduces a distinct separation between the drive module, known as the driveboard, and the transport capsules. The driveboard contains all the necessary components for autonomous driving, allowing it to operate independently. This separation not only enables versatile applications - such as easily swapping capsules for passenger or goods transportation - but also significantly improves the utilization of the driveboard. By allowing a single driveboard to be paired with different capsules, operational efficiency is maximized, enabling continuous deployment of driveboards while the individual capsules are in use. The primary focus of U-Shift IV was to obtain a permit for operating at the Federal Garden Show 2023. To achieve this goal, we built the vehicle around the specific requirements for semi-public road
With 2D cameras and space robotics algorithms, astronautics engineers at Stanford have created a navigation system able to manage multiple satellites using visual data only. They recently tested it in space for the first time.
Stanford University, Stanford, CA
Someday, instead of large, expensive individual space satellites, teams of smaller satellites - known by scientists as a “swarm” - will work in collaboration, enabling greater accuracy, agility, and autonomy. Among the scientists working to make these teams a reality are researchers at Stanford University's Space Rendezvous Lab, who recently completed the first-ever in-orbit test of a prototype system able to navigate a swarm of satellites using only visual information shared through a wireless network. “It's a milestone paper and the culmination of 11 years of effort by my lab, which was founded with this goal of surpassing the current state of the art and practice in distributed autonomy in space,” said Simone D'Amico
In October 2024, Kongsberg NanoAvionics discovered damage to their MP42 satellite and used the discovery as an opportunity to raise awareness of the need to reduce space debris generated by satellites.
Kongsberg NanoAvionics, Vilnius, Lithuania
Our MP42 satellite, which launched into low Earth orbit (LEO) two and a half years ago aboard the SpaceX Transporter-4 mission, recently took an unexpected hit from a small piece of space debris or a micrometeoroid. The impact created a 6 mm hole, roughly the size of a chickpea, in one of its solar panels. Despite this damage, the satellite continued performing its mission without interruption, and we only discovered the impact thanks to an image taken by its onboard selfie camera in October of 2024. It is challenging to pinpoint exactly when the impact occurred because MP42's last selfie was taken a year and a half ago, in April of 2023.
This study presents a novel methodology for optimizing the acoustic performance of rotating machinery by combining scattered 3D sound intensity data with numerical simulations. The method is demonstrated on the rear axle of a truck. Using Scan&Paint 3D, sound intensity data is rapidly acquired over a large spatial area with the assistance of a 3D sound intensity probe and infrared stereo camera. The experimental data is then integrated into far-field radiation simulations, enabling detailed analysis of the acoustic behavior and accurate predictions of far-field sound radiation. This hybrid approach offers a significant advantage for assessing complex acoustic sources, allowing for quick and reliable evaluation of noise mitigation solutions.
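As a hedged sketch of how scanned intensity data can characterize a source (this is the textbook power-from-intensity relation, not the Scan&Paint 3D or simulation toolchain itself; the patch sizes and intensity values below are hypothetical), the radiated sound power can be approximated by summing normal intensity times patch area over the scanned surface:

```python
import numpy as np

def sound_power_from_intensity(normal_intensity_w_m2, patch_area_m2):
    """Approximate radiated sound power W = sum(I_n * dA) over scan patches."""
    return float(np.sum(normal_intensity_w_m2 * patch_area_m2))

def sound_power_level_db(power_w, ref_w=1e-12):
    """Sound power level in dB re 1 pW."""
    return 10.0 * np.log10(power_w / ref_w)

# Hypothetical scan: 500 patches of 4 cm x 4 cm with measured normal intensity
rng = np.random.default_rng(1)
intensity = rng.uniform(1e-6, 5e-4, size=500)   # W/m^2
areas = np.full(500, 0.04 * 0.04)               # m^2 per patch
W = sound_power_from_intensity(intensity, areas)
print(f"Sound power: {W:.3e} W  ->  {sound_power_level_db(W):.1f} dB re 1 pW")
```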
Design verification and quality control of automotive components require the analysis of the source location of ultra-short sound events, for instance the engaging event of an electromechanical clutch or the clicking noise of the aluminium frame of a passenger car seat under vibration. State-of-the-art acoustic cameras allow for a frame rate of about 100 acoustic images per second. Considering that most of the sound events introduced above can last far less than 10 ms, an acoustic image generated at this rate resembles a hard-to-interpret overlay of multiple sources on the structure under test along with reflections from the surrounding test environment. This contribution introduces a novel method for visualizing impulse-like sound emissions from automotive components at 10x the frame rate of traditional acoustic cameras. A time resolution of less than 1 ms eventually allows for the true localization of the initial and subsequent sound events as well as a clear separation of direct from
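The frame-rate argument can be made concrete with a short calculation (the 5 ms event duration below is an assumption for illustration): at 100 images per second one acoustic image spans 10 ms and averages the entire event, whereas a 1 ms frame interval slices it into several images.

```python
def frames_covering_event(frame_rate_hz, event_duration_s):
    """Return (number of acoustic images within the event, frame interval in s)."""
    frame_interval_s = 1.0 / frame_rate_hz
    return event_duration_s / frame_interval_s, frame_interval_s

for rate in (100, 1000):                                     # conventional vs. 10x frame rate
    frames, interval = frames_covering_event(rate, 0.005)    # assumed 5 ms event
    print(f"{rate:>4} fps -> {interval * 1e3:.0f} ms per image, "
          f"{frames:.1f} images across the event")
```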
The segment manipulator machine, a large custom-built apparatus, is used for assembling and disassembling heavy tooling, specifically carbon fiber forms. This complex yet slow-moving machine had been in service for nineteen years, with many control components becoming obsolete and difficult to replace. The customer engaged Electroimpact to upgrade the machine using the latest state-of-the-art controls, aiming to extend the system's operational life by at least another two decades. The program from the previous control system could not be reused, necessitating a complete overhaul.
Video analysis plays a major role in many forensic fields. Many articles, publications, and presentations have covered the importance of, and difficulty in, properly establishing frame timing. In many cases, the analyst is given video files that do not contain native metadata. In other cases, the files contain video recordings of the surveillance playback monitor, which eliminates all original metadata from the video recording. These “video of video” recordings prevent an analyst from determining frame timing using metadata from the original file. However, within many of these video files, timestamp information is visually imprinted onto each frame. Analyses that rely on the timing of events captured in video may benefit from these imprinted timestamps, but for forensic purposes, it is important to establish the accuracy and reliability of these timestamps. The purpose of this research is to examine the accuracy of these timestamps and to establish whether they can be used to determine the timing
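One way such imprinted timestamps could be checked (a hedged sketch, not the method of this research; the OCR step is omitted and the frame rate and overlay resolution are assumptions) is to compare the timestamp read from each frame against the time implied by the frame index and the container's nominal frame rate:

```python
from datetime import datetime, timedelta

def timestamp_drift(imprinted, nominal_fps, start):
    """Compare burned-in timestamps with frame-index timing.

    imprinted   : list of datetime values read from the imprinted overlay,
                  one per frame (the OCR step is not shown here).
    nominal_fps : frame rate reported by the container metadata.
    start       : imprinted timestamp of frame 0, used as the reference.
    Returns per-frame drift in seconds (imprinted minus expected).
    """
    drift = []
    for i, stamp in enumerate(imprinted):
        expected = start + timedelta(seconds=i / nominal_fps)
        drift.append((stamp - expected).total_seconds())
    return drift

# Hypothetical example: a 1 s resolution overlay on a 30 fps recording
t0 = datetime(2024, 10, 1, 12, 0, 0)
stamps = [t0 + timedelta(seconds=round(i / 30)) for i in range(90)]
print(max(abs(d) for d in timestamp_drift(stamps, 30, t0)), "s max drift")
```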