Measuring the volume of harvested material behind the machine can benefit a range of agricultural operations, such as baling, dropping, material decomposition, cultivation, and seeding. This paper investigates and determines the volume of material for use in these operations. The proposed methodology can help predict the amount of residue available in the field, assess field readiness for the next production cycle, measure residue distribution, determine hay readiness for baling, and evaluate the quantity of hay present in the field, among other applications that would benefit the customer. Efficient post-harvest residue management is essential for sustainable agriculture. This paper presents an Automated Offboard System that leverages Remote Sensing, IoT, Image Processing, and Machine Learning/Deep Learning (ML/DL) to measure the volume of harvested material in real time. The system integrates onboard cameras and satellite imagery to analyze the field
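As a rough illustration only (not the paper's actual pipeline), the sketch below estimates material volume by integrating a per-pixel height map over the ground plane; the function name, ground resolution, and data are hypothetical.

    import numpy as np

    def material_volume(height_map_m, pixel_area_m2, ground_level_m=0.0):
        # Estimate material volume (m^3) from a per-pixel height map,
        # e.g. derived from stereo cameras or elevation data.
        heights = np.clip(height_map_m - ground_level_m, 0.0, None)  # ignore dips below the reference plane
        return float(heights.sum() * pixel_area_m2)

    # Synthetic example: a 2 cm ground-resolution height map of a hay windrow
    hmap = np.random.uniform(0.0, 0.4, size=(500, 200))  # heights in metres
    print(f"Estimated volume: {material_volume(hmap, 0.02 * 0.02):.2f} m^3")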
Engineers have developed a smart capsule called PillTrek that can measure pH, temperature, and a variety of biomarkers. It incorporates simple, inexpensive sensors into a miniature wireless electrochemical workstation built on low-power electronics. PillTrek measures 7 mm in diameter and 25 mm in length, making it smaller than commercially available capsule cameras used for endoscopy yet capable of executing a range of electrochemical measurements.
The U-Shift IV represents the latest evolution in modular urban mobility solutions, offering significant advancements over its predecessors. This innovative vehicle concept introduces a distinct separation between the drive module, known as the driveboard, and the transport capsules. The driveboard contains all the necessary components for autonomous driving, allowing it to operate independently. This separation not only enables versatile applications - such as easily swapping capsules for passenger or goods transportation - but also significantly improves the utilization of the driveboard. By allowing a single driveboard to be paired with different capsules, operational efficiency is maximized, enabling continuous deployment of driveboards while the individual capsules are in use. The primary focus of U-Shift IV was to obtain a permit for operating at the Federal Garden Show 2023. To achieve this goal, we built the vehicle around the specific requirements for semi-public road
With 2D cameras and space robotics algorithms, astronautics engineers at Stanford have created a navigation system able to manage multiple satellites using visual data only. They recently tested it in space for the first time. Stanford University, Stanford, CA. Someday, instead of large, expensive individual space satellites, teams of smaller satellites - known by scientists as a “swarm” - will work in collaboration, enabling greater accuracy, agility, and autonomy. Among the scientists working to make these teams a reality are researchers at Stanford University's Space Rendezvous Lab, who recently completed the first-ever in-orbit test of a prototype system able to navigate a swarm of satellites using only visual information shared through a wireless network. “It's a milestone paper and the culmination of 11 years of effort by my lab, which was founded with this goal of surpassing the current state of the art and practice in distributed autonomy in space,” said Simone D'Amico
In October 2024, Kongsberg NanoAvionics discovered damage to their MP42 satellite, and used the discovery as an opportunity to raise awareness of the need to reduce space debris generated by satellites. Kongsberg NanoAvionics, Vilnius, Lithuania. Our MP42 satellite, which launched into low Earth orbit (LEO) two and a half years ago aboard the SpaceX Transporter-4 mission, recently took an unexpected hit from a small piece of space debris or a micrometeoroid. The impact created a 6 mm hole, roughly the size of a chickpea, in one of its solar panels. Despite this damage, the satellite continued performing its mission without interruption, and we only discovered the impact thanks to an image taken by its onboard selfie camera in October of 2024. It is challenging to pinpoint exactly when the impact occurred because MP42's last selfie was taken a year and a half earlier, in April of 2023.
This study presents a novel methodology for optimizing the acoustic performance of rotating machinery by combining scattered 3D sound intensity data with numerical simulations. The method is demonstrated on the rear axle of a truck. Using Scan&Paint 3D, sound intensity data is rapidly acquired over a large spatial area with the assistance of a 3D sound intensity probe and infrared stereo camera. The experimental data is then integrated into far-field radiation simulations, enabling detailed analysis of the acoustic behavior and accurate predictions of far-field sound radiation. This hybrid approach offers a significant advantage for assessing complex acoustic sources, allowing for quick and reliable evaluation of noise mitigation solutions.
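For context, the quantity such a probe acquires at each scan position is the time-averaged active sound intensity vector; the minimal sketch below assumes synchronized pressure and particle-velocity signals and is not part of the Scan&Paint 3D software.

    import numpy as np

    def active_intensity(pressure, velocity_xyz):
        # Time-averaged active sound intensity vector (W/m^2).
        # pressure: (N,) sound pressure p(t) in Pa.
        # velocity_xyz: (3, N) particle velocity u(t) in m/s for x, y, z.
        return (pressure * velocity_xyz).mean(axis=1)

    # Synthetic 1 kHz tone measured at one scan position (placeholder data)
    fs, f = 48_000, 1_000.0
    t = np.arange(4_800) / fs
    p = 0.5 * np.sin(2 * np.pi * f * t)
    u = np.vstack([0.001 * np.sin(2 * np.pi * f * t + phi) for phi in (0.0, 0.3, 1.2)])
    print("I =", active_intensity(p, u), "W/m^2")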
Design verification and quality control of automotive components require the analysis of the source location of ultra-short sound events, for instance the engaging event of an electromechanical clutch or the clicking noise of the aluminium frame of a passenger car seat under vibration. State-of-the-art acoustic cameras allow for a frame rate of about 100 acoustic images per second. Considering that most of the sound events introduced above can last far less than 10 ms, an acoustic image generated at this rate resembles a hard-to-interpret overlay of multiple sources on the structure under test along with reflections from the surrounding test environment. This contribution introduces a novel method for visualizing impulse-like sound emissions from automotive components at 10x the frame rate of traditional acoustic cameras. A time resolution of less than 1 ms ultimately allows for the true localization of the initial and subsequent sound events as well as a clear separation of direct from
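As background only, a conventional acoustic image is typically formed by delay-and-sum beamforming over a grid of focus points; the generic sketch below (hypothetical array geometry and signals, not the novel method introduced in this contribution) shows the focused output whose analysis window length sets the achievable time resolution.

    import numpy as np

    def delay_and_sum(signals, mic_xyz, focus_xyz, fs, c=343.0):
        # Delay-and-sum beamformer output focused on one point in space.
        # signals: (M, N) microphone signals; mic_xyz: (M, 3) positions in metres.
        delays = np.linalg.norm(mic_xyz - focus_xyz, axis=1) / c     # propagation delay per mic, s
        shifts = np.round((delays - delays.min()) * fs).astype(int)  # relative delay in samples
        n = signals.shape[1] - shifts.max()
        return sum(sig[s:s + n] for sig, s in zip(signals, shifts)) / len(signals)

    # Evaluating short windows (e.g. < 1 ms) of this output over a focus grid
    # yields one acoustic image per window (synthetic placeholder data below).
    fs = 192_000
    signals = np.random.randn(32, 2048)
    mics = np.random.uniform(-0.2, 0.2, size=(32, 3))
    out = delay_and_sum(signals, mics, np.array([0.0, 0.0, 1.0]), fs)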
The segment manipulator machine, a large custom-built apparatus, is used for assembling and disassembling heavy tooling, specifically carbon fiber forms. This complex yet slow-moving machine had been in service for nineteen years, with many control components becoming obsolete and difficult to replace. The customer engaged Electroimpact to upgrade the machine using the latest state-of-the-art controls, aiming to extend the system's operational life by at least another two decades. The program from the previous control system could not be reused, necessitating a complete overhaul.
Video analysis plays a major role in many forensic fields. Many articles, publications, and presentations have covered the importance and difficulty of properly establishing frame timing. In many cases, the analyst is given video files that do not contain native metadata. In other cases, the files contain video recordings of the surveillance playback monitor, which eliminates all original metadata from the video recording. These “video of video” recordings prevent an analyst from determining frame timing using metadata from the original file. However, within many of these video files, timestamp information is visually imprinted onto each frame. Analyses that rely on timing of events captured in video may benefit from these imprinted timestamps, but for forensic purposes, it is important to establish the accuracy and reliability of these timestamps. The purpose of this research is to examine the accuracy of these timestamps and to establish whether they can be used to determine the timing
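By way of illustration, one way to quantify such accuracy is to compare OCR'd on-frame timestamps against a trusted reference clock recorded in view of the camera; the sketch below uses hypothetical data and is not the study's actual procedure.

    from datetime import datetime

    def timestamp_drift(imprinted, reference):
        # Per-frame offset (seconds) between imprinted on-frame timestamps
        # and a trusted reference clock, both given as 'HH:MM:SS' strings.
        fmt = "%H:%M:%S"
        return [
            (datetime.strptime(i, fmt) - datetime.strptime(r, fmt)).total_seconds()
            for i, r in zip(imprinted, reference)
        ]

    offsets = timestamp_drift(["12:00:01", "12:00:01", "12:00:02"],
                              ["12:00:01", "12:00:02", "12:00:03"])
    print(offsets)  # [0.0, -1.0, -1.0] -> imprinted clock lags by up to 1 s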
A team led by University of Maryland computer scientists invented a camera mechanism that improves how robots see and react to the world around them. Inspired by how the human eye works, their innovative camera system mimics the tiny involuntary movements used by the eye to maintain clear and stable vision over time. The team’s prototyping and testing of the camera — called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV) — was detailed in a paper published in the journal Science Robotics in May 2024.
Seoul National University College of Engineering announced that researchers from the Department of Electrical and Computer Engineering’s Optical Engineering and Quantum Electronics Laboratory have developed an optical design technology that dramatically reduces camera volume through a folded lens system utilizing “metasurfaces,” a next-generation nano-optical device. By arranging metasurfaces on a glass substrate so that light is repeatedly reflected and folded within the substrate, the researchers realized a lens system with a thickness of 0.7 mm, much thinner than existing refractive lens systems. The research, which was supported by the Samsung Future Technology Development Program and the Institute of Information & Communications Technology Planning & Evaluation (IITP), was published on October 30 in the journal Science Advances. Traditional cameras are designed to stack multiple glass lenses to refract light when capturing images. While
Sometimes, we try to capture a QR code with a good digital camera on a smartphone, but the reading still fails. This usually happens when the QR code itself is of poor image quality, or when it has been printed on surfaces that are not flat — deformed or with irregularities of unknown pattern — such as the wrapping of a courier package or a tray of prepared food. Now, a team from the University of Barcelona (UB) and the Universitat Oberta de Catalunya (UOC) has designed a methodology that facilitates the recognition of QR codes in these physical environments, where reading is more complicated.
The flow structure and unsteadiness of shock wave–boundary layer interaction (SWBLI) have been studied using rainbow schlieren deflectometry (RSD), ensemble averaging, fast Fourier transform (FFT), and snapshot proper orthogonal decomposition (POD) techniques. Shockwaves were generated in a test section by subjecting a Mach 3.1 free-stream flow to a 12° isosceles triangular prism. The RSD pictures, captured with a high-speed camera at a rate of 5000 frames/s, were used to determine the transverse ray deflections at each pixel of the pictures. The interaction region structure is described statistically with the ensemble average and root mean square deflections. The FFT technique was used to determine the frequency content of the flow field. Results indicate that dominant frequencies were in the range of 400–900 Hz. The Strouhal numbers calculated using the RSD data were in the range of 0.025–0.07. The snapshot POD technique was employed to analyze flow structures and their associated
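For reference, the Strouhal number here follows the usual definition St = f·L/U∞; the short calculation below uses placeholder values for the characteristic length and free-stream velocity (neither is given in this abstract) and merely illustrates the arithmetic behind the reported range.

    # Strouhal number St = f * L / U_inf for the reported 400-900 Hz band.
    # L and U_inf are assumed placeholder values, not the paper's actual scales.
    L = 0.06            # m, assumed characteristic length of the interaction
    U_inf = 3.1 * 310   # m/s, Mach 3.1 with an assumed speed of sound of ~310 m/s
    for f in (400.0, 900.0):
        print(f"f = {f:.0f} Hz -> St = {f * L / U_inf:.3f}")
    # Prints St values of roughly 0.025 and 0.056, of the same order as the
    # 0.025-0.07 range reported in the study.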