Browse Topic: Imaging and visualization
In this article we discuss the development and implementation of a computer vision system used for decision-making and control of an electro-hydraulic mechanism, with the aim of guaranteeing correct operation and efficiency in a logistics project. To achieve this, we brought together a team of engineering students with knowledge of Artificial Intelligence, front-end development, and mechanical, electrical, and hydraulic devices. The project consists of installing, on a forklift that moves packaged household appliances, a system that can identify and differentiate the types of products handled in factories and distribution centers. The objective is to use this identification to control an electro-hydraulic pressure-control valve (normally driven by PWM) so that it releases only the hydraulic pressure configured for each type of packaging/product, thus squeezing (compressing) the specific load correctly without damaging it through excess pressure.
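A minimal sketch of the control idea described above, assuming a hypothetical set of product classes, pressure set-points, and a linear pressure-to-duty-cycle mapping (none of these values come from the project itself):

```python
# Sketch: map a vision-classified package type to a PWM duty cycle for a
# proportional pressure-control valve. Class names, pressure set-points and
# the duty-cycle mapping are illustrative assumptions, not project values.

# Configured clamping pressure (bar) per package type
PRESSURE_SETPOINTS_BAR = {
    "refrigerator": 35.0,
    "washing_machine": 28.0,
    "stove": 22.0,
}

VALVE_MIN_BAR, VALVE_MAX_BAR = 0.0, 50.0  # assumed valve operating range


def duty_cycle_for(package_type: str) -> float:
    """Return a PWM duty cycle (0..1) for the configured pressure of a package type."""
    pressure = PRESSURE_SETPOINTS_BAR[package_type]
    # Assume duty cycle is proportional to pressure over the valve range.
    return (pressure - VALVE_MIN_BAR) / (VALVE_MAX_BAR - VALVE_MIN_BAR)


def on_detection(package_type: str, set_pwm) -> None:
    """Callback for the vision system: apply the pressure limit for the detected product."""
    set_pwm(duty_cycle_for(package_type))


if __name__ == "__main__":
    on_detection("washing_machine", set_pwm=lambda d: print(f"PWM duty: {d:.0%}"))
```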
Innovators at NASA Johnson Space Center have developed a technology that can isolate a single direction of tensile strain in biaxially woven material. This is accomplished using traditional digital image correlation (DIC) techniques in combination with custom red-green-blue (RGB) color filtering software. DIC is a software-based method used to measure and characterize surface deformation and strain of an object. This technology was originally developed to enable the extraction of circumferential and longitudinal webbing strain information from material comprising the primary restraint layer that encompasses inflatable space structures.
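A minimal sketch of the color-filtering step that could precede DIC, assuming (hypothetically) that the two webbing directions carry distinguishable red and green speckle; the threshold and channel assignment are illustrative, not NASA's actual filter parameters:

```python
# Sketch: isolate one webbing direction of a biaxial weave by RGB filtering
# before DIC strain extraction. Assumes circumferential straps are speckled
# in red and longitudinal straps in green; the margin threshold is illustrative.
import numpy as np


def direction_mask(rgb: np.ndarray, channel: int, margin: int = 30) -> np.ndarray:
    """Boolean mask of pixels whose chosen channel dominates the other two by `margin`."""
    others = [c for c in range(3) if c != channel]
    return (rgb[..., channel].astype(int) - rgb[..., others].astype(int).max(axis=-1)) > margin


def isolate_direction(rgb: np.ndarray, channel: int) -> np.ndarray:
    """Keep only one strap direction; masked-out pixels are zeroed before DIC processing."""
    out = rgb.copy()
    out[~direction_mask(rgb, channel)] = 0
    return out


if __name__ == "__main__":
    frame = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # stand-in speckle image
    circumferential_only = isolate_direction(frame, channel=0)   # red channel assumed
    print(circumferential_only.shape)
```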
Planetary and lunar rover exploration missions can encounter environments that do not allow for navigation by typical, stereo camera-based systems. Stereo cameras struggle in areas with low ambient light (even when lit by floodlights), direct sunlight, or washed-out environments. Improved sensors are required for safe and successful rover mobility in harsh conditions. NASA Goddard Space Flight Center has developed a Space Qualified Rover LiDAR (SQRLi) system that will improve rover sensing capabilities in a small, lightweight package. The SQRLi package is designed to survive the hazardous space environment and provide valuable image data during planetary and lunar rover exploration.
Virtual reality (VR), augmented reality (AR), and mixed reality (MR) are advanced engineering techniques that merge the physical and digital worlds to improve perception. Some complex physics cannot feasibly be visualized using conventional post-processing methods, and industrial experts are already exploring VR for product development. Computational power is improving steadily, with new features that reduce the discrepancy between test and CFD, and there is growing demand to replace physical tests with accurate simulation approaches. Post-processing and data analysis are key to understanding complex physics and resolving critical failure modes; analysts spend considerable time analyzing results to provide direction, design changes, and recommendations. There is scope to use advanced features of VR, AR, and MR in CFD post-processing to find the root cause of failures that occur.
Measuring the volume of harvested material behind the machine can be beneficial for various agricultural operations, such as baling, dropping, material decomposition, cultivation, and seeding. This paper aims to investigate and determine the volume of material for use in various agricultural operations. The proposed methodology can help predict the amount of residue available in the field, assess field readiness for the next production cycle, measure residue distribution, determine hay readiness for baling, and evaluate the quantity of hay present in the field, among other applications that would benefit the customer. Efficient post-harvest residue management is essential for sustainable agriculture. This paper presents an Automated Offboard System that leverages Remote Sensing, IoT, Image Processing, and Machine Learning/Deep Learning (ML/DL) to measure the volume of harvested material in real time. The system integrates onboard cameras and satellite imagery to analyze the field.
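A minimal sketch of one volume estimate such a pipeline could produce, assuming a per-cell residue height map (for example from stereo depth or photogrammetry) over a known ground-sample distance; the grid, cell size, and heights are illustrative, not the paper's actual pipeline:

```python
# Sketch: estimate residue/windrow volume by integrating a height map over the
# field area. The height-map source and cell size are assumptions for illustration.
import numpy as np


def residue_volume_m3(height_map_m: np.ndarray, cell_size_m: float) -> float:
    """Sum per-cell heights times cell area to get total material volume (m^3)."""
    cell_area = cell_size_m ** 2
    return float(np.clip(height_map_m, 0.0, None).sum() * cell_area)


if __name__ == "__main__":
    # 200 x 200 grid at 5 cm resolution with a synthetic windrow down the middle
    grid = np.zeros((200, 200))
    grid[:, 90:110] = 0.3  # a 30 cm tall, 1 m wide strip of material
    print(f"Estimated volume: {residue_volume_m3(grid, cell_size_m=0.05):.2f} m^3")
```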
Healthcare data is growing at a faster rate compared to any other industry globally. This data, which plays an instrumental role in patient diagnosis, comes from diverse medical sources, which include magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), genomics, proteomics, wearable sensor streams and electronic health records (EHRs) that vary in structure. Since the data sets differ from each other and have multiple dimensions, they can be hard to interpret in clinical settings, especially when putting together details from different formats.
A noninvasive imaging system combines two advanced techniques to examine both the structure and chemical composition of skin cancers. This approach could improve how doctors diagnose and classify skin cancer and how they monitor treatment responses.
Given the complexity of railway engineering structures, the collaborative nature of the disciplines involved, and the high reliability required for operational safety, this paper studies a whole-domain, all-element spatial-temporal information data organization model for the Shuozhou-Huanghua Railway from the perspective of spatial-temporal information security. Taking a unified spatial-temporal benchmark as the main line, the paper associates different spatial-temporal information to form an efficient, whole-domain, all-element organization model for Shuozhou-Huanghua Railway spatial-temporal information, thereby enabling effective organization of massive spatial-temporal information across the railway's various specialties and fields. By using GIS (Geographic Information System) visualization technology, spatial analysis technology, and real-time dynamic big-data rendering technology, real-time dynamic visualization of this spatial-temporal information was realized.
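A minimal sketch of the organizing idea, assuming a record keyed by a shared spatial benchmark (line chainage) and temporal benchmark (timestamp) so that records from different disciplines can be associated and queried together; the field names and in-memory index are illustrative assumptions, not the paper's data model:

```python
# Sketch: organize multi-discipline railway records around a shared
# spatial-temporal benchmark (chainage + timestamp). Illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SpatioTemporalRecord:
    chainage_km: float            # position along the line: the shared spatial benchmark
    timestamp: str                # ISO-8601 time: the shared temporal benchmark
    discipline: str               # e.g. "track", "signalling", "catenary"
    payload: Dict[str, float] = field(default_factory=dict)


class LineIndex:
    """Hold records from all disciplines on one line, queryable by chainage segment."""

    def __init__(self) -> None:
        self._records: List[SpatioTemporalRecord] = []

    def add(self, rec: SpatioTemporalRecord) -> None:
        self._records.append(rec)
        self._records.sort(key=lambda r: (r.chainage_km, r.timestamp))

    def between(self, start_km: float, end_km: float) -> List[SpatioTemporalRecord]:
        """All records, regardless of discipline, lying within a chainage segment."""
        return [r for r in self._records if start_km <= r.chainage_km <= end_km]


if __name__ == "__main__":
    idx = LineIndex()
    idx.add(SpatioTemporalRecord(12.4, "2023-05-01T08:00:00", "track", {"gauge_mm": 1435.2}))
    idx.add(SpatioTemporalRecord(12.6, "2023-05-01T08:05:00", "catenary", {"wire_height_m": 5.3}))
    print([r.discipline for r in idx.between(12.0, 13.0)])
```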
Image sensors built into every smartphone and digital camera distinguish colors like the human eye. In our retinas, individual cone cells recognize red, green, and blue (RGB). In image sensors, individual pixels absorb the corresponding wavelengths and convert them into electrical signals.
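A minimal sketch of how conventional sensors separate color: most place a color-filter array over the photosites so each pixel samples only one of R, G, or B, which are later interpolated into a full-color image. The RGGB Bayer layout assumed here is the common convention, not necessarily the sensor discussed in the article:

```python
# Sketch: split a raw RGGB Bayer mosaic into its red, green, and blue sample
# planes. Each photosite sits under one color filter, so the raw frame
# interleaves the three colors. Layout and bit depth are assumptions.
import numpy as np


def split_rggb(raw: np.ndarray):
    """Split a raw RGGB mosaic into quarter-resolution red, green, and blue planes."""
    r = raw[0::2, 0::2]                             # red photosites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # two green photosites per 2x2 block
    b = raw[1::2, 1::2]                             # blue photosites
    return r, g, b


if __name__ == "__main__":
    raw = np.random.randint(0, 4096, size=(8, 8)).astype(float)  # 12-bit stand-in frame
    r, g, b = split_rggb(raw)
    print(r.shape, g.shape, b.shape)
```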
A new bioimaging device can operate with significantly lower power and in an entirely non-mechanical way. It could one day improve the detection of eye and even heart conditions. The device uses a process called electrowetting to change the surface shape of a liquid to perform optical functions. Because the device does not use scanning mirrors, it requires less electrical power than other devices used for OCT and bioimaging. To test the device's ability to perform biomedical imaging, the researchers turned to zebrafish, focusing on identifying where the cornea, iris, and retina were located. The two benchmarks the group hoped to achieve were an axial resolution of 10 μm and a lateral resolution of around 5 μm.
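A small worked example relating the stated 10 μm axial-resolution benchmark to the OCT source bandwidth it implies, using the standard free-space relation δz = (2 ln 2 / π) · λ₀² / Δλ; the 850 nm center wavelength is an assumed value for illustration only, not a parameter reported for this device:

```python
# Sketch: OCT source bandwidth required for a target axial resolution,
#     delta_z = (2 * ln 2 / pi) * lambda0**2 / delta_lambda.
# Center wavelength is an assumption; the 10 um target comes from the article.
import math


def required_bandwidth_nm(center_wavelength_nm: float, axial_resolution_um: float) -> float:
    """Source bandwidth (FWHM, nm) needed for a target free-space axial resolution."""
    lam_um = center_wavelength_nm / 1000.0
    delta_lam_um = (2 * math.log(2) / math.pi) * lam_um ** 2 / axial_resolution_um
    return delta_lam_um * 1000.0


if __name__ == "__main__":
    print(f"{required_bandwidth_nm(850.0, 10.0):.1f} nm bandwidth for 10 um axial resolution")
```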
Researchers have developed a prototype imaging system that could significantly improve doctors’ ability to detect cancerous tissue during endoscopic procedures. This approach combines light-emitting diodes (LEDs) with hyperspectral imaging technology to create detailed maps of tissue properties that are invisible to conventional endoscopic cameras.
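A minimal sketch of how a hyperspectral cube can be turned into a per-pixel tissue-property map; the plain two-band reflectance ratio and the chosen wavelengths are illustrative stand-ins, not the researchers' actual reflectance-to-property model:

```python
# Sketch: reduce a hyperspectral cube (H x W x bands) to a simple property map
# by ratioing two wavelength bands. Wavelengths and the ratio metric are assumptions.
import numpy as np


def band_index(wavelengths_nm: np.ndarray, target_nm: float) -> int:
    """Index of the band closest to a target wavelength."""
    return int(np.argmin(np.abs(wavelengths_nm - target_nm)))


def ratio_map(cube: np.ndarray, wavelengths_nm: np.ndarray,
              num_nm: float = 560.0, den_nm: float = 650.0) -> np.ndarray:
    """Per-pixel ratio of two reflectance bands; highlights spectral contrast between tissues."""
    num = cube[..., band_index(wavelengths_nm, num_nm)]
    den = cube[..., band_index(wavelengths_nm, den_nm)]
    return num / np.clip(den, 1e-6, None)


if __name__ == "__main__":
    wavelengths = np.linspace(450, 720, 28)          # 28 bands spanning 450-720 nm
    cube = np.random.rand(64, 64, wavelengths.size)   # synthetic reflectance cube
    print(ratio_map(cube, wavelengths).shape)
```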
In today’s digital age, the use of “Internet-of-Things” devices (embedded with software and sensors) has become widespread. These devices include wireless equipment, autonomous machinery, wearable sensors, and security systems. Because of their intricate structures and properties, there is a need to scrutinize them closely to assess their safety and utility and rule out any potential defects. At the same time, damage to the device during inspection must be avoided.