Using Generative Models to Synthesize Multi-Component Asset Images for Training Defect Inspection Models

2024-26-0474

06/01/2024

Event
AeroCON 2024
Abstract
Industries have been increasingly adopting AI-based computer vision models for automated asset defect inspection. A challenging aspect within this domain is the inspection of composite assets consisting of multiple components, each of which is an object of interest for inspection, with its own structural variations, defect types and signatures. Training vision models for such an inspection process involves numerous challenges around data acquisition, such as insufficient volume, inconsistent positioning, poor quality and imbalance owing to inadequate image samples of infrequently occurring defects. Approaches to augmenting the dataset through Standard Data Augmentation (SDA) methods (image transformations such as flipping, rotation, contrast adjustment, etc.) have had limited success. When dealing with images of such composite assets, it is difficult to correct the data imbalance at the component level using image transformations, because such transformations apply to all components within an image. While existing methods increase dataset volume, the persisting data imbalance leads to a biased model whose accuracy is adversely impacted. This paper presents an approach to synthesizing composite asset images using Deep Learning, focusing on organically inserting specific components with defect signatures into a composite image without compromising the integrity of the composite asset. The described approach leverages diffusion-based models to generate images, enabling granular control over the location and appearance of the synthesized components in the image. Exercising this granular control over individual components of an image leads to the creation of a refined dataset that is balanced across component and defect categories. The utility of the approach is established through experiments conducted with real-world asset images.
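The component-level imbalance the abstract describes can be made concrete with a simple balancing computation: given sample counts per (component, defect-type) category, determine how many synthetic images each category needs to reach parity with the largest one. The sketch below is illustrative only, with hypothetical component names and counts; it is not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): given sample counts
# per (component, defect-type) category, compute how many synthetic images
# to generate so every category matches the size of the largest one.
from collections import Counter

def synthesis_plan(counts: Counter) -> dict:
    """Return the number of synthetic samples needed per under-represented category."""
    target = max(counts.values())          # balance up to the largest class
    return {cat: target - n for cat, n in counts.items() if n < target}

# Hypothetical counts for a two-component asset with two defect types each.
counts = Counter({
    ("weld", "crack"): 120,
    ("weld", "porosity"): 15,     # rare defect -> heavily under-represented
    ("bolt", "corrosion"): 90,
    ("bolt", "missing"): 8,
})
plan = synthesis_plan(counts)
# The diffusion-based generator would then be asked to insert, e.g., 105
# synthetic "porosity" weld instances and 112 "missing" bolt instances.
```

A targeted plan like this is exactly what per-component generation enables and whole-image SDA transformations cannot: each synthesized image can add samples to a specific under-represented category without inflating the majority classes.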
The synthetic data is assessed through metrics that measure how well it represents the real data distribution, as well as through its impact on downstream asset inspection models via accuracy metrics. The paper concludes by contrasting the efficacy of the presented approach against other families of algorithms, including Generative Adversarial Networks.
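The abstract does not name the specific distribution metrics used; a common choice for generative models is the Fréchet distance between Gaussian fits of real and synthetic feature embeddings (the basis of FID). The sketch below is a simplified version assuming diagonal covariances (full FID uses a matrix square root of the covariance product), with random stand-in embeddings; the feature extractor itself is outside its scope.

```python
# Simplified sketch of a Frechet-style distance between real and synthetic
# feature distributions, assuming diagonal covariances. Full FID instead uses
# sqrtm of the covariance product. Embeddings here are random stand-ins.
import numpy as np

def frechet_diagonal(real: np.ndarray, synth: np.ndarray) -> float:
    """d^2 = ||mu_r - mu_s||^2 + sum(var_r + var_s - 2*sqrt(var_r * var_s))."""
    mu_r, mu_s = real.mean(axis=0), synth.mean(axis=0)
    var_r, var_s = real.var(axis=0), synth.var(axis=0)
    return float(((mu_r - mu_s) ** 2).sum()
                 + (var_r + var_s - 2.0 * np.sqrt(var_r * var_s)).sum())

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 16))    # stand-in feature embeddings
close = rng.normal(0.05, 1.0, size=(500, 16))  # synthetic set near real data
far = rng.normal(1.0, 2.0, size=(500, 16))     # synthetic set off-distribution
# A lower distance indicates synthetic data closer to the real distribution.
```

In practice such a distance would be computed on embeddings from a pretrained vision backbone, complementing the downstream accuracy metrics the paper reports.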
Details
DOI
https://doi.org/10.4271/2024-26-0474
Pages
9
Citation
Bhate, U., Jha, A., Kalyan, V., Gupta, R. et al., "Using Generative Models to Synthesize Multi-Component Asset Images for Training Defect Inspection Models," SAE Technical Paper 2024-26-0474, 2024, https://doi.org/10.4271/2024-26-0474.
Additional Details
Publisher
SAE International
Published
Jun 01, 2024
Product Code
2024-26-0474
Content Type
Technical Paper
Language
English