Virtual Recording Generation Using Generative AI and Carla Simulator

2024-28-0261

12/05/2024

Event
11th SAEINDIA International Mobility Conference (SIIMC 2024)
Abstract
To establish and validate new systems incorporated into next-generation vehicles, it is important to understand the actual scenarios that autonomous vehicles are likely to encounter. Consequently, Field Operational Tests (FOT) must be run. FOT involves many vehicles and large acquisition areas, ensuring the capability and suitability of a continuous function and thus guaranteeing the randomization of test conditions. Capturing FOT and use-case scenario recordings (a use case is a software testing technique designed to ensure that the system under test meets and exceeds the stakeholders' expectations) is very expensive because of the material required: vehicles, measurement equipment, headcount, data storage capacity, and trained drivers/professionals. Moreover, a robust working vehicle setup is not always available, mileage is directly proportional to time, and recording cannot be scaled up due to physical limitations. During the early development phase, ground truth data is not available, and data reused from other projects may not fully match current project requirements. Not every event scenario or weather condition can be ensured during recording capture. In such cases, synthetic/virtual recordings, which can accurately mimic real conditions on a test bench, become very useful and address all of the aforementioned constraints. Car Learning to Act (CARLA) [1], an open-source driving simulator used for the development, training, and validation of autonomous driving systems, is extended here to generate synthetic/virtual data/recordings by integrating Generative Artificial Intelligence (Gen AI), particularly Generative Adversarial Networks (GANs) [2] and Retrieval Augmented Generation (RAG) [3], which are deep learning models.
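The scalability argument above can be illustrated with a plain-Python sketch (not code from the paper): a handful of scenario axes already multiplies into many distinct virtual recordings, a combinatorial coverage that physical FOT cannot match. All axis names and values below are hypothetical, loosely modeled on CARLA's WeatherParameters.

```python
from itertools import product

# Hypothetical scenario axes, loosely modeled on CARLA's WeatherParameters.
WEATHER = ["clear", "rain", "fog", "snow"]
SUN_ALTITUDE_DEG = [-10, 15, 45, 90]     # negative altitude means night
PRECIPITATION_PCT = [0, 50, 100]         # precipitation intensity in percent
PEDESTRIAN_DENSITY = ["low", "medium", "high"]

def enumerate_scenarios():
    """Yield every combination of the scenario axes as a scenario dict."""
    for weather, sun, rain, peds in product(
        WEATHER, SUN_ALTITUDE_DEG, PRECIPITATION_PCT, PEDESTRIAN_DENSITY
    ):
        yield {
            "weather": weather,
            "sun_altitude_deg": sun,
            "precipitation_pct": rain,
            "pedestrian_density": peds,
        }

scenarios = list(enumerate_scenarios())
print(len(scenarios))  # 4 * 4 * 3 * 3 = 144 distinct virtual scenarios
```

Each dict could then be mapped onto simulator settings; even this toy grid yields 144 reproducible scenarios, whereas capturing the same spread of weather and lighting on real roads would take months of driving.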
Creating synthetic data from vehicle models becomes more efficient and reliable because Gen AI can hold and reproduce far more data during scenario development than a developer or tester can. A Large Language Model (LLM) [4] takes user prompts as input and generates scenarios, producing a vast number of high-quality, distinct, and realistic driving scenarios that closely resemble real-world driving data. Gen AI [5] empowers the user to generate not only environment conditions (such as different weather and lighting conditions) but also dynamic elements such as the behavior of other vehicles and pedestrians. Synthetic/virtual recordings [6] generated with Gen AI can be used to train and validate virtual vehicle models and to supplement FOT/use-case data, indirectly proving the real-world performance of tasks such as object detection, object recognition, image segmentation, and decision-making in autonomous vehicles. Augmenting an LLM with CARLA involves training generative models on real-world driving data using RAG, which allows the model to generate new synthetic instances that resemble real-world conditions and scenarios.
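The RAG coupling described above can be sketched, under heavy simplification, as retrieval over a small corpus of recorded scenario descriptions whose best match seeds a new synthetic scenario. The corpus entries, the keyword-overlap scoring, and the field names are all hypothetical; a real pipeline would use embedding-based search and an actual LLM rather than this toy matcher.

```python
# Hypothetical store of real-world recording descriptions (the retrieval corpus).
CORPUS = [
    {"id": "rec-001", "text": "heavy rain on highway at night with dense traffic"},
    {"id": "rec-002", "text": "clear noon urban intersection with many pedestrians"},
    {"id": "rec-003", "text": "fog on rural road at dawn with cyclists"},
]

def retrieve(prompt: str) -> dict:
    """Return the corpus entry with the largest keyword overlap with the prompt."""
    words = set(prompt.lower().split())
    return max(CORPUS, key=lambda entry: len(words & set(entry["text"].split())))

def generate_scenario(prompt: str) -> dict:
    """Compose a synthetic scenario spec seeded by the retrieved real recording."""
    seed = retrieve(prompt)
    return {
        "prompt": prompt,
        "seed_recording": seed["id"],   # grounds the synthetic output in real data
        "description": seed["text"],
    }

spec = generate_scenario("simulate rain at night on a highway")
print(spec["seed_recording"])  # rec-001, the rainy night-time highway recording
```

Grounding generation in retrieved real recordings is what keeps the synthetic scenarios plausible; an ungrounded generator is free to produce physically inconsistent scenes.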
Details
DOI
https://doi.org/10.4271/2024-28-0261
Pages
15
Citation
Sehgal, V., and Sekaran, N., "Virtual Recording Generation Using Generative AI and Carla Simulator," SAE Technical Paper 2024-28-0261, 2024, https://doi.org/10.4271/2024-28-0261.
Additional Details
Published
Dec 05, 2024
Product Code
2024-28-0261
Content Type
Technical Paper
Language
English