
Partner call

More effective training and robust validation of autonomous vehicles

Integration of GenAI-Based Neural Rendering into Simulation Platforms

Simulations are essential tools in the development and validation of autonomous vehicles (AVs). Platforms such as CARLA allow researchers and engineers to model complex traffic scenarios under controlled and reproducible conditions. However, a key limitation remains: while the underlying physics and scenario logic are well captured, the visual realism of simulated sensor data often lags behind real-world perception. This gap significantly limits the transferability of perception and end-to-end driving models trained in simulation to deployment in real traffic environments.

This research project aims to address the Sim2Real gap by coupling physics-based simulation with recent advances in generative AI. In particular, we focus on neural rendering techniques, including Neural Radiance Fields (NeRFs), Gaussian Splatting, and diffusion-based generative models, to improve the photorealism of simulated camera and LiDAR data. These methods enable the synthesis of images and point clouds that closely match real-world appearance while remaining consistent with the geometry, dynamics, and semantics of the simulated environment.
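To give a flavor of how these techniques work, the following is a minimal, illustrative sketch of the volume-rendering step at the heart of NeRF-style neural rendering: density and color samples along a camera ray are alpha-composited into a single pixel color. The function name and array layout are our own for illustration; a real implementation would run on GPU inside a learned model.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite density/color samples along one ray (NeRF-style).

    sigmas: (N,) volume densities at the sample points
    colors: (N, 3) RGB colors at the sample points
    deltas: (N,) distances between consecutive samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    # Final pixel color is the weight-blended sum of sample colors
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

For example, a nearly opaque first sample dominates the result, as one would expect from a solid surface close to the camera.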

The core idea is to integrate neural rendering pipelines directly into established simulators such as CARLA, using simulator outputs (e.g., geometry, depth, semantics, and trajectories) as structured conditioning signals for generative models. This allows the generation of visually realistic scenes that preserve physical correctness and temporal coherence, while enabling systematic variation of appearance factors such as lighting, weather, materials, and sensor characteristics.
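As a hypothetical sketch of this conditioning step, the snippet below stacks two typical simulator outputs, a depth map and a per-pixel semantic label map, into a single multi-channel tensor that a conditioned generative model (e.g., a diffusion U-Net) could consume. The function name, the class count, and the normalization range are illustrative assumptions, not the project's actual interface; CARLA's semantic tag set varies by version.

```python
import numpy as np

NUM_CLASSES = 13   # hypothetical class count; CARLA's tag set differs by version

def build_conditioning(depth_m, semantic_ids, max_depth=100.0):
    """Stack simulator outputs into a (C, H, W) conditioning tensor.

    depth_m:      (H, W) metric depth in meters
    semantic_ids: (H, W) integer class labels in [0, NUM_CLASSES)
    """
    # Normalize depth to [0, 1] against an assumed maximum range
    d = np.clip(depth_m / max_depth, 0.0, 1.0)[None].astype(np.float32)
    # One-hot encode the semantic map: one channel per class
    onehot = (np.arange(NUM_CLASSES)[:, None, None]
              == semantic_ids[None]).astype(np.float32)
    # Channel-concatenate: 1 depth channel + NUM_CLASSES semantic channels
    return np.concatenate([d, onehot], axis=0)
```

In practice, further channels (surface normals, optical flow from trajectories, previous frames for temporal coherence) could be appended the same way, which is what makes the simulator's structured outputs so useful as conditioning signals.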

By combining CARLA's scenario control and physical fidelity with neural rendering and diffusion-based synthesis, the proposed framework aims to produce simulation data that is both behaviorally meaningful and visually realistic. This, in turn, is expected to improve the generalization of AV perception and driving models, reduce reliance on expensive real-world data collection, and support reproducible, high-fidelity scenario-based testing for safety-critical systems.

Key information for project participants

Project duration

  • Approx. 2 years

Target group

We are looking for SMEs or companies with expertise in:

  • Simulation technology
  • 3D rendering
  • Synthetic data generation

Your benefits

  • Access to the latest methods for improving simulation technologies and synthetic data generation
  • Testing opportunity for future tools to optimize simulation environments and 3D rendering technologies
  • Early access to current research results in the field of autonomous driving and synthetic data generation
  • Individual analysis of the current state of your simulation practices and 3D rendering processes in terms of realistic, physically consistent data

Become a project partner!

We are looking for companies that want to analyze and optimize their existing requirements and specification practices in the fields of simulation technology, 3D rendering, and synthetic data generation. Your insights will contribute to the development of practical, efficient solutions for improving simulation technologies and creating realistic, data-driven models.

Are you interested?

Get in touch with us and actively shape the future of autonomous driving technologies. Companies that become project partners will have the opportunity to test future tools that we will develop to improve simulation environments and generate synthetic data.

Register now!

Please fill in the form below. We will get back to you as soon as possible and inform you in detail about the next steps.

*Mandatory fields

Your contact

Jana Kümmel

+49 89 3603522 146
kuemmel@fortiss.org


Your contact

Prof. Dr. Andrea Stocco

+49 89 3603522 271
stocco@fortiss.org