More effective training and robust validation of autonomous vehicles
Simulations are essential tools in the development and validation of autonomous vehicles (AVs). Platforms such as CARLA allow researchers and engineers to model complex traffic scenarios under controlled and reproducible conditions. However, a key limitation remains: while the underlying physics and scenario logic are well captured, the visual realism of simulated sensor data often lags behind real-world perception. This gap significantly limits the transferability of perception and end-to-end driving models trained in simulation to deployment in real traffic environments.
This research project aims to address the Sim2Real gap by coupling physics-based simulation with recent advances in generative AI. In particular, we focus on neural rendering techniques, including Neural Radiance Fields (NeRFs), Gaussian Splatting, and diffusion-based generative models, to improve the photorealism of simulated camera and LiDAR data. These methods enable the synthesis of images and point clouds that closely match real-world appearance while remaining consistent with the geometry, dynamics, and semantics of the simulated environment.
The core idea is to integrate neural rendering pipelines directly into established simulators such as CARLA, using simulator outputs (e.g., geometry, depth, semantics, and trajectories) as structured conditioning signals for generative models. This allows the generation of visually realistic scenes that preserve physical correctness and temporal coherence, while enabling systematic variation of appearance factors such as lighting, weather, materials, and sensor characteristics.
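As a minimal sketch of what "structured conditioning signals" could look like in practice: the helper below packs a simulator depth map and semantic label map into a single multi-channel tensor, the typical input format for a ControlNet-style conditioned diffusion model. The function name `build_conditioning`, the 100 m depth normalization, and the class count are illustrative assumptions, not part of the project's actual pipeline (23 matches the semantic tag count of recent CARLA releases, but verify against your version).

```python
import numpy as np

def build_conditioning(depth_m: np.ndarray,
                       semantic_ids: np.ndarray,
                       num_classes: int = 23,
                       max_depth_m: float = 100.0) -> np.ndarray:
    """Stack normalized depth and one-hot semantics into a (C, H, W) tensor.

    depth_m:      (H, W) metric depth from the simulator's depth camera.
    semantic_ids: (H, W) integer class labels from the semantic camera.
    """
    if depth_m.shape != semantic_ids.shape:
        raise ValueError("depth and semantic maps must share the same (H, W)")
    # Normalize metric depth to [0, 1]; clip returns beyond the cutoff.
    depth = np.clip(depth_m / max_depth_m, 0.0, 1.0)[None, ...]
    # One-hot encode the semantic class IDs: (H, W) -> (H, W, C).
    onehot = np.eye(num_classes, dtype=np.float32)[semantic_ids]
    onehot = np.transpose(onehot, (2, 0, 1))  # (H, W, C) -> (C, H, W)
    # Concatenate along the channel axis: 1 depth channel + C class channels.
    return np.concatenate([depth.astype(np.float32), onehot], axis=0)

# Toy 4x4 frame: uniform 50 m depth, every pixel labeled class 0.
depth = np.full((4, 4), 50.0)
sem = np.zeros((4, 4), dtype=np.int64)
cond = build_conditioning(depth, sem)
print(cond.shape)  # (24, 4, 4)
```

In an actual pipeline, additional channels (surface normals, optical flow derived from trajectories, previous-frame latents for temporal coherence) would be stacked the same way before being fed to the generative model.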
By combining CARLA's scenario control and physical fidelity with neural rendering and diffusion-based synthesis, the proposed framework aims to produce simulation data that is both behaviorally meaningful and visually realistic. This, in turn, is expected to improve the generalization of AV perception and driving models, reduce reliance on expensive real-world data collection, and support reproducible, high-fidelity scenario-based testing for safety-critical systems.
We are looking for companies that want to analyze and optimize their existing requirements and specification practices in the field of simulation technology, 3D rendering, and synthetic data generation. Your insights will contribute to the development of practical, efficient solutions for improving simulation technologies and creating realistic, data-driven models.
Get in touch with us and actively shape the future of autonomous driving technologies. Companies that become project partners will have the opportunity to test future tools that we will develop to improve simulation environments and generate synthetic data.
Please fill in the form below. We will get back to you as soon as possible and inform you in detail about the next steps.