Why is bridging the simulation-reality gap such a critical challenge?
Simulation-based testing plays a crucial role in the development of cyber-physical systems such as autonomous vehicles, intelligent robots, and smart infrastructure, because it allows for safe, scalable, and cost-effective validation. However, simulations often fail to fully capture the complexity of the real world, leading to what is known as the "simulation-reality gap" (sim2real gap).
This gap arises from several factors. The perception gap occurs when simulated sensors fail to realistically model real-world influences such as noise, occlusions, or lighting conditions. The actuation gap refers to discrepancies between the idealized controls in the simulation and the actual behavior of the hardware, such as delays or mechanical inaccuracies. The scenario gap concerns the difficulty of fully capturing the variety and complexity of real-world environments in simulation, especially rare events. Such discrepancies can cause a system's capabilities to be overestimated (overconfidence) or underestimated (underconfidence) in simulation, so that it fails in reality or is wrongly rejected.
We bridge this gap by combining realistic, data-driven scenarios with high-resolution simulations, generative models, digital twins, hardware-in-the-loop setups, and mixed-reality environments. High-resolution simulations approximate physical reality more closely through detailed sensor and environment models. Data-driven scenarios increase realism by building on real-world recordings. Generative models translate synthetic data into more realistic inputs. Digital twins replicate physical systems virtually and synchronize with real-world data. Hardware-in-the-loop setups connect real components with simulations for testing under realistic conditions. Mixed reality blends real sensors with simulated elements for versatile testing.
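As a toy illustration of the hardware-in-the-loop principle mentioned above, the Python sketch below closes a control loop between a simulated vehicle and a controller function standing in for code that would run on the real hardware. All names, gains, and limits are illustrative assumptions, not part of any specific fortiss tooling.

```python
def simulate_plant(position, velocity, accel, dt):
    """Minimal simulated vehicle: integrate acceleration over one time step."""
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

def controller(position, velocity, target):
    """Stand-in for software on real hardware: PD-style position control,
    with acceleration limited to +/- 2 m/s^2 (an assumed actuator constraint)."""
    accel = 0.5 * (target - position) - 1.5 * velocity
    return max(-2.0, min(2.0, accel))

def run_hil_loop(target=100.0, dt=0.1, steps=600):
    """Close the loop: simulated sensors feed the controller, whose commands
    drive the simulated actuators, step by step."""
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        accel = controller(position, velocity, target)
        position, velocity = simulate_plant(position, velocity, accel, dt)
    return position
```

In an actual hardware-in-the-loop setup, the `controller` call would be replaced by I/O to the physical component, while the plant model remains simulated.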
Together, these techniques help ensure that simulation results reliably reflect real-world behavior.
What role does the institute play in the advancement of autonomous and connected mobility systems?
Our researchers develop methods and tools to make testing of autonomous and connected mobility systems more realistic, reliable, and scalable. We place a strong emphasis on real-world datasets to create complex, data-driven test scenarios that accurately reflect actual driving conditions.
To reduce the perception gap, we use advanced techniques such as generative image translations and image perturbations to make synthetic data more realistic and simulate rare or challenging conditions.
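A minimal sketch of the image-perturbation idea, assuming NumPy and a simple noise-plus-brightness model. The function name and parameters are illustrative; the generative translations described above are far more sophisticated.

```python
import numpy as np

def perturb_image(img, noise_std=10.0, brightness=0.8, seed=None):
    """Apply Gaussian sensor noise and a brightness shift to a synthetic image.

    img: HxWx3 uint8 array; returns a uint8 array of the same shape.
    """
    rng = np.random.default_rng(seed)
    out = img.astype(np.float32) * brightness  # simulate darker lighting
    out += rng.normal(0.0, noise_std, size=img.shape)  # simulate sensor noise
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: darken a mid-gray frame and add mild sensor noise.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
noisy = perturb_image(frame, noise_std=5.0, brightness=0.7, seed=42)
```

Sweeping such parameters lets a test suite probe how a perception model degrades as conditions drift away from the clean synthetic baseline.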
Our testing systems also integrate connected infrastructure components, such as smart traffic lights and vehicle-to-infrastructure communication, to examine how autonomous systems interact with their environment. In the Mobility Lab, we conduct modular and end-to-end vehicle model tests on a platform that enables vehicle-in-the-loop and mixed-reality testing. This allows us to combine physical components with virtual environments to assess behavior under realistic conditions.
We are currently transferring these capabilities to a full-fledged testing platform that enables system evaluations in real-world scenarios, allowing us to validate our technologies at full scale.
What sets your approach to scenario-based and simulation-based testing apart, and what added value does it offer to the industry?
Our infrastructure covers the entire testing cycle, from early scenario creation and simulation to hardware-in-the-loop testing and real-world evaluation. We operate multiple simulation platforms, a certified full-fledged testing environment, and a smaller platform that supports mixed-reality and vehicle-in-the-loop testing. This allows us to conduct the full cycle of testing, validation, and refinement in-house, under controlled, reproducible conditions.
These platforms support a wide range of simulation environments, including CARLA, Udacity, BeamNG, and custom Unity-based simulators. Our testing facilities are fully compatible with widely used autonomous driving stacks like Autoware and Apollo. The Mobility Lab and all developed solutions are ROS-compatible, enabling seamless integration with various robotics and automotive systems.
Since our approach is modular and aligned with industry standards, it integrates seamlessly into existing development pipelines. This ensures that our methods not only reflect the latest research and cutting-edge technologies but are also practical and immediately applicable for industrial use.
What role does AI play in fortiss’s research, especially in testing and validating driving functions?
Generative AI is used to reconstruct real-world scenarios from data and automatically generate a wide and diverse range of new scenarios. This helps us test how autonomous systems respond to complex, rare, or edge-case conditions that are difficult to construct manually. We also use AI to translate simulator inputs and outputs, which enhances the realism of sensor data and bridges the perception gap between simulation and reality.
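As a simplified, hypothetical illustration of automated scenario generation, the sketch below samples randomized scenario variants from a few parameter ranges. Real generative scenario creation is far richer; the class, field names, and ranges here are assumptions for illustration only.

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    """A hypothetical, heavily simplified scenario description."""
    time_of_day: str         # lighting condition
    rain_intensity: float    # 0 = dry, 1 = heavy rain
    pedestrian_delay: float  # seconds until a pedestrian steps onto the road
    ego_speed: float         # km/h approach speed of the vehicle under test

def sample_scenarios(n, seed=0):
    """Draw n randomized scenario variants; the seed makes runs reproducible."""
    rng = random.Random(seed)
    times = ["dawn", "noon", "dusk", "night"]
    return [
        Scenario(
            time_of_day=rng.choice(times),
            rain_intensity=round(rng.uniform(0.0, 1.0), 2),
            pedestrian_delay=round(rng.uniform(0.5, 4.0), 2),
            ego_speed=round(rng.uniform(20.0, 60.0), 1),
        )
        for _ in range(n)
    ]
```

Fixing the seed is what turns a random sweep into a reproducible test set: the same variants can be replayed after every change to the system under test.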
In many of our projects, the AI systems themselves, such as perception, planning, and control models, are the main components being evaluated. We test both modular and end-to-end vehicle models to understand how they behave under varying and uncertain conditions.
Why are flexible and realistic test environments crucial for the safe development of automated driving technologies, and how are they designed at fortiss today?
At fortiss, we develop test environments that combine safety, realism, and flexibility to evaluate automated driving technologies under controlled but challenging conditions. In our Mobility Lab, we can conduct realistic small-scale tests with physical vehicles and infrastructure. This allows us to test the system's behavior in real-world environments while avoiding the risks and costs of testing on actual roads.
We also use mixed-reality techniques to combine virtual elements with real components. On our smaller platform, this allows us to simulate complex and safety-critical scenarios, such as a pedestrian suddenly crossing the street, without exposing people or hardware to real danger. These scenarios can be triggered and repeated consistently to thoroughly investigate the system's responses.
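The repeatable-trigger idea can be sketched in a few lines of Python. This is an illustrative simplification (the class and field names are assumptions): a virtual obstacle is injected into the real perception feed at the same moment in every run.

```python
from dataclasses import dataclass

@dataclass
class MixedRealityScenario:
    """Sketch of a repeatable mixed-reality trigger: at trigger_time seconds
    into a run, virtual objects are injected into the real detection stream."""
    trigger_time: float
    virtual_objects: list

    def merge(self, real_detections, t):
        """Return the detections the system under test sees at time t."""
        if t >= self.trigger_time:
            return list(real_detections) + list(self.virtual_objects)
        return list(real_detections)

# Usage: a virtual pedestrian appears 2.5 s into every run, identically each time.
scenario = MixedRealityScenario(
    trigger_time=2.5,
    virtual_objects=[{"type": "pedestrian", "x": 3.0, "y": 1.2}],
)
```

Because the trigger is purely virtual, the dangerous event is perfectly reproducible, while the rest of the sensor stream still comes from real hardware.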
In the near future, we will extend this capability to our certified full-fledged vehicle platform to enable highly precise tests of critical edge cases in realistic, connected driving environments.