FOCETA

FOundations for Continuous Engineering of Trustworthy Autonomy

Future autonomous systems will deploy AI-based components for increased performance. To enable broad adoption in safety-critical settings, they must be developed to high quality standards. FOundations for Continuous Engineering of Trustworthy Autonomy (FOCETA) takes an integrated approach to engineering trustworthy learning-enabled autonomous systems, combining the advantages of data-based and model-based techniques.

Project description

Applications are increasingly being developed based on complex autonomous systems driven by artificial intelligence. As smart robots start to replace humans in complicated or dangerous tasks on the road, in industry, or in hospitals, their safety, autonomy, and trustworthiness are of particular concern. This is due to the increasing complexity of deployments, especially those of learning-enabled systems, whose behavior is difficult to trace through continuous engineering.

The FOCETA project will develop the foundation for continuous engineering of trustworthy learning-enabled autonomous systems, integrating data-driven and model-based engineering. The new techniques, leveraging open-source tools and open data-exchange standards, will be validated through industrially relevant, highly demanding applications such as urban driving automation and intelligent medical devices, to prove viability, scalability, and safety.


FOCETA official video

Research contribution

FOCETA addresses the convergence of “data-driven” and “model-based” engineering, a convergence further complicated by the need to apply verification and validation incrementally and to avoid complete re-verification and re-validation efforts.

FOCETA’s paradigm is built on three scientific pillars:

  1. integration of learning-enabled components & model-based components via a contract-based methodology which allows incremental modification of systems including threat models for cyber-security,
  2. adaptation of verification techniques applied during model-driven design to learning components in order to enable unbiased decision making, and finally,
  3. incremental synthesis techniques unifying both the enforcement of safety & security-critical properties as well as the optimization of performance.
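To illustrate the first pillar, the sketch below shows a minimal assume/guarantee contract in Python. This is not FOCETA's actual methodology or tooling; the `Contract` class, the speed controller, and all numeric bounds are hypothetical, chosen only to show how a contract lets a learning-enabled component be checked, and later replaced, against an explicit interface rather than its internals.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    """An assume/guarantee contract over a component's input and output."""
    assumption: Callable[[float], bool]        # constraint on the environment/input
    guarantee: Callable[[float, float], bool]  # promised input/output relation

    def satisfied_by(self, inp: float, out: float) -> bool:
        # A contract is vacuously satisfied when its assumption does not hold.
        return (not self.assumption(inp)) or self.guarantee(inp, out)

# Hypothetical contract for a learned speed controller: if the measured
# speed is within sensor range (m/s), the commanded acceleration must
# stay within physical actuator bounds (m/s^2).
controller_contract = Contract(
    assumption=lambda speed: 0.0 <= speed <= 60.0,
    guarantee=lambda speed, accel: -8.0 <= accel <= 3.0,
)

print(controller_contract.satisfied_by(25.0, 1.5))  # → True  (in-range behaviour)
print(controller_contract.satisfied_by(25.0, 9.0))  # → False (guarantee violated)
```

Because only the contract is checked at the interface, a retrained or modified component can be re-verified against the same contract without re-validating the rest of the system, which is the incremental-modification idea the pillar describes.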

The fortiss team will develop methods for system-level testing, applying unsupervised learning to cluster data and to identify safety-relevant edge and corner cases. Furthermore, fortiss will contribute to constructing a rigorous and scalable distributed framework for generating safety and security workflows, and to continuously curating the corresponding assurance cases for the FOCETA industrial use cases.
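As a rough sketch of the corner-case idea, the snippet below flags logged driving scenarios whose feature vectors lie unusually far from the bulk of the data. This is an illustrative stand-in, not fortiss's actual method: the feature choice (speed, pedestrian distance), the distance-to-centroid statistic, and the threshold are all assumptions made for the example.

```python
import math

def corner_case_candidates(scenarios, threshold=2.0):
    """Return indices of scenario feature vectors lying unusually far
    from the data centroid; such outliers are candidate edge/corner
    cases worth prioritising in system-level testing."""
    n, dim = len(scenarios), len(scenarios[0])
    centroid = [sum(s[i] for s in scenarios) / n for i in range(dim)]
    dists = [math.dist(s, centroid) for s in scenarios]
    mean = sum(dists) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / n)
    return [i for i, d in enumerate(dists) if d > mean + threshold * std]

# Hypothetical logged (speed in km/h, pedestrian distance in m) pairs;
# the last scenario is anomalous: high speed, pedestrian very close.
logs = [(30, 12), (31, 11), (29, 13), (30, 12), (90, 1)]
print(corner_case_candidates(logs, threshold=1.0))  # → [4]
```

In practice such outliers would feed back into the test-generation loop, so that rare but safety-relevant situations are exercised rather than averaged away.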

Project duration

01.10.2020 – 30.09.2023

Your contact

Dr. Holger Pfeifer

+49 89 3603522 29
pfeifer@fortiss.org
