Safe and reliable artificial intelligence for cognitive, autonomous systems
The central research question at the interface of AI and software development, for the successful use of AI technologies in embedded and critical enterprise systems, is: how can AI-enhanced, and in particular continuously self-learning, software systems be developed reliably and safely? The open research issues in dependable AI engineering span all phases of conventional engineering: specification, architecture design, implementation, and testing and verification, as well as the data-driven adaptation and optimization of AI systems and the dynamic certification of critical learning enterprise systems during operation. The fortiss research activities focus primarily on the following aspects:
What we need are dependable AI systems that make timely and reliable decisions in uncertain and unpredictable environments, withstand targeted attacks, and process ever-larger volumes of enterprise and organizational data without compromising data confidentiality and privacy. Key provisions of the EU General Data Protection Regulation (GDPR) can then be fulfilled reliably and transparently when AI algorithms process personal data.
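One widely studied technique for processing personal data without exposing individual records is differential privacy. The sketch below is purely illustrative and is not drawn from fortiss' own work: it shows a differentially private counting query using the Laplace mechanism, where the `dp_count` helper, the example data, and the choice of epsilon are all assumptions for demonstration.

```python
import math
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) noise via inverse-CDF sampling.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical personal data: ages of individuals in a dataset.
ages = [34, 29, 41, 52, 38, 27, 45]
noisy_count = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Each query returns the true count (here 3) perturbed by random noise, so no single individual's presence in the dataset can be inferred from the released statistic; smaller epsilon values give stronger privacy at the cost of accuracy.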