Project call

Adaptive policies for secure and trustworthy AI agents

Artificial intelligence is becoming increasingly important in enterprises, particularly for automation, decision support, and data-driven processes. This raises the central question of how AI agents can act reliably, transparently, and in compliance with applicable rules and regulations. This project addresses that challenge by developing an adaptive framework that automatically generates, verifies, and enforces policies.

Project Overview

The project develops an adaptive framework that enables AI agents to operate securely, transparently, and in a compliant manner. The objective is to automatically generate, validate, and enforce policies so that agent behavior consistently aligns with user intent, organizational requirements, and legal regulations.

Core Activities

  • Just-in-Time Policy Generation: Automatic generation and verification of context-aware policies using large language models (LLMs) and formal methods such as Answer Set Programming (ASP).
  • Runtime Enforcement & Verification: Real-time monitoring of AI agents to ensure continuous compliance and prevent undesired or harmful behavior (a simplified enforcement sketch follows this list).
  • Requirements-Driven Intent Capture: Precise capture of user goals and their translation into enforceable policies.
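
The following is a minimal, illustrative sketch of the runtime-enforcement idea only, not the project's actual framework: all names (Action, PolicyRule, enforce) and the deny-by-default strategy are assumptions chosen for the example. In the project itself, such rules would be generated just in time (e.g., by LLMs) and verified with formal methods such as ASP, rather than hand-written as shown here.

```python
# Illustrative sketch of runtime policy enforcement for an AI agent.
# All names are hypothetical; they do not reflect the project's framework.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    """An action the agent wants to perform, e.g. a tool call."""
    name: str
    resource: str


@dataclass
class PolicyRule:
    """A single enforceable rule: a predicate over actions plus a verdict."""
    description: str
    applies_to: Callable[[Action], bool]
    allow: bool


def enforce(action: Action, rules: List[PolicyRule]) -> bool:
    """Check an agent action against all matching rules before execution.

    Deny-by-default: the action is allowed only if at least one matching
    rule permits it and no matching rule forbids it.
    """
    matching = [r for r in rules if r.applies_to(action)]
    if any(not r.allow for r in matching):
        return False                       # an explicit deny always wins
    return any(r.allow for r in matching)  # otherwise require an explicit allow


# Example policy: the agent may read the customer database but never export it.
rules = [
    PolicyRule("reads on customer_db are allowed",
               lambda a: a.name == "read" and a.resource == "customer_db",
               allow=True),
    PolicyRule("exports of customer_db are forbidden",
               lambda a: a.name == "export" and a.resource == "customer_db",
               allow=False),
]

print(enforce(Action("read", "customer_db"), rules))    # True
print(enforce(Action("export", "customer_db"), rules))  # False
```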

Objectives

The research project delivers a prototype framework that enables organizations to deploy trustworthy, auditable, and regulation-compliant AI agents. By combining fortiss’ expertise with the domain-specific knowledge of the project partners, this initiative strengthens regional innovation capacity in Bavaria and fosters collaboration between research and industry.

Leverage Strategic Opportunities for Your Innovation Project

  • Application Deadline: March 31, 2026
  • Expected Project Start: To be determined
  • Project Duration: 2 years

Register now to learn more

Information at a glance

Target group

We are seeking an SME partner with a use case involving a highly autonomous AI agent. The agent should already be in use, or planned for use, in a critical domain where failures such as data leakage or misuse could have serious consequences. Productive deployment is not required, as the project aims to develop a proof of concept.

The ideal partner should

• provide a use case involving highly autonomous AI agents.

• operate in or target safety-critical, mission-critical, or regulated environments (e.g., industrial systems, healthcare, mobility, energy, critical infrastructure, finance, public services).

• have sufficient technical and organizational capacity to collaborate in a research project, including access to relevant systems, data, or expertise.

Your benefit

The partner will gain early access to the developed framework and build expertise by applying it to their own use case. Additional benefits include collaboration with fortiss on cutting-edge research, influence on emerging AI standards, and opportunities for joint publications, demonstrators, and potential follow-up funding.

Your contact

Jana Kümmel

+49 89 3603522 146
kuemmel@fortiss.org

Your contact

Radouane Bouchekir

+49 89 3603522 262
bouchekir@fortiss.org

Would you like to participate in the project?

Then please register using the form on the right. We will promptly provide you with detailed information about the next steps.
