GRATA

GraphRAG-based training and education system for robot-assisted medical procedures

The GRATA project is developing a modular education and training system for robot-assisted surgery. Generative AI (GenKI) base models are being extended into medical language models and combined with semantic knowledge models to generate instructions and dialogues. The system collects sensor and robot data, checks procedural workflows, provides adaptive feedback, and is being validated on ophthalmological procedures.

Project description

The objective of the GRATA project is to develop a generic, modular education and training system that addresses the specific requirements of robot-assisted surgery. Using the example of the treatment of age-related macular degeneration (AMD), for which a sharp increase in cases is expected worldwide, a system is being created that specifically trains medical personnel in the safe, precise, and efficient use of surgical robots. This covers not only the actual operation but also structured preparation and follow-up, as well as a clear distribution of roles in the operating room.

The biggest challenges are to map the complexity of robot-assisted procedures using uniform semantic knowledge models, to enable safe and intuitive interaction with language models, and to ensure reliable monitoring and adaptive feedback through sensor integration. At the same time, training environments must be created that offer realistic training situations – without endangering patients – and ensure continuous qualification despite the limited availability of robotic systems.

With this holistic approach, GRATA aims to reduce training costs, increase efficiency, and sustainably improve patient safety in highly sensitive surgical procedures.

Research contribution

fortiss is making a significant contribution to the GRATA project by developing semantic knowledge models for mapping and controlling complex, robot-assisted eye surgeries. The central task is to create ontologies that define a structured vocabulary for surgical processes, roles, and interaction objects. Based on this, fortiss is developing a detailed surgical process model and a semantic representation of the entire surgical environment in the form of a scene graph that links static and dynamic information.
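To make the idea of such a scene graph more concrete, the following minimal sketch shows how static scene elements (robot, instruments) and dynamic information (process phase, observed actions) could be linked as RDF triples and queried with rdflib. It is an illustration only; the namespace, class names, and instances are hypothetical placeholders, not the project's actual ontology or process model.

```python
# Minimal sketch only: a fragment of a surgical scene graph expressed as RDF
# triples with rdflib. The namespace, classes, and instances below are
# hypothetical placeholders, not the project's actual ontology.
from rdflib import Graph, Literal, Namespace, RDF

GRATA = Namespace("http://example.org/grata#")  # hypothetical namespace

g = Graph()
g.bind("grata", GRATA)

# Static scene knowledge: a surgical robot and the instrument it holds
g.add((GRATA.SurgicalRobot1, RDF.type, GRATA.SurgicalRobot))
g.add((GRATA.Forceps1, RDF.type, GRATA.Instrument))
g.add((GRATA.SurgicalRobot1, GRATA.holds, GRATA.Forceps1))

# Dynamic information: current process phase and an observed personnel action
g.add((GRATA.Procedure1, RDF.type, GRATA.AMDTreatment))
g.add((GRATA.Procedure1, GRATA.currentPhase, GRATA.Preparation))
g.add((GRATA.Action1, RDF.type, GRATA.InstrumentHandover))
g.add((GRATA.Action1, GRATA.performedBy, GRATA.ScrubNurse1))
g.add((GRATA.Action1, GRATA.partOf, GRATA.Procedure1))
g.add((GRATA.Action1, GRATA.observedAt, Literal("2025-10-01T09:15:00")))

# Query the graph: which actions were observed for this procedure, and by whom?
results = g.query("""
    SELECT ?action ?actor WHERE {
        ?action grata:partOf grata:Procedure1 ;
                grata:performedBy ?actor .
    }
""")
for action, actor in results:
    print(action, actor)
```

Keeping static structure (instruments, roles) and dynamic observations (phases, actions) in one graph is what allows a single query interface over the whole surgical scene.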

A particular focus is on the integration of sensor data for recognizing the actions of medical personnel. In addition, fortiss is supporting the development of a GraphRAG system for natural language interaction with the knowledge graph and other data, information, and knowledge sources. To complement this, research is being conducted into a framework for automated instruction generation and a dialogue system for context-dependent questions from trainees.
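As a rough illustration of the GraphRAG idea, the sketch below selects knowledge-graph facts that match a trainee's question and assembles them into a grounded prompt for a language model. The triples and the simple term-overlap heuristic are hypothetical stand-ins for the project's actual knowledge graph and retrieval method, which are not specified here.

```python
# Illustrative sketch only: a strongly simplified GraphRAG-style retrieval step.
# Knowledge-graph facts whose terms overlap with a trainee's question are
# selected and assembled into a grounded prompt for a language model. The
# triples and the term-overlap heuristic are hypothetical stand-ins for the
# project's actual knowledge graph and retrieval method.
from typing import List, Tuple

Triple = Tuple[str, str, str]

KNOWLEDGE_GRAPH: List[Triple] = [
    ("IntravitrealInjection", "hasPhase", "Preparation"),
    ("IntravitrealInjection", "hasPhase", "Injection"),
    ("Preparation", "requiresInstrument", "LidSpeculum"),
    ("Injection", "performedBy", "Surgeon"),
    ("SurgicalRobot", "assistsWith", "NeedlePositioning"),
]


def retrieve_facts(question: str, graph: List[Triple], top_k: int = 3) -> List[Triple]:
    """Score triples by term overlap with the question and return the best matches."""
    terms = {t.lower().strip("?.,") for t in question.split()}
    scored = []
    for s, p, o in graph:
        overlap = sum(1 for part in (s, p, o) if part.lower() in terms)
        if overlap:
            scored.append((overlap, (s, p, o)))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [triple for _, triple in scored[:top_k]]


def build_prompt(question: str, facts: List[Triple]) -> str:
    """Turn retrieved facts into context that grounds the language model's answer."""
    context = "\n".join(f"- {s} {p} {o}" for s, p, o in facts)
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer using only the facts above."


question = "Which instrument is required in the preparation phase?"
print(build_prompt(question, retrieve_facts(question, KNOWLEDGE_GRAPH)))
```

In a full GraphRAG pipeline, the term-overlap step would be replaced by graph traversal or embedding-based retrieval over the scene graph, but the principle of grounding generated instructions and dialogue answers in retrieved facts stays the same.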

In this way, fortiss is creating the basis for an intelligent, modularly expandable training and assistance system that can be transferred beyond ophthalmology.

Project duration

01.10.2025 - 30.09.2028

Your contact

Alexander Perzylo

+49 89 3603522 531
perzylo@fortiss.org

Project partner