FLA

End-to-end, pervasive, and continuous compliance checking for Federated Learning

Compliance-by-design for Federated Learning (FL) based on multi-faceted & multi-perspective assurance.

Project description

Federated Learning (FL) allows knowledge to be shared while keeping data private. Multiple actors collaboratively train models by applying privacy-preserving machine learning techniques, without having to share the raw data used for training.
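
As a minimal sketch of this setup, the following example shows one round of federated averaging: each client trains on its own private data, and only model weights, never raw data, travel to the server for aggregation. The logistic-regression model, function names, and synthetic data are illustrative assumptions, not the project's implementation.

```python
# Minimal federated averaging (FedAvg) sketch; all names and data are illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (plain logistic regression on its private data)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server-side aggregation: only weights travel, raw data stays with the clients."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # Weighted average of the client models, proportional to local dataset size
    return np.average(updates, axis=0, weights=sizes)

# Toy usage with synthetic data for two clients
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)) for _ in range(2)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
```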

However, FL can still be vulnerable to communication intercepts or to leakage of private data through inference attacks. This is of particular relevance to highly regulated domains, where the trustworthiness of FL is critical to its practical introduction. Trustworthiness generally includes explicit information about the data, such as provenance or bias, and about its processing, such as consent, explainability, or fairness. From a legal perspective, trustworthiness is linked to lawfulness and compliance, which makes it necessary to assure compliance for each participant as well as for the overall federated model. Such assurance involves checking FL at design time and at run time, and mitigating the associated risks.

In the project Federated Learning Architecture (FLA), fortiss designs an FL system that is built on a privacy-preserving architecture and integrates cutting-edge privacy-enhancing techniques across all stages. These include differential privacy and homomorphic encryption for learning models, practical anonymization, and tamper-proof record-keeping via a distributed ledger. Moreover, the system provides multi-faceted, multi-perspective, pervasive, and end-to-end formal guarantees for compliance based on a knowledge graph.
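
To give a flavour of one of these building blocks, the sketch below shows how a client's model update could be sanitized with differential privacy before leaving the client: the update is clipped to a maximum L2 norm and calibrated Gaussian noise is added. The parameter values and function names are assumptions for illustration, not FLA's actual configuration; homomorphic encryption, anonymization, and the distributed ledger are not shown.

```python
# Hedged sketch of the Gaussian mechanism applied to a client's model update.
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the update to a maximum L2 norm, then add calibrated Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    clipped = update * scale
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# A client would apply this to its weight update before sending it to the server.
noisy_update = dp_sanitize(np.array([0.8, -2.3, 0.1, 1.5]))
```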

fortiss evaluates its research on the use case of collaboratively training a feedback text classifier. The feedback is natural-language text provided by users of online public administration services. The task at hand is to assign each piece of feedback to the appropriate class so that it reaches the responsible department in German public administration.
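
In its simplest centralized form, such a classifier could be a TF-IDF model with a linear classifier, as sketched below. The example feedback texts and department labels are invented for illustration, and in the project this training would happen federatedly across the participating parties.

```python
# Illustrative feedback classifier; texts and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The online form for registering my new address did not load.",
    "How long does it take to renew a passport?",
    "The parking permit portal rejected my payment.",
]
labels = ["registration_office", "passport_office", "traffic_department"]

# TF-IDF features plus logistic regression, trained and queried as one pipeline
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["My passport renewal request shows an error."]))
```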

Research contribution

In this project, fortiss contributes with three main results:

  • a set of architectural patterns for privacy-by-design FL,
  • a method for multi-faceted and multi-perspective assurance based on formalized claims,
  • a toolchain and a library of claims around the EU AI Act and the GDPR for applying the developed method in practical use cases (a sketch of such a claim follows this list).
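
The following sketch illustrates what a formalized compliance claim could look like: a regulation reference paired with a machine-checkable predicate over system metadata. The fields, example regulation article, and check logic are assumptions for illustration, not the project's actual claim library or toolchain.

```python
# Hypothetical structure for a formalized compliance claim; not the FLA library.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ComplianceClaim:
    claim_id: str
    regulation: str                    # e.g. "GDPR Art. 5(1)(c)"
    description: str
    check: Callable[[Dict], bool]      # predicate evaluated against system metadata

data_minimisation = ComplianceClaim(
    claim_id="gdpr-data-minimisation",
    regulation="GDPR Art. 5(1)(c)",
    description="Only features required for the stated purpose are processed.",
    check=lambda meta: set(meta["features_used"]) <= set(meta["features_approved"]),
)

metadata = {"features_used": ["text"], "features_approved": ["text", "timestamp"]}
print(data_minimisation.check(metadata))  # True -> the claim holds for this snapshot
```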

Funding

Project duration

01.01.2023 - 31.07.2023

Your contact

Mahdi Sellami

+49 89 3603522 171
sellami@fortiss.org

Project partner