Privacy and Trustworthy AI

Trustworthy media technologies

Fraunhofer IDMT focuses on the development, use and integration of tools that promote privacy, security, robustness, transparency, explainability and fairness in data-driven applications, especially those dealing with media technologies and media content. We aim to provide a technology toolbox that helps our partners and clients integrate “trust” into their applications and products.

News and upcoming events

 

New project

AVATAR

Privacy Enhancing Technologies for an avatar-based health data management system

 

Trade fair / 10.10.2023

DA3KMU at it-sa 2023

Fraunhofer IDMT presents a software prototype at it-sa 2023 and invites SMEs to beta testing

 

New project / 9.5.2023

Data protection for biosignals

The »NEMO« project is exploring anonymization techniques, using the example of electroencephalograms (EEG)

The balance between trust and data analysis

Privacy Enhancing Technologies (PETs) and Trustworthy AI are critical to protect personal and business-critical information, to address legal requirements, and to promote fairness, transparency, robustness and security in today's data-driven applications and systems.

Some believe that trust and data utility are mutually incompatible; others believe that regulation alone is sufficient to solve all problems, or, conversely, that regulation is not needed at all. In our view, we need both regulation and innovation, and we should aim for solutions that build trust into data-driven systems and AI. The way to do this is

  • to understand the specific requirements for a given application,
  • to understand the potential trade-offs between utility and trust aspects involved,
  • and to know, use and adapt technologies to achieve an optimal trade-off for a given application.

Privacy and trust in the age of AI

When it comes to protecting privacy in data analysis, Privacy-Preserving Data Publishing (PPDP) tools such as data anonymization and pseudonymization, which are based on removing or altering data (e.g. via suppression or generalization), have long played a key role and remain important for many applications. In recent years, however, the need to handle large amounts of high-dimensional, complex data, combined with the advanced analysis capabilities of AI, has led to the emergence of Privacy-Preserving AI (PPAI) tools. These are meant to address the amplified risks of unintended exposure of sensitive data in the context of AI, including inference attacks (attacks that analyze data or models to gain knowledge about a subject) and model poisoning (attacks that manipulate data in order to influence or corrupt a model).
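
To make the PPDP idea more concrete, here is a minimal sketch in Python that generalizes quasi-identifiers and suppresses records that would remain unique. The field names, the toy data and the k-anonymity threshold are purely illustrative; this is not one of the tools described on this page.

    from collections import Counter

    def generalize(record):
        # Coarsen quasi-identifiers: bucket the age into decades, truncate the ZIP code
        low = (record["age"] // 10) * 10
        return {
            "age": f"{low}-{low + 9}",
            "zip": record["zip"][:3] + "**",
            "diagnosis": record["diagnosis"],  # sensitive attribute kept as-is
        }

    def k_anonymize(records, k=2):
        # Generalize all records, then suppress groups smaller than k
        generalized = [generalize(r) for r in records]
        counts = Counter((r["age"], r["zip"]) for r in generalized)
        return [r for r in generalized if counts[(r["age"], r["zip"])] >= k]

    data = [
        {"age": 34, "zip": "98052", "diagnosis": "A"},
        {"age": 36, "zip": "98051", "diagnosis": "B"},
        {"age": 61, "zip": "10115", "diagnosis": "C"},  # unique after generalization -> suppressed
    ]
    print(k_anonymize(data, k=2))

In practice, the choice of quasi-identifiers, generalization hierarchies and k is application-specific and has to be balanced against data utility.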

These tools include, for instance, Differential Privacy, a technique that adds noise to the output of computations to protect sensitive data; Homomorphic Encryption, which allows for computations over encrypted data; Secure Multi-party Computation, which allows multiple parties to jointly compute a function over their inputs while keeping those inputs private; and Secure Federated Learning, a decentralized machine learning approach where models are trained across multiple participants so that the raw data remain with the participants.
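
As a small illustration of one of these techniques, the sketch below applies the Laplace mechanism of Differential Privacy to a simple counting query with sensitivity 1. The data and the epsilon value are illustrative, and this is not a specific Fraunhofer IDMT implementation.

    import numpy as np

    def dp_count(values, predicate, epsilon=1.0):
        # Counting query with sensitivity 1: add Laplace noise with scale 1/epsilon
        true_count = sum(1 for v in values if predicate(v))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    ages = [34, 36, 61, 29, 45, 52]
    # Noisy answer to "how many people are older than 40?"
    print(dp_count(ages, lambda a: a > 40, epsilon=0.5))

Smaller values of epsilon mean more noise and thus stronger protection, at the cost of accuracy – exactly the kind of utility/trust trade-off described above.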

Beyond privacy, further aspects of trustworthy AI that we deal with at Fraunhofer IDMT include

  • security and adversarial robustness: providing resilience of AI tools against deceptive inputs and ensuring authenticity (this is closely related to the topic of media forensics)
  • transparency and explainability: making input data and AI processes understandable, thereby fostering trust and effective interaction
  • bias and fairness: reducing sample bias and other biases while being aware of the trade-offs involved (reducing certain biases may increase others); this also includes the interaction between machine and human biases when technologies are applied, e.g. the question of how to reduce filter bubble effects created by recommendation systems, which amplify confirmation bias (a simple fairness check is sketched after this list)
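
As a small illustration of the fairness aspect, the following sketch computes the demographic parity difference of binary decisions for two groups. The decisions and group labels are hypothetical, and this is only one of many possible fairness metrics, not an IDMT component.

    def demographic_parity_difference(decisions, groups):
        # Absolute difference of positive-decision rates between exactly two groups
        rates = {}
        for g in set(groups):
            member_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
            rates[g] = sum(member_decisions) / len(member_decisions)
        a, b = sorted(rates)
        return abs(rates[a] - rates[b])

    decisions = [1, 0, 1, 1, 0, 0, 1, 0]                   # 1 = positive outcome, e.g. "recommended"
    groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]   # hypothetical group membership
    print(demographic_parity_difference(decisions, groups))  # 0.75 vs. 0.25 -> 0.5

Which metric is appropriate, and which trade-offs are acceptable, again depends on the specific application.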

How to proceed

Understanding the specific requirements for a given application, selecting and adapting the necessary tools, and systematic evaluation are key to achieving the aforementioned goals. We follow a by-design approach, which means that privacy, security, transparency, fairness, and robustness considerations are taken into account from the very beginning of the development process rather than being applied retroactively, thus reducing risks and costs.

 

Research project

DA3KMU

Adaptive anonymization of log data for SIEM analysis and health data

 

Research project

TRA-ICT

Secure and privacy-aware acoustic monitoring

 

Research project

SEC-Learn

Secure Federated Learning for Audio Event Detection

 

Research project

AI4Media

Center of excellence for AI in media: Robust audio forensics, Privacy, Recommendation

Research project

MuSEc

Audio Analysis and PET for the MusicDNA Sustainable Ecosystem

 

Research project

AVATAR

Privacy Enhancing Technologies for an avatar-based health data management system

 

Research project

NEMO

EEG re-identification analysis and anonymization

Research project

InfarctCare

Enhancing acute heart attack care via intelligent data use and interoperability

Products

SecFed Framework

Technologies for secure federated learning that can be adapted for specific needs.

PETools

Privacy Enhancing Tools (PET) for anonymization and pseudonymization, which can be adapted for specific needs.

Trustworthy AI – R&D, Consulting and Evaluation

  • Customized research and development
  • Consulting and evaluation of components and systems regarding privacy and trust aspects (adaptation, development, or evaluation of technologies)

Publications

Year / Title and Author / Publication Type
2023  KI-basiertes akustisches Monitoring: Herausforderungen und Lösungsansätze für datengetriebene Innovationen auf Basis audiovisueller Analyse
Aichroth, Patrick; Liebetrau, Judith
Book Article

2022  SEC-Learn: Sensor Edge Cloud for Federated Learning
Aichroth, Patrick; Antes, Christoph; Gembaczka, Pierre; Graf, Holger; Johnson, David S.; Jung, Matthias; Kämpfe, Thomas; Kleinberger, Thomas; Köllmer, Thomas; Kuhn, Thomas; Kutter, Christoph; Krüger, Jens; Loroch, Dominik M.; Lukashevich, Hanna; Laleni, Nelli; Zhang, Lei; Leugering, Johannes; Martín Fernández, Rodrigo; Mateu, Loreto; Mojumder, Shaown; Prautsch, Benjamin; Pscheidl, Ferdinand; Roscher, Karsten; Schneickert, Sören; Vanselow, Frank; Wallbott, Paul; Walter, Oliver; Weber, Nico
Conference Paper

2021  Wertschöpfung durch Software in Deutschland
Aichroth, Patrick; Bös, Joachim; Sladeczek, Christoph; Bodden, Eric; Liggesmeyer, Peter; Trapp, Mario; Howar, Falk; Otto, Boris; Rehof, Jakob; Spiekermann, Markus; Arzt, Steven; Steffen, Barbara; Nouak, Alexander; Köhler, Henning
Report

2020  Vertrauenswürdige KI im Medienkontext
Aichroth, Patrick; Lukashevich, Hanna
Journal Article

2020  Anonymisierung und Pseudonymisierung von Daten für Projekte des maschinellen Lernens
Aichroth, Patrick; Battis, Verena; Dewes, Andreas; Dibak, Christoph; Doroshenko, Vadym; Geiger, Bernd; Graner, Lukas; Holly, Steffen; Huth, Michael; Kämpgen, Benedikt; Kaulartz, Markus; Mundt, Michael; Rapp, Hermann; Steinebach, Martin; Sushko, Yurii; Swarat, Dominic; Winter, Christian; Weiß, Rebekka
Book

2019  Hybride, datenschutzfreundliche Empfehlungssysteme - Mehr als nützlich
Aichroth, Patrick
Journal Article

2018  Selective face encryption in H.264 encoded videos
Aichroth, P.; Gerhardt, C.; Mann, S.
Conference Paper

2015  Benefits and Pitfalls of Predictive Policing
Aichroth, Patrick; Schlehahn, E.; Mann, S.; Schreiner, R.; Lang, U.; Shepherd, I.D.H.; Wong, B.L.W.
Conference Paper

2015  Bridging the gap between privacy requirements and implementation with semantic privacy modeling and a privacy technology framework
Lang, Ulrich; Davis, Mike; Schreiner, Patrick; Aichroth, Patrick; Mann, Sebastian
Journal Article

This list has been generated from the publication platform Fraunhofer-Publica