Media Forensics

Trustworthy media content

Fraunhofer IDMT conducts research and development on media forensics technologies to analyze, detect and localize manipulation, decontextualization, and fabrication in media content (audio, visual, text). The work combines techniques and competencies from signal analysis, machine learning, reasoning, and IT security. The goal is to support journalists, law enforcement, media platforms, and others in the process of content verification, and thereby to help avoid the potential negative impacts of misinformation, deepfakes, and other malicious uses of manipulated media.

News and upcoming events

 

News / 7.8.2024

AES Best Paper Award

Luca Cuccovillo, Patrick Aichroth and Thomas Köllmer honored for best paper at the AES International Conference on Audio Forensics 2024.

 

Event / 13.9.2024

Meet us at IBC 2024

At IBC 2024 we will present the Audio Forensics Toolbox for content verification and authentication.

 

Article / 21.6.2024

dpa Fact Checking

Article about an audio analysis conducted for dpa on fan chants during a European Championship match.

Disinformation as a challenge

Thanks to the availability of ever-growing amounts of content, low-cost editing tools, advanced synthesis techniques for content generation, and an abundance of distribution and communication channels, the creation and distribution of disinformation in all forms has become cheap and easy, and increasingly common.

Obvious forms of disinformation include decontextualization, i.e. presentation of authentic material in a misleading or inappropriate context; manipulation, i.e. modification of existing material; and fabrication, where material is made up from scratch. Two terms that are commonly used in this context are

  • Shallowfakes/cheapfakes, a term that refers to media content created through transformation or editing of genuine content, for example through deletion, splicing or doctoring, with the aim of manipulation or decontextualization. Until now, most fakes have fallen into this category, as they are simple to create and can nevertheless be very effective and convincing.
  • Deepfakes, a term that refers to media content that is fabricated using AI. These remain less common than shallowfakes/cheapfakes, but the ease of use, pervasiveness and availability of the technologies to create them improve by the day, and it is clear that they pose a serious challenge for disinformation detection.

Content verification – the search for truth

The process of content verification can be considered a "search for truth" that applies falsification, similar to how scientific theories and hypotheses are (or should be) tested according to "critical rationalism": to answer an overall question (typically something along the lines of "does the material at hand capture a real event and its appropriate context?"), there must be falsifiable claims about the material. For an audio recording, for instance, such claims could look like this:

"This file was recorded on Dec 6, 2022 in Amsterdam, NL, using an iPhone 6 and its standard recording app. The recording was not processed afterwards. The SHA-512 hash of the original file is 9f86d081 … The file was uploaded to cloud service XYZ… no transcoding or other modifications were applied."

Content verification is the process of testing such claims (which can also be implicit) against facts and findings, using human assessment and various tools. The more claims are provided and verified (i.e. not rejected), and the "richer" they are, the more trustworthy the content. However, human perception and speed are limited with respect to the testing required; our goal is therefore to develop approaches and tools that support this process in the best possible manner. The objective is to provide solutions and methods that support verification, including
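The hash statement in the example claim above is the simplest of these claims to test mechanically. A minimal sketch in plain Python (not part of IDMT's tooling; the function name and the prefix comparison for the truncated hash are our own illustration):

```python
import hashlib

def verify_hash_claim(path: str, claimed_sha512_prefix: str) -> bool:
    """Test one falsifiable claim: does the file's SHA-512 hash match
    the hash stated in the claim? A mismatch falsifies the claim that
    this file is the unmodified original."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        # Read in chunks so large media files need not fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    # Compare against the (possibly truncated) claimed hash prefix.
    return h.hexdigest().startswith(claimed_sha512_prefix.lower())
```

If the computed hash does not start with the claimed value, the claim that this is the unmodified original file is falsified; a match, by contrast, does not verify the other claims (location, device, date), which require separate tests.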

  • Technologies for the analysis of acquisition, editing and synthesis traces within A/V material, to understand whether and how it was recorded, encoded, edited or synthesized, and to use this for the falsification process, especially for the detection and localization of manipulation and synthesis.
  • Technologies for content provenance analysis, i.e. detecting relationships between A/V content items, to understand whether and how they were reused and transformed, and in which order they were created (including the detection of "root" items).
  • Technologies for automatic annotation of A/V material, e.g. acoustic scene classification and event detection, to quickly research material relevant for content verification (e.g. for a specific event or a specific person), or to retrieve information about recording circumstances that can be used for the verification process.

We focus primarily on broad technological coverage for the audio domain and collaborate with other organizations that specialize in other tasks and modalities. The aim is to provide a comprehensive set of tools that can enhance and accelerate the verification of content.

In addition, our research also includes the development of technologies for active media authentication, based on a combination of digital signatures and signal analysis. The idea is that content providers can proactively sign and "mark" content and related metadata, including synthetic content, so that other stakeholders can check its authenticity afterwards. (Passive) falsification and (active) authentication each have distinct advantages and disadvantages, but they are not mutually exclusive: on the contrary, they are complementary and should be considered and used together wherever possible.
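The core idea of signing content together with its metadata can be sketched as follows. A real deployment would use asymmetric digital signatures (a private signing key at the provider, a public verification key for everyone else); the sketch below substitutes an HMAC with a shared key purely to stay within the Python standard library, and all names are illustrative, not IDMT's actual implementation:

```python
import hashlib
import hmac

def sign_content(media_bytes: bytes, metadata: str, key: bytes) -> str:
    """Bind the media bytes and their metadata into one signature.
    (HMAC stands in for the asymmetric signature a real system would
    use, so a single shared key replaces a key pair here.)"""
    payload = hashlib.sha512(media_bytes).digest() + metadata.encode("utf-8")
    return hmac.new(key, payload, hashlib.sha512).hexdigest()

def check_authenticity(media_bytes: bytes, metadata: str,
                       key: bytes, signature: str) -> bool:
    """Recompute the signature over content and metadata; any change
    to either (including a stripped 'synthetic' label) makes it fail."""
    expected = sign_content(media_bytes, metadata, key)
    return hmac.compare_digest(expected, signature)
```

Because the metadata is signed together with the content hash, removing a "synthetic content" label, or attaching it to other media, invalidates the signature, which is exactly the property active authentication relies on.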

 

How to proceed

Media forensics research spans various disciplines, including signal analysis and machine learning, but also security and adversarial thinking. At Fraunhofer IDMT, we address the topic with a focus on audio and a combination of signal analysis and machine learning, which offer specific and somewhat complementary advantages and disadvantages regarding interpretability/explainability, robustness, and performance across tasks.

We believe that there are particular challenges in media forensic research that need to be addressed:

  • The need to design and develop technologies that users can work with, enabling them to make the best possible decisions within the verification process; this also includes addressing trust aspects, especially explainability, bias (most importantly sample bias, by ensuring suitable selection of training data wherever applicable), adversarial robustness and generalizability, all of which need to be supported by systematic evaluation.
  • The need to establish a "falsification culture" which ensures that whoever provides content to be verified also provides enough information to enable a proper verification process.
  • The need to cooperate with many other disciplines related to disinformation analysis, including textual analysis, visual analysis, social network analysis, legal analysis, and others; similarly, disinformation analysis needs to be understood as a complex interplay between technology, market, law and norms.
  • The understanding that media forensics is a cat-and-mouse game, which requires continuous research and development and sustainable business models as well as cooperation between companies and research institutions – financing such activities solely through publicly funded projects will not be enough.
  • The understanding that not only machine bias, but also human bias is a topic for content verification, and that organizational and technical measures are needed to address this.

The projects REWIND (EU) and AudioTrust+ (TMWWDG) aimed at developing a core set of technologies for audio forensics, providing a technological basis for the research domain. These technologies were then adapted and optimized in DIGGER (Google DNI), particularly with regard to user-friendliness and verification workflows, and integrated into the TrulyMedia verification platform, where they are available to journalists. This was key to ensuring practical applicability, and the project delivered the basis for further work, which also included selected trustworthiness aspects of audio manipulation detection in the AI4Media (EU) project.

More recent and current activities focus on the difficult domain of synthesis detection and on improving manipulation detection and origin analysis. The SpeechTrust+ (BMBF) project focuses on speech synthesis detection for the requirements of law enforcement. The vera.ai (EU) project involves content provenance analysis, scalability improvements and research on speaker-informed speech synthesis detection for the media domain. Finally, the news-polygraph (BMBF) project aims at embedding such tools in a large multi-modal disinformation analysis platform that also uses crowdsourcing and recommendation systems to support and improve the content verification process.

As for the development of technologies for active media authentication, first prototypes were developed many years ago in AudioTrust+ (TMWWDG) and TRA-ICT (FhG); they are now being enhanced for the demands of journalistic use in news-polygraph (BMBF) and for the authentication of AI-based synthetic speech in media production in GEISST (BMBF).

Beyond the research and networking activities within the aforementioned projects, Fraunhofer IDMT is also active within the European Network of Forensic Science Institutes (ENFSI) and The European Digital Media Observatory (EDMO).

 

Research project

news-polygraph

Multi-modal, crowd-supported technology platform for disinformation analysis

 

Research project

vera.ai

Explainable AI-based disinformation and provenance analysis

 

Research project

AI4Media

Center of excellence for AI in media – Our contributions: Audio forensics, audio provenance analysis, music analysis, privacy and recommendation systems

 

Research project

SpeechTrust+

Detection of AI-based speech synthesis and voice alienation for law enforcement

 

Research project

DIGGER

Integration of audio forensic tools into the content verification platform TrulyMedia

 

Research project

GEI§T

Robust authentication of synthetic audio material

 

Research project

REWIND

REVerse engineering of audio-VIsual coNtent Data: Detection of recording and manipulation traces, development of a test framework for media forensics

 

Research project

AudioTrust+

Advanced audio analysis for audio manipulation detection and provenance analysis

Products

 

Content Verification Toolbox

Advanced Tools for Audio Manipulation and Synthesis Detection

Media Verification Support

Support in the use of the latest analysis technologies for manipulation and synthesis detection (only for official requests, e.g. judicial expert opinions and investigations; not for private requests).

Audio Forensics Research and Development

Customized research and development: Development and evaluation of technologies for forensic analyses, audio manipulation and speech synthesis detection 

Publications

  • 2024, Journal Article: Calibrating neural networks for synthetic speech detection: A likelihood-ratio-based approach. Cuccovillo, Luca; Aichroth, Patrick; Köllmer, Thomas
  • 2024, Conference Paper: Visual and audio scene classification for detecting discrepancies in video: a baseline method and experimental protocol. Apostolidis, Konstantinos; Abeßer, Jakob; Cuccovillo, Luca; Mezaris, Vasileios
  • 2024, Conference Paper: Advancing Audio Phylogeny: A Neural Network Approach for Transformation Detection. Gerhardt, Milica; Cuccovillo, Luca; Aichroth, Patrick
  • 2024, Conference Paper: Audio Transformer for Synthetic Speech Detection via Formant Magnitude and Phase Analysis. Cuccovillo, Luca; Gerhardt, Milica; Aichroth, Patrick
  • 2024, Conference Paper: An Open Dataset of Synthetic Speech. Yaroshchuk, Artem; Papastergiopoulos, Christoforos; Cuccovillo, Luca; Aichroth, Patrick; Votis, Konstantinos; Tzovaras, Dimitrios
  • 2024, Paper: Generative AI and Disinformation. Bontcheva, Kalina; Papadopoulos, Symeon; Tsalakanidou, Filareti; Gallotti, Riccardo; Dutkiewicz, Lidia; Krack, Noémie; Nucci, Francesco Saverio; Spangenberg, Jochen; Srba, Ivan; Aichroth, Patrick; Cuccovillo, Luca; Verdoliva, Luisa
  • 2023, Conference Paper: MAD'23 Workshop Chairs' Welcome Message. Cuccovillo, Luca; Ionescu, Bogdan; Kordopatis-Zilos, Giorgos; Papadopoulos, Symeon; Popescu, Ana Maria
  • 2022, Conference Proceedings: MAD 2022, 1st International Workshop on Multimedia AI against Disinformation. Proceedings

This list has been generated from the publication platform Fraunhofer-Publica