Amsterdam, The Netherlands / June 30, 2025
5th ACM International Workshop on Multimedia AI against Disinformation (MAD’26)
On June 16, 2026, the 5th edition of the ACM International Workshop on Multimedia AI against Disinformation (MAD’26), organized in conjunction with the ACM International Conference on Multimedia Retrieval (ICMR’26), will take place in Amsterdam, The Netherlands. The workshop welcomes contributions on all aspects of AI-powered disinformation detection, analysis and mitigation.
Fraunhofer IDMT will support the organization of the workshop and present its latest research findings there.
Milica Gerhardt, Luca Cuccovillo, Patrick Aichroth
Audio-text decontextualization is a form of real-world misinformation in which genuine audio recordings – speech excerpts, news clips, interviews – are detached from their authentic context and paired with misleading textual narratives. Addressing it in practice requires both audio provenance analysis and context analysis: provenance retrieves candidate source recordings, while context analysis determines whether the recovered source supports the narrative attached to the post. This paper presents three context-analysis pipelines that address this issue, together with their cascade combinations, and evaluates them on the M3A dataset alongside four audio-language baselines. We show that a substantial fraction of M3A manipulations are fundamentally undetectable from audio-text content alone, and that on the subset where detection is possible our best pipelines reach 0.73 accuracy on Named Entity Manipulation (NEM) and 0.92 on Multimodal Misalignment (MM) audio swap. Building on these findings, we formulate an operational workflow for real-world investigations and demonstrate it on three case studies, which also motivate a lightweight linguistic middle layer for conditional and modal/hedging framing drops. This leads to two practical deployment recommendations: (1) a fast bulk-screening pipeline that flags context-stripping attacks via entailment failure; and (2) a large language model (LLM)-based deep-verification pipeline for the most suspicious cases, capable of explicit reasoning about framing shifts.
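The deployment recommendations above describe a two-stage cascade: cheap entailment-based screening over all posts, with only entailment failures escalated to LLM-based deep verification. The following is a minimal sketch of that control flow, not the authors' implementation; the entailment scorer and the verifier are hypothetical placeholders (in practice the first would be an NLI model and the second an LLM prompted to reason about framing shifts such as dropped conditionals or hedging).

```python
# Hypothetical sketch of the two-stage cascade described in the abstract.
# entailment_score() and deep_verify() are placeholder stand-ins, NOT the
# paper's actual components.

def entailment_score(source_transcript: str, claim: str) -> float:
    """Placeholder for an NLI model: does the retrieved source transcript
    entail the claim attached to the post? Here: crude word overlap."""
    claim_words = set(claim.lower().split())
    shared = set(source_transcript.lower().split()) & claim_words
    return len(shared) / max(len(claim_words), 1)

def screen_post(source_transcript: str, claim: str, threshold: float = 0.5) -> str:
    """Stage 1: fast bulk screening. Entailment failure flags the post
    as a suspected context-stripping attack."""
    if entailment_score(source_transcript, claim) < threshold:
        return "flagged"
    return "pass"

def deep_verify(source_transcript: str, claim: str) -> dict:
    """Stage 2 placeholder: an LLM would reason explicitly here about
    framing shifts (conditional drops, modal/hedging removal)."""
    return {"verdict": "needs_review",
            "evidence": {"source": source_transcript, "claim": claim}}

def cascade(source_transcript: str, claim: str) -> dict:
    """Run stage 1 on every post; escalate only flagged posts to stage 2."""
    if screen_post(source_transcript, claim) == "flagged":
        return deep_verify(source_transcript, claim)
    return {"verdict": "consistent"}
```

The design point is the asymmetry: stage 1 must be cheap enough to run in bulk, so its only job is recall on entailment failures; precision is recovered in stage 2, which runs on the small flagged subset.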
Modern communication no longer relies solely on mainstream media such as newspapers or television; it increasingly takes place over social networks, in real time, with live interactions among users, and is more and more mediated by AI-based systems such as bots and recommendation algorithms. The faster distribution and the sheer amount of available information, however, have also led to a growing volume of misleading content, disinformation and propaganda. Consequently, the fight against disinformation, in which news agencies and NGOs (among others) take part on a daily basis to prevent citizens’ opinions from being distorted, has become ever more crucial and demanding, especially with regard to sensitive topics such as immigration, health and climate change.
Disinformation campaigns leverage, among other things, AI-based tools for content generation and modification: hyper-realistic visual, speech, textual and video content has emerged under the collective name of “deepfakes”, more recently produced with Large Language Models (LLMs) and Large Multimodal Models (LMMs), undermining the perceived credibility of media content. It is therefore all the more crucial to counter these advances by devising new robust and trustworthy AI tools, accessible to journalists and fact-checkers, that can detect inaccurate, synthetic and manipulated content.
Future multimedia disinformation detection research relies on combining different modalities and on adopting the latest advances in deep learning approaches and architectures. These developments raise new challenges and questions that need to be addressed to reduce the impact of disinformation campaigns. The workshop, now in its fifth edition, welcomes contributions related to different aspects of AI-powered disinformation detection, analysis and mitigation.
Topics of interest include but are not limited to:
| Paper submission due | April 5th, 2026 |
| Acceptance notification | April 19th, 2026 |
| Camera-ready papers due | April 25th, 2026 |
| Workshop @ACM ICMR 2026 | June 16, 2026 |
The 5th ACM International Workshop on Multimedia AI against Disinformation (MAD’26) is organized in conjunction with the ACM International Conference on Multimedia Retrieval (ICMR’26) and is supported under the following projects:
For more information about the workshop and submission instructions, please visit the workshop website MAD2026.