Call for Papers
Modern communication no longer relies solely on classic media such as newspapers or television; it increasingly takes place over social networks, in real time, and with live interactions among users. The growth in the amount of available information, however, has also led to an increase in the volume and sophistication of misleading content, disinformation and propaganda. Consequently, the fight against disinformation, in which news agencies and NGOs (among others) take part on a daily basis to prevent citizens’ opinions from being distorted, has become even more crucial and demanding, especially with regard to sensitive topics such as politics, health and religion.
Disinformation campaigns leverage, among other means, market-ready AI-based tools for content creation and modification: hyper-realistic visual, speech, textual and video content has emerged under the collective name of “deepfakes”, undermining the perceived credibility of media content. It is therefore all the more crucial to counter these advances by devising new analysis tools able to detect the presence of synthetic and manipulated content — tools that are accessible to journalists and fact-checkers, robust and trustworthy, and possibly AI-based to achieve greater performance.
Future research on multimedia disinformation detection relies on combining different modalities and on adopting the latest advances in deep learning approaches and architectures. These developments raise new challenges and questions that need to be addressed in order to reduce the impact of disinformation campaigns. The workshop, in its second edition, welcomes contributions related to different aspects of AI-powered disinformation detection, analysis and mitigation.
Topics of interest include but are not limited to:
- Disinformation detection in multimedia content (e.g., video, audio, texts, images)
- Multimodal verification methods
- Synthetic and manipulated media detection
- Multimedia forensics
- Disinformation spread and effects in social media
- Analysis of disinformation campaigns in societally-sensitive domains
- Robustness of media verification against adversarial attacks and real-world complexities
- Fairness and non-discrimination of disinformation detection in multimedia content
- Explaining disinformation and disinformation-detection technologies to non-expert users
- Temporal and cultural aspects of disinformation
- Dataset sharing and governance in AI for disinformation
- Datasets for disinformation detection and multimedia verification
- Open resources, e.g., datasets, software tools
- Multimedia verification systems and applications
- System fusion, ensembling and late fusion techniques
- Benchmarking and evaluation frameworks
| Paper submission due | February 28, 2023 |
| Acceptance notification | March 31, 2023 |
| Camera-ready papers due | April 20, 2023 |
| Workshop @ACM ICMR 2023 | June 12, 2023 (to be confirmed) |
The workshop is supported under the H2020 project AI4Media “A European Excellence Centre for Media, Society and Democracy”, and the Horizon Europe project vera.ai “VERification Assisted by Artificial Intelligence”.
For more information about the workshop and submission instructions, please visit the MAD'23 website.