With ever-increasing amounts of content, there is huge demand for extracting meaning from raw multimedia via automatic annotation, yet doing so remains an enormous challenge. Audio-visual (A/V) analysis components typically operate in isolation and do not consider the context of a media resource, which results in insufficient quality for many tasks. Moreover, most of the available technologies are complex and difficult to configure and combine, which makes them very expensive to use – especially for SMEs. MICO is an EU research project which aims to solve this dilemma by:
- Providing an architecture to analyze “media in context” by orchestrating analysis components for several media types (video, images, audio, text, link structure and metadata), with each component drawing on the others' results and contributing to an overall picture of the meaning of the media content
- Providing all necessary technologies for a distributed analysis workflow: cross-media extraction, an extraction model and orchestration, metadata publishing, metadata querying, and cross-media recommendations
- Contributing the core software results as components under business-friendly OSS licenses (building on projects such as Apache Marmotta), while allowing the inclusion of both OSS and closed-source extractors to simplify the use of the technology in industrial products.
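The orchestration idea above – components enriching a shared media resource so that later components can draw on earlier results – can be sketched as a minimal pipeline. This is an illustrative sketch only; the extractor names, annotation model, and pipeline API here are assumptions for the example, not MICO's actual interfaces:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A media item accumulates annotations as it flows through the pipeline.
@dataclass
class MediaItem:
    uri: str
    annotations: dict = field(default_factory=dict)

# An extractor reads the item (including earlier annotations) and adds its own.
Extractor = Callable[[MediaItem], None]

def transcribe_audio(item: MediaItem) -> None:
    # Placeholder for a speech-to-text component.
    item.annotations["transcript"] = "a cat sits on a mat"

def detect_objects(item: MediaItem) -> None:
    # Placeholder for a visual object detector.
    item.annotations["objects"] = ["cat", "mat"]

def link_entities(item: MediaItem) -> None:
    # Cross-media step: combines the transcript AND the detected objects,
    # so the components jointly contribute to the overall meaning.
    transcript = item.annotations.get("transcript", "")
    objects = item.annotations.get("objects", [])
    item.annotations["entities"] = sorted(
        {word for word in transcript.split() if word in objects}
    )

def run_pipeline(item: MediaItem, extractors: List[Extractor]) -> MediaItem:
    for extract in extractors:
        extract(item)  # each extractor sees the context built so far
    return item

item = run_pipeline(MediaItem("urn:example:video1"),
                    [transcribe_audio, detect_objects, link_entities])
print(item.annotations["entities"])  # entities confirmed by both modalities
```

The key design point is that extractors share one growing annotation context rather than running in isolation, which is what allows a cross-media step like `link_entities` to reconcile evidence from audio and video.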