The main goal of inEvent is to develop new means to structure, retrieve, and share large archives of networked, and dynamically changing, multimedia recordings, mainly consisting here of meetings, video-conferences, and lectures. Several partners of the inEvent consortium indeed have access to (and continuously generate) such large multimedia repositories, which keep being enriched every day by new recordings, as well as social network data. The resulting resources often share common or related information, or are highly complementary, but they also come from different sources, in different formats, and with different types of metadata (if any). Hence, it is still impossible to properly search across these very rich multimedia resources based on metadata alone.
Exploiting, and going beyond, the current state of the art in audio, video, and multimedia processing and indexing, the present project proposes research and development towards a system that addresses the above problem by breaking our multimedia recordings into interconnected "hyper-events" (as opposed to hypertext), consisting of a particular structure of simpler "facets" which are easier to search, retrieve, and share. Building and adaptively linking such "hyper-events", as a means to search and link networked multimedia archives, will result in a more efficient search system, in which information can be retrieved based on "insights" and "experiences" (in addition to the usual metadata).
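To make the notion of a multi-faceted "hyper-event" more concrete, the following minimal Python sketch shows one possible way such a structure could be represented. The class names (HyperEvent, Facet), the facet kinds, and the link relations are illustrative assumptions for this sketch only, not the project's actual data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: class names, facet kinds, and link
# semantics are assumptions, not the inEvent design.

@dataclass
class Facet:
    """One searchable aspect of a recording, e.g. a speaker turn,
    a slide, a topic segment, or an attached social-network post."""
    kind: str                   # e.g. "speech", "slide", "topic", "social"
    start: float                # offset in seconds within the recording
    end: float
    payload: dict = field(default_factory=dict)  # extracted content/metadata

@dataclass
class HyperEvent:
    """A recording broken into facets, plus typed links to related
    hyper-events (the multimedia analogue of hypertext links)."""
    event_id: str
    facets: list[Facet] = field(default_factory=list)
    links: dict[str, list[str]] = field(default_factory=dict)  # relation -> ids

    def link(self, relation: str, other: "HyperEvent") -> None:
        """Adaptively connect this hyper-event to a related one."""
        self.links.setdefault(relation, []).append(other.event_id)

# Usage: a lecture linked to a follow-up meeting via a topic facet.
lecture = HyperEvent("lecture-2012-03-01")
lecture.facets.append(Facet("topic", 0.0, 900.0, {"label": "speech indexing"}))
meeting = HyperEvent("meeting-2012-03-08")
lecture.link("followed_by", meeting)
```

Under this reading, searching "based on insights and experiences" amounts to querying over facets and following typed links between hyper-events, rather than matching flat per-file metadata.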
Reaching the aforementioned goal requires challenging RTD efforts going well beyond the current state of the art in the fields of knowledge representation, audio processing, video analysis, semantics of information, and exploitation of social network information. Ultimately, the main goal of inEvent could thus be summarized as developing new ways to replace the usual "hypertext" links (linking bits of "information") with multi-faceted "hyper-events" (linking different "experiences/insights" related to dynamic multimedia recordings).