Contextual information in virtual collaboration systems beyond current standards

Anna Carreras*, Maria Teresa Andrade, Tim Masterton, Hemantha Kodikara Arachchi, Vitor Barbosa, Safak Dogan, Jaime Delgado, Ahmet M. Kondoz

*Corresponding author for this work

Research output: Book chapter › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

Context-aware applications are fast becoming popular as a means of enriching users' experiences in various multimedia content access and delivery scenarios. Nevertheless, the definition, identification, and representation of contextual information are still open issues that need to be addressed. In this paper, we briefly present our work on context-based content adaptation in Virtual Collaboration Systems (VCSs), developed within the VISNET II Network of Excellence (NoE) project. Based on the conducted research, we conclude that MPEG-21 Digital Item Adaptation (DIA) is the most complete standardization initiative for representing context for content adaptation. However, the tools defined in the MPEG-21 DIA Usage Environment Descriptors (UEDs) are not adequate for Virtual Collaboration application scenarios, and thus we propose potential extensions to the available UEDs.
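For illustration only, the sketch below shows how a usage environment (terminal display capability plus network capacity) might be serialized in the spirit of MPEG-21 DIA UEDs. The element names, attribute names, and namespace URI here are assumptions chosen for readability, not taken from the paper or validated against the ISO/IEC 21000-7 schema.

```python
import xml.etree.ElementTree as ET

# Assumed namespace URI and element names, loosely modeled on the shape of
# MPEG-21 DIA usage-environment descriptors; not a schema-valid document.
DIA_NS = "urn:mpeg:mpeg21:2003:01-DIA-NS"
ET.register_namespace("dia", DIA_NS)


def build_usage_environment(display_width, display_height, bandwidth_bps):
    """Build a minimal usage-environment fragment describing one terminal's
    display and the network capacity available to it."""
    root = ET.Element(f"{{{DIA_NS}}}DIA")
    desc = ET.SubElement(root, f"{{{DIA_NS}}}Description")
    # Terminal capability: the display resolution the adaptation engine
    # could target when transcoding or scaling content.
    term = ET.SubElement(desc, f"{{{DIA_NS}}}TerminalCapability")
    ET.SubElement(term, f"{{{DIA_NS}}}Display",
                  width=str(display_width), height=str(display_height))
    # Network characteristic: maximum capacity in bits per second.
    ET.SubElement(desc, f"{{{DIA_NS}}}NetworkCharacteristic",
                  maxCapacity=str(bandwidth_bps))
    return root


fragment = build_usage_environment(1280, 720, 2_000_000)
print(ET.tostring(fragment, encoding="unicode"))
```

In a VCS scenario, one such fragment per participant would describe that participant's terminal and network, and the adaptation decision engine would read these descriptors to choose a suitable content variant for each endpoint.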

Original language: English
Title of host publication: 2009 10th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2009
Pages: 209-213
Number of pages: 5
DOIs
Publication status: Published - 2009
Externally published: Yes
Event: 2009 10th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2009 - London, United Kingdom
Duration: 6 May 2009 - 8 May 2009

Publication series

Name: 2009 10th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2009

Conference

Conference: 2009 10th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2009
Country/Territory: United Kingdom
City: London
Period: 6/05/09 - 8/05/09
