Context Sensing and Interpretation

The main objective of this work package is to exploit all available on-board hardware (and virtual internet) sensors to provide a comprehensive understanding of user context from multiple points of view.
The task structure reflects the different types of sensors: audio, video, and other sensors are covered in separate tasks, but they work together towards the following multi-modal objectives:

  • Geographical localization and absolute orientation estimation: hardware sensors (e.g. GPS, digital compass, accelerometers), a priori knowledge about the world (e.g. maps, Digital Elevation Models, geo-tagged media), and vision-based algorithms are exploited in a coherent way to provide accurate measurements even in challenging situations where traditional location sensors fail.
  • Scene reconstruction and user behaviour understanding: information measured by all available sensors is used to infer what the user is doing and what they could experience with their senses.
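As one illustration of how hardware sensors can be combined coherently for orientation estimation, the sketch below blends a gyroscope's rate signal (smooth but drifting) with a digital compass (noisy but absolute) using a simple complementary filter. This is a minimal, generic example under assumed sample streams; the function name, sample values, and filter coefficient are hypothetical and do not reflect the project's actual fusion pipeline.

```python
def fuse_heading(gyro_rate, compass_heading, prev_heading, dt, alpha=0.98):
    """Complementary filter: integrate the gyro rate for short-term smoothness,
    then nudge the result towards the absolute compass reading."""
    predicted = (prev_heading + gyro_rate * dt) % 360.0
    # Wrap the compass correction into [-180, 180) before blending,
    # so a reading of 359 deg does not look far from 1 deg.
    error = ((compass_heading - predicted + 180.0) % 360.0) - 180.0
    return (predicted + (1.0 - alpha) * error) % 360.0

# Hypothetical samples: (gyro rate in deg/s, compass heading in deg) at 0.5 s steps.
heading = 0.0
for rate, compass in [(5.0, 2.0), (5.0, 4.0), (5.0, 6.0)]:
    heading = fuse_heading(rate, compass, heading, dt=0.5)
```

The same weighted-correction idea generalizes to fusing GPS fixes with dead-reckoned positions, or compass headings with vision-based orientation estimates.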

The outcomes of WP4 will follow standard formats as much as possible and will be made accessible to the other VENTURI WPs through well-defined APIs. All WP4 outcomes, and where possible their preliminary results, will be considered in the preparation of deliverables D4.2 and D4.3.