
Multimodal Communication

In my work on multimodal communication I focus on gaze and gestures in interaction with speech. The interactions I am interested in are always communication scenarios between humans, or between humans and virtual avatars or, more recently, robots.

Gesture Space

Gesture space visualization of the hand positions during pointing gestures (stroke)

During my research on gestures in 3D environments, I encountered the problem of presenting results collected over many trials or from many participants. I therefore developed methods to visualize gesture data in 3D.

The picture above shows a visualization of the end positions of the pointing hand during pointing gestures towards all of the objects on the table. The particular group of participants depicted tended to raise their hands higher above the table when pointing to objects more distant than the first two thirds of the table.
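As a minimal sketch of how such an aggregated view can be produced, the following Python snippet collects the 3D hand positions at the gesture stroke from a set of trial files and renders them as a scatter plot coloured by the distance of the pointed-at object. The file names, column layout and coordinate frame are assumptions for illustration only, not the actual pipeline used in the experiments.

```python
# Sketch: aggregate 3D hand positions at the gesture stroke across many
# trials/participants and visualize them. File pattern and columns are
# hypothetical.
import glob

import numpy as np
import matplotlib.pyplot as plt

def load_stroke_positions(pattern="trials/*_stroke.csv"):
    """Collect (x, y, z) hand positions and target distances from trial files."""
    positions, distances = [], []
    for path in glob.glob(pattern):
        # Assumed columns: hand_x, hand_y, hand_z, target_distance
        data = np.atleast_2d(np.loadtxt(path, delimiter=",", skiprows=1))
        positions.append(data[:, 0:3])
        distances.append(data[:, 3])
    return np.vstack(positions), np.concatenate(distances)

positions, distances = load_stroke_positions()

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Colouring each stroke position by the distance of the pointed-at object
# makes the "raise the hand for far targets" strategy visible.
sc = ax.scatter(positions[:, 0], positions[:, 1], positions[:, 2],
                c=distances, cmap="viridis", s=10)
fig.colorbar(sc, label="target distance [m]")
ax.set_xlabel("x [m]"); ax.set_ylabel("y [m]"); ax.set_zlabel("z [m]")
plt.show()
```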


Framework for the Annotation, Analysis and Augmentation of Multimodal Experiments (FAME)

The work on a framework for the annotation, analysis and augmentation of multimodal experiments (FAME) is the successor of IADE (see below). It builds on more general frameworks (InstantReality, InstantIO, RSB), which makes it more versatile in the number and kinds of devices it supports.
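The Python sketch below only illustrates the general device-abstraction idea behind such a framework: heterogeneous input devices are wrapped behind a common interface and publish time-stamped samples into one merged stream. The class and method names are hypothetical and do not reflect FAME's actual API or the InstantIO/RSB interfaces.

```python
# Illustrative device-abstraction sketch (not FAME's real API): each device
# adapter yields time-stamped samples that are merged into one timeline.
import time
from dataclasses import dataclass, field
from typing import Iterator, Protocol

@dataclass
class Sample:
    device: str                  # e.g. "mocap", "gaze"
    timestamp: float             # seconds on a common clock for all devices
    payload: dict = field(default_factory=dict)

class Device(Protocol):
    def read(self) -> Iterator[Sample]: ...

class MocapDevice:
    """Stub motion-capture adapter producing 3D hand positions."""
    def read(self) -> Iterator[Sample]:
        yield Sample("mocap", time.time(), {"hand": (0.42, 0.10, 1.05)})

class GazeDevice:
    """Stub eye-tracker adapter producing gaze directions."""
    def read(self) -> Iterator[Sample]:
        yield Sample("gaze", time.time(), {"direction": (0.0, -0.2, 1.0)})

def merge_streams(devices):
    """Interleave samples from all devices into one time-ordered stream."""
    samples = [s for d in devices for s in d.read()]
    return sorted(samples, key=lambda s: s.timestamp)

for sample in merge_streams([MocapDevice(), GazeDevice()]):
    print(sample)
```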


Interactive Augmented Data Explorer

In our work on assessing the accuracy of pointing gestures and the interplay between speech and gestures during pointing, it was necessary to collect data on the performance of pointing gestures with very high spatial precision.

Our solution provided even more than that: with the Interactive Augmented Data Explorer (Pfeiffer et al. 2006) we were not only able to record human-human interactions using audio, video and motion capture, we could also replay the recorded data in an immersive virtual reality installation. This allowed us to explore the collected data at 1:1 scale by walking right through it!

In addition, based on the rich 3D model of the environment, we were also able to test hypotheses and mathematical models, e.g. of the precision or the direction of pointing gestures, in a data-driven way: mathematical scripts were evaluated and visualized during the virtual reality replay of the recorded sessions.
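As an illustration of this kind of data-driven model testing, the sketch below computes the angular deviation between an extrapolated pointing ray and the actual target position for one recorded stroke. The eye-fingertip ray used here is just one possible pointing model, and all coordinates are hypothetical values, not data from the recorded sessions.

```python
# Sketch: evaluate a pointing model on recorded data by measuring how far
# the extrapolated ray deviates from the true target. Model and coordinates
# are illustrative assumptions.
import numpy as np

def angular_error(origin, through, target):
    """Angle (degrees) between the ray origin->through and origin->target."""
    ray = through - origin
    to_target = target - origin
    cos = np.dot(ray, to_target) / (np.linalg.norm(ray) * np.linalg.norm(to_target))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# One recorded pointing stroke (positions in meters, hypothetical data).
eye       = np.array([0.00, 1.60, 0.00])
fingertip = np.array([0.35, 1.20, 0.55])
target    = np.array([0.90, 0.75, 1.40])

print(f"deviation of the eye-fingertip ray: {angular_error(eye, fingertip, target):.1f} deg")
```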
