Afrigraph 2001, 1st International Conference on Computer Graphics,
Virtual Reality and Visualization in Africa, 5 - 7 November 2001.
This article presents a gesture detection and analysis framework
for modelling multimodal interactions. It is particularly designed
for use in Virtual Reality (VR) applications and contains an
abstraction layer for different sensor hardware. Using the framework,
gestures are described by their characteristic spatio-temporal
features, which at the lowest level are calculated by simple predefined
detector modules, or nodes. These nodes can be connected by a data
routing mechanism to perform more elaborate evaluation functions,
thereby forming complex detector nets. Typical problems
that arise from the time-dependent invalidation of multimodal
utterances under immersive conditions led to the development of
pre-evaluation concepts, which also support integration into
scene-graph-based systems with traversal-type access.
Examples of realized interactions illustrate applications that
make use of the described concepts.
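As an illustrative sketch only (not code from the paper), the idea of wiring simple feature detectors into a larger detector net via data routing might look as follows; the node class, the feature functions, and the "fast sweep" predicate are all hypothetical names chosen for this example:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DetectorNode:
    """A detector computing one spatio-temporal feature.

    Leaf nodes read raw sensor samples; inner nodes combine the
    values routed to them from their input nodes.
    """
    name: str
    fn: Callable[..., object]
    inputs: List["DetectorNode"] = field(default_factory=list)

    def evaluate(self, samples):
        if not self.inputs:
            return self.fn(samples)
        return self.fn(*(n.evaluate(samples) for n in self.inputs))

# Hypothetical sensor data: hand x-positions over successive frames.
samples = [0.0, 0.1, 0.3, 0.6, 1.0]

# Two simple predefined detectors for spatio-temporal features.
speed = DetectorNode("speed", lambda s: (s[-1] - s[0]) / (len(s) - 1))
extent = DetectorNode("extent", lambda s: max(s) - min(s))

# A small detector net combining both features into one predicate.
fast_sweep = DetectorNode(
    "fast_sweep",
    lambda v, e: v > 0.2 and e > 0.5,
    inputs=[speed, extent],
)

print(fast_sweep.evaluate(samples))  # prints True for this motion
```

Connecting nodes through explicit input routing, as sketched here, is what allows simple predefined detectors to be composed into more elaborate evaluation functions without changing the detectors themselves.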
Keywords: 3D HCI, gestures, multimodal, gesture processing,
multimodal interface framework, gesture and speech input,
interaction in virtual reality, immersive conditions.