Universität Bielefeld

VR Lab Showcase

Gaze interaction in 3D

Gaze-Based Interaction in Virtual Reality
SFB 673 Project A1 - Modelling Partners
since 2006

This video demonstrates eye gaze interaction in an immersive Virtual Reality setup. A monocular eye tracker from Arrington Research is combined with a tracking system from A.R.T. GmbH to allow for the selection of objects or the detection of visual attention in human-agent communication.
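The basic principle can be sketched as follows: the A.R.T. system provides the head pose in world coordinates, the eye tracker provides a gaze direction relative to the head, and the combined gaze ray is intersected with the scene to find the attended object. The following Python sketch is purely illustrative; the function names, the sphere-based scene model, and the numbers are assumptions, not the actual lab software.

# Hypothetical sketch: selecting the object a user looks at by combining
# head tracking (world pose) with eye tracking (gaze direction in head space).
import numpy as np

def gaze_ray(head_pos, head_rot, eye_dir_head):
    """Transform the eye tracker's gaze direction into a world-space ray.
    head_pos: (3,) head position, head_rot: (3,3) rotation matrix,
    eye_dir_head: (3,) gaze direction in head coordinates."""
    direction = head_rot @ eye_dir_head
    return head_pos, direction / np.linalg.norm(direction)

def pick_object(origin, direction, objects):
    """Return the closest object whose bounding sphere the gaze ray hits.
    objects: list of (name, center, radius) tuples (illustrative scene model)."""
    best, best_t = None, np.inf
    for name, center, radius in objects:
        oc = center - origin
        t = np.dot(oc, direction)            # distance along the ray to the closest point
        if t < 0:
            continue                          # object lies behind the viewer
        if np.linalg.norm(oc - t * direction) <= radius and t < best_t:
            best, best_t = name, t
    return best

# Example: the user looks slightly to the right; the "shelf" sphere is selected.
objects = [("shelf", np.array([1.0, 0.0, -2.0]), 0.5),
           ("table", np.array([-1.0, 0.0, -2.0]), 0.5)]
origin, direction = gaze_ray(np.zeros(3), np.eye(3), np.array([0.45, 0.0, -1.0]))
print(pick_object(origin, direction, objects))   # -> "shelf"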

Classic 3D Interaction Techniques

Classic 3D Interaction Techniques Revisited
Student Project 2009/2010

This video demonstrates classic 3D user interaction techniques for navigation/travel, selection, and manipulation, applied to a virtual supermarket scenario. The implementations, a user study, and the video were produced by our student group "Interaction in Virtual Reality" in fall/winter 2009/2010 for a video submission to the Grand Prize contest of the 3D User Interfaces conference in 2010. Implemented techniques are: Ray Casting, Path Drawing, World in Miniature, Walking in Place, Image Plane, Lean-Based Velocity, and Grabbing in the Air.
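To give a flavor of one of these techniques, the following hypothetical Python sketch shows how lean-based velocity steering might map the tracked head's offset from a calibrated upright posture onto a travel velocity; all thresholds and names are illustrative assumptions, not the students' implementation.

# Hypothetical sketch of lean-based velocity steering: the tracked head position
# relative to a calibrated upright pose is mapped onto travel speed and direction.
import numpy as np

DEAD_ZONE = 0.05    # metres of lean that are ignored (assumed threshold)
MAX_LEAN = 0.30     # lean at which maximum speed is reached (assumed)
MAX_SPEED = 3.0     # metres per second at full lean (assumed)

def lean_velocity(head_pos, upright_pos):
    """Map the horizontal offset between the current and calibrated head
    position to a 2D travel velocity on the ground plane."""
    lean = np.array([head_pos[0] - upright_pos[0],   # sideways lean
                     head_pos[2] - upright_pos[2]])  # forward/backward lean
    magnitude = np.linalg.norm(lean)
    if magnitude < DEAD_ZONE:
        return np.zeros(2)                           # standing upright: no travel
    scale = min((magnitude - DEAD_ZONE) / (MAX_LEAN - DEAD_ZONE), 1.0)
    return (lean / magnitude) * scale * MAX_SPEED

# Leaning 20 cm forward (negative z in a typical right-handed VR frame)
# yields roughly 1.8 m/s of forward travel.
print(lean_velocity(np.array([0.0, 1.7, -0.20]), np.array([0.0, 1.7, 0.0])))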


SONAR - Social Networks in Virtual Reality
Student Project 2009

This video presents work of a student project on the creation of interactive visualization tools for the exploration of social networks. As an example, a 3D graph-based browser called SONAR for the music network Last.FM has been created.

Interactive Social Displays

Interactive Social Displays
PASION - Psychologically Augmented Social Interaction Over Networks
EU Project, 2006 - 2010

This prototype of Interactive Social Displays (ISDs) emphasizes the design of the communication environment and the interaction. Using gesture and speech, the user can interact with the ISDs in the virtual environment and establish connections to communication partners. The ISDs can be arranged freely in space to suit individual needs.

The main communication channels, audio and video, are already supported. The prototype also contains first mock-ups (non-functional versions) of different visualization techniques, e.g., bars for presenting ordinally scaled sources. As a special highlight, a first version of the avatar Pasqual is embedded in the prototype. Pasqual displays the system's estimate of the socio-emotional communication situation.

Older Demos

The virtual lab entrance, as shown in the VR Lab. This is how it started; it has changed a lot in recent years. Follow this link to see how it looks now...

The walk in the virtual lab ...

... and out of the window.

Now approaching the speedbike.

Finally arriving at the workplace. Some bars are connected using gesture and speech...
The SGIM project (1996-1999) investigated gesture and speech based human-computer interaction using large-screen displays.

This work is now used in several new projects where gesture and speech interaction is required...

Results from input and output processing are combined when Max follows our interaction in a large-scale installation. Below you can find examples of interaction with Max in which both communication partners use their speech and gesture abilities to express themselves.

The "Articulated Communicator" is a virtual humanoid agent that is to produce complex multimodal utterances (comprising speech, gesture, speech animation, and facial expression) from symbolic specifications of their outer form.

Our Text-to-Speech system builds on and extends the capabilities of txt2pho and MBROLA: it controls prosodic parameters like speech rate and intonation. Furthermore, it facilitates contrastive stress and offers mechanisms to synchronize pitch peaks in the speech output with external events.

Examples (German; wav files): "Drehe die Leiste quer zu der Leiste." ("Turn the bar crosswise to the bar.") "Drehe DIE Leiste quer zu DER Leiste..." (the same sentence with contrastive stress on the articles) "Schlafes Bruder..." [See Gregor Möhler's collection for comparison.]
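To illustrate the idea (not the actual system), the following Python sketch generates MBROLA-style .pho lines in which phoneme durations are scaled for speech rate and a pitch peak is moved onto the phoneme closest to a requested time, e.g. the moment of an external event; the phoneme symbols, pitch values, and the function itself are made-up examples.

# Hypothetical sketch of prosody control on MBROLA-style .pho input:
# durations are scaled for speech rate, and a pitch peak is placed on the
# phoneme whose midpoint lies closest to a requested (external) event time.
def render_pho(phonemes, rate=1.0, peak_time_ms=None,
               base_pitch=100, peak_pitch=160):
    """phonemes: list of (symbol, duration in ms).
    Returns MBROLA-style lines: 'symbol duration position% pitchHz'."""
    scaled = [(sym, dur / rate) for sym, dur in phonemes]
    starts, t = [], 0.0
    for _, dur in scaled:
        starts.append(t)
        t += dur
    peak_index = None
    if peak_time_ms is not None:
        # phoneme whose midpoint is closest to the requested peak time
        peak_index = min(range(len(scaled)),
                         key=lambda i: abs(starts[i] + scaled[i][1] / 2 - peak_time_ms))
    lines = []
    for i, (sym, dur) in enumerate(scaled):
        pitch = peak_pitch if i == peak_index else base_pitch
        lines.append(f"{sym} {dur:.0f} 50 {pitch}")
    return "\n".join(lines)

# Illustrative symbols for "Leiste", spoken 20% faster, with a pitch peak
# aligned to an event at t = 100 ms (lands on the stressed vowel).
print(render_pho([("l", 60), ("aI", 140), ("s", 90), ("t", 50), ("@", 70)],
                 rate=1.2, peak_time_ms=100))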

Hello, I'm Max, what can I do for you?

Max, the Multimodal Assembly eXpert, can demonstrate to the user the assembly of complex aggregates (movie file: 11 MB).

The arm sensor has been displaced, so some calibration is needed to tweak the system. We can use our speech input for the calibration.

Knowledge-based assembly simulation in the Virtual Constructor: When port knowledge is activated (indicated by highlighting), parts snap together according to legal mating conditions.
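As a rough illustration of the idea of legal mating conditions, the following hypothetical Python sketch pairs ports only when their types are complementary and their mating geometry agrees within a tolerance; the port types, attributes, and thresholds are invented for the example and do not reflect the actual Virtual Constructor data model.

# Hypothetical sketch of port-based assembly: two parts may snap together
# only if they expose ports with complementary types and matching geometry.
from dataclasses import dataclass

@dataclass
class Port:
    kind: str        # e.g. "peg" or "hole" (illustrative port types)
    diameter: float  # mating-relevant geometry, here just a diameter in mm
    position: tuple  # port position in the part's local frame

COMPLEMENTARY = {("peg", "hole"), ("hole", "peg")}
TOLERANCE = 0.5  # mm of allowed diameter mismatch (assumed)

def legal_mating(a: Port, b: Port) -> bool:
    """A pair of ports defines a legal mating condition if their types are
    complementary and their mating geometry agrees within tolerance."""
    return ((a.kind, b.kind) in COMPLEMENTARY
            and abs(a.diameter - b.diameter) <= TOLERANCE)

def find_snap(part_a_ports, part_b_ports):
    """Return the first pair of ports that allows the parts to snap together."""
    for a in part_a_ports:
        for b in part_b_ports:
            if legal_mating(a, b):
                return a, b
    return None

# A bar with a peg can snap into a block with a matching hole,
# but not into the hole that is far too wide.
bar = [Port("peg", 6.0, (0, 0, 0))]
block = [Port("hole", 6.2, (10, 0, 0)), Port("hole", 12.0, (20, 0, 0))]
print(find_snap(bar, block))  # -> the peg paired with the 6.2 mm hole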

Extraction of port knowledge from unstructured CAD models. The port recognition software detects mating features in polygon soups. Non-visual port attributes are interactively modeled on the Responsive Workbench.

Recognition of iconic gestures: Gestures can be used to indicate the shape of objects. In the example the user shows a cube, a cylindrical object, and a bar with his hands. The system determines the most appropriate object that matches the gestural description.
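One simple way to think about this matching step, shown in the hypothetical sketch below, is to compare the spatial extent spanned by the hands with the dimensions of candidate objects and pick the closest match; the object models and the distance measure are illustrative assumptions, not the actual recognition algorithm.

# Hypothetical sketch of matching an iconic gesture to an object: the extent
# indicated by the two hands is compared with each object's dimensions.
import numpy as np

# Illustrative object models: name -> (width, height, depth) in metres.
OBJECTS = {
    "cube":     (0.10, 0.10, 0.10),
    "cylinder": (0.05, 0.20, 0.05),
    "bar":      (0.30, 0.03, 0.03),
}

def match_gesture(extent):
    """extent: (dx, dy, dz) spanned between the hands during the gesture.
    Returns the object whose (sorted) dimensions are closest to the gesture."""
    gesture = np.sort(np.asarray(extent))
    best, best_dist = None, np.inf
    for name, dims in OBJECTS.items():
        dist = np.linalg.norm(gesture - np.sort(np.asarray(dims)))
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# Hands held about 30 cm apart along one axis, close together otherwise:
# interpreted as the elongated bar.
print(match_gesture((0.28, 0.04, 0.04)))  # -> "bar"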

Imaginal prototypes are parametric shape representations for 3D object recognition. Imaginal prototypes can be defined at several levels of abstraction. The video demonstrates how an abstract skeleton model is used for the classification of different types of airplanes (and even only partially assembled toy airplanes). Caution: 17 MB MPEG file.

From the AkuVis project: A moment during interaction with a changeable landscape illustrating the noise conditions in a city district of Bielefeld...

Join a camera flight through the virtual VIENA environment.

Instructing the VIENA system (1996) by speech and gesture.

Virtual RoboCup provides a real-time 3D visualization tool for RoboCup simulation league soccer games.

Ipke Wachsmuth, Bernhard Jung, Marc Latoschik, Thies Pfeiffer; August 21, 2000 (last updated November 16, 2011)