Universität Bielefeld

VR Lab Showcase

Our more recent demos can be found on our project pages – for example, clicking on the
picture to the left takes you to a page where you can explore some demos with Emma and Max.

Some more MPEG videos (and MP3 audio files) can be found on our media page.

Below, we keep some older (low-resolution) demos that were pioneering in their time.

Some pioneering demos

The entrance to the virtual lab, as presented in the VR-Lab. This is how we started in the 1990s.

A walk through the virtual lab ...

... and a look out of the window.

Now approaching the speedbike.

Finally arriving at the workplace. Some bars are connected using gesture and speech...
The SGIM project (1996-1999) investigated gesture- and speech-based human-computer interaction using large-screen displays.

This work is now used in several new projects where gesture and speech interaction is required...
The "Articulated Communicator" is a virtual humanoid agent that is to produce complex multimodal utterances (comprising speech, gesture, speech animation, and facial expression) from symbolic specifications of their outer form.
Our Text-to-Speech system builds on and extends the capabilities of txt2pho and MBROLA: it controls prosodic parameters like speech rate and intonation. Furthermore, it facilitates contrastive stress and offers mechanisms to synchronize pitch peaks in the speech output with external events.

Examples (German; wav files): "Drehe die Leiste quer zu der Leiste." ("Turn the bar crosswise to the bar.") "Drehe DIE Leiste quer zu DER Leiste..." ("Turn THE bar crosswise to THE bar...", with contrastive stress on the articles)
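
To give a flavor of how such pitch peaks can be placed explicitly, here is a minimal Python sketch (hypothetical helper names, not our actual system) that emits an MBROLA .pho fragment in which one phoneme carries a raised pitch target, approximating a contrastive accent:

    # Hypothetical helpers; MBROLA .pho lines are "phoneme duration (pos% Hz)*".
    def pho_line(phoneme, duration_ms, pitch_targets=()):
        """Format one MBROLA .pho line from a phoneme, its duration, and
        optional (position-in-percent, frequency-in-Hz) pitch targets."""
        targets = " ".join(f"{pos} {hz}" for pos, hz in pitch_targets)
        return f"{phoneme} {duration_ms} {targets}".rstrip()

    def emit_word(phonemes, stressed_index=None, base_hz=110, peak_hz=160):
        """Yield .pho lines for a word; the stressed phoneme gets a pitch peak."""
        for i, (ph, dur) in enumerate(phonemes):
            if i == stressed_index:
                # Rise to the peak mid-phoneme, then fall back: a pitch accent.
                yield pho_line(ph, dur, [(10, base_hz), (50, peak_hz), (90, base_hz)])
            else:
                yield pho_line(ph, dur, [(50, base_hz)])

    # German SAMPA phonemes for the contrastively stressed article "DIE".
    for line in emit_word([("d", 60), ("i:", 180)], stressed_index=1):
        print(line)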

Hello, I'm Max, what can I do for you? (click on picture)

Max, the Multimodal Assembly eXpert, can demonstrate the assembly of complex aggregates to the user (movie file size: 11 MB).

The arm sensor has become dislocated, so some calibration is needed to re-tune the system. We can use our speech input for calibration.

Knowledge-based assembly simulation in the Virtual Constructor: When port knowledge is activated (indicated by highlighting), parts snap together according to legal mating conditions.
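
The underlying idea can be sketched as follows (a minimal sketch with assumed names and thresholds, not the Virtual Constructor's actual code): two parts snap together only if their ports satisfy a legal mating condition, modeled here as complementary port types within snapping distance.

    from dataclasses import dataclass

    @dataclass
    class Port:
        kind: str        # e.g. "peg" or "hole" (assumed port taxonomy)
        position: tuple  # port position in world coordinates (x, y, z)

    MATES = {("peg", "hole"), ("hole", "peg")}  # legal mating conditions
    SNAP_RADIUS = 0.02                          # snap distance in meters (assumed)

    def can_mate(a: Port, b: Port) -> bool:
        """Ports mate if their kinds are complementary and they are close enough."""
        dist = sum((p - q) ** 2 for p, q in zip(a.position, b.position)) ** 0.5
        return (a.kind, b.kind) in MATES and dist <= SNAP_RADIUS

    # When port knowledge is active, candidate ports are highlighted and snapped.
    print(can_mate(Port("peg", (0, 0, 0)), Port("hole", (0, 0, 0.015))))  # True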

Extraction of port knowledge from unstructured CAD models. The port recognition software detects mating features in polygon soups. Non-visual port attributes are interactively modeled on the Responsive Workbench.

Recognition of iconic gestures: Gestures can be used to indicate the shape of objects. In the example the user shows a cube, a cylindrical object, and a bar with his hands. The system determines the most appropriate object that matches the gestural description.
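
The matching step can be sketched as follows (assumed representation and template values, not our actual classifier): the spatial extents indicated by the hands are compared with stored object templates, and the nearest template is selected.

    import math

    # Template extents (width, height, depth) in meters; values are illustrative.
    TEMPLATES = {
        "cube":     (0.10, 0.10, 0.10),
        "cylinder": (0.05, 0.05, 0.20),
        "bar":      (0.03, 0.03, 0.30),
    }

    def classify(gesture_extents):
        """Return the template whose sorted extents are closest to the gesture's,
        so the comparison is independent of the object's orientation."""
        g = sorted(gesture_extents)
        return min(TEMPLATES, key=lambda name: math.dist(g, sorted(TEMPLATES[name])))

    print(classify((0.29, 0.035, 0.03)))  # -> "bar"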

Imaginal prototypes are parametric shape representations for 3D object recognition. Imaginal prototypes can be defined at several levels of abstraction. The video demonstrates how an abstract skeleton model is used for the classification of different types of airplanes (and even of only partially assembled toy airplanes). Caution: 17 MB MPEG file; it takes a moment to load...
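
The classification idea can be sketched as follows (an assumed parameterization for illustration, not the published model): each prototype is a set of skeleton segments with admissible length ranges, and an observed skeleton is assigned to the prototype whose ranges it satisfies best, tolerating missing segments such as those of a partially assembled airplane.

    # Prototype skeletons: segment name -> (min, max) length in model units (assumed).
    PROTOTYPES = {
        "glider": {"fuselage": (4, 8),  "wing": (6, 12), "tail": (1, 2)},
        "jet":    {"fuselage": (8, 16), "wing": (3, 6),  "tail": (2, 4)},
    }

    def match_score(observed, prototype):
        """Fraction of prototype segments whose observed length lies in range;
        segments missing from the observation simply contribute no hit."""
        hits = sum(lo <= observed[name] <= hi
                   for name, (lo, hi) in prototype.items() if name in observed)
        return hits / len(prototype)

    def classify(observed):
        """Assign the observed skeleton to the best-scoring prototype."""
        return max(PROTOTYPES, key=lambda n: match_score(observed, PROTOTYPES[n]))

    # A partially assembled toy airplane: the tail is not yet attached.
    print(classify({"fuselage": 12, "wing": 4}))  # -> "jet"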

From the AkuVis project: A moment during interaction with a changeable landscape illustrating the noise conditions in a city district of Bielefeld...

Join a camera flight through the virtual VIENA environment.

Instructing the VIENA system (1996) by speech and gesture.

Virtual RoboCup provides a real-time 3D visualization tool for RoboCup simulation league soccer games.


Ipke Wachsmuth, Bernhard Jung, Marc Latoschik, Thies Pfeiffer; August 21, 2000 (last updated October 12, 2017)