Gaze-Based Interaction in Virtual Reality
SFB 673 Project A1 - Modelling Partners
since 2006
This video demonstrates eye gaze interaction in an immersive Virtual Reality setup. A monocular eye tracker from Arrington Research is combined with a tracking system from A.R.T. GmbH to allow for the selection of objects or the detection of visual attention in human-agent communication.
- Pfeiffer, T. (2008). Towards Gaze Interaction in Immersive Virtual Reality: Evaluation of a Monocular Eye Tracking Set-Up. In Virtuelle und Erweiterte Realität - Fünfter Workshop der GI-Fachgruppe VR/AR, 81-92. Aachen: Shaker Verlag GmbH.
- Pfeiffer-Leßmann, N., & Wachsmuth, I. (2009). Formalizing joint attention in cooperative interaction with a virtual human. In B. Mertsching, M. Hund, & Z. Aziz (Eds.), KI 2009: Advances in Artificial Intelligence (pp. 540-547). Berlin: Springer (LNAI 5803).
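For illustration, here is a minimal sketch of how such gaze-based selection can be computed: a gaze direction reported in head coordinates is rotated into world coordinates using the tracked head pose, and the resulting ray is tested against simple sphere proxies of the scene objects. The object names, poses, and the sphere-based test are hypothetical, not taken from the actual setup.

```python
# Illustrative sketch of gaze-based selection: a gaze direction measured in
# head coordinates (eye tracker) is transformed into world coordinates using
# the head pose (optical tracking system) and intersected with sphere proxies
# of scene objects. All names and values are hypothetical.
import numpy as np

def gaze_ray_world(head_pos, head_rot, gaze_dir_head):
    """Return origin and unit direction of the gaze ray in world coordinates."""
    direction = head_rot @ np.asarray(gaze_dir_head, dtype=float)
    return np.asarray(head_pos, dtype=float), direction / np.linalg.norm(direction)

def pick_object(origin, direction, objects):
    """Select the closest object whose bounding sphere the gaze ray hits."""
    best_name, best_t = None, np.inf
    for name, (center, radius) in objects.items():
        oc = np.asarray(center, dtype=float) - origin
        t = oc @ direction                      # closest approach along the ray
        if t < 0:
            continue                            # object lies behind the viewer
        d2 = oc @ oc - t * t                    # squared distance ray-to-center
        if d2 <= radius ** 2 and t < best_t:
            best_name, best_t = name, t
    return best_name

# Hypothetical scene: two objects in front of the user.
objects = {"red_cube": ((0.0, 1.2, -2.0), 0.15), "blue_ball": ((0.4, 1.0, -1.5), 0.10)}
origin, direction = gaze_ray_world(
    head_pos=(0.0, 1.6, 0.0),
    head_rot=np.eye(3),                         # identity: head facing straight ahead
    gaze_dir_head=(0.0, -0.2, -1.0),            # gazing slightly downward, toward the cube
)
print(pick_object(origin, direction, objects))  # -> "red_cube"
```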
Classic 3D Interaction Techniques Revisited
Student Project 2009/2010
This video demonstrates classic 3D user interaction techniques for navigation/travel, selection, and manipulation, applied to a virtual supermarket scenario. The implementations, a user study, and the video were created by our student group "Interaction in Virtual Reality" in fall/winter 2009/2010 for a video submission to the Grand Prize contest of the 3D User Interfaces conference in 2010. Implemented techniques are: Ray Casting, Path Drawing, World in Miniature, Walking in Place, Image Plane, Lean-Based Velocity, and Grabbing in the Air.
- Renner, P., Dankert, T., Schneider, D., Mattar, N., & Pfeiffer, T. (2010). Navigating and Selecting in the Virtual Supermarket: Review and Update of Classic Interaction Techniques. Virtuelle und Erweiterte Realität: 7. Workshop der GI-Fachgruppe VR/AR (pp. 71 - 82). Stuttgart: Workshop der GI-Fachgruppe VR/AR.
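As an illustration of one of the listed techniques, the sketch below shows a minimal lean-based velocity mapping: the horizontal offset of the tracked head from a calibrated neutral position is turned into a travel velocity, with a small dead zone so that standing still does not move the viewpoint. The gain, dead-zone size, and coordinate convention are made-up values, not those used in the student project.

```python
# Illustrative sketch of lean-based velocity travel: the user's horizontal
# head offset from a calibrated neutral position is mapped to a travel
# velocity; small leans inside a dead zone are ignored. Thresholds are
# hypothetical.
import numpy as np

DEAD_ZONE_M = 0.05      # ignore leans smaller than 5 cm
GAIN = 4.0              # metres per second of travel per metre of lean

def lean_velocity(head_pos, neutral_pos):
    """Map the horizontal lean offset to a travel velocity vector (m/s)."""
    offset = np.asarray(head_pos, float) - np.asarray(neutral_pos, float)
    offset[1] = 0.0                      # ignore vertical head movement
    magnitude = np.linalg.norm(offset)
    if magnitude < DEAD_ZONE_M:
        return np.zeros(3)
    # Scale only the part of the lean that exceeds the dead zone.
    return GAIN * (magnitude - DEAD_ZONE_M) * offset / magnitude

# Leaning 15 cm forward (negative z) yields 0.4 m/s of forward travel.
print(lean_velocity(head_pos=(0.0, 1.7, -0.15), neutral_pos=(0.0, 1.7, 0.0)))
```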
SONAR - Social Networks in Virtual Reality
Student Project 2009
This video presents work of a student project on the creation of interactive visualization tools for the exploration of social networks. As an example, a 3D graph-based browser called SONAR for the music network Last.FM has been created.
- Bluhm, A., Eickmeyer, J., Feith, T., Mattar, N., & Pfeiffer, T. (2009). Exploration von sozialen Netzwerken im 3D Raum am Beispiel von SONAR für Last.fm. Virtuelle und Erweiterte Realität: 6. Workshop der GI-Fachgruppe VR/AR (pp. 269 - 280). Aachen: Shaker Verlag.
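To give an idea of the kind of layout such a 3D graph browser needs, the following sketch runs a very basic force-directed relaxation on a tiny, made-up artist-similarity graph; it is only an illustration, not SONAR's actual algorithm or Last.FM data.

```python
# Illustrative sketch of a 3D force-directed layout for a small
# artist-similarity graph: spring forces along similarity edges, repulsion
# between all nodes. Artists and edges are made-up sample data.
import numpy as np

rng = np.random.default_rng(42)
artists = ["Artist A", "Artist B", "Artist C", "Artist D"]
edges = [(0, 1), (1, 2), (0, 3)]            # pairs of similar artists

pos = rng.normal(size=(len(artists), 3))    # random initial 3D positions

for _ in range(200):                        # simple iterative relaxation
    forces = np.zeros_like(pos)
    # Repulsion between every pair of nodes keeps the graph spread out.
    for i in range(len(artists)):
        delta = pos[i] - pos                # vectors from all nodes to node i
        dist = np.linalg.norm(delta, axis=1, keepdims=True) + 1e-6
        forces[i] += (delta / dist**3).sum(axis=0) * 0.1
    # Spring attraction pulls similar artists towards each other.
    for a, b in edges:
        delta = pos[b] - pos[a]
        forces[a] += 0.05 * delta
        forces[b] -= 0.05 * delta
    pos += forces                           # one relaxation step

for name, p in zip(artists, pos):
    print(f"{name}: {np.round(p, 2)}")
```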
Interactive Social Displays
PASION - Psychologically Augmented Social Interaction Over Networks
EU Project, 2006 - 2010
This prototype of Interactive Social Displays (ISDs) emphasizes the design of the communication environment and the interaction. Using gesture and speech, the user can interact with the ISDs in the virtual environment and establish connections to communication partners. The ISDs can be arranged in space to suit individual needs.
For communication, the main channels audio and video are already supported. Also available in the prototype are first mock-ups (non-functional versions) of different visualization techniques, e.g., bars for presenting ordinal-scaled sources. As a special highlight, a first version of the avatar Pasqual is embedded in the prototype. Pasqual displays the system's estimation of the socio-emotional communication situation.
- Pfeiffer, T., & Latoschik, M. E. (2007). Interactive Social Displays. In IPT-EGVE 2007, Virtual Environments 2007, Short Papers and Posters (pp. 41-42). Eurographics Association.
Older Demos
The virtual lab entrance in the VR-Lab. This is how it started; it has changed a lot in recent years.
The walk through the virtual lab ...
... and out of the window.
Now approaching the speedbike.
Finally arriving at the workplace. Some bars are connected using gesture and speech ...
The SGIM project (1996-1999) investigated gesture- and speech-based human-computer interaction using large-screen displays.
This work is now used in several new projects where gesture and speech interaction is required ...
The "Articulated Communicator" is a virtual humanoid agent designed to produce complex multimodal utterances (comprising speech, gesture, speech animation, and facial expression) from symbolic specifications of their surface form.
Our Text-to-Speech system builds on and extends the capabilities of txt2pho
and MBROLA: It controls prosodic parameters like speech rate and intonation.
Furthermore, it facilitates contrastive stress and offers mechanisms to
synchronize pitch peaks in the speech output with external events.
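As a rough illustration of this kind of prosody control, the sketch below rewrites MBROLA's .pho input (one phoneme per line: SAMPA symbol, duration in ms, optional pairs of position-in-percent and pitch-in-Hz) to change the speech rate and to place a pitch peak on a chosen phoneme. It is only a sketch of the idea, not the actual txt2pho extension.

```python
# Minimal sketch of prosody control over MBROLA .pho input: rescale durations
# to change speech rate and raise the pitch at one phoneme to place a stress
# peak. This is an illustration, not the group's actual implementation.
def adjust_prosody(pho_lines, rate=1.0, peak_index=None, peak_hz=None):
    out = []
    for i, line in enumerate(pho_lines):
        parts = line.split()
        phoneme, duration = parts[0], float(parts[1])
        pitch = [float(x) for x in parts[2:]]          # (percent, Hz) pairs
        duration /= rate                               # rate > 1 speaks faster
        if i == peak_index and peak_hz is not None:
            pitch = [50, peak_hz]                      # pitch peak mid-phoneme
        fields = [phoneme, f"{duration:.0f}"] + [f"{p:.0f}" for p in pitch]
        out.append(" ".join(fields))
    return out

# Hypothetical fragment of "Hallo ... Max" with a peak on the vowel of "Max".
pho = ["h 60", "a 90 50 120", "l 55", "o: 110 80 115",
       "m 70", "a 100 50 118", "k 80", "s 90"]
for line in adjust_prosody(pho, rate=1.2, peak_index=5, peak_hz=160):
    print(line)
```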
Examples (German; wav-files):
"Hello, I'm Max, what can I do for you?"
Max, the Multimodal Assembly eXpert, can demonstrate to the user the assembly of complex aggregates.
The arm sensor has been displaced; some calibration is needed to adjust the system. We can use our speech input for calibration.
Knowledge-based assembly simulation in the Virtual Constructor: when port knowledge is activated (indicated by highlighting), parts snap together according to legal mating conditions.
From the AkuVis project: a moment during interaction with a changeable landscape illustrating the noise conditions in a city district of Bielefeld ...
Join a camera flight through the virtual VIENA environment.
Instructing the VIENA system (1996) by speech and gesture.
Virtual RoboCup provides a real-time 3D visualization tool for RoboCup simulation league soccer games.
Ipke Wachsmuth, Bernhard Jung, Marc Latoschik, Thies Pfeiffer; August 21, 2000 (last updated November 16, 2011)