AG Wissensbasierte Systeme, Technische Fakultät

Gaze-based Interaction

3D Attention Volumes / Binocular Gaze Recordings

3D Attention Volumes can be used to visualize attention in 3D space. The target object(s) can then be inspected in immersive virtual reality from any perspective. Instead of a 2D heatmap, 3D volume renderings are used for the presentation. They look like a standard heatmap when projected onto a 2D screen, as is the case in the video below. However, they are real 3D objects that are best viewed in immersive virtual reality.
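
To give an idea of how such a volume can be built up from gaze data, the following is a minimal illustrative sketch (not the published implementation): 3D points of regard are splatted into a voxel grid with a Gaussian kernel, and the grid can then be fed to a standard volume renderer. Grid resolution, scene extent, and the kernel width are assumptions made for the example.

```python
import numpy as np

def attention_volume(points, weights, grid_min, grid_max, resolution=64, sigma=0.05):
    """Splat weighted 3D points of regard into a voxel grid.

    points  : (N, 3) array of 3D fixation positions in scene coordinates
    weights : (N,) array, e.g. fixation durations in seconds
    """
    # Voxel centre coordinates along each axis
    axes = [np.linspace(grid_min[d], grid_max[d], resolution) for d in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    volume = np.zeros((resolution, resolution, resolution))

    for p, w in zip(points, weights):
        # Gaussian splat: accumulated attention falls off with distance from the fixation
        d2 = (X - p[0]) ** 2 + (Y - p[1]) ** 2 + (Z - p[2]) ** 2
        volume += w * np.exp(-d2 / (2.0 * sigma ** 2))

    return volume / volume.max()  # normalise for the renderer's transfer function

# Example: two fixations of 0.3 s and 0.6 s inside a 1 m^3 working volume
vol = attention_volume(np.array([[0.10, 0.20, 0.30], [0.15, 0.20, 0.30]]),
                       np.array([0.3, 0.6]),
                       grid_min=(0, 0, 0), grid_max=(1, 1, 1))
```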

The video shows 3D Attention Volumes (Pfeiffer, 2011; Pfeiffer, 2012) that have been generated from the data of up to ten participants. The participants looked at a sequence of 10 objects from a larger object assembly of Baufix toys. The 3D point of regard of each fixation was then estimated based on binocular eye tracking with our technology (Pfeiffer, Latoschik, & Wachsmuth, 2009).
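
As background, one common geometric way to obtain a 3D point of regard from binocular data is to approximately intersect the left and right gaze rays by taking the midpoint of their closest points. The sketch below illustrates only this general geometry; it is not the specific estimation method of the cited paper.

```python
import numpy as np

def point_of_regard(o_l, d_l, o_r, d_r):
    """o_l, o_r: eye positions; d_l, d_r: gaze directions (3D vectors)."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w0 = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # rays (almost) parallel: no stable vergence point
        return None
    s = (b * e - c * d) / denom    # parameter along the left ray
    t = (a * e - b * d) / denom    # parameter along the right ray
    p_l = o_l + s * d_l            # closest point on the left gaze ray
    p_r = o_r + t * d_r            # closest point on the right gaze ray
    return 0.5 * (p_l + p_r)       # midpoint as the estimated 3D point of regard
```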

Monocular Gaze-based Interaction

I compiled the following video after my paper presentation at the 2008 Workshop on AR and VR of the Gesellschaft für Informatik (Pfeiffer, 2008).

Gaze-based Interaction with the Virtual Agent Max - 2007

This early video documents a basic gaze interaction with the virtual agent Max, who detects the user's point of regard and reacts by gazing at the same target object. This could be called "shared attention", but it is not "joint attention", as Max does not consider any intention beyond "always look where the user is looking". This changed in our later work on joint attention (e.g. Pfeiffer-Leßmann, Pfeiffer, & Wachsmuth, 2012).
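
In pseudocode, this "shared attention" behaviour amounts to little more than mirroring the user's gaze. The sketch below is a hypothetical illustration; the scene model and the set_gaze_target() call are placeholders, not the actual Max architecture.

```python
import numpy as np

def shared_attention_step(user_point_of_regard, scene_objects, agent):
    """Pick the scene object closest to the user's 3D point of regard and look at it."""
    target = min(scene_objects,
                 key=lambda obj: np.linalg.norm(obj.position - user_point_of_regard))
    agent.set_gaze_target(target)  # no intention modelling: always follow the user's gaze
```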

Gaze-based Interaction: Calibration Process Explained - 2007

The following video from 2007 shows our first steps towards gaze-based interaction in virtual reality. The video gives a detailed description of the calibration procedure.

At that time, we used the Arrington Research PC-60 SceneCamera Eye-Tracking system combined with motion tracking from Advanced Real-Time Tracking GmbH, Germany. The virtual reality framework used was SGI OpenGL Performer.
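
To sketch the kind of calibration such a setup requires (not the exact procedure shown in the video): the user fixates a set of known targets, a least-squares polynomial maps the 2D eye-camera coordinates to a gaze direction in the head frame, and the tracked head pose turns that direction into a world-space gaze ray. The function names and the quadratic feature model below are assumptions made for this example.

```python
import numpy as np

def poly_features(xy):
    """Quadratic feature expansion of 2D eye-camera coordinates."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_calibration(eye_xy, target_dirs_head):
    """Least-squares fit from 2D eye-tracker samples to unit gaze directions (head frame)."""
    A = poly_features(eye_xy)                                 # (N, 6) design matrix
    W, *_ = np.linalg.lstsq(A, target_dirs_head, rcond=None)  # (6, 3) coefficient matrix
    return W

def gaze_ray_world(eye_xy, W, head_rotation, head_position):
    """Map a raw eye sample to a world-space gaze ray using the tracked head pose."""
    d_head = poly_features(eye_xy.reshape(1, 2)) @ W   # gaze direction in the head frame
    d_head = d_head.ravel() / np.linalg.norm(d_head)
    d_world = head_rotation @ d_head                   # rotate into the world frame
    return head_position, d_world                      # ray origin and direction
```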


References