Gaze-based Interaction

The human eyes are our primary sensory organs. While they reveal a lot about our environment to us, they also tell the environment a story about us: what we are interested in, whether we are listening to someone or not, and how we interpret what we hear.

What makes the eyes especially interesting is that they provide a means to link internal processes (what we are thinking about) to the people and objects in our environment, in both explicit and implicit ways. This makes following eye gaze a very interesting approach for attentive systems and human-computer interaction. Who would not love to have someone who reads every wish from their eyes?

In my research, I am exploring ways to overcome the restrictions of laboratory research and desktop-based setups in order to make use of eye gaze in mobile scenarios.

Related projects:

EyeSee3D - A method for studying visual attention in 3D settings

Moving from desktop-based setups for eye-tracking studies towards more realistic setups tremendously increases the effort needed for data recording and annotation. Often, a single minute of gaze-video recording requires an hour of manual annotation.

Building upon our technologies for real-time gaze-based interaction in virtual reality, we created an easy-to-use method that supports studies in restricted, but still realistic, 3D environments (Pfeiffer & Renner, 2014).
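
To give a flavour of what such a method automates, here is a minimal sketch, not the actual EyeSee3D implementation: if the pose of the scene camera is known for each frame (e.g. from fiducial markers) and the stimuli are approximated by labeled proxy geometry, each fixation can be annotated automatically by casting the gaze ray against that geometry. The AOIS list, the function annotate_fixation and all object names are illustrative assumptions.

```python
# Sketch (not the actual EyeSee3D code): automatic fixation annotation by
# casting the gaze ray against labeled proxy geometry of the stimuli.
# Assumes the scene-camera pose per frame is known, e.g. from fiducial markers.
import numpy as np

# Hypothetical areas of interest, approximated by bounding spheres (label, center, radius)
AOIS = [
    ("red_cube",   np.array([0.10, 0.00, 0.50]), 0.04),
    ("blue_screw", np.array([0.25, 0.05, 0.45]), 0.03),
]

def annotate_fixation(cam_pos, cam_rot, gaze_dir_cam):
    """Return the label of the AOI hit by the gaze ray, or None.

    cam_pos: 3-vector, scene-camera position in world coordinates
    cam_rot: 3x3 rotation matrix, camera-to-world
    gaze_dir_cam: gaze direction in camera coordinates
    """
    origin = np.asarray(cam_pos, dtype=float)
    direction = cam_rot @ np.asarray(gaze_dir_cam, dtype=float)
    direction /= np.linalg.norm(direction)

    best_label, best_t = None, np.inf
    for label, center, radius in AOIS:
        # Ray-sphere intersection: |origin + t*direction - center| = radius
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            continue
        t = (-b - np.sqrt(disc)) / 2.0   # nearest intersection along the ray
        if 0 < t < best_t:
            best_label, best_t = label, t
    return best_label
```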

3D Attention Volumes / Binocular Gaze Recordings

3D Attention Volumes can be used to visualize attention in 3D space. The target object(s) can be inspected in immersive virtual reality from any perspective. Instead of a 2D heatmap, 3D volume renderings are used for presentation. They look like a standard heatmap when projected onto a 2D screen, as is the case in the video below. However, they are real 3D objects that are best viewed in immersive virtual reality.
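
As a rough illustration of the underlying data structure, the following sketch accumulates 3D points of regard into a voxel grid by spreading each fixation with a Gaussian kernel weighted by its duration; such a grid could then be passed to a volume renderer. The grid extent, resolution and kernel width (GRID_MIN, GRID_MAX, RES, SIGMA) are arbitrary assumptions, not values from our studies.

```python
# Sketch: accumulating 3D points of regard into a voxel grid, the kind of
# data a 3D attention volume could be rendered from.
import numpy as np

GRID_MIN = np.array([-0.5, -0.5, 0.0])   # assumed workspace bounds in meters
GRID_MAX = np.array([0.5, 0.5, 1.0])
RES = 64                                  # voxels per axis
SIGMA = 0.02                              # Gaussian kernel width in meters

def attention_volume(points, durations):
    """points: (N, 3) fixation positions; durations: (N,) fixation durations."""
    axes = [np.linspace(GRID_MIN[i], GRID_MAX[i], RES) for i in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    volume = np.zeros((RES, RES, RES))
    for p, w in zip(points, durations):
        # Spread each fixation as a duration-weighted Gaussian around its position
        d2 = (X - p[0]) ** 2 + (Y - p[1]) ** 2 + (Z - p[2]) ** 2
        volume += w * np.exp(-d2 / (2.0 * SIGMA ** 2))
    peak = volume.max()
    return volume / peak if peak > 0 else volume   # normalize for transfer-function mapping
```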

The video shows 3D Attention Volumes (Pfeiffer, 2011; Pfeiffer, 2012) that were generated from the data of up to ten participants. The participants had looked at a sequence of 10 objects from a larger object assembly of Baufix toys. The 3D point of regard of each fixation was then estimated based on binocular eye tracking with our technology (Pfeiffer, Latoschik, & Wachsmuth, 2008).
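
One common way to obtain such a 3D point of regard from binocular data, shown here only as a generic sketch and not as the exact procedure from the cited paper, is to use the vergence of the two gaze rays: take the midpoint of the shortest segment connecting the left and right eye's rays.

```python
# Sketch: estimating a 3D point of regard from two gaze rays (one per eye)
# as the midpoint of their shortest connecting segment.
import numpy as np

def point_of_regard(o_l, d_l, o_r, d_r):
    """o_l/o_r: eye positions; d_l/d_r: gaze directions (world coordinates)."""
    o_l, o_r = np.asarray(o_l, float), np.asarray(o_r, float)
    d_l, d_r = np.asarray(d_l, float), np.asarray(d_r, float)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:            # rays (almost) parallel: no stable vergence point
        return None
    s = (b * e - c * d) / denom      # parameter along the left gaze ray
    t = (a * e - b * d) / denom      # parameter along the right gaze ray
    p_l = o_l + s * d_l
    p_r = o_r + t * d_r
    return 0.5 * (p_l + p_r)         # midpoint of the shortest connecting segment
```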

Monocular Gaze-based Interaction

I compiled the following video after my paper presentation at the 2008 Workshop on AR and VR of the Gesellschaft für Informatik (Pfeiffer, 2008).

Gaze-based Interaction with the Virtual Agent Max - 2007

This early video documents a basic gaze interaction with the virtual agent Max, who detects the user's point of regard and reacts by gazing at the same target object. This could be called "shared attention", but it is not "joint attention", as Max does not consider any intentions beyond "always look where the user is looking". This changed in our later work on joint attention (e.g., Pfeiffer-Leßmann, Pfeiffer, & Wachsmuth, 2012).
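
Conceptually, the behaviour shown in the video can be sketched as follows (my illustration, not the original agent code): snap the user's estimated point of regard to the nearest known object and let the agent orient its gaze towards it. The OBJECTS table, the distance threshold and the helper functions are made up for illustration.

```python
# Sketch of the "shared attention" behaviour: gaze at whatever object
# is closest to the user's estimated point of regard.
import numpy as np

OBJECTS = {  # hypothetical object positions in world coordinates
    "red_bar":   np.array([0.2, 1.1, 0.6]),
    "blue_cube": np.array([-0.1, 1.0, 0.7]),
}

def shared_attention_target(point_of_regard, max_dist=0.15):
    """Return (name, position) of the object closest to the point of regard, or None."""
    name, pos = min(OBJECTS.items(),
                    key=lambda kv: np.linalg.norm(kv[1] - point_of_regard))
    if np.linalg.norm(pos - point_of_regard) > max_dist:
        return None  # user is not looking at any known object
    return name, pos

def agent_gaze_direction(eye_pos, target_pos):
    """Unit vector from the agent's eyes towards the shared target."""
    v = target_pos - eye_pos
    return v / np.linalg.norm(v)
```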

Gaze-based Interaction: Calibration Process Explained - 2007

The following video from 2007 shows our first steps towards gaze-based interaction in virtual reality. The video gives a detailed description of the calibration procedure.

At that time, we used the Arrington Research PC-60 SceneCamera Eye-Tracking system combined with motion tracking from Advanced Real-Time Tracking GmbH, Germany. The virtual reality framework used was SGI OpenGL Performer.
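
As a generic illustration of what such a calibration estimates, and not the exact procedure shown in the video, the sketch below fits a second-order polynomial mapping from 2D pupil coordinates to known gaze angles by least squares. The function names and the choice of polynomial terms are my assumptions.

```python
# Sketch: calibration as a least-squares fit from 2D pupil coordinates
# (eye camera) to gaze angles of known fixation targets.
import numpy as np

def _design_matrix(pupil_xy):
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    # Second-order polynomial terms: 1, x, y, xy, x^2, y^2
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_calibration(pupil_xy, gaze_angles):
    """pupil_xy: (N, 2) pupil positions; gaze_angles: (N, 2) known (yaw, pitch)."""
    A = _design_matrix(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, gaze_angles, rcond=None)
    return coeffs  # shape (6, 2)

def apply_calibration(coeffs, pupil_xy):
    """Map new pupil positions to predicted (yaw, pitch) gaze angles."""
    return _design_matrix(pupil_xy) @ coeffs
```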


References