Summary
This paper proposes a scenario for the analysis of interaction mediated
by augmented reality (AR). Using this scenario, (a) we can easily track
all objects in space and over time and record who handles each object
at which moment,
(b) we can easily adjust the displayed augmented objects, (c) we
can add meta-information next to the objects in the users' visual
field, and (d) we can explore truly multimodal interactions, such
as allowing users to perceive the soundscape at any location on the
plan by interactively mixing the acoustic contributions of the
individual exhibits. Most importantly, we will be able to control
which information is perceived by which participant, for example by
presenting different features of the same object to the two
participants (small vs. big, silent vs. noisy, etc.). In this way we
can induce potentially problematic situations that allow us to
investigate how participants deal with such non-obvious
misinterpretations of the setting.
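
To make point (a) concrete, the following is a minimal sketch of the kind of handling log such a scenario could record; the event structure and field names are hypothetical and not part of the system described here.

```python
from dataclasses import dataclass

# Hypothetical handling log for point (a): each event records which
# participant handled which object at which moment, and where it was.
@dataclass
class HandlingEvent:
    timestamp: float                 # seconds since session start
    participant: str                 # who handles the object
    object_id: str                   # which augmented object
    position: tuple[float, float]    # (x, y) location on the plan

log: list[HandlingEvent] = []

def record_handling(t: float, participant: str, object_id: str,
                    position: tuple[float, float]) -> None:
    """Append one handling event to the session log."""
    log.append(HandlingEvent(t, participant, object_id, position))

# Example: two participants handle the same object in sequence.
record_handling(12.4, "A", "fountain", (1.0, 0.5))
record_handling(15.1, "B", "fountain", (2.0, 0.5))

# Query: who handled the fountain, and when?
for ev in log:
    if ev.object_id == "fountain":
        print(f"{ev.timestamp:6.1f}s  {ev.participant} -> {ev.object_id}")
```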
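As an illustration of point (d), the sketch below mixes the exhibits' acoustic contributions at a listener position. It assumes a simple inverse-distance attenuation model and hypothetical exhibit data; the actual acoustic model used in the scenario is not specified in this paper.

```python
import math
from dataclasses import dataclass

# Hypothetical exhibit model: each exhibit contributes one mono audio
# signal; positions are 2D coordinates on the plan.
@dataclass
class Exhibit:
    name: str
    position: tuple[float, float]   # (x, y) on the plan
    signal: list[float]             # mono samples in [-1.0, 1.0]

def mix_soundscape(exhibits: list[Exhibit],
                   listener: tuple[float, float]) -> list[float]:
    """Mix the exhibits' signals as heard at `listener`.

    Uses a simple 1/(1+d) inverse-distance attenuation as a stand-in
    for whatever acoustic model the AR system actually employs.
    """
    n = max(len(e.signal) for e in exhibits)
    mix = [0.0] * n
    for e in exhibits:
        d = math.dist(e.position, listener)
        gain = 1.0 / (1.0 + d)          # attenuate with distance
        for i, s in enumerate(e.signal):
            mix[i] += gain * s
    # Normalize to avoid clipping when several exhibits overlap.
    peak = max(1.0, max(abs(s) for s in mix))
    return [s / peak for s in mix]

# Example: two exhibits, with the listener closer to the first one.
fountain = Exhibit("fountain", (0.0, 0.0), [0.5, -0.5, 0.5, -0.5])
engine   = Exhibit("engine",   (4.0, 3.0), [0.9,  0.9, 0.9,  0.9])
print(mix_soundscape([fountain, engine], listener=(1.0, 0.0)))
```

Moving the listener position and re-mixing is what makes the exploration interactive: the perceived soundscape changes continuously with the user's location on the plan.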
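The per-participant control described in the last sentence could be realized with a simple mapping from participant to object-feature overrides. The sketch below is one possible realization under that assumption; names such as `ViewConfig` are illustrative and not part of the paper's system.

```python
from dataclasses import dataclass, field

# Hypothetical per-participant view configuration: the same augmented
# object can be rendered with different features for each participant
# (small vs. big, silent vs. noisy) to induce asymmetric perceptions.
@dataclass
class ViewConfig:
    # object id -> feature overrides for this participant
    overrides: dict[str, dict[str, str]] = field(default_factory=dict)

def render_features(base: dict[str, str],
                    config: ViewConfig,
                    object_id: str) -> dict[str, str]:
    """Return the features of `object_id` as this participant sees them."""
    return {**base, **config.overrides.get(object_id, {})}

# Example: the same machine appears big and noisy to participant A
# but small and silent to participant B.
base_features = {"size": "medium", "sound": "humming"}
participant_a = ViewConfig({"machine-1": {"size": "big", "sound": "noisy"}})
participant_b = ViewConfig({"machine-1": {"size": "small", "sound": "silent"}})

print(render_features(base_features, participant_a, "machine-1"))
print(render_features(base_features, participant_b, "machine-1"))
```

Because the overrides are never visible to the other participant, the resulting misinterpretations of the setting are non-obvious by construction, which is exactly the situation the scenario is designed to induce.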