- Florian Grond, Till Bovermann, Thomas Hermann (2011)
A SuperCollider Class for Vowel Synthesis and its Use for Sonification.
Proceedings of the 17th International Conference on Auditory Display
(ICAD-2011), ICAD, Budapest, Hungary
Abstract: In this paper, we present building blocks for the synthesis of vowel
sounds in the programming language SuperCollider. We discuss the
advantages of vowel-based synthesis and review where it has
already been used in sonification. Then we describe in detail
the main class Vowel which handles all parameters related to the
formants that are typically used for vowel synthesis. We further
introduce two auxiliary pseudo-UGens, which were designed to simplify
the handling of the Vowel class: Formants for additive synthesis,
and BPFStack for subtractive synthesis. This is followed by code
examples for sound synthesis, which incorporate the introduced classes
and make use of their specific features. We finally present sample
applications, showing how the introduced classes can be used in sonification.
- Florian Grond, Oliver Kramer, Thomas Hermann (2011)
Interactive Sonification Monitoring in Evolutionary Optimization.
Proceedings of the 17th International Conference on Auditory Display
(ICAD-2011), ICAD, Budapest, Hungary
Abstract: This case study introduces interactive sonification to evolutionary
strategies (ES) for global optimization. We briefly describe the
specific strengths of sonification as a tool for monitoring, the
emerging trend of interactive sonification, and what it can add to
the field of evolutionary computation. Then we outline the background
of ES as optimization heuristics, briefly explain the algorithmic
procedure of ES and discuss the need to intervene during optimization
runs and the current shortcomings in appropriate user feedback. This
motivates the development of an auditory closed loop setup that brings
the expertise of interactive sonification to the field of monitoring
ES algorithms. Further, we present considerations for the sound design
and the detailed mapping of parameters from the ES to sound properties.
Finally, we discuss the various implemented modes of interaction
and their significance for the optimization through ES.
- Eckard Riedenklau, Thomas Hermann, Helge Ritter (2011)
Saving and Restoring Mechanisms for Tangible User Interfaces through
Tangible Active Objects.
In J.A. Jacko (Ed.) Human-Computer Interaction, Part II, p. 110--118, Springer, Heidelberg
Abstract: In this paper we present a proof of concept for saving and restoring
mechanisms for Tangible User Interfaces (TUIs). We describe our actuated
Tangible Active Objects (TAOs) and explain the design which allows
equal user access to a dial-based fully tangible actuated menu metaphor.
We present a new application extending an existing TUI for interactive
sonification of process data with saving and restoring mechanisms
and we outline another application proposal for family therapists.
- Eckard Riedenklau, Dimitri Petker, Thomas Hermann,
Helge Ritter (2011)
Embodied Social Networking with Gesture-enabled Tangible Active Objects.
In Rückert, Ulrich and Sitte, Joaquin and Werner, Felix (Ed.) Proceedings of AMiRE 2011
Abstract: In this paper we present a novel approach for Tangible User Interfaces
(TUIs) which incorporate small mobile platforms to actuate Tangible
User Interface Objects (TUIOs). We show an application of Tangible
Active Objects (TAOs) in combination with gestural interaction for
social networking. TUIOs represent messages while gestural input
using these objects is used for triggering actions with these messages.
We conducted a case study and present the results. We demonstrate
interaction with a working social networking client.
- Christian Leichsenring, René Tünnermann,
Thomas Hermann (2011)
feelabuzz -- Direct Tactile Communication with Mobile Phones.
International Journal of Mobile Human Computer Interaction.
, vol. 3, no. 2, p. 65--74, (to appear)
Abstract: Touch can create a feeling of intimacy and connectedness even when
transmitted over a distance. In this work we propose feelabuzz, a
system to transmit movements of one mobile phone to the vibration
actuator of another one. This is done in a direct, non-abstract way,
without the use of pattern recognition techniques in order not to
destroy the feel for the other. This means that the same channel
enables direct communication, i.e. what another person explicitly
signals, as well as implicit context communication, i.e. the complex
movements any activity consists of or even those that are produced
by the environment. We explore the potential of this approach, present
the mapping we use and discuss further possible development beyond
the existing prototype to enable a large-scale user study.
- Angelika Dierker, Karola Pitsch, Thomas Hermann (2011)
An augmented-reality-based scenario for the collaborative construction
of an interactive museum.
Bielefeld University
Abstract: This paper proposes a scenario for the analysis of interaction mediated
by AR. Using this scenario (a) we can easily track all objects in
space and over time and record who handles each at which moment,
(b) we can easily adjust the displayed augmented objects, (c) we
can add meta-information next to the objects in the users' visual
field, and (d) we can explore truly multimodal interactions, such
as allowing users to perceive the soundscape at any location on the
plan by interactively mixing the acoustic contributions that the
exhibits make. Most importantly, we will be able to control which
information will be perceived by which participant, for example,
presenting different features of the object to the two participants
(small vs. big, silent vs. noisy, etc.), so that we are able to induce
potentially problematic situations which will allow us to investigate
how participants deal with such non-obvious misinterpretation of
the setting.
- Christian Peters, Thomas Hermann, Sven Wachsmuth (2011)
Prototyping of an automatic prompting system for a residential home.
Proceedings of the RESNA/ICTA, (to appear in 06/2011)
Abstract: In this paper, we describe the prototyping process of an automatic
prompting system in the healthcare domain. The application area of
our system will be a residential home where persons with various
cognitive disabilities live together. We describe the specific requirements
for an automatic prompting system in a residential home where persons
are directly assisted by a caregiver in the execution of Activities
of Daily Living (ADLs). We observe real-world trials using qualitative
data analysis techniques and conduct a first study using a Wizard-of-Oz
(WOz) methodology where we describe the user's reaction behavior
when faced with system prompts instead of direct caregiver prompts.
- Florian Grond, Stefan Janssen, Stefanie Schirmer,
Thomas Hermann (2010)
Browsing RNA Structures by Interactive Sonification.
In Bresin, Roberto (Ed.) Proceedings of the 3rd Interactive Sonification Workshop (ISon 2010), ISon, KTH, Stockholm
Abstract: This paper presents a new interactive sonification technique to browse
ribonucleic acid secondary structures using a combined auditory and
visual interface. Despite the existence of several optimization criteria
for searching an optimal structure within the numerous possible structures
of an RNA sequence, it is still necessary to manually inspect a huge
number of the resulting structures in detail. We describe briefly
the background of RNA structure representation and typical search
scenarios. Then we discuss the audio-visual browser in detail, with
a special focus on the sound design, data-to-sound mapping and interactive
aspects. The sonifications we propose turn RNA structures into auditory
timbre gestalts according to the shape classes they belong to. Various
research-relevant phenomena become clearly audible such as transitions
among shape classes and different free energies of selected folds.
Both can be simultaneously assessed in an interface that allows for
an integrated audio-visual perception.
- Florian Grond, Trixi Drossard, Thomas Hermann (2010)
SonicFunction: Experiments with a function browser for the visually
impaired.
Proceedings of the 16th International Conference on Auditory Display, ICAD, Washington D.C.
Abstract: We present in this paper SonicFunction, a prototype for the interactive
sonification of mathematical functions. Since many approaches to
represent mathematical functions as auditory graphs exist already,
we introduce in SonicFunction three new aspects related to sound
design. Firstly, SonicFunction features a hybrid approach of discrete
and continuous sonification of the function values f(x). Secondly,
the sonification includes information about the derivative of the
function. Thirdly, SonicFunction includes information about the sign
of the function value f(x) within the timbre of the sonification
and leaves the auditory graph context free for an acoustic representation
of the bounding box. We discuss SonicFunction within the context
of existing function sonifications, and report the results from an
evaluation of the program with 14 partially sighted and blind students.
- Roberto Bresin, Thomas Hermann, Andy Hunt (2010)
ISon 2010 - Interactive Sonification Workshop - Human Interaction
with Auditory Displays - Proceedings, 7th April 2010, KTH, Stockholm,
Sweden.
In Roberto Bresin, Thomas Hermann, Andy Hunt (Ed.) , ISon, Interactive Sonification Community, KTH, Stockholm
- Florian Grond, Alexandre Bouenard, Thomas Hermann,
Marcelo M. Wanderley (2010)
Virtual Auditory Myography of Timpani-playing Avatars.
Proceedings of the 13th Int. Conference on Digital Audio Effects (DAFx-10), Graz, Austria
Abstract: Music performance is highly related to instrumentalists' movements
and one of the biggest challenges is the identification and understanding
of gesture strategies according to the plethora of musical nuances
(dynamics, tempo, etc.) available to performers. During these past
few years, a novel approach has been elaborated, consisting of studying
movement strategies through auditory rendering. In this paper, we
focus on the auditory analysis of timpani (percussion) gestures.
We present a novel interface combining movement simulation and sonification
as a means of enhancing the auditory analysis of timpani gestures.
We further report the results from an evaluation of this interface,
where we study the contributions of sonification to the multimodal
display.
- Jessica Hummel, Thomas Hermann, Christopher Frauenberger,
Tony Stockman (2010)
Interactive sonification of German wheel sports.
In Bresin, Roberto (Ed.) Proceedings of the 3rd Interactive Sonification Workshop (ISon 2010), ISon, KTH, Stockholm
Abstract: This paper presents the design, implementation and evaluation of a
sonification system, which gives real-time auditory feedback to a
performer who is carrying out moves on sports equipment called a
German wheel. The system, which is implemented in the programming
language SuperCollider, uses a magnetometer to collect data about
the motion of the wheel. Parameters of this motion are then transformed
into sound to give real-time feedback to the performer. The aim of
the project is to examine whether such additional convergent audio
feedback can lead to an improved performance of wheel moves. The
design and implementation of four different sonification approaches
is discussed, two of which were chosen to conduct an example study.
The study was carried out with a group of seven novices and four
experts and shows a significant positive influence of one of the
sonifications on the performance of the given task by the experts.
- Bastian Kriesten, René Tünnermann, Christian Mertes,
Thomas Hermann (2010)
Controlling Ambient Information Flow Between Smart Objects with a
Mobile Mixed-Reality Interface.
MobileHCI 2010: Proceedings of the 12th International Conference
on Human-Computer Interaction with Mobile Devices and Services, p. 405--406, ACM, New York, NY, USA
Abstract: In this work we present a method to intuitively issue control over
devices in smart environments, to display data that smart objects
and sensors provide, and to create and manipulate flows of information
in smart environments. This makes it easy to customize smart environments
by linking arbitrary data sources to various display modalities on
the fly. Touchscreen smartphones -- as readily available multi-purpose
devices -- are used to overlay real objects with virtual controls.
We evaluated this system with a first qualitative user study.
- Alberto de Campo, Julian Rohrhuber, Till Bovermann,
Christopher Frauenberger (2010)
Sonification and Auditory Display in SuperCollider.
The SuperCollider Book, MIT Press
, Wilson, S. and Cottle, D. and Collins, N., (in press)
- Alberto de Campo, Julian Rohrhuber, Till Bovermann (2010)
Object Modeling.
The SuperCollider Book, MIT Press
, Wilson, S. and Cottle, D. and Collins, N., (in press)
- Thomas Hermann (2010)
Sonic Interaction Design: New Applications and Challenges for Interactive
Sonification.
Proceedings of the 13th Int. Conf. on Digital Audio Effects (DAFx-10),
Graz, Austria, p. 1-2, DAFx, IEM, Graz, (abstract for keynote presentation)
- Till Bovermann, René Tünnermann, Thomas Hermann (2010)
Auditory Augmentation.
International Journal on Ambient Computing and Intelligence (IJACI).
, vol. 2, no. 2, p. 27--41
Abstract: With auditory augmentation, the authors describe building blocks supporting
the design of data representation tools, which unobtrusively alter
the auditory characteristics of structure-borne sounds. The system
enriches the structure-borne sound of objects with a sonification
of (near) real time data streams. The object's auditory gestalt is
shaped by data-driven parameters, creating a subtle display for ambient
data streams. Auditory augmentation can easily be overlaid onto existing
sounds, and does not change prominent auditory features of the augmented
objects like the sound's timing or its level. In a peripheral monitoring
situation, the data stay out of the users' attention, which thereby
remains free to focus on a primary task. However, any characteristic
sound change will catch the users' attention. This article describes
the principles of auditory augmentation, gives an introduction to
the Reim Software Toolbox, and presents the first observations made
in a preliminary long-term user study.
- Lukas Kolbe, René Tünnermann, Thomas Hermann (2010)
Growing Neural Gas Sonification Model for Interactive Surfaces.
In Bresin, Roberto (Ed.) Proceedings of the 3rd Interactive Sonification Workshop (ISon 2010), ISon, KTH, Stockholm
Abstract: In this paper we present our reimplementation of the Growing Neural
Gas Sonification for interactive surfaces such as our t-Desk or touch-capable
tablet PCs. Growing Neural Gas (GNG) is an unsupervised learning algorithm
that incrementally 'grows' a network graph into data distributions,
revealing the data distributions' intrinsic dimensionality and aspects
of its structure. The GNG Sonification (GNGS) provides a method to
interactively explore the GNG during the growing process, utilizing
Model-Based Sonification (MBS) to convey audible information about
the data distribution in addition to the visualization. The goal
of our reimplementation was to be able to rapidly grasp the structure
of the sonified and visualized data, to give the user the ability
to conduct direct A/B comparisons between different (or similar)
clusters within a data distribution. The direct bi-manual interaction
as well as a simplified full-screen touchable user interface helps
to focus on the exploration of the GNG rather than the interaction
itself. We present and discuss different interaction metaphors for
the excitation of the model setup in this MBS.
- Christian Peters, Thomas Hermann, Sven Wachsmuth (2010)
Task Assistance for Persons with cognitive Disabilities (TAPeD).
Abstract: TAPeD is a project at the Center of Excellence Cognitive Interaction
Technology (CITEC) at Bielefeld University, with the aim to develop
an automatic prompting system in the healthcare domain. In comparison
to systems applied in individual user's homes to prolong the user's
independence in everyday life [2], we aim to develop a system for
a residential home where persons with different cognitive disabilities
live together and share the same system. We cooperate with Haus Bersaba,
a residential home belonging to v. Bodelschwinghsche Stiftungen Bethel
which is a care facility in Bielefeld, Germany. Our user group has
problems fulfilling Activities of Daily Living (ADLs), in particular
in brushing teeth. We describe the progress of development towards
an automatic prompting system assisting in brushing teeth and give
an overview of our project from a machine learning perspective.
- Tobias Grosshauser, Thomas Hermann (2010)
Wearable Setup for Gesture And Motion Based Closed Loop Audio-Haptic
Interaction.
ICAD 2010: Proceedings of the 16th International Conference on Auditory
Display, Washington, USA
Abstract: The wearable sensor and feedback system presented in this paper is
a type of audio-haptic display which contains on board sensors, embedded
sound synthesis, external sensors, and on the feedback side a loudspeaker
and several vibrating motors. The so-called ``embedded sonification''
here is an onboard IC with implemented sound synthesis.
It is adjusted directly by the user and/or controlled in real-time
by the sensors, which are on the board or fixed on the human body
and connected to the board via cable or radio frequency transmission.
Direct audio out and tactile feedback closes the loop between the
wearable board and the user. In many situations, this setup can serve
as a complement to visual output, e.g. exploring data in 3D-space
or learning motion and gestures in dance, sports or outdoor and every-day
activities. A new metaphor for interactive acoustical augmentation
is introduced, the so-called ``audio loupe'': the sonification
of minimal movements or state changes, which can sometimes hardly
be perceived visually or corporeally. These are for
example small jitters or deviations of predefined ideal gestures
or movements. Our system is easy to use, it even allows operation
without an external computer. We demonstrate and outline the benefits
of our wearable interactive setup in highly skilled motion learning
scenarios in dance and sports.
- Jan Anlauff, Tobias Grosshauser, Thomas Hermann (2010)
tacTiles: a low-cost modular tactile sensing system for floor interactions.
Proceedings of the 6th Nordic Conference on Human-Computer Interaction:
Extending Boundaries, p. 591--594, ACM, New York, NY, USA
Abstract: In this paper, we present a prototype of a spatially resolved force
sensing floor surface. The force sensors are based on conductive
paper and grouped into modules called tacTiles. Due to the cheap
and widely available materials used for tacTiles, the approach is
suitable as a low-cost alternative for spatially resolved tactile
sensing. The necessary techniques are shared as an open source and
open hardware project to provide affordable tactile sensing for
smart environments. As an interactive application of these tacTiles,
we present a step-direction detection algorithm used to count
steps into and out of a room.
- Tobias Grosshauser, Thomas Hermann (2010)
Multimodal Closed-loop Human Machine Interaction.
In Bresin, Roberto (Ed.) Proceedings of the 3rd Interactive Sonification Workshop (ISon 2010), ISon, KTH, Stockholm, Sweden
Abstract: Every-day and highly skilled activities have in common that, for
correct execution, a movement must be carried out with high accuracy
in response to perceptions as they occur in real time during the
performance. While in real-world situations an interplay of senses
generates the stimuli from which we learn to coordinate and refine
our actions, for some tasks certain modalities may be missing, such
as sound in drawing tasks and dance, or vision in drilling tasks,
or may be faint, for instance the deviation from a linear bowing
movement on musical string instruments. Here we outline the supportive
function of feedback in different scenarios, in the sense of
man-computer symbiosis in every-day and highly skilled learning tasks.
As Licklider already stated in 1960 regarding intellectual operations,
operations are performed more effectively and learning processes
are shortened by useful tool-integrated interfaces and audio or
audio-haptic feedback. Sonification and vibrotactile feedback in
embedded and wearable devices open new possibilities in the field
of multimodal human-computer interaction. Especially in every-day
and working situations, where traditional interfaces like monitors
and keyboards would disturb the established work-flow, embedded
wearable devices provide many solutions: on the one hand new input
possibilities with sensors such as distance, pressure, acceleration
and gyroscope sensors, video cameras and microphones, and on the
other hand ubiquitous and adaptable output possibilities like
loudspeakers and vibrotactile actuators.
- Tobias Grosshauser, Thomas Hermann (2010)
Wearable Multi-Modal Sensor System for Embedded Audio-Haptic Feedback.
In Bresin, Roberto (Ed.) Proceedings of the 3rd Interactive Sonification Workshop (ISon 2010), ISon, KTH, Stockholm, Sweden
Abstract: Wearable sensing technologies give the user the possibility of on-site
and real-time measurements, analysis and feedback of movements and
postures in everyday behaviour, learning and training situations.
There are many established motion capturing technologies to support
complex movements, but few mobile and wearable systems. One of the
key disadvantages of the existing systems is their high complexity;
for instance, they demand high-speed cameras, multi-channel audio
systems, or fixation to a special room or laboratory. As a low-cost
and flexible solution, this paper presents an easily relocatable,
flexible sensor-based system for motion capturing and multi-modal
real-time feedback. Our system is easy to use, it even allows operation
without an external computer. Here we introduce our wearable sensor-setup
and outline its applications and benefits in typical everyday training
situations in combination with multi-modal feedback and embedded
systems including closed loop interactive sonification.
- Nils-Christian Wöhler, Ulf Großekathöfer,
Angelika Dierker, Marc Hanheide, Stefan Kopp, Thomas Hermann (2010)
A calibration-free head gesture recognition system with online capability.
International Conference on Pattern Recognition, p. 3814-3817, IEEE Computer Society, IEEE Computer Society, Istanbul, Turkey
Abstract: In this paper, we present a calibration-free head gesture recognition
system using a motion-sensor-based approach. For data acquisition
we conducted a comprehensive study with 10 subjects. We analyzed
the resulting head movement data with regard to separability and
transferability to new subjects. Ordered means models (OMMs) were
used for classification since they provide an easy-to-use, fast,
and stable approach to machine learning of time series. As a result,
we achieved classification rates of 85-95% for nodding, head shaking
and tilting head gestures and good transferability. Finally, we show
first promising attempts towards online recognition.
- Jan Anlauff, Erik Weitnauer, Alexander Lehnhardt,
Stefanie Schirmer, Sebastian Zehe, Keywan Tonekaboni,
Nick Thomas, Fusco (2010)
A method for outdoor skateboarding video games.
Proceedings of the 7th International Conference on Advances in Computer
Entertainment Technology, p. 40--44, ACM, New York, NY, USA
Abstract: Video games aimed at motivating players to exercise have gained popularity
in the last years, but most games are still designed for indoor scenarios.
In this paper, we present a novel game concept: a mobile video game
that is controlled by performing tricks on a real skateboard. A small
sensor module is integrated unobtrusively and well-protected into
the skateboard. The sensor readings are transmitted via Bluetooth
to the game which runs on a smartphone. We present the Tilt'n'Roll
game as prototype application using this platform. It uses data mining
techniques to detect skating tricks in the raw sensor data and awards
points based on their difficulty. Audio feedback allows the skater
to track his/her progress while skating.
- Tobias Grosshauser, Ulf Großekathöfer, Thomas Hermann (2010)
New Sensors and Pattern Recognition Techniques for String Instruments.
International Conference on New Interfaces for Musical Expression,
NIME2010, Sydney, Australia
Abstract: Pressure, motion, and gesture are important parameters in musical
instrument playing. Pressure sensing allows to interpret complex
hidden forces, which appear during playing a musical instrument.
The combination of our new sensor setup with pattern recognition
techniques like the lately developed ordered means models allows
fast and precise recognition of highly skilled playing techniques.
This includes left and right hand analysis as well as a combination
of both. In this paper we show bow position recognition for string
instruments by means of support vector regression machines on the
right hand finger pressure, as well as bowing recognition and inaccurate
playing detection with ordered means models. We also introduce a
new left hand and chin pressure sensing method for coordination and
position change analysis. Our methods in combination with our audio,
video, and gesture recording software can be used for teaching and
exercising. Especially studies of complex movements and finger force
distribution changes can benefit from such an approach. Practical
applications include the recognition of inaccuracy, cramping, or
malposition, and, last but not least, the development of augmented
instruments and new playing techniques.
- Thomas Hermann, Gerold Baier (2010)
Sonic Triptychon of the Human Brain.
Proc. of the 16th Int. Conference on Auditory Display (ICAD-2010),
June 9-15, 2010, Washington, D.C, USA, International Community on Auditory Display, International Community on Auditory Display, Washington, D.C., USA
Abstract: This paper describes the motivation, data and sonification technique
for three sound examples on the auditory display of human brain activity,
selected and formatted as contribution to the ICAD aural submission
category. The human brain generates complex temporal and spatial
signal patterns whose dynamics correspond to normal (e.g. cognitive)
processes as well as abnormal conditions, i.e. disease. Our sonification
technique, Event-based Sonification, allows us to render multi-channel
representations of the multivariate data so that temporal, spectral
and spatial patterns can be discerned. Being a scientific approach,
the sonifications are reproducible, systematic and the mapping is
made transparent. Control parameters help to increase the saliency
of specific features in the auditory display. This is demonstrated
using data with sleep spindles, a photic response and epileptic discharges.
Since all 'sonic pictures' are rendered with the same technique,
a variety of dynamic phenomena related to different brain states
are demonstrated as auditory Gestalts. Sonification of the EEG offers
a meaningful complement of the prevailing visual displays.
- Thomas Hermann (2009)
Sonification and Sonic Interaction Design for the Broadband Society.
In Sapio, B. and Haddon, L. and Mante-Meijer, E. and Fortunati, L. and
Turk, T. and Loos, E. (Ed.) The good, the bad and the challenging: The user and the future of
information and communication technologies: Conference proceedings, p. 887-892, COST 298, ABS-Center, d.o.o. Koper, Slovenia
Abstract: Imagine a huge dataset of a public census - or medical data - or the
worldwide Internet traffic. What do you hear? Obviously, we are not
very familiar with the use of our listening capabilities when investigating
large amounts of information! The typical data analyst is indeed
confronted with large visual displays showing visualizations in front
of a (concerning information value) rather silent computer. This
is interesting since sound plays a highly important role in most
real-world contexts, e.g. to monitor complex processes, to analyze
complex systems, to selectively direct our attention, to allow us
to gain insight into systems beyond the surface. Sonification, the
auditory display of information makes arbitrary data accessible by
our listening skills and addresses complementary modes of understanding
which put dynamic instead of static features to the fore, which
are well connected to interaction. Sonification can play an important
role for the broadband society, e.g. to increase awareness of network
behavior, our virtual neighborhood, to feel connected without being
bound to a visual display. The paper will introduce, demonstrate
and discuss the utility of sonification in sonic interaction design
ranging from monitoring and analysis tasks and interactive biofeedback
to interfaces for the visually impaired, introduce the concept of Sonic Overloading,
and furthermore relate sonification to expected trends in the broadband
society.
- Thomas Hermann, Gerold Baier (2009)
Interpreting EEG by voice: Vocal EEG Sonification.
In Rinnot, Michal (Ed.) Proceedings of the 1st Sketching Sonic Interaction Design Workshop, COST IC0601 SID, HIT, Holon, Israel
Abstract: Vocal EEG sonifications are presented as a method for complex time
series sonification that is particularly tailored to address both
humans' articulatory and auditory competences in order to improve
the understanding and communication of the underlying data. In Vocal
EEG sonification, the EEG data is represented in real-time by synthesized
sound in a systematic, reproducible, task-centered way using an articulatory
sound synthesizer capable of creating vowel transitions. Patterns
such as 'EEG at rest', epileptic EEG, sleep EEG, etc. are thereby
turned into characteristically different sonic gestalts that human
listeners can discern from listening to the 'data babble'. In this
paper, we emphasize the aspect of designing sonification particularly
for the purpose of enhancing communication about sonic patterns,
and we conduct a preliminary study about the human skill to use the
own vocal tract to mimic or imitate patterns heard in the sonification.
Our study will show to what degree humans are capable of recognizing
signal types correctly, both from the original sonifications and
from vocal imitations performed by trained sonification users and
naïve users without extended previous experience in sonification.
- Tobias Großhauser, Bettina Bläsing, Thomas Hermann,
Jan Anlauff (2009)
Wearable Motion and Posture Sensing with Closed Loop Acoustic and
Haptic Feedback.
- Angelika Dierker, Christian Mertes, Thomas Hermann,
Marc Hanheide, Gerhard Sagerer (2009)
Mediated attention with multimodal augmented reality.
ICMI-MLMI '09: Proceedings of the 2009 international conference on
Multimodal interfaces, p. 245--252, ACM, New York, NY, USA
Abstract: We present an AR system to support collaborative tasks by facilitating
joint attention. The users are assisted by information about their
interaction partner's field of view both visually and acoustically.
In a study, the audiovisual improvements are compared with an AR
system without these support mechanisms with regard to the participants'
reaction times and error rates. The participants performed a simple
object-choice task we call the gaze game to ensure controlled experimental
conditions. Additionally, we asked the subjects to fill in a questionnaire
to gain subjective feedback from them. We were able to show an improvement
for both dependent variables as well as positive feedback for the
visual augmentation in the questionnaire.
- Jessica Hummel (2009)
Interactive Sonification for monitoring and skill learning.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Angelika Dierker, Till Bovermann, Marc Hanheide,
Thomas Hermann, Gerhard Sagerer (2009)
A Multimodal Augmented Reality System for Alignment Research.
Proceedings of the 13th International Conference on Human-Computer
Interaction, Springer, New York, Heidelberg, San Diego, USA
Abstract: In this paper we present the Augmented Reality-based Interception
Interface (ARbInI), a multimodal Augmented Reality (AR) system to
investigate effects and structures of human-human interaction in
collaborative tasks. It is introduced as a novel methodology to monitor,
record, and simultaneously manipulate multimodal perception and to
measure so-called alignment signals. The linguistic term 'alignment'
here refers to automatic and unconscious processes during the communication
of interactants. As a consequence of these processes, the two interactants'
communication structures come to conform to each other (for example,
they use the same terms, gestures, etc.). Alignment
is a debated model for communication in the community [1] and here
we strive for providing novel means to study it by instrumenting
the interaction channels of human interactants [2]. AR allows for
a very close coupling between the user and a technical system. ARbInI
adds to this a mechanism that decouples two interacting users from
the outside world and from each other via cameras and head-mounted
displays (resp. microphone and headphones). This allows the exact
perceived auditory and visual stimuli to be monitored and recorded.
It furthermore grants full control over the subjects' audiovisual input:
for instance, AR allows the manipulation of virtual objects shown
on top of physical objects by displaying a different size, shape
or level of detail to the participants. Thus, by means of this
perceptual decoupling, ARbInI has full control over the subjects'
visual and auditory input. With the help of visual and auditory
AR techniques it is possible to manipulate the stimuli, for example
by selectively changing the size, color or shape of virtual objects
that are augmented in the views of cooperating users. Using ARToolKit
markers attached to physical objects we make full use of augmented
reality to realize physical interaction with virtual objects in our
studies. In this paper, we also present VideoDB as a scenario. It
focuses on the task of collaboratively organizing and arranging multimodal
video snippets. We discuss its potential regarding the recording
and investigation of alignment.
- Tobias Grosshauser, Thomas Hermann (2009)
The Sonified Music Stand - An Interactive Sonification System for
Musicians.
In Fabien Gouyon, Álvaro Barbosa, Xavier Serra (Ed.) SMC 2009 - Proceedings of the 6th Sound and Music Computing Conference, p. 233-238, Casa da Musica, Porto, Portugal
Abstract: This paper presents the sonified music stand, a novel interface that
provides real-time feedback for professional musicians in auditory
form by means of interactive sonification. Sonifications
convey information by using non-speech sound and are a promising
means for musicians since they (a) leave the visual sense unoccupied,
(b) address the sense of hearing which is already used and in this
way further trained, (c) allow feedback information to be rendered in
the same acoustic medium as the musical output, so that dependencies
between action and reaction can be better understood. This paper
presents a prototype system together with demonstrations of applications
that support violinists during musical instrument learning. For this,
a pair of portable active loudspeakers has been designed for the music
stand and a small motion sensor box has been developed to be attached
to the bow, hand or hand wrist. The data are sonified in real-time
according to different training objectives. We sketch several sonification
ideas with sound examples and give a qualitative description of using
the system.
- Florian Grond, Thomas Hermann, Vincent Verfaille,
Marcelo M. Wanderley (2009)
Methods for Effective Sonification of Clarinetists' Ancillary Gestures.
In Kopp, Stefan and Wachsmuth, Ipke (Ed.) Gesture in Embodied Communication and Human Computer Interfaces:
Proceedings of the 8th International Gesture Workshop, Springer Verlag, Berlin, Heidelberg
Abstract: We present the implementation of two different sonifications of ancillary
gestures from clarinetists. The sonifications are driven by data from
the clarinetist's posture, which is captured with a VICON motion tracking
system. Further, we develop a simple complementary visual display
with similar information content to match the sonification. The
effect of the sonifications with respect to movement perception
is studied in an experiment in which test subjects annotate the clarinetists'
performance represented by various combinations of the resulting
uni- and multimodal displays.
- Tobias Grosshauser, Thomas Hermann (2009)
Augmented Haptics - An Interactive Feedback System for Musicians.
In Altinsoy, M. Ercan and Merchel, Sebastian (Ed.) Haptic and Audio Interaction Design 4th International Conference,
HAID 2009 Dresden, Germany, September 10-11, 2009 Proceedings, p. 100-108, Springer, Berlin, Heidelberg
Abstract: This paper presents integrated vibrotactiles, a novel interface for
movement and posture tuition that provides real-time feedback in
tactile form by means of interactive haptics. Conveying information
neither acoustically nor visually, it is a promising feedback
means for movements in 3D space. In this paper we demonstrate
haptic augmentation in applications for musicians, since it (a)
doesn't affect the visual sense, which is occupied by reading music
and communication, (b) doesn't disturb in sound-sensitive situations
such as concerts, and (c) allows feedback information to be conveyed
in the same tactile medium as the output of the musical instrument,
so that an important feedback channel for musical instrument playing
is extended and supportively trained.
Even more, instructions from the teacher and the computer can be
transmitted directly and unobtrusively in this channel. This paper
presents a prototype system together with demonstrations of applications
that support violinists during musical instrument learning.
- Florian Grond, Theresa Schubert-Minski (2009)
Sonification, Scientific Method and Artistic Practice.
See this sound, Verlag der Buchhandlung Walter König, Köln
, Daniels, D and Naumann, S., p. 285-295
Abstract: Sonification refers first of all to the scientific method, developed
about twenty years ago, of using 'non-speech audio to convey information'.
Its point of departure is the fact that in many cases, the sense
of hearing has a high degree of potential to convey in a simple way
information that is complementary to the sense of sight. As an alternative
and supplement to visualization, sonification often facilitates the
understanding of time-based phenomena and structures. Today, the
areas of sonification applications are manifold, such as process
monitoring, data mining, explorative data analysis, and interface
design. Against this background, sonification is also increasingly
being used artistically as an esthetic concept and method. Although
the current relevance of sonification only emerged with the advent
of the polymorphic depiction of information in the context of digitalization,
acoustic means have been used for centuries to explain various concepts.
- Thomas Hermann (2009)
Sonifikation hochdimensionaler Daten - Funktionaler Klang zum Erkenntnisgewinn.
Funktionale Klänge, transcript Verlag
, Spehr, Georg (Ed.), p. 67-85, no. 2, Sound Studies
- Gerold Baier, Thomas Hermann (2009)
Sonification: Listen to Brain Activity.
Music that works - Contributions of biology, neurophysiology, psychology,
sociology, medicine and musicology, Springer
, Haas, Roland and Brandes, Vera (Ed.)
- Till Bovermann (2009)
Tangible Auditory Interfaces: Combining Auditory Displays and Tangible
Interfaces.
Faculty of Technology, Bielefeld University
- Christian Peters, Sven Wachsmuth, Jesse Hoey (2009)
Learning to recognise behaviours of persons with dementia using multiple
cues in an HMM-based approach.
PETRA '09: Proceedings of the 2nd International Conference on PErvasive
Technologies Related to Assistive Environments, p. 1--8, ACM, New York, NY, USA
Abstract: This paper presents a learning technique for visual event recognition
in a system that assists persons with dementia during handwashing.
The challenge is that persons with dementia present a wide variety
of behaviors during a single task, typically changing their behaviours
drastically from day to day. Any attempt at modeling this variety
requires a large set of features, image regions, and temporal dynamics.
In this paper, we approach this challenge by supervised learning
of generative models from manually segmented and labelled video sequences.
Our method uses a generic set of appearance-based colour, motion
and texture features, over a static set of regions. We then present
two HMM architectures that incorporate multiple image regions by
either fusing on a feature-level, or later in the recognition process
using a mixture-of-experts approach, in which a gating HMM is applied
for the dynamic selection between specialised expert HMMs. Our models
are trained on a clinical database of videos, and we compare the
HMM approaches with a nearest neighbours scheme. Our results confirm
the challenge we present, and indicate that our generative modelling
techniques are suitable for inclusion in future prototypes of the
hand washing assistant.
- Eckard Riedenklau (2009)
TAOs - Tangible Active Objects for Table-top Interaction.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
Abstract: Personal Computers have arrived in almost every part of our lives to
help us do work faster and better. They are used for writing texts, creating
music or drawings, or simply organizing and guiding everyday tasks.
Nearly all these tasks are done with computers which are operated
using screen, keyboard and mouse even if their use may be sometimes
cumbersome or even unsuitable for some tasks. Human Computer Interaction
(HCI) aims to analyze the way people use computers and suggest new
methods for interaction. One area of this research field is called
'Tangible Interaction'. Tangible Interaction tries to use everyday
objects as tangible representations for digital data. It is hoped
that by pulling the data into the tangible real world (in contrast
to the virtual world) they can be made more vivid and graspable and
thereby better understandable. These real-world representations are
called Tangible User Interface Objects (TUIOs), and the systems they
are used in are called Tangible User Interfaces (TUIs).
The main goal of this work is to create active objects as a new kind
of TUIO. These active objects extend the concept of TUIOs by the
possibility to be not only manipulated by the user but also by the
computer. Many different ways of manipulation are possible, e.g.
adding LEDs or liquid crystal displays, sound output or tactile and
haptic feedback with vibration, etc. One of the most challenging
manipulation possibilities is computer controlled planar movement
for instance on a desk surface, which will be developed in this work.
The developed objects are constructed as modularly as possible to be
open for future extensions and modifications. A software structure
for the coordination of the objects is implemented. Furthermore, some
applications are shown to give examples of the potential of this novel
technique.
This publication is part of the Tangible Active Objects (TAOs) research
project
- Marije Baalman, Stefan Kersten, Till Bovermann (2009)
Ins and Outs --- SuperCollider and external devices.
The SuperCollider Book, MIT Press
, Wilson, S. and Cottle, D. and Collins, N., (in press)
- Christian Mertes, Angelika Dierker, Thomas Hermann,
Marc Hanheide, Gerhard Sagerer (2009)
Enhancing Human Cooperation with Multimodal Augmented Reality.
Proceedings of the 13th International Conference on Human-Computer
Interaction, p. 447--451, Springer, Heidelberg, Germany
Abstract: Humans naturally use an impressive variety of ways to communicate.
In this work, we investigate the possibilities of complementing these
natural communication channels with artificial ones. For this, augmented
reality is used as a technique to add synthetic visual and auditory
stimuli to people's perception. A system for the mutual display of
the gaze direction of two interactants is presented and its acceptance
is shown through a study. Finally, future possibilities of promoting
this novel concept of artificial communication channels are explored.
- Jan Stefan Anlauff (2009)
Development and Application of Tactile Surfaces for Smart Environments.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
Abstract: In this thesis, we present a design proposal and a prototype for a
flexible, tile-based tactile sensing system for smart environments,
called tacTiles. We begin with a comparison of existing approaches
for tactile sensing in and on floor surfaces. As all investigated
approaches failed to meet our requirements in some regards, we develop
a novel design for a spatially resolved tactile sensing tile. We
describe our method of arranging and connecting these sensors, and
the bus system we developed to interconnect the individual tiles.
As our system is built around custom force sensors made of conductive
paper, we wished to evaluate the performance of these sensors and
compare them to similar commercial ones. We describe the construction
of a reference actuation setup and the subsequent evaluation. Furthermore,
we use the setup to evaluate a PCB-based variant for future, more
modular designs. Finally, we present two sonifications as
example applications for the spatial pressure data of the tacTiles
system.
- Jan Anlauff, Thomas Hermann, Tobias Grosshauser,
Jeremy Cooperstock (2009)
Modular tacTiles for Sonic Interactions with Smart Environments.
In Altinsoy, Ercan and Jekosch, Ute and Brewster, Stephen (Ed.) Proceedings of the 4th International Workshop on Haptic and Auditory
Design (HAID 09), p. 26-27, Springer, Dresden, Germany
Abstract: In this paper we present a prototype of a spatially resolved tactile
surface that can serve as an `artificial skin' to extend the perceptual
capabilities of furniture or other artifacts in environments such
as the floor or wall. The surface material is based on paper and
flexible circuit board material, so that tacTiles -- elements of
this type -- can be applied to various contexts. As an interactive
application, we present a real-time sonification of interaction patterns
with the tacTile doormat, allowing for instance to perceive how someone
leaves or enters a room, or how the body weight is distributed.
- René Tünnermann (2009)
Modular Multi-Touch Interface for Direct Closed-Loop Interactions.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Alexander Oehl (2009)
Visual Person localization in an intelligent Room - ViPeR.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- René Tünnermann, Thomas Hermann (2009)
Multi-Touch Interactions for Model-Based Sonification.
In Aramaki, Mitsuko and Kronland-Martinet, Richard and Ystad, Sølvi
and Jensen, Kristoffer (Ed.) Proceedings of the 15th International Conference on Auditory Display
(ICAD2009), Re:New -- Digital Arts Forum, Copenhagen, Denmark
Abstract: This paper presents novel interaction modes for Model-Based Sonification
(MBS) via a multi-touch interface. We first lay out details about
the constructed multi-touch surface. This is followed by a description
of the Data Sonogram Sonification Model and how it is implemented
using the system. Modifications from the original sonification model
such as the limited space scans are described and discussed with
sonification examples. Videos showing examples of interaction are
provided for various data sets. Beyond Data Sonograms, the presented
system provides a basis for the implementation of known and novel
sonification models. We discuss the available interaction modes with
multi-touch surfaces and how these interactions can be profitably
used to control spatial and non-spatial sonification models.
- René Tünnermann, Lukas Kolbe, Till Bovermann,
Thomas Hermann (2009)
Surface Interactions for Interactive Sonification.
ICAD'09/CMMR'09 post proceedings edition, Re:New -- Digital Arts Forum
, Aramaki, Mitsuko and Kronland-Martinet, Richard and Ystad, Sølvi
and Jensen, Kristoffer (Ed.)
Abstract: This paper presents novel interaction modes for Model-Based Sonification
(MBS) via interactive surfaces. We first discuss possible interactions
for MBS on a multi-touch surface. This is followed by a description
of the Data Sonogram Sonification and the Growing Neural Gas Sonification
Model and their implementation for the multi-touch interface. Modifications
from the original sonification models such as the limited space scans
are described and discussed with sonification examples. Videos showing
interaction examples are provided. Furthermore, the presented system
provides a basis for the implementation of known and novel sonification
models. We discuss the available interaction modes with multi-touch
surfaces and how these interactions can be profitably used to control
spatial and non-spatial sonification models.
- Bastian Kriesten (2009)
Smartphone-basierte Mixed-Reality Interaktionen für Intelligente
Umgebungen.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Tobit Kollenberg, Alexander Neumann, Dorothe Schneider,
Tews, Thomas Hermann, Helge Ritter,
Angelika Dierker, Hendrik Koesling (2009)
Visual search in the (un)real world: How head-mounted displays affect
eye movements, head movements and target detection.
Proceedings of the Eye Tracking Research & Applications Symposium
(ETRA), ACM, Austin, TX, USA, (accepted)
Abstract: Head-mounted displays (HMDs) that use a see-through display method
allow for superimposing computer-generated images upon a real-world
view. Such devices, however, normally restrict the user's field of
view. Furthermore, low display resolution and display curvature are
suspected to make foveal as well as peripheral vision more difficult
and may thus affect visual processing. In order to evaluate this
assumption, we compared performance and eye-movement patterns in
a visual search paradigm under different viewing conditions: participants
either wore an HMD, had their field of view restricted by blinders
or could avail themselves of an unrestricted field of view (normal
viewing). From the head and eye-movement recordings we calculated
the contribution of eye rotation to lateral shifts of attention.
Results show that wearing an HMD leads to less eye rotation and requires
more head movements than in the blinders condition and during normal
viewing.
- Thomas Hermann, Gerold Baier, Ulrich Stephani,
Helge Ritter (2008)
Kernel Regression Mapping for Vocal EEG Sonification.
In Susini, P. and Warusfel, O. (Eds.) Proceedings of the International Conference on Auditory Display (ICAD
2008), ICAD, Paris, France
Abstract: This paper introduces kernel regression mapping sonification (KRMS)
for optimized mappings between data features and the parameter space
of Parameter Mapping Sonification. Kernel regression allows data spaces
to be mapped to high-dimensional parameter spaces such that specific
locations in data space with pre-determined extent are represented
by selected acoustic parameter vectors. Thereby, specifically chosen
correlated settings of parameters may be selected to create perceptual
fingerprints, such as a particular timbre or vowel. With KRMS, the
perceptual fingerprints become clearly audible and separable. Furthermore,
kernel regression defines meaningful interpolations for any point
in between. We present and discuss the basic approach exemplified
by our previously introduced vocal EEG sonification, report new sonifications
and generalize the approach towards automatic parameter mapping generators
using unsupervised learning approaches.
- Thomas Hermann (2008)
Auditory Interfaces: Chapter 5.2.
Auditory Interfaces: Technology of the Interface, Morgan Kaufmann
, Kortum, Philip, p. 156--160, The Morgan Kaufmann series in interactive technologies
- Till Bovermann, Julian Rohrhuber, Helge Ritter (2008)
Durcheinander -- Understanding Clustering via Interactive Sonification.
Proceedings of the 14th International Conference on Auditory Display, Paris, France
Abstract: In this paper we present Durcheinander, an interactive system designed
to support learners in understanding agglomerative clustering (AG)
processes. This datamining approach is mainly used to unveil structural
relations in high-dimensional datasets. Durcheinander operates on
a two-dimensional dataset comprising the positions of 20 or more
objects located on a tabletop surface. Changing the objects' location
causes the system to compute an updated dendrogram that represents
the actual dataset. If the dendrogram's configuration differs substantially
from its predecessor, a spatial sound event that represents the internal
structure is rendered in realtime and played back to the users by
a multi-channel audio system surrounding the table. Replaying that
configuration is possible by knocking on the surface.
- Felix Hagemann (2008)
Auditive Interaktionsschleifen am Beispiel sonifikationsbasierter
Bewegungsspiele.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Thomas Hermann, Gerold Baier (2008)
Die Sonifikation des menschlichen EEG.
Katalog: Wien Modern 2008, Verein Wien modern
, Berno Odo Polzer (Ed.), p. 25-27
- Katharina Vogt, Till Bovermann, Philipp Huber,
Alberto de Campo (2008)
Exploration of 4D-Data Spaces. Sonification in Lattice QCD.
Proceedings of the 14th International Conference on Auditory Display, Paris, France
Abstract: We describe a pilot study on the sonification of data from lattice
Quantum Chromodynamics, a branch of computational physics. This data
is basically 4-dimensional and discretized on a lattice. The implementation
allows interactive navigation through the data via different interfaces.
Two different sonification schemes have been applied, giving information
on small regions of the lattice. In real data sets we searched for
structures that are hidden by quantum fluctuations. First results
have been achieved with simplified data sets.
- Thomas Hermann, Risto Koiva (2008)
tacTiles for Ambient Intelligence and Interactive Sonification.
In Pirhonen, Antti and Brewster, Stephen A. (Ed.) Haptic and Audio Interaction Design, Third International Workshop,
HAID 2008, Jyväskylä, Finland, September 15-16, 2008, Proceedings, p. 91-101, Springer, Berlin, Heidelberg
Abstract: In this paper we introduce tacTiles, a novel wireless modular tactile
sensitive surface element attached to a deformable textile, designed
as a lay-on for surfaces such as chairs, sofas, floor or other furniture.
tacTiles can be used as an interface for human-computer interaction
or ambient information systems. We give a full account of the hardware
and show applications that demonstrate real-time sonification for
process monitoring and biofeedback. Finally we sketch ideas for using
tacTiles paired with sonification for interaction games.
- Thomas Hermann (2008)
Daten hören - Sonifikation zur explorativen Datenanalyse.
Sound Studies: Traditionen - Methoden - Desiderate, transcript Verlag
, Schulze, Holger (Ed.), p. 209-228, no. 1, Sound Studies
Abstract: Imagine a large dataset, for example the data of a census or stock
market data. What do you hear? This question is unusual. Evidently,
accessing data through listening is not yet an everyday experience
for us. This essay listens for the reasons behind this; motivates
why using our sense of hearing to examine complex data makes a
great deal of sense; describes various methods by which data can
be rendered as sound; and shows how we can even interact with data
in order to make it sound.
- S. Camille Peres, Virginia Best, Derek Brock,
Christopher Frauenberger, Thomas Hermann, John G. Neuhoff,
Louise Valgerður Nickerson, Barbara Shinn-Cunningham, Tony Stockman (2008)
Auditory Interfaces, Chapter 5.
Auditory Interfaces, Morgan Kaufmann
, Kortum, Philip, p. 145-195, ed. 1
- Thomas Hermann (2008)
Taxonomy and Definitions for Sonification and Auditory Display.
In Susini, P. and Warusfel, O. (Eds.) Proc. 14th Int. Conf. Auditory Display (ICAD 2008), ICAD, Paris, France
Abstract: Sonification is still a young research field and many terms such as
sonification, auditory display, auralization, audification have been
used without a precise definition. Recent developments such as the
introduction of Model-based Sonification, the establishing of interactive
sonification and the increased interest in sonification from arts
have raised the issue of revisiting the definitions towards a clearer
terminology. This paper introduces a new definition for sonification
and auditory display that emphasizes necessary and sufficient conditions
for organized sound to be called sonification. It furthermore suggests
a taxonomy, and discusses the relation between visualization and
sonification. A hierarchy of closed-loop interactions is furthermore
introduced. This paper aims at initiating vivid discussions towards
the establishing of a deeper theory of sonification and auditory
display.
- Till Bovermann, Risto Koiva, Thomas Hermann,
Helge Ritter (2008)
TUImod: Modular Objects for Tangible User Interfaces.
Proceedings of the 2008 Conference on Pervasive Computing, Sydney, Australia
Abstract: This paper describes the design and construction of TUImod, a modular
system of basic elements generated by rapid-prototyping techniques
that can be combined in various ways into human-distinguishable and
computer-trackable physical objects with specific physical properties.
The described system is used in our tangible desk environment for
data exploration applications.
- Louise Valgerður Nickerson, Thomas Hermann (2008)
Interactive Sonification of Grid-based Games.
Proceedings of the Audio Mostly Conference, p. 27--34, Piteå, Sweden
Abstract: This paper presents novel designs for the sonification (auditory representation)
of data from grid-based games such as Connect Four, Sudoku and others,
motivated by the search for effective auditory representations that
are useful for visually impaired users as well as for supporting overviews
when the visual sense is already otherwise allocated. Grid-based
games are ideal for developing sonification strategies since they offer
the advantage of providing an excellent test environment to evaluate
the designs by measuring details of the interaction, learning, performance
of the users, etc. We present in detail two new playable sonification-based
audio games, and finally discuss how the approaches might generalise
to grid-based interactive exploration, e.g. for spreadsheet
data.
- Christian Mertes (2008)
Multimodal Augmented Reality to Enhance Human Communication.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
Abstract: Humans naturally use an impressive variety of ways to communicate.
In this work, we will investigate the possibilities of complementing
these natural communication channels with artificial ones. For this,
augmented reality is used as a technique to add synthetic visual
and auditory stimuli to people's perception. A system for the mutual
display of the gaze direction of two interactants is presented and
its acceptance is shown through a study. Finally, future possibilities
of promoting this novel concept of artificial communication channels
are explored.
- Jean R. Dawin, Danny Hartwig, Andreas Kudenko,
Eckard Riedenklau (2008)
TRecS: Ein tangibles, rekonfigurierbares System zur explorativen
Datenanalyse.
Bielefeld University, Bielefeld, Germany, (Supervisors: Till Bovermann and Thomas Hermann)
Abstract: Through the virtualization of information, people lose their feel
for the nature of this digital information. Hard disk space, for
example, is expressed merely as a number. Since the physical dimensions
of the disk do not change when data is stored on it, the free space
can only be determined from this virtual number. Digital information
is thus intangible and, by losing physical attributes through
digitization, becomes ever more abstract and harder to understand.
Money is another example. Many people prefer to pay with credit cards,
so-called 'plastic money'. They shop with them and may not even notice
that they are accumulating more and more debt. Using real paper money
and coins gives one a better understanding of the actual value of
money - also, and especially, through touching it.
Touching things in order to grasp them, i.e. to understand them, is
very important for a deep understanding of things. By making information
'graspable' (or at least experienceable and explorable), a real
connection between human and information is established. Through
touching and manipulating, it thus becomes possible to learn something
about the nature of things. This is the quintessence of Tangible
Computing.
Tangible Computing tries to mediate between the real, physical world
and the virtual, digital world, and to exploit the advantages of both
in order to make information more understandable.
Because of the intuitive usability that arises when using Tangible
Computing, it has already been employed in many application areas
and domains. Examples are sound-processing systems (a very prominent
area for Tangible Computing; the REACtable and AudioPad are just a
few examples), data exchange, workplace extensions, and everyday
interfaces such as the so-called "Marble Answering Machine".
In all implementations known to us, Tangible Computing is used for
one specific task and at least one specific application area. In this
work, we developed a novel strategy to make Tangible Computing usable
for as wide a range of application areas as possible.
As the domain for our system we chose tabular datasets. Data in a
database, or the result of a query to a relational database, is
normally organized in tabular form. Even time series are organized
tabularly by mapping one discrete point in time per row.
Quite generally, very high-dimensional, multivariate data can be
organized and processed in this way, so our system can handle (almost)
all kinds of data. Of course, there are also domains that cannot be
fitted into this representation. The domain of text is one example:
documents cannot be pressed into tabular form without losing or
altering information.
- Till Bovermann, Christof Elbrechter, Thomas Hermann,
Helge Ritter (2008)
AudioDB: Get in Touch with Sounds.
In Susini, P. and Warusfel, O. (Eds.) Proceedings of the 14th International Conference on Auditory Display
(ICAD 2008), ICAD, Paris, France
Abstract: Digital audio in its various appearances is ubiquitous in our everyday
life. Searching and sorting sounds collected in extensive databases,
e.g. sampling libraries for music production or seismographic
surveys, is difficult and often bound to the tight restrictions
of the standard human-computer interface technique of keyboard and
mouse. The common technique of tagging sounds and other media
files also has the drawback that it needs descriptive words, which
is a difficulty not to be underestimated for sounds. We therefore created
AudioDB, an intuitive human-computer interface to interactively explore
sounds by representing them as physical artifacts (grains) on a tabletop
surface. The system is capable of sonic sorting, grouping and selecting
of sounds represented as physical artifacts, and can therefore serve
as a basis for discussions of audio-related tasks in working teams.
AudioDB, however, is not a special solution to problems appearing
in one dedicated field of work, but is rather designed as an easy-to-use
multi-purpose tool for audio-based information. As a side effect,
AudioDB can be used for grounding work on how humans handle digital
information that is projected onto physical artifacts.
- Gerold Baier, Thomas Hermann (2008)
Temporal Perspective from Auditory Perception.
Simultaneity: Temporal Structures and Observer Perspectives, World Scientific
, Vrobel, Susie and Rössler, Otto E. and Marks-Tarlow, Terry (Ed.), p. 348--363
Abstract: Dynamically complex diseases with distributed and multi-scale interacting
physiological rhythms require a more refined temporal perspective
of the scientific observer than is currently provided by visual displays
of physiological data. We argue that sonification, the auditory inspection
of experimental data, provides a unique approach to the representation
of the temporal aspects of the data as it addresses the human sense
of listening. The ear's capacity to detect temporal patterns
of sameness and differences, of coincidence and coordination - widely
exploited in listening to music and spoken language - creates a new
temporal perspective in the scientific observer. We briefly describe
some examples of sonifications of biomedical data and discuss their
value in recovering the temporality of complex physiological processes.
Auditory Gestalt formation can be exploited for the classification
and differentiation of diseases. Finally, we stress the complementarity
of auditory and visual representations and argue for combined audio-visual
displays in order to adequately deal with complex phenomena, as in
the case of dynamical diseases.
- Thomas Hermann, John Williamson, Roderick Murray-Smith, Yon Visell, Eoin Brazil (2008)
Sonification for Sonic Interaction Design.
In Rocchesso, Davide (Ed.) Proceedings of the Conference on Human Factors in Computing Systems
(CHI) Workshop on Sonic Interaction Design (SID), CHI, Florence, Italy
Abstract: This paper advocates a closer connection between the emerging field
of sonic interaction design and that of sonification. We firstly
discuss the issue of information conveyance by sound in everyday
interactions, including HCI and product interaction design. Existing
sonification techniques are examined, to identify principles for
displaying information by sound during interaction, focusing particularly
on Model-Based Sonification. We present two implementations: the
Data Solids Sonification Model for exploratory data analysis, and
the Shoogle system for mobile phone interactions. Both exemplify
aspects of sonic interactions that connect well to the users' intuitions.
Finally, the Sonic Interaction Atlas is introduced, a prototype community
application that allows for the archival and organization of information
in existing sonic interaction design cases, and for the generation
of new scenarios during early-stage design research by aiding exploration
of the suitability of different sonic interaction models. It is hoped
that the Atlas may be useful for revealing the possibilities of physically-based
sonic interaction methods that may connect well to users' intuition
and innate capacities.
- Christof Elbrechter (2006)
Das TDI-Framework für dynamische Lernarchitekturen in intelligenten,
interaktiven Systemen.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Angelika Dierker (2006)
Sonifikationsbasierte Korrelationsanalyse gekoppelter FitzHugh-Nagumo-Systeme.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Gerold Baier, Thomas Hermann, Sven Sahle, Ulrich Stephani (2006)
Sonified Epileptic Rhythms.
In Stockman, Tony (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2006), p. 148--151, International Community for Auditory Display (ICAD), Department of Computer Science, Queen Mary, University of London, London, U.K.
Abstract: We describe techniques to sonify rhythmic activity of epileptic seizures
as measured by human EEG. Event-based mapping of parameters is found
to be informative in terms of auto- and cross-correlations of the
multivariate data. For the study, a group of patients with childhood
absence seizures was selected. We find consistent intra-patient conservation
of the rhythmic pattern as well as inter-patient variations, especially
in terms of cross-correlations. The sound synthesis is suitable for
online sonification. Thus, the application of the proposed sonification
in clinical monitoring is possible.
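The event-based parameter mapping described in this abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration (synthetic sine "rhythms", a fixed threshold detector, MIDI pitch per channel); it is not the paper's clinical pipeline or synthesis.

```python
"""Sketch of event-based parameter mapping: upward threshold crossings
in each channel become discrete sound events whose onset encodes the
crossing time and whose pitch encodes the channel. All signals and
mapping choices are illustrative assumptions."""
import numpy as np

def detect_events(signal, threshold):
    # sample indices where the signal crosses the threshold upwards
    above = signal > threshold
    return np.where(~above[:-1] & above[1:])[0] + 1

def map_events(channels, sample_rate, threshold, base_midi=48):
    # one (onset in seconds, MIDI pitch) pair per detected event
    events = []
    for ch, signal in enumerate(channels):
        for i in detect_events(signal, threshold):
            events.append((i / sample_rate, base_midi + 4 * ch))
    return sorted(events)

# two synthetic 'rhythms' at 3 Hz and 5 Hz, 2 s sampled at 256 Hz
sr = 256
t = np.arange(2 * sr) / sr
channels = np.array([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 5 * t)])
note_events = map_events(channels, sr, threshold=0.9)
```

Because each channel fires once per cycle, the two channels produce interleaved event streams whose cross-timing is exactly the kind of rhythmic relation the sonification exposes.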
- Gerold Baier, Thomas Hermann, Ulrich Stephani (2006)
Multivariate Sonification of Epileptic Rhythms for Real-Time Applications.
Abstracts of the American Epilepsy Society, American Epilepsy Society (AES)
Abstract: Auditory displays present a new platform to represent complex data
sets. They provide efficient information about data features, for
example, when monitoring or interacting with multivariate time series.
The human auditory sense seems to be particularly optimized for the
detection and interpretation of multiple rhythmic events in real
time, which may be of practical importance in the context of the
epileptic EEG.
- Till Bovermann, Thomas Hermann, Helge Ritter (2006)
A Tangible Environment for Ambient Data Representation.
In McGookin, David and Brewster, Stephen (Ed.) First International Workshop on Haptic and Audio Interaction Design, p. 26--30, www.multivis.org, Glasgow, UK
Abstract: In this paper we develop an ambient information environment called
AmbiD that allows the user to specify intuitively - by moving tangible
objects on our tangible desk environment - which data sources shall
be connected with different ambient information displays, and how
important the information is to the user. We explain the technique used
and our current implementation in detail and give examples of possible
data sources, displays and their interconnection.
- Till Bovermann, Thomas Hermann, Helge Ritter (2006)
Tangible Data Scanning Sonification Model.
In Stockman, Tony (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2006), p. 77--82, International Community for Auditory Display (ICAD), Department of Computer Science, Queen Mary, University of London, London, UK
Abstract: In this paper we develop a sonification model, following the Model-Based
Sonification approach, that allows the user to scan high-dimensional
data distributions by means of a physical object held in the hand.
In the sonification model, the user is immersed in a 3D space of
invisible but acoustically active objects which he or she can excite.
Tangible computing makes it possible to identify the excitation object
(e.g. a geometric surface) with a physical object used as controller,
and thus creates a strong metaphor for understanding and relating
feedback sounds in response to the user's own activity, position and
orientation. We explain the technique and our current implementation
in detail and give examples using synthetic and real-world data sets.
- Kerstin Bunte (2006)
Interaktive relevanzkartenbasierte Optimierung der Sonifikation
von Daten.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Dennis Altjohann (2006)
Sonifikationssystem zur Analyse von Infraschall.
Department Informatik, Universität Hamburg, (Diplomarbeit)
- Thomas Hermann, Oliver Höner, Helge Ritter (2006)
AcouMotion - An Interactive Sonification System for Acoustic Motion
Control.
In Gibet, Sylvie and Courty, Nicolas and Kamp, Jean-Francois (Ed.) Gesture in Human-Computer Interaction and Simulation: 6th International
Gesture Workshop, GW 2005, Berder Island, France, May 18-20, 2005,
Revised Selected Papers, p. 312--323, Springer, Berlin, Heidelberg
Abstract: This paper introduces AcouMotion as a new hard-/software system for
combining human body motion, tangible interfaces and sonification
into a closed-loop human-computer interface that allows non-visual
motor control by using sonification (non-speech auditory displays)
as the major feedback channel. AcouMotion's main components are (i) a
sensor device for measuring motion parameters, (ii) a computer simulation
to represent the dynamical evolution of a model world, and (iii)
a sonification engine which generates an auditory representation
of objects and any interactions in the model world. The intended
applications of AcouMotion range from new kinds of sport games that
can be played without visual displays and therefore may be particularly
interesting for people with visual impairment to further applications
in data mining, physiotherapy and cognitive research. The first application
of AcouMotion presented in this paper is Blindminton, a sport game
similar to Badminton which is particularly adapted to the abilities
of people with visual impairment. We describe our current system
and its state of development, and we present first sound examples
for interactive sonification using an early prototype. Finally, we
discuss some interesting research directions based on the fact that
AcouMotion binds auditory stimuli and body motion, and thus can represent
a counterpart to the Eye-tracker device that exploits the binding
of visual stimuli and eye-movement in cognitive research.
- Matthias Milczynski, Thomas Hermann, Till Bovermann, Helge Ritter (2006)
A Malleable Device with Applications to Sonification-based Data Exploration.
In Stockman, Tony (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2006), p. 69--76, International Community for Auditory Display (ICAD), Department of Computer Science, Queen Mary, University of London, London, UK
Abstract: This article introduces a novel human computer interaction device,
developed in the scope of a Master's Thesis. The device allows continuous
localized interaction by providing a malleable interaction surface.
Diverse multi-finger as well as multi-handed manipulations can be
applied. Furthermore, the device acts as a tangible user interface
object, integrated into a tangible computing framework called tDesk.
Software to convert the malleable element's shape into an internal
surface representation has been developed. Malleable interactions
are applied to a new Model-based Sonification approach for exploratory
data analysis. High-dimensional data are acoustically explored via
their informative interaction sounds in response to the user's excitation.
- Thomas Hermann, Stella Paschalidou, Dirk Beckmann,
Helge Ritter (2006)
Gestural Interactions for Multi-parameter Audio Control and Audification.
In Gibet, Sylvie and Courty, Nicolas and Kamp, Jean-Francois (Ed.) Gesture in Human-Computer Interaction and Simulation: 6th International
Gesture Workshop, GW 2005, Berder Island, France, May 18-20, 2005,
Revised Selected Papers, p. 335--338, Springer, Berlin, Heidelberg
Abstract: This paper presents an interactive multi-modal system for real-time
multi-parametric gestural control of audio processing applications.
We claim that this can ease the use / control of different tasks
and for this we present the following as a demonstration: (1) A musical
application, i.e. the multi-parametric control of digital audio effects,
and (2) a scientific application, i.e. the interactive navigation
of audifications. In the first application we discuss the use of
PCA-based control axes and clustering to obtain dimensionality reduced
control variables. In the second application we show how the tightly
closed human-computer loop actively supports the detection and discovery
of features in data under analysis.
- Oliver Höner, Thomas Hermann (2006)
Entwicklung und Evaluation eines sonifikationsbasierten Gerätes
zur Leistungsdiagnostik und Trainingssteuerung für den Sehgeschädigten-Leistungssport.
BISp-Jahrbuch 2006/07, Bundesinstitut für Sportwissenschaft
, Fischer, Jürgen, p. 163-168
Abstract: In this interdisciplinary project, the method of interactive sonification
(Hermann & Hunt, 2005) is used to break new ground for competitive
sport for the visually impaired: the now available technical
possibilities of acoustic data presentation are exploited to develop
a performance test, TAMP (Test for audio-motor performance), in
goalball. The aim is less to support the athlete's mental representation
of movement (in the sense of "movement sonification", cf. Effenberg
& Mechling, 1998) than to support the mental representation of the
situation, which, via the auditory presentation of environmental
information, opens up manifold possibilities of interaction with
the environment. The technical basis for the development of TAMP is
an interactive system for sonification-based motion control
("AcouMotion", Hermann, Höner & Ritter, 2006), which finds its first
application in sport as a development component of a sonification-based
performance test. After its development, TAMP was empirically validated
in a second step with the players of the German national goalball
teams.
- Matthias Milczynski (2006)
A Malleable Device with Applications to Sonification-based Data Exploration.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Matthias Kaper, Peter Meinicke, Horst M. Müller, Sabine Weiss,
Holger Bekel, Thomas Hermann, Axel Saalbach, Helge Ritter (2006)
Neuroinformatic techniques in cognitive neuroscience of language.
Situated Communication, Mouton de Gruyter
, Rickheit, Gert and Wachsmuth, Ipke (Ed.), p. 265--286, vol. 166, Trends in Linguistics. Studies and Monographs [TiLSM]
Abstract: Processes of language comprehension can successfully be investigated
by non-invasive electrophysiological techniques like electroencephalography
(EEG). This article presents innovative applications of neuroinformatic
techniques to EEG data analysis in the context of Cognitive Neuroscience
of Language to gain deeper insights in the processes of the human
brain. A variety of techniques ranging from principal component analysis
(PCA), independent component analysis (ICA), coherence analysis,
self-organizing maps (SOM), and sonification were employed to overcome
the restrictions of traditional EEG data analysis, which only yield
comparably rough ideas about brain processes. Our findings, for example,
allow to provide insights in the variety within EEG data sets, perform
single trial classification with high accuracy, and investigate communication
processes between cell assemblies during language processing.
- Daniel Schmitzek (2006)
Echtzeit-Sonifikation von EEG-Daten.
Faculty of Technology, Bielefeld University, (Bachelor Thesis)
- Thomas Hermann, Gerold Baier, Ulrich Stephani,
Helge Ritter (2006)
Vocal Sonification of Pathologic EEG Features.
In Stockman, Tony (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2006), p. 158--163, International Community for Auditory Display (ICAD), Department of Computer Science, Queen Mary, University of London, London, U.K.
Abstract: We introduce a novel approach in EEG data sonification for process
monitoring and exploratory as well as comparative data analysis.
The approach uses an excitatory/articulatory speech model and a particularly
selected parameter mapping to obtain auditory gestalts (or auditory
objects) that correspond to features in the multivariate signals.
The sonification is adaptable to patient-specific data patterns,
so that only characteristic deviations from background behavior (pathologic
features) are involved in the sonification rendering. Thus the approach
combines data mining techniques and case-dependent sonification design
to give an application-specific solution with high potential for
clinical use. We explain the sonification technique in detail and
present sound examples from clinical data sets.
- Jens Schmüdderich (2006)
Das TDI-Framework für dynamische Lernarchitekturen in intelligenten,
interaktiven Systemen.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Arne Wulf (2006)
Auditory Closed Loops -- Entwicklung eines sonifikationsbasierten
Systems zur Unterstützung von Interaktionsschleifen.
Faculty of Physics, Bielefeld University, (Diplomarbeit)
- Thomas Hermann, Helge Ritter (2005)
Model-based sonification revisited---authors' comments on Hermann
and Ritter, ICAD 2002.
ACM Trans. Applied Perception.
, vol. 2, no. 4, p. 559--563, ACM Press, New York, NY, USA
Abstract: We discuss the framework of Model-Based Sonification (MBS) and its
contribution to a principled design of mediators between high-dimensional
data spaces and perceptual spaces, particularly sound spaces. Data
Crystallization Sonification, discussed in the reprinted paper, exemplifies
the design of sonification models according to this framework. Finally,
promising lines of development in this area are pointed out, concerning
generalizations, applications, and open research directions.
- Thomas Hermann, Helge Ritter (2005)
Crystallization sonification of high-dimensional datasets.
ACM Trans. Applied Perception.
, vol. 2, no. 4, p. 550--558, ACM Press, New York, NY, USA
Abstract: This paper introduces Crystallization Sonification, a sonification
model for exploratory analysis of high-dimensional datasets. The
model is designed to provide information about the intrinsic data
dimensionality (which is a local feature) and the global data dimensionality,
as well as the transitions between a local and global view on a dataset.
Furthermore, the sound conveys the clustering of high-dimensional
datasets. The model defines a crystal growth process in the high-dimensional
data space which starts at a user-selected ``condensation nucleus''
and incrementally includes neighboring data according to some growth
criterion. The sound summarizes the temporal evolution of this crystal
growth process. For introducing the model, a simple growth law is
used. Other growth laws used in the context of hierarchical clustering
are also suitable, and their application in crystallization
sonification offers new ways to inspect the results of data clustering
as an alternative to dendrogram plots. In this paper, the sonification
model is described and example sonifications are presented for some
synthetic high-dimensional datasets.
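The crystal growth process this abstract describes can be sketched under simplifying assumptions. This is not the authors' implementation: the growth law (include the free point nearest to the current crystal), the synthetic data, and the per-step statistics are all illustrative choices; a sonification would map the yielded distances to sound parameters.

```python
"""Sketch of a crystal growth process: start at a chosen condensation
nucleus and, at each step, include the free data point closest to the
current crystal (a simple, assumed growth law)."""
import numpy as np

def crystal_growth(data, nucleus_idx):
    # yield (step, added_index, distance_to_crystal) per growth step
    n = len(data)
    in_crystal = np.zeros(n, dtype=bool)
    in_crystal[nucleus_idx] = True
    for step in range(1, n):
        free = np.where(~in_crystal)[0]
        members = data[in_crystal]
        # distance of each free point to its nearest crystal member
        d = np.linalg.norm(
            data[free][:, None, :] - members[None, :, :], axis=2
        ).min(axis=1)
        k = free[np.argmin(d)]  # growth law: include the nearest free point
        in_crystal[k] = True
        yield step, int(k), float(d.min())

rng = np.random.default_rng(0)
points = rng.normal(size=(50, 5))  # synthetic high-dimensional dataset
growth_events = list(crystal_growth(points, nucleus_idx=0))
```

The sequence of nearest-neighbor distances rises sharply when the crystal jumps between clusters, which is the kind of structure the temporal evolution of the sound makes audible.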
- Gerold Baier, Thomas Hermann, Oscar Manuel Lara,
Markus Müller (2005)
Using sonification to detect weak cross-correlations in coupled excitable
systems.
In Brazil, Eoin (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2005), p. 312--315, ICAD, International Community for Auditory Display, Limerick, Ireland
Abstract: We study cross-correlations in irregularly spiking systems. A single
system displays spiking sequences that resemble a stochastic (Poisson)
process. Linear coupling between two systems leaves the inter-spike
interval distribution qualitatively unchanged but induces cross-correlations
between the units. For strong coupling this leads to synchronization
as expected but for weak coupling, both a good statistic and sonification
reveal the presence of ``motifs'', preferred short firing sequences
which are due to the deterministic spiking mechanism. We argue that
the use of sonification for time series analysis is superior in the
case where intrinsic non-stationarity of an experiment cannot be
ruled out.
- Gerold Baier, Thomas Hermann, Markus Müller (2005)
Polyrhythmic Organization of Coupled Nonlinear Oscillators.
IV '05: Proceedings of the Ninth International Conference on Information
Visualisation (IV'05), p. 5--10, IEEE Computer Society, Los Alamitos, CA, USA
Abstract: We study the rhythmic organization of coupled nonlinear oscillators.
If oscillators with non-identical internal frequency are coupled,
they generate a great variety of periodic and chaotic rhythmic patterns.
Sonification of these patterns suggests their characterization in
terms of polyrhythms: each oscillatory unit subdivides "measures"
of equal or varying length differently. For the case of two coupled
oscillators, the organization of these polyrhythms is exemplified
as a function of the internal frequency ratio and the coupling strength.
Some sonification strategies are presented which aid the detection of complex
rhythmic relationships between oscillators. The results may be of
importance for the analysis of complex multivariate time series like
human EEG.
- Thomas Hermann, Andy Hunt (2005)
An Introduction to Interactive Sonification (Guest Editors' Introduction).
IEEE MultiMedia.
, vol. 12, no. 2, p. 20--24, IEEE, IEEE Computer Society Press Los Alamitos, CA, USA
Abstract: The research field of sonification, a subset of the topic of auditory
display, has developed rapidly in recent decades. It brings together
interests from the areas of data mining, exploratory data analysis,
human-computer interfaces, and computer music. Sonification presents
information by using sound (particularly nonspeech), so that the
user of an auditory display obtains a deeper understanding of the
data or processes under investigation by listening.
- Oliver Höner, Thomas Hermann, Thomas Prokein (2005)
Entwicklung eines goalballspezifischen Leistungstests (Development
of a goalball-specific performance test).
In Würth, S. and Panzer, S. and Krug, J. and Alfermann, D. (Ed.) Sport in Europa, p. 331, Feldhaus Verlag, Hamburg, Germany
- Oliver Höner, Thomas Hermann (2005)
`Listen to the ball!' - sonification-based sport games for people
with visual impairment.
A.P.A.: a discipline, a profession, an attitude (Proceedings of the
15th International Symposium Adapted Physical Activity), IFAPA, Verona, Italy
Abstract: Visual information is the leading afferent information for players
to regulate their actions in sports games. This makes the access
to sports games particularly difficult for people with visual impairment.
But since these people also desire to get access to sports games,
it is our task in the field of adapted physical activity to push
the boundary of ordinary sports games in search for new opportunities
or enabling techniques to facilitate their participation. In our
contribution we present the development of new sports games focusing
on the excellent auditory perception skills which are highly developed
in people with visual impairment due to their enhanced everyday
use. We break new ground in using the method of interactive sonification
(Hermann, 2002) in sports games to present auditory information as
the leading information for action regulation. For this, we use insights
from three areas: firstly, actual sports games are analysed in order
to discover basic principles for non-visual games (e.g. goalball).
Secondly, we regard successful applications of interactive sonification
in auditory computer games, and finally we take virtual simulations
of sports games into consideration.
Building on this analysis, we introduce the new technical system "AcouMotion",
a hard- and software-based system that offers new ways for developing
sonification-based sports games that can be played just by using
auditory, non-visual information. AcouMotion presents for instance
information on the position of a virtual ball by using sound. Based
on this information the player is expected to play the ball with
a virtual racket against a wall without letting it drop to the ground. AcouMotion
provides a sensor equipped hand-held controller to assess the position
and orientation of the racket. Interactions like the hit of the ball
update the ball's motion state in the simulation environment and
thus the sonification. The real-time control and auditory feedback
creates a closed interaction loop that engages the player in sports
activity. AcouMotion goes beyond hitherto existing systems and offers
various possibilities for the enhancement of sports games for people
with visual impairment. In particular, we report on the development
progress of a non-visual game "blindminton" in the style of the traditional
game badminton. Furthermore, AcouMotion offers perspectives to test
specific -- and in games like goalball performance determining --
skills like auditory-perception-based orientation in space.
- Thomas Hermann, Matthias Milczynski (2005)
Videosonifikation am Beispiel von Verkehrsflußdaten (A28 - 01000100100010101011101010101).
Sendung 14 (CD) des "Fremder Sender", Haus am Gern, http://hausamgern.ch/satellit, Switzerland
Abstract: Sonification transforms data into sounds so that patterns and structures
in the data can be discovered through listening. The ear in particular
is highly suited to perceiving rhythms and even subtle spectral
properties that easily escape the eye when the data are viewed.
In general, data of any kind can be used for sonification displays.
In this text we present a method for transforming video data, i.e.
image data streams, into sound streams, which allows large amounts
of video data to be experienced as a highly compressed sonic texture.
As the data source for the present sonifications, webcam recordings
of the road traffic flow on the A28 in Prättigau were used. We
describe the method, present several sonifications, and discuss
fields of application for video sonification beyond the example
chosen here for illustration.
- Oliver Höner, Thomas Hermann, Christian Grunow (2005)
Sonifikation - Ein Hilfsmittel zur Taktikanalyse im Sportspiel?
Zur Vernetzung von Forschung und Lehre in Biomechanik, Sportmotorik
und Trainingswissenschaft, Czwalina, Feldhaus Verlag GmbH
, Gabler, H. and Göhner, U. and Schiebl, F. (Ed.), p. 226--230, vol. 144, ed. 1
- Till Bovermann, Thomas Hermann, Helge Ritter (2005)
The Local Heat Exploration Model for Interactive Sonification.
In Brazil, Eoin (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2005), p. 85--91, ICAD, International Community for Auditory Display, Limerick, Ireland
Abstract: This paper presents a new sonification model for the exploration of
topographically ordered high-dimensional data (multi-parameter maps,
volume data) where each data item consists of a position and feature
vector. The sonification model implements a common metaphor from
thermodynamics that heat can be interpreted as stochastic motion
of 'molecules'. The latter are determined by the data under examination,
and 'live' only in the feature space. Heat-induced interactions cause
acoustic events that fuse to a granular sound texture which conveys
meaningful information about the underlying distribution in feature
space. As a second ingredient of the model, data selection is achieved
by a separated navigation process in position space using a dynamic
aura model, such that heat can be induced locally. Both a visual
and an auditory display are driven by the underlying model. We exemplify
the sonification by means of interaction examples for different high-dimensional
distributions.
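The heat metaphor above can be sketched with assumed parameters (this is not the paper's implementation): selected data items act as 'molecules' whose feature-space positions jitter with temperature-scaled noise, and close encounters are counted as grain events, so a denser distribution yields a denser texture at the same heat.

```python
"""Sketch of the local heat metaphor: molecules jitter stochastically in
feature space; pairwise 'collisions' within a radius stand in for the
acoustic grain events of the granular texture. All parameters assumed."""
import numpy as np

def pair_collisions(pos, radius):
    # number of point pairs closer than `radius`
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    iu = np.triu_indices(len(pos), k=1)
    return int(np.sum(dist[iu] < radius))

def heat_events(features, temperature, steps, radius, seed=0):
    # accumulate collision events while heat drives stochastic motion
    rng = np.random.default_rng(seed)
    pos = features.astype(float).copy()
    events = 0
    for _ in range(steps):
        pos += rng.normal(0.0, temperature, pos.shape)
        events += pair_collisions(pos, radius)
    return events

rng = np.random.default_rng(42)
tight = rng.normal(scale=0.1, size=(20, 3))   # compact cluster
spread = rng.normal(scale=5.0, size=(20, 3))  # widely scattered data
dense_texture = heat_events(tight, temperature=0.05, steps=200, radius=0.5)
sparse_texture = heat_events(spread, temperature=0.05, steps=200, radius=0.5)
```

Comparing the two event counts shows the intended behavior: the compact cluster produces a far denser event stream than the scattered data under identical heat.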
- Oliver Höner, Thomas Hermann (2005)
Selbststeuerung im Sportspiel mittels interaktiver Sonifikation.
Selbststeuerung im Sport, Czwalina
, Seelig, H. and Göhner, W. and Fuchs, R. (Ed.), p. 55, vol. 144
Abstract: Visual information pickup in sports games can without doubt be regarded
as the most important source of information for the players'
self-regulation. Without denying the relevance of other afferent
systems, most studies on anticipation and decision-making in
sports-game research accordingly concentrate on the analysis of
visual information pickup and processing (cf. Höner, in press). As
a further consequence, the dominance of visually based self-regulation
processes means in practice that people with visual impairment have
impeded access to sports games. In an interdisciplinary research
project, the adaptable strengths of the auditory perceptual system
are taken as the starting point for developing new forms of play
that can be played primarily on the basis of non-visual, auditory
information and thus offer an alternative especially for sport for
the visually impaired. To this end, the application of the method
of interactive sonification (cf. Hermann & Hunt, 2004) breaks new
ground, making the now available technical possibilities of acoustic
data presentation usable for the development of auditory sports
games. The target game is badminton, a traditional racket game,
adapted under the working title "Blindminton".
- Oliver Höner, Thomas Hermann, Christian Grunow (2004)
Sonification of Group Behavior for Analysis and Training of Sports
Tactics.
In Hermann, Thomas and Hunt, Andy (Ed.) Proceedings of the International Workshop on Interactive Sonification
(ISon 2004), ISon, Interactive Sonification Community, Bielefeld, Germany
Abstract: This paper presents a new application for auditory display, the use
of sound to assist the analysis of tactics and tactical training
in sports games. For a pilot study we have set up a system for the
video recording and processing (i.e. tracking of the players and
the ball) of tactical training cycles in handball. An auditory display
was designed to meet the requirements for supporting and extending
visual analysis of tactics; e.g., it allows identifying those players
that deviate from a nominal tactical position and assessing the
degree of deviation. Sound examples for tactically correct and
deviating group behavior in game situations are provided and discussed
for the example of the 6:0-defense. We also discuss our intended
psychophysical experiment for the validation of the method. For technical
reasons, the display is not yet applicable in real time, but we aim
for online interactive player feedback in tactical training.
- Andy Hunt, Thomas Hermann, Sandra Pauletto (2004)
Interacting with Sonification Systems: Closing the Loop.
In Banissi, Ebad and Börner, Katy (Ed.) IV '04: Proceedings of the Information Visualisation, Eighth International
Conference on (IV'04), p. 879--884, IEEE, IEEE Computer Society, Washington, DC, USA
Abstract: This paper stresses the importance of the human user being tightly
embedded within an interactive control loop for exploring data sets
using sound. We consider the quality of interaction, and how this
can be improved in computer systems by learning from real-world acoustic
interactions. We describe how different sonification methods can
utilise the human feedback loop to enhance the perception and analysis
of the data under investigation. Some considerations are given regarding
systems and applications.
- Christian Lange, Thomas Hermann, Helge Ritter (2004)
Holistic Body Tracking for Gestural Interfaces.
In Camurri, Antonio and Volpe, Gualtiero (Ed.) Gesture-Based Communication in Human-Computer Interaction 5th International
Gesture Workshop, GW 2003 Genova, Italy, April 15-17, 2003, Selected
Revised Papers, p. 132--139, International Gesture Workshop, Springer, Berlin, Heidelberg
Abstract: In this paper we present an approach to track a moving body in a sequence
of camera images by model adaptation. The parameters of a stick figure
model are varied by using a stochastic search algorithm. The similarity
of rendered model images and camera images of the user is used as
the quality measure. A refinement of the algorithm is introduced by using
combined stereo views and relevance maps to infer responsible joint
angles from the difference of successive input images. Finally, the
successful application of various versions of the algorithm on sequences
of synthetic images is demonstrated.
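The stochastic search loop described above can be sketched under strong simplifications (assumed, not the paper's code): a parameter vector standing in for stick-figure joint angles is randomly perturbed, and a perturbation is kept whenever it improves the similarity between rendered and observed images, here replaced by a plain squared error against a target vector.

```python
"""Sketch of a greedy stochastic search: perturb the model parameters
with Gaussian noise and accept only cost-decreasing trials. The squared
error against a target vector is an assumed stand-in for the image
similarity measure used in the paper."""
import numpy as np

def stochastic_fit(observed, params, steps=2000, sigma=0.1, seed=1):
    # accept a random perturbation only if it lowers the cost
    rng = np.random.default_rng(seed)
    cost = float(np.sum((params - observed) ** 2))
    for _ in range(steps):
        trial = params + rng.normal(0.0, sigma, size=params.shape)
        trial_cost = float(np.sum((trial - observed) ** 2))
        if trial_cost < cost:
            params, cost = trial, trial_cost
    return params, cost

target = np.array([0.3, -1.2, 0.7, 2.0])  # stand-in 'true' joint angles
fitted, final_cost = stochastic_fit(target, params=np.zeros(4))
```

Greedy acceptance makes the search robust to a non-differentiable quality measure, which is exactly why it suits rendered-image comparison, at the price of many cost evaluations per frame.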
- Till Bovermann (2004)
A Multi-Modal Exploration Framework with Application to Interactive
Sonification.
Faculty of Technology, Bielefeld University, (Diplomarbeit)
- Jörg Martini, Thomas Hermann, Dario Anselmetti,
Helge Ritter (2004)
Interactive Sonification for exploring Single Molecule Properties
with AFM-based Force Spectroscopy.
In Hermann, Thomas and Hunt, Andy (Ed.) Proceedings of the International Workshop on Interactive Sonification
(ISon 2004), Bielefeld University, Interactive Sonification Community, Bielefeld, Germany, (peer-reviewed article)
Abstract: This paper presents an interactive audio-haptic human-computer interface
for controlling an atomic force microscope (AFM) in force spectroscopy
experiments on single molecules. The sensor data used are proportional
to the force that is applied to a single molecule. These forces are
measured in real-time by using the reflection of a laser beam from
a cantilever. We present a system that involves (a) a visual display
of the data (b) a force-feedback joystick for navigating the sample
and providing a tactile feedback of the forces, and (c) an auditory
display to monitor the measured data while interactively moving the
sample. The sonification we have developed integrates information
at various levels of detail, including audifications of the high-frequency
cantilever movement, and an auditory stream that communicates the
instantaneous deviation of forces between the approach and retract
phase of the sample. The sonification design and offline-computed
sonifications will be presented and discussed. We further report
on our first experiences with this interactive multi-modal control
interface for manipulation of individual DNA molecules.
- Tim W. Nattkemper, Walter Schubert, Thomas Hermann,
Helge Ritter (2004)
A Hybrid System for Cell Detection in Digital Micrographs.
In Tilg, B. (Ed.) Biomedical Engineering, Proc. BIOMED 2004, ACTA Press, Innsbruck, Austria
Abstract: To analyze large sets of digital micrographs from high-throughput
screening studies with constant accuracy, advanced image processing
algorithms are necessary. In the literature, systems have been proposed
applying model-based fitting algorithms, morphological operators
and artificial neural networks (ANN). Because single approaches
show limited performance, we propose a hybrid system that combines
the Hough transform with a multi-layer perceptron (MLP) network.
Our results show that the combination of both approaches improves
the performance, and the positions of cell bodies are obtained with
increased sensitivity and positive predictive value.
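The Hough-transform stage of such a hybrid detector can be illustrated with a minimal circle-voting accumulator. This is a generic sketch of circular Hough voting for a single known radius, not the paper's system; the synthetic edge points and all numeric settings are invented for illustration.

```python
import math
from collections import Counter

def hough_circle_centers(edge_points, radius, n_angles=72):
    """Vote for candidate circle centers: each edge point casts one vote on
    every center lying at the given radius from it; peaks in the
    accumulator mark likely (roughly circular) cell bodies."""
    acc = Counter()
    for x, y in edge_points:
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(theta))
            cy = round(y - radius * math.sin(theta))
            acc[(cx, cy)] += 1
    return acc

# Synthetic edge set: points on a circle of radius 5 around (20, 30).
edges = [(20 + 5 * math.cos(0.1 * t), 30 + 5 * math.sin(0.1 * t))
         for t in range(63)]
acc = hough_circle_centers(edges, radius=5)
center, votes = acc.most_common(1)[0]
print(center, votes)
```

In the hybrid system an MLP would then verify such candidate positions; here only the voting step is shown.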
- Gerold Baier, Thomas Hermann (2004)
The Sonification of Rhythms in Human Electroencephalogram.
In Barrass, Stephen and Vickers, Paul (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2004), International Community for Auditory Display (ICAD), ICAD, Sydney, Australia
Abstract: We use sonification of temporal information extracted from scalp EEG
to characterize the dynamic properties of rhythms in certain frequency
bands. Sonification proves particularly useful in the simultaneous
monitoring of several EEG channels. Our results suggest sonification
as an important tool in the analysis of multivariate data with subtle
correlation differences.
- Peter Meinicke, Thomas Hermann, Holger Bekel,
Horst M. Müller, Sabine Weiss, Helge Ritter (2004)
Identification of Discriminative Features in EEG.
Intelligent Data Analysis.
, vol. 8, no. 1, p. 97--107, (IOS Press)
Abstract: An important step for the correlation of EEG signals with cognitive
processes is the identification of discriminative features in the
EEG signal. In this paper we utilize independent component analysis
(ICA) for feature extraction and selection. Our specific ICA technique
is based on a nonparametric source representation which in particular
allows for modelling of multimodal feature distributions as generally
required for the analysis of mixed data from different experiment
conditions. To demonstrate the potential of the resulting ICA feature
selection scheme we report results from an analysis of psycholinguistic
experiments on the discrimination of speech perception from perception
of so-called pseudo speech signals and demonstrate how the obtained
ICA features can be further analyzed with the technique of sonification.
Our results correlate well with results from coherence analysis and
strongly indicate that these new methods are well suited for uncovering
cognitively relevant features in EEG signals.
- Thomas Hermann, Thomas Henning, Helge Ritter (2004)
Gesture Desk - An Integrated Multi-modal Gestural Workplace for Sonification.
In Camurri, Antonio and Volpe, Gualtiero (Ed.) Gesture-Based Communication in Human-Computer Interaction, 5th International
Gesture Workshop, GW 2003 Genova, Italy, April 15-17, 2003, Selected
Revised Papers, p. 369--379, Gesture Workshop, Springer, Berlin, Heidelberg
Abstract: This paper presents the gesture desk, a new platform for a human-computer
interface at a regular computer workplace. It extends classical input
devices like keyboard and mouse by arm and hand gestures, without
the need to use any inconvenient accessories like data gloves or
markers. A central element is a gesture box containing two infrared
cameras and a color camera which is positioned under a glass desk.
Arm and hand motions are tracked in three dimensions. A synchronizer
board has been developed to provide an active glare-free IR-illumination
for robust body and hand tracking. As a first application, we demonstrate
interactive real-time browsing and querying of auditory self-organizing
maps (AuSOMs). An AuSOM is a combined visual and auditory presentation
of high-dimensional data sets. Moving the hand above the desk surface
allows the user to select neurons on the map and to manipulate how they contribute
to data sonification. Each neuron is associated with a prototype
vector in high-dimensional space, so that a set of 2D-topologically
ordered feature maps is queried simultaneously. The level of detail
is selected by hand altitude over the table surface, allowing the
user to emphasize or de-emphasize neurons on the map.
- Andy Hunt, Thomas Hermann (2004)
The Importance of Interaction in Sonification.
In Barrass, Stephen and Vickers, Paul (Ed.) Proceedings of the International Conference on Auditory Display (ICAD
2004), International Community for Auditory Display (ICAD), ICAD, Sydney, Australia
Abstract: This paper argues for a special focus on the use of dynamic human
interaction to explore datasets while they are being transformed
into sound. We describe why this is a special case of both human
computer interaction (HCI) techniques and sonification methods. Humans
are adapted for interacting with their physical environment and making
continuous use of all their senses. When this exploratory interaction
is applied to a dataset (by continuously controlling its transformation
into sound) new insights are gained into the data's macro and micro-structure,
which are not obvious in a visual rendering. This paper reviews the
importance of interaction in sonification, describes how a certain
quality of interaction is required, provides examples of the techniques
being applied interactively, and outlines a plan of future work to
develop interaction techniques to aid sonification.
- Thomas Hermann, Gerold Baier, Markus Müller (2004)
Polyrhythm in the Human Brain.
In Barrass, Stephen (Ed.) Listening to the Mind Listening - Concert of Sonifications at the
Sydney Opera House, International Community for Auditory Display (ICAD), ICAD, Sydney, Australia
Abstract: Three complementary methods are used to analyze the dynamics of multivariate
EEG data obtained from a human listening to a piece of music. The
analysis yields parameters for a data sonification that conserves
temporal and frequency relationships as well as wave intensities
of the data. Multiple events taking place on different time scales
are combined to a polyrhythmic display in real time.
- Thomas Hermann, Helge Ritter (2004)
Sound and Meaning in Auditory Data Display.
Proceedings of the IEEE (Special Issue on Engineering and Music -
Supervisory Control and Auditory Communication).
, vol. 92, no. 4, p. 730--741
Abstract: Auditory data display is an interdisciplinary field linking auditory
perception research, sound engineering, data mining, and human-computer
interaction in order to make semantic contents of data perceptually
accessible in the form of (nonverbal) audible sound. For this goal
it is important to understand the different ways in which sound can
encode meaning. We discuss this issue from the perspectives of language,
music, functionality, listening modes, and physics, and point out
some limitations of current techniques for auditory data display,
in particular when targeting high-dimensional data sets. As a promising,
potentially very widely applicable approach, we discuss the method
of model-based sonification (MBS) introduced recently by the authors
and point out how its natural semantic grounding in the physics of
a sound generation process supports the design of sonifications that
are accessible even to untrained, everyday listening. We then proceed
to show that MBS also facilitates the design of an intuitive, active
navigation through "acoustic aspects", somewhat analogous to the
use of successive two-dimensional views in three-dimensional visualization.
Finally, we illustrate the concept with a first prototype of a "tangible"
sonification interface which allows us to "perceptually map" sonification
responses into active exploratory hand motions of a user, and give
an outlook on some planned extensions.
- Thomas Hermann, Helge Ritter (2004)
Neural Gas Sonification - Growing Adaptive Interfaces for Interacting
with Data.
In Banissi, Ebad and Börner, Katy (Ed.) IV '04: Proceedings of the Information Visualisation, Eighth International
Conference on (IV'04), p. 871--878, IEEE CNF, IEEE Computer Society, Washington, DC, USA
Abstract: In this paper we present an approach using incrementally constructed
neural gas networks to 'grow' an intuitive interface for interactive
exploratory sonification of high-dimensional data. The sonifications
portray information about the intrinsic data dimensionality and its
variation within the data space. The interface follows the paradigm
of model-based sonification and consists of a graph of nodes that
can be acoustically 'excited' with simple mouse actions.
The sound generation process is defined in terms of the node parameters
and the graph topology, following a physically motivated model of
energy flow through the graph structure. The resulting sonification
model is tied to the given data set by constructing both graph topology
and node parameters by an adaptive, fully data-driven learning process,
using a growing neural gas network. We report several examples of
applying this method to static data sets and point out a generalization
to the task of process analysis.
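The neural-gas adaptation underlying this interface — ranking all nodes by distance to a stimulus and pulling each toward it with rank-decaying strength — can be sketched as follows. This is a generic (non-growing) neural gas update in pure Python, an illustration under invented settings rather than the growing variant used in the paper.

```python
import math
import random

def neural_gas_step(nodes, x, eps=0.3, lam=1.0):
    """One neural-gas adaptation step: rank nodes by squared distance to
    the stimulus x, then move every node toward x with a strength that
    decays exponentially in its rank."""
    ranked = sorted(range(len(nodes)),
                    key=lambda i: sum((n - xi) ** 2
                                      for n, xi in zip(nodes[i], x)))
    for rank, i in enumerate(ranked):
        h = math.exp(-rank / lam)
        nodes[i] = [n + eps * h * (xi - n) for n, xi in zip(nodes[i], x)]

rng = random.Random(0)
# Three prototype nodes adapting to samples drawn around (1, 1).
nodes = [[rng.uniform(-1, 0) for _ in range(2)] for _ in range(3)]
for _ in range(200):
    sample = [1 + rng.gauss(0, 0.05), 1 + rng.gauss(0, 0.05)]
    neural_gas_step(nodes, sample)
print(nodes)
```

The growing variant additionally inserts nodes and edges during training; the per-stimulus adaptation rule shown here is the common core.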
- Christian Grunow (2004)
Sonifikation von gruppentaktischen Verhaltensweisen -- Eine Pilotstudie
zur 6:0-Abwehr im Hallenhandball (Sonification of Group-Tactical Behaviour
-- A Pilot Study on the 6:0 Defence in Indoor Handball).
Fakultät für Psychologie und Sportwissenschaft, Universität
Bielefeld, (Diplomarbeit)
- Thomas Hermann, Andy Hunt (2004)
The Discipline of Interactive Sonification.
In Hermann, Thomas and Hunt, Andy (Ed.) Proceedings of the International Workshop on Interactive Sonification
(ISon 2004), Bielefeld University, Interactive Sonification Community, Bielefeld, Germany, (peer-reviewed article)
Abstract: This paper argues for a special focus on the use of dynamic human
interaction to explore datasets while they are being transformed
into sound. We describe why this is a special case of both human
computer interaction (HCI) techniques and sonification methods. Humans
are adapted for interacting with their physical environment and making
continuous use of all their senses. When this exploratory interaction
is applied to a dataset (by continuously controlling its transformation
into sound) new insights are gained into the data's macro and micro-structure,
which are not obvious in a visual rendering. This paper defines the
sub-topic of Interactive Sonification, explains how a certain quality
of interaction is required, overviews current sonification techniques,
provides examples of the techniques being applied interactively,
and outlines a research agenda for this topic.
- Thomas Hermann, Oliver Höner, Helge Ritter (2004)
Verfahren zur Steuerung eines auditiven Spiels und Vorrichtung hierzu
(Method for Controlling an Auditory Game and Associated Device).
German Patent Application, and European Patent Application.
, (Patent Number Code DE102004048583A1, European Patent: EP20050796210)
Abstract: The invention concerns a method for controlling an auditory game
in which the movements of a game object are specified in real time
and rendered as sound. It is characterized in that, after the start
of a movement event, both a movement state of the game object in
three-dimensional space and a movement state of a human user object
in three-dimensional space are continuously determined, the movement
state of the game object is continuously compared with that of the
user object, and, depending on the result of this comparison, an
acoustic movement-state signal is output to the user.