All talks will be held in lecture hall H6 at the western end of the main hall.
Each talk is allotted 20 minutes plus 5 minutes for discussion.
Wednesday, 2007-09-05 - from 13:15 to 15:00
Learning Vector Quantization: generalization ability and dynamics of competing prototypes
|Presenting Author: Aree Witoelar|
|Authors: Aree Witoelar, Michael Biehl, Barbara Hammer|
Learning Vector Quantization (LVQ) algorithms are popular multi-class classification methods. Prototypes in an LVQ system represent the typical features of classes in the data. Frequently, multiple prototypes are employed per class to better represent within-class variation and to improve generalization ability. In this paper, we investigate the dynamics of LVQ in an exact mathematical way, aiming at understanding the influence of the number of prototypes and their assignment to classes. The theory of on-line learning allows a mathematical description of the learning dynamics in model situations. Using a system of three prototypes, we demonstrate how LVQ systems with multiple-prototype class representation behave differently from those with single-prototype representation.
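The winner-takes-all update underlying basic LVQ can be sketched as follows. This is a minimal illustration of the standard LVQ1 rule, not the specific multi-prototype model analyzed in the talk; the function name and learning rate are illustrative.

```python
import numpy as np

def lvq1_update(prototypes, proto_labels, x, y, lr=0.01):
    """One LVQ1 step: find the closest prototype, attract it toward
    the sample if its class label matches, otherwise repel it."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    w = np.argmin(dists)  # winner-takes-all
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return prototypes
```

With several prototypes assigned to the same class label, the same rule induces the competing-prototype dynamics studied in the paper.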
A comparison between dissimilarity SOM and kernel SOM for clustering the vertices of a graph
|Presenting Author: Nathalie Villa|
|Authors: Nathalie Villa, Fabrice Rossi|
Flexible and efficient variants of the Self-Organizing Map algorithm have been proposed for non-vector data, including, for example, the dissimilarity SOM (also called the Median SOM) and several kernelized versions of SOM. While the dissimilarity SOM generalizes the batch version of the SOM algorithm to data described by a dissimilarity measure, the existing kernelized versions are stochastic SOMs. We propose here a batch version of the kernel SOM and show how it is related to the dissimilarity SOM. Finally, an application to the classification of the vertices of a graph is proposed, and the algorithms are tested and compared on a simulated data set.
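In a batch kernel SOM, each prototype lives in feature space as a linear combination of the mapped inputs, so only kernel evaluations are needed. The sketch below assumes a 1-D map with a Gaussian neighbourhood and a coefficient-vector representation of the prototypes; it illustrates the general idea rather than the authors' exact formulation.

```python
import numpy as np

def batch_kernel_som(K, n_units, n_iter=10, sigma=1.0, rng=None):
    """Batch kernel SOM sketch: prototype k is the virtual combination
    sum_i gamma[k, i] * phi(x_i); all distances go through the kernel
    matrix K, so the inputs themselves are never needed explicitly."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    gamma = rng.random((n_units, n))
    gamma /= gamma.sum(axis=1, keepdims=True)  # convex combinations
    grid = np.arange(n_units)                  # 1-D map topology
    for _ in range(n_iter):
        # ||phi(x_i) - p_k||^2 = K_ii - 2 (gamma_k K)_i + gamma_k K gamma_k^T
        cross = gamma @ K
        quad = np.einsum('ki,ij,kj->k', gamma, K, gamma)
        d2 = np.diag(K)[None, :] - 2 * cross + quad[:, None]
        bmu = d2.argmin(axis=0)                # best matching unit per input
        # neighbourhood-weighted batch update of the coefficients
        h = np.exp(-(grid[:, None] - grid[bmu][None, :]) ** 2
                   / (2 * sigma ** 2))
        gamma = h / (h.sum(axis=1, keepdims=True) + 1e-12)
    return gamma, bmu
```

Because the update rewrites each coefficient row as a normalized neighbourhood weighting of the assigned inputs, this is a direct batch analogue of the dissimilarity SOM's generalized-median step.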
Improving the H2MLVQ algorithm by the Cross Entropy Method
|Presenting Author: Abderrahmane Boubezoul|
|Authors: Abderrahmane Boubezoul, Sébastien Paris, Mustapha Ouladsine|
This paper addresses the use of a stochastic optimization method, the Cross Entropy (CE) method, to improve the recently proposed H2MLVQ (Harmonic to Minimum LVQ) algorithm, an initialization-insensitive variant of the well-known Learning Vector Quantization (LVQ) algorithm. The paper has two aims: first, to use the CE method to tackle the initialization sensitivity of the original LVQ algorithm and its variants; second, to use a weighted norm instead of the Euclidean norm in order to select the most relevant features. The results indicate that the CE method can be successfully applied to this kind of problem and efficiently generates high-quality solutions. Competitive numerical results on several datasets are also reported.
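The Cross Entropy method itself is a generic sample-and-refit optimizer: draw candidates from a parametric distribution, keep the elite fraction, and refit the distribution to the elites. A minimal sketch under Gaussian sampling follows; in the LVQ setting the score function would be a classification-error measure over candidate prototype positions, a detail not specified here.

```python
import numpy as np

def cross_entropy_minimize(score, dim, n_samples=100, n_elite=10,
                           n_iter=50, rng=None):
    """Generic Cross Entropy (CE) minimization sketch: sample candidates
    from a Gaussian, refit mean and std to the best n_elite candidates,
    and iterate until the distribution concentrates."""
    rng = np.random.default_rng(rng)
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    for _ in range(n_iter):
        pop = rng.normal(mu, sigma, size=(n_samples, dim))
        elite_idx = np.argsort([score(p) for p in pop])[:n_elite]
        elites = pop[elite_idx]
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-8  # keep sampling non-degenerate
    return mu
```

Because the search restarts from a broad distribution rather than a single point, it is far less sensitive to initialization than a gradient-style LVQ update, which is the property the paper exploits.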
Accelerating Relational Clustering Algorithms With Sparse Prototype Representation
|Presenting Author: Fabrice Rossi|
|Authors: Fabrice Rossi, Alexander Hasenfuß, Barbara Hammer|
In some application contexts, data are better described by a matrix of pairwise dissimilarities than by a vector representation. Clustering and topographic mapping algorithms have been adapted to this type of data, either via the generalized median principle or, more recently, with the so-called relational approach, in which prototypes are represented by virtual linear combinations of the original observations. One drawback of those methods is their complexity, which scales as the square of the number of observations, mainly because they use dense prototype representations: each prototype is obtained as a virtual combination of (at least) all the elements of its cluster. We propose in this paper to use a sparse representation of the prototypes to obtain relational algorithms with sub-quadratic complexity.
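The relational trick evaluates prototype distances directly from the dissimilarity matrix, and sparsification truncates each prototype's coefficient vector so that distance evaluations touch only a few observations. A minimal sketch of both pieces, under the standard relational distance formula (function names and the truncation rule are illustrative, not the authors' exact scheme):

```python
import numpy as np

def relational_distances(D, alpha):
    """Squared dissimilarity of every observation to every virtual
    prototype p_k = sum_i alpha[k, i] * x_i, computed from the
    dissimilarity matrix D alone:
        d2[i, k] = (D alpha_k)_i - 0.5 * alpha_k^T D alpha_k."""
    cross = D @ alpha.T                               # (n, units)
    quad = 0.5 * np.einsum('ki,ij,kj->k', alpha, D, alpha)
    return cross - quad[None, :]

def sparsify(alpha, n_keep):
    """Keep only the n_keep largest coefficients of each prototype and
    renormalize; a sparse alpha makes each distance evaluation cost
    O(n_keep) instead of O(n)."""
    sparse = np.zeros_like(alpha)
    for k, row in enumerate(alpha):
        idx = np.argsort(row)[-n_keep:]
        sparse[k, idx] = row[idx]
    return sparse / sparse.sum(axis=1, keepdims=True)
```

With dense coefficient rows the `D @ alpha.T` product is quadratic in the number of observations per prototype; truncating the rows is what brings the overall cost below quadratic.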