Traditional pattern recognition and machine learning techniques are intimately linked to the notion of "feature space." Adopting this view, each object is described by a vector of numerical attributes and is therefore mapped to a point in a Euclidean vector space, so that the distances between points reflect the observed (dis)similarities between the respective objects. This kind of representation is attractive because such spaces offer powerful analytical and computational tools that are simply not available in other representations. This approach, however, suffers from a major intrinsic limitation concerning the representational power of vectorial, feature-based descriptions. Indeed, in numerous application domains it is either not possible to find satisfactory features or the available features are ineffective for learning purposes.
In the last few years, interest in purely (dis)similarity-based techniques has grown considerably. For example, within the supervised learning paradigm the well-established kernel-based methods shift the focus from the choice of an appropriate set of features to the choice of a suitable kernel, which is related to object similarities. This shift in focus, however, is only partial, as the classical interpretation of the notion of a kernel is that it provides an implicit transformation of the feature space rather than a purely similarity-based representation. Similarly, in the unsupervised domain, there has been increasing interest in pairwise or even multiway algorithms, such as spectral and graph-theoretic clustering methods, which avoid the use of features altogether.
By departing from vector-space representations, one is confronted with the challenging problem of dealing with (dis)similarities that do not necessarily exhibit Euclidean behavior or even obey the requirements of a metric. The lack of such properties undermines the very foundations of traditional pattern recognition and machine learning theories and algorithms, and poses entirely new theoretical and computational questions and challenges.
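To make this concrete, here is a minimal sketch (with a hypothetical dissimilarity matrix, not drawn from any particular application) of how a perfectly natural dissimilarity can fail to be a metric. Object B resembles both A and C, for instance through partial matches, while A and C are very different from each other; the resulting triangle inequality violation means no faithful embedding into a Euclidean space exists.

```python
import itertools

# Hypothetical pairwise dissimilarity matrix for three objects A, B, C.
# B is close to both A and C, but A and C are far apart -- a situation
# common in, e.g., partial shape or sequence matching.
D = [
    [0.0, 1.0, 9.0],  # A
    [1.0, 0.0, 1.0],  # B
    [9.0, 1.0, 0.0],  # C
]

def is_metric(D):
    """Check symmetry and the triangle inequality for a dissimilarity matrix."""
    n = len(D)
    # Symmetry: d(i, j) must equal d(j, i).
    for i, j in itertools.combinations(range(n), 2):
        if D[i][j] != D[j][i]:
            return False
    # Triangle inequality: d(i, k) <= d(i, j) + d(j, k) for all triples.
    for i, j, k in itertools.permutations(range(n), 3):
        if D[i][k] > D[i][j] + D[j][k] + 1e-12:
            return False
    return True

# d(A, C) = 9 > d(A, B) + d(B, C) = 2, so the triangle inequality fails.
print(is_metric(D))  # prints False
```

Standard techniques that presuppose a metric or Euclidean structure (e.g., embedding-based methods) cannot be applied to such data without modification, which is precisely the kind of difficulty the workshop addresses.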
The aim of this workshop, following those held in Venice and York, is to consolidate research efforts in this area and to provide an informal discussion forum for researchers and practitioners interested in this important yet diverse subject. We aim to cover a wide range of problems and perspectives, from supervised to unsupervised learning, from generative to discriminative models, and from theoretical issues to real-world applications.
Original, unpublished papers dealing with these issues are solicited. Topics of interest include (but are not limited to):
All papers (not exceeding 16 pages) must be submitted electronically. All submissions will be subject to a rigorous peer-review process. Accepted papers will appear in the workshop proceedings, which will be published in Springer's Lecture Notes in Computer Science (LNCS) series.
In addition to regular, original contributions, we also solicit papers (in any LaTeX format, no page restriction) that have been recently published elsewhere. These papers will undergo the same review process as regular ones: if accepted, they will be presented at the workshop but will not be published in the workshop proceedings.