Traditional pattern recognition techniques are intimately linked to the notion of "feature space." Adopting this view, each object is described by a vector of numerical attributes and is therefore mapped to a point in a Euclidean (geometric) vector space, so that the distances between the points reflect the observed (dis)similarities between the respective objects. This kind of representation is attractive because geometric spaces offer powerful analytical and computational tools that are simply not available in other representations. However, the geometric approach suffers from a major intrinsic limitation, which concerns the representational power of vectorial, feature-based descriptions. Indeed, in numerous application domains it is either impossible to find satisfactory features, or the available features are inefficient for learning purposes.
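The feature-space view can be sketched in a few lines: each object becomes a vector of numerical attributes, and pairwise dissimilarity is simply the Euclidean distance between the corresponding points. The attribute values below are hypothetical, chosen only to illustrate the mapping.

```python
import numpy as np

# Three objects, each described by a hypothetical 2-attribute feature vector.
X = np.array([[0.0, 0.0],
              [3.0, 4.0],
              [0.0, 1.0]])

# Pairwise Euclidean distance matrix D, with D[i, j] = ||x_i - x_j||.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

print(D[0, 1])  # 5.0: the classic 3-4-5 right triangle
```

Once objects live in such a space, the full toolbox of geometric methods (nearest neighbors, linear classifiers, k-means, and so on) applies directly; this is precisely the convenience the workshop's theme asks us to look beyond.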
In the last few years, interest in purely similarity-based techniques has grown considerably. For example, within the supervised learning paradigm (where expert-labeled training data is assumed to be available), the well-established kernel-based methods shift the focus from the choice of an appropriate set of features to the choice of a suitable kernel, which is related to object similarities. However, this shift of focus is only partial, as the classical interpretation of the notion of a kernel is that it provides an implicit transformation of the feature space rather than a purely similarity-based representation. Similarly, in the unsupervised domain, there has been an increasing interest in pairwise or even multiway algorithms, such as spectral and graph-theoretic clustering methods, which avoid the use of features altogether.
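As a minimal illustration of how such pairwise methods operate on similarities alone, the sketch below performs a spectral two-way partition using only a similarity matrix, never feature vectors. The toy matrix is hypothetical; it encodes two groups, {0, 1} and {2, 3}.

```python
import numpy as np

# Hypothetical pairwise similarities between four objects (no features).
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 0.9],
              [0.0, 0.1, 0.9, 0.0]])

# Unnormalized graph Laplacian L = D - W, with D the diagonal degree matrix.
d = W.sum(axis=1)
L = np.diag(d) - W

# The Fiedler vector (eigenvector of the second-smallest eigenvalue of L);
# the signs of its entries yield a two-way partition of the objects.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)

print(labels)  # objects 0 and 1 fall in one group, 2 and 3 in the other
```

Note that the partition is recovered entirely from the similarity structure of the graph; nothing in the computation presupposes a vectorial description of the objects.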
By departing from vector-space representations, one is confronted with the challenging problem of dealing with (dis)similarities that do not necessarily exhibit Euclidean behavior, or do not even obey the requirements of a metric. The lack of the Euclidean and/or metric properties undermines the very foundations of traditional pattern recognition theories and algorithms, and poses entirely new theoretical and computational questions and challenges.
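The two properties in question can be checked directly on a dissimilarity matrix. A minimal sketch, with hypothetical data: the metric test verifies symmetry, a zero diagonal, and the triangle inequality, while the Euclidean test uses the classical multidimensional-scaling criterion that D is Euclidean if and only if the doubly centered matrix G = -(1/2) J D² J is positive semidefinite.

```python
import numpy as np

def is_metric(D, tol=1e-9):
    """Symmetry, zero diagonal, and the triangle inequality."""
    if not np.allclose(D, D.T) or not np.allclose(np.diag(D), 0.0):
        return False
    # D[i, k] <= D[i, j] + D[j, k] for all triples (i, j, k).
    return bool(np.all(D[:, None, :] <= D[:, :, None] + D[None, :, :] + tol))

def is_euclidean(D, tol=1e-9):
    """PSD test on the Gram matrix obtained by double centering."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    G = -0.5 * J @ (D ** 2) @ J
    return bool(np.all(np.linalg.eigvalsh(G) >= -tol))

# A dissimilarity violating the triangle inequality: d(0,2) > d(0,1) + d(1,2).
D_bad = np.array([[0.0, 1.0, 3.0],
                  [1.0, 0.0, 1.0],
                  [3.0, 1.0, 0.0]])
print(is_metric(D_bad))      # False

# Distances among three collinear points are both metric and Euclidean.
D_good = np.array([[0.0, 1.0, 2.0],
                   [1.0, 0.0, 1.0],
                   [2.0, 1.0, 0.0]])
print(is_metric(D_good), is_euclidean(D_good))  # True True
```

Dissimilarities arising from edit distances, alignment scores, or human judgments routinely fail one or both tests, which is exactly the regime this workshop addresses.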
The aim of this workshop, which follows the one held in Venice in 2011, is to consolidate research efforts in this area, and to provide an informal discussion forum for researchers and practitioners interested in this important yet diverse subject. We aim to cover a wide range of problems and perspectives, from supervised to unsupervised learning, from generative to discriminative models, and from theoretical issues to real-world applications.
Original, unpublished papers dealing with these issues are solicited. Topics of interest include (but are not limited to):
All papers (not exceeding 16 pages) must be submitted electronically. All submissions will be subject to a rigorous peer-review process. Accepted papers will appear in the workshop proceedings, which will be published in Springer's Lecture Notes in Computer Science (LNCS) series.
In addition to regular, original contributions, we also solicit papers (in any LaTeX format, with no page restriction) that have been recently published elsewhere. These papers will undergo the same review process as regular submissions: if accepted, they will be presented at the workshop but will not be published in the workshop proceedings.
Authors of the best workshop papers will be encouraged to submit their contribution to a forthcoming special issue of the IEEE Transactions on Neural Networks and Learning Systems on "Learning in non-(geo)metric spaces" (submission deadline: October 1, 2013). Further details about the special issue can be downloaded here.