Abstract
In some pattern analysis problems, expert knowledge is available in addition to the original data involved in the classification process. The vast majority of existing approaches simply ignore such auxiliary (privileged) knowledge. Recently, a new paradigm, learning using privileged information (LUPI), was introduced in the framework of SVM+. This approach is formulated for binary classification and, as is typical for many kernel-based methods, can scale unfavorably with the number of training examples. While faster training methods and multiclass extensions of SVM+ are possible, in this paper we present a more direct, novel methodology for incorporating valuable privileged knowledge in the model construction phase, formulated primarily in the framework of generalized matrix learning vector quantization (GMLVQ). This is done by changing the global metric in the input space, based on distance relations revealed by the privileged information. Hence, unlike with SVM+, any convenient classifier can be applied after such metric modification, bringing greater flexibility to the problem of incorporating privileged information during training. Experiments demonstrate that manipulating the input-space metric based on privileged data improves classification accuracy. Moreover, our methods achieve competitive performance against the SVM+ formulations.
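The core idea sketched in the abstract can be illustrated with a toy example. The snippet below is a minimal, hypothetical sketch (not the paper's GMLVQ formulation): it fits a diagonal metric on the original features by least squares, so that squared distances under that metric approximate squared distances in the privileged space; the rescaled features can then be fed to any standard classifier. All function names and the pair-sampling scheme are illustrative assumptions.

```python
import numpy as np

def learn_privileged_metric(X, Z, n_pairs=2000, seed=0):
    """Fit a diagonal metric (feature weights w) on the original data X so
    that weighted squared distances approximate squared distances in the
    privileged representation Z, via least squares over random sample pairs.
    This is a simplified stand-in for the metric modification described in
    the abstract, not the GMLVQ-based method itself."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    A = (X[i] - X[j]) ** 2                   # per-feature squared gaps
    b = np.sum((Z[i] - Z[j]) ** 2, axis=1)   # privileged-space distances
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, None)             # clip to keep the metric valid

def transform(X, w):
    """Rescale features so plain Euclidean distance on the output equals
    the learned weighted metric on the input."""
    return X * np.sqrt(w)
```

After `transform`, any off-the-shelf classifier (nearest neighbor, for instance) operates in the modified metric, which is the flexibility the abstract contrasts with SVM+.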
Original language | English |
---|---|
Article number | 6488857 |
Pages (from-to) | 1086-1098 |
Number of pages | 13 |
Journal | IEEE Transactions on Neural Networks and Learning Systems |
Volume | 24 |
Issue number | 7 |
Publication status | Published - Jul 2013 |
Keywords
- Distance metric learning (DML)
- generalized matrix learning vector quantization (GMLVQ)
- information theoretic metric learning (ITML)
- learning using privileged information (LUPI)