Abstract
In machine learning, the Gaussian process latent variable model (GP-LVM) has been widely applied to unsupervised dimensionality reduction. When supervised information, e.g., pairwise constraints or labels of the data, is available, the traditional GP-LVM cannot directly exploit it to improve the quality of the dimensionality reduction. In this case, the traditional GP-LVM must be modified so that it can handle supervised or semi-supervised learning tasks. For this purpose, we propose a new semi-supervised GP-LVM framework based on pairwise constraints. By transferring the pairwise constraints from the observed space to the latent space, constrained prior information on the latent variables is obtained. Under this constrained prior, the latent variables are optimized by maximum a posteriori (MAP) estimation. The effectiveness of the proposed algorithm is demonstrated with experiments on a variety of data sets.
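The sketch below illustrates, in broad strokes, the kind of objective the abstract describes: the standard GP-LVM data likelihood plus a constraint-derived prior over the latent coordinates, optimized by MAP. It is not the authors' implementation; the RBF kernel, the specific must-link/cannot-link penalty forms, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): MAP estimation of GP-LVM
# latent variables with a pairwise-constraint prior.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-3):
    """RBF kernel matrix with a small jitter term on the diagonal."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2) + noise * np.eye(len(X))

def neg_log_posterior(x_flat, Y, Q, must_link, cannot_link, lam=1.0):
    """GP-LVM negative log-likelihood plus an assumed constraint-based prior."""
    N, D = Y.shape
    X = x_flat.reshape(N, Q)
    K = rbf_kernel(X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # K^{-1} Y
    # GP-LVM data term: -log p(Y | X) up to an additive constant.
    nll = 0.5 * np.sum(Y * alpha) + D * np.sum(np.log(np.diag(L)))
    # Constraint prior (illustrative form): pull must-link pairs together,
    # push cannot-link pairs apart with a bounded penalty.
    prior = 0.5 * np.sum(X**2)  # Gaussian prior keeps the objective bounded
    for i, j in must_link:
        prior += np.sum((X[i] - X[j])**2)
    for i, j in cannot_link:
        prior += np.exp(-0.5 * np.sum((X[i] - X[j])**2))
    return nll + lam * prior

# Toy usage: 20 points in 5-D reduced to a 2-D latent space.
rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 5))
Q = 2
must_link = [(0, 1), (2, 3)]   # pairs believed to belong to the same class
cannot_link = [(0, 4)]         # pairs believed to belong to different classes
x0 = rng.normal(scale=0.1, size=20 * Q)
res = minimize(neg_log_posterior, x0, args=(Y, Q, must_link, cannot_link),
               method="L-BFGS-B")
X_map = res.x.reshape(20, Q)   # MAP estimate of the latent coordinates
```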
| Original language | English |
| --- | --- |
| Pages (from-to) | 2186-2195 |
| Number of pages | 10 |
| Journal | Neurocomputing |
| Volume | 73 |
| Issue number | 10-12 |
| Early online date | 25 Mar 2010 |
| DOIs | |
| Publication status | Published - Jun 2010 |
Bibliographical note
Subspace Learning / Selected papers from the European Symposium on Time Series Prediction

Keywords
- dimensionality reduction
- Gaussian process latent variable model
- pairwise constraints
- semi-supervised learning