What is it about?
Correlation learning is a technique used to find a common representation across cross-domain and multiview datasets. However, most existing methods are not robust to noise, so the learned common representation matrix can easily be corrupted by noisy samples in the different instances of the data. In this paper, we propose a novel correlation learning method based on a low-rank representation, which learns a common representation between two instances of data in a latent subspace. Specifically, we first learn a low-rank representation matrix and an orthogonal rotation matrix to handle the noisy samples in one instance of the data, so that the second instance can linearly reconstruct the low-rank representation. Our method then finds a similarity matrix that closely approximates the common low-rank representation, with a rank constraint on its Laplacian matrix that reveals the clustering structure explicitly, without any spectral postprocessing. Extensive experiments on the ORL, Yale, Coil-20, Caltech 101-20, and UCI digits datasets demonstrate that our method outperforms state-of-the-art methods on six evaluation metrics.
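The rank constraint mentioned above rests on a standard result from spectral graph theory: if the Laplacian of a similarity graph over n samples has rank n − c, the graph has exactly c connected components, so cluster assignments can be read off directly. The following is a minimal sketch of that property only (using a toy hand-built similarity matrix, not the paper's learned one), not an implementation of the proposed method:

```python
import numpy as np

# Toy block-diagonal similarity matrix S for n = 6 samples forming
# c = 2 clusters (two disconnected triangles). This matrix is made up
# for illustration; the paper learns S from data.
S = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Graph Laplacian L = D - S, where D is the diagonal degree matrix.
L = np.diag(S.sum(axis=1)) - S

n = S.shape[0]
rank = np.linalg.matrix_rank(L)

# rank(L) = n - c  =>  number of clusters c = n - rank(L)
print(n - rank)  # -> 2
```

Constraining rank(L) = n − c during learning therefore forces the similarity graph to have exactly c components, which is why no spectral postprocessing (eigen-decomposition plus k-means) is needed afterward.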
Why is it important?
This work is noteworthy because it can simultaneously exploit cross-domain and multiview data to improve clustering performance.
Read the Original
This page is a summary of: Low Rank Correlation Representation and Clustering, Scientific Programming, February 2021, Hindawi Publishing Corporation, DOI: 10.1155/2021/6639582.