What is it about?

Improving the classification accuracy of fused SAR and optical data based on feature space dimensionality reduction, supervised feature selection, and learning

Why is it important?

In multi-sensor data fusion based on multiple features, the high dimensionality of the feature space increases runtime and computational complexity. The present study proposes a new algorithm, RS–LDASR, which combines the random subspace (RS) method with linear discriminant analysis and sparse regularisation (LDASR) for feature-space dimensionality reduction, supervised feature selection, and learning. Using RSs effectively addresses both the high dimensionality and the high feature-to-instance ratio. Extracting multiple features from the images also raises the possibility of correlation between features, which reduces classification accuracy. In this study, after several RSs were constructed, supervised feature selection and learning based on LDASR were applied, yielding very high accuracy.

The approach was tested by applying feature-based fusion to two pairs of fused synthetic aperture radar (SAR) and optical data for remote sensing classification. Four feature matrices were constructed using attribute profiles (APs), multi-APs (MAPs), non-negative matrix factorisation (NMF), and textural features. Support vector machine and rotation forest were used as the base classifiers. The results show that RS–LDASR significantly improved classification accuracy when based on NMF plus texture features, and even on NMF alone.
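As a rough illustration of the pipeline described above, the sketch below builds several random subspaces of a fused feature matrix and, within each subspace, applies supervised feature selection and learning before combining the per-subspace predictions by majority vote. This is not the authors' implementation: it assumes scikit-learn, approximates the sparse regularisation step with an L1-penalised linear SVM used for feature selection, uses the standard LinearDiscriminantAnalysis in place of LDASR's discriminant learning, takes an RBF support vector machine as the base classifier, and fills the feature matrix X and labels y with random placeholders; the subspace count and size are likewise illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)

# Placeholder data: in the study this would be the fused per-pixel feature matrix
# (e.g. NMF components plus texture features) and the corresponding class labels.
X = rng.normal(size=(500, 120))
y = rng.integers(0, 4, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

n_subspaces, subspace_dim = 10, 40  # illustrative values, not the paper's settings
subspace_preds = []

for _ in range(n_subspaces):
    # 1) Random subspace: draw a random subset of the feature dimensions.
    idx = rng.choice(X.shape[1], size=subspace_dim, replace=False)

    # 2) Supervised feature selection and learning inside the subspace. The
    #    L1-penalised LinearSVC is a stand-in for the paper's sparse
    #    regularisation (keeping features whose coefficient magnitude is at
    #    least the median), followed by an LDA projection and an SVM classifier.
    clf = make_pipeline(
        StandardScaler(),
        SelectFromModel(
            LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=5000),
            threshold="median",
        ),
        LinearDiscriminantAnalysis(),
        SVC(kernel="rbf"),
    )
    clf.fit(X_train[:, idx], y_train)
    subspace_preds.append(clf.predict(X_test[:, idx]))

# 3) Combine the subspace classifiers by per-sample majority vote.
stacked = np.stack(subspace_preds)  # shape: (n_subspaces, n_test)
ensemble_pred = np.array([np.bincount(col).argmax() for col in stacked.T])
print("ensemble accuracy:", (ensemble_pred == y_test).mean())
```

In practice, X would hold the stacked features (APs, MAPs, NMF, textures) extracted from the co-registered SAR and optical images, and a rotation forest could replace the SVM as the base classifier, as in the paper.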

Read the Original

This page is a summary of: Effective supervised multiple-feature learning for fused radar and optical data classification, IET Radar, Sonar & Navigation, May 2017, the Institution of Engineering and Technology (the IET), DOI: 10.1049/iet-rsn.2016.0346.
