What is it about?

A hyperspectral image is composed of many spectral bands, and this gives rise to several problems: high dimensionality, information loss, and redundant information clinging to the spectral bands all hinder hyperspectral image classification. Here we propose a ResNet Spectral-Spatial ConvLSTM model. It combines a 3D convolutional neural network with batch normalization layers to extract spectral and spatial features from the hyperspectral image simultaneously; shortcut connections to avoid the vanishing gradient problem; 2D convolutional layers to reduce computational complexity; and a Long Short-Term Memory (LSTM) layer that removes redundant information from the input image. Our model achieved better accuracy than previously proposed models, gaining 1.62%, 0.71%, 0.16%, and 0.01% on the Kennedy Space Center, Botswana, Indian Pines, and Pavia University data sets, respectively. On time-series data sets, it also reduced error by 0.49 on Electricity Production, 0.16 on International Airline Passengers, and 0.52 on Production of Shampoo over Three Years.
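For readers who want a concrete picture of the pipeline, a minimal Keras sketch of this architecture is shown below. The filter counts, kernel sizes, patch size, and the assumption of 30 PCA-reduced spectral bands are illustrative choices, not the exact configuration reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(patch=11, bands=30, n_classes=16):
    # Input: a spatial patch with `bands` spectral channels
    # (assumed here to be PCA-reduced).
    inp = layers.Input(shape=(patch, patch, bands, 1))

    # 3D convolutions + batch normalization extract joint
    # spectral-spatial features.
    x = layers.Conv3D(8, (3, 3, 7), padding='same', activation='relu')(inp)
    x = layers.BatchNormalization()(x)
    y = layers.Conv3D(8, (3, 3, 5), padding='same', activation='relu')(x)
    y = layers.BatchNormalization()(y)

    # ResNet-style shortcut connection to ease gradient flow.
    x = layers.Add()([x, y])

    # Collapse the spectral axis into channels so cheaper 2D
    # convolutions can follow.
    x = layers.Reshape((patch, patch, bands * 8))(x)
    x = layers.Conv2D(64, (3, 3), padding='same', activation='relu')(x)

    # Treat each row of the patch as a time step; the LSTM
    # filters out redundant information.
    x = layers.Reshape((patch, patch * 64))(x)
    x = layers.LSTM(128)(x)

    out = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(inp, out)

model = build_model()
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```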

Why is it important?

We propose a ResNet Spectral-Spatial ConvLSTM model for hyperspectral image classification. By leveraging 3D convolutional neural networks with shortcut connections and LSTM layers, it enhances feature extraction and mitigates information loss, resulting in superior accuracy and reduced errors across a variety of data sets.

Perspectives

The ResNet Spectral-Spatial ConvLSTM model offers a novel approach to the challenges of hyperspectral image classification. Its higher accuracy and lower errors across diverse data sets show its potential for advancing remote sensing applications.

Dr. Debajyoty Banik

Read the Original

This page is a summary of: Pooled hybrid-spectral for hyperspectral image classification, Multimedia Tools and Applications, September 2022, Springer Science + Business Media. DOI: 10.1007/s11042-022-13721-2.
You can read the full text via the DOI above.
