University of Bahrain
Scientific Journals

Deep Feature segmentation model Driven by Hybrid Convolution Network for Hyper Spectral Image Classification


dc.contributor.author Ghotekar, Rahul K
dc.contributor.author Shaw, Kailash
dc.contributor.author Rout, Minakhi
dc.date.accessioned 2024-01-09T16:19:03Z
dc.date.available 2024-01-09T16:19:03Z
dc.date.issued 2024-01-09
dc.identifier.issn 2210-142X
dc.identifier.uri https://journal.uob.edu.bh:443/handle/123456789/5333
dc.description.abstract Hyperspectral image (HSI) classification supports applications such as agriculture, military, city planning, land utilization, and the identification of distinct regions, and it is treated as a crucial topic in the research community. Recent advances in convolutional neural networks (CNNs) have shown a unique capability for extracting meaningful features and performing classification. However, a CNN works on square images of fixed dimensions and cannot extract local information from images with distinct geometric variations and context-content relationships; hence there is room for improvement in correctly identifying class boundaries. Encouraged by these facts, we propose an HSI feature segmentation model driven by a hybrid convolution network (GCNN-RESNET152) for HSI classification. First, a CNN pre-trained on ImageNet is used to obtain multilayer features. Second, the 3D discrete wavelet transform image is fed into a graph convolution network (GCN) to obtain patch-to-patch correlation feature maps. The two sets of features are then integrated using a three-weighted-coefficient concatenation method. Finally, a linear classifier predicts the semantic class of each HSI pixel. The proposed model is tested on four benchmark datasets: Houston University (HU), Indian Pines (IP), Kennedy Space Station (KSS), and Pavia University (PU). The results are compared with state-of-the-art algorithms and found to be superior in terms of overall, average, and kappa accuracy. The overall, average, and kappa accuracies achieved are 97.7%, 99.4%, and 95.6% for HU; 97.7%, 99.4%, and 95.6% for IP; 97.48%, 99.68%, and 96.43% for KSS; and 97.7%, 99.4%, and 95.6% for PU, which is 5 to 8% higher than state-of-the-art methods. (A minimal illustrative sketch of the fusion pipeline follows this record.) en_US
dc.language.iso en en_US
dc.publisher University of Bahrain en_US
dc.subject Hybrid Convolution Network, Hyper-spectral image, classification, deep feature segmentation en_US
dc.title Deep Feature segmentation model Driven by Hybrid Convolution Network for Hyper Spectral Image Classification en_US
dc.identifier.doi http://dx.doi.org/10.12785/ijcds/160153
dc.volume 16 en_US
dc.issue 1 en_US
dc.pagestart 719 en_US
dc.pageend 738 en_US
dc.contributor.authorcountry India en_US
dc.contributor.authorcountry India en_US
dc.contributor.authorcountry India en_US
dc.contributor.authoraffiliation School of Computer Engineering, KIIT Deemed to be University en_US
dc.contributor.authoraffiliation Department of AIML, Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University) en_US
dc.contributor.authoraffiliation School of Computer Engineering, KIIT Deemed to be University en_US
dc.source.title International Journal of Computing and Digital Systems en_US
dc.abbreviatedsourcetitle IJCDS en_US
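
The abstract above outlines a two-branch pipeline: multilayer features from an ImageNet-pretrained CNN, patch-to-patch correlation features from a GCN applied to the 3D discrete wavelet transform of the image, a three-weighted-coefficient concatenation, and a linear classifier. The Python (PyTorch) code below is a minimal sketch of that idea only: the class name HybridHSIClassifier, the two-layer GCN, the fusion weights (0.4, 0.3, 0.3), the 16-class default, and the assumption that each HSI patch is first reduced to three channels (e.g., by PCA) before the ResNet-152 branch are all illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torchvision.models as models

class HybridHSIClassifier(nn.Module):
    """Sketch of a GCNN-RESNET152-style fusion model (illustrative, not the paper's code)."""

    def __init__(self, gcn_in_dim, gcn_out_dim=256, num_classes=16,
                 fusion_weights=(0.4, 0.3, 0.3)):
        super().__init__()
        # Branch 1: ImageNet-pretrained ResNet-152, truncated before the final FC layer,
        # so it returns a globally pooled 2048-d feature per patch (torchvision >= 0.13).
        backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        # Branch 2: a plain two-layer graph convolution over the 3D-DWT patch graph;
        # the adjacency matrix encodes patch-to-patch similarity and is precomputed.
        self.gcn1 = nn.Linear(gcn_in_dim, gcn_out_dim)
        self.gcn2 = nn.Linear(gcn_out_dim, gcn_out_dim)
        # Assumed three weighting coefficients for the concatenation step.
        self.w = fusion_weights
        self.classifier = nn.Linear(2048 + 2 * gcn_out_dim, num_classes)

    def gcn_forward(self, x, adj):
        # x: (N, gcn_in_dim) node features; adj: (N, N) normalized adjacency.
        h1 = torch.relu(adj @ self.gcn1(x))
        h2 = torch.relu(adj @ self.gcn2(h1))
        return h1, h2

    def forward(self, patches_rgb, dwt_nodes, adj):
        # patches_rgb: (N, 3, H, W) HSI patches reduced to 3 channels (assumed, e.g. PCA).
        f_cnn = self.cnn(patches_rgb).flatten(1)        # (N, 2048)
        f_g1, f_g2 = self.gcn_forward(dwt_nodes, adj)   # (N, gcn_out_dim) each
        # Weighted concatenation of CNN and GCN features, then a linear classifier.
        fused = torch.cat([self.w[0] * f_cnn,
                           self.w[1] * f_g1,
                           self.w[2] * f_g2], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = HybridHSIClassifier(gcn_in_dim=64)
    # Toy batch of 8 patches; an identity matrix stands in for the patch-similarity graph.
    logits = model(torch.randn(8, 3, 224, 224), torch.randn(8, 64), torch.eye(8))
    print(logits.shape)  # torch.Size([8, 16])

The weighted concatenation here simply scales each branch before torch.cat; the paper's exact weighting and feature-extraction scheme may differ.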

