
Multi-layer Extreme Learning Machine-based Autoencoder for Hyperspectral Image Classification

Topics: Active and Robot Vision; Features Extraction; Imaging for Cultural Heritage (Modeling/Simulation, Virtual Restoration); Machine Learning Technologies for Vision; Multimodal and Multi-Sensor Models of Image Formation

Authors: Muhammad Ahmad 1 ; Adil Khan 2 ; Manuel Mazzara 2 and Salvatore Distefano 3

Affiliations: 1 Innopolis University, Innopolis, Russia and University of Messina, Messina, Italy; 2 Innopolis University, Innopolis, Russia; 3 University of Messina, Messina, Italy

ISBN: 978-989-758-354-4

Keyword(s): Extreme Learning Machine (ELM), Deep Neural Networks (DNN), Auto Encoder (AE), Hyperspectral Image Classification.

Related Ontology Subjects/Areas/Topics: Active and Robot Vision ; Applications and Services ; Computer Vision, Visualization and Computer Graphics ; Features Extraction ; Image and Video Analysis ; Image Formation and Preprocessing ; Imaging for Cultural Heritage (Modeling/Simulation, Virtual Restoration) ; Motion, Tracking and Stereo Vision ; Multimodal and Multi-Sensor Models of Image Formation

Abstract: Hyperspectral imaging (HSI) has attracted considerable interest from the scientific community and has been applied to a growing number of real-life applications to automatically extract meaningful information from the corresponding high-dimensional datasets. However, traditional autoencoders (AE) and restricted Boltzmann machines are computationally expensive and do not perform well due to the Hughes phenomenon, which arises in HSI because the ratio of labeled training pixels to the number of spectral bands is usually quite small. To overcome these problems, this paper exploits a multi-layer extreme learning machine-based autoencoder (MLELM-AE) for HSI classification. The underlying ELM-based autoencoder learns feature representations through singular value decomposition and serves as the basic building block of MLELM-AE. The MLELM-AE method not only retains the fast training speed of the traditional ELM but also greatly improves HSI classification performance. Experimental results demonstrate the effectiveness of MLELM-AE on several well-known HSI datasets.
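As a rough illustration (not the authors' released code), the NumPy sketch below implements the standard ELM-AE building block and its multi-layer stacking: hidden weights are random and orthogonalized, and the output weights come from the usual closed-form ridge solution, which can equivalently be computed via SVD as the abstract describes. Function names, layer sizes, and the regularization parameter reg are illustrative assumptions.

import numpy as np

def elm_autoencoder(X, n_hidden, reg=1e-3, seed=0):
    # Single ELM-AE layer: random (orthogonalized) hidden mapping, output
    # weights solved in closed form -- no backpropagation involved.
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))
    if n_hidden <= n_features:
        W, _ = np.linalg.qr(W)    # orthonormal random weights (ELM-AE convention)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)        # hidden-layer activations
    # Ridge-regularized least squares so that H @ beta reconstructs X;
    # this is the closed-form solution the ELM family relies on.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return beta                   # shape: (n_hidden, n_features)

def mlelm_features(X, layer_sizes=(100, 100, 50), reg=1e-3, seed=0):
    # Stack ELM-AE layers: each layer's beta (transposed) maps the data
    # forward, yielding progressively more abstract representations.
    Z = X
    for i, h in enumerate(layer_sizes):
        beta = elm_autoencoder(Z, h, reg=reg, seed=seed + i)
        Z = np.tanh(Z @ beta.T)
    return Z                      # features for a final classifier

In a full pipeline, a hyperspectral cube would be flattened to a pixels-by-bands matrix before calling mlelm_features, and the stacked features would then feed a final ELM (or any supervised classifier) trained on the labeled pixels.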

Full Text: available under CC BY-NC-ND 4.0


Paper citation in several formats:
Ahmad, M.; Khan, A.; Mazzara, M. and Distefano, S. (2019). Multi-layer Extreme Learning Machine-based Autoencoder for Hyperspectral Image Classification. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, ISBN 978-989-758-354-4, pages 75-82. DOI: 10.5220/0007258000750082

@conference{visapp19,
author={Muhammad Ahmad and Adil Mehmood Khan and Manuel Mazzara and Salvatore Distefano},
title={Multi-layer Extreme Learning Machine-based Autoencoder for Hyperspectral Image Classification},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP},
year={2019},
pages={75-82},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007258000750082},
isbn={978-989-758-354-4},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP
TI - Multi-layer Extreme Learning Machine-based Autoencoder for Hyperspectral Image Classification
SN - 978-989-758-354-4
AU - Ahmad, M.
AU - Khan, A.
AU - Mazzara, M.
AU - Distefano, S.
PY - 2019
SP - 75
EP - 82
DO - 10.5220/0007258000750082
ER -
