A deep contractive autoencoder for solving multiclass classification problems
Main Authors:
Format: Article
Language: English
Published: Springer Berlin Heidelberg, 2020
Online Access:
http://eprints.uthm.edu.my/6386/1/AJ%202020%20%28313%29.pdf
http://eprints.uthm.edu.my/6386/
https://doi.org/10.1007/s12065-020-00424-6
Summary: The contractive autoencoder (CAE) is one of the most robust variants of the standard autoencoder (AE). The major drawback of the conventional CAE is its high reconstruction error while encoding and decoding the input features. This drawback prevents the CAE from capturing the finer details present in the input features, so information worth considering is missed. As a result, the features extracted by the CAE do not truly represent all the input features, and the classifier fails to solve classification problems efficiently. In this work, an improved variant of the CAE is proposed, based on a layered architecture with a feed-forward mechanism, named the deep CAE. In the proposed architecture, standard CAEs are arranged in layers, and encoding and decoding take place inside each layer. The features obtained from the previous CAE are given as inputs to the next CAE. Each CAE in every layer is responsible for reducing the reconstruction error, thereby yielding informative features. The feature set obtained from the last CAE is given as input to a softmax classifier for classification. The performance and efficiency of the proposed model have been tested on five MNIST variant datasets. The results have been compared with the standard SAE, DAE, RBM, SCAE, ScatNet and PCANet in terms of training error, testing error and execution time. The results reveal that the proposed model outperforms the aforementioned models.
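The summary describes a greedy, layer-wise stack of contractive autoencoders whose final features feed a softmax classifier. Below is a minimal PyTorch sketch of that idea, assuming sigmoid activations and the standard Frobenius-norm Jacobian penalty for the contractive term; the layer sizes, penalty weight `lam`, and training hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of a stacked (deep) contractive autoencoder, assuming
# sigmoid encoders and the usual ||J||_F^2 contractive penalty. All
# hyperparameters below are illustrative, not the paper's settings.
import torch
import torch.nn as nn

class CAELayer(nn.Module):
    """One contractive autoencoder: an encoder, a decoder, and a Jacobian penalty."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return h, torch.sigmoid(self.dec(h))

    def contractive_penalty(self, h):
        # For a sigmoid encoder: ||J||_F^2 = sum_j (h_j(1-h_j))^2 * ||W_j||^2
        w_sq = (self.enc.weight ** 2).sum(dim=1)            # squared row norms, one per hidden unit
        return (((h * (1 - h)) ** 2) * w_sq).sum(dim=1).mean()

def train_layer(layer, data, lam=0.1, epochs=20, lr=1e-3):
    """Train one CAE on `data`, then return its hidden features for the next layer."""
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        h, recon = layer(data)
        loss = mse(recon, data) + lam * layer.contractive_penalty(h)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sigmoid(layer.enc(data))

# Greedy layer-wise stacking: each CAE is trained on the previous CAE's
# features; the last feature set is the input to a softmax classifier.
x = torch.rand(256, 784)                                    # stand-in for flattened MNIST vectors
y = torch.randint(0, 10, (256,))                            # stand-in class labels
features, sizes = x, [784, 256, 64]
for n_in, n_hid in zip(sizes[:-1], sizes[1:]):
    features = train_layer(CAELayer(n_in, n_hid), features)

clf = nn.Linear(sizes[-1], 10)                              # softmax is implicit in CrossEntropyLoss
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
for _ in range(20):
    loss = ce(clf(features), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The penalty term keeps each encoder locally contractive (small input perturbations produce small changes in the hidden code), which is what lets the stacked features stay robust while each layer works to reduce its own reconstruction error.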