An observation of different clustering algorithms and clustering evaluation criteria for a feature selection based on linear discriminant analysis


Bibliographic Details
Main Authors: Tie, K. H., Senawi, A., Chuan, Z. L.
Format: Book Section
Language:English
Published: Springer Nature Singapore Pte. Ltd. 2022
Online Access:http://umpir.ump.edu.my/id/eprint/35517/1/FULL%20TEXT%20PAPER.pdf
http://umpir.ump.edu.my/id/eprint/35517/
https://doi.org/10.1007/978-981-19-2095-0_42
Description
Summary: Linear discriminant analysis (LDA) is a very popular method for dimensionality reduction in machine learning. Yet, LDA cannot be applied directly to unsupervised data because it requires class labels to train the algorithm. Thus, a clustering algorithm is needed to predict the class labels before LDA can be utilized. However, different clustering algorithms have different parameters that need to be specified. The objective of this paper is to investigate how these parameters behave with respect to a measurement criterion for feature selection, namely the total error reduction ratio (TERR). The k-means and Gaussian mixture distribution algorithms were adopted for clustering, and each was tested on four datasets with four distinct clustering evaluation criteria: Calinski-Harabasz, Davies-Bouldin, Gap, and Silhouette. Overall, k-means outperforms the Gaussian mixture distribution in selecting smaller feature subsets. It was found that when a certain TERR threshold is set and the k-means algorithm is applied, the Calinski-Harabasz, Davies-Bouldin, and Silhouette criteria yield the same number of selected features, which is smaller than the feature subset size given by the Gap criterion. When the Gaussian mixture distribution algorithm is adopted, none of the criteria consistently selects the smallest number of features. The higher the TERR threshold is set, the larger the selected feature subset becomes, regardless of which clustering algorithm and clustering evaluation criterion are used. These results are essential in directing future work on designing a robust unsupervised feature selection method based on LDA.
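The cluster-then-LDA pipeline described in the abstract can be sketched as follows. This is an illustrative reconstruction using scikit-learn on synthetic data, not the paper's code: the Silhouette criterion stands in for any of the four evaluation criteria studied, k-means generates the surrogate class labels, and the dataset, cluster range, and random seeds are arbitrary choices for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import silhouette_score

# Synthetic unlabeled data standing in for the paper's datasets.
X, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=0)

# Choose the number of clusters with a clustering evaluation criterion
# (Silhouette here; Calinski-Harabasz or Davies-Bouldin could be swapped in
# via sklearn.metrics -- the Gap statistic is not built into scikit-learn).
best_k, best_score = None, -np.inf
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

# Use the predicted cluster labels as surrogate class labels so that
# supervised LDA can be trained on otherwise unlabeled data.
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
Z = LinearDiscriminantAnalysis().fit_transform(X, labels)
print(best_k, Z.shape)  # LDA yields at most best_k - 1 components
```

In a full feature-selection setting, the TERR measure described in the paper would then score candidate feature subsets against these LDA projections; that step is omitted here since the abstract does not define TERR's computation.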