Extensions to the K-AMH algorithm for numerical clustering
The k-AMH algorithm has proven efficient in clustering categorical datasets. It can also be used to cluster numerical values with minimal modification to the original algorithm. In this paper, we present two algorithms that extend the k-AMH algorithm to the clustering of numerical values. The o...
Saved in:
Main Authors: Seman, Ali; Mohd Sapawi, Azizian
Format: Article
Language: English
Published: Universiti Utara Malaysia, 2018
Subjects: QA75 Electronic computers. Computer science
Online Access: http://repo.uum.edu.my/24940/1/JICT%2017%204%202018%20587%20599.pdf
http://repo.uum.edu.my/24940/
http://jict.uum.edu.my/index.php/current-issues-1#a
id: my.uum.repo.24940
record_format: eprints
spelling: my.uum.repo.24940 2018-10-14T02:22:05Z http://repo.uum.edu.my/24940/ Extensions to the K-AMH algorithm for numerical clustering. Seman, Ali; Mohd Sapawi, Azizian. QA75 Electronic computers. Computer science. Universiti Utara Malaysia, 2018-10. Article, PeerReviewed, application/pdf, en. http://repo.uum.edu.my/24940/1/JICT%2017%204%202018%20587%20599.pdf Seman, Ali and Mohd Sapawi, Azizian (2018) Extensions to the K-AMH algorithm for numerical clustering. Journal of ICT, 17 (4). pp. 585-599. ISSN 1675-414X. http://jict.uum.edu.my/index.php/current-issues-1#a
institution: Universiti Utara Malaysia
building: UUM Library
collection: Institutional Repository
continent: Asia
country: Malaysia
content_provider: Universiti Utara Malaysia
content_source: UUM Institutional Repository
url_provider: http://repo.uum.edu.my/
language: English
topic: QA75 Electronic computers. Computer science
description: The k-AMH algorithm has proven efficient in clustering categorical datasets. It can also be used to cluster numerical values with minimal modification to the original algorithm. In this paper, we present two algorithms that extend the k-AMH algorithm to the clustering of numerical values. The original k-AMH algorithm for categorical values uses a simple matching dissimilarity measure, whereas for numerical values it uses Euclidean distance. The first extension, denoted k-AMH Numeric I, enables the algorithm to cluster numerical values in a fashion similar to k-AMH for categorical data. The second extension, k-AMH Numeric II, adopts the cost function of the fuzzy k-Means algorithm together with Euclidean distance, and demonstrates performance similar to that of k-AMH Numeric I. The clustering performance of the two algorithms was evaluated on six real-world datasets against a benchmark algorithm, the fuzzy k-Means algorithm. The results indicate that the two algorithms are as efficient as fuzzy k-Means when clustering numerical values. Furthermore, k-AMH Numeric I obtained the highest accuracy score, 0.69, for the six datasets combined, and an ANOVA test gave a p-value of less than 0.01, indicating statistical significance at the 99% confidence level. The experimental results show that the k-AMH Numeric I and k-AMH Numeric II algorithms can be used effectively for numerical clustering. The significance of this study is that the k-AMH numeric algorithms have been demonstrated to be viable solutions for clustering numerical objects.
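The description above contrasts the simple matching dissimilarity measure used by the original k-AMH for categorical data with the Euclidean distance and fuzzy k-Means cost function adopted by the numeric extensions. The paper's own algorithms are not reproduced in this record, so the sketch below only illustrates those standard building blocks; it is a minimal Python example with hypothetical function names, not the authors' implementation.

```python
import numpy as np

def simple_matching_dissimilarity(x, y):
    """Number of attribute positions where two categorical objects differ,
    the measure used by the original k-AMH for categorical data."""
    return sum(a != b for a, b in zip(x, y))

def euclidean_distance(x, y):
    """Euclidean distance, which the numeric extensions substitute for
    simple matching when the attributes are numerical."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def fuzzy_kmeans_cost(X, centers, U, m=2.0):
    """Standard fuzzy k-Means objective: sum over objects i and clusters l of
    u[l, i]**m * ||x_i - c_l||**2, with fuzziness exponent m > 1.
    k-AMH Numeric II is described as adopting this kind of cost function
    together with Euclidean distance."""
    X = np.asarray(X, dtype=float)            # shape (n, d): objects
    centers = np.asarray(centers, dtype=float)  # shape (k, d): cluster centers
    U = np.asarray(U, dtype=float)            # shape (k, n): memberships
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)  # (k, n)
    return float(((U ** m) * d2).sum())

if __name__ == "__main__":
    # Tiny usage example with made-up data
    print(simple_matching_dissimilarity(["red", "m", "yes"], ["red", "l", "yes"]))  # 1
    print(euclidean_distance([1.0, 2.0], [4.0, 6.0]))                               # 5.0
    X = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
    C = [[0.0, 0.5], [5.0, 5.0]]
    U = [[0.9, 0.9, 0.1], [0.1, 0.1, 0.9]]
    print(round(fuzzy_kmeans_cost(X, C, U, m=2.0), 3))
```

In the fuzzy k-Means objective, U holds the membership of each object in each cluster and m controls the degree of fuzziness; swapping the dissimilarity measure (simple matching versus Euclidean) is the kind of substitution the abstract attributes to the k-AMH numeric extensions.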
format: Article
author: Seman, Ali; Mohd Sapawi, Azizian
author_sort: Seman, Ali
title: Extensions to the K-AMH algorithm for numerical clustering
publisher: Universiti Utara Malaysia
publishDate: 2018
url: http://repo.uum.edu.my/24940/1/JICT%2017%204%202018%20587%20599.pdf
http://repo.uum.edu.my/24940/
http://jict.uum.edu.my/index.php/current-issues-1#a