Data classification with k-NN using novel character frequency-direct word frequency (CF-DWF) similarity formula


Bibliographic Details
Main Authors: Zardari, M.A., Jung, L.T.
Format: Conference or Workshop Item
Published: Institute of Electrical and Electronics Engineers Inc. 2016
Online Access:https://www.scopus.com/inward/record.uri?eid=2-s2.0-84995551134&doi=10.1109%2fISMSC.2015.7594066&partnerID=40&md5=449ec4f765f99240969706e2a6057759
http://eprints.utp.edu.my/30930/
Description
Summary:The k-NN is one of the most popular and easiest-to-implement algorithms for classifying data. A key strength of k-NN is that it readily accommodates improved variants. Despite its many advantages, k-NN faces several issues: the complexity of distance/similarity calculation, the complexity of handling the training dataset at the classification phase, the proper selection of k, and duplicate values when the training dataset consists of a single class. This paper focuses only on the issue of distance/similarity calculation complexity. To avoid this complexity, a new distance formula is proposed. The CF-DWF formula applies only to strings; it is not applicable to other data types. The F1-score and precision of k-NN with CF-DWF are higher than those of traditional k-NN. The proposed similarity formula is also more efficient than Euclidean Distance (E.D) and Cosine Similarity (C.S). The results section shows that k-NN with CF-DWF reduced the computational complexity of k-NN with E.D and C.S by 4.77 to 43.69 and improved the F1-score of traditional k-NN by 12 to 19. © 2015 IEEE.
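The record does not reproduce the CF-DWF formula itself. As an illustrative sketch only, the general pattern the paper describes (k-NN over strings with a character-frequency-based similarity in place of Euclidean Distance or Cosine Similarity) might look like the following; the `char_freq_similarity` function here is a hypothetical stand-in, not the authors' actual CF-DWF formula:

```python
from collections import Counter

def char_freq_similarity(a: str, b: str) -> float:
    # Hypothetical character-frequency similarity: overlap of character
    # counts, normalized by the longer string's length. This is a
    # placeholder, NOT the CF-DWF formula, which the record does not give.
    ca, cb = Counter(a), Counter(b)
    overlap = sum(min(ca[ch], cb[ch]) for ch in ca)
    return overlap / max(len(a), len(b), 1)

def knn_classify(query: str, training: list[tuple[str, str]], k: int = 3) -> str:
    # Rank labeled training strings by similarity to the query and
    # return the majority label among the top k neighbours.
    ranked = sorted(training,
                    key=lambda item: char_freq_similarity(query, item[0]),
                    reverse=True)
    top_labels = [label for _, label in ranked[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy example with made-up training strings:
docs = [("spam offer win", "spam"),
        ("win cash spam", "spam"),
        ("meeting agenda notes", "ham")]
print(knn_classify("win spam cash", docs, k=3))  # → spam
```

The point of such a frequency-based measure, as the abstract argues, is that comparing small character-count tables can be cheaper than computing full vector-space distances such as E.D or C.S over high-dimensional term vectors.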