Determining the preprocessing clustering algorithm in radial basis function neural network

Bibliographic Details
Main Authors: S.L. Ang, H.C. Ong, H.C. Law
Format: Article
Published: Penerbit UKM, 2008
Online Access: http://journalarticle.ukm.my/1889/
http://www.ukm.my/~ppsmfst/jqma/index.html
Description
Summary: Radial Basis Function Networks have been widely used to approximate and classify data. In the common radial basis function model, the centres and spreads are fixed while the weights are adjusted until the network approximates the data. Finding the best centres for the hidden layer of a Radial Basis Function network remains a problem: although clustering methods such as K-means and K-median are used to find the centres, there are no consistent results showing which is better. The main objective of this study is to determine the better method for finding the centres in Radial Basis Functional Link Nets for data classification. Three methods are used to find the centres: random selection, the K-means clustering algorithm, and the K-median clustering algorithm. The effects of the K-means and K-median clustering algorithms on centre selection for Radial Basis Functional Link Nets, in terms of accuracy and speed, are shown in this study. To decide which clustering method to use, Mardia's skewness of the data is calculated first, and the result determines whether the K-means or the K-median clustering method is applied to find the centres of the Radial Basis Function Network. This initial selection criterion based on Mardia's skewness is shown to improve the efficiency of data classification. Two sets of real data are used to demonstrate the results.
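
The sketch below is an illustrative Python outline (not the authors' implementation) of the pipeline the abstract describes: compute Mardia's multivariate skewness of the data, use it to choose between K-means and a simple Lloyd-style K-median for the RBF centres, and then fit the output weights of a Gaussian RBF network by least squares. The skewness threshold, the spread value, and the helper names (mardia_skewness, kmedian_centres, choose_centres, rbf_design_matrix) are assumptions made for illustration, and scikit-learn's KMeans stands in for whatever clustering implementation the paper used.

```python
# Hypothetical sketch of the centre-selection idea; thresholds and helpers are
# illustrative assumptions, not values taken from the paper.
import numpy as np
from sklearn.cluster import KMeans


def mardia_skewness(X):
    """Mardia's multivariate skewness b_{1,p} of the rows of X."""
    n, _ = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.pinv(np.cov(Xc, rowvar=False, bias=True))
    G = Xc @ S_inv @ Xc.T  # n x n matrix of Mahalanobis inner products
    return (G ** 3).sum() / n ** 2


def kmedian_centres(X, k, n_iter=100, seed=0):
    """Simple Lloyd-style K-median: assign by Euclidean distance, update by coordinate-wise median."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1), axis=1)
        new = np.array([np.median(X[labels == j], axis=0) if np.any(labels == j) else centres[j]
                        for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return centres


def choose_centres(X, k, skew_threshold=3.0):
    """Illustrative rule: heavily skewed data -> robust K-median, otherwise K-means.
    The threshold is an assumption, not a value from the paper."""
    if mardia_skewness(X) > skew_threshold:
        return kmedian_centres(X, k)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_


def rbf_design_matrix(X, centres, spread):
    """Gaussian basis functions with fixed centres and a common spread."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * spread ** 2))


# Usage example on synthetic data: once centres and spread are fixed,
# the output weights are obtained by ordinary least squares.
X = np.random.default_rng(1).normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
C = choose_centres(X, k=10)
Phi = rbf_design_matrix(X, C, spread=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```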