Bootstrapping instance-based ontology matching via unsupervised generation of training samples

Bibliographic Details
Main Authors: Abubakar, Mansir, Hamdan, Hazlina, Mustapha, Norwati, Mohd Aris, Teh Noranis
Format: Article
Language: English
Published: Little Lion Scientific R&D 2019
Online Access:http://psasir.upm.edu.my/id/eprint/80836/1/BOOTS.pdf
http://psasir.upm.edu.my/id/eprint/80836/
http://www.dit.unitn.it/~pavel/OM/articles/14Vol97No6.pdf
Description
Summary: The training set plays a key role in improving the performance of any classification task. Different techniques and methods are applied to generate training sets depending on the area of application. Researchers in the data science and semantic web communities use different kinds of generated training sets to improve classification performance and information retrieval capability. An operational Training Set Generator (TSG) should always address a minimum of two issues: (1) it must keep the computational cost of producing a reasonable outcome low, thereby reducing the computational cost of the whole system (the runtime of the TSG is near-linear, as in the blocking approach), and (2) it must produce high-quality training sets. We use LogTfIdf as the cosine similarity function of two given vectors built from Bags of Words (BoW); the tokenizer is designed specifically to handle the delimiters that frequently occur in URIs and other RDF essentials. We evaluated our UTSG on nine cross-domain benchmark ontologies publicly available on the OAEI website. The results obtained show that our UTSG outperforms the two baseline TSGs previously developed to address a similar problem.
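
As an illustration of the similarity step described in the summary, the following is a minimal Python sketch of LogTfIdf-weighted cosine similarity between Bag-of-Words vectors, with a tokenizer that splits on delimiters commonly found in URIs and RDF identifiers. The function names, the tokenizer rules, the example URIs, and the exact (1 + log tf) * log(1 + N/df) weighting are illustrative assumptions, not the authors' published implementation.

    import math
    import re
    from collections import Counter

    def tokenize(text):
        # Split camelCase, then break on '/', '#', ':', '_', '-', '.' and whitespace,
        # delimiters that frequently appear in URIs and other RDF labels (assumed rules).
        text = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
        return [t.lower() for t in re.split(r"[/#:_\-.\s]+", text) if t]

    def log_tf_idf(bag, doc_freq, n_docs):
        # Assumed weighting: (1 + log tf) * log(1 + N / df); the paper's exact
        # formula may differ.
        return {
            term: (1 + math.log(tf)) * math.log(1 + n_docs / doc_freq[term])
            for term, tf in bag.items()
        }

    def cosine(v1, v2):
        # Cosine similarity of two sparse weighted vectors.
        dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
        norm = math.sqrt(sum(w * w for w in v1.values())) * \
               math.sqrt(sum(w * w for w in v2.values()))
        return dot / norm if norm else 0.0

    # Toy usage with two hypothetical entity URIs.
    labels = ["http://example.org/onto#AuthorOfBook",
              "http://example.org/onto#BookAuthor"]
    bags = [Counter(tokenize(s)) for s in labels]
    df = Counter(t for bag in bags for t in set(bag))
    vectors = [log_tf_idf(bag, df, len(bags)) for bag in bags]
    print(cosine(vectors[0], vectors[1]))

In this toy run, the two labels share most of their tokens after URI-aware tokenization, so the weighted cosine score is high; terms appearing in only one label receive a larger idf weight and lower the score accordingly.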