Common benchmark functions for metaheuristic evaluation: a review

Bibliographic Details
Main Authors: Hussain, Kashif; Mohd Salleh, Mohd Najib; Shi, Cheng; Naseem, Rashid
Format: Article
Language: English
Published: JOIV 2017
Subjects:
Online Access:http://eprints.uthm.edu.my/4825/1/AJ%202017%20%28665%29.pdf
http://eprints.uthm.edu.my/4825/
https://dx.doi.org/10.30630/joiv.1.4-2.65
Description
Summary: In the literature, benchmark test functions have been used to evaluate the performance of metaheuristic algorithms. Algorithms that perform well on a set of numerical optimization problems are considered effective methods for solving real-world problems. Different researchers choose different sets of functions with varying configurations, as no standard or universally agreed test-bed exists. This makes it hard for researchers to select functions that can truly gauge the robustness of a newly proposed metaheuristic algorithm. This review paper attempts to provide researchers with commonly used experimental settings, including the selection of test functions with different modalities, dimensionalities, the number of experimental runs, and evaluation criteria. Hence, the proposed list of functions, drawn from the existing literature, can be readily employed as an effective test-bed for evaluating either a new algorithm or a modified variant of an existing metaheuristic. To embed more complexity in the problems, these functions can be shifted or rotated for a more robust evaluation.
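As an illustration of the shifting and rotation mentioned in the summary, the sketch below (not taken from the paper) wraps a classic Sphere benchmark so that its optimum is moved away from the origin and its coordinate axes are mixed by a random orthogonal matrix. The function names, the dimensionality, and the shift range are illustrative assumptions only.

```python
import numpy as np

def sphere(x):
    """Classic Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def shift_rotate(func, shift, rotation):
    """Wrap a benchmark so its optimum moves to `shift` and the search
    coordinates are mixed by the orthogonal matrix `rotation`."""
    def wrapped(x):
        z = rotation @ (np.asarray(x) - shift)
        return func(z)
    return wrapped

# Example: a 5-dimensional shifted, rotated Sphere function (illustrative settings).
rng = np.random.default_rng(0)
dim = 5
shift = rng.uniform(-50, 50, size=dim)
# Random orthogonal matrix obtained from a QR decomposition.
rotation, _ = np.linalg.qr(rng.standard_normal((dim, dim)))

shifted_rotated_sphere = shift_rotate(sphere, shift, rotation)
print(shifted_rotated_sphere(shift))  # ~0.0: the optimum now sits at `shift`
```

Shifting prevents algorithms from exploiting optima located at the origin or at the center of the search range, while rotation removes separability, so the transformed function is a harder test of an algorithm's robustness.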