Prediction of the high-cost normalised discounted cumulative gain (nDCG) measure in information retrieval evaluation
Main Authors:
Format: Article
Published: Univ Sheffield Dept Information Studies, 2022
Subjects:
Online Access: http://eprints.um.edu.my/41961/
Summary: Introduction. Information retrieval systems are vital to meeting the daily information needs of users. The effectiveness of these systems has often been evaluated using the test collections approach, despite its high evaluation costs. Recent methods have been proposed that reduce evaluation costs by predicting information retrieval performance measures at higher cut-off depths from other measures computed at lower cut-off depths. The purpose of this paper is to propose two methods that address the challenge of accurately predicting the normalised discounted cumulative gain (nDCG) measure. Method. Data from selected test collections of the Text REtrieval Conference (TREC) was used. The proposed methods employ gradient boosting and linear regression models trained on topic scores of measures, partitioned by TREC track. Analysis. To evaluate the proposed methods, the coefficient of determination, Kendall's tau and Spearman correlations were used. Results. The proposed methods provide better predictions of the nDCG measure at higher cut-off depths while using other measures computed at lower cut-off depths. Conclusions. The proposed methods improve predictions of the nDCG measure while reducing evaluation costs.
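The record does not include the paper's code, so the following is a minimal Python sketch of the general idea only: computing nDCG at a cut-off depth and fitting a gradient boosting regressor to predict a deep-depth nDCG score from measures taken at a shallow depth, scored with the same three criteria the abstract names (coefficient of determination, Kendall's tau, Spearman). The synthetic data, the choice of shallow-depth features, and the exponential-gain nDCG formulation are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def dcg_at_k(rels, k):
    """DCG@k with exponential gain and log2 discount (one common formulation)."""
    rels = np.asarray(rels, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rels.size + 2))
    return float(np.sum((2.0 ** rels - 1.0) / discounts))

def ndcg_at_k(rels, k):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal (sorted) ranking."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# Synthetic stand-in for per-topic scores: each row is one (system, topic)
# pair; features are hypothetical measures at a shallow cut-off depth,
# the target plays the role of nDCG at a deeper cut-off (e.g. nDCG@100).
rng = np.random.default_rng(0)
n = 500
X = rng.random((n, 3))  # placeholders for e.g. nDCG@10, P@10, AP@10
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.05 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# The three evaluation criteria named in the abstract.
tau, _ = kendalltau(y_te, pred)
rho, _ = spearmanr(y_te, pred)
print(f"R^2: {r2_score(y_te, pred):.3f}  tau: {tau:.3f}  rho: {rho:.3f}")
```

A linear regression baseline (the paper's second model family) drops in by replacing `GradientBoostingRegressor` with scikit-learn's `LinearRegression`; the rank correlations matter here because a predictor that preserves the ordering of systems is often sufficient for comparative evaluation even if absolute scores drift.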