JOURNAL ARTICLE

On Crowdsourcing Relevance Magnitudes for Information Retrieval Evaluation

Eddy Maddalena, Stefano Mizzaro, Falk Scholer, Andrew Turpin

Year: 2017   Journal: ACM Transactions on Information Systems   Vol: 35 (3)   Pages: 1-32

Abstract

Magnitude estimation is a psychophysical scaling technique for the measurement of sensation, where observers assign numbers to stimuli in response to their perceived intensity. We investigate the use of magnitude estimation for judging the relevance of documents for information retrieval evaluation, carrying out a large-scale user study across 18 TREC topics and collecting over 50,000 magnitude estimation judgments using crowdsourcing. Our analysis shows that magnitude estimation judgments can be reliably collected using crowdsourcing, are competitive in terms of assessor cost, and are, on average, rank-aligned with ordinal judgments made by expert relevance assessors. We explore the application of magnitude estimation for IR evaluation, calibrating two gain-based effectiveness metrics, nDCG and ERR, directly from user-reported perceptions of relevance. A comparison of TREC system effectiveness rankings based on binary, ordinal, and magnitude estimation relevance shows substantial variation; in particular, the top systems ranked using magnitude estimation and ordinal judgments differ substantially. Analysis of the magnitude estimation scores shows that this effect is due in part to varying perceptions of relevance: different users have different perceptions of the impact of relative differences in document relevance. These results have direct implications for IR evaluation, suggesting that current assumptions about a single view of relevance being sufficient to represent a population of users are unlikely to hold.
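The abstract describes calibrating the gain-based effectiveness metrics nDCG and ERR from user-reported relevance. As background, the standard (uncalibrated) definitions of both metrics can be sketched in Python; the gain values and the `g_max` parameter below are illustrative examples, not the magnitude-estimation-calibrated gains studied in the paper.

```python
import math

def dcg(gains, k=None):
    """Discounted cumulative gain with a log2(rank + 1) discount."""
    gains = gains if k is None else gains[:k]
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(gains, k=None):
    """nDCG: DCG normalised by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(gains, reverse=True), k)
    return dcg(gains, k) / ideal if ideal > 0 else 0.0

def err(gains, g_max):
    """Expected Reciprocal Rank: a cascade model where the probability of
    stopping at a document with gain g is (2**g - 1) / 2**g_max."""
    p_continue, total = 1.0, 0.0
    for rank, g in enumerate(gains, start=1):
        r = (2 ** g - 1) / (2 ** g_max)
        total += p_continue * r / rank
        p_continue *= (1 - r)
    return total
```

Calibration in the paper's sense would replace the fixed exponential gain mapping with gain values derived from the crowdsourced magnitude estimation scores.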

Keywords:
Relevance; Crowdsourcing; Computer science; Magnitude estimation; Learning to rank; Information retrieval; Statistics; Data mining; Ranking (information retrieval); Mathematics

Metrics

Cited By: 129
FWCI (Field-Weighted Citation Impact): 42.27
Refs: 46
Citation Normalized Percentile: 0.99 (in top 1%)

Topics

Mobile Crowdsensing and Crowdsourcing
Physical Sciences →  Computer Science →  Computer Science Applications
Information Retrieval and Search Behavior
Physical Sciences →  Computer Science →  Information Systems
Advanced Text Analysis Techniques
Physical Sciences →  Computer Science →  Artificial Intelligence

Related Documents

BOOK-CHAPTER

Crowdsourcing for Information Retrieval Experimentation and Evaluation

Omar Alonso

Series: Lecture Notes in Computer Science   Year: 2011   Pages: 2-2
JOURNAL ARTICLE

Crowdsourcing for information retrieval

Matthew Lease, Emine Yılmaz

Journal: ACM SIGIR Forum   Year: 2012   Vol: 45 (2)   Pages: 66-75
BOOK-CHAPTER

Crowdsourcing for Information Retrieval

Dmitry Ustalov, Alisa Smirnova, Natalia Fedorova, Nikita Pavlichenko

Series: Lecture Notes in Computer Science   Year: 2023   Pages: 357-361
JOURNAL ARTICLE

Crowdsourcing for information retrieval

Omar Alonso, Matthew Lease

Year: 2011 Pages: 1299-1300