Search results: 1 resource
Evaluation measures act as objective functions to be optimized by information retrieval systems. Such objective functions must accurately reflect user requirements, particularly when tuning IR systems and learning ranking functions. Ambiguity in queries and redundancy in retrieved documents are poorly reflected by current evaluation measures. In this paper, we present a framework for evaluation that systematically rewards novelty and diversity. We develop this framework into a specific evaluation measure, based on cumulative gain. We demonstrate the feasibility of our approach using a test collection based on the TREC question answering track.
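The novelty-rewarding cumulative-gain measure described in this abstract is commonly known as α-nDCG. As a hedged illustration (not the authors' exact formulation), the sketch below computes an unnormalized α-DCG: each document's gain from an information nugget is discounted by (1 − α) for every earlier document in the ranking that already covered that nugget, so redundant documents contribute less than novel ones. The function names, the `doc_nuggets` mapping, and the example documents are all hypothetical.

```python
import math

def alpha_dcg(ranked_docs, doc_nuggets, alpha=0.5, depth=10):
    """Cumulative gain with a novelty penalty (alpha-DCG sketch).

    ranked_docs: document ids in ranked order.
    doc_nuggets: maps a document id to the set of nuggets
                 (distinct information needs) it covers.
    alpha:       probability-of-redundancy parameter; gain for a
                 nugget seen k times before is (1 - alpha) ** k.
    """
    seen = {}  # nugget -> number of earlier documents covering it
    score = 0.0
    for rank, doc in enumerate(ranked_docs[:depth], start=1):
        gain = 0.0
        for nugget in doc_nuggets.get(doc, ()):
            k = seen.get(nugget, 0)
            gain += (1 - alpha) ** k       # diminishing reward for repeats
            seen[nugget] = k + 1
        score += gain / math.log2(rank + 1)  # standard rank discount
    return score
```

Under this sketch, a ranking that surfaces a new nugget early outscores one that repeats an already-covered nugget, which is the behavior the abstract's framework is designed to reward.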
Explore

- Topic
  - Field of study: Computer science (1)
  - Contribution: Evaluation model (1)
- Resource type: Conference Paper (1)
- Publication year: 2008 (1)
- Resource language: English (1)