Your search: 23 resources
- This paper examines the reliability of implicit feedback generated from clickthrough data in WWW search. Analyzing the users' decision process using eyetracking and comparing implicit feedback against manual relevance judgments, we conclude that clicks are informative but biased. While this makes the interpretation of clicks as absolute relevance judgments difficult, we show that relative preferences derived from clicks are reasonably accurate on average.
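The relative preferences this abstract mentions are commonly derived by pairing a clicked result with the results ranked above it that the user skipped. A minimal sketch of that idea follows; the function name and the list/position representation are illustrative, not taken from the paper.

```python
# Sketch of deriving relative preference pairs from clicks
# ("clicked result preferred over skipped results ranked above it").
# `ranking` is the ordered result list; `clicked_positions` are the
# 0-based ranks the user clicked. Names are illustrative assumptions.

def preference_pairs(ranking, clicked_positions):
    """For each clicked result, emit (clicked, skipped) pairs where the
    skipped result was ranked above the click but not clicked itself."""
    clicked = set(clicked_positions)
    pairs = []
    for pos in sorted(clicked):
        for above in range(pos):
            if above not in clicked:
                pairs.append((ranking[pos], ranking[above]))
    return pairs

# A single click at rank 3 yields a preference over each of the
# three skipped results above it.
print(preference_pairs(["d0", "d1", "d2", "d3"], [3]))
# -> [('d3', 'd0'), ('d3', 'd1'), ('d3', 'd2')]
```

Such pairs are weaker than absolute judgments, which is consistent with the abstract's finding that clicks support relative rather than absolute interpretations.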
- In a document retrieval or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstrating the usefulness of the model.
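One simple way to make the space-density idea concrete is to measure density as the mean pairwise cosine similarity of the document vectors: a vocabulary that spreads documents apart yields a lower density. This is a hedged sketch of that notion under those assumptions, not the paper's exact computation.

```python
# Sketch: document space density as mean pairwise cosine similarity.
# Lower density means documents lie farther apart in the indexing
# space, which the model associates with better retrieval performance.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def space_density(docs):
    """Mean cosine similarity over all document-vector pairs."""
    n = len(docs)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine(docs[i], docs[j]) for i, j in pairs) / len(pairs)
```

Comparing two candidate indexing vocabularies then amounts to building document vectors under each and preferring the vocabulary that gives the lower density.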
- Traditional editorial effectiveness measures, such as nDCG, remain standard for Web search evaluation. Unfortunately, these traditional measures can inappropriately reward redundant information and can fail to reflect the broad range of user needs that can underlie a Web query. To address these deficiencies, several researchers have recently proposed effectiveness measures for novelty and diversity. Many of these measures are based on simple cascade models of user behavior, which operate by considering the relationship between successive elements of a result list. The properties of these measures are still poorly understood, and it is not clear from prior research that they work as intended. In this paper we examine the properties and performance of cascade measures with the goal of validating them as tools for measuring effectiveness. We explore their commonalities and differences, placing them in a unified framework; we discuss their theoretical difficulties and limitations, and compare the measures experimentally, contrasting them against traditional measures and against other approaches to measuring novelty. Data collected by the TREC 2009 Web Track is used as the basis for our experimental comparison. Our results indicate that these measures reward systems that achieve a balance between novelty and overall precision in their result lists, as intended. Nonetheless, other measures provide insights not captured by the cascade measures, and we suggest that future evaluation efforts continue to report a variety of measures.
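The cascade models this abstract describes simulate a user who scans the list top down and stops once satisfied. As one well-known instance of this family, Expected Reciprocal Rank (ERR) can be sketched as follows; the input representation (per-rank satisfaction probabilities) is an assumption for illustration.

```python
# Minimal sketch of a cascade effectiveness measure, using Expected
# Reciprocal Rank (ERR) as the example. The simulated user reads
# ranks 1, 2, ... in order and stops at rank k with probability
# stop_probs[k-1], conditioned on not having stopped earlier.

def err(stop_probs):
    """stop_probs[i]: probability the user is satisfied by the
    document at rank i+1 (e.g. mapped from a graded judgment).
    Returns the expected reciprocal rank at which the user stops."""
    score, p_reach = 0.0, 1.0
    for rank, r in enumerate(stop_probs, start=1):
        score += p_reach * r / rank       # stop here: reward 1/rank
        p_reach *= 1.0 - r                # continue past this rank
    return score
```

Because each rank's contribution is discounted by the probability of reaching it, a redundant document high in the list suppresses the credit available to everything below, which is how cascade measures penalize redundancy.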
Explore
Topic
- Information behavior (1)
- Information retrieval (23)
- Faceted search (1)
- Implicit feedback (5)
- Ranking (8)
- Diversity (6)
- Relevance (5)
- Search log analysis (4)
Contribution
- Algorithm (7)
- Conceptual model (2)
- Empirical study (11)
- Evaluation model (2)
- Literature review (1)
- Methodology (1)
- Primer (3)
Resource type
- Book (3)
- Conference Paper (9)
- Journal Article (11)
Publication year
- Between 1900 and 1999 (5)
  - Between 1960 and 1969 (1)
    - 1960 (1)
  - Between 1970 and 1979 (1)
    - 1975 (1)
  - Between 1980 and 1989 (1)
    - 1988 (1)
  - Between 1990 and 1999 (2)
    - 1999 (2)
- Between 2000 and 2024 (18)