What is it about?

The purpose of this paper is to test how reliably query intents can be derived from queries, whether by the user who entered the query or by an external juror. We report the findings of three studies. First, we conducted a large-scale classification study of approximately 50,000 queries using a crowdsourcing approach. Then, we used click-through data from a search engine log to validate the judgments given by the jurors in the crowdsourcing study. Finally, we conducted an online survey on a commercial search engine's portal. Since we used the same queries in all three studies, we were also able to compare the results and the effectiveness of the different approaches.

We found that neither the crowdsourcing approach, in which jurors classified queries originating from other users, nor the questionnaire approach, in which searchers were asked about a query they had just entered into a web search engine, leads to satisfactory results. This leads us to conclude that the classification task is poorly understood, even though both groups were given detailed instructions. Although we used manual classification, our findings have important implications for automatic classification as well: the success of approaches that classify queries automatically and evaluate their performance against a baseline derived from human jurors must be called into question.
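As an illustration of how such reliability could be quantified, the sketch below computes Cohen's kappa, a chance-corrected agreement measure, between juror labels and searchers' self-reported intents. This is not code or data from the paper: the labels are hypothetical, and the three intent classes are assumed here only as the categories commonly used in this line of work.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two equally long label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: share of items where both sources assign the same label.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, based on each source's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels: crowdsourced jurors vs. the searchers' own statements.
juror_labels    = ["informational", "navigational", "informational", "transactional", "informational"]
searcher_labels = ["informational", "informational", "informational", "transactional", "navigational"]

print(f"Cohen's kappa: {cohen_kappa(juror_labels, searcher_labels):.2f}")
```

Low kappa values in such a comparison would indicate that the two sources of intent judgments disagree beyond what detailed instructions can fix, which is the kind of unreliability the paper reports.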

Read the Original

This page is a summary of: Deriving query intents from web search engine queries, Journal of the American Society for Information Science and Technology, August 2012, Wiley. DOI: 10.1002/asi.22706.