With a billion internet queries processed each day by search engines such as Google and Bing, the amount of data captured about how people search the internet is almost unimaginable.
The parent companies of these search engines use patterns within that data, including spelling mistakes, to refine their search capabilities, ensuring the ongoing evolution of search functions.
One tool that would greatly facilitate this process is the ability to better score the quality of internet search responses relative to the user's intention and level of satisfaction.
A four-way collaboration between Professor Alistair Moffat, deputy head of the University of Melbourne's School of Computing and Information Systems, and like-minded computer scientists at Microsoft, CSIRO and RMIT University has been working for the past two years to develop just such a tool.
Professor Moffat says the project’s implications go beyond improved user satisfaction.
"The computational costs to Google or Microsoft of processing queries are large enough that even a 10 per cent cost reduction through improved quality can amount to a substantial increase in profit margins," says Professor Moffat.
Part of the project involves “getting into the head of users” through controlled experiments to look at the fine detail of how people use search engines. The researchers are also harnessing the wisdom of the crowd by enlisting crowdsourcing services.
Participants in these experiments are typically provided with a statement and asked to devise a search query that answers a defined question. Knowing the user's intention then provides novel insights into how to score the search result links.
Overall, Professor Moffat believes that a better quality-assessment algorithm is possible.
"What we are aiming for is a mathematical description of how to give a numeric score to search results pages, providing a more nuanced evaluation of the quality of those results," he says.
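One published example of such a numeric score is rank-biased precision (RBP), an evaluation metric Professor Moffat co-developed with Justin Zobel. The sketch below is illustrative only, not the tool described in the article: it scores a results page from per-result relevance values, discounting results further down the page because users are less likely to inspect them. The persistence value of 0.8 and the sample relevance lists are assumptions chosen for the example.

```python
def rank_biased_precision(relevances, p=0.8):
    """Score a search results page with rank-biased precision (RBP).

    relevances: per-result relevance values in [0, 1], top result first.
    p: "persistence", the modelled probability that the user continues
       from one result to the next (0.8 is an illustrative choice).
    """
    # Each rank i contributes its relevance, discounted by p**i;
    # the (1 - p) factor normalises the score into [0, 1].
    return (1 - p) * sum(rel * p ** i for i, rel in enumerate(relevances))

# A page whose first result is relevant scores higher than one where
# only the third result is relevant, reflecting user impatience.
top_hit = rank_biased_precision([1, 0, 0])
deep_hit = rank_biased_precision([0, 0, 1])
```

Because the discount is geometric, the score depends most heavily on the first few results, which is the part of the page users actually see.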