Abstract
Faced with the massive amount of information on the Web, which today includes not only texts but files of any kind (audio, video, images, etc.), users tend to lose their way when browsing, falling into what psychologists call "getting lost in hyperspace". Search engines alleviate this problem by presenting the pages that best match the user's information needs. Collecting a large fraction of the pages on the Web, extrapolating an information need expressed through often ambiguous queries, and establishing the importance of Web pages and their relevance to a query are just a few examples of the difficult problems that search engines address every day to achieve their ambitious goal. In this chapter, we introduce the concepts and the algorithms that lie at the core of modern search engines, providing running examples that ease understanding, and we comment on some recent and powerful tools and functionalities that should improve users' ability to satisfy their information needs on the Web.
Notes
- 1.
This phenomenon happened frequently years ago. Nowadays it is rare, thanks to improved transmission techniques; however, it can still occur when calling foreign countries or when making national calls at particular times of the year (e.g., New Year's Eve).
- 2.
The term indexable Web usually refers to the set of pages that can be reached by search engines. The other part of the Web, called the deep Web, includes a larger amount of information contained in pages not indexed by search engines, organized into local databases, or obtainable only through special software. An important area of current research studies the possibility of extending the functionalities of search engines to the information stored in the deep Web. The open data initiative can be classified in this direction; see, for example, linkeddata.org.
- 3.
This class of problems, called NP-hard problems, is extensively discussed in Chap. 3.
- 4.
For example, for n = 20 objects we have 2^20 > 10^6 subsets, i.e., more than one million.
- 5.
Recall that a directed graph is strongly connected if and only if there exists a directed path connecting any ordered pair of its nodes.
- 6.
Several experimental results have shown that the number n of distinct terms in a text T follows a mathematical law of the form n = k|T|^α, with k equal to a few tens, |T| being the number of words of the text, and α approximately equal to 1/2. The actual size of the Web indexed by search engines is hundreds of billions of pages, each with at least a few thousand terms, from which we derive |T| > 10^14 and hence n > 10 × 10^7 = 10^8. Thus, the dictionary can contain hundreds of millions of distinct terms, each having an arbitrary length.
- 7.
Google trusts its ranking algorithm so much that its homepage still shows the "I'm feeling lucky" button, which immediately sends the user to the top-ranked page among the results of her query.
- 8.
In a recent interview, Udi Manber (VP of Engineering at Google) revealed that some of these parameters depend on the language (ability to handle synonyms, diacritics, typos, etc.), on time (some pages are interesting for a query only if they are fresh), and on templates (extracted from the "history" of the queries issued in the past by the same user, or from her navigation of the Web).
- 9.
See, for example, Tagme (available at tagme.di.unipi.it), and Wikipedia miner (available at http://wikipedia-miner.cms.waikato.ac.nz/).
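The explosion of subsets mentioned in note 4 is easy to verify directly: each of the n objects is either in or out of a subset, so there are 2^n subsets in all. A minimal check in Python:

```python
# Each of the n objects is either included in or excluded from a subset,
# so a set of n objects has exactly 2**n subsets.
n = 20
num_subsets = 2 ** n
print(num_subsets)  # 1048576, i.e., more than one million
```

This exponential growth is why brute-force enumeration of all subsets quickly becomes infeasible for the NP-hard problems discussed in Chap. 3.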
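The strong-connectivity property recalled in note 5 can be tested with two graph searches: every node must be reachable from an arbitrary start node s, and s must be reachable from every node (equivalently, every node reaches s in the reversed graph). Below is a sketch in Python; the adjacency-dictionary representation and the function names are our own illustration, not taken from the chapter:

```python
from collections import deque

def reachable(adj, start):
    """Return the set of nodes reachable from `start` via a BFS."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_strongly_connected(adj, nodes):
    """A directed graph is strongly connected iff all nodes are reachable
    from some node s, and s is reachable from all nodes (checked by
    running the same BFS on the reversed graph)."""
    if not nodes:
        return True
    s = next(iter(nodes))
    if reachable(adj, s) != set(nodes):
        return False
    rev = {u: [] for u in nodes}          # build the reversed graph
    for u in nodes:
        for v in adj.get(u, []):
            rev[v].append(u)
    return reachable(rev, s) == set(nodes)

cycle = {0: [1], 1: [2], 2: [0]}   # a directed 3-cycle: strongly connected
chain = {0: [1], 1: [2], 2: []}    # a directed chain: not strongly connected
print(is_strongly_connected(cycle, [0, 1, 2]))  # True
print(is_strongly_connected(chain, [0, 1, 2]))  # False
```

The double-search idea is the basis of classical linear-time algorithms for computing strongly connected components, which are used to analyze the Web graph's structure.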
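The vocabulary-growth law of note 6 (n = k|T|^α) can be plugged in directly to reproduce the dictionary-size estimate. The constants below (k = 30, α = 0.5) are illustrative values within the ranges stated in the note, not measured ones:

```python
# Estimate of the number of distinct terms via the law n = k * |T|**alpha.
# k = 30 ("a few tens") and alpha = 0.5 are illustrative assumptions.
def estimated_vocabulary(text_length_in_words, k=30, alpha=0.5):
    return k * text_length_in_words ** alpha

# Hundreds of billions of pages with a few thousand words each gives
# roughly |T| = 10**11 * 10**3 = 10**14 words:
T = 10 ** 14
print(estimated_vocabulary(T))  # 300000000.0, i.e., hundreds of millions
```

This is why search-engine dictionaries must be engineered to hold hundreds of millions of distinct terms of arbitrary length.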
Acknowledgements
We would like to thank Fabrizio Luccio, who contributed to the writing of the Italian version of this chapter for Mondadori.
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this chapter
Ferragina, P., Venturini, R. (2013). Web Search. In: Ausiello, G., Petreschi, R. (eds) The Power of Algorithms. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39652-6_5
Print ISBN: 978-3-642-39651-9
Online ISBN: 978-3-642-39652-6