A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web, typically operated by search engines for the purpose of Web indexing (web spidering). Web search engines and some other websites use Web crawling or spidering software to update their own web content or their indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently.

Crawlers consume resources on visited systems and often visit sites unprompted. Issues of schedule, load, and "politeness" come into play when large collections of pages are accessed. Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent; for example, a robots.txt file can request that bots index only parts of a website, or nothing at all.

The number of Internet pages is extremely large; even the largest crawlers fall short of making a complete index. For this reason, search engines struggled to give relevant search results in the early years of the World Wide Web, before 2000. Today, relevant results are given almost instantly.

Crawlers can validate hyperlinks and HTML code. They can also be used for web scraping and data-driven programming. A web crawler is also known as a spider, an ant, an automatic indexer, or (in the FOAF software context) a Web scutter.

A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the retrieved web pages and adds them to the list of URLs still to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies. If the crawler is performing archiving of websites (web archiving), it copies and saves the information as it goes.
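The basic loop just described (seed URLs, a crawl frontier, hyperlink extraction, and a robots.txt politeness check) can be sketched in a few lines of Python. This is a minimal illustration rather than a production crawler: the `requests` and `beautifulsoup4` packages, the `crawl` function name, the page limit, and the user-agent string are all assumptions made for the example. A real crawler would also cache robots.txt per host, rate-limit requests, and normalize URLs before deduplicating them.

```python
# Minimal crawler sketch: seeds, a crawl frontier, and robots.txt politeness.
# Illustrative only; names like `crawl`, MAX_PAGES, and USER_AGENT are
# assumptions for this example, not part of any specific crawler.
from collections import deque
from urllib.parse import urljoin, urlparse
from urllib.robotparser import RobotFileParser

import requests
from bs4 import BeautifulSoup

MAX_PAGES = 100
USER_AGENT = "example-crawler/0.1"  # hypothetical user-agent string

def allowed(url: str) -> bool:
    """Check the site's robots.txt before fetching, as politeness requires."""
    parts = urlparse(url)
    rp = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()  # a real crawler would cache this per host
    except OSError:
        return False
    return rp.can_fetch(USER_AGENT, url)

def crawl(seeds: list[str]) -> dict[str, str]:
    frontier = deque(seeds)        # the crawl frontier, seeded with start URLs
    visited: dict[str, str] = {}   # URL -> page text, kept for later indexing
    while frontier and len(visited) < MAX_PAGES:
        url = frontier.popleft()
        if url in visited or not allowed(url):
            continue
        try:
            resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
        except requests.RequestException:
            continue
        visited[url] = resp.text
        # Extract hyperlinks and add them to the frontier for recursive visiting.
        for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
            frontier.append(urljoin(url, a["href"]))
    return visited
```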

Related courses (32)
CS-423: Distributed information systems
This course introduces the foundations of information retrieval, data mining and knowledge bases, which constitute the foundations of today's Web-based distributed information systems.
COM-412: Semester research project in Data Science
Individual research during the semester under the guidance of a professor or an assistant.
COM-416: Research project in communication systems II
Individual research during the semester under the guidance of a professor or an assistant.
Related lectures (34)
Random Walker Model: PageRank
Covers the Random Walker Model, PageRank algorithm, and community detection in networks.
Influence: Social Interaction and Online Metrics
Discusses social influence, online metrics, Page Rank, and the impact of money.
Social Influence and Choice
Explores social influence, democracy, individual autonomy, equality, and decision-making models.
Related publications (92)

A Large-Scale Characterization of How Readers Browse Wikipedia
Robert West, Tiziano Piccardi, Akhil Arora
Despite the importance and pervasiveness of Wikipedia as one of the largest platforms for open knowledge, surprisingly little is known about how people navigate its content when seeking information. To bridge this gap, we present the first systematic large ...
ASSOC COMPUTING MACHINERY, 2023

Using Cloud Functions as Accelerator for Elastic Data Analytics
Anastasia Ailamaki, Haoqiong Bian, Tiannan Sha
Cloud function (CF) services, such as AWS Lambda, have been applied as the new computing infrastructure in implementing analytical query engines. For bursty and sparse workloads, CF-based query engine is more elastic than the traditional query engines runn ...
ACM, 2023

Information Retrieval Under Network Uncertainty: Robust Internet Ranking
Ralf Seifert, Anna Timonina-Farkas
Internet ranking algorithms play a crucial role in information technologies and numerical analysis due to their efficiency in high dimensions and wide range of possible applications, including scientometrics and systemic risk in finance (SinkRank, DebtRank ...
INFORMS, 2022
Related concepts (35)
Search engine
A search engine is a software system that finds web pages that match a web search. They search the World Wide Web in a systematic way for particular information specified in a textual web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of hyperlinks to web pages, images, videos, infographics, articles, and other types of files. Some search engines also mine data available in databases or open directories.
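As a concrete sketch of the indexing idea mentioned above, the following Python snippet builds an inverted index mapping each term to the set of pages containing it, so a query can be answered by set intersection instead of scanning every page. The function names and the whitespace tokenization are illustrative assumptions, not any particular engine's implementation; real engines add ranking, tokenization rules, and index compression on top of this idea.

```python
# Minimal inverted-index sketch: terms -> pages containing them.
# Illustrative assumption; `pages` could be the output of a crawl.
from collections import defaultdict

def build_index(pages: dict[str, str]) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    for url, text in pages.items():
        for term in text.lower().split():  # naive whitespace tokenization
            index[term].add(url)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    # Return pages containing every query term (a simple AND query).
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results
```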
World Wide Web
The World Wide Web (WWW), commonly known as the Web, is an information system that enables information to be shared over the Internet in simplified ways meant to appeal to users beyond IT specialists and hobbyists, and that allows documents and other web resources to be accessed over the Internet according to a specific set of rules, the Hypertext Transfer Protocol (HTTP). Documents and downloadable media are made available to the network through web servers and can be accessed by programs such as web browsers.
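To make those rules concrete, here is a sketch of a raw HTTP/1.1 GET request over a plain socket, the kind of exchange a browser or crawler performs with a web server. The host example.com is just a placeholder, and real clients would use an HTTP library (and HTTPS) rather than raw sockets.

```python
# Sketch of a raw HTTP/1.1 GET: the request/response exchange HTTP defines
# between clients (browsers, crawlers) and web servers. Illustrative only.
import socket

host = "example.com"  # placeholder host
with socket.create_connection((host, 80), timeout=10) as sock:
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n\r\n"
    )
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):  # read until the server closes
        response += chunk

# The response starts with a status line and headers, then a blank line, then the body.
print(response.split(b"\r\n\r\n", 1)[0].decode())
```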
Google Search
Google Search (also known simply as Google or Google.com) is a search engine provided and operated by Google. Handling more than 3.5 billion searches per day, it has a 92% share of the global search engine market and is the most-visited website in the world. The order of search results returned by Google is based, in part, on a priority rank system called "PageRank".
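The core PageRank idea, that a page is important if important pages link to it, can be computed by power iteration over the link graph. Below is a minimal sketch on a toy graph; the damping factor 0.85 is the value commonly cited from the original PageRank paper, while the function name, iteration count, and example graph are illustrative assumptions.

```python
# PageRank power-iteration sketch on a tiny link graph. Every page that
# appears as a link target must also appear as a key in `links`.
def pagerank(links: dict[str, list[str]], d: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}        # start from a uniform distribution
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}  # teleportation term
        for p, outs in links.items():
            if not outs:                      # dangling node: spread rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:                             # share rank equally among out-links
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

# Example: a three-page web where "b" and "c" both point at "a".
print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))
```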
