Concept

Web archiving

Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Because of the massive size and amount of information on the Web, web archivists typically employ web crawlers for automated capture. The largest web archiving effort based on a bulk crawling approach is the Internet Archive's Wayback Machine, which strives to maintain an archive of the entire Web. The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving. National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content, and commercial web archiving software and services are available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.

While curation and organization of the web has been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996. The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001, and as of 2018 it was home to 40 petabytes of data. The Internet Archive also developed many of its own tools for collecting and storing its data, including PetaBox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries. Other projects launched around the same time included a web archiving project by the National Library of Canada, Australia's Pandora and Tasmanian web archives, and Sweden's Kulturarw3.

From 2001 the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas. The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open-source tools for the creation of web archives.
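
As a rough illustration of the automated-capture idea described above, the sketch below fetches a single page and stores the raw response under a timestamped path. It is a minimal, hedged example using only the Python standard library; the `capture` helper and the directory layout are illustrative assumptions, not how Heritrix or the Wayback Machine actually work (production archives typically write standardized container formats such as WARC and crawl at much larger scale).

```python
"""Minimal single-page capture sketch (illustrative only, not a real crawler)."""
import hashlib
import time
import urllib.request
from pathlib import Path


def capture(url: str, archive_root: str = "web-archive") -> Path:
    """Fetch one URL and store the raw response body under a timestamped path."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()

    # Key snapshots by a URL hash plus the UTC capture time, so repeated crawls
    # of the same page coexist -- loosely mirroring how archives keep multiple
    # captures of a single URL over time.
    url_key = hashlib.sha256(url.encode("utf-8")).hexdigest()[:16]
    stamp = time.strftime("%Y%m%d%H%M%S", time.gmtime())
    out = Path(archive_root) / url_key / f"{stamp}.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_bytes(body)
    return out


if __name__ == "__main__":
    # Example usage: capture one page and print where the snapshot was written.
    print(capture("https://example.org/"))
```

A real crawler would additionally respect robots.txt, follow links to a configurable depth, record HTTP headers alongside the payload, and deduplicate unchanged content; those concerns are omitted here to keep the sketch short.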

Related courses (9)
CS-498: Research project in Computer Science II
Individual research during the semester under the guidance of a professor or an assistant.
ChE-601: Hands-on with Research Data Management in Chemistry
PhD students in Chemistry will learn hands-on Research Data Management (RDM) skills transferable to their research practices. They will contextualize their research into RDM best practices (day 1), ...
DH-404: Cultural data sculpting
This course will engage novel approaches for visualizing and interacting with cultural heritage archives in immersive virtual environments.
Related lectures (31)
From a Collection to an Innovation Platform: Montreux Jazz Digital Project
Explores the Montreux Jazz Digital Project, highlighting its digitization process and innovative uses of the archive.
Hertz Theory: Real Area of Contact
Explores Hertz theory for contact problems and the Tabor measurement method.
Link-based ranking: PageRank & HITS
Explores link-based ranking through PageRank and HITS algorithms, covering practical examples and challenges in web search and ranking methods.
Related publications (37)

Querying the Digital Archive of Science: Distant Reading, Semantic Modelling and Representation of Knowledge

Alina Volynskaya

The archive of science is a place where scientific practices are sedimented in the form of drafts, protocols of rejected hypotheses and failed experiments, obsolete instruments, outdated visualizations and other residues. Today, just as science goes more a ...
EPFL, 2024

Datafication of audiovisual archives: from practice mapping to a thinking model

Yuchen Yang

Purpose Recent archiving and curatorial practices took advantage of the advancement in digital technologies, creating immersive and interactive experiences to emphasize the plurality of memory materials, encourage personalized sense-making and extract, man ...
Leeds, 2024

Echoing Swiss Coloniality. Land, Archive and Visuality between Brazil and Switzerland.

Denise Bertschi

Informed by longstanding artistic practice, this doctoral thesis approaches entanglements of Swiss coloniality in Brazil and Switzerland under the lens of land, archive, and visuality. The enduring legacies of imperial capitalism in the former Colonia Leop ...
EPFL, 2024
Related concepts (9)
Wayback Machine
The Wayback Machine is a digital archive of the World Wide Web founded by the Internet Archive, a nonprofit based in San Francisco, California. Created in 1996 and launched to the public in 2001, it allows the user to go "back in time" to see how websites looked in the past. Its founders, Brewster Kahle and Bruce Gilliat, developed the Wayback Machine to provide "universal access to all knowledge" by preserving archived copies of defunct web pages. Launched on May 10, 1996, the Wayback Machine had saved more than 38.
Digital library
A digital library, also called an online library, an internet library, a digital repository, a library without walls, or a digital collection is an online database of digital objects that can include text, still images, audio, video, digital documents, or other digital media formats or a library accessible through the internet. Objects can consist of digitized content like print or photographs, as well as originally produced digital content like word processor files or social media posts.
Link rot
Link rot (also called link death, link breaking, or reference rot) is the phenomenon of hyperlinks tending over time to cease to point to their originally targeted file, web page, or server due to that resource being relocated to a new address or becoming permanently unavailable. A link that no longer points to its target, often called a broken, dead, or orphaned link, is a specific form of dangling pointer. The rate of link rot is a subject of study and research due to its significance to the internet's ability to preserve information.
