World Wide Web: The World Wide Web (WWW), commonly known as the Web, is an information system that enables information to be shared over the Internet through simplified ways meant to appeal to users beyond IT specialists and hobbyists, and that allows documents and other web resources to be accessed over the Internet according to specific rules, the Hypertext Transfer Protocol (HTTP). Documents and downloadable media are made available to the network through web servers and can be accessed by programs such as web browsers.
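As a concrete illustration of that last point, the sketch below fetches a document from a web server over HTTP the way a browser or any other client program would. It is a minimal sketch using Python's standard library; the URL is a placeholder chosen for illustration, not one mentioned above.

```python
# Minimal HTTP client sketch: retrieve a web resource the way a browser would.
# The URL is a placeholder chosen for illustration.
from urllib.request import urlopen

with urlopen("https://example.org/") as response:   # send an HTTP GET request
    print(response.status)                          # e.g. 200 if the resource was found
    html = response.read().decode("utf-8")          # the document served by the web server
    print(html[:200])                               # first part of the HTML payload
```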
Web page: A web page (or webpage) is a hypertext document on the World Wide Web. Web pages are delivered by a web server to the user and displayed in a web browser. A website consists of many web pages linked together under a common domain name. The name "web page" is a metaphor of paper pages bound together into a book. A web page is a structured document. Its core element is written in the HyperText Markup Language (HTML), which specifies the content of the web page, including links (called hyperlinks) to other web resources, primarily other web pages, and to different sections of the same web page.
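A minimal sketch of that structure follows: an invented HTML document held as a Python string, with its hyperlinks extracted using the standard html.parser module. The page content and URLs are made up for illustration.

```python
# Sketch: a minimal HTML document and the hyperlinks it declares.
# The page content is invented for illustration.
from html.parser import HTMLParser

PAGE = """<!DOCTYPE html>
<html>
  <head><title>Example page</title></head>
  <body>
    <h1>A web page</h1>
    <p>See the <a href="https://example.org/other">other page</a>
       or jump to a <a href="#section-2">section of this page</a>.</p>
  </body>
</html>"""

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # hyperlinks are <a> elements
            self.links.extend(v for k, v in attrs if k == "href")

collector = LinkCollector()
collector.feed(PAGE)
print(collector.links)   # ['https://example.org/other', '#section-2']
```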
Dynamic web page: A dynamic web page is a web page constructed at runtime (during software execution), as opposed to a static web page, which is delivered exactly as it is stored. A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing. A client-side dynamic web page processes the web page using JavaScript running in the browser as it loads.
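The sketch below shows the server-side case in miniature: the HTML is assembled at request time from a query parameter rather than read from a stored file. It uses Python's http.server for brevity rather than a full application server; the host, port, and parameter name are illustrative assumptions.

```python
# Sketch of a server-side dynamic page: the HTML is assembled at request time
# from a query parameter, rather than delivered as stored.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from datetime import datetime

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        name = params.get("name", ["visitor"])[0]   # a request parameter steers assembly
        body = (f"<html><body><h1>Hello, {name}</h1>"
                f"<p>Generated at {datetime.now():%H:%M:%S}</p>"
                "</body></html>").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. http://localhost:8000/?name=Ada returns a page built for that request
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```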
Web crawler: A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing (web spidering). Web search engines and some other websites use Web crawling or spidering software to update their web content or indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently.
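A minimal sketch of that fetch-and-follow loop, using only the Python standard library: fetch a page, extract its links, and queue unseen URLs for later fetching. The seed URL and page limit are illustrative assumptions, and a production crawler would also honour robots.txt and rate limits.

```python
# Sketch of a breadth-first web crawler: fetch a page, extract its links,
# and queue unseen URLs. Downloaded copies are kept for later indexing.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.extend(v for k, v in attrs if k == "href" and v)

def crawl(seed, max_pages=10):
    frontier, seen, pages = deque([seed]), {seed}, {}
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            with urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue                          # skip pages that cannot be fetched
        pages[url] = html                     # downloaded copy, ready for indexing
        parser = LinkParser()
        parser.feed(html)
        for href in parser.hrefs:
            absolute = urljoin(url, href)     # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return pages

if __name__ == "__main__":
    print(list(crawl("https://example.org/", max_pages=3)))
```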
Dewey Decimal Classification: The Dewey Decimal Classification (DDC), colloquially known as the Dewey Decimal System, is a proprietary library classification system which allows new books to be added to a library in their appropriate location based on subject. It was first published in the United States by Melvil Dewey in 1876. Originally described in a 44-page pamphlet, it has been expanded to multiple volumes and revised through 23 major editions, the latest printed in 2011. It is also available in an abridged version suitable for smaller libraries.
Static web page: A static web page (sometimes called a flat page or a stationary page) is a web page that is delivered to the user's web browser exactly as stored, in contrast to dynamic web pages which are generated by a web application. Consequently, a static web page often displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so.
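The content negotiation mentioned above can be sketched as follows: the server stores several fixed variants of the same page and merely picks one based on the client's Accept-Language header, without constructing anything at runtime. The variant file names here are hypothetical, chosen only for illustration.

```python
# Sketch of language negotiation for a static site: choose among stored,
# unchanging variants of a page based on the Accept-Language header.
VARIANTS = {"en": "index.en.html", "fr": "index.fr.html"}   # hypothetical stored files
DEFAULT = "index.en.html"

def pick_variant(accept_language: str) -> str:
    """Return the stored file that best matches the Accept-Language header."""
    for part in accept_language.split(","):
        lang = part.split(";")[0].strip().lower()[:2]       # e.g. "fr-CA" -> "fr"
        if lang in VARIANTS:
            return VARIANTS[lang]
    return DEFAULT

print(pick_variant("fr-CA,fr;q=0.9,en;q=0.5"))   # index.fr.html
print(pick_variant("de-DE"))                     # index.en.html (fallback)
```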
Library classification: A library classification is a system of organization of knowledge in which sources are arranged according to the classification scheme and ordered systematically. Library classifications use a notational system that represents the order of topics in the classification and allows items to be stored in that order. Library classification systems group related materials together, typically arranged as a hierarchical tree structure.
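The sketch below models those two ideas in code: a hierarchical tree of classes, and a notation whose sort order keeps related materials together on the shelf. The class numbers and labels are invented, loosely in the style of a decimal scheme.

```python
# Sketch of a classification scheme as a hierarchical tree, with a notation
# that preserves the order of topics. Class numbers and labels are invented.
from dataclasses import dataclass, field

@dataclass
class ClassNode:
    notation: str                 # e.g. "500"
    label: str                    # e.g. "Science"
    children: list["ClassNode"] = field(default_factory=list)

scheme = ClassNode("000", "Knowledge", [
    ClassNode("500", "Science", [
        ClassNode("510", "Mathematics"),
        ClassNode("520", "Astronomy"),
    ]),
    ClassNode("600", "Technology"),
])
print(scheme.children[0].notation, scheme.children[0].label)   # 500 Science

# Shelving: sorting items by their notation keeps related materials together.
books = [("523.1", "Cosmology primer"), ("510.2", "Algebra notes"), ("621.3", "Circuits")]
for notation, title in sorted(books):
    print(notation, title)
```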
Document classification: Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science.
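For the algorithmic case, the sketch below trains a tiny text classifier that assigns documents to one of two categories. It assumes scikit-learn is available, and the training documents, labels, and test sentence are all made up for illustration.

```python
# Sketch of algorithmic document classification with a bag-of-words model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "interlibrary loan and cataloguing procedures",
    "shelving and circulation desk policies",
    "gradient descent for neural networks",
    "training a model on labelled data",
]
train_labels = ["library science", "library science",
                "computer science", "computer science"]

# TF-IDF features feed a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

print(model.predict(["classifier trained on labelled documents"]))  # likely 'computer science'
```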
Web portal: A web portal is a specially designed website that brings information from diverse sources, like emails, online fora and search engines, together in a uniform way. Usually, each information source gets its dedicated area on the page for displaying information (a portlet); often, the user can configure which ones to display. Variants of portals include mashups and intranet "dashboards" for executives and managers. The extent to which content is displayed in a "uniform way" may depend on the intended user and the intended purpose, as well as the diversity of the content.
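The portlet idea can be sketched as follows: each source renders its own area of the page, and the user's configuration decides which areas appear. The sources and their contents below are invented for illustration.

```python
# Sketch of portlets: each information source renders its own page area,
# and user preferences select which portlets are shown.
def email_portlet():
    return "<div class='portlet'><h2>Email</h2><p>3 unread messages</p></div>"

def forum_portlet():
    return "<div class='portlet'><h2>Forum</h2><p>New reply in 'Announcements'</p></div>"

def search_portlet():
    return "<div class='portlet'><h2>Search</h2><form><input name='q'></form></div>"

PORTLETS = {"email": email_portlet, "forum": forum_portlet, "search": search_portlet}

def render_portal(enabled):
    """Assemble the portal page from the portlets the user chose to display."""
    body = "\n".join(PORTLETS[name]() for name in enabled if name in PORTLETS)
    return f"<html><body>{body}</body></html>"

print(render_portal(["email", "search"]))   # forum portlet hidden by user preference
```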
Simple random sample: In statistics, a simple random sample (or SRS) is a subset of individuals (a sample) chosen from a larger set (a population) in which each individual is chosen randomly, all with the same probability. It is a process of selecting a sample in a random way. In SRS, each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals. A simple random sample is an unbiased sampling technique. Simple random sampling is a basic type of sampling and can be a component of other more complex sampling methods.
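A short sketch of drawing such a sample: every subset of k individuals is equally likely, with probability 1 / C(N, k). The population of size N = 100 here is invented for illustration.

```python
# Sketch of simple random sampling: draw k individuals without replacement,
# so every subset of size k is equally likely, with probability 1 / C(N, k).
import random
from math import comb

population = list(range(1, 101))   # N = 100 individuals, invented for illustration
k = 10

sample = random.sample(population, k)   # draw without replacement
print(sample)
print("probability of this particular subset:", 1 / comb(len(population), k))
```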