
Inductive reasoning

Inductive reasoning is a method of reasoning in which a general principle is derived from a body of observations: broad generalizations are drawn from specific cases. It is distinct from deductive reasoning, in which the conclusion of a valid argument is certain provided the premises are true; the conclusion of an inductive argument, by contrast, is only probable, given the evidence at hand.

The main types of inductive reasoning are generalization, prediction, statistical syllogism, argument from analogy, and causal inference.

A generalization (more precisely, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population: the observation made on the sample is projected onto the broader population.

The proportion Q of the sample has attribute A.
Therefore, the proportion Q of the population has attribute A.

For example, suppose an urn contains 20 balls, each either black or white. To estimate their respective numbers, you draw a sample of four balls and find that three are black and one is white. An inductive generalization would conclude that the urn contains 15 black and 5 white balls. How strongly the premises support the conclusion depends on (1) the size of the sample, (2) the size of the population, and (3) how well the sample represents the population (which a random sample helps to ensure). The larger the sample relative to the population, and the more closely the sample represents it, the stronger the generalization. The hasty generalization and the biased sample are fallacies of generalization.

A statistical generalization is a type of inductive argument in which a conclusion about a population is inferred from a statistically representative sample. For example: of a sizeable random sample of voters surveyed, 66% support Measure Z; therefore, approximately 66% of all voters support Measure Z. Such an inference is highly reliable within a well-defined margin of error, provided the sample is large and drawn at random.
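The two numerical examples above can be made concrete with a short calculation. The Python sketch below projects the sample proportions from the urn example onto the population of 20 balls, and attaches a normal-approximation 95% margin of error to the polling example. The poll's sample size (n = 1000) is an illustrative assumption, since the text only says the sample is sizeable.

import math

# Inductive generalization (urn example): project the proportions observed
# in a sample of 4 balls (3 black, 1 white) onto a population of 20 balls.
population_size = 20
sample = {"black": 3, "white": 1}
sample_size = sum(sample.values())

for colour, count in sample.items():
    q = count / sample_size            # proportion Q observed in the sample
    estimate = q * population_size     # same proportion projected onto the population
    print(f"{colour}: sample proportion {q:.2f} -> about {estimate:.0f} of {population_size} balls")

# Statistical generalization (polling example): 66% support in a random
# sample, with a 95% margin of error from the normal approximation.
# The sample size n is an assumed value chosen for illustration.
p_hat = 0.66
n = 1000
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Estimated support for Measure Z: {p_hat:.0%} +/- {margin:.1%} (95% confidence)")

With these assumed numbers the margin of error comes to roughly 2.9 percentage points, which illustrates why the reliability of a statistical generalization depends on both the size and the randomness of the sample.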

Related MOOCs
Parallel programming
With every smartphone and computer now boasting multiple processors, the use of functional ideas to facilitate parallel programming is becoming increasingly widespread. In this course, you'll learn th…
