Preference (economics)
In economics and other social sciences, preference refers to the order in which an agent ranks alternatives based on their relative utility. The process results in an "optimal choice" (whether real or theoretical). Preferences are evaluations and concern matters of value, typically in relation to practical reasoning. An individual's preferences are determined purely by their tastes, as opposed to the goods' prices, the person's income, or the availability of goods. However, people are still expected to act in their best (rational) interest.
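As a minimal illustration (not from the source, with hypothetical alternative names), a preference ordering can be modelled as a ranked list, and the "optimal choice" is simply the most preferred alternative among those actually available:

```python
# Minimal sketch: a preference ordering over three hypothetical alternatives,
# and the "optimal choice" selected from whichever alternatives are available.
preference_order = ["A", "B", "C"]          # most preferred first

def optimal_choice(available):
    """Return the most preferred alternative among those available, else None."""
    for option in preference_order:
        if option in available:
            return option
    return None

print(optimal_choice({"B", "C"}))  # -> "B"
```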
Ordinal utility
In economics, an ordinal utility function is a function representing the preferences of an agent on an ordinal scale. Ordinal utility theory claims that it is only meaningful to ask which option is better than another, but meaningless to ask how much better it is or how good it is. All of the theory of consumer decision-making under conditions of certainty can be, and typically is, expressed in terms of ordinal utility. For example, suppose George tells us that "I prefer A to B and B to C"; any utility function that assigns a higher number to A than to B and to B than to C represents these preferences equally well, since only the ranking carries meaning.
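The following sketch (with made-up utility numbers) illustrates the point: any strictly increasing transformation of an ordinal utility function represents the same preferences, because only the induced ranking is meaningful.

```python
# Three ordinal representations of the same preferences A > B > C.
u = {"A": 9, "B": 8, "C": 1}                  # one hypothetical utility assignment
v = {k: 2 * x + 5 for k, x in u.items()}      # a strictly increasing transformation
w = {k: x ** 3 for k, x in u.items()}         # another strictly increasing transformation

rank = lambda f: sorted(f, key=f.get, reverse=True)
# All three assign different numbers, but induce the same ranking.
assert rank(u) == rank(v) == rank(w) == ["A", "B", "C"]
```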
Knapsack problem
The knapsack problem is the following problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine which items to include in the collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.
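A standard dynamic-programming sketch of the 0/1 variant of the problem is shown below; the weights, values, and capacity are made-up illustration data, not from the source.

```python
def knapsack(weights, values, capacity):
    """Return the maximum total value achievable within the weight limit."""
    best = [0] * (capacity + 1)                 # best[w] = best value achievable with capacity w
    for wt, val in zip(weights, values):
        for w in range(capacity, wt - 1, -1):   # iterate downward so each item is used at most once
            best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # -> 9 (take the items of weight 3 and 4)
```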
Parameterized complexity
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input.
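As a hedged example of the idea (chosen here for illustration, not named in the source), the classic fixed-parameter algorithm for vertex cover decides whether a graph has a cover of size at most k in roughly O(2^k · |E|) time by branching on an uncovered edge, so the exponential blow-up depends only on the parameter k, not the input size.

```python
def has_vertex_cover(edges, k):
    """Return True if some set of at most k vertices touches every edge."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Any cover must contain u or v to cover this edge: branch on both choices.
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]   # a 4-cycle, illustrative only
print(has_vertex_cover(edges, 2))  # -> True  ({"b", "d"} covers every edge)
print(has_vertex_cover(edges, 1))  # -> False
```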
Fundamental theorems of welfare economics
There are two fundamental theorems of welfare economics. The first states that in economic equilibrium, a set of complete markets, with complete information, and in perfect competition, will be Pareto optimal (in the sense that no further exchange would make one person better off without making another worse off). The requirements for perfect competition are these: There are no externalities and each actor has perfect information. Firms and consumers take prices as given (no economic actor or group of actors has market power).
Cardinal utility
In economics, a cardinal utility function or scale is a utility index that preserves preference orderings uniquely up to positive affine transformations. Two utility indices are related by an affine transformation if, for the value of one index u occurring at any quantity of the goods bundle being evaluated, the corresponding value of the other index v satisfies a relationship of the form v = a·u + b for fixed constants a > 0 and b. Thus the utility functions themselves are related by v(x) = a·u(x) + b. The two indices differ only with respect to scale and origin.
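A minimal sketch with made-up utility numbers shows what a cardinal scale preserves beyond the mere ordering: a positive affine transformation v = a·u + b leaves ratios of utility differences unchanged.

```python
u = {"A": 10.0, "B": 6.0, "C": 4.0}            # hypothetical cardinal utilities
a, b = 3.0, 7.0                                 # an arbitrary positive affine transformation
v = {k: a * x + b for k, x in u.items()}

ratio_u = (u["A"] - u["B"]) / (u["B"] - u["C"])
ratio_v = (v["A"] - v["B"]) / (v["B"] - v["C"])
assert ratio_u == ratio_v == 2.0                # differences compare the same on both scales
```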
Welfare economics
Welfare economics is a field of economics that applies microeconomic techniques to evaluate the overall well-being (welfare) of a society. This evaluation is typically done at the economy-wide level, and attempts to assess the distribution of resources and opportunities among members of society. The principles of welfare economics are often used to inform public economics, which focuses on the ways in which government intervention can improve social welfare.
Algorithm
In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), eventually achieving automation.
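A canonical illustration (chosen here, not named in the source) is Euclid's algorithm for the greatest common divisor: a finite sequence of rigorous instructions whose conditional loop test diverts execution until the answer is reached.

```python
def gcd(a, b):
    """Compute the greatest common divisor of two non-negative integers."""
    while b != 0:
        a, b = b, a % b    # replace the pair with (b, a mod b) until the remainder is zero
    return a

print(gcd(252, 105))  # -> 21
```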
Average-case complexity
In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity which considers the maximal complexity of the algorithm over all possible inputs. There are three primary motivations for studying average-case complexity.
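The contrast can be made concrete empirically; the sketch below (an illustration under assumed input sizes and trial counts, not taken from the source) counts comparisons in a naive first-pivot quicksort, averaging over random inputs and then evaluating the already-sorted input that is worst for this pivot choice.

```python
import random

def quicksort_comparisons(xs):
    """Return the number of element-vs-pivot partitions a first-pivot quicksort performs on xs."""
    if len(xs) <= 1:
        return 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n, trials = 200, 100
avg = sum(quicksort_comparisons(random.sample(range(n), n)) for _ in range(trials)) / trials
worst = quicksort_comparisons(list(range(n)))   # sorted input is the worst case for this pivot rule
print(avg, worst)   # average grows like n*log(n); the worst case is n*(n-1)/2 = 19900 here
```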
Decision problem
In computability theory and computational complexity theory, a decision problem is a computational problem that can be posed as a yes–no question of the input values. An example of a decision problem is deciding by means of an algorithm whether a given natural number is prime. Another is the problem "given two numbers x and y, does x evenly divide y?". The answer is either 'yes' or 'no' depending upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem.
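A small sketch of decision procedures for the two example problems mentioned above follows; the trial-division primality test is a simple (not efficient) procedure chosen for illustration.

```python
def divides(x, y):
    """Decide whether x evenly divides y (assumes x > 0)."""
    return y % x == 0

def is_prime(n):
    """Decide primality of a natural number by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(divides(3, 12), divides(5, 12))   # -> True False
print(is_prime(97), is_prime(91))       # -> True False (91 = 7 * 13)
```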