Computational complexity: In computer science, the computational complexity, or simply complexity, of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of elementary operations needed) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that solve it. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.
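As a minimal illustration of the kind of analysis described above, here is a short Python sketch (not taken from the excerpt; the function name is made up) that counts the elementary comparisons performed by a linear search; its worst-case count grows linearly with the input size n, i.e., O(n) time.

    # Minimal sketch: counting elementary operations of a linear scan.
    def count_operations_linear_search(items, target):
        """Return (found, number of comparisons performed)."""
        comparisons = 0
        for item in items:
            comparisons += 1          # one elementary comparison per element
            if item == target:
                return True, comparisons
        return False, comparisons     # worst case: n comparisons

    # The worst-case operation count grows linearly with n:
    for n in (10, 100, 1000):
        _, ops = count_operations_linear_search(list(range(n)), -1)
        print(n, ops)                 # prints n comparisons for each n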
Exponential time hypothesis: In computational complexity theory, the exponential time hypothesis is an unproven computational hardness assumption that was formulated by Impagliazzo and Paturi (1999). It states that satisfiability of 3-CNF Boolean formulas cannot be solved in subexponential time, i.e., not in time 2^(o(n)), where n is the number of variables in the formula. The exponential time hypothesis, if true, would imply that P ≠ NP, but it is a stronger statement.
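Stated a little more formally (this is one standard formulation, not quoted from the excerpt above):

    \exists\, c > 0 :\ \text{3-SAT on } n \text{ variables is not solvable in time } O(2^{cn}),
    \qquad \text{equivalently,}\quad \text{3-SAT} \notin \mathrm{DTIME}\bigl(2^{o(n)}\bigr).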
Parameterized complexity: In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input.
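As an illustration of parameterized analysis, the following Python sketch (a standard textbook bounded-search-tree algorithm, not something stated in the excerpt above) decides whether a graph has a vertex cover of size at most k; it runs in time roughly O(2^k * m), so it is fast whenever the parameter k is small, even if the graph itself is large.

    # Hedged sketch: classic bounded-search-tree FPT algorithm for Vertex Cover,
    # parameterized by the solution size k.
    def has_vertex_cover(edges, k):
        """edges: list of (u, v) pairs; return True iff a cover of size <= k exists."""
        if not edges:
            return True          # no edges left to cover
        if k == 0:
            return False         # edges remain but no budget left
        u, v = edges[0]
        # Branch: either u or v must be in the cover; remove covered edges and recurse.
        remaining_u = [(a, b) for (a, b) in edges if a != u and b != u]
        remaining_v = [(a, b) for (a, b) in edges if a != v and b != v]
        return has_vertex_cover(remaining_u, k - 1) or has_vertex_cover(remaining_v, k - 1)

    # Example: a path on 4 vertices has a vertex cover of size 2 but not 1.
    path = [(1, 2), (2, 3), (3, 4)]
    print(has_vertex_cover(path, 2), has_vertex_cover(path, 1))  # True False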
Computational complexity theory: In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used.
Parameterized approximation algorithm: A parameterized approximation algorithm is a type of algorithm that aims to find approximate solutions to NP-hard optimization problems in polynomial time in the input size and a function of a specific parameter. These algorithms are designed to combine the best aspects of both traditional approximation algorithms and fixed-parameter tractability. In traditional approximation algorithms, the goal is to find solutions that are at most a certain factor α away from the optimal solution, known as an α-approximation, in polynomial time.
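For contrast with the parameterized algorithm above, here is a minimal Python sketch of a classic polynomial-time α-approximation with α = 2 for minimum vertex cover (again a textbook example, not drawn from the excerpt): repeatedly take both endpoints of an uncovered edge.

    # Hedged sketch: the standard maximal-matching 2-approximation for Vertex Cover.
    # Runs in polynomial time; the returned cover is at most twice the optimal size.
    def approx_vertex_cover(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))   # take both endpoints of an uncovered edge
        return cover

    path = [(1, 2), (2, 3), (3, 4)]
    print(approx_vertex_cover(path))   # {1, 2, 3, 4}: size 4 <= 2 * optimum (2)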
Floating-point arithmetic: In computing, floating-point arithmetic (FP) is arithmetic that represents subsets of real numbers using an integer with a fixed precision, called the significand, scaled by an integer exponent of a fixed base. Numbers of this form are called floating-point numbers. For example, 12.345 is a floating-point number in base ten with five digits of precision: 12.345 = 12345 × 10^(-3). However, unlike 12.345, 12.3456 is not a floating-point number in base ten with five digits of precision; it needs six digits of precision, and the nearest floating-point number with only five digits is 12.346.
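A small Python illustration of the significand-and-exponent representation, reusing the numbers from the example above:

    # Hedged sketch: a base-ten floating-point value as significand * 10**exponent.
    from decimal import Decimal

    significand, exponent = 12345, -3
    value = Decimal(significand) * Decimal(10) ** exponent
    print(value)                              # 12.345

    # 12.3456 needs six digits of significand; rounding to five gives 12.346.
    print(round(Decimal("12.3456"), 3))       # 12.346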
Randomness: In common usage, randomness is the apparent or actual lack of definite pattern or predictability in information. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if the probability distribution is known, the frequency of different outcomes over repeated events (or "trials") is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4.
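The dice claim can be checked by enumerating all 36 equally likely outcomes; a short Python sketch:

    # Hedged sketch: a sum of 7 is twice as likely as a sum of 4 with two fair dice.
    from collections import Counter
    from itertools import product

    counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
    print(counts[7], counts[4])       # 6 and 3 out of 36 outcomes
    print(counts[7] / counts[4])      # 2.0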
Randomized rounding: Within computer science and operations research, many combinatorial optimization problems are computationally intractable to solve exactly (to optimality). Many such problems do admit fast (polynomial time) approximation algorithms, that is, algorithms that are guaranteed to return an approximately optimal solution given any input. Randomized rounding is a widely used approach for designing and analyzing such approximation algorithms.
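A minimal Python sketch of the rounding step itself: each 0/1 variable is set to 1 independently with probability equal to its fractional value, so the expected rounded cost equals the fractional (LP) cost. The fractional values and costs below are made-up stand-ins for a solution to some LP relaxation, not the output of a real solver.

    # Hedged sketch: the core randomized-rounding step on an illustrative LP solution.
    import random

    fractional = {"x1": 0.8, "x2": 0.3, "x3": 0.5}   # hypothetical LP-relaxation values
    costs = {"x1": 2.0, "x2": 1.0, "x3": 4.0}        # hypothetical variable costs

    rounded = {v: 1 if random.random() < p else 0 for v, p in fractional.items()}
    expected_cost = sum(costs[v] * p for v, p in fractional.items())
    actual_cost = sum(costs[v] * rounded[v] for v in rounded)
    print(rounded, expected_cost, actual_cost)
    # By linearity of expectation, the rounded cost equals the LP cost in expectation.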
False precision: False precision (also called overprecision, fake precision, misplaced precision, and spurious precision) occurs when numerical data are presented in a manner that implies better precision than is justified; since precision is a limit to accuracy (in the ISO definition of accuracy), this often leads to overconfidence in the accuracy, known as precision bias. Madsen Pirie defines the term "false precision" in a more general way: the use of exact numbers for notions that cannot be expressed in exact terms.
Double exponential function: A double exponential function is a constant raised to the power of an exponential function. The general formula is f(x) = a^(b^x) (where a > 1 and b > 1), which grows much more quickly than an exponential function. For example, if a = b = 10, then f(x) = 10^(10^x), so f(0) = 10, f(1) = 10^10, f(2) = 10^100 (a googol), f(3) = 10^1000, and f(100) = 10^(10^100) (a googolplex). Factorials grow faster than exponential functions, but much more slowly than doubly exponential functions. However, tetration and the Ackermann function grow faster.
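A short Python check of these values, relying on Python's arbitrary-precision integers:

    # Hedged sketch: evaluating f(x) = 10**(10**x) and showing how quickly the
    # number of digits explodes for small x.
    def f(x):
        return 10 ** (10 ** x)

    for x in range(4):
        print(x, len(str(f(x))))   # digit counts: 2, 11, 101, 1001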