Sociocultural evolution
Sociocultural evolution, sociocultural evolutionism or social evolution are theories of sociobiology and cultural evolution that describe how societies and cultures change over time. Whereas sociocultural development traces processes that tend to increase the complexity of a society or culture, sociocultural evolution also considers processes that can lead to decreases in complexity (degeneration) or that can produce variation or proliferation without any seemingly significant changes in complexity (cladogenesis).
Decision tree learning
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels.
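The structure described above can be illustrated with a minimal hand-built classification tree; the feature names and class labels here are hypothetical, chosen only to show how internal nodes test features and leaves carry class labels.

```python
# A tiny hand-written classification tree: each internal node tests one
# feature, and each root-to-leaf path is a conjunction of feature tests
# that ends in a class label. Feature names and classes are made up.

def classify(sample):
    """Walk the tree and return the class label at the reached leaf."""
    if sample["outlook"] == "sunny":
        if sample["humidity"] > 70:
            return "stay_in"   # leaf: class label
        return "play"          # leaf: class label
    if sample["windy"]:
        return "stay_in"
    return "play"

print(classify({"outlook": "sunny", "humidity": 85}))  # stay_in
print(classify({"outlook": "rain", "windy": False}))   # play
```

A learned tree has the same shape; a learning algorithm merely chooses which feature tests to place at each node from data rather than by hand.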
Unilineal evolution
Unilineal evolution, also referred to as classical social evolution, is a 19th-century social theory about the evolution of societies and cultures. It was composed of many competing theories by various anthropologists and sociologists, who believed that Western culture was the contemporary pinnacle of social evolution. Different social statuses were aligned in a single line that moves from the most primitive to the most civilized. This theory is now generally considered obsolete in academic circles.
Loss function
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.).
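As a minimal sketch of these definitions, the snippet below uses squared-error loss, whose minimizer over a sample is the sample mean; the data and the crude grid search are illustrative only.

```python
# Squared-error loss maps a (prediction, observed) pair to a non-negative
# "cost"; an optimization problem seeks the prediction minimizing total
# loss. For squared error, that minimizer is the mean of the data.

def squared_loss(prediction, observed):
    return (prediction - observed) ** 2

data = [2.0, 4.0, 6.0]   # hypothetical observations

def total_loss(prediction):
    return sum(squared_loss(prediction, y) for y in data)

# Crude grid search over candidate predictions in [0, 10].
best = min((i / 100 for i in range(0, 1001)), key=total_loss)
print(best)  # 4.0, the mean of the data
```

Swapping in a different loss (e.g. absolute error, whose minimizer is the median) changes which prediction the optimizer selects, which is why the choice of loss function is part of the problem specification.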
Predictive coding
In neuroscience, predictive coding (also known as predictive processing) is a theory of brain function which postulates that the brain is constantly generating and updating a "mental model" of the environment. According to the theory, such a mental model is used to predict input signals from the senses that are then compared with the actual input signals from those senses. With the rising popularity of representation learning, the theory is being actively pursued and applied in machine learning and related fields.
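The predict–compare–update loop can be sketched in a toy form: here the "mental model" is a single scalar estimate corrected by the prediction error between its prediction and the sensory input. The learning rate and signal values are made up for illustration.

```python
# A toy predictive-coding loop: the model predicts the next sensory input,
# compares the prediction with the actual input, and updates itself in
# proportion to the prediction error. All numbers are hypothetical.

estimate = 0.0        # the model's current belief about the environment
learning_rate = 0.5

for sensed in [10.0, 10.0, 10.0, 10.0, 10.0]:
    prediction_error = sensed - estimate          # compare prediction vs input
    estimate += learning_rate * prediction_error  # update the mental model

print(estimate)  # approaches the true signal value of 10.0
```

Full predictive-coding models are hierarchical, with each layer predicting the activity of the layer below, but the error-driven update above is the core mechanism.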
Space mapping
The space mapping methodology for modeling and design optimization of engineering systems was first discovered by John Bandler in 1993. It uses relevant existing knowledge to speed up model generation and design optimization of a system. The knowledge is updated with new validation information from the system when available. The space mapping methodology employs a "quasi-global" formulation that intelligently links companion "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models of different complexities.
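A schematic 1-D illustration of the coarse/fine linkage, under made-up model functions: the cheap coarse model is optimized directly, then a parameter-space shift is extracted from a few expensive fine-model evaluations so the shifted coarse optimum predicts the fine optimum. This is a deliberately simplified sketch of input space mapping, not Bandler's full formulation.

```python
# Hypothetical models: the fine model is the coarse model displaced by 0.1
# in its input, a misalignment the extracted mapping should recover.

def fine(x):    # "fine" high-fidelity model (pretend each call is costly)
    return (x - 2.1) ** 2

def coarse(x):  # "coarse" low-fidelity model: cheap but misaligned
    return (x - 2.0) ** 2

x_star_coarse = 2.0  # coarse-model optimum, assumed found cheaply

# Parameter extraction: find the input shift p aligning coarse(x - p)
# with a handful of fine-model evaluations (grid search for clarity).
samples = [1.5, 2.0, 2.5]
fine_vals = [fine(x) for x in samples]

def mismatch(p):
    return sum((coarse(x - p) - f) ** 2 for x, f in zip(samples, fine_vals))

p_best = min((i / 1000 for i in range(-500, 501)), key=mismatch)

# Space-mapped prediction of the fine optimum: x_fine ≈ x*_coarse + p.
x_pred = x_star_coarse + p_best
print(x_pred)  # close to the true fine-model optimum at 2.1
```

The point of the method is budget: the fine model was evaluated only three times, while all optimization effort was spent on the cheap coarse model.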
Orthogenesis
Orthogenesis, also known as orthogenetic evolution, progressive evolution, evolutionary progress, or progressionism, is an obsolete biological hypothesis that organisms have an innate tendency to evolve in a definite direction towards some goal (teleology) due to some internal mechanism or "driving force". According to the theory, the largest-scale trends in evolution have an absolute goal such as increasing biological complexity.
Polynomial kernel
In machine learning, the polynomial kernel is a kernel function commonly used with support vector machines (SVMs) and other kernelized models, which represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables, allowing learning of non-linear models. Intuitively, the polynomial kernel looks not only at the given features of input samples to determine their similarity, but also at combinations of these. In the context of regression analysis, such combinations are known as interaction features.
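The equivalence between the kernel and an explicit feature space over polynomials can be checked numerically. Below, the degree-2 polynomial kernel on 2-D inputs is compared against a hand-written feature map containing the squared features and the interaction feature; the input vectors are arbitrary.

```python
import math

# The degree-2 polynomial kernel K(x, y) = (x . y + c)^2 computes, without
# ever forming them, the dot product of explicit polynomial features:
# for c = 0 and 2-D inputs, phi(x) = (x1^2, sqrt(2) x1 x2, x2^2).

def poly_kernel(x, y, degree=2, c=0.0):
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + c) ** degree

def phi(x):  # explicit feature map for degree 2, c = 0
    x1, x2 = x
    return [x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2]

x, y = [1.0, 2.0], [3.0, 0.5]
implicit = poly_kernel(x, y)
explicit = sum(a * b for a, b in zip(phi(x), phi(y)))
print(implicit, explicit)  # both equal (x . y)^2 = 16, up to rounding
```

The middle feature, `sqrt(2) * x1 * x2`, is exactly the interaction feature the paragraph mentions; for degree d the implicit feature space contains all monomials of degree d, which is why kernelized models can learn non-linear decision boundaries cheaply.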
Generalization error
For supervised learning applications in machine learning and statistical learning theory, generalization error (also known as the out-of-sample error or the risk) is a measure of how accurately an algorithm is able to predict outcome values for previously unseen data. Because learning algorithms are evaluated on finite samples, the evaluation of a learning algorithm may be sensitive to sampling error. As a result, measurements of prediction error on the current data may not provide much information about predictive ability on new data.
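The gap between in-sample and out-of-sample error can be made concrete with a tiny made-up dataset (y ≈ x plus noise): a model that memorizes its finite training sample scores perfectly on that sample yet generalizes worse than a simpler model that captures the trend.

```python
# Two "models" evaluated in-sample and out-of-sample. The memorizer has
# zero training error by construction, but its test error is dominated by
# the noise it memorized; the simple linear rule generalizes better.
# The data points are fabricated for the example.

train = [(0, 0.5), (1, 1.8), (2, 1.7), (3, 3.4)]   # (x, noisy y) seen
test  = [(0, -0.3), (1, 0.6), (2, 2.5), (3, 2.9)]  # fresh, unseen draws

memory = dict(train)

def memorizer(x):   # looks up the memorized training answer
    return memory[x]

def linear(x):      # simple model reflecting the underlying trend y = x
    return x

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memorizer, train))                    # 0.0: perfect in-sample
print(mse(memorizer, test), mse(linear, test))  # memorizer worse unseen
```

This is exactly why the in-sample error of 0.0 "may not provide much information": it reflects sampling noise in the finite training set rather than predictive ability on new data.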
Production–possibility frontier
In microeconomics, a production–possibility frontier (PPF), production possibility curve (PPC), or production possibility boundary (PPB) is a graphical representation showing all the possible options of output for two goods that can be produced using all factors of production, where the given resources are fully and efficiently utilized per unit time. A PPF illustrates several economic concepts, such as allocative efficiency, economies of scale, opportunity cost (or marginal rate of transformation), productive efficiency, and scarcity of resources (the fundamental economic problem that all societies face).
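A numeric sketch of a linear PPF makes the opportunity-cost idea concrete. The numbers are invented: 100 units of labor, each producing either 2 bushels of wheat or 5 yards of cloth; points on the frontier use all resources, and the frontier's slope gives the marginal rate of transformation.

```python
# A linear PPF for two goods under a fixed labor endowment. Every point
# returned by frontier() uses all 100 labor units, i.e. lies on the
# frontier; the slope between two such points is the opportunity cost.
# All quantities are hypothetical.

LABOR = 100
WHEAT_PER_UNIT = 2   # bushels per unit of labor
CLOTH_PER_UNIT = 5   # yards per unit of labor

def frontier(labor_to_wheat):
    """Output pair when `labor_to_wheat` units go to wheat, rest to cloth."""
    wheat = WHEAT_PER_UNIT * labor_to_wheat
    cloth = CLOTH_PER_UNIT * (LABOR - labor_to_wheat)
    return wheat, cloth

w0, c0 = frontier(40)
w1, c1 = frontier(41)
print((w0, c0))               # (80, 300): one efficient output combination
print((c0 - c1) / (w1 - w0))  # 2.5 yards of cloth forgone per extra bushel
```

Any combination strictly inside this line (e.g. 80 bushels and only 200 yards) is productively inefficient, since the same resources could yield more of at least one good.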