Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment (of a target attribute) that is computationally complex, and instead substitutes a more easily calculated heuristic attribute. This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.
The theory of attribute substitution unifies a number of separate explanations of reasoning errors in terms of cognitive heuristics. In turn, the theory is subsumed by an effort-reduction framework proposed by Anuj K. Shah and Daniel M. Oppenheimer, which states that people use a variety of techniques to reduce the effort of making decisions.
In a 1974 paper, psychologists Amos Tversky and Daniel Kahneman argued that a broad family of biases (systematic errors in judgment and decision) were explainable in terms of a few heuristics (information-processing shortcuts), including availability and representativeness.
In a 2002 revision of the theory, Kahneman and Shane Frederick proposed attribute substitution as a process underlying these and other effects. They drew on a 1975 proposal by psychologist Stanley Smith Stevens that the strength of a stimulus (e.g., the brightness of a light, the severity of a crime) is encoded neurally in a way that is independent of modality. Building on this idea, Kahneman and Frederick argued that the target attribute and heuristic attribute could be unrelated.
Kahneman and Frederick propose three conditions for attribute substitution:
The target attribute is relatively inaccessible.
A semantically or associatively related attribute (the heuristic attribute) is highly accessible.
The substitution is not detected and corrected by the reflective system.
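The three conditions can be sketched as a toy decision rule. This is a minimal illustrative model, not part of Kahneman and Frederick's theory: the function name, the numeric accessibility scores, and the comparison rule are all invented for illustration.

```python
# Toy model of attribute substitution (all names and values are assumptions).

def intuitive_judgment(target_accessibility, heuristic_accessibility,
                       reflective_check=False):
    """Return which attribute the judge ends up reporting."""
    # Condition 1: the target attribute is relatively inaccessible.
    # If the target is at least as accessible, no substitution occurs.
    if target_accessibility >= heuristic_accessibility:
        return "target attribute"
    # Condition 2: a related heuristic attribute is highly accessible,
    # so the intuitive system silently substitutes it.
    answer = "heuristic attribute"
    # Condition 3: the substitution stands unless the reflective
    # system detects and corrects it.
    if reflective_check:
        answer = "target attribute"
    return answer

print(intuitive_judgment(0.2, 0.9))                         # heuristic attribute
print(intuitive_judgment(0.2, 0.9, reflective_check=True))  # target attribute
```

The model mirrors the claim in the opening paragraph: because the substitution happens in the intuitive system, the judge reports the heuristic attribute without noticing, unless the reflective system intervenes.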
Judgment heuristics, a concept frequently used in the field of social cognition, are automatic, intuitive, and fast mental operations that may be statistical or non-statistical. Individuals use these cognitive shortcuts to simplify their mental operations in order to meet the demands of the environment. For example, people tend to estimate how long it takes to find a job based on how easily they can think of individuals who were recently hired, rather than on the average search time in the population.
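The job-search example can be illustrated with a toy simulation. This is a minimal sketch with invented numbers: the "population" and the cases the judge happens to recall are assumptions chosen only to show how an availability-based estimate diverges from the population average.

```python
# Toy illustration of an availability-based estimate (all data invented).

# Hypothetical true job-search times in a population, in weeks.
population_search_times = [4, 6, 8, 10, 12, 16, 20, 24, 30, 40]

# The judge recalls only the most accessible cases: friends hired quickly.
recalled_cases = [4, 6, 8]

true_mean = sum(population_search_times) / len(population_search_times)
availability_estimate = sum(recalled_cases) / len(recalled_cases)

print(f"population mean:       {true_mean} weeks")
print(f"availability estimate: {availability_estimate} weeks")
```

Because the recalled cases are a biased, easily accessible sample, the estimate (6 weeks) falls well below the population mean (17 weeks), which is the signature of substituting ease of recall for the statistical quantity actually asked about.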
[Figure: a codex of 180+ cognitive biases, designed by John Manoogian III (jm3), organizational model by Buster Benson. Cognitive biases can be organized into four categories: biases that arise from too much information, not enough meaning, the need to act fast, and the limits of memory.] A cognitive bias is a deviation in the cognitive processing of a piece of information. The term bias refers to a deviation of logical, rational thought from reality.