What is ‘Omission Bias’?

Introduction

Omission bias is the phenomenon in which people prefer omission (inaction) over commission (action), and tend to judge harm as a result of commission more negatively than harm as a result of omission. It can occur due to a number of processes, including psychological inertia, the perception of transaction costs, and the perception that commissions are more causal than omissions.

In socio-political terms, Article 2 of the Universal Declaration of Human Rights establishes that basic human rights apply "without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status" – criteria that are often subject to one or another form of omission bias. It is controversial whether omission bias is a cognitive bias or is often rational. The bias is often illustrated through the trolley problem and has also been offered as an explanation for the endowment effect and status quo bias.

Examples and Applications

Taoism may gnomically promote inaction:

“If you follow the Way you shall do less each day. You shall do less and less until you do nothing at all. And if you do nothing at all, there is nothing that is left undone.”

Spranca, Minsk and Baron extended omission bias to judgements of the morality of choices.

In one scenario, John, a tennis player, would be facing a tough opponent the next day in a decisive match. John knows his opponent is allergic to a food substance.

Subjects were presented with two conditions: John recommends the food containing the allergen to hurt his opponent’s performance, or the opponent himself orders the allergenic food and John says nothing. A majority of people judged John’s action of recommending the allergenic food to be more immoral than John’s inaction of not informing the opponent about the allergenic substance.

The effect has also held in real-world athletic arenas: NBA statistics showed that referees called 50% fewer fouls in the final moments of close games.

An additional real-world example is when parents decide not to vaccinate their children because of the potential chance of death, even though the probability that the vaccine will cause death is far lower than the probability of death from the disease it prevents.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Omission_bias >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Status Quo Bias

Introduction

A status quo bias or default bias is a cognitive bias which results from a preference for the maintenance of one’s existing state of affairs. The current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss or gain. Relative to the available alternatives, this current baseline or default option is perceived and evaluated by individuals as a positive.

Status quo bias should be distinguished from a rational preference for the status quo, as when the current state of affairs is objectively superior to the available alternatives, or when imperfect information is a significant problem. A large body of evidence, however, shows that status quo bias frequently affects human decision-making. Status quo bias should also be distinguished from psychological inertia, which refers to a lack of intervention in the current course of affairs.

The bias intersects with other non-rational cognitive processes such as loss aversion, in which losses are weighed more heavily than gains. Further related processes include existence bias, the endowment effect, longevity, mere exposure, and regret avoidance. Experimental evidence for the detection of status quo bias comes from the use of the reversal test. A large number of experimental and field examples exist: behaviour regarding economics, retirement plans, health, and ethical choices all shows evidence of the status quo bias.

Examples

Status quo experiments have been conducted across many fields; Kahneman, Knetsch and Thaler (1991) ran experiments on the endowment effect, loss aversion and status quo bias. Experiments have also examined the effect of status quo bias on contributions to retirement plans, and Février and Gay (2004) studied status quo bias in organ donation consent.

Questionnaire

1. Samuelson and Zeckhauser (1988) demonstrated status quo bias using a questionnaire in which subjects faced a series of decision problems, which were framed alternately with and without a pre-existing status quo position.
2. Subjects tended to remain with the status quo when such a position was offered to them.
3. Results of the experiment further show that the relative advantage of the status quo increases with the number of alternatives in the choice set.
4. Furthermore, the bias was weaker when the individual exhibited a strong discernible preference for a particular alternative.

Hypothetical Choice Tasks

1. Samuelson and Zeckhauser (1988) gave subjects a hypothetical choice task in the following “neutral” version, in which no status quo was defined: “You are a serious reader of the financial pages but until recently you have had few funds to invest.
2. That is when you inherited a large sum of money from your great-uncle.
3. You are considering different portfolios.
4. Your choices are to invest in: a moderate-risk company, a high-risk company, treasury bills, municipal bonds.”
5. Other subjects were presented with the same problem but with one of the options designated as the status quo.
6. In this case, the opening passage continued: “A significant portion of this portfolio is invested in a moderate risk company … (The tax and broker commission consequences of any changes are insignificant.)”
7. The result was that an alternative became much more popular when it was designated as the status quo.

Electric Power Consumers

1. California electric power consumers were asked about their preferences regarding trade-offs between service reliability and rates.
2. The respondents fell into two groups, one with much more reliable service than the other.
3. Each group was asked to state a preference among six combinations of reliability and rates, with one of the combinations designated as the status quo.
4. A strong bias to the status quo was observed.
5. Of those in the high-reliability group, 60.2% chose the status quo, whereas a mere 5.7% chose the low-reliability option that the other group had been experiencing, despite its lower rates.
6. Similarly, of those in the low-reliability group, 58.3% chose their low-reliability status quo, and only 5.8% chose the high-reliability option.

Automotive Insurance Consumers

1. The US states of New Jersey and Pennsylvania inadvertently ran a real-life experiment providing evidence of status quo bias in the early 1990s.
2. As part of tort law reform programmes, citizens were offered two options for their automotive insurance: an expensive option giving them full right to sue and a less expensive option with restricted rights to sue.
3. In New Jersey the cheaper insurance was the default, and in Pennsylvania the expensive insurance was the default.
4. Johnson, Hershey, Meszaros and Kunreuther (1993) conducted a questionnaire to test whether consumers would stay with the default option for car insurance.
5. They found that only 20% of New Jersey drivers switched from the default option to the more expensive option.
6. Similarly, only 25% of Pennsylvania drivers switched from the default option to the cheaper insurance.
7. Therefore, framing and status quo bias can have significant financial consequences.

General Practitioners

1. Boonen, Donkers and Schut created two discrete choice experiments for Dutch residents to determine consumers’ preferences for general practitioners and whether they would leave their current practitioner.
2. The Dutch health care system was chosen because general practitioners there play the role of a gatekeeper.
3. The experiment investigated the effect of status quo bias on a consumer’s decision to leave their current practitioner, with knowledge of other practitioners and the current relationship with their practitioner determining the role status quo bias plays.

The questionnaire showed that respondents were aware that staying with their current general practitioner offered no added benefit and were aware of the quality differences between potential practitioners. 35% of respondents were willing to pay a co-payment to stay with their current general practitioner, while only 30% were willing to switch to another practitioner in exchange for a financial gain. These consumers were willing to pay a considerable amount, up to €17.32, to continue seeing their current practitioner. For general practitioners, the value consumers assigned to staying with their current one exceeded the total value assigned to all other attributes tested, such as discounts or a certificate of quality.

Within the discrete choice experiment, respondents were offered a choice between their current practitioner and a hypothetical provider with identical attributes. Respondents were 40% more likely to choose their current practitioner than they would have been had both options been hypothetical providers, in which case each option would be expected to be chosen 50% of the time. Status quo bias therefore had a large impact on which general practitioner respondents chose. Even when offered positive financial incentives, qualitative incentives, or negative financial incentives for staying, respondents remained extremely hesitant to switch from their current practitioner. The authors concluded that status quo bias makes attempts to channel consumers away from the general practitioner they are currently seeing a daunting task.

Explanations

Status quo bias has been attributed to a combination of loss aversion and the endowment effect, two ideas relevant to prospect theory. An individual weighs the potential losses of switching from the status quo more heavily than the potential gains; this is due to the prospect theory value function being steeper in the loss domain. As a result, the individual will prefer not to switch at all. In other words, we tend to oppose change unless the benefits outweigh the risks. However, the status quo bias is maintained even in the absence of gain/loss framing: for example, when subjects were asked to choose the colour of their new car, they tended towards one colour arbitrarily framed as the status quo. Loss aversion, therefore, cannot wholly explain the status quo bias, with other potential causes including regret avoidance, transaction costs and psychological commitment.

Rational Routes to Status Quo Maintenance

Maintaining the status quo can also be a rational choice when there are cognitive or informational limitations.

Informational Limitations

Decision outcomes are rarely certain, nor is the utility they may bring. Because some errors are more costly than others (Haselton & Nettle, 2006), sticking with what worked in the past is a safe option, as long as previous decisions are “good enough”.

Cognitive Limitations

Cognitive limitations centre on the cognitive cost of choice: decisions become more likely to be postponed as more alternatives are added to the choice set. Moreover, the mental effort needed to maintain the status quo is often lower, so the benefit of a superior alternative can be outweighed by the cognitive costs of making the decision. Consequently, maintaining the current or previous state of affairs is regarded as the easier alternative.

Irrational Routes

Irrational maintenance of the status quo links to, and is confounded with, many other cognitive biases.

Existence Bias

Assumptions of longevity and goodness are part of the status quo bias. People treat existence as a prima facie case for goodness; aesthetics and longevity increase this preference. The status quo bias affects people’s preferences; people report preferences for what they are likely rather than unlikely to receive. People simply assume, with little reason or deliberation, the goodness of existing states.

Longevity is a corollary of the existence bias: if existence is good, longer existence should be better. This thinking resembles quasi-evolutionary notions of “survival of the fittest”, and also the augmentation principle in attribution theory.

Psychological inertia is another reason used to explain a bias towards the status quo. A further explanation is fear of regret over making a wrong decision, for example choosing a partner while believing there could be someone better out there.

Mere Exposure

Mere exposure is an explanation for the status quo bias. Existing states are encountered more frequently than non-existent states, and because of this they are perceived as more true and evaluated more favourably. One way to increase liking for something is repeated exposure over time.

Loss Aversion

Loss aversion also leads to greater regret for action than for inaction; more regret is experienced when a decision changes the status quo than when it maintains it. Together these forces provide an advantage for the status quo; people are motivated to do nothing or to maintain current or previous decisions. Change is avoided, and decision makers stick with what has been done in the past.

Changes from the status quo will typically involve both gains and losses, with the change having good overall consequences if the gains outweigh these losses. A tendency to overemphasize the avoidance of losses will thus favour retaining the status quo, resulting in a status quo bias. Even though choosing the status quo may entail forfeiting certain positive consequences, when these are represented as forfeited “gains” they are psychologically given less weight than the “losses” that would be incurred if the status quo were changed.

The loss aversion explanation for the status quo bias has been challenged by David Gal and Derek Rucker who argue that evidence for loss aversion (i.e. a tendency to avoid losses more than to pursue gains) is confounded with a tendency towards inertia (a tendency to avoid intervention more than to intervene in the course of affairs). Inertia, in this sense, is related to omission bias, except it need not be a bias but might be perfectly rational behaviour stemming from transaction costs or lack of incentive to intervene due to fuzzy preferences.

Omission Bias

Omission bias may account for some of the findings previously ascribed to status quo bias. Omission bias is diagnosed when a decision maker prefers a harmful outcome that results from an omission to a less harmful outcome that results from an action.

A study by Ilana Ritov and Jonathan Baron on status quo and omission biases suggests that omission bias may also be diagnosed when the decision maker is unwilling to express a preference among the available options, thereby reducing the number of decisions in which utilities must be compared and weighed.

Detection

The reversal test: when a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias. The rationale of the reversal test is: if a continuous parameter admits of a wide range of possible values, only a tiny subset of which can be local optima, then it is prima facie implausible that the actual value of that parameter should just happen to be at one of these rare local optima.
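As an illustration, the test’s logic can be written as a small decision procedure. This is a minimal sketch of the reasoning described above, not a published formalisation; the function name and its inputs are hypothetical labels for the evaluator’s judgements.

```python
def reversal_test(increase_judged_bad: bool, decrease_judged_bad: bool,
                  current_value_defended_as_optimal: bool) -> str:
    """Apply the reversal-test reasoning described above to a single parameter."""
    if increase_judged_bad and decrease_judged_bad:
        if current_value_defended_as_optimal:
            return "possibly rational: the current value is argued to be a local optimum"
        return "suspect status quo bias: both directions of change are rejected without justification"
    return "no status quo bias indicated by this test"


# Example: raising the parameter is judged bad, lowering it is also judged bad,
# and no independent argument is offered that the current value is optimal.
print(reversal_test(True, True, False))
```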

Neural Activity

A study found that erroneous status quo rejections have a greater neural impact than erroneous status quo acceptances. This asymmetry in the genesis of regret might drive the status quo bias on subsequent decisions.

A study was done using a visual detection task in which subjects tended to favour the default when making difficult, but not easy, decisions. This bias was suboptimal in that more errors were made when the default was accepted. A selective increase in sub-thalamic nucleus (STN) activity was found when the status quo was rejected in the face of heightened decision difficulty. Analysis of effective connectivity showed that inferior frontal cortex, a region more active for difficult decisions, exerted an enhanced modulatory influence on the STN during switches away from the status quo.

Research by University College London scientists examined the neural pathways involved in ‘status quo bias’ in the human brain and found that the more difficult the decision we face, the more likely we are not to act. The study, published in Proceedings of the National Academy of Sciences (PNAS), looked at the decision-making of participants taking part in a tennis ‘line judgement’ game while their brains were scanned using functional MRI (fMRI). The 16 study participants were asked to look at a cross between two tramlines on a screen while holding down a ‘default’ key. They then saw a ball land in the court and had to make a decision as to whether it was in or out. On each trial, the computer signalled which was the current default option – ‘in’ or ‘out’. The participants continued to hold down the key to accept the default and had to release it and change to another key to reject the default. The results showed a consistent bias towards the default, which led to errors. As the task became more difficult, the bias became even more pronounced. The fMRI scans showed that a region of the brain known as the sub-thalamic nucleus (STN) was more active in the cases when the default was rejected. Also, greater flow of information was seen from a separate region sensitive to difficulty (the prefrontal cortex) to the STN. This indicates that the STN plays a key role in overcoming status quo bias when the decision is difficult.

Behavioural Economics and the Default Position

Against this background, two behavioural economists devised an opt-out plan to help employees of a particular company build their retirement savings. In an opt-out plan, the employees are automatically enrolled unless they explicitly ask to be excluded. They found evidence for status quo bias and other associated effects. The impact of defaults on decision making due to status quo bias is not purely due to subconscious bias, as it has been found that even when disclosing the intent of the default to consumers, the effect of the default is not reduced.

An experiment conducted by Sen Geng on status quo bias and decision time allocation reveals that individuals allocate more attention to default options than to alternatives. This is because mainly risk-averse individuals seek greater expected utility and decreased subjective uncertainty when making their decision. Furthermore, by optimally allocating more time and asymmetric attention to default options or positions, the individual’s estimate of the default’s value becomes more precise than estimates of the alternatives. This behaviour reflects the individual’s asymmetric choice error and is therefore an indication of status quo bias.

Conflict

Status quo educational bias can be both a barrier to political progress and a threat to the state’s legitimacy. Some argue that the values of stability, compliance, and patriotism underpin important reasons for status quo bias that appeal not to the substantive merits of existing institutions but merely to the fact that those institutions are the status quo.

Relevant Fields

The status quo bias is seen in important real life decisions; it has been found to be prominent in data on selections of health care plans and retirement programmes.

Politics

There is a belief that preference for the status quo represents a core component of conservative ideology in societies where government power is limited and laws restricting actions of individuals exist. Conversely, in liberal societies, movements to impose restrictions on individuals or governments are met with widespread opposition by those that favour the status quo. Regardless of the type of society, the bias tends to hinder progressive movements in the absence of a reaction or backlash against the powers that be.

Ethics

Status quo bias may be responsible for much of the opposition to human enhancement in general and to genetic cognitive enhancement in particular. Some ethicists argue, however, that status quo bias may not be irrational in such cases. The rationality of status quo bias is also an important question in the ethics of disability.

Education

Education can (sometimes unintentionally) encourage children’s belief in the substantive merits of a particular existing law or political institution, where the effect does not derive from an improvement in their ability or critical thinking about that law or institution. However, this biasing effect is not automatically illegitimate or counterproductive: a balance between social inculcation and openness needs to be maintained.

Given that educational curriculums are developed by Governments and delivered by individuals with their own political thoughts and feelings, the content delivered may be inadvertently affected by bias. When Governments implement certain policies, they become the status quo and are then presented as such to children in the education system. Whether through intentional or unintentional means, when learning about a topic, educators may favour the status quo. They may simply not know the full extent of the arguments against the status quo or may not be able to present an unbiased account of each side because of their personal biases.

Health

An experiment was conducted to determine whether a status quo bias toward current medication, even when better alternatives are offered, exists in a stated-choice study among asthma patients who take prescription combination maintenance medications. The results indicate that status quo bias may exist in stated-choice studies, especially with medications that patients must take daily, such as asthma maintenance medications. Stated-choice practitioners should include a current medication in choice surveys to control for this bias.

Retirement Plans

A study in 1986 examined the effect of status quo bias on those planning their retirement savings when given a yearly choice between two investment funds. Participants were able to choose how to proportionally split their retirement savings between the two funds at the beginning of each year. After each year, they were able to amend their chosen split without switching costs as their preferences changed. Even though the two funds had vastly different returns in both absolute and relative terms, the majority of participants never switched their preferences across the trial period. Status quo bias was also more evident in older participants, who preferred to stay with their original investment rather than switching as new information came to light.

In Negotiation

Korobkin studied the link between negotiation and status quo bias in 1998. His studies show that parties negotiating contracts favour inaction, preferring the legal standards and contract default terms that will govern absent action, and are biased against alternative solutions. Heifetz and Segev’s 2004 study found support for the existence of a toughness bias, which resembles the so-called endowment effect in how it affects sellers’ behaviour.

Price Management

Status quo bias plays a maintaining role in the theory-practice gap in price management, as revealed in Dominic Bergers’ research on status quo bias and its individual differences from a price management perspective. Bergers identified status quo bias as a possible influence on the 22 rationality deficits identified and explained by Rullkötter (2009), and attributed it further to deficits within Simon and Fassnacht’s (2016) price management process phases. Status quo bias remained an underlying possible cause of 16 of the 22 rationality deficits; examples can be seen within the analysis and implementation phases of price management processes.

Bergers’ research suggests that status quo bias within the former phase (analysis) potentially led to complete reliance on traditionally existing external information sources; from a price management perspective, this can be demonstrated when monitoring competitors’ pricing. In the latter phase (implementation), status quo bias potentially led to the final price being determined by decentralised staff, a practice potentially perpetuated by existing system profitability within price management practices.

Mutual Fund Market

An empirical study conducted by Alexander Kempf and Stefan Ruenzi examined the presence of status quo bias in the US equity mutual fund market and the extent to which it depends on the number of alternatives offered. Using real data from the US mutual fund market, the study finds that status quo bias influences fund investors and that the bias grows stronger as the number of alternatives increases, confirming Samuelson and Zeckhauser’s (1988) experimental results.

Economic Research

Status quo bias has a significant impact on economic research and policy creation. Anchoring and adjustment theory in economics holds that people’s decisions and outcomes are affected by their initial reference point; for a consumer, the reference point is usually the status quo. Status quo bias means the default option is better understood by consumers than alternative options, so the status quo option carries less uncertainty and higher expected utility for risk-averse decision makers. Status quo bias is compounded by loss aversion, whereby consumers see disadvantages as looming larger than advantages when moving away from the reference point. Economics can also describe loss aversion graphically: a consumer’s utility function for losses is negative and roughly twice as steep as the utility function for gains. Consumers therefore perceive the negative effect of a loss as more significant and stay with the status quo. Choosing the status quo in this way goes against rational consumer choice theory, which underpins many economic decisions by defining a set of rules for consumer behaviour, because consumers are not maximising their utility. Status quo bias therefore has substantial implications for economic theory.
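A small numerical sketch of the graphical argument above, assuming a piecewise-linear value function in which losses are weighted twice as heavily as gains (the 2.0 coefficient is illustrative, not an empirical estimate):

```python
def perceived_value(change: float, loss_aversion: float = 2.0) -> float:
    """Piecewise-linear value function: gains count one-for-one, while losses
    are weighted `loss_aversion` times as heavily (the 'twice as steep' slope
    described above). The 2.0 coefficient is illustrative only."""
    return change if change >= 0 else loss_aversion * change


# A move away from the status quo bringing a gain of 10 and a loss of 8 is
# objectively worthwhile (+2), yet its perceived value is negative, so a
# loss-averse consumer keeps the status quo.
print(perceived_value(10) + perceived_value(-8))  # -6.0
```

Under these assumptions, any change whose losses exceed half of its gains is rejected, which is one way of reading the status quo bias described in this section.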

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Status_quo_bias >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of the Cognitive Miser

Introduction

In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending cognitive effort. The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain when and why people are cognitive misers.

The term cognitive miser was first introduced by Susan Fiske and Shelley Taylor in 1984. It is an important concept in social cognition theory and has been influential in other social sciences such as economics and political science.

Simply put, people are limited in their capacity to process information, so they take shortcuts whenever they can.

Assumption

The metaphor of the cognitive miser assumes that the human mind is limited in time, knowledge, attention, and cognitive resources. Usually people do not think rationally or cautiously, but use cognitive shortcuts to make inferences and form judgements. These shortcuts include the use of schemas, scripts, stereotypes, and other simplified perceptual strategies instead of careful thinking. For example, people tend to make correspondent inferences and are likely to believe that behaviours are correlated with, or representative of, stable characteristics.

Background

The Naïve Scientist and Attribution Theory

Before Fiske and Taylor’s cognitive miser theory, the predominant model of social cognition was the naïve scientist. First proposed in 1958 by Fritz Heider in The Psychology of Interpersonal Relations, this theory holds that humans think and act with dispassionate rationality whilst engaging in detailed and nuanced thought processes for both complex and routine actions. In this way, humans were thought to think like scientists, albeit naïve ones, measuring and analysing the world around them. Applying this framework to human thought processes, naïve scientists seek the consistency and stability that comes from a coherent view of the world and need for environmental control.

In order to meet these needs, naïve scientists make attributions. Thus, attribution theory emerged from the study of the ways in which individuals assess causal relationships and mechanisms. Through the study of causal attributions, led by Harold Kelley and Bernard Weiner amongst others, social psychologists began to observe that subjects regularly demonstrate several attributional biases including but not limited to the fundamental attribution error.

The study of attributions had two effects: it created further interest in testing the naïve scientist and opened up a new wave of social psychology research that questioned its explanatory power. This second effect helped to lay the foundation for Fiske and Taylor’s cognitive miser.

Stereotypes

According to Walter Lippmann’s arguments in his classic book Public Opinion, people are not equipped to deal with complexity. Attempting to observe things freshly and in detail is mentally exhausting, especially amid busy affairs. The term stereotype is thus introduced: people have to reconstruct a complex situation on a simpler model before they can cope with it, and the simpler model can be regarded as a stereotype. Stereotypes are formed from outside sources identified with people’s interests and can be reinforced, since people tend to be impressed by the facts that fit their philosophy.

On the other hand, in Lippmann’s view, people are told about the world before they see it. People’s behaviour is based not on direct and certain knowledge, but on pictures made by them or given to them. Hence, the influence of external factors cannot be neglected in shaping people’s stereotypes.

“The subtlest and most pervasive of all influences are those which create and maintain the repertory of stereotypes.”

That is to say, people live in a second-hand world of mediated reality, where the simplified model for thinking (i.e. stereotypes) can be created and maintained by external forces. Lippmann suggested that the public “cannot be wise”, since they can easily be misled by an overly simplified reality that is consistent with their pre-existing pictures in mind, and any disturbance of the existing stereotypes will seem like “an attack upon the foundation of the universe”.

Although Lippmann did not directly define the term cognitive miser, stereotypes serve important functions in simplifying people’s thinking process. As a form of cognitive simplification, stereotyping allows an economical management of reality; without it, people would be overwhelmed by the complexity of the real rationales. The stereotype, as a phenomenon, has become a standard topic in sociology and social psychology.

Heuristics

Much of the cognitive miser theory is built upon work on heuristics in judgement and decision-making, most notably the results of Amos Tversky and Daniel Kahneman published in a series of influential articles. Heuristics can be defined as the “judgmental shortcuts that generally get us where we need to go—and quickly—but at the cost of occasionally sending us off course.” In their work, Kahneman and Tversky demonstrated that people rely upon different types of heuristics or mental shortcuts in order to save time and mental energy. However, relying on heuristics instead of detailed analysis, like the information processing employed by Heider’s naïve scientist, makes biased information processing more likely. Some of these heuristics include:

  • Representativeness heuristic (the inclination to assign specific attributes to an individual the more he/she matches the prototype of that group).
  • Availability heuristic (the inclination to judge the likelihood of something occurring because of the ease of thinking of examples of that event occurring).
  • Anchoring and adjustment heuristic (the inclination to overweight the importance and influence of an initial piece of information, and then adjusting one’s answer away from this anchor).

The frequency with which Kahneman and Tversky and other attribution researchers found that individuals employed mental shortcuts to make decisions and assessments laid important groundwork for the overarching idea that individuals and their minds act efficiently instead of analytically.

Cognitive Miser Theory

The wave of research on attributional biases done by Kahneman, Tversky and others effectively ended the dominance of Heider’s naïve scientist within social psychology. Fiske and Taylor, building upon the prevalence of heuristics in human cognition, offered their theory of the cognitive miser. It is, in many ways, a unifying theory of ad-hoc decision-making which suggests that humans engage in economically prudent thought processes instead of acting like scientists who rationally weigh cost and benefit data, test hypotheses, and update expectations based upon the results of the discrete experiments that are our everyday actions. In other words, humans are more inclined to act as cognitive misers using mental short cuts to make assessments and decisions regarding issues and ideas about which they know very little, including issues of great salience. Fiske and Taylor argue that it is rational to act as a cognitive miser due to the sheer volume and intensity of information and stimuli humans intake. Given the limited information processing capabilities of individuals, people try to adopt strategies that economise complex problems. Cognitive misers usually act in two ways: by disregarding part of the information to reduce their own cognitive load, or by overusing some kind of information to avoid the burden of finding and processing more information.

Other psychologists also argue that the cognitively miserly tendency of humans is a primary reason why “humans are often less than rational”. This view holds that evolution has made the brain extremely sparing in its allocation and use of cognitive resources. The basic principle is to save mental energy as much as possible, even when it is required to “use your head”. Unless the cognitive environment meets certain criteria, we will, by default, try to avoid thinking as much as possible.

Implications

The implications of this theory raise important questions about both cognition and human behaviour. In addition to streamlining cognition in complicated, analytical tasks, the cognitive miser approach is also used when dealing with unfamiliar issues and issues of great importance.

Politics

Voting behaviour in democracies is an arena in which the cognitive miser is at work. Acting as a cognitive miser should lead those with expertise in an area to more efficient information processing and streamlined decision making. However, as Lau and Redlawsk note, acting as a cognitive miser who employs heuristics can have very different results for high-information and low-information voters. They write:

“…cognitive heuristics are at times employed by almost all voters, and that they are particularly likely to be used when the choice situation facing voters is complex… heuristic use generally increases the probability of a correct vote by political experts but decreases the probability of a correct vote by novices.”

In democracies, where no vote is weighted more or less because of the expertise behind its casting, low-information voters acting as cognitive misers can make choices with broad and potentially deleterious consequences for a society.

Samuel Popkin argues that voters make rational choices by using information shortcuts that they receive during campaigns, usually using something akin to a drunkard’s search. Voters use small amounts of personal information to construct a narrative about candidates. Essentially, they ask themselves this:

“Based on what I know about the candidate personally, what is the probability that this presidential candidate was a good governor? What is the probability that he will be a good president?”

Popkin’s analysis is based on one main premise: voters use low information rationality gained in their daily lives, through the media and through personal interactions, to evaluate candidates and facilitate electoral choices.

Economics

Cognitive miserliness could also be one of the contributors to the prisoner’s dilemma in game theory. To save cognitive energy, cognitive misers tend to assume that other people are similar to themselves: habitual co-operators assume that most others are co-operators, and habitual defectors assume that most others are defectors. Experimental research has shown that, since co-operators offer to play more often and fellow co-operators more often accept their offers, co-operators have a higher expected payoff than defectors when certain boundary conditions are met.
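A toy calculation can make this claim concrete. The payoffs, play probabilities, and the simplifying assumption that players who agree to play meet others of their own type are all illustrative choices, not parameters from the cited experiments.

```python
# Toy expected-payoff comparison; all numbers are illustrative assumptions.
REWARD_MUTUAL_COOPERATION = 3.0   # payoff when two co-operators play each other
PUNISH_MUTUAL_DEFECTION = 1.0     # payoff when two defectors play each other

# Projecting their own strategy onto others, co-operators expect cooperation and
# therefore offer and accept games more often than defectors do.
p_cooperator_plays = 0.8
p_defector_plays = 0.3

# A game happens only when both sides agree; for simplicity, assume like meets like.
expected_cooperator = p_cooperator_plays ** 2 * REWARD_MUTUAL_COOPERATION  # 1.92
expected_defector = p_defector_plays ** 2 * PUNISH_MUTUAL_DEFECTION        # 0.09

print(expected_cooperator > expected_defector)  # True under these assumptions
```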

Mass Communication

Lack of public support for emerging technologies is commonly attributed to a lack of relevant information and low scientific literacy among the public. Known as the knowledge deficit model, this point of view is based on the idealistic assumptions that education for science literacy could increase public support of science, and that the focus of science communication should be increasing scientific understanding among the lay public. However, the relationship between information and attitudes towards scientific issues is not empirically supported.

Based on the assumption that human beings are cognitive misers who tend to minimise cognitive costs, low-information rationality was introduced as an empirically grounded alternative for explaining decision making and attitude formation. Rather than relying on an in-depth understanding of scientific topics, people make decisions based on other shortcuts or heuristics, such as ideological predispositions or cues from mass media, owing to the subconscious compulsion to use only as much information as necessary. The less expertise citizens have on an issue initially, the more likely they are to rely on these shortcuts. Further, people spend less cognitive effort in buying toothpaste than in picking a new car, and that difference in information-seeking is largely a function of the costs.

The cognitive miser theory thus has implications for persuading the public: attitude formation is a competition between people’s value systems and predispositions (or their own interpretive schemata) on a certain issue and how public discourses frame it. Framing theory suggests that the same topic will produce different interpretations among an audience if the information is presented in different ways. Audiences’ attitude change is closely connected with relabelling or re-framing the issue in question. In this sense, effective communication can be achieved if media provide audiences with cognitive shortcuts or heuristics that resonate with underlying audience schemata.

Risk Assessment

The metaphor of the cognitive miser can assist people in drawing lessons from risks, a risk being the possibility that an undesirable state of reality may occur. People apply a number of shortcuts or heuristics in making judgements about the likelihood of an event, because the rapid answers provided by heuristics are often right. Yet certain pitfalls may be neglected in these shortcuts. A practical example of the cognitively miserly way of thinking, in the context of a risk assessment of the Deepwater Horizon explosion, is presented below.

  • People have trouble in imagining how small failings can pile up to form a catastrophe;
  • People tend to get accustomed to risk. Due to the seemingly smooth current situation, people unconsciously adjust their acceptance of risk;
  • People tend to over-express their faith and confidence in backup systems and safety devices;
  • People regard complicated technical systems in line with complicated governing structures;
  • When concerned with a certain issue, people tend to spread good news and hide bad news; and
  • People tend to think alike if they are in the same field (see also: echo chamber), regardless of their position in a project’s hierarchy.

Psychology

The theory that human beings are cognitive misers also sheds light on dual process theory in psychology. Dual process theory proposes that there are two types of cognitive processes in the human mind. Daniel Kahneman described these as intuitive (System 1) and reasoning (System 2), respectively.

When processing with System 1, which starts automatically and without control, people expend little to no effort, but can generate complex patterns of ideas. When processing with System 2, people actively consider how best to distribute mental effort to accurately process data, and can construct thoughts in an orderly series of steps. These two cognitive processing systems are not separate and can have interactions with each other. Here is an example of how people’s beliefs are formed under the dual process model:

  • System 1 generates suggestions for System 2, with impressions, intuitions, intentions or feelings;
  • If System 1’s proposal is endorsed by System 2, those impressions and intuitions will turn into beliefs, and the sudden inspiration generated by System 1 will turn into voluntary actions;
  • When everything goes smoothly (as is often the case), System 2 adopts the suggestions of System 1 with little or no modification. Herein there is a window for bias to form, as System 2 may be trained to incorrectly regard the accuracy of data derived from observations gathered via System 1.

The reasoning process can be activated to help with the intuition when:

  • A question arises, but System 1 does not generate an answer
  • An event is detected to violate the model of world that System 1 maintains.

Conflicts also exist in this dual process. A brief example provided by Kahneman is that when we try not to stare at the oddly dressed couple at the neighbouring table in a restaurant, our automatic reaction (System 1) makes us stare at them, but conflict emerges as System 2 tries to control this behaviour.

The dual processing system can produce cognitive illusions. System 1 always operates automatically, taking the easiest shortcut but often committing errors. System 2 may also have no clue that an error has occurred. Errors can be prevented only by enhanced monitoring by System 2, which costs a great deal of cognitive effort.

Limitations

Omission of Motivation

The cognitive miser theory did not originally specify the role of motivation. In Fiske’s subsequent research, the omission of the role of intent in the metaphor of cognitive miser is recognised. Motivation does affect the activation and use of stereotypes and prejudices.

Updates and Later Research

Motivated Tactician

People tend to use heuristic shortcuts when making decisions. But a problem remains: although these shortcuts cannot match effortful thought in accuracy, people need some criterion for adopting the most adequate shortcut. Kruglanski proposed that people are a combination of naïve scientists and cognitive misers: people are flexible social thinkers who choose between multiple cognitive strategies (i.e. speed/ease vs. accuracy/logic) based on their current goals, motives, and needs.

Later models suggest that the cognitive miser and the naïve scientist create two poles of social cognition that are too monolithic. Instead, Fiske, Taylor, and Arie W. Kruglanski and other social psychologists offer an alternative explanation of social cognition: the motivated tactician. According to this theory, people employ either shortcuts or thoughtful analysis based upon the context and salience of a particular issue. In other words, this theory suggests that humans are, in fact, both naïve scientists and cognitive misers. In this sense people are strategic instead of passively choosing the most effortless shortcuts when they allocate their cognitive efforts, and therefore they can decide to be naïve scientists or cognitive misers depending on their goals.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Cognitive_miser >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Motivated Reasoning

Introduction

Motivated reasoning (motivational reasoning bias) is a cognitive and social response in which individuals, consciously or sub-consciously, allow emotion-loaded motivational biases to affect how new information is perceived. Individuals tend to favour evidence that coincides with their current beliefs and reject new information that contradicts them, despite contrary evidence.

Motivated reasoning overlaps with confirmation bias. Both favour evidence supporting one’s beliefs, at the same time dismissing contradictory evidence. However, confirmation bias is mainly a sub-conscious (innate) cognitive bias. In contrast, motivated reasoning (motivational bias) is a sub-conscious or conscious process by which one’s emotions control the evidence supported or dismissed. For confirmation bias, the evidence or arguments can be logical as well as emotional.

Motivated reasoning can be classified into two categories:

  1. Accuracy-oriented (non-directional), in which the motive is to arrive at an accurate conclusion, irrespective of the individual’s beliefs; and
  2. Goal-oriented (directional), in which the motive is to arrive at a particular conclusion.

Refer to Motivated Forgetting, Emotional Reasoning, and Motivated Tactician.

Definitions

Motivated reasoning is a cognitive and social response, in which individuals, consciously or unconsciously, allow emotion-loaded motivational biases to affect how new information is perceived. Individuals tend to favour arguments that support their current beliefs and reject new information that contradicts these beliefs.

Motivated reasoning, confirmation bias and cognitive dissonance are closely related. Both motivated reasoning and confirmation bias favour evidence supporting one’s beliefs, at the same time dismissing contradictory evidence. Motivated reasoning (motivational bias) is an unconscious or conscious process by which personal emotions control the evidence that is supported or dismissed. However, confirmation bias is mainly an unconscious (innate, implicit) cognitive bias, and the evidence or arguments utilised can be logical as well as emotional. More broadly, it is feasible that motivated reasoning can moderate cognitive biases generally, including confirmation bias.

Individual differences such as political beliefs can moderate the emotional/motivational effect. In addition, social context (groupthink, peer pressure) also partly controls the evidence utilised for motivated reasoning, particularly in dysfunctional societies. Social context moderates emotions, which in turn moderate beliefs.

Motivated reasoning differs from critical thinking, in which beliefs are assessed with a sceptical but open-minded attitude.

Cognitive Dissonance

Individuals are compelled to initiate motivated reasoning to lessen the amount of cognitive dissonance they feel. Cognitive dissonance is the feeling of psychological and physiological stress and unease between two conflicting cognitive and/or emotional elements (such as the desire to smoke, despite knowing it is unhealthy). According to Leon Festinger, there are two paths individuals can engage in to reduce the amount of distress: the first is altering behaviour or cognitive bias; the second, more common path is avoiding or discrediting information or situations that would create dissonance.

Research suggests that reasoning away contradictions is psychologically easier than revising feelings. Emotions tend to colour how “facts” are perceived. Feelings come first, and evidence is used in service of those feelings. Evidence that supports what is already believed is accepted; evidence which contradicts those beliefs is not.

Mechanisms: Cold and Hot Cognition

The notion that motives or goals affect reasoning has a long and controversial history in social psychology. This is because supportive research could be reinterpreted in entirely cognitive non-motivational terms (the hot versus cold cognition controversy). This controversy existed because of a failure to explore mechanisms underlying motivated reasoning.

Early research on how humans evaluated and integrated information supported a cognitive approach consistent with Bayesian probability, in which individuals weighted new information using rational calculations (“cold cognition”). More recent theories endorse these cognitive processes as only partial explanations of motivated reasoning, but have also introduced motivational or affective (emotional) processes (“hot cognition”).
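For concreteness, the “cold cognition” picture can be illustrated with a single Bayesian update; the prior and likelihoods below are arbitrary illustrative numbers, not values from any study.

```python
# Illustrative Bayes' rule update ("cold cognition"): the belief shifts with the
# diagnosticity of the evidence, not with what the reasoner wants to be true.
prior = 0.5                   # prior probability the hypothesis is true (illustrative)
p_evidence_given_true = 0.8   # chance of this evidence if the hypothesis is true
p_evidence_given_false = 0.2  # chance of this evidence if the hypothesis is false

posterior = (p_evidence_given_true * prior) / (
    p_evidence_given_true * prior + p_evidence_given_false * (1 - prior)
)
print(round(posterior, 2))    # 0.8: the belief moves toward the evidence
```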

Kunda Theory

Ziva Kunda reviewed research and developed a theoretical model to explain the mechanism by which motivated reasoning results in bias. Motivation to arrive at a desired conclusion provides a level of arousal, which acts as an initial trigger for the operation of cognitive processes. To participate in motivated reasoning, either consciously or subconsciously, an individual first needs to be motivated. Motivation then affects reasoning by influencing the knowledge structures (beliefs, memories, information) that are accessed and the cognitive processes used.

Lodge–Taber Theory

Milton Lodge and Charles Taber introduced an empirically supported model in which affect is intricately tied to cognition, and information processing is biased toward support for positions that the individual already holds. Their model has three components:

  • On-line processing, in which, when called on to make an evaluation, people instantly draw on stored information which is marked with affect;
  • A component by which affect is automatically activated along with the cognitive node to which it is tied; and
  • An “heuristic mechanism” for evaluating new information, which triggers a reflection on “How do I feel?” about this topic. This process results in a bias towards maintaining existing affect, even in the face of other, disconfirming information.

This theory is developed and evaluated in their book The Rationalizing Voter (2013). David Redlawsk (2002) found that the timing of when disconfirming information was introduced played a role in determining bias. When subjects encounter incongruity during an information search, the automatic assimilation and update process is interrupted. This results in one of two outcomes:

  • Subjects may enhance attitude strength in a desire to support existing affect (resulting in degradation in decision quality and potential bias); or
  • Subjects may counter-argue existing beliefs in an attempt to integrate the new data.

This second outcome is consistent with research on how processing occurs when one is tasked with accuracy goals.

To summarise, the two models differ in that Kunda identifies a primary role for cognitive strategies such as memory processes, and the use of rules in determining biased information selection, whereas Lodge and Taber identify a primary role for affect in guiding cognitive processes and maintaining bias.

Neuroscientific Evidence

A neuroimaging study by Drew Westen and colleagues does not support the use of cognitive processes in motivated reasoning, lending greater support to affective processing as a key mechanism in supporting bias. This study, designed to test the neural circuitry of individuals engaged in motivated reasoning, found that motivated reasoning “was not associated with neural activity in regions previously linked with cold reasoning tasks [Bayesian reasoning] nor conscious (explicit) emotion regulation”.

This neuroscience data suggests that “motivated reasoning is qualitatively distinct from reasoning when people do not have a strong emotional stake in the conclusions reached.” However, if there is a strong emotion attached during their previous round of motivated reasoning and that emotion is again present when the individual’s conclusion is reached, a strong emotional stake is then attached to the conclusion. Any new information in regards to that conclusion will cause motivated reasoning to reoccur. This can create pathways within the neural network that further ingrain the reasoned beliefs of that individual along similar neural networks to where logical reasoning occurs. This causes the strong emotion to reoccur when confronted with contradictory information, time and time again. This is referred to by Lodge and Taber as affective contagion. But instead of “infecting” other individuals, the emotion “infects” the individual’s reasoning pathways and conclusions.

Categories

Motivated reasoning can be classified into two categories:

  1. Accuracy-oriented (non-directional), in which the motive is to arrive at an accurate conclusion, irrespective of the individual’s beliefs; and
  2. Goal-oriented (directional), in which the motive is to arrive at a particular conclusion.

Politically motivated reasoning, in particular, is strongly directional.

Despite their differences in information processing, an accuracy-motivated and a goal-motivated individual can reach the same conclusion. Both accuracy-oriented and directional messages can move audiences in the desired direction. However, the distinction lies in crafting effective communication: those who are accuracy-motivated will respond better to credible evidence catered to the community, while those who are goal-oriented will feel less threatened when the issue is framed to fit their identity or values.

Accuracy-Oriented (Non-Directional) Motivated Reasoning

Several works on accuracy-driven reasoning suggest that when people are motivated to be accurate, they expend more cognitive effort, attend to relevant information more carefully, and process it more deeply, often using more complex rules.

Kunda asserts that accuracy goals delay the process of coming to a premature conclusion, in that accuracy goals increase both the quantity and quality of processing—particularly in leading to more complex inferential cognitive processing procedures. When researchers manipulated test subjects’ motivation to be accurate by informing them that the target task was highly important or that they would be expected to defend their judgments, it was found that subjects utilized deeper processing and that there was less biasing of information. This was true when accuracy motives were present at the initial processing and encoding of information. In reviewing a line of research on accuracy goals and bias, Kunda concludes, “several different kinds of biases have been shown to weaken in the presence of accuracy goals”. However, accuracy goals do not always eliminate biases and improve reasoning: some biases (e.g. those resulting from using the availability heuristic) might be resistant to accuracy manipulations. For accuracy to reduce bias, the following conditions must be present:

  • Subjects must possess appropriate reasoning strategies.
  • They must view these as superior to other strategies.
  • They must be capable of using these strategies at will.

However, these last two conditions introduce the construct that accuracy goals involve a conscious process of utilising cognitive strategies in motivated reasoning. This construct is called into question by neuroscience research concluding that motivated reasoning is qualitatively distinct from reasoning in which there is no strong emotional stake in the outcomes. Accuracy-oriented individuals, who are thought to use “objective” processing, can still vary in how they update information, depending on how much faith they place in a given piece of evidence and on their ability to detect misinformation; failure to detect misinformation can lead to beliefs that diverge from the scientific consensus.

Goal-Oriented (Directional) Motivated Reasoning

Directional goals enhance the accessibility of knowledge structures (memories, beliefs, information) that are consistent with desired conclusions. According to Kunda, such goals can lead to biased memory search and belief construction mechanisms. Several studies support the effect of directional goals in selection and construction of beliefs about oneself, other people and the world.

Cognitive dissonance research provides extensive evidence that people may bias their self-characterisations when motivated to do so. For example, in one study, subjects altered their self-view, coming to see themselves as more extroverted, when induced to believe that extroversion was beneficial. Other biases, such as confirmation bias, the prior attitude effect and disconfirmation bias, could also contribute to goal-oriented motivated reasoning.

Michael Thaler of Princeton University conducted a study which found that men are more likely than women to demonstrate performance-motivated reasoning, owing to a gender gap in beliefs about personal performance. A second study led to the conclusion that both men and women are susceptible to motivated reasoning, but that certain motivated beliefs differ by gender.

The motivation to achieve directional goals could also influence which rules (procedural structures, such as inferential rules) are accessed to guide the search for information. Studies also suggest that evaluation of scientific evidence may be biased by whether the conclusions are in line with the reader’s beliefs.

Even under goal-oriented motivated reasoning, people are not at liberty to conclude whatever they want simply because they want to. People tend to draw a desired conclusion only if they can muster supportive evidence for it: they search memory for beliefs and rules that could support the conclusion, or construct new beliefs that logically support their desired goals.

Case Studies

Smoking

When an individual is trying to quit smoking, they might engage in motivated reasoning to convince themselves to keep smoking. They might focus on information that makes smoking seem less harmful while discrediting evidence that emphasises the dangers associated with the behaviour. Individuals in situations like this are driven to motivated reasoning to lessen the cognitive dissonance they feel. This can make it harder to quit and lead to continued smoking, even though they know it is not good for their health.

Political Bias

Peter Ditto and his students conducted a meta-analysis in 2018 of studies relating to political bias. Their aim was to assess which US political orientation (left/liberal or right/conservative) was more biased and initiated more motivated reasoning. They found that both political orientations are susceptible to bias to the same extent. The analysis was disputed by Jonathan Baron and John Jost, to whom Ditto and colleagues responded. Reviewing the debate, Stuart Vyse concluded that the answer to the question of whether US liberals or conservatives are more biased is: “We don’t know.”

On 22 April 2011, The New York Times published a series of articles attempting to explain the Barack Obama citizenship conspiracy theories. One of these articles by political scientist David Redlawsk explained these “birther” conspiracies as an example of political motivated reasoning. US presidential candidates are required to be born in the US. Despite ample evidence that President Barack Obama was born in the US state of Hawaii, many people continue to believe that he was not born in the US, and therefore that he was an illegitimate president. Similarly, many people believe he is a Muslim (as was his father), despite ample lifetime evidence of his Christian beliefs and practice (as was true of his mother). Subsequent research by others suggested that political partisan identity was more important for motivating “birther” beliefs than for some other conspiracy beliefs such as 9/11 conspiracy theories.

Climate Change

Despite a scientific consensus on climate change, citizens are divided on the topic, particularly along political lines. A significant segment of the American public has fixed beliefs, either because they are not politically engaged or because they hold strong beliefs that are unlikely to change. Liberals and progressives generally believe, based on extensive evidence, that human activity is the main driver of climate change. By contrast, conservatives are generally much less likely to hold this belief, and a subset believes that there is no human involvement and that the reported evidence is faulty (or even fraudulent). A prominent explanation is political directional motivated reasoning, in that conservatives are more likely to reject new evidence that contradicts their long-established beliefs. In addition, some highly directional climate deniers not only discredit scientific information on human-induced climate change but also seek contrary evidence that leads to a posterior belief of greater denial.

A study by Robin Bayes and colleagues of the human-induced climate change views of 1,960 members of the Republican Party found that both accuracy and directional motives move in the desired direction, but only in the presence of politically motivated messages congruent with the induced beliefs.

Social Media

Social media is used for many different purposes, including the spread of opinion. It has become a primary place people go to get information, and much of that information is opinion and bias rather than fact. Its relevance to motivated reasoning lies in how such content spreads: “motivated reasoning suggests that informational uses of social media are conditioned by various social and cultural ways of thinking”. Because ideas and opinions are shared so freely, motivated reasoning and bias can easily shape what people find when searching for answers or facts online or through any news source.

COVID-19

In the context of the COVID-19 pandemic, people who refuse to wear masks or get vaccinated may engage in motivated reasoning to justify their beliefs and actions. They may reject scientific evidence that supports mask-wearing and vaccination and instead seek out information that supports their pre-existing beliefs, such as conspiracy theories or misinformation. This can lead to behaviours that are harmful to both themselves and others.

In a 2020 study, Van Bavel and colleagues explored motivated reasoning as a contributor to the spread of misinformation and to resistance to public health measures during the COVID-19 pandemic. Their results indicated that people often engage in motivated reasoning when processing information about the pandemic, interpreting it so as to confirm their pre-existing beliefs and values. The authors argue that addressing motivated reasoning is critical to promoting effective public health messaging and reducing the spread of misinformation. They suggested several strategies, such as reframing public health messages to align with individuals’ values and beliefs, using trusted sources to convey information, and creating social norms that support public health behaviours.

Outcomes and Tackling Strategies

The outcomes of motivated reasoning derive from “a biased set of cognitive processes—that is, strategies for accessing, constructing, and evaluating beliefs. The motivation to be accurate enhances use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion.” Careful or “reflective” reasoning has been linked to both overcoming and reinforcing motivated reasoning, suggesting that reflection is not a panacea but a tool that can serve rational or irrational purposes depending on other factors. For example, when people are presented with, and forced to think analytically about, something complex that they lack adequate knowledge of (e.g. a new study on meteorology when they have no background in the subject), there is no directional shift in thinking, and their existing conclusions are more likely to be defended with motivated reasoning. Conversely, if they are presented with a simpler test of analytical thinking that confronts their beliefs (e.g. judging implausible headlines as false), motivated reasoning is less likely to occur and a directional shift in thinking may result.

Hostile Media Effect

Research on motivated reasoning has tested both accuracy goals (i.e. reaching correct conclusions) and directional goals (i.e. reaching preferred conclusions). Such factors affect perceptions, and the results confirm that motivated reasoning affects decision-making and estimates. These results have far-reaching consequences: when confronted with a small amount of information contrary to an established belief, an individual is motivated to reason away the new information, contributing to a hostile media effect. If this pattern continues over an extended period, the individual becomes more entrenched in their beliefs.

Tipping Point

However, recent studies have shown that motivated reasoning can be overcome. “When the amount of incongruency is relatively small, the heightened negative affect does not necessarily override the motivation to maintain [belief].” There is, however, evidence of a theoretical “tipping point” at which the amount of incongruent information received by the motivated reasoner turns certainty into anxiety. This anxiety about being incorrect may lead to a change of opinion for the better.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Motivated_reasoning >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Negativity Bias?

Introduction

The negativity bias, also known as the negativity effect, is a cognitive bias whereby, even when of equal intensity, things of a more negative nature (e.g. unpleasant thoughts, emotions, or social interactions; harmful/traumatic events) have a greater effect on one’s psychological state and processes than neutral or positive things.

In other words, something very positive will generally have less of an impact on a person’s behaviour and cognition than something equally emotional but negative. The negativity bias has been investigated within many different domains, including the formation of impressions and general evaluations; attention, learning, and memory; and decision-making and risk considerations.

Refer to Positivity Offset.

Explanations

Paul Rozin and Edward Royzman proposed four elements of the negativity bias in order to explain its manifestation: negative potency, steeper negative gradients, negativity dominance, and negative differentiation.

Negative potency refers to the notion that, while possibly of equal magnitude or emotionality, negative and positive items/events/etc. are not equally salient. Rozin and Royzman note that this characteristic of the negativity bias is only empirically demonstrable in situations with inherent measurability, such as comparing how positively or negatively a change in temperature is interpreted.

With respect to positive and negative gradients, negative events appear to be perceived as growing more negative, the closer one gets (spatially or temporally) to the affective event itself, faster than positive events are perceived as growing more positive. In other words, the negative gradient is steeper than the positive gradient. For example, the negative experience of an impending dental surgery is perceived as increasingly more negative the closer one gets to the date of surgery than the positive experience of an impending party is perceived as increasingly more positive the closer one gets to the date of celebration (assuming for the sake of this example that these events are equally positive and negative). Rozin and Royzman argue that this characteristic is distinct from negative potency because there appears to be evidence of steeper negative slopes relative to positive slopes even when potency itself is low.

Negativity dominance describes the tendency for the combination of positive and negative items/events/etc. to skew towards an overall more negative interpretation than would be suggested by the summation of the individual positive and negative components. Phrased in more Gestalt-friendly terms, the whole is more negative than the sum of its parts.

Negative differentiation is consistent with evidence suggesting that the conceptualization of negativity is more elaborate and complex than that of positivity. For instance, research indicates that negative vocabulary is more richly descriptive of the affective experience than that of positive vocabulary. Furthermore, there appear to be more terms employed to indicate negative emotions than positive emotions. The notion of negative differentiation is consistent with the mobilisation-minimisation hypothesis, which posits that negative events, as a consequence of this complexity, require a greater mobilisation of cognitive resources to deal with the affective experience and a greater effort to minimise the consequences.

Evidence

Social Judgements and Impression Formation

Most of the early evidence suggesting a negativity bias stems from research on social judgments and impression formation, in which it became clear that negative information was typically more heavily weighted when participants were tasked with forming comprehensive evaluations and impressions of other target individuals. Generally speaking, when people are presented with a range of trait information about a target individual, the traits are neither “averaged” nor “summed” to reach a final impression. When these traits differ in terms of their positivity and negativity, negative traits disproportionately impact the final impression. This is specifically in line with the notion of negativity dominance (refer to “Explanations” above).

As an example, a famous study by Leon Festinger and colleagues investigated critical factors in predicting friendship formation; the researchers concluded that whether or not people became friends was most strongly predicted by their proximity to one another. Ebbesen, Kjos, and Konecni, however, demonstrated that proximity itself does not predict friendship formation; rather, proximity serves to amplify the information that is relevant to the decision of either forming or not forming a friendship. Negative information is just as amplified as positive information by proximity. As negative information tends to outweigh positive information, proximity may predict a failure to form friendships even more so than successful friendship formation.

One explanation that has been put forth as to why such a negativity bias is demonstrated in social judgements is that people may generally consider negative information to be more diagnostic of an individual’s character than positive information; that is, more useful in forming an overall impression. This is supported by indications of higher confidence in the accuracy of a formed impression when it was based more on negative traits than on positive traits. People consider negative information to be more important to impression formation and, when it is available to them, they are subsequently more confident.

In an oft-cited paradox, a dishonest person can sometimes act honestly while still being considered predominantly dishonest; on the other hand, an honest person who sometimes does dishonest things will likely be reclassified as a dishonest person. It is expected that a dishonest person will occasionally be honest, but this honesty will not counteract the prior demonstrations of dishonesty. Honesty is considered more easily tarnished by acts of dishonesty. Honesty itself would then not be diagnostic of an honest nature; only the absence of dishonesty would be.

The presumption that negative information has greater diagnostic accuracy is also evident in voting patterns. Voting behaviours have been shown to be more affected or motivated by negative information than positive: people tend to be more motivated to vote against a candidate because of negative information than they are to vote for a candidate because of positive information. As noted by researcher Jill Klein, “character weaknesses were more important than strengths in determining…the ultimate vote”.

This diagnostic preference for negative traits over positive traits is thought to be a consequence of behavioural expectations: there is a general expectation that, owing to social requirements and regulations, people will generally behave positively and exhibit positive traits. Contrastingly, negative behaviours/traits are more unexpected and, thus, more salient when they are exhibited. The relatively greater salience of negative events or information means they ultimately play a greater role in the judgement process.

Attribution of Intentions

Studies reported in a paper in the Journal of Experimental Psychology: General by Carey Morewedge (2009) found that people exhibit a negativity bias in the attribution of external agency: they are more likely to attribute negative outcomes to the intentions of another person than similar neutral and positive outcomes. In laboratory experiments, Morewedge found that participants were more likely to believe that a partner had influenced the outcome of a gamble when the participants lost money than when they won money, even when the probability of winning and losing was held constant. This bias is not limited to adults; children also appear to be more likely to attribute negative events to intentional causes than similarly positive events.

Cognition

As addressed by negative differentiation, negative information seems to require greater information processing resources and activity than does positive information; people tend to think and reason more about negative events than positive events. Neurological differences also point to greater processing of negative information: participants exhibit greater event-related potentials when reading about, or viewing photographs of, people performing negative acts that were incongruent with their traits than when reading about incongruent positive acts. This additional processing leads to differences between positive and negative information in attention, learning, and memory.

Attention

A number of studies have suggested that negativity is essentially an attention magnet. For example, when tasked with forming an impression of presented target individuals, participants spent longer looking at negative photographs than they did looking at positive photographs. Similarly, participants registered more eye blinks when studying negative words than positive words (blinking rate has been positively linked to cognitive activity). Also, people were found to show greater orienting responses following negative than positive outcomes, including larger increases in pupil diameter, heart rate, and peripheral arterial tone.

Importantly, this preferential attendance to negative information is evident even when the affective nature of the stimuli is irrelevant to the task itself. The automatic vigilance hypothesis has been investigated using a modified Stroop task. Participants were presented with a series of positive and negative personality traits in several different colours; as each trait appeared on the screen, participants were to name the colour as quickly as possible. Even though the positive and negative elements of the words were immaterial to the colour-naming task, participants were slower to name the colour of negative traits than they were positive traits. This difference in response latencies indicates that greater attention was devoted to processing the trait itself when it was negative.

Aside from studies of eye blinks and colour naming, Baumeister and colleagues noted in their review of bad events versus good events that there is also easily accessible, real-world evidence for this attentional bias: bad news sells more papers and the bulk of successful novels are full of negative events and turmoil. When taken in conjunction with the laboratory-based experiments, there is strong support for the notion that negative information generally has a stronger pull on attention than does positive information.

Learning and Memory

Learning and memory are direct consequences of attentional processing: the more attention is directed or devoted toward something, the more likely it is that it will be later learned and remembered. Research concerning the effects of punishment and reward on learning suggests that punishment for incorrect responses is more effective in enhancing learning than are rewards for correct responses—learning occurs more quickly following bad events than good events.

Drs. Pratto and John addressed the effects of affective information on incidental memory as well as attention using their modified Stroop paradigm (see section concerning “Attention”). Not only were participants slower to name the colours of negative traits, they also exhibited better incidental memory for the presented negative traits than they did for the positive traits, regardless of the proportion of negative to positive traits in the stimuli set.

Intentional memory is also impacted by the stimuli’s negative or positive quality. When studying both positive and negative behaviours, participants tend to recall more negative behaviours during a later memory test than they do positive behaviours, even after controlling for serial position effects. There is also evidence that people exhibit better recognition memory and source memory for negative information.

When asked to recall a recent emotional event, people tend to report negative events more often than they report positive events, and this is thought to be because these negative memories are more salient than are the positive memories. People also tend to underestimate how frequently they experience positive affect, in that they more often forget the positively emotional experiences than they forget negatively emotional experiences.

Decision-Making

Studies of the negativity bias have also been related to research within the domain of decision-making, specifically as it relates to risk aversion or loss aversion. When presented with a situation in which a person stands to either gain something or lose something depending on the outcome, potential costs were argued to be weighed more heavily than potential gains. This greater consideration of losses (i.e. negative outcomes) is in line with the principle of negative potency proposed by Rozin and Royzman. The issue of negativity and loss aversion as it relates to decision-making is most notably addressed by Daniel Kahneman and Amos Tversky’s prospect theory.
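Prospect theory makes this asymmetry explicit through its value function. The article itself does not give the functional form; the version below is only a standard sketch with commonly cited parameter estimates, included for illustration:

    \[
      v(x) =
      \begin{cases}
        x^{\alpha} & \text{if } x \ge 0,\\
        -\lambda\,(-x)^{\beta} & \text{if } x < 0,
      \end{cases}
      \qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25
    \]

Here x is a gain or loss relative to a reference point, and the loss-aversion coefficient λ > 1 captures the claim above: a loss enters the evaluation with more weight than a gain of the same size.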

However, it is worth noting that Rozin and Royzman were never able to find loss aversion in decision making. They wrote, “in particular, strict gain and loss of money does not reliably demonstrate loss aversion”. This is consistent with the findings of a recent review of more than 40 studies of loss aversion focusing on decision problems with equal sized gains and losses. In their review, Yechiam and Hochman (2013) did find a positive effect of losses on performance, autonomic arousal, and response time in decision tasks, which they suggested is due to the effect of losses on attention. This was labelled by them as loss attention.

Politics

Research points to a correlation between political affiliation and negativity bias: individuals who are more sensitive to negative stimuli tend to lean towards right-leaning ideologies, which treat threat reduction and social order as their main focus. Individuals with a lower negativity bias tend to lean towards liberal policies such as pluralism and to be more accepting of diverse social groups, which could in turn threaten existing social structures and carry a greater risk of unrest.

Lifespan Development

Infancy

Although most of the research concerning the negativity bias has been conducted with adults (particularly undergraduate students), there have been a small number of infant studies also suggesting negativity biases.

Infants are thought to interpret ambiguous situations on the basis of how others around them react. When an adult (e.g. experimenter, mother) displays reactions of happiness, fear, or neutrality towards target toys, infants tend to approach the toy associated with the negative reaction significantly less than the neutral and positive toys. Furthermore, there was greater evidence of neural activity when infants were shown pictures of the “negative” toy than when shown the “positive” and “neutral” toys. Although recent work with 3-month-olds also suggests a negativity bias in social evaluations, there is work suggesting a potential positivity bias in attention to emotional expressions in infants younger than 7 months. A review of the literature by Drs. Amrisha Vaish, Tobias Grossman, and Amanda Woodward suggests the negativity bias may emerge during the second half of an infant’s first year, although the authors also note that research on the negativity bias and affective information has been woefully neglected within the developmental literature.

Aging and Older Adults

Some research indicates that older adults may display, at least in certain situations, a positivity bias or positivity effect. Proposed by Dr. Laura Carstensen and colleagues, the socioemotional selectivity theory outlines a shift in goals and emotion regulation tendencies with advancing age, resulting in a preference for positive information over negative information. Aside from the evidence in favour of a positivity bias, though, there have still been many documented cases of older adults displaying a negativity bias.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Negativity_bias >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Functional Fixedness?

Introduction

Functional fixedness is a cognitive bias that limits a person to using an object only in the way it is traditionally used.

The concept of functional fixedness originated in Gestalt psychology, a movement in psychology that emphasises holistic processing. Karl Duncker defined functional fixedness as a mental block against using an object in the new way that is required to solve a problem. This “block” limits the ability of an individual to use the components given to them to complete a task, as they cannot move past the original purpose of those components. For example, someone who needs a paperweight but has only a hammer may not see how the hammer can be used as a paperweight. Functional fixedness is this inability to see the hammer as anything other than a tool for pounding nails; the person cannot think to use the hammer in any way other than its conventional function.

When tested, 5-year-old children show no signs of functional fixedness. It has been argued that this is because at age 5, any goal to be achieved with an object is equivalent to any other goal. However, by age 7, children have acquired the tendency to treat the originally intended purpose of an object as special.

Examples in Research

Experimental paradigms typically involve solving problems in novel situations in which the subject has the use of a familiar object in an unfamiliar context. The object may be familiar from the subject’s past experience or from previous tasks within an experiment.

Candle Box

In a classic experiment demonstrating functional fixedness, Duncker (1945) gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Duncker found that participants tried to attach the candle directly to the wall with the tacks, or to glue it to the wall by melting it. Very few of them thought of using the inside of the box as a candle-holder and tacking this to the wall. In Duncker’s terms, the participants were “fixated” on the box’s normal function of holding thumbtacks and could not re-conceptualise it in a manner that allowed them to solve the problem. For instance, participants presented with an empty tack box were twice as likely to solve the problem as those presented with the tack box used as a container.

Candle box problem diagram.

More recently, Frank and Ramscar (2003) gave a written version of the candle problem to undergraduates at Stanford University. When the problem was given with identical instructions to those in the original experiment, only 23% of the students were able to solve the problem. For another group of students, the noun phrases such as “box of matches” were underlined, and for a third group, the nouns (e.g. “box”) were underlined. For these two groups, 55% and 47% were able to solve the problem effectively. In a follow-up experiment, all the nouns except “box” were underlined and similar results were produced. The authors concluded that students’ performance was contingent on their representation of the lexical concept “box” rather than instructional manipulations. The ability to overcome functional fixedness was contingent on having a flexible representation of the word box which allows students to see that the box can be used when attaching a candle to a wall.

When Adamson (1952) replicated Duncker’s box experiment, he split participants into two experimental groups:

  • Preutilisation; and
  • No preutilisation.

In this experiment, when there is preutilisation, meaning when objects are presented to participants in a traditional manner (materials are in the box, thus using the box as a container), participants are less likely to consider the box for any other use, whereas with no preutilisation (when boxes are presented empty), participants are more likely to think of other uses for the box.

The Two-Cords Problem

Birch and Rabinowitz (1951) adapted the two-cord problem from Norman Maier (1930, 1931), in which subjects are given two cords hanging from the ceiling and two heavy objects in the room. They are told they must connect the cords, but the cords are just far enough apart that one cannot easily be reached while holding the other. The solution is to tie one of the heavy objects to a cord to act as a weight, swing that cord as a pendulum, catch it as it swings while holding the other cord, and then tie the two together. The participants were split into three groups: Group R, which completed a pre-task of closing an electrical circuit by using a relay; Group S, which completed the circuit with a switch; and Group C, the control group, which was given no pre-test experience. Group R participants were more likely to use the switch as the weight, and Group S participants were more likely to use the relay. Both groups did so because their previous experience led them to use the objects in a certain way, and functional fixedness prevented them from seeing the objects as usable for another purpose.

Barometer Question

The barometer question is an example of an incorrectly designed examination question demonstrating functional fixedness, one that causes a moral dilemma for the examiner. In its classic form, popularised by American test designer professor Alexander Calandra (1911-2006), the question asked the student to “show how it is possible to determine the height of a tall building with the aid of a barometer”. The examiner was confident that there was one, and only one, correct answer. Contrary to the examiner’s expectations, the student responded with a series of completely different answers. These answers were also correct, yet none of them proved the student’s competence in the specific academic field being tested.

Calandra presented the incident as a real-life, first-person experience that occurred during the Sputnik crisis. Calandra’s essay, “Angels on a Pin”, was published in 1959 in Pride, a magazine of the American College Public Relations Association. It was reprinted in Current Science in 1964, reprinted again in Saturday Review in 1968, and included in the 1969 edition of Calandra’s The Teaching of Elementary Science and Mathematics. In the same year (1969), Calandra’s essay became the subject of academic discussion. The essay has been referenced frequently since, making its way into books on subjects ranging from teaching, writing skills, workplace counselling, and investment in real estate to the chemical industry, computer programming, and integrated circuit design.

Current Conceptual Relevance

Is Functional Fixedness Universal?

Researchers have investigated whether functional fixedness is affected by culture.

In a recent study, preliminary evidence supporting the universality of functional fixedness was found. The study’s purpose was to test if individuals from non-industrialized societies, specifically with low exposure to “high-tech” artefacts, demonstrated functional fixedness. The study tested the Shuar, hunter-horticulturalists of the Amazon region of Ecuador, and compared them to a control group from an industrial culture.

The Shuar community had been exposed to only a limited range of industrialised artefacts, such as machetes, axes, cooking pots, nails, shotguns, and fishhooks, all considered “low-tech”. Participants were given two tasks: the box task, in which participants had to build a tower, using a limited set of varied materials, to help a character from a fictional storyline reach another character; and the spoon task, in which participants were given a problem to solve based on a fictional story of a rabbit that had to cross a river (materials were used to represent the setting) and were given varied materials including a spoon. In the box task, Shuar participants were slower to select the materials than participants in the control condition, but no difference was seen in the time taken to solve the problem. In the spoon task, Shuar participants were slower in both selection and completion of the task. The results showed that individuals from non-industrial (“technologically sparse”) cultures were susceptible to functional fixedness: they were faster to use artefacts without priming than when the design function was explained to them. This occurred even though the participants had less exposure to industrially manufactured artefacts, and even though the few artefacts they currently use are used in multiple ways regardless of their design.

“Following the Wrong Footsteps: Fixation Effects of Pictorial Examples in a Design Problem-Solving Task”

Investigators examined in two experiments “whether the inclusion of examples with inappropriate elements, in addition to the instructions for a design problem, would produce fixation effects in students naive to design tasks”. They examined the inclusion of examples of inappropriate elements, by explicitly depicting problematic aspects of the problem presented to the students through example designs. They tested non-expert participants on three problem conditions: with standard instruction, fixated (with inclusion of problematic design), and defixated (inclusion of problematic design accompanied with helpful methods). They were able to support their hypothesis by finding that:

  1. Problematic design examples produce significant fixation effects; and
  2. Fixation effects can be diminished with the use of defixating instructions.

In “The Disposable Spill-Proof Coffee Cup Problem”, adapted from Jansson & Smith, 1991, participants were asked to construct as many designs as possible for an inexpensive, disposable, spill-proof coffee cup. Participants in the standard condition were presented only with instructions. In the fixated condition, participants were presented with the instructions, an example design, and problems they should be aware of. Finally, in the defixated condition, participants were presented with the same materials as in the other conditions, along with suggestions about design elements they should avoid using. The other two problems involved building a bike rack and designing a container for cream cheese.

Techniques to Avoid Functional Fixedness

Overcoming Functional Fixedness in Science Classrooms with Analogical Transfer

Based on the assumption that students are functionally fixed, a study of analogical transfer in the science classroom produced data suggesting a technique for overcoming functional fixedness. The findings support the claim that students show positive transfer (improved performance) on problem solving after being presented with analogies of a certain structure and format. The study expanded on Duncker’s 1945 experiments by attempting to demonstrate that when students were “presented with a single analogy formatted as a problem, rather than as a story narrative, they would orient the task of problem-solving and facilitate positive transfer”.

A total of 266 freshman students from a high school science class participated in the study. The experiment used a 2×2 design in which “task context” (analogue type and format) was crossed with “prior knowledge” (specific vs. general). Students were classified into five groups: four according to their prior science knowledge (ranging from specific to general) and one serving as a control group (no analogue presentation). The four experimental groups were then assigned to “analogue type and analogue format” conditions: structural or surface types, and problem or story formats.

Inconclusive evidence was found for positive analogical transfer based on prior knowledge; however, the groups did demonstrate variability. The problem format combined with the structural type of analogue showed the highest positive transfer to problem solving. The researcher suggested that a well-planned analogy, relevant in format and type to the problem-solving task to be completed, can help students overcome functional fixedness. The study not only brought new knowledge about the human mind at work but also provided practical tools for education and possible changes that teachers can apply to lesson plans.

Uncommitting

One study suggests that functional fixedness can be combated by “uncommitting” design decisions from functionally fixed designs so that the essence of the design is kept (Latour, 1994). This helps subjects who have created functionally fixed designs understand how to solve general problems of this type, rather than reusing the fixed solution to a specific problem. Latour investigated this by having software engineers analyse a fairly standard piece of code – the quicksort algorithm – and use it to create a partitioning function. Part of the quicksort algorithm involves partitioning a list into subsets so that it can be sorted; the experimenters wanted to reuse the code from within the algorithm to do just the partitioning. To do this, they abstracted each block of code in the function, discerning its purpose and deciding whether it was needed for the partitioning algorithm. This abstraction allowed them to reuse the code from the quicksort algorithm to create a working partition algorithm without having to design it from scratch.
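As a rough illustration of that abstraction step (not Latour’s original materials; the function names and example data below are invented for this sketch), the partition logic that quicksort normally keeps to itself can be pulled out into a standalone function and then reused for tasks that have nothing to do with sorting:

    # Sketch only: hypothetical names, not code from the Latour (1994) study.

    def partition(items, pivot):
        """Split items into those below, equal to, and above the pivot.

        This is the piece "uncommitted" from the sort: it makes no assumption
        that the caller intends to sort anything afterwards.
        """
        below = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        above = [x for x in items if x > pivot]
        return below, equal, above

    def quicksort(items):
        """Quicksort expressed in terms of the standalone partition step."""
        if len(items) <= 1:
            return list(items)
        below, equal, above = partition(items, items[0])
        return quicksort(below) + equal + quicksort(above)

    # Once extracted, the partition step can serve unrelated purposes,
    # e.g. splitting exam scores around a cut-off without sorting them:
    below_cutoff, at_cutoff, above_cutoff = partition([72, 40, 55, 90, 60], 60)

The point of the exercise is that the reusable idea (partitioning) is no longer committed to its original context (sorting), which is exactly the kind of decoupling the fixated designers had failed to make.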

Overcoming Prototypes

A comprehensive study exploring several classical functional fixedness experiments found an overarching theme of overcoming prototypes. Those who were successful at completing the tasks were able to look beyond the prototype, the original intended use of the item; conversely, those who could not produce a successful finished product could not move beyond the item’s original use. This also seemed to be the case for functional fixedness categorisation studies: reorganising seemingly unrelated items into categories was easier for those who could look beyond intended function. There is therefore a need to overcome the prototype in order to avoid functional fixedness. Carnevale (1998) suggests analysing the object and mentally breaking it down into its components. After that is completed, it is essential to explore the possible functions of those parts. In doing so, an individual may become familiar with new ways to use the items available to them. Individuals are thereby thinking creatively and overcoming the prototypes that would otherwise limit their ability to solve the functional fixedness problem.

The Generic Parts Technique

For each object, you need to decouple its function from its form. McCaffrey (2012) shows a highly effective technique for doing so. As you break an object into its parts, ask yourself two questions. “Can I subdivide the current part further?” If yes, do so. “Does my current description imply a use?” If yes, create a more generic description involving its shape and material. For example, initially I divide a candle into its parts: wick and wax. The word “wick” implies a use: burning to emit light. So, describe it more generically as a string. Since “string” implies a use, I describe it more generically: interwoven fibrous strands. This brings to mind that I could use the wick to make a wig for my hamster. Since “interwoven fibrous strands” does not imply a use, I can stop working on wick and start working on wax. People trained in this technique solved 67% more problems that suffered from functional fixedness than a control group. This technique systematically strips away all the layers of associated uses from an object and its parts.
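Read procedurally, McCaffrey’s two questions form a small recursive loop. The sketch below is only one way of illustrating that reading, using the candle walkthrough above; the part hierarchy and the dictionaries are assumptions invented for the example, since the technique itself is a mental exercise rather than software.

    # Toy sketch of the two questions applied recursively.
    # The dictionaries below are illustrative assumptions, not data from McCaffrey (2012).

    def strip_uses(description, parts_of, generic_of, depth=0):
        """Walk a parts tree, replacing use-laden descriptions with generic ones."""
        pad = "  " * depth
        # Question 1: can the current part be subdivided further? If yes, do so.
        if description in parts_of:
            print(f"{pad}'{description}' subdivides into {parts_of[description]}")
            for sub in parts_of[description]:
                strip_uses(sub, parts_of, generic_of, depth + 1)
            return
        # Question 2: does the current description imply a use? If yes,
        # restate it more generically (shape and material) and ask again.
        if description in generic_of:
            print(f"{pad}'{description}' implies a use -> call it '{generic_of[description]}'")
            strip_uses(generic_of[description], parts_of, generic_of, depth)
        else:
            print(f"{pad}'{description}' implies no use; stop here")

    # The candle example from the text: wick -> string -> interwoven fibrous strands.
    parts_of = {"candle": ["wick", "wax"]}
    generic_of = {"wick": "string", "string": "interwoven fibrous strands"}

    strip_uses("candle", parts_of, generic_of)

Running the sketch reproduces the walkthrough in the text: the candle is subdivided into wick and wax, the wick is renamed until its description no longer implies a use, and the wax is left as-is.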

What is Depressive Realism?

Introduction

Depressive realism is the hypothesis developed by Lauren Alloy and Lyn Yvonne Abramson that depressed individuals make more realistic inferences than non-depressed individuals.

Although depressed individuals are thought to have a negative cognitive bias that results in recurrent, negative automatic thoughts, maladaptive behaviours, and dysfunctional world beliefs, depressive realism argues not only that this negativity may reflect a more accurate appraisal of the world but also that non-depressed individuals’ appraisals are positively biased.

Refer to Defensive Pessimism.

Evidence (For)

When participants were asked to press a button and rate the control they perceived they had over whether or not a light turned on, depressed individuals made more accurate ratings of control than non-depressed individuals. Among participants asked to complete a task and rate their performance without any feedback, depressed individuals made more accurate self-ratings than non-depressed individuals. For participants asked to complete a series of tasks, given feedback on their performance after each task, and who self-rated their overall performance after completing all the tasks, depressed individuals were again more likely to give an accurate self-rating than non-depressed individuals. When asked to evaluate their performance both immediately after completing a task and after some time had passed, depressed individuals made accurate appraisals at both points.

In a functional magnetic resonance imaging (fMRI) study of the brain, depressed patients were shown to be more accurate in their causal attributions of positive and negative social events than non-depressed participants, who demonstrated a positive bias. This difference was also reflected in the differential activation of the fronto-temporal network: higher activation for non-self-serving attributions in non-depressed participants and for self-serving attributions in depressed patients, and reduced coupling between the dorsomedial prefrontal cortex seed region and the limbic areas when depressed patients made self-serving attributions.

Evidence (Against)

When asked to rate both their performance and the performance of others, non-depressed individuals demonstrated positive bias when rating themselves but no bias when rating others. Depressed individuals conversely showed no bias when rating themselves but a positive bias when rating others.

When assessing participant thoughts in public versus private settings, the thoughts of non-depressed individuals were more optimistic in public than private, while depressed individuals were less optimistic in public.

When asked to rate their performance immediately after a task and after some time had passed, depressed individuals were more accurate when they rated themselves immediately after the task but were more negative after time had passed whereas non-depressed individuals were positive immediately after and some time after.

Although depressed individuals make accurate judgments about having no control in situations where they in fact have no control, this appraisal also carries over to situations where they do have control, suggesting that the depressed perspective is not more accurate overall. Note, however, that this finding alone does not imply depression as a cause; researchers did not control for philosophical factors such as determinism which could affect responses.

One study suggested that in real-world settings, depressed individuals are actually less accurate and more overconfident in their predictions than their non-depressed peers. Participants’ attributional accuracy may also be more related to their overall attributional style rather than the presence and severity of their depressive symptoms.

Criticism of the Evidence

Some have argued that the evidence is not more conclusive because no standard for reality exists, the diagnoses are dubious, and the results may not apply to the real world. Because many studies rely on self-reports of depressive symptoms, and self-reports are known to be biased, the diagnosis of depression in these studies may not be valid, necessitating the use of other objective measures. Because most of these studies use designs that do not necessarily approximate real-world phenomena, the external validity of the depressive realism hypothesis is unclear. There is also concern that the depressive realism effect is merely a byproduct of the depressed person being in a situation that happens to agree with their negative bias.