An Overview of Motivated Reasoning

Introduction

Motivated reasoning (motivational reasoning bias) is a cognitive and social response in which individuals, consciously or subconsciously, allow emotion-loaded motivational biases to affect how new information is perceived. Individuals tend to favour evidence that coincides with their current beliefs and to reject new information that contradicts those beliefs.

Motivated reasoning overlaps with confirmation bias. Both favour evidence supporting one’s beliefs while dismissing contradictory evidence. However, confirmation bias is mainly a subconscious (innate) cognitive bias, whereas motivated reasoning (motivational bias) is a subconscious or conscious process in which one’s emotions determine which evidence is accepted or dismissed. In confirmation bias, the evidence or arguments involved can be logical as well as emotional.

Motivated reasoning can be classified into two categories:

  1. Accuracy-oriented (non-directional), in which the motive is to arrive at an accurate conclusion, irrespective of the individual’s beliefs; and
  2. Goal-oriented (directional), in which the motive is to arrive at a particular conclusion.

Refer to Motivated Forgetting, Emotional Reasoning, and Motivated Tactician.

Definitions

Motivated reasoning is a cognitive and social response, in which individuals, consciously or unconsciously, allow emotion-loaded motivational biases to affect how new information is perceived. Individuals tend to favour arguments that support their current beliefs and reject new information that contradicts these beliefs.

Motivated reasoning, confirmation bias and cognitive dissonance are closely related. Both motivated reasoning and confirmation bias favour evidence supporting one’s beliefs, at the same time dismissing contradictory evidence. Motivated reasoning (motivational bias) is an unconscious or conscious process by which personal emotions control the evidence that is supported or dismissed. However, confirmation bias is mainly an unconscious (innate, implicit) cognitive bias, and the evidence or arguments utilised can be logical as well as emotional. More broadly, it is feasible that motivated reasoning can moderate cognitive biases generally, including confirmation bias.

Individual differences such as political beliefs can moderate the emotional/motivational effect. In addition, social context (groupthink, peer pressure) also partly controls the evidence utilised for motivated reasoning, particularly in dysfunctional societies. Social context moderates emotions, which in turn moderate beliefs.

Motivated reasoning differs from critical thinking, in which beliefs are assessed with a sceptical but open-minded attitude.

Cognitive Dissonance

Individuals are compelled to initiate motivated reasoning to lessen the amount of cognitive dissonance they feel. Cognitive dissonance is the feeling of psychological and physiological stress and unease between two conflicting cognitive and/or emotional elements (such as the desire to smoke, despite knowing it is unhealthy). According to Leon Festinger, there are two paths individuals can take to reduce this distress: the first is altering behaviour or cognitive bias; the second, more common path is avoiding or discrediting information or situations that would create dissonance.

Research suggests that reasoning away contradictions is psychologically easier than revising feelings. Emotions tend to colour how “facts” are perceived. Feelings come first, and evidence is used in service of those feelings. Evidence that supports what is already believed is accepted; evidence which contradicts those beliefs is not.

Mechanisms: Cold and Hot Cognition

The notion that motives or goals affect reasoning has a long and controversial history in social psychology. This is because supportive research could be reinterpreted in entirely cognitive non-motivational terms (the hot versus cold cognition controversy). This controversy existed because of a failure to explore mechanisms underlying motivated reasoning.

Early research on how humans evaluated and integrated information supported a cognitive approach consistent with Bayesian probability, in which individuals weighted new information using rational calculations (“cold cognition”). More recent theories endorse these cognitive processes as only a partial explanation of motivated reasoning, and have also introduced motivational or affective (emotional) processes (“hot cognition”).
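
The contrast between the two can be sketched in a few lines of code. The sketch below is an illustration only, not a model from the literature: the `motivated_update` function and its `discount` parameter are assumptions invented here to show how a “hot” reasoner might move less than Bayes’ rule dictates when the evidence is unwelcome.

```python
# Illustrative sketch (not from the source literature): a Bayesian "cold"
# update versus a hypothetical "hot" update that discounts unwelcome evidence.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Standard Bayesian update of P(hypothesis) given one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

def motivated_update(prior, likelihood_if_true, likelihood_if_false, discount=0.5):
    """Toy 'hot cognition' update (hypothetical): when the Bayesian answer
    would lower the prior, the belief only moves part of the way there."""
    cold = bayes_update(prior, likelihood_if_true, likelihood_if_false)
    if cold < prior:  # unwelcome evidence
        return prior + discount * (cold - prior)
    return cold

prior = 0.8                               # strong existing belief
cold = bayes_update(prior, 0.2, 0.9)      # evidence points against the belief
hot = motivated_update(prior, 0.2, 0.9)
print(cold, hot)                          # the motivated reasoner moves less
```

Under these toy assumptions the cold reasoner revises the belief substantially downward, while the motivated reasoner splits the difference between the prior and the Bayesian answer.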

Kunda Theory

Ziva Kunda reviewed research and developed a theoretical model to explain the mechanism by which motivated reasoning results in bias. Motivation to arrive at a desired conclusion provides a level of arousal, which acts as an initial trigger for the operation of cognitive processes. To participate in motivated reasoning, either consciously or subconsciously, an individual first needs to be motivated. Motivation then affects reasoning by influencing the knowledge structures (beliefs, memories, information) that are accessed and the cognitive processes used.

Lodge–Taber Theory

Milton Lodge and Charles Taber introduced an empirically supported model in which affect is intricately tied to cognition, and information processing is biased toward support for positions that the individual already holds. Their model has three components:

  • On-line processing, in which, when called on to make an evaluation, people instantly draw on stored information which is marked with affect;
  • A component by which affect is automatically activated along with the cognitive node to which it is tied; and
  • A “heuristic mechanism” for evaluating new information, which triggers a reflection on “How do I feel?” about this topic. This process results in a bias towards maintaining existing affect, even in the face of other, disconfirming information.

This theory is developed and evaluated in their book The Rationalizing Voter (2013). David Redlawsk (2002) found that the timing of when disconfirming information was introduced played a role in determining bias. When subjects encounter incongruity during an information search, the automatic assimilation and update process is interrupted. This results in one of two outcomes:

  • Subjects may enhance attitude strength in a desire to support existing affect (resulting in degradation in decision quality and potential bias); or
  • Subjects may counter-argue existing beliefs in an attempt to integrate the new data.

This second outcome is consistent with research on how processing occurs when one is tasked with accuracy goals.

To summarise, the two models differ in that Kunda identifies a primary role for cognitive strategies such as memory processes, and the use of rules in determining biased information selection, whereas Lodge and Taber identify a primary role for affect in guiding cognitive processes and maintaining bias.

Neuroscientific Evidence

A neuroimaging study by Drew Westen and colleagues does not support the use of cognitive processes in motivated reasoning, lending greater support to affective processing as a key mechanism in supporting bias. This study, designed to test the neural circuitry of individuals engaged in motivated reasoning, found that motivated reasoning “was not associated with neural activity in regions previously linked with cold reasoning tasks [Bayesian reasoning] nor conscious (explicit) emotion regulation”.

This neuroscience data suggests that “motivated reasoning is qualitatively distinct from reasoning when people do not have a strong emotional stake in the conclusions reached.” If a strong emotion accompanied a previous round of motivated reasoning and is present again when the individual reaches a conclusion, that emotional stake becomes attached to the conclusion itself, and any new information bearing on it triggers motivated reasoning anew. Repetition can create pathways within the neural network that further ingrain the reasoned beliefs, along networks similar to those in which logical reasoning occurs, so that the strong emotion recurs each time contradictory information is encountered. Lodge and Taber refer to this as affective contagion: instead of “infecting” other individuals, the emotion “infects” the individual’s own reasoning pathways and conclusions.

Categories

Motivated reasoning can be classified into two categories:

  1. Accuracy-oriented (non-directional), in which the motive is to arrive at an accurate conclusion, irrespective of the individual’s beliefs; and
  2. Goal-oriented (directional), in which the motive is to arrive at a particular conclusion.

Politically motivated reasoning, in particular, is strongly directional.

Despite their differences in information processing, an accuracy-motivated and a goal-motivated individual can reach the same conclusion, and messages aimed at either motive can move beliefs in the desired direction. The distinction matters for crafting effective communication: those who are accuracy-motivated respond better to credible evidence tailored to the community, while those who are goal-oriented feel less threatened when the issue is framed to fit their identity or values.

Accuracy-Oriented (Non-Directional) Motivated Reasoning

Several works on accuracy-driven reasoning suggest that when people are motivated to be accurate, they expend more cognitive effort, attend to relevant information more carefully, and process it more deeply, often using more complex rules.

Kunda asserts that accuracy goals delay the process of coming to a premature conclusion, in that accuracy goals increase both the quantity and quality of processing—particularly in leading to more complex inferential cognitive processing procedures. When researchers manipulated test subjects’ motivation to be accurate by informing them that the target task was highly important or that they would be expected to defend their judgments, it was found that subjects utilized deeper processing and that there was less biasing of information. This was true when accuracy motives were present at the initial processing and encoding of information. In reviewing a line of research on accuracy goals and bias, Kunda concludes, “several different kinds of biases have been shown to weaken in the presence of accuracy goals”. However, accuracy goals do not always eliminate biases and improve reasoning: some biases (e.g. those resulting from using the availability heuristic) might be resistant to accuracy manipulations. For accuracy to reduce bias, the following conditions must be present:

  • Subjects must possess appropriate reasoning strategies.
  • They must view these as superior to other strategies.
  • They must be capable of using these strategies at will.

However, these last two conditions imply that accuracy goals involve a conscious process of utilising cognitive strategies in motivated reasoning. This construct is called into question by neuroscience research concluding that motivated reasoning is qualitatively distinct from reasoning in which there is no strong emotional stake in the outcomes. Accuracy-oriented individuals, who are thought to use “objective” processing, can nevertheless vary in how they update information, depending on how much faith they place in a given piece of evidence; an inability to detect misinformation can lead to beliefs that diverge from the scientific consensus.

Goal-Oriented (Directional) Motivated Reasoning

Directional goals enhance the accessibility of knowledge structures (memories, beliefs, information) that are consistent with desired conclusions. According to Kunda, such goals can lead to biased memory search and belief construction mechanisms. Several studies support the effect of directional goals in selection and construction of beliefs about oneself, other people and the world.

Cognitive dissonance research provides extensive evidence that people may bias their self-characterisations when motivated to do so. For example, in one study, subjects altered their self-view, seeing themselves as more extroverted, when induced to believe that extroversion was beneficial. Other biases, such as confirmation bias, the prior attitude effect and disconfirmation bias, can also contribute to goal-oriented motivated reasoning.

Michael Thaler of Princeton University conducted a study that found that men are more likely than women to demonstrate performance-motivated reasoning, owing to a gender gap in beliefs about personal performance. A second study led to the conclusion that both men and women are susceptible to motivated reasoning, but that certain motivated beliefs differ by gender.

The motivation to achieve directional goals could also influence which rules (procedural structures, such as inferential rules) are accessed to guide the search for information. Studies also suggest that evaluation of scientific evidence may be biased by whether the conclusions are in line with the reader’s beliefs.

Even in goal-oriented motivated reasoning, people are not at liberty to conclude whatever they want merely because they want it. They tend to draw a desired conclusion only if they can muster supportive evidence: they search memory for beliefs and rules that could support the conclusion, or construct new beliefs to justify it.

Case Studies

Smoking

When an individual is trying to quit smoking, they might engage in motivated reasoning to convince themselves to keep smoking. They might focus on information that makes smoking seem less harmful while discrediting any evidence which emphasizes any dangers associated with the behaviour. Individuals in situations like this are driven to initiate motivated reasoning to lessen the amount of cognitive dissonance they feel. This can make it harder for individuals to quit and lead to continued smoking, even though they know it is not good for their health.

Political Bias

Peter Ditto and his students conducted a meta-analysis in 2018 of studies relating to political bias. Their aim was to assess which US political orientation (left/liberal or right/conservative) was more biased and initiated more motivated reasoning. They found that both political orientations are susceptible to bias to the same extent. The analysis was disputed by Jonathan Baron and John Jost, to whom Ditto and colleagues responded. Reviewing the debate, Stuart Vyse concluded that the answer to the question of whether US liberals or conservatives are more biased is: “We don’t know.”

On 22 April 2011, The New York Times published a series of articles attempting to explain the Barack Obama citizenship conspiracy theories. One of these articles, by political scientist David Redlawsk, explained these “birther” conspiracies as an example of politically motivated reasoning. US presidential candidates are required to be born in the US. Despite ample evidence that President Barack Obama was born in the US state of Hawaii, many people continue to believe that he was not born in the US, and therefore that he was an illegitimate president. Similarly, many people believe he is a Muslim (as was his father), despite ample lifetime evidence of his Christian beliefs and practice (as was true of his mother). Subsequent research by others suggested that political partisan identity was more important for motivating “birther” beliefs than for some other conspiracy beliefs such as 9/11 conspiracy theories.

Climate Change

Despite a scientific consensus on climate change, citizens are divided on the topic, particularly along political lines. A significant segment of the American public has fixed beliefs, either because they are not politically engaged, or because they hold strong beliefs that are unlikely to change. Liberals and progressives generally believe, based on extensive evidence, that human activity is the main driver of climate change. By contrast, conservatives are generally much less likely to hold this belief, and a subset believes that there is no human involvement, and that the reported evidence is faulty (or even fraudulent). A prominent explanation is political directional motivated reasoning, in that conservatives are more likely to reject new evidence that contradicts their long-established beliefs. In addition, some highly directional climate deniers not only discredit scientific information on human-induced climate change but also seek out contrary evidence, leading to a posterior belief of greater denial.

A study by Robin Bayes and colleagues of the human-induced climate change views of 1,960 members of the Republican Party found that both accuracy and directional motives move in the desired direction, but only in the presence of politically motivated messages congruent with the induced beliefs.

Social Media

Social media is used for many purposes and as a means of spreading opinions. It is one of the primary places people go for information, and much of that information is opinionated and biased. The connection to motivated reasoning lies in how such content spreads: “motivated reasoning suggests that informational uses of social media are conditioned by various social and cultural ways of thinking”. Because all manner of ideas and opinions are shared, it becomes very easy for motivated reasoning and other biases to come into play when searching for answers, or simply facts, on the internet or any news source.

COVID-19

In the context of the COVID-19 pandemic, people who refuse to wear masks or get vaccinated may engage in motivated reasoning to justify their beliefs and actions. They may reject scientific evidence that supports mask-wearing and vaccination and instead seek out information that supports their pre-existing beliefs, such as conspiracy theories or misinformation. This can lead to behaviours that are harmful to both themselves and others.

In a 2020 study, Van Bavel and colleagues explored the concept of motivated reasoning as a contributor to the spread of misinformation and resistance to public health measures during the COVID-19 pandemic. Their results indicated that people often engage in motivated reasoning when processing information about the pandemic, interpreting it to confirm their pre-existing beliefs and values. The authors argue that addressing motivated reasoning is critical to promoting effective public health messaging and reducing the spread of misinformation. They suggested several strategies, such as reframing public health messages to align with individuals’ values and beliefs. In addition, they suggested using trusted sources to convey information by creating social norms that support public health behaviours.

Outcomes and Tackling Strategies

The outcomes of motivated reasoning derive from “a biased set of cognitive processes—that is, strategies for accessing, constructing, and evaluating beliefs. The motivation to be accurate enhances use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion.” Careful or “reflective” reasoning has been linked to both overcoming and reinforcing motivated reasoning, suggesting that reflection is not a panacea but a tool that can serve rational or irrational purposes depending on other factors. For example, when people are forced to think analytically about something complex that they lack adequate knowledge of (e.g. a new study on meteorology, read by someone with no background in the subject), there is no directional shift in thinking, and their existing conclusions are more likely to be supported with motivated reasoning. Conversely, if they are presented with a simpler test of analytical thinking that confronts their beliefs (e.g. recognising implausible headlines as false), motivated reasoning is less likely to occur and a directional shift in thinking may result.

Hostile Media Effect

Research on motivated reasoning tested accuracy goals (i.e. reaching correct conclusions) and directional goals (i.e. reaching preferred conclusions). Factors such as these affect perceptions; and results confirm that motivated reasoning affects decision-making and estimates. These results have far reaching consequences because, when confronted with a small amount of information contrary to an established belief, an individual is motivated to reason away the new information, contributing to a hostile media effect. If this pattern continues over an extended period of time, the individual becomes more entrenched in their beliefs.

Tipping Point

However, recent studies have shown that motivated reasoning can be overcome. “When the amount of incongruency is relatively small, the heightened negative affect does not necessarily override the motivation to maintain [belief].” Yet there is evidence of a theoretical “tipping point” at which the amount of incongruent information received by the motivated reasoner turns certainty into anxiety. This anxiety about being incorrect may lead to a change of opinion for the better.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Motivated_reasoning >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Negativity Bias?

Introduction

The negativity bias, also known as the negativity effect, is a cognitive bias whereby things of a more negative nature (e.g. unpleasant thoughts, emotions, or social interactions; harmful or traumatic events) have a greater effect on one’s psychological state and processes than neutral or positive things, even when they are of equal intensity.

In other words, something very positive will generally have less of an impact on a person’s behaviour and cognition than something equally emotional but negative. The negativity bias has been investigated within many different domains, including the formation of impressions and general evaluations; attention, learning, and memory; and decision-making and risk considerations.

Refer to Positivity Offset.

Explanations

Paul Rozin and Edward Royzman proposed four elements of the negativity bias in order to explain its manifestation: negative potency, steeper negative gradients, negativity dominance, and negative differentiation.

Negative potency refers to the notion that, while possibly of equal magnitude or emotionality, negative and positive items/events/etc. are not equally salient. Rozin and Royzman note that this characteristic of the negativity bias is only empirically demonstrable in situations with inherent measurability, such as comparing how positively or negatively a change in temperature is interpreted.

With respect to positive and negative gradients, it appears to be the case that negative events are thought to be perceived as increasingly more negative than positive events are increasingly positive the closer one gets (spatially or temporally) to the affective event itself. In other words, there is a steeper negative gradient than positive gradient. For example, the negative experience of an impending dental surgery is perceived as increasingly more negative the closer one gets to the date of surgery than the positive experience of an impending party is perceived as increasingly more positive the closer one gets to the date of celebration (assuming for the sake of this example that these events are equally positive and negative). Rozin and Royzman argue that this characteristic is distinct from that of negative potency because there appears to be evidence of steeper negative slopes relative to positive slopes even when potency itself is low.

Negativity dominance describes the tendency for a combination of positive and negative items/events/etc. to skew towards a more negative overall interpretation than would be suggested by the sum of the individual positive and negative components. Phrased in more Gestalt-friendly terms, the whole is more negative than the sum of its parts.
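
The distinction between the summed parts and the more negative whole can be made concrete with a toy aggregation rule. This is an illustration only, not a model from Rozin and Royzman; the `neg_weight` parameter is a hypothetical device for giving negative items extra weight in the holistic judgement.

```python
# Toy illustration of negativity dominance (hypothetical weighting, not a
# published model): the holistic evaluation counts negative items more
# heavily than a plain sum of the parts would.

def summed_evaluation(items):
    """Baseline: the evaluation as the plain sum of the parts."""
    return sum(items)

def holistic_evaluation(items, neg_weight=1.5):
    """Negativity dominance: each negative item counts neg_weight times as much."""
    return sum(x * neg_weight if x < 0 else x for x in items)

items = [3, -3]                      # one positive and one equally strong negative
print(summed_evaluation(items))      # the parts cancel out
print(holistic_evaluation(items))    # the whole skews negative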

Negative differentiation is consistent with evidence suggesting that the conceptualization of negativity is more elaborate and complex than that of positivity. For instance, research indicates that negative vocabulary is more richly descriptive of the affective experience than that of positive vocabulary. Furthermore, there appear to be more terms employed to indicate negative emotions than positive emotions. The notion of negative differentiation is consistent with the mobilisation-minimisation hypothesis, which posits that negative events, as a consequence of this complexity, require a greater mobilisation of cognitive resources to deal with the affective experience and a greater effort to minimise the consequences.

Evidence

Social Judgements and Impression Formation

Most of the early evidence suggesting a negativity bias stems from research on social judgments and impression formation, in which it became clear that negative information was typically more heavily weighted when participants were tasked with forming comprehensive evaluations and impressions of other target individuals. Generally speaking, when people are presented with a range of trait information about a target individual, the traits are neither “averaged” nor “summed” to reach a final impression. When these traits differ in terms of their positivity and negativity, negative traits disproportionately impact the final impression. This is specifically in line with the notion of negativity dominance (refer to “Explanations” above).

As an example, a famous study by Leon Festinger and colleagues investigated critical factors in predicting friendship formation; the researchers concluded that whether or not people became friends was most strongly predicted by their proximity to one another. Ebbesen, Kjos, and Konecni, however, demonstrated that proximity itself does not predict friendship formation; rather, proximity serves to amplify the information that is relevant to the decision of either forming or not forming a friendship. Negative information is just as amplified as positive information by proximity. As negative information tends to outweigh positive information, proximity may predict a failure to form friendships even more so than successful friendship formation.

One explanation that has been put forth for why such a negativity bias appears in social judgements is that people may generally consider negative information to be more diagnostic of an individual’s character than positive information, i.e. more useful in forming an overall impression. This is supported by indications of higher confidence in the accuracy of one’s formed impression when it was formed more on the basis of negative traits than positive traits. People consider negative information to be more important to impression formation and, when it is available to them, they are subsequently more confident.

In an oft-cited paradox, a dishonest person can sometimes act honestly while still being considered predominantly dishonest; on the other hand, an honest person who sometimes does dishonest things will likely be reclassified as a dishonest person. It is expected that a dishonest person will occasionally be honest, but this honesty will not counteract the prior demonstrations of dishonesty. Honesty is considered more easily tarnished by acts of dishonesty. Honest behaviour itself would then not be diagnostic of an honest nature; only the absence of dishonesty would be.

The presumption that negative information has greater diagnostic accuracy is also evident in voting patterns. Voting behaviours have been shown to be more affected or motivated by negative information than positive: people tend to be more motivated to vote against a candidate because of negative information than they are to vote for a candidate because of positive information. As noted by researcher Jill Klein, “character weaknesses were more important than strengths in determining…the ultimate vote”.

This diagnostic preference for negative traits over positive traits is thought to be a consequence of behavioural expectations: there is a general expectation that, owing to social requirements and regulations, people will generally behave positively and exhibit positive traits. Contrastingly, negative behaviours/traits are more unexpected and, thus, more salient when they are exhibited. The relatively greater salience of negative events or information means they ultimately play a greater role in the judgement process.

Attribution of Intentions

Studies reported in a paper in the Journal of Experimental Psychology: General by Carey Morewedge (2009) found that people exhibit a negativity bias in attribution of external agency, such that they are more likely to attribute negative outcomes to the intentions of another person than similar neutral and positive outcomes. In laboratory experiments, Morewedge found that participants were more likely to believe that a partner had influenced the outcome of a gamble when the participants lost money than when they won money, even when the probabilities of winning and losing were held equal. This bias is not limited to adults: children also appear more likely to attribute negative events to intentional causes than similarly positive events.

Cognition

As addressed by negative differentiation, negative information seems to require greater information processing resources and activity than does positive information; people tend to think and reason more about negative events than positive events. Neurological differences also point to greater processing of negative information: participants exhibit greater event-related potentials when reading about, or viewing photographs of, people performing negative acts that were incongruent with their traits than when reading about incongruent positive acts. This additional processing leads to differences between positive and negative information in attention, learning, and memory.

Attention

A number of studies have suggested that negativity is essentially an attention magnet. For example, when tasked with forming an impression of presented target individuals, participants spent longer looking at negative photographs than they did looking at positive photographs. Similarly, participants registered more eye blinks when studying negative words than positive words (blinking rate has been positively linked to cognitive activity). Also, people were found to show greater orienting responses following negative than positive outcomes, including larger increases in pupil diameter, heart rate, and peripheral arterial tone.

Importantly, this preferential attendance to negative information is evident even when the affective nature of the stimuli is irrelevant to the task itself. The automatic vigilance hypothesis has been investigated using a modified Stroop task. Participants were presented with a series of positive and negative personality traits in several different colours; as each trait appeared on the screen, participants were to name the colour as quickly as possible. Even though the positive and negative elements of the words were immaterial to the colour-naming task, participants were slower to name the colour of negative traits than they were positive traits. This difference in response latencies indicates that greater attention was devoted to processing the trait itself when it was negative.

Aside from studies of eye blinks and colour naming, Baumeister and colleagues noted in their review of bad events versus good events that there is also easily accessible, real-world evidence for this attentional bias: bad news sells more papers and the bulk of successful novels are full of negative events and turmoil. When taken in conjunction with the laboratory-based experiments, there is strong support for the notion that negative information generally has a stronger pull on attention than does positive information.

Learning and Memory

Learning and memory are direct consequences of attentional processing: the more attention is directed or devoted toward something, the more likely it is that it will be later learned and remembered. Research concerning the effects of punishment and reward on learning suggests that punishment for incorrect responses is more effective in enhancing learning than are rewards for correct responses—learning occurs more quickly following bad events than good events.

Drs. Pratto and John addressed the effects of affective information on incidental memory as well as attention using their modified Stroop paradigm (see section concerning “Attention”). Not only were participants slower to name the colours of negative traits, they also exhibited better incidental memory for the presented negative traits than they did for the positive traits, regardless of the proportion of negative to positive traits in the stimuli set.

Intentional memory is also impacted by the stimuli’s negative or positive quality. When studying both positive and negative behaviours, participants tend to recall more negative behaviours during a later memory test than they do positive behaviours, even after controlling for serial position effects. There is also evidence that people exhibit better recognition memory and source memory for negative information.

When asked to recall a recent emotional event, people tend to report negative events more often than positive events, presumably because these negative memories are more salient than positive ones. People also tend to underestimate how frequently they experience positive affect, forgetting positively emotional experiences more often than negatively emotional ones.

Decision-Making

Studies of the negativity bias have also been related to research within the domain of decision-making, specifically risk aversion and loss aversion. When a person stands either to gain or to lose something depending on the outcome, potential costs have been argued to weigh more heavily in the decision than potential gains. This greater consideration of losses (i.e. negative outcomes) is in line with the principle of negative potency proposed by Rozin and Royzman. The relationship between negativity, loss aversion, and decision-making is most notably addressed by the prospect theory of Drs. Daniel Kahneman and Amos Tversky.
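Loss aversion within prospect theory is commonly formalised with a piecewise value function; the sketch below uses the parameter estimates Tversky and Kahneman reported in 1992, included here purely for illustration:

```latex
% Prospect-theory value function over gains and losses x
v(x) =
\begin{cases}
  x^{\alpha}               & \text{if } x \ge 0 \text{ (gains)} \\[4pt]
  -\lambda \, (-x)^{\beta} & \text{if } x < 0 \text{ (losses)}
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
```

Because \(\lambda > 1\), a loss registers roughly twice as strongly as an equal-sized gain, which is the formal counterpart of the claim that losses loom larger than gains.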

However, it is worth noting that Rozin and Royzman were never able to find loss aversion in decision-making. They wrote, “in particular, strict gain and loss of money does not reliably demonstrate loss aversion”. This is consistent with the findings of a recent review of more than 40 studies of loss aversion focusing on decision problems with equal-sized gains and losses. In that review, Yechiam and Hochman (2013) did find a positive effect of losses on performance, autonomic arousal, and response time in decision tasks, which they attributed to the effect of losses on attention; they labelled this effect “loss attention”.

Politics

Research points to a correlation between political affiliation and negativity bias: individuals who are more sensitive to negative stimuli tend towards conservative, right-leaning ideologies, which make threat reduction and social order their main focus. Individuals with a weaker negativity bias tend towards liberal political policies such as pluralism and are more accepting of diverse social groups, which could by proxy threaten the social structure and raise the risk of unrest.

Lifespan Development

Infancy

Although most of the research concerning the negativity bias has been conducted with adults (particularly undergraduate students), there have been a small number of infant studies also suggesting negativity biases.

Infants are thought to interpret ambiguous situations on the basis of how others around them react. When an adult (e.g. experimenter, mother) displays reactions of happiness, fear, or neutrality towards target toys, infants tend to approach the toy associated with the negative reaction significantly less than the neutral and positive toys. Furthermore, there was greater evidence of neural activity when the infants were shown pictures of the “negative” toy than when shown the “positive” and “neutral” toys. Although recent work with 3-month-olds suggests a negativity bias in social evaluations as well, other work suggests a potential positivity bias in attention to emotional expressions in infants younger than 7 months. A review of the literature conducted by Drs. Amrisha Vaish, Tobias Grossman, and Amanda Woodward suggests the negativity bias may emerge during the second half of an infant’s first year, although the authors also note that research on the negativity bias and affective information has been woefully neglected within the developmental literature.

Aging and Older Adults

Some research indicates that older adults may display, at least in certain situations, a positivity bias or positivity effect. Proposed by Dr. Laura Carstensen and colleagues, the socioemotional selectivity theory outlines a shift in goals and emotion regulation tendencies with advancing age, resulting in a preference for positive information over negative information. Aside from the evidence in favour of a positivity bias, though, there have still been many documented cases of older adults displaying a negativity bias.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Negativity_bias >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is the Pollyanna Principle?

Introduction

The Pollyanna principle (also called Pollyannaism or positivity bias) is the tendency for people to remember pleasant items more accurately than unpleasant ones.

Research indicates that, at the subconscious level, the mind tends to focus on the optimistic, while at the conscious level it tends to focus on the negative. This subconscious bias is similar to the Barnum effect.

What is the Barnum Effect?

The Barnum effect, also called the Forer effect or, less commonly, the Barnum-Forer effect, is a common psychological phenomenon whereby individuals give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically to them, yet which are in fact vague and general enough to apply to a wide range of people.

Development

The name derives from the 1913 novel Pollyanna by Eleanor H. Porter, describing a girl who plays the “glad game” – trying to find something to be glad about in every situation. The novel has been adapted to film several times, most famously in 1920 and 1960. An early use of the name “Pollyanna” in the psychological literature was in 1969, by Boucher and Osgood, who described a Pollyanna hypothesis: a universal human tendency to use positive words more frequently and diversely than negative words in communication. Empirical evidence for this tendency has been provided by computational analyses of large corpora of text.

The story of Pollyanna is about an orphaned little girl, who is sent to live with her Aunt Polly, who is known for being stiff, strict, and proper. When thrown into this environment, Pollyanna seeks to keep and spread her optimism to others. This beloved literary character’s story shares the message that despite how hard things may seem, a sunny disposition can turn anyone and anything around.

Psychological Research and Findings

The Pollyanna principle was described by Margaret Matlin and David Stang in 1978, using the archetype of Pollyanna more specifically as a psychological principle that portrays the positive bias people have when thinking of the past (i.e. nostalgia). According to the Pollyanna principle, the brain processes pleasing and agreeable information more precisely and exactly than unpleasant information; we tend to remember past experiences as rosier than they actually were. The researchers found that people expose themselves to positive stimuli and avoid negative stimuli; that they take longer to recognise what is unpleasant or threatening than what is pleasant and safe; and that they report encountering positive stimuli more frequently than they actually do. Matlin and Stang also determined that selective recall was more likely when recall was delayed: the longer the delay, the more selective the recall.

The Pollyanna principle has been observed on online social networks as well. For example, Twitter users preferentially share more, and are emotionally affected more frequently by, positive information.

However, the one exception to the Pollyanna principle tends to be individuals suffering from depression or anxiety, who are more likely to exhibit depressive realism or a negativity bias.

Positivity Bias

Positivity bias is the component of the Pollyanna principle that addresses why people may choose positivity over negative or realistic mindsets. In positive psychology, it is broken down into three ideas: positive illusions, self-deception, and optimism. Positivity bias increases with age: it is more prevalent among adults approaching older adulthood than among children or adolescents. Older adults tend to pay more attention to positive information, which could reflect a specific focus in cognitive processing. In studies compiled by Andrew Reed and Laura Carstensen, older adults (in comparison to younger adults) purposefully directed their attention away from negative material.

Criticisms

Although the Pollyanna principle can be helpful in some situations, some psychologists argue it may inhibit an individual from coping effectively with life’s obstacles. In some instances the Pollyanna principle is referred to as “Pollyanna syndrome”, which sceptics use to describe a person who is excessively positive and blind to the negative or the real. With regard to therapy or counselling, it is viewed as dangerous to both the therapist and the patient.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Pollyanna_principle >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.