An Overview of Inoculation Theory

Introduction

Inoculation theory is a social psychological/communication theory that explains how an attitude or belief can be made resistant to persuasion or influence, in analogy to how a body gains resistance to disease. The theory borrows medical inoculation as its explanatory analogy, applying it not to disease but to attitudes and other positions, such as opinions, values, and beliefs. It is applicable to public campaigns targeting misinformation and fake news, though its scope is not limited to them.

The theory was developed by social psychologist William J. McGuire in 1961 to explain how attitudes and beliefs change, and more specifically, how to keep existing attitudes and beliefs consistent in the face of attempts to change them. Inoculation theory functions to confer resistance to counter-attitudinal influences from such sources as the media, advertising, interpersonal communication, and peer pressure.

The theory posits that weak counterarguments generate resistance within the receiver, enabling them to maintain their beliefs in the face of a future, stronger challenge. Following exposure to weak counterarguments (e.g. counterarguments that have been paired with refutations), the receiver will then seek out supporting information to further strengthen their threatened position. The held attitude or belief becomes resistant to a stronger “attack,” hence the medical analogy of a vaccine.

Inoculating messages can raise and refute the same counterarguments in the “attack” (refutational same) or different counterarguments on the same or a related issue (refutational different). The effect of the inoculating message can be amplified by making the message of vested and immediate importance to the receiver (based on Jack Brehm’s psychological reactance theory). Post-inoculation talk can further spread inoculation effects through the receiver’s social network, and the act of talking to others can additionally strengthen resistance to attitude change.

Therapeutic inoculation is a recent extension in which an inoculation message is presented to those without the targeted belief or attitude in place. Applied in this way, an inoculation message can both change an existing position and make that new position more resistant to future attacks.

Brief History

William McGuire set out to conduct research on ways to encourage resistance to persuasion while others were designing experiments to do the opposite. McGuire was motivated to study inoculation and persuasion by the aftermath of the Korean War, and in particular by concern for soldiers subjected to coercive persuasion in captivity. Nine US prisoners of war, when given the opportunity, elected to remain with their captors. Many assumed they had been brainwashed, so McGuire and other social scientists turned to ways of conferring resistance to persuasion. This marked a change in extant persuasion research, which had been almost exclusively concerned with how to make messages more persuasive, not how to resist them.

The theory of inoculation was derived from previous research on one-sided and two-sided messages. One-sided messages are supportive messages that strengthen existing attitudes without mentioning counter-positions. They are frequently seen in political campaigns when a candidate denigrates his or her opponent through “mudslinging”. This method is effective in reinforcing existing attitudes of derision toward the opposition and support for the “mudslinging” candidate; if the audience supports the opposition, however, the attack message is ineffective. Two-sided messages present both counterarguments and refutations of those counterarguments. To gain compliance and source credibility, a two-sided message must present the sender’s position, then the opposition’s position, followed by a refutation of the opposition’s argument, and finally the sender’s position again.

McGuire led a series of experiments assessing inoculation’s efficacy and adding nuance to our understanding of how it works. Early studies limited testing of inoculation theory to cultural truisms, or beliefs accepted without consideration (e.g. people should brush their teeth daily) – non-controversial attitudes that were rarely, if ever, attacked by opposing forces. Few dispute that brushing one’s teeth is a good habit; external arguments against tooth brushing would therefore not change one’s opinion, but would instead strengthen support for brushing. Studies of inoculation theory currently target less universal attitudes, such as whether one should buy a Mac or a Windows-based PC, or whether one should support gay marriage.

Implementing inoculation theory in studies of contemporary social issues (from the mundane to the controversial), and the variety and resurgence of such studies, demonstrates the effectiveness and utility of the theory and provides support that it can be used to strengthen and/or predict attitudes. These later developments extended inoculation to more controversial and contested topics in politics, health, and marketing, and to contexts in which people hold differing pre-existing attitudes, such as climate change. The theory has also been applied in education to help prevent substance abuse.

About

Inoculation is a theory that explains how attitudes and beliefs can be made more resistant to future challenges. For an inoculation message to be successful, the recipient experiences threat (a recognition that a held attitude or belief is vulnerable to change) and is exposed to and/or engages in refutational processes (pre-emptive refutation, that is, defences against potential counterarguments). The arguments that are presented in an inoculation message must be strong enough to initiate motivation to maintain current attitudes and beliefs, but weak enough that the receiver will be able to refute the counterargument.

Inoculation theory has been studied and tested through decades of scholarship, including experimental laboratory research and field studies. Inoculation theory is used today as part of the suite of tools of those engaged in shaping or manipulating public opinion. These contexts include politics, health campaigns, marketing, education, and science communication, among others.

The inoculation process is analogous to the medical inoculation process from which it draws its name; the analogy served as the inaugural exemplar for how inoculation confers resistance. As McGuire (1961) initially explained, medical inoculation works by exposing the body to a weakened form of a virus – strong enough to trigger a response (that is, the production of antibodies), but not so strong as to overwhelm the body’s resistance. Attitudinal inoculation works the same way: expose the receiver to weakened counterarguments, triggering refutational processes (like counterarguing) that confer resistance to later, stronger persuasive “attack” messages. Like a metaphorical vaccination, the process leaves the receiver immune to messages that attempt to change their attitudes or beliefs: exposure to weak counterarguments builds immunity to stronger messages and strengthens the receiver’s original attitudes toward an issue.

Most inoculation theory research treats inoculation as a pre-emptive, preventive (prophylactic) messaging strategy, used before exposure to strong challenges. More recently, scholars have begun to test inoculation as a therapeutic treatment, administered to those who already hold the “wrong” target attitude or belief. In this application, the treatment messages both persuade and inoculate – much like a therapeutic vaccine that treats those already infected while also protecting them against future infection. More research is needed to better understand therapeutic inoculation treatments, especially field research that takes inoculation outside of the laboratory setting.

Another shift in inoculation research moves from a largely cognitive, intrapersonal (internal) process to a process that is both cognitive and affective, intrapersonal and interpersonal. For example, in contrast to explanations of inoculation that focused nearly entirely on cognitive processes (like internal counterarguing, or refuting persuasive attempts silently, in one’s own mind), more recent research has examined how inoculation messages motivate actual talk (conversation, dialogue) about the target issue. Scholars have confirmed that exposure to an inoculation message motivates more post-inoculation talk (PIT) about the issue. For example, Tweets containing native advertising disclosures – a type of inoculation message – were more likely to include negative commentary, a sign of resistance to influence consistent with PIT.

Pre-Bunking

It is much more difficult to eliminate the influence of misinformation once individuals have seen it, which is why debunking and fact-checking alone have often fallen short. For this reason, an approach known as pre-bunking was introduced. Pre-bunking (or prebunking) is an application of inoculation theory that aims to combat manipulation and misinformation spread around the web. In recent years, misleading information and its permeation have become an increasingly prevalent issue. While standard inoculation theory aims to combat persuasion generally, pre-bunking targets misinformation specifically by providing a harmless example of it; exposure builds future resistance to similar misinformation.

In 2021, Nanlan Zhang examined inoculation against harsh preconceived ideas about mental health, such as the association of mental illness with violence. The study consisted of two experiments with a total of 593 participants. In the first, subjects were shown misinformation regarding gun violence, followed by a refutation of that misinformation; these inoculative techniques were found to be slightly effective. In the second experiment, subjects were shown false messages of either high or low credibility. In the first experiment, the inoculation affected more than 50% of the participants; the second showed increased effectiveness, with subjects expressing distrust in both high- and low-credibility messages.

A common form of pre-bunking is the short video, meant to grab a viewer’s attention with a fake message and then inoculate the viewer by explaining the manipulation. In 2022, Jon Roozenbeek (with funding from Google) developed five pre-bunking videos to test the viability of short-form inoculation messages. A total of 29,116 subjects were shown multiple fabricated posts from various social media outlets and tasked with differentiating between benign posts and those containing manipulation. The videos were effective in improving viewers’ ability to identify manipulative tactics, producing about a 5% average increase in identification.

Explanation

Inoculation theory explains how attitudes, beliefs, or opinions (sometimes referred to generically as “a position”) can be made more resistant to future challenges. Receivers are made aware of the potential vulnerability of an existing position (e.g. attitude, belief). This establishes threat and initiates defences against future attacks. When a weak argument is presented in the inoculation message, processes of refutation or other means of protection are prepared for use against stronger arguments later. It is critical that the attack be strong enough to put the receiver on the defensive, but weak enough not to actually change pre-existing ideas. Ideally, this makes the receiver actively defensive and prompts them to generate arguments in favour of their pre-existing position. The more active receivers become in their defence, the more their own attitudes, beliefs, or opinions are strengthened.

Key Components

There are at least four basic key components to successful inoculation: threat, refutational pre-emption (pre-emptive refutation), delay, and involvement.

  1. Threat. Threat provides motivation to protect one’s attitudes or beliefs. Threat is a product of the presence of counterarguments in an inoculation message and/or an explicit forewarning of an impending challenge to an existing belief. The message receiver must perceive the message as threatening and recognise that there is a reason to fight to maintain and strengthen their opinion. If the receiver of an opposing message does not recognise that a threat is present, they will not feel the need to defend their position and therefore will neither change their attitude nor strengthen their opinion. Compton and Ivanov (2012) found that participants who had been forewarned of an attack – i.e. threat – but not given the appropriate tools to combat the attack were more resistant than the control group. In this case, the simple act of forewarning of an attack was enough to resist the counter-attitudinal persuasion.
  2. Refutational pre-emption. This component is the cognitive part of the process: the ability to activate one’s own arguments for future defence and to strengthen existing attitudes through counterarguing. Scholars have also explored whether other resistance processes might be at work, including affect. Refutational pre-emption provides specific content that receivers can employ to strengthen attitudes against subsequent change. It aids the inoculation process by giving the message receiver a chance to argue with the opposing message. It shows the receiver that their attitude is not the only attitude, or even necessarily the right one, creating a threat to their beliefs. This is beneficial because the receiver gets practice defending their original attitude, thereby strengthening it – important for fighting off future opposing messages and helping to ensure that such messages will not affect their original stance. Refutational pre-emption acts as the weak strain of the virus in the metaphor. Injecting the weakened virus – the opposing opinion – prompts the receiver to strengthen their position, enabling them to fight off the opposing threat. By the time the body processes the virus – the counterattack – the receiver will have learned how to eliminate the threat. In the case of messaging, if the threatening message is weak or unconvincing, a person can reject it and stick with their original stance. By rejecting threatening messages, a person builds the strength of their belief, and with every threatening message they successfully counter, their original opinions only grow stronger. Recent research has studied the presence and function of word-of-mouth communication, or post-inoculation talk, following exposure to inoculation messages.
  3. Delay. There has been much debate over whether a certain amount of time between inoculation and subsequent attacks on a person’s attitude is most effective in strengthening that attitude. McGuire (1961) suggested that delay was necessary to strengthen a person’s attitude, and since then many scholars have found evidence supporting that idea. Other scholars suggest that too long a delay lessens the strengthening effect of inoculation. Nevertheless, the effect of inoculation can still be significant weeks or even months after the initial treatment, showing that it produces reasonably long-lasting effects. Despite the limited research in this area, meta-analysis suggests that the effect weakens after too long a delay, specifically after 13 days.
  4. Involvement. Involvement, one of the most important concepts for widespread persuasion, can be defined as how important the attitude object is to the receiver (Pfau et al., 1997). Involvement is critical because it determines how effective the inoculation process will be, if at all. If an individual does not have a vested interest in the subject, they will not perceive a threat and, consequently, will not feel the need to defend and strengthen their original opinion, rendering the inoculation process ineffective.

Refutational Same and Different Messages

While many studies have compared different inoculation treatments, one specific comparison recurs throughout: refutational same versus refutational different messages. A refutational same message is an inoculation treatment that refutes the specific counterarguments that will appear in the subsequent persuasion message, while refutational different treatments present refutations that differ from those in the impending persuasive message. Pfau and his colleagues (1990) developed a study during the 1988 United States presidential election. The Republicans were claiming that the Democratic candidate was known to be lenient on crime. The researchers developed a refutational same message stating that while the Democratic candidate was in favour of tough sentences, tough sentences alone could not reduce crime. The refutational different message expanded on the candidate’s platform and his immediate goals if he were elected. The study showed comparable results between the two treatments. Importantly, as McGuire and others had found previously, inoculation was able to confer resistance to arguments that were not specifically mentioned in the inoculation message.

Psychological Reactance

Recent inoculation studies have incorporated Jack Brehm’s psychological reactance theory, a theory of freedom and control. The purpose is to enhance or boost resistance outcomes for the two key components of McGuire’s inoculation theory: threat and refutational pre-emption.

One such study is the large, complex multisite study of Miller et al. (2013). Its main focus was to determine how to improve the effectiveness of the inoculation process by evaluating and generating reactance to a threatened freedom through the manipulation of explicit and implicit language and its intensity. While most inoculation studies focus on avoiding reactance, or at least minimising its impact on behaviours, Miller et al. instead chose to manipulate reactance by designing messages to enhance resistance and counterarguing output. They showed that inoculation coupled with reactance-enhanced messages leads to “stronger resistance effects”. Most importantly, reactance-enhanced inoculations result in less attitude change – the ultimate measure of resistance.

The participants in the Miller et al. study were college students – emerging adults, who display high reactance to persuasive appeals. This population is in a transitional, uncertain stage of life and is more likely to defend behavioural freedoms when others appear to be attempting to control their behaviour. Populations in transitional stages rely on source credibility as a major component of cognitive processing and message acceptance. If a message is explicit and threatens their perceived freedoms, such populations will most likely derogate (criticise) the source and dismiss the message. Two important conditions for reactance to a threatened freedom in an emerging adult population are immediacy and vested interest. Miller et al. discuss how emerging adults need to believe that behavioural freedoms in which they have vestedness are being threatened, and that the threat exists in real time with almost immediate consequences. Threats that perceived freedoms will be eliminated or minimised increase motivation to restore that freedom, or possibly to engage in the threatened behaviour to reinforce autonomy and control over one’s attitudes and actions. In addition, such a threat does not necessarily need anger to motivate counter-argumentation, and simply attempting to provoke anger through manipulation is limited as a technique for gauging negative cognitions. Miller et al. also consider refutational pre-emption as motivation for producing initial counterarguments and provoking dissent when contemplating the attack message.

A unique feature of their study is examining low-controlling versus high-controlling language and its impact on affect and source credibility. They found reactance enhances key resistance outcomes, including: threat, anger at attack message source, negative cognitions, negative affect, anticipated threat to freedom, anticipated attack message source derogation, perceived threat to freedom, perceived attack message source derogation, and counterarguing.

Earlier, Miller et al. (2007) utilised Brehm’s psychological reactance theory to avoid or eliminate source derogation and message rejection. In that study, their focus was instead Brehm’s concept of restoration. Some of their ideas deal with low reactance, whether it can lead to more positive outcomes, and whether behavioural freedoms can be restored once threatened. As discussed in Miller et al. (2013), the study considers whether individuals know they have the behavioural freedom that is being threatened and whether they feel worthy of that freedom. This idea also ties into the emerging adult population of the study above and its finding that individuals in transitional stages will assert their threatened behavioural freedoms.

Miller et al. (2007) sought to determine how effective explicit and implicit language is at mitigating reactance. In particular, the study focused on restoration of freedom, gauging how concrete and abstract language informs an individual’s belief that he or she has a choice. Some participants were given a persuasive appeal related to health promotion, followed by a postscript message designed to remind them that they had a choice, as a method of restoring the participants’ freedom. Concrete language proved more effective at increasing message acceptance and source credibility. This study is relevant to inoculation research in that it lends credence to Miller et al. (2013), which transparently incorporates psychological reactance theory in conjunction with inoculation theory to improve the quality of future persuasive appeals.

Postinoculation Talk

Following Compton and Pfau’s (2009) research on postinoculation talk, Ivanov et al. (2012) explore how cognitive processing can lead to talk with others after receiving an inoculation message in which threat is present. The authors found that message processing leads to postinoculation talk, which can in turn lead to stronger resistance to attack messages. Further, postinoculation talk acts virally, spreading inoculation through talk with others on issues that involve negative cognitions and affect. Previous research assumed that such talk was subvocal (existing only intrapersonally), without considering the impact of vocal talk with other individuals. The authors deem vocal talk important to the inoculation process. Their study concluded that individuals who receive an inoculation message containing threat will talk to others about the message, and will talk more frequently than individuals who do not receive an inoculation message. Additionally, postinoculation talk bolsters attitudes and increases resistance to the attack message, as well as increasing the likelihood that talk will generate a potentially viral effect – spreading inoculation to others through the act of vocal talk.

Straw man Fallacy

Due to the nature of attitudinal inoculation as a form of psychological manipulation, the counterarguments used in the process do not necessarily need to be accurately representative of the opposing belief in order to trigger the inoculation effect. This is a form of straw man fallacy, and can be effectively used to reinforce beliefs with less legitimate support.

Real-World Applications

Most research has involved inoculation as applied to interpersonal communication (persuasion), marketing, health and political messaging. More recently, inoculation strategies are starting to show potential as a counter to science denialism and cyber security breaches.

Science Denialism

Science denialism has rapidly increased in recent years. A major factor is the rapid spread of misinformation and fake news via social media (such as Facebook), as well as the prominent placement of such misinformation in Google searches. Climate change denialism is a particular problem in that its global nature and lengthy timeframe are uniquely difficult for the individual mind to grasp, as the human brain has evolved to deal with short-term and immediate dangers. However, John Cook and colleagues have shown that inoculation theory shows promise in countering denialism. This involves a two-step process. First, list and deconstruct the 50 or so most common myths about climate change, identifying the reasoning errors and logical fallacies of each. Second, use parallel argumentation to explain the flaw in the argument by transplanting the same logic into a parallel situation, often an extreme or absurd one. Adding appropriate humour can be particularly effective.

Cyber Security

Treglia and Delia (2017) apply inoculation theory to cyber security (internet security, cybercrime). People are susceptible to electronic or physical tricks, scams, or misrepresentations that may lead them to deviate from security procedures and practices, opening the operator, organisation, or system to exploits, malware, theft of data, or disruption of systems and services. Inoculation in this area improves people’s resistance to such attacks. Psychological manipulation of people into performing actions or divulging confidential information via the internet and social media is one part of the broader construct of social engineering.

Political Campaigning

Compton and Ivanov (2013) offer a comprehensive review of political inoculation scholarship and outline new directions for future work.

In 1990, Pfau and his colleagues examined inoculation through the use of direct mail during the 1988 United States presidential campaign. The researchers were specifically interested in comparing inoculation and post hoc refutation. Post hoc refutation is another form of building resistance to arguments; however, instead of building resistance prior to future arguments, like inoculation, it attempts to restore original beliefs and attitudes after the counterarguments have been made. Results of the research reinforced prior conclusions that refutational same and different treatments both increase resistance to attacks. More importantly, results also indicated inoculation was superior to post hoc refutation when attempting to protect original beliefs and attitudes.

Other examples are studies showing it is possible to inoculate political supporters of a candidate in a campaign against the influence of an opponent’s attack adverts; and inoculate citizens of fledgling democracies against the spiral of silence (fear of isolation) which can thwart the expression of minority views.

Health

Much of the research conducted in health attempts to create campaigns that encourage people to stop unhealthy behaviours (e.g. smoking cessation or prevention of teen alcoholism). Compton, Jackson and Dimmock (2016) reviewed studies in which inoculation theory was applied to health-related messaging. Many inoculation studies aim to inoculate children and teenagers against smoking, drug use, or alcohol use; much of this research shows that targeting people at a young age can help them resist peer pressure in high school or college – pressure that can otherwise lead to smoking, underage drinking, and other harmful behaviours.

Godbold and Pfau (2000) used sixth graders from two different schools and applied inoculation theory as a defence against peer pressure to drink alcohol. They hypothesised that a normative message (a message tailored around current social norms) would be more effective than an informative message – a message built around giving individuals pieces of information, in this case about why drinking alcohol is bad. The second hypothesis was that subjects who received a threat two weeks later would be more resistant than those receiving an immediate attack. The results partially supported the first hypothesis: the normative message created higher resistance to the attack, but was not necessarily more effective overall. The second hypothesis was not supported; the time lapse did not create further resistance for teenagers against drinking. One major outcome of this study was the resistance created by using a normative message.

In another study, conducted by Duryea (1983), the results were far more supportive of the theory. The study also attempted to find the best message to use in educational training to help prevent teen drinking and driving. The teen subjects were given resources to combat attempts to persuade them to drink and drive or to get into a vehicle with a drunk driver. The subjects:

  1. watched a film;
  2. participated in question-and-answer sessions;
  3. took part in role-playing exercises; and
  4. viewed a slide show.

The results showed that a combination of the four methods of training was effective in combating persuasion to drink and drive or get into a vehicle with a drunk driver. The trained group was far more prepared to combat the persuasive arguments.

Additionally, Parker, Ivanov, and Compton (2012) found that inoculation messages can be an effective deterrent against pressures to engage in unprotected sex and binge drinking—even when only one of these issues is mentioned in the health message.

Compton, Jackson and Dimmock (2016) discuss important future research, such as preparing new mothers for overcoming their health concerns (e.g. about breastfeeding, sleep deprivation and post-partum depression).

Inoculation theory as applied to smoking prevention has been heavily studied. These studies have mainly focused on preventing youth smoking – inoculation seems to be most effective in young children. For example, Pfau et al. (1992) examined the role of inoculation in attempting to prevent adolescents from smoking. One of the main goals of the study was to examine the longevity and persistence of inoculation. Elementary school students watched a video warning them of future pressures to smoke. In the first year, resistance was highest among those with low self-esteem. At the end of the second year, students in the group showed more attitudinal resistance to smoking than they had previously (Pfau & Van Bockern 1994). Importantly, the study and its follow-up demonstrate the long-lasting effects of inoculation treatments.

Grover (2011) researched the effectiveness of the “truth” anti-smoking campaign on smokers and non-smokers. The truth adverts aimed to show young people that smoking was unhealthy and to expose the manipulative tactics of tobacco companies. Grover showed that inoculation works differently for smokers and non-smokers (i.e. potential smokers). For both groups, the truth adverts increased anti-smoking and anti-tobacco-industry attitudes, but the effect was greater for smokers. The strength of this attitude change is partly mediated by aversion to branded tobacco industry products. Counter-intuitively, however, exposure to pro-smoking adverts also increased aversion to branded tobacco industry products (at least in this sample). In general, Grover demonstrated that the initial attitude plays a major role in the ability to inoculate an individual.

Future health-related studies could be extremely beneficial to communities. Promising research areas include present-day issues such as inoculation-based strategies for addiction intervention to help sober individuals avoid relapse, as well as promoting healthy eating habits, exercise, breastfeeding, and positive attitudes towards mammograms. One underdeveloped area is mental health awareness. Given the large number of young adults and teenagers dying by suicide as a result of bullying, inoculation messages could be effective here as well.

Dimmock et al. (2016) studied how inoculation messages can be used to increase participants’ reported enjoyment of, and interest in, physical exercise. In this study, participants were exposed to inoculating messages and then given an intentionally boring exercise routine. The messages reinforced the individuals’ positive attitudes towards the exercise and, as a result, increased their likelihood of continuing to exercise in the future.

Vaccination Beliefs

Inoculation theory has been used to combat misinformation about vaccine-related beliefs. Vaccinations have helped stop the spread of many infectious diseases, but their effectiveness has become a controversial topic in Western nations. Studies show that misinformation about the science has played a major role in vaccine hesitancy. Common misconceptions include the beliefs that the influenza vaccine causes the flu and that there is a link between the MMR vaccine and autism. Despite the many scientific studies debunking these claims, some people still cling to these beliefs.

In 2016, a study was conducted to see whether inoculation theory could combat vaccine misinformation. The participants were a group of 110 young women who had not received any doses of the human papillomavirus (HPV) vaccine. The study examined the effect of attack messages that questioned the importance and safety of this specific vaccine and of vaccines in general. After the arguments against the vaccines were presented and a brief interval had passed, one group was exposed to inoculation messages in favour of the vaccine. The participants were then asked to complete post-test measurements. The results showed that those who received the inoculation messages had more positive attitudes and behaviours towards the HPV vaccine and other vaccines.

In 2017, a study was conducted to test inoculation theory’s role in vaccine-related decisions. A group of 89 British parents was selected and exposed to one of five potential sets of arguments about a fictitious vaccine. Some groups were exposed to purely conspiratorial or purely anti-conspiratorial arguments, while the other groups were exposed to both types of argument in differing orders. After being exposed to these arguments, the parents were told about a disease that would cause vomiting and a severe fever, and were asked whether they would get their children the vaccine for this fictitious disease. The results displayed inoculation theory in action: those who were exposed to anti-conspiracy arguments were more likely to say they would get the vaccine.

Marketing

It took some time for inoculation theory to be applied to marketing because of several possible limitations. Lessne and Didow (1987) reviewed publications on the application of inoculation to marketing campaigns and their limitations. They note that, at the time, the closest to a true marketing context was Hunt’s 1973 study on the Chevron campaign. The Federal Trade Commission (FTC) stated that Chevron had deceived consumers about the effectiveness of their gas additive F-310, and planned to conduct a corrective advertising campaign to disclose this information. In response, Chevron ran a print campaign to combat the anticipated FTC campaign. The double-page advertisement read, “If every motorist used Chevron with F-310 for 2000 miles, air pollutants would be reduced by thousands of tons in a single day. The FTC doesn’t think that’s significant.” Hunt used this real-life message as an inoculation treatment in his research, and the corrective campaign by the FTC as the attack on the positive attitude toward Chevron. The results indicated that a supportive treatment offered less resistance than a refutational treatment. Another finding was that when an inoculative treatment is received but no attack follows, there is a drop in attitude. A major limitation of the study was that Hunt did not allow time to elapse between the treatment and the attack, which was a central element of McGuire’s original theory.

Inoculation theory can be used with an audience who already has an opinion on a brand, to convince existing customers to continue their patronage of a company, or to protect commercial brands against the influence of comparative adverts from a competitor. An example is Apple’s “Get a Mac” campaign. The campaign follows inoculation theory in targeting those who already preferred Mac computers. The series of ads shared a common theme: they directly compared Macs and PCs. Inoculation theory applies here because the commercials were likely aimed at existing Apple users, who already preferred Mac computers and were unlikely to change their minds. The comparison creates refutational pre-emption, acknowledging that Macs may not be the only viable option on the market. The TV ads concede a few of the advantages that PCs have over Macs, but by the end of every commercial they reiterate that Macs are ultimately the superior consumer product. This reassures viewers that their opinion is still right and that Macs are in fact better than PCs. The inoculation in these ads keeps Mac users coming back for Apple products, and may even bring them back sooner for the newer products that Apple releases, which matters as technology continually changes and new products are constantly pushed onto the shelves.

Inoculation theory research in advertising and marketing has mainly focused on promoting healthy lifestyles with the help of a product or in service of a specific company’s goal. Not long after McGuire published his inoculation theory, however, Szybillo and Heslin (1973) applied the concepts McGuire had used in the health context to advertising and marketing campaigns. They sought to provide answers for advertisers marketing a controversial product or topic: if an advertiser knew the product or campaign would provoke an attack, what would be the best advertising strategy? Should they refute the arguments or reaffirm their claims? They chose a then-controversial topic, “Inflatable air bags should be installed as passive safety devices in all new cars,” and tested four advertising strategies:

  1. Defence;
  2. Refutational-same;
  3. Refutational-different; and
  4. Supportive.

The results confirmed that a reaffirmation or refutation approach is better than not addressing the attack. They also confirmed that refuting the counterargument is more effective than a supportive defence (though the refutational-different effect was not much greater than for supportive defence). Szybillo and Heslin also manipulated the time of the counterargument attack, and the credibility of the source, but neither was significant.

In 2006, a jury awarded Martin Dunson and Lisa Jones, the parents of one-year-old Marquis Matthew Dunson, $5 million for the death of their son. Dunson and Jones had sued Johnson & Johnson, the makers of Infants’ Tylenol, claiming that there were not enough warnings about the dosage of acetaminophen. What resulted was a Johnson & Johnson campaign that encouraged parents to practise proper dosage procedures. In a review of the campaign, Veil and Kent (2008) break down its message using the basic concepts of inoculation theory. They theorise that Johnson & Johnson used inoculation to alter the negative perception of their product. The campaign began running prior to the actual verdict, so the timing seemed suspicious. A primary contention of Veil and Kent was that Johnson & Johnson’s intention was not to convey consumer safety guidelines, but to change how consumers might respond to further lawsuits over overdoses. The inoculation strategy is evident in the campaign script: “Some people think if you have a really bad headache, you should take extra medicine.” The term “some people” refers to the party suing the company. The commercial also had the Vice President of Sales for Tylenol deliver the message, as someone who might be considered a credible source.

In 1995, Burgoon and colleagues published empirical findings on issue/advocacy advertising campaigns. Most, if not all, such campaigns use inoculation to create their messages. Burgoon and colleagues posited that inoculation strategies should be used in these campaigns to enhance the credibility of the corporation and to help maintain existing consumer attitudes (rather than to change them). Based on their analysis of previous research, they concluded that issue/advocacy advertising is most effective for reinforcing support and avoiding potential slippage in the attitudes of supporters. Using Mobil Oil’s issue/advocacy campaign message, they found that issue/advocacy adverts did work to inoculate against counter-attitudinal attacks, and that they also worked to protect source credibility. The results further indicated that political views play a role in the effectiveness of such campaigns: conservatives were easier to inoculate than moderates or liberals. They also concluded that females were more likely to be inoculated by these types of campaigns. An additional observation was that the type of content used contributed to a campaign’s success: the further the advertisement was from “direct self-benefit”, the greater the inoculation effect on the audience.

Compton and Pfau (2004) extended inoculation theory into the realm of credit card marketing targeting college students. They asked whether inoculation could help protect college students against dangerous levels of credit card debt and help convince them to increase their efforts to pay down any existing debt. Inoculation seemed to reinforce students’ desired attitudes towards debt, as well as some of their behavioural intentions. Further, they found some evidence that those who received the inoculation treatment were more likely to talk to friends and family about issues of credit card debt.

Deception

Inoculation theory plays a role in deception detection research. This research has yielded little reliable support for nonverbal cues, and instead indicates that most liars are revealed through inconsistencies in verbal content. These inconsistencies can be exposed through a form of inoculation that presents the subject with a distorted version of the suspected action, in order to observe inconsistencies in their responses.

Journalism

Breen and Matusitz (2009) suggest a method through which inoculation theory can be used to prevent pack journalism, a practice in which a large number of journalists and news outlets swarm a person, place, thing, or idea in a way that is distressing and harmful; the practice also lends itself to plagiarism. Through this framework, derived from Pfau and Dillard (2000), journalists are inoculated against the news practices of other journalists and directed instead towards uniqueness and originality, thus avoiding pack journalism.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Inoculation_theory >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of the Einstellung Effect

Introduction

Einstellung is the development of a mechanised state of mind. Often called a problem solving set, Einstellung refers to a person’s predisposition to solve a given problem in a specific manner even though better or more appropriate methods of solving the problem exist.

The Einstellung effect is the negative effect of previous experience when solving new problems. The Einstellung effect has been tested experimentally in many different contexts.

The example which led to the coining of the term by Abraham S. Luchins and Edith Hirsch Luchins is the Luchins water jar experiment, in which subjects were asked to solve a series of water jar problems. After solving many problems which had the same solution, subjects applied the same solution to later problems even though a simpler solution existed (Luchins, 1942). Other experiments on the Einstellung effect can be found in The Effect of Einstellung on Compositional Processes and in Rigidity of Behaviour: A Variational Approach to the Effect of Einstellung.

Background

Einstellung literally means “setting” or “installation” as well as a person’s “attitude” in German. Related to Einstellung is what is referred to as an Aufgabe (“task” in German). The Aufgabe is the situation which could potentially invoke the Einstellung effect. It is a task which creates a tendency to execute a previously applicable behavior. In the Luchins and Luchins experiment a water jar problem served as the Aufgabe, or task.

The Einstellung effect occurs when a person is presented with a problem or situation similar to problems they have worked through in the past. If the solution (or appropriate behaviour) has been the same in each past experience, the person will likely give that same response, without giving the problem much thought, even though a more appropriate response might be available. Essentially, the Einstellung effect is one of the human brain’s ways of finding an appropriate solution or behaviour as efficiently as possible. The catch is that although finding the solution is efficient, the solution itself may not be. (This is consistent with the remark often attributed to Blaise Pascal: “I would have written a shorter letter, but I didn’t have the time.”)

Another phenomenon similar to Einstellung is functional fixedness (Duncker 1945). Functional fixedness is an impaired ability to discover a new use for an object, owing to the subject’s previous use of the object in a functionally dissimilar context. It can also be deemed a cognitive bias that limits a person to using an object only in the way it is traditionally used. Duncker also pointed out that the phenomenon occurs not only with physical objects, but also with mental objects or concepts (a point which lends itself nicely to the phenomenon of Einstellung effect).

Luchins and Luchins Water Jar Experiment

The water jar test, first described in Abraham S. Luchins’ classic 1942 experiment, is a commonly cited example of an Einstellung situation. The experiment’s participants were given the following problem: there are three water jars, each with the capacity to hold a different, fixed amount of water; the subject must work out how to measure a certain amount of water using these jars. It was found that subjects used methods they had used previously even when quicker and more efficient methods were available. The experiment sheds light on how mental sets can hinder the solving of novel problems.

An example water jar puzzle.

In the Luchins’ experiment, subjects were divided into two groups. The experimental group was given five practice problems, followed by four critical test problems; the control group was not given the five practice problems. All of the practice problems and some of the critical problems had only one solution, B − A − 2C. For example, given jar A capable of holding 21 units of water, jar B capable of holding 127, and jar C capable of holding 3, an amount of 100 units is measured by filling jar B and pouring out enough water to fill A once and C twice.

One of the critical problems was called the extinction problem. The extinction problem was a problem that could not be solved using the previous solution B − A − 2C. In order to answer the extinction problem correctly, one had to solve the problem directly and generate a novel solution. An incorrect solution to the extinction problem indicated the presence of the Einstellung effect. The problems after the extinction problem again had two possible solutions. These post-extinction problems helped determine the recovery of the subjects from the Einstellung effect.

The critical problems could be solved using this solution (B − A − 2C) or a shorter solution (A − C or A + C). For example, subjects were instructed to get 18 units of water from jars with capacities 15, 39, and 3. Despite the presence of a simpler solution (A + C), subjects in the experimental group tended to give the lengthier solution in lieu of the shorter one. Instead of simply filling up Jars A and C, most subjects from the experimental group preferred the previous method of B − A − 2C, whereas virtually all of the control group used the simpler solution. When Luchins and Luchins gave experimental group subjects the warning, “Don’t be blind”, over half of them used the simplest solution to the remaining problems.
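The two solution routes are simple arithmetic, and can be sketched as a quick check in Python (a minimal illustration; the jar capacities and target amounts are the example values quoted above, and the function names are ours):

```python
def einstellung_solution(a, b, c):
    """The practised route: fill jar B, then pour off A once and C twice."""
    return b - a - 2 * c

def direct_solution(a, c):
    """The shorter route available on some critical problems: fill A and C."""
    return a + c

# Practice problem: jars of 21, 127 and 3 units; target 100.
assert einstellung_solution(21, 127, 3) == 100

# Critical problem: jars of 15, 39 and 3 units; target 18.
# Both routes reach the target, but the direct one is simpler.
assert einstellung_solution(15, 39, 3) == 18
assert direct_solution(15, 3) == 18
```

The point of the critical problems is exactly this overlap: subjects in the experimental group kept applying the practised three-jar route even where the two-jar route sufficed.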

Explanations and Interpretations

The Einstellung effect can be supported by theories of inductive reasoning. In a nutshell, inductive reasoning is the act of inferring a rule from a finite number of instances. Most experiments on human inductive reasoning involve showing subjects a card with one or more objects (or letters, etc.) on it. The objects can vary in number, shape, size, colour, and so on, and the subject’s job is to answer, initially by guessing, “yes” or “no” as to whether the card is a positive instance of the rule, which must be inferred by the subject. Over time, subjects do tend to learn the rule, but the question is how. Kendler and Kendler (1962) proposed that older children and adults tend to behave in line with noncontinuity theory; that is, they tend to pick a reasonable rule and assume it to be true until it proves false.

Regarding the Einstellung effect, one can view noncontinuity theory as a way of explaining the tendency to maintain a specific behaviour until it fails to work. In the water-jar problem, subjects generated a specific rule because it seemed to work in all situations; when they were given problems for which the same solution worked, but a better solution was possible, they still gave their ‘tried and true’ response. Where theories of inductive reasoning tend to diverge from the idea of the Einstellung effect is when analysing the fact that, even after an instance where the Einstellung rule failed to work, many subjects reverted to the old solution when later presented with a problem for which it did work (again, this problem also had a better solution). One way to explain this observation is that in actuality subjects know (consciously) that the same solution might not always work, yet since they were presented with so many instances where it did work, they still tend to test that solution before any other (and so if it works, it will be the first solution found).

Neurologically, the idea of synaptic plasticity, which is an important neurochemical explanation of memory, can help to understand the Einstellung effect. Specifically, Hebbian theory (which in many regards is the neuroscience equivalent of original associationist theories) is one explanation of synaptic plasticity (Hebb, 1949). It states that when two associated neurons frequently fire together – while infrequently firing apart from one another – the strength of their association tends to become stronger (making future stimulation of one neuron even more likely to stimulate the other).
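Hebb’s rule is often paraphrased as “cells that fire together wire together”. A minimal numerical sketch of the idea follows (the learning rate, activity values, and starting weight are illustrative assumptions, not values from Hebb):

```python
def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen a connection in proportion to coincident pre/post activity."""
    return weight + rate * pre * post

w = 0.0
# Repeated co-activation (both neurons firing together) strengthens the synapse...
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)

# ...whereas activity in only one neuron leaves the weight unchanged.
w_unchanged = hebbian_update(w, pre=1.0, post=0.0)
```

After the ten co-activations the weight has grown, so future activity in the first neuron is more likely to drive the second; this strengthening with repetition is the mechanism the Einstellung account below appeals to.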

Since the frontal lobe is most often associated with planning and problem solving, if there is a neurological pathway fundamental to understanding the Einstellung effect, the majority of it most likely falls within the frontal lobe. Essentially, a Hebbian explanation of Einstellung could be as follows: stimuli are presented in such a way that the subject recognises themselves as being in a situation they have been in before. That is, the subject sees, hears, or smells an environment akin to one they have encountered before. The subject must then process the presented stimuli so as to exhibit a behaviour appropriate for the situation (be it run, throw, eat, etc.).

Because neural growth is, at least in part, due to the associations between two events/ideas, it follows that the more a given stimulus is followed by a specific response, the more likely in the future that stimulus will invoke the same response. Regarding the Luchins’ experiment, the stimulus presented was a water-jar problem (or to be more technical, the stimulus was a piece of paper which had words and numbers on it which, when interpreted correctly, portray a water-jar problem) and the invoked response was B − A − 2C. While it is a bit of a stretch to assume that there is a direct connection between a water-jar problem and B − A − 2C within the brain, it is not unreasonable to assume that the specific neural connections which are active during a water-jar problem-state and those that are active when one thinks “take the second term, subtract the first term, then subtract two of the third term” tend to increase in the amount of overlap as more and more instances where B − A − 2C works are presented.

Other Einstellung Research

Psychological Stress

The following experiments were designed to gauge the effect of different stressful situations on the Einstellung effect. Overall, these experiments show that stressful situations increase the prevalence of the Einstellung effect.

The Speed Test

Luchins gave an elementary-school class a set of water jar problems. To create a stressful situation, experimenters told the students that the test would be timed, that its speed and accuracy would be reviewed by their principal and teachers, and that it would affect their grades. To further agitate the students during the test, experimenters were instructed to comment on how much slower the children were than children in lower grades. The experimenters observed anxious, stressed, and sometimes tearful faces during the experiment. (Note that while such methods were common in the 1950s, they would violate ethical standards in research today.)

The results of the experiment indicated that the stressful speed test situation increased rigidity. Luchins found that only three of the ninety-eight students tested were able to solve the extinction problem, and only two students used the direct method for the critical problems. The same experiment conducted under non-stress conditions showed 70% rigidity during the test problems and 58% failure of the extinction problem, while the anxiety-inducing situation showed 98% and 97% respectively.

The speed test was performed with college students as well, which yielded similar results. Even when college students were told ahead of time to use the direct method in order to avoid mistakes made by children, the college students continued to exhibit rigidity under time pressure. The results of these studies showed that the emphasis on speed increased the Einstellung effect on the water jar problems.

Maze Tracing

Luchins also instructed subjects to trace a path through a maze without crossing any of the maze’s lines. The maze was either traced normally or traced using its mirror reflection. If the subject drew over the lines of the figure, they had to start again from the beginning, which was disadvantageous because the subject was told that their score depended on the time and smoothness of the solution. The mirror-tracing condition was the stressful situation, and normal tracing was the non-stressful control. Experimenters observed that the mirror-tracing task caused more drawing outside the boundaries, increased overt signs of stress and anxiety, and required more time to complete accurately. The mirror-tracing condition produced Einstellung solutions on 89% of the first two critical problems, compared with 71% for normal tracing. In addition, 55% of the subjects failed with the mirror while only 18% failed without it.

Hidden Word Test for Stutterers

In 1951, Solomon gave both stutterers and fluent speakers a hidden word test, an arithmetical test, and a mirror maze test. Experimenters called the hidden word test a “speech test” to increase the stutterers’ anxiety. There were no marked differences between the stutterers and the fluent speakers on the arithmetical and mirror maze tests. However, the results revealed a significant difference between the two groups’ performance on the “speech test”: on the first two critical problems, 58 percent of the stutterers gave Einstellung solutions, whereas only 4 percent of the fluent speakers showed Einstellung effects.

Age

The original Luchins and Luchins experiment tested nine-, ten-, eleven-, and twelve-year-olds for the Einstellung effect. The older groups showed more Einstellung effects than the younger groups in general. However, this initial study did not control for differences in educational level and intelligence.

To remedy this problem, Ross (1952) conducted a study of middle-aged adults (mean age 37.3 years) and older adults (mean age 60.8 years). The adults were grouped according to IQ, years of schooling, and occupation. Ross administered five Einstellung tests, including the arithmetical (water jar) test, the maze test, the hidden word test, and two other tests. On every test, the middle-aged group performed better than the older group. For example, 65% of the older adults failed the extinction task of the arithmetical test, whereas only 29% of the middle-aged adults failed it.

Luchins devised another experiment to determine the difference between Einstellung effects in children and adults. In this study, 140 fifth-graders (mean age 10.5 years) were compared with 79 college students (mean age 21 years) and 21 adults (mean age 43 years). Einstellung effects prior to the extinction task increased with age; the observed Einstellung effects for the extinction task were 56, 68, and 69 percent for young adults, children, and older adults respectively. This implies a curvilinear relationship between age and recovery from the Einstellung effect. A similar experiment conducted by Heglin in 1955 also found this relationship when the three age groups were equated for IQ.

Therefore, the initial manifestation of the Einstellung effect on the arithmetic test increases with age. However, the recovery from the Einstellung effect is greatest for young adults (average age 21 years) and decreases as the subject moves away from this age.

Gender

In Luchins and Luchins’ original experiment with 483 children, boys demonstrated less of an Einstellung effect than girls. The difference was significant only for the group that was instructed to write “Don’t be blind” on their papers after the sixth problem (the DBB group). “Don’t be blind” was meant as a reminder to pay attention and guard against rigidity from the sixth problem onwards. However, the message was interpreted in many different ways, including as just some more words to remember. These alternative interpretations occurred more frequently among girls and increased with IQ score within the female group. This difference in the interpretation of “Don’t be blind” may account for the fact that the male DBB group produced more direct solutions than their female counterparts.

To determine sex differences in adults, Luchins gave college students the maze Einstellung test. The female group showed slightly more (although not statistically significant) Einstellung effects than the male group. Other studies have provided conflicting data about the sex differences in the Einstellung effect.

Intelligence

Luchins and Luchins looked at the relationship between intelligence quotient (IQ) and Einstellung effects for the children in their original experiment. They found a statistically insignificant negative relationship between the Einstellung effect and intelligence. In general, large Einstellung effects were observed in all subject groups regardless of IQ score. The IQ ranges for children who did and did not demonstrate Einstellung effects spanned 51 to 160 and 75 to 155 respectively. These largely overlapping ranges are consistent with, at most, a slight negative correlation between intelligence and Einstellung effects.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Einstellung_effect >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Antiprocess

Introduction

Antiprocess is the pre-emptive recognition and marginalisation of undesired information by the interplay of mental defence mechanisms: the subconscious compromises information that would cause cognitive dissonance. It is often used to describe a difficulty encountered when people with sharply contrasting viewpoints are attempting (and failing) to discuss a topic.

In other words, when one person is debating another, there may be a baffling disconnect despite each side’s apparent understanding of the other’s argument. Despite an understanding apparently sufficient to formulate counter-arguments, the debater’s mind does not allow them to be swayed by that knowledge.

There are many instances on the Internet where antiprocess can be observed, but the prime location to see it is in Usenet discussion groups, where discussions tend to be highly polarised. In such debates, both sides appear to have a highly sophisticated understanding of the other position, yet neither side is swayed. As a result, the debate can continue for years without any progress being made.

Dynamics

Antiprocess occurs because:

  • The mind is capable of multitasking;
  • The mind has the innate capability to evaluate and select information at a preconscious level so that we are not overwhelmed with the processing requirements;
  • It is not feasible to maintain two contradictory beliefs at the same time;
  • It is not possible for people to be aware of every factor leading up to decisions they make;
  • People learn argumentatively effective but logically invalid defensive strategies (such as rhetorical fallacies);
  • People tend to favour strategies of thinking that have served them well in the past; and
  • The truth is just too unpalatable to the mind to accept.

The ramification of these factors is that people can be sincerely engaged in a debate even though appearances suggest they are not. This can lead to acrimony if neither party is aware of antiprocess and adjusts their debating style accordingly.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Antiprocess >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Cognitive Inertia

Introduction

Cognitive inertia is the tendency for a particular orientation in how an individual thinks about an issue, belief, or strategy to resist change. Clinical and neuroscientific literature often defines it as a lack of motivation to generate distinct cognitive processes needed to attend to a problem or issue. The physics term inertia emphasizes the rigidity and resistance to change in the method of cognitive processing that has been used for a significant amount of time. Commonly confused with belief perseverance, cognitive inertia is the perseverance of how one interprets information, not the perseverance of the belief itself.

Cognitive inertia has been causally implicated in disregarding impending threats to one’s health or environment, in enduring political values, and in deficits in task switching. Interest in the phenomenon was taken up primarily by economic and industrial psychologists to explain resistance to change in brand loyalty, group brainstorming, and business strategies. In the clinical setting, cognitive inertia has been used as a diagnostic tool for neurodegenerative diseases, depression, and anxiety. Critics have stated that the term oversimplifies resistant thought processes and suggest a more integrative approach that involves motivation, emotion, and developmental factors.

Brief History and Methods

Early History

The idea of cognitive inertia has its roots in philosophical epistemology. Early allusions to overcoming cognitive inertia can be found in the Socratic dialogues written by Plato. Socrates builds his argument by using the detractor’s own beliefs as the premises of his conclusions. In doing so, Socrates exposes the contradiction in the detractor’s thinking, inducing the detractor either to change their mind or to face the reality that their thought processes are contradictory. Ways to combat persistence of cognitive style are also seen in Aristotle’s syllogistic method, which employs the logical consistency of premises to convince an individual of a conclusion’s validity.

At the beginning of the twentieth century, two of the earliest experimental psychologists, Müller and Pilzecker, defined perseveration of thought to be “the tendency of ideas, after once having entered consciousness, to rise freely again in consciousness”. Müller described perseveration by illustrating his own inability to inhibit old cognitive strategies with a syllable-switching task, while his wife easily switched from one strategy to the next. One of the earliest personality researchers, W. Lankes, more broadly defined perseveration as “being confined to the cognitive side” and possibly “counteracted by strong will”. These early ideas of perseveration were the precursor to how the term cognitive inertia would be used to study certain symptoms in patients with neurodegenerative disorders, rumination and depression.

Cognitive Psychology

Originally proposed by William J. McGuire in 1960, the theory of cognitive inertia was built upon emergent theories in social and cognitive psychology that centred around cognitive consistency, including Fritz Heider’s balance theory and Leon Festinger’s cognitive dissonance. McGuire used the term cognitive inertia to account for an initial resistance to changing how an idea was processed after new information that conflicted with the idea had been acquired.

In McGuire’s initial study involving cognitive inertia, participants rated how probable they believed various topics to be. A week later, they returned to read messages related to the topics they had rated. The messages were presented as factual and were targeted to change the participants’ beliefs about how probable the topics were. Immediately after reading the messages, and again one week later, the participants were reassessed on how probable they believed the topics to be. McGuire expected that, discomforted by the inconsistency between the information in the messages and their initial ratings, participants would be motivated to shift their probability ratings to be more consistent with the factual messages. However, the participants’ opinions did not immediately shift toward the information presented in the messages. Instead, a shift towards consistency between their ratings and the messages grew stronger as time passed, an effect often referred to as “seepage” of information. The initial lack of change was reasoned to be due to persistence in the individual’s existing thought processes, which inhibited their ability to re-evaluate their initial opinion properly, or as McGuire called it, cognitive inertia.

Probabilistic Model

Although cognitive inertia was related to many of the consistency theories at the time of its conception, McGuire used a unique method of probability theory and logic to support his hypotheses on change and persistence in cognition. Utilising a syllogistic framework, McGuire proposed that if three issues (a, b and c) were so interrelated that issue c followed logically from issues a and b, then an individual whose opinion completely supported a and b should also support c as a logical conclusion. Furthermore, McGuire proposed that if an individual’s belief in the probability (p) of a supporting issue (a or b) was changed, then not only would the explicitly stated issue (c) change, but a related implicit issue (d) could change as well.

McGuire used this framework to show that the effect of a persuasive message on a related, but unmentioned, topic (d) took time to sink in. The assumption was that topic d was predicated on issues a and b, just as issue c was, so if the individual agreed with issue c then they should also agree with issue d. However, in McGuire’s initial study, immediate measurement on issue d, after agreement on issues a, b and c, showed only half the shift that logical consistency would require. Follow-up a week later showed that opinion on issue d had shifted enough to be logically consistent with issues a, b, and c, which supported not only the theory of cognitive consistency but also the existence of the initial hurdle of cognitive inertia.

The model was based on probability to account for the idea that individuals do not necessarily assume every issue is certain to happen; instead, each issue has a likelihood of occurring, and the individual’s opinion on that likelihood rests on the likelihoods of other interrelated issues.
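The probabilistic reading above can be sketched in a few lines of code. This is an illustrative simplification, not McGuire’s published formula: it assumes the premises a and b are independent, so that a belief in a conclusion c implied by a and b jointly is only consistent if p(c) is at least p(a) × p(b). The function names and the “gap” measure are hypothetical illustrations.

```python
# Hedged sketch of the probabilistic framework described above.
# Assumption (not McGuire's exact model): a and b are independent premises,
# and conclusion c is logically implied by (a AND b).

def min_consistent_pc(p_a: float, p_b: float) -> float:
    """Lower bound on p(c) implied by beliefs in premises a and b,
    assuming independence of a and b (an illustrative simplification)."""
    return p_a * p_b

def inertia_gap(p_a: float, p_b: float, reported_pc: float) -> float:
    """Positive gap = the reported belief in c falls short of logical
    consistency with the premises -- the shortfall McGuire observed
    immediately after persuasion, which closed over the following week."""
    return max(0.0, min_consistent_pc(p_a, p_b) - reported_pc)

# Example: premises rated 0.9 and 0.8, but the conclusion rated only 0.5;
# a gap of about 0.22 remains to "seep" in over time.
print(inertia_gap(0.9, 0.8, 0.5))
```

On this reading, the week-later follow-up in McGuire’s study corresponds to the gap shrinking toward zero as the ratings drift into logical consistency.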

Examples

Public Health

Historical

Group (cognitive) inertia, how a subset of individuals view and process an issue, can have detrimental effects on how emergent and existing issues are handled. In an effort to explain the almost lackadaisical attitude of a large majority of US citizens toward the emergence of the Spanish flu in 1918, historian Tom Dicke has proposed that cognitive inertia explains why many individuals did not take the flu seriously. At the time, most US citizens were familiar with the seasonal flu. They viewed it as an irritation that was easy to treat, infected few, and passed quickly, with few complications and hardly ever a death. This way of thinking delayed preparation for, prevention of, and treatment of the Spanish flu, whose quick spread and virulent form meant that by the time attitudes changed it was much too late; it became one of the deadliest pandemics in history.

Contemporary

In the more modern period, there is an emerging position that anthropogenic climate change denial is a kind of cognitive inertia. Despite the evidence provided by scientific discovery, there are still those – including nations – who deny its existence in favour of existing patterns of development.

Geography

To better understand how individuals store and integrate new knowledge with existing knowledge, Friedman and Brown tested participants on the latitudes at which they believed countries and cities to be located and then, after giving them the correct information, tested them again on different cities and countries. The majority of participants were able to use the correct information to update their cognitive understanding of geography and place the new locations closer to their correct latitudes, which supported the idea that new knowledge affects not only the directly corrected information but also related information. However, there was a small effect of cognitive inertia, as some areas were unaffected by the correct information; the researchers suggested this was due to a lack of knowledge linking the corrected information to the new locations presented.

Group Membership

Politics

The persistence of political group membership and ideology is suggested to be due to the inertia of how the individual has perceived the grouping of ideas over time. The individual may accept that something counter to their perspective is true, but it may not be enough to tip the balance of how they process the entirety of the subject.

Governmental organisations can often be resistant or glacially slow to change along with social and technological transformation. Even when evidence of malfunction is clear, institutional inertia can persist. Political scientist Francis Fukuyama has asserted that humans imbue the rules they enact and follow with intrinsic value, especially in the larger societal institutions that create order and stability. Despite rapid social change and increasing institutional problems, the value placed on an institution and its rules can mask how well the institution is actually functioning, as well as how it could be improved. The inability to change an institutional mindset is consistent with the theory of punctuated equilibrium: long periods of deleterious governmental policy punctuated by moments of civil unrest. After decades of economic decline, the United Kingdom’s referendum to leave the EU was seen as an example of such dramatic movement after a long period of governmental inertia.

Interpersonal Roles

The unwavering views of the roles people play in our lives have been suggested as a form of cognitive inertia. When asked how they would feel about a classmate marrying their mother or father, many students said they could not view the classmate as a step-parent. Some students went so far as to say that the hypothetical relationship felt like incest.

Role inertia has also been implicated in marriage and the likelihood of divorce. Research shows that couples who cohabit before marriage are more likely to divorce than those who do not. The effect is most pronounced in the subset of couples who cohabit without first being transparent about future expectations of marriage. Over time, cognitive role inertia takes over, and the couple marries without fully processing the decision, often with one or both partners not fully committed to the idea. The lack of deliberative processing of existing problems and of the level of commitment in the relationship can lead to increased stress, arguments, dissatisfaction, and divorce.

In Business

Cognitive inertia is regularly referenced in business and management to refer to consumers’ continued use of products, a lack of novel ideas in group brainstorming sessions, and a lack of change in competitive strategies.

Brand Loyalty

Gaining and retaining customers is essential to whether a business succeeds early on. To assess a service, a product, or the likelihood of customer retention, many companies invite their customers to complete satisfaction surveys immediately after purchase. However, unless the survey is completed immediately at the point of purchase, the customer’s response is often based on an existing mindset about the company rather than the actual quality of the experience. Unless the product or service is extremely negative or positive, cognitive inertia about how the customer feels about the company will not be interrupted, even when the product or service is substandard. Such satisfaction surveys can therefore lack the information businesses need to improve a service or product and survive against the competition.

Brainstorming

Cognitive inertia plays a role in why few novel ideas are generated during group brainstorming sessions. Individuals in a group will often follow an idea trajectory, continuing to narrow in on ideas derived from the very first idea proposed in the session. This idea trajectory inhibits the creation of the new ideas that were the purpose of forming the group in the first place.

In an effort to combat cognitive inertia in group brainstorming, researchers had business students use either a single-dialogue or a multiple-dialogue approach to brainstorming. In the single-dialogue version, the students listed all their ideas and created one dialogue around the list, whereas in the multi-dialogue version, ideas were placed in subgroups that individuals could choose to enter and discuss, then freely move to another subgroup. The multi-dialogue approach combated cognitive inertia by allowing different ideas to be generated in subgroups simultaneously; each time an individual switched to a different subgroup, they had to change how they were processing the ideas, which led to more novel, high-quality ideas.

Competitive Strategies

Adapting cognitive strategies to changing business climates is often integral to whether a business succeeds or fails during economic stress. In the late 1980s in the UK, real estate agents’ cognitive competitive strategies did not shift with the signs of an increasingly depressed real estate market, despite their ability to acknowledge the signs of decline. Such cognitive inertia at the individual and corporate level has been proposed as a reason why companies do not adopt new strategies to combat decline or to take advantage of new opportunities. General Mills’ continued operation of mills long after they were no longer necessary is an example of a company refusing to change its mindset about how it should operate.

More famously, cognitive inertia in upper management at Polaroid was proposed as one of the main contributing factors to the company’s outdated competitive strategy. Management strongly held that consumers wanted high-quality physical copies of their photos, which was where the company made its money. Despite Polaroid’s extensive research and development in the digital market, its inability to refocus its strategy on hardware sales instead of film eventually led to its collapse.

Scenario planning has been suggested as one way to combat cognitive inertia when making strategic decisions to improve business. Individuals develop different strategies and outline how each scenario could play out, considering the different directions it could take. Scenario planning allows diverse ideas to be heard and the breadth of each scenario to be explored, which can help combat reliance on existing methods and the assumption that alternatives are unrealistic.

Management

In a recent review of company archetypes that lead to corporate failure, Habersang, Küberling, Reihlen, and Seckler defined “the laggard” as a company that rests on its laurels, believing past success and recognition will shield it from failure. Instead of adapting to changes in the market, “the laggard” assumes that the strategies that won the company success in the past will do the same in the future. This lag in changing how management thinks about the company can lead to rigidity in company identity (as with Polaroid), conflict in adapting when sales plummet, and resource rigidity. In the case of Kodak, instead of reallocating money to a new product or service strategy, the company cut production costs and imitated competitors, both of which led to poorer-quality products and eventually bankruptcy.

A review of 27 firms integrating the use of big data analytics found cognitive inertia to hamper widespread implementation, with managers from sectors that did not focus on digital technology seeing the change as unnecessary and cost-prohibitive.

Managers with high cognitive flexibility, who can change their type of cognitive processing based on the situation at hand, are often the most successful in solving novel problems and keeping up with changing circumstances. Interestingly, shifts in mental models (disruptions of cognitive inertia) during a company crisis frequently occur at the lower group level, with leaders coming to a consensus with the rest of the workforce on how to process and deal with the crisis, instead of vice versa. It is proposed that leaders can be blinded by their authority and too easily disregard those at the front line of the problem, causing them to reject profitable ideas.

Applications

Therapy

An inability to change how one thinks about a situation has been implicated as one of the causes of depression. Rumination, or the perseverance of negative thoughts, is often correlated with the severity of depression and anxiety. Individuals with high levels of rumination test low on scales of cognitive flexibility and have trouble shifting how they think about a problem or issue even when presented with facts that counter their thinking process.

In a review paper outlining strategies that are effective for combating depression, the Socratic method was suggested as a way to overcome cognitive inertia. By presenting the patient’s inconsistent beliefs close together and evaluating with the patient the thought processes behind those beliefs, the therapist is able to help them understand things from a different perspective.

Clinical Diagnostics

In nosological literature relating to the symptom or disorder of apathy, clinicians have used cognitive inertia as one of the three main criteria for diagnosis. The description of cognitive inertia differs from its use in cognitive and industrial psychology in that lack of motivation plays a key role. As a clinical diagnostic criterion, Thant and Yager described it as “impaired abilities to elaborate and sustain goals and plans of actions, to shift mental sets, and to use working memory”. This definition of apathy is frequently applied to onset of apathy due to neurodegenerative disorders such as Alzheimer’s and Parkinson’s disease but has also been applied to individuals who have gone through extreme trauma or abuse.

Neural Anatomy and Correlates

Cortical

Cognitive inertia has been linked to decreased use of executive function, primarily in the prefrontal cortex, which aids in the flexibility of cognitive processes when switching tasks. Delayed responses on the implicit association test (IAT) and the Stroop task have been related to an inability to combat cognitive inertia, as participants struggle to switch from one cognitive rule to the next in order to answer correctly.

In one study, participants were primed with achievement-motivating pictures before taking part in an electronic brainstorming session, in an effort to combat cognitive inertia. Subjects in the achievement-primed condition produced more novel, high-quality ideas and showed greater use of right frontal cortical areas related to decision-making and creativity.

Cognitive inertia is a critical dimension of clinical apathy, described as a lack of motivation to elaborate plans for goal-directed behaviour or automated processing. Parkinson’s patients whose apathy was measured using the cognitive-inertia dimension showed less executive function control than Parkinson’s patients without apathy, possibly suggesting more damage to the frontal cortex. Additionally, in Parkinson’s, Huntington’s and other neurodegenerative disorders, more damage to the basal ganglia has been found in patients exhibiting cognitive inertia related to apathy than in those who do not exhibit apathy. Patients with lesions to the dorsolateral prefrontal cortex have shown reduced motivation to change cognitive strategies and how they view situations, similar to individuals who experience apathy and cognitive inertia after severe or long-term trauma.

Functional Connectivity

Nursing home patients with dementia have been found to have larger reductions in functional brain connectivity, primarily in the corpus callosum, which is important for communication between the hemispheres. Cognitive inertia in neurodegenerative patients has also been associated with decreased connectivity of the dorsolateral prefrontal cortex and posterior parietal area with subcortical areas, including the anterior cingulate cortex and basal ganglia. Both reductions are suggested to decrease motivation to change one’s thought processes or to create new goal-directed behaviour.

Alternative Theories

Some researchers have disputed the cognitive perspective on cognitive inertia and suggest a more holistic approach that considers the motivations, emotions, and attitudes that fortify the existing frame of reference.

Alternative Paradigms

Motivated Reasoning

Motivated reasoning is proposed to be driven by the individual’s motivation to think a certain way, often to avoid thinking negatively about oneself. The individual’s own cognitive and emotional biases are commonly used to justify a thought, belief, or behaviour. Unlike cognitive inertia, in which an individual’s orientation in processing information remains unchanged either because new information is not fully absorbed or because it is blocked by a cognitive bias, motivated reasoning may change the orientation or keep it the same, depending on whether that orientation benefits the individual.

In an extensive online study assessing the role of cognitive inertia, participants gave their opinions on various political issues after a first reading and were then assigned a second reading containing new information that either confirmed or disconfirmed their initial opinion. The majority of participants’ opinions did not change. When asked about the information in the second reading, those who did not change their opinion evaluated the information that supported their initial opinion as stronger than the information that disconfirmed it. The persistence in how the participants viewed the incoming information was based on their motivation to be correct in their initial opinion, not on the persistence of an existing cognitive perspective.

Socio-Cognitive Inflexibility

From a social psychology perspective, individuals continually shape beliefs and attitudes about the world based on interaction with others. What information an individual attends to is based on prior experience and knowledge of the world. Cognitive inertia is seen not just as a malfunction in updating how information is processed but as a case in which assumptions about the world and how it works impede cognitive flexibility. The persistence of the idea of the nuclear family has been proposed as an example of socio-cognitive inertia. Despite changing trends in family structure, including multi-generational, single-parent, blended, and same-sex parent families, the normative idea of a family has centred around the mid-twentieth-century idea of a nuclear family (i.e. mother, father, and children). Various social influences are proposed to maintain the inertia of this viewpoint, including media portrayals, the persistence of working-class gender roles, unchanged domestic roles despite working mothers, and familial pressure to conform.

The phenomenon of cognitive inertia in brainstorming groups has been argued to be due to other psychological effects, such as fear of disagreeing with an authority figure in the group, fear of new ideas being rejected, and most of the speaking being done by a minority of group members. Internet-based brainstorming groups have been found to produce more high-quality ideas because they overcome the reluctance to speak up and the fear of idea rejection.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Cognitive_inertia >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Normalisation

Introduction

“The normalization principle means making available to all people with disabilities patterns of life and conditions of everyday living which are as close as possible to the regular circumstances and ways of life of society.” (Nirje, 1982).

Normalisation is a rigorous theory of human services that can be applied to disability services. Normalisation theory arose in the early 1970s, towards the end of the institutionalisation period in the US; it is one of the strongest and longest-lasting integration theories for people with severe disabilities.

Definition

Normalisation involves the acceptance of people with disabilities, with their disabilities, offering them the same conditions as are offered to other citizens. It involves an awareness of the normal rhythm of life – including the normal rhythm of a day, a week, a year, and the life-cycle itself (e.g. celebration of holidays; workdays and weekends). It involves the normal conditions of life – housing, schooling, employment, exercise, recreation and freedom of choice – previously denied to individuals with severe, profound, or significant disabilities.

Wolf Wolfensberger’s definition is based on a concept of cultural normativeness: “Utilization of means which are as culturally normative as possible, in order to establish and/or maintain personal behaviours and characteristics that are as culturally normative as possible.” Thus, for example, “medical procedures” such as shock treatment or restraints are not just punitive but also not “culturally normative” in society. His principle is based upon social and physical integration, which later became popularised, implemented and studied in services as community integration, encompassing areas from work to recreation and living arrangements.

Theoretical Foundations

This theory includes “the dignity of risk”, rather than an emphasis on “protection”, and is based upon the concept of integration in community life. The theory was one of the first to examine comprehensively both the individual and the service system, similar to the competing theories of human ecology of the same period.

The theory undergirds the deinstitutionalisation and community integration movements, and forms the legal basis for affirming rights to education, work, community living, medical care and citizenship. In addition, self-determination theory could not develop without this conceptual academic base to build upon and critique.

The theory of social role valorisation is closely related to the principle of normalisation, having been developed with normalisation as its foundation. It retains most aspects of normalisation, concentrating on socially valued roles and means in socially valued contexts to achieve integration and other core quality-of-life values.

Brief History

The principle of normalisation was developed in Scandinavia during the sixties and articulated by Bengt Nirje of the Swedish Association for Retarded Children; in the US, the human service system’s version is a product of Wolf Wolfensberger’s formulation of normalisation and his evaluations of the early 1970s. According to the history taught in the 1970s, although the “exact origins are not clear”, the names Bank-Mikkelson (who moved the principle into Danish law), Grunewald, and Nirje (later of the Ministry of Community and Social Services in Toronto, Canada) are associated with early work on this principle in Scandinavia. Wolfensberger is credited with authoring the first textbook as a “well-known scholar, leader, and scientist”, and Rutherford H. (Rud) Turnbull III reports that integration principles are incorporated in US laws.

Academe

The principle was developed and taught at the university level and in field education during the seventies, especially by Wolf Wolfensberger of the United States, one of the first clinical psychologists in the field of mental retardation, with the support of Canada’s National Institute on Mental Retardation (NIMR) and Syracuse University in New York State. PASS and PASSING marked the quantification of service evaluations based on normalisation, and in 1991 a report was issued on the quality of institutional and community programmes based on a sample of 213 programmes in the US, Canada and the United Kingdom.

Significance in Structuring Service Systems

Normalisation has had a significant effect on the way services for people with disabilities have been structured throughout the UK, Europe (especially Scandinavia), North America, Israel, Australasia (e.g. New Zealand) and, increasingly, other parts of the world. It has led to a new conceptualisation of disability not simply as a medical issue (the medical model, which saw the person as indistinguishable from the disorder, though Wolfensberger continued to use the term into the 2000s) but as a social situation, as described in social role valorisation.

From the 1970s, government reports began to reflect this changing view of disability (Wolfensberger uses the term devalued people); for example, the NSW Anti-Discrimination Board report of 1981 made recommendations on:

“the rights of people with intellectual handicaps to receive appropriate services, to assert their rights to independent living so far as this is possible, and to pursue the principle of normalization.”

The New York State Quality of Care Commission also recommended education based upon principles of normalisation and social role valorisation addressing “deep-seated negative beliefs of and about people with disabilities”. Wolfensberger’s work was part of a major systems reform in the US and Europe of how individuals with disabilities would be served, resulting in the growth in community services in support of homes, families and community living.

Critical Ideology of Human Services

Normalisation is often described, in articles and education texts reflecting deinstitutionalisation, family care or community living, as the ideology of human services. Its roots are European-American and, as discussed in education fields in the 1990s, reflect a traditional position on gender relationships (Racino, 2000), among similar diversity critiques of the period (i.e. multiculturalism). Normalisation has undergone extensive reviews and critiques, which have increased its stature through the decades, often equating it with school mainstreaming, deinstitutionalisation, and life success.

In Contemporary Society

In the United States, large public institutions housing adults with developmental disabilities began to be phased out as a primary means of delivering services in the early 1970s, and the statistics have been documented up to the present day (2015) by David Braddock and his colleagues. As early as the late 1960s, the normalisation principle was described as a way to change the pattern of residential services, as exposés occurred in the US and reform initiatives began in Europe. These proposed changes were described in the leading text by the President’s Committee on Mental Retardation (PCMR) titled “Changing Patterns in Residential Services for the Mentally Retarded”, with leaders Burton Blatt, Wolf Wolfensberger, Bengt Nirje, Bank-Mikkelson, Jack Tizard, Seymour Sarason, Gunnar Dybwad, Karl Gruenwald, Robert Kugel, and lesser-known colleagues Earl Butterfield, Robert E. Cooke, David Norris, H. Michael Klaber, and Lloyd Dunn.

Deinstitutionalisation and Community Development

The impetus for this mass deinstitutionalisation was typically complaints of systematic abuse of patients by staff and others responsible for the care and treatment of this traditionally vulnerable population, along with media and political exposés and hearings. These complaints, accompanied by judicial oversight and legislative reform, resulted in major changes in the education of personnel and in the development of principles for conversion models from institutions to communities, known later as the community paradigms. In many states the recent process of deinstitutionalisation has taken 10–15 years due to a lack of community supports in place to assist individuals in achieving the greatest possible degree of independence and community integration. Yet many early recommendations from 1969 still hold, such as financial aid to keep children at home, establishment of foster care services, leisure and recreation, and opportunities for adults to leave home and attain employment (Bank-Mikkelsen, pp. 234–236, in Kugel & Wolfensberger, 1969).

Community Supports and Community Integration

A significant obstacle to developing community supports has been ignorance and resistance on the part of “typically developed” community members, who have been taught by contemporary culture that “those people” are somehow fundamentally different and flawed and that it is in everyone’s best interest if they are removed from society (ideas that developed out of 19th-century notions of health, morality, and contagion). Part of the normalization process has been returning people to the community and supporting them in attaining as “normal” a life as possible, but another part has been broadening the category of “normal” (sometimes taught as “regular” in community integration, or, below, as “typical”) to include all human beings. In part, the word “normal” continues to be used in contrast to “abnormal”, a term for differentness or for being outside the norm or accepted routine (e.g. middle class).

Contemporary Services and Workforces

In 2015, public views and attitudes remain critical, both because personnel for fields such as mental health are recruited from the broader society and because contemporary community services continue to include models such as the international “emblem of the group home” for individuals with significant disabilities moving to the community. Today, the US direct support workforce, associated with the University of Minnesota’s Institute on Community Integration (School of Education), can trace its roots to a normalisation base that shaped the education and training of the next generation of personnel.

People with disabilities are not to be viewed as sick, ill, abnormal, subhuman, or unformed, but as people who require significant supports in certain (but not all) areas of their life from daily routines in the home to participation in local community life. With this comes an understanding that all people require supports at certain times or in certain areas of their life, but that most people acquire these supports informally or through socially acceptable avenues. The key issue of support typically comes down to productivity and self-sufficiency, two values that are central to society’s definition of self-worth. If we as a society were able to broaden this concept of self-worth perhaps fewer people would be labelled as “disabled.”

Contemporary Views on Disability

During the mid-to-late 20th century, people with disabilities were met with fear, stigma, and pity. Their opportunities for a full, productive life were minimal at best, and emphasis was often placed on personal characteristics that could be enhanced so as to draw attention away from the disability. Linkowski developed the Acceptance of Disability Scale (ADS) during this time to help measure a person’s struggle to accept disability. He developed the ADS to reflect the value-change process associated with the acceptance-of-loss theory. In contrast to earlier trends, the current trend shows great improvement in the quality of life of those with disabilities. Sociopolitical definitions of disability, the independent living movement, improved media and social messages, attention to situational and environmental barriers, and the passage of the Americans with Disabilities Act of 1990 have all come together to help a person with a disability define their acceptance of what living with a disability means.

Bogdan and Taylor’s (1993) sociology of acceptance, which holds that a person need not be defined by personal characteristics alone, has become influential in helping persons with disabilities refuse to accept exclusion from mainstream society. According to some disability scholars, disabilities are created by oppressive relations with society; this has been called the social creationist view of disability. In this view, it is important to grasp the difference between physical impairment and disability. In the article “The Mountain”, Eli Clare cites Michael Oliver’s definitions: impairment is lacking part or all of a limb, or having a defective limb, organ or mechanism of the body; disability, the societal construct, is the disadvantage or restriction of activity caused by a contemporary social organisation that takes little or no account of people who have physical (and/or cognitive/developmental/mental) impairments and thus excludes them from the mainstream of society. In society, language helps to construct reality; for instance, defining disability as the lack of an ability implies that a disabled person lacks a capacity or possibility that could contribute to her personal well-being and enable her to be a contributing member of society, measured against abilities and possibilities that are considered good and useful.

Personal Wounds, Quality of Life and Social Role Valorisation

However, the perspective of Wolfensberger, who served as associated faculty with the Rehabilitation Research and Training Centre on Community Integration (despite his concerns about federal funds), is that the people he has known in institutions have “suffered deep wounds”. This view, reflected in his early overheads on PASS ratings, is similar to other literature reflecting the need for hope in situations where aspirations and expectations for quality of life had previously been very low (e.g. brain injury, independent living). Normalisation advocates were among the first to develop models of residential services, and to support contemporary practices in recognising families and supporting employment. Wolfensberger himself found the new term social role valorisation to better convey his theories (and his German professorial temperament, family life and beliefs) than the constant “misunderstandings” of the term normalisation.

Related Theories and Development

Related theories on integration in the subsequent decades have been termed community integration, self-determination or empowerment theory, the support and empowerment paradigms, community building, functional competency, family support, (though often treated separately) independent living (supportive living) and, in 2015, the principle of inclusion, which also has roots in the service fields of the 1980s.

Misconceptions

Normalisation is so common in the fields of disability, especially intellectual and developmental disabilities, that articles will critique normalisation without ever referencing one of its three international leaders (Wolfensberger, Nirje, and Bank-Mikkelsen) or any of the women educators: Wolfensberger’s Susan Thomas; Syracuse University colleagues Taylor, Biklen or Bogdan; established women academics (e.g. Sari Biklen); or emerging women academics such as Traustadottir, Shoultz or Racino in national research and education centres (e.g. Hillyer, 1993). In particular, this may be because Racino (with Taylor) leads an international field on community integration, a neighbouring concept related to the principle of normalisation, and was pleased to have Dr. Wolf Wolfensberger among Centre Associates. It is thus important to discuss common misconceptions about the principle of normalisation and its implications among the provider-academic sectors:

a) Normalisation does not mean making people normal – forcing them to conform to societal norms.

Wolfensberger himself, in 1980, suggested that “normalizing measures can be offered in some circumstances, and imposed in others.” This view is not accepted by most people in the field, including Nirje. Advocates emphasize that it is the environment, not the person, that is normalized, or, as it has been known for decades, the person-environment interaction.

Normalization is theoretically complex, and Wolf Wolfensberger’s educators explain his positions, such as the conservatism corollary, deviancy unmaking, the developmental model (see below), social competency, and the relevance of social imagery, among others.

b) Normalisation does not support “dumping” people into the community or into schools without support.

Normalisation has been blamed for the closure of services (such as institutions), leading to a lack of support for children and adults with disabilities. In fact, normalisation personnel are often affiliated with human rights groups. Normalisation is not deinstitutionalisation, though institutions have been found not to “pass” in service evaluations and have been the subject of exposés. Normalisation was described early on, by leaders of the deinstitutionalisation movement, as alternative special education.

However, support services which facilitate normal life opportunities for people with disabilities – such as special education services, housing support, employment support and advocacy – are not incompatible with normalization, although some particular services (such as special schools) may actually detract from rather than enhance normal living, bearing in mind the concept of normal ‘rhythms’ of life.

c) Normalisation supports community integration, but the principles vary significantly on matters such as gender and disability with community integration directly tackling services in the context of race, ethnicity, class, income and gender.

Some misconceptions and confusions about normalisation are removed by understanding its context. There has been a general belief that ‘special’ people are best served if society keeps them apart, puts them together with ‘their own kind’, and keeps them occupied. The principle of normalisation is intended to refute this idea, rather than to deal with subtleties around the question of ‘what is normal?’ The principle of normalisation is congruent in many of its features with “community integration” and has been described by educators as supporting early mainstreaming in community life.

d) Normalisation supports adult services by age range, not “mental age”, and appropriate services across the lifespan.

Arguments about choice and individuality, in connection with normalisation, should also take into account whether society, perhaps through paid support staff, has encouraged people into certain behaviours. For example, a discussion about an adult’s choice to carry a doll must be informed by the recognition that they have previously been encouraged in childish behaviours and that society currently expects them to behave childishly. Most people who find normalisation a useful principle would hope to find a middle way: in this case, an adult’s interest in dolls being valued, but actively encouraged in an age-appropriate form (e.g. viewing museums and doll collections), with awareness of gender in toy selection (e.g. cars and motorsports), while discouraging childish behaviour that would see the person accorded only the rights and routines of a “perpetual child”. However, the principle of normalisation also refers to the means by which a person is supported, so that (in this example) any encouragement or discouragement offered in a patronising or directive manner is itself seen as inappropriate.

e) Normalisation is a set of values, and early on (1970s) was validated through quantitative measures (PASS, PASSING).

Normalisation principles were designed to be measured and ranked on all aspects through the development of measures related to homes, facilities, programmes, location (i.e. community development), service activities, and life routines, among others. These service evaluations have been used for training community services personnel, both in institutions and in the community.

Normalisation as the basis for education of community personnel in Great Britain is reflected in a 1990s reader, which highlights Wolf Wolfensberger’s moral concerns as a Christian rights activist (“How to Function with Personal Model Coherency in a Dysfunctional (Human Service) World”) side by side with the common form of normalisation training for evaluations of programmes. Community educators and leaders in Great Britain and the US of different political persuasions include John O’Brien and Connie Lyle O’Brien, Paul Williams and Alan Tyne, Guy Caruso and Joe Osborn, Jim Mansell and Linda Ward, among many others.

References

Nirje, B. (1982). The basis and logic of the normalisation principle. Sixth International Congress of IASSMD, Toronto.

An Overview of Preference Falsification

Introduction

Preference falsification is the act of misrepresenting a preference under perceived public pressures. It involves the selection of a publicly expressed preference that differs from the underlying privately held preference (or simply, a public preference at odds with one’s private preference). People frequently convey to each other preferences that differ from what they would communicate privately under credible cover of anonymity (such as in opinion surveys to researchers or pollsters). Pollsters can use techniques such as list experiments to uncover preference falsification.
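The logic of a list experiment can be sketched numerically. In the sketch below, all response counts are invented for illustration: a control group reports how many of four innocuous statements they endorse, while a treatment group receives the same list plus one sensitive statement. Because respondents report only a total count, no individual answer reveals their view of the sensitive item, yet the difference in group means estimates how widely the sensitive preference is privately held.

```python
# List experiment (item-count technique): a minimal sketch with made-up data.
# Each number is one respondent's count of endorsed items.

control = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2]    # counts out of 4 innocuous items
treatment = [3, 3, 1, 2, 4, 2, 2, 3, 3, 2]  # counts out of 5 items (4 + sensitive)

mean_control = sum(control) / len(control)      # 2.1
mean_treatment = sum(treatment) / len(treatment)  # 2.5

# The difference in means estimates the share privately holding
# the sensitive preference, without identifying any respondent.
prevalence = mean_treatment - mean_control
print(f"Estimated prevalence of sensitive preference: {prevalence:.2f}")
```

With these illustrative numbers the estimate is 0.40, i.e. roughly 40% of respondents privately hold the sensitive preference even if few would admit it when asked directly.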

The term preference falsification was coined by Timur Kuran in a 1987 article, “Chameleon voters and public choice.” He showed there that, on controversial matters that induce preference falsification, widely disliked policies may appear popular. The distribution of public preferences, which Kuran defines as public opinion, may differ greatly from private opinion, the distribution of private preferences known only to individuals themselves.

Kuran developed the implications of this observation in a 1995 book, Private Truths, Public Lies: The Social Consequences of Preference Falsification. This book argues that preference falsification is not only ubiquitous but has huge social and political consequences. It provides a theory of how preference falsification shapes collective illusions, sustains social stability, distorts human knowledge, and conceals political possibilities. A collective illusion arises when most people in a group go along with an idea or preference they do not actually agree with, because they incorrectly believe that most others in the group agree with it.

Specific Form of Lying

Preference falsification aims specifically at moulding the perceptions others hold about one’s motivations. As such, not all forms of lying entail preference falsification. To withhold bad medical news from a terminally ill person is a charitable lie. But it is not preference falsification, because the motivation is not to conceal a wish.

Preference falsification is not synonymous with self-censorship, which is simply the withholding of information. Whereas self-censorship is a passive act, preference falsification is performative. It entails actions meant to project a contrived preference.

Strategic voting occurs when, in the privacy of an election booth, one votes for candidate B because A, one’s favourite, cannot win. This entails preference manipulation but not preference falsification, which is a response to social pressures. In a private polling booth, there are no social pressures to accommodate and no social reactions to control.

Private Opinion vs. Public Opinion

The term public opinion is commonly used in two senses. The first is the distribution of people’s genuine preferences, often measured through surveys that provide anonymity. The second meaning is the distribution of preferences that people convey in public settings, which is measured through survey techniques that allow the pairing of responses with specific respondents. Kuran distinguishes between the two meanings for analytic clarity, reserving public opinion only for the latter. He uses the term private opinion to describe the distribution of a society’s private preferences, known only to individuals themselves.

On socially controversial issues, preference falsification is often pervasive, and ordinarily public opinion differs from private opinion.

Private Knowledge vs. Public Knowledge

Private preferences over a set of options rest on private knowledge, which consists of the understandings that individuals carry in their own minds. A person who privately favours reforming the educational system does so in the belief that, say, schools are failing students and a new curriculum would serve them better. But this person need not convey to others her sympathy towards a new curriculum. To avoid alienating powerful political groups, she could pretend to consider the prevailing curriculum optimal. In other words, her public knowledge could be a distorted, if not completely fabricated, version of what she really perceives and understands.

Knowledge falsification causes public knowledge to differ from private knowledge.

Three Main Claims of Kuran’s Theory

Private Truths, Public Lies identifies three basic social consequences of preference falsification:

  1. Distortion of social decisions;
  2. Distortion of private knowledge; and
  3. Unanticipated social discontinuities.

1. Distortion of Social Decisions

Among the social consequences of preference falsification is the distortion of social decisions. In misrepresenting public opinion, it corrupts a society’s collective policy choices. One manifestation is collective conservatism, which Kuran defines as the retention of policies that would be rejected in a vote taken by secret ballot and the implicit rejection of alternative policies that, if voted on, would command stable support.

For an illustration, suppose that a vocal minority within a society takes to shaming the supporters of a certain reform. Simply to protect their personal reputations, people privately favouring the reform might start pretending to be satisfied with the status quo. In falsifying their preferences, they would inflate the perceived share of reform opponents, which would discourage other reform sympathizers from publicizing their own desires for change. With enough reform sympathizers opting for comfort through preference falsification, a clear majority privately favouring reform could co-exist with an equally clear majority publicly opposing the same reform. In other words, private opinion could support reform even as public opinion opposes it.
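The inversion described above can be shown with a toy calculation. The stances, tolerance values, and pressure level below are all invented for illustration: each citizen has a private stance and a personal tolerance for reputational cost, and anyone privately favouring reform whose tolerance falls below the current shaming pressure publicly backs the status quo instead.

```python
# Toy model: preference falsification inverting public opinion.
pressure = 0.5  # reputational cost of publicly supporting reform

# (private stance, tolerance for social pressure) for ten hypothetical citizens
citizens = [("reform", 0.2), ("reform", 0.3), ("reform", 0.1), ("reform", 0.7),
            ("reform", 0.4), ("reform", 0.2), ("status quo", 0.9),
            ("status quo", 0.6), ("status quo", 0.8), ("status quo", 0.5)]

# Private opinion: what citizens actually want.
private_reform = sum(1 for stance, _ in citizens if stance == "reform")

# Public opinion: only reformers who can bear the pressure speak honestly;
# the rest falsify their preference and publicly back the status quo.
public_reform = sum(1 for stance, tol in citizens
                    if stance == "reform" and tol >= pressure)

print(f"Private opinion: {private_reform}/10 favour reform")  # 6/10
print(f"Public opinion:  {public_reform}/10 favour reform")   # 1/10
```

A clear private majority for reform (6 of 10) coexists with an overwhelming public majority against it (9 of 10), exactly the co-existence the paragraph describes.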

A democracy has a built-in mechanism for correcting distortions in public opinion: periodic elections by secret ballot. On issues where preference falsification is rampant, elections allow hidden majorities to make themselves heard and exert influence through the ballot box. The privacy afforded by secret balloting allows voters to cast ballots aligned with their private preferences. As private opinion gets revealed through the ballot box, preference falsifiers may discover, to their delight, that they form a majority. They may infer that they have little to fear from vocalising honestly what they want. That is the expectation underlying secret balloting.

In practice, however, secret-ballot elections serve their intended corrective function imperfectly. For one thing, on issues that induce rampant preference falsification, elections may offer little choice. All serious contestants will often take the same position, partly to avoid being shamed and partly to position themselves optimally in policy spaces to maximise their appeal to the electorate. For another, in periodic elections citizens of a democracy vote for representatives or political parties that stand for policy packages. They do not vote on individual policies directly. Therefore, the messages that a democratic citizenry conveys through secret balloting are necessarily subject to interpretation. A party opposed to a particular reform may win because of its stands on other issues. Yet its victory may be interpreted as a rejection of reform.

Nevertheless, periodic secret balloting limits the harms of preference falsification. It keeps public opinion from straying too far from private opinion on matters critical to citizens. By contrast, in nondemocratic political regimes no legal mechanism exists for uncovering hidden sentiments. Therefore, serious distortions of public opinion are correctable only through extra-legal means, such as rioting, a coup, or a revolution.

2. Distortion of Private Knowledge

Private preferences may change through learning. We learn from our personal experiences, and we can think for ourselves. Yet, because our cognitive powers are bounded, we can reflect comprehensively on only a small fraction of the issues on which we choose, or are forced, to express a preference. However much we might want to think independently on every issue, our private knowledge unavoidably rests partly on the public knowledge that enters public discourse—the corpus of suppositions, observations, assertions, arguments, theories, and opinions in the public domain. For example, most people’s private preferences concerning international trade are based, to one degree or another, on the public communications of others, whether through publications, TV, social media, gatherings of friends, or some other medium.

Preference falsification shapes or reshapes private knowledge by distorting the substance of public discourse. The reason is that, to conceal our private preferences successfully, we must control the impressions we convey. Effective control requires careful management not only of our body language but also of the knowledge that we convey publicly. In other words, credible preference falsification requires engaging in appropriately tailored knowledge falsification as well. To convince an audience that we favour trade quotas, facts and arguments supportive of quotas must accompany our pro-quota public preference.

Knowledge falsification corrupts and impoverishes the knowledge in the public domain, Kuran argues. It exposes others to facts that knowledge falsifiers know to be false. It reinforces the credibility of falsehoods. And it conceals information that the knowledge falsifier considers true.

Preference falsification is thus a source of avoidable misperceptions, even ignorance, about the range of policy options and about their relative merits. This generally harmful effect of preference falsification works largely through the knowledge falsification that accompanies it. The disadvantages of a particular policy, custom, or regime might have been appreciated widely in the past. However, insofar as public discourse excludes criticism of the publicly fashionable options, the objections will tend to be forgotten. Among the mechanisms producing such collective amnesia is population replacement through births and deaths. New generations are exposed not to the unfiltered knowledge in their elders’ heads but, rather, to the reconstructed knowledge that their elders feel safe to communicate. Suppose that an aging generation had disliked a particular institution but refrained from challenging it. Absent experiences that make the young dislike that institution, they will preserve it to avoid social sanctions but also, perhaps mainly, because the impoverishment of public discourse has blinded them to the flaws of the status quo and blunted their capacity to imagine better alternatives. The preference and knowledge falsification of their parents will have left them intellectually handicapped.

Over the long run, then, preference falsification brings intellectual narrowness and ossification. Insofar as it leaves people unequipped to criticise inherited social structures, current preference falsification ceases to be a source of political stability. People support the status quo genuinely, because past preference falsification has removed their inclinations to want something different.

The possibility of such socially induced intellectual incapacitation is highest in contexts where private knowledge is drawn largely from others. It is low, though not nil, on matters where the primary source of private knowledge is personal experience. Two other factors influence the level of ignorance generated by preference falsification. Individuals are more likely to lose touch with alternatives to the status quo if public opinion reaches an equilibrium devoid of dissent than if some dissenters keep publicising the advantages of change. Likewise, widespread ignorance is more likely in a closed society than in one open to outside influences.

3. Generating Surprise

If public discourse were the only determinant of private knowledge, a public consensus, once in place, would be immutable. In fact, private knowledge has other determinants as well, and changes in them can make a public consensus unravel. But this unravelling need not occur in tandem with growing private opposition to the status quo. For a while, its effect may simply be to accentuate preference falsification (for the underlying logic, see also works by Mark Granovetter, Thomas Schelling, Chien-Chun Yin, and Jared Rubin). Just as underground stresses can build up for decades without shaking the ground above, so silently endured discontents may keep moving private opinion against the status quo without altering public opinion. And just as an earthquake can hit suddenly in response to an intrinsically minor tectonic shift, so public opinion may change explosively in response to an intrinsically minor event that shifts personal political incentives. Summarising Kuran’s logic requires considering the incentives and disincentives to express a preference likely to draw adverse reactions from others.

In Kuran’s basic theory, preference falsification imposes a cost on the falsifier in the form of resentment, anger, and humiliation for compromising his individuality. And this psychological cost grows with the extent of preference falsification. Accordingly, a citizen will find it harder to feign approval of the established policy if he favours massive reform than if he favours mild reform. In choosing a public preference with respect to the status quo, the individual must also consider the reputational consequences of the preference he conveys to others. If reformists are stigmatised and ostracised, and establishmentarians are rewarded, solely from a reputational standpoint he would find it more advantageous to appear as an establishmentarian. The reputational payoff from any given choice of a public preference depends on the relative shares of society publicly supporting each political option. That is because each camp’s rewarding and punishing is done by its own members. The camps thus form pressure groups. All else equal, the larger a pressure group, the greater the pressure it exerts on members of society.

Unless the established policy happens to coincide with an individual’s private ideal, he thus faces a trade-off between the internal benefits of expressing himself truthfully and the external advantages of being known as an establishmentarian. To any issue, observes Kuran, individuals can bring different wants, different needs for social approval, and different needs to express themselves truthfully. These possibilities imply that people can differ in their responses to prevailing social pressures. Of two reform-minded individuals, one may resist social pressures and express her preference truthfully while the other opts to accommodate the pressures through preference falsification. A further implication is that individuals can differ in terms of the social incentives necessary to make them abandon one public preference for another. The switchover points define individuals’ political thresholds. Political thresholds can vary across individuals for the reasons given above.

We are now ready to explain how, when private opinion and public opinion are far apart, a shock of the right kind can make a critical number of disgruntled individuals reach their thresholds for expressing themselves truthfully, setting in motion a public-preference cascade (also known as a public-preference bandwagon or, when the form of preference is clear from the context, a preference cascade). Until the critical mass is reached, changes in individual dispositions are invisible to outsiders, even to one another. Once it is reached, switches in public preferences impel people with thresholds a bit higher than those within the critical mass to add their own voices to the chorus for reform. Support for reform then keeps feeding on itself through growing pro-reform pressure and diminishing pressure favouring the status quo. Each addition to the reformist camp induces further additions until a much larger share of society stands for change. The preference cascade ends when no one is left whose threshold is low enough to be tipped into the reformist camp by one more individual’s switch.
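The cascade dynamic can be sketched as a small simulation in the spirit of the threshold models of Granovetter and Schelling that Kuran builds on. The threshold values below are invented for illustration: each is the share of society already publicly pro-reform that a person needs to see before speaking out, and a threshold of zero marks a dissident who speaks regardless.

```python
# Minimal threshold-cascade sketch (illustrative numbers, not Kuran's data).

def cascade(thresholds):
    """Iterate until no further thresholds are crossed; return final reform share."""
    n = len(thresholds)
    public_reform = 0.0  # share of society publicly pro-reform
    while True:
        joined = sum(1 for t in thresholds if t <= public_reform)
        share = joined / n
        if share == public_reform:  # fixed point: no one else tips
            return share
        public_reform = share

# Without a zero-threshold dissident, no one moves first and nothing happens.
stable = [0.1, 0.3, 0.3, 0.4, 0.5, 0.6, 0.6, 0.7, 0.8, 0.9]
print(cascade(stable))   # 0.0

# A small shock lowers one threshold to zero; because the thresholds form an
# unbroken chain, each switch tips the next person and the cascade runs to 1.0.
shocked = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
print(cascade(shocked))  # 1.0
```

The two runs illustrate Kuran’s point about surprise: the populations differ in a single, intrinsically minor value, yet one yields total quiescence and the other a full revolution, and nothing visible beforehand distinguishes them.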

This explosive growth in public support for reform amounts to a political revolution. The revolution will not have been anticipated, because preference falsification had concealed political currents flowing under the visible political landscape. Despite the lack of foresight, the revolution will easily be explained with the benefit of hindsight. Its occurrence lowers the personal risk of revealing one’s past preference falsification. Tales of expressive repression expose the vulnerability of the pre-revolutionary social order. Though many of these tales will be completely true, others will be exaggerated, and still others will be outright lies. Indeed, the revolution creates incentives for people who were long satisfied genuinely with the status quo to pretend that, at heart, they were always reformists waiting for a prudent time to speak out.

Good hindsight does not imply good foresight, Kuran insists. To understand why we were fooled in the past does not provide immunity to being surprised by future social discontinuities. Wherever preference falsification exists, an unanticipated social break is possible.

Kuran developed his theory of “unanticipated revolution” in an April 1989 article that gave the French Revolution of 1789, the Russian Revolution of February 1917, and the Iranian Revolution of 1978-79 as examples of earth-shattering events that caught the world by surprise. When the Berlin Wall fell in November 1989, and several East European communist regimes fell in quick succession, he interpreted the surprise through an illustrative form of his theory. Both articles predict that revolutionary political surprises are a fact of political life; no amount of modelling and empirical research will provide full predictability as long as public preferences are interdependent and preference falsification exists. In a 1995 article, he emphasized that his unpredictability prediction is falsifiable. He stated as a proposition: “The ubiquity of preference falsification makes more revolutionary surprises inevitable.” This proposition “can be debunked,” he wrote, “by constructing a theory that predicts future revolutions accurately,” illustrating through examples that the predictions would need to specify the timing.

Case Studies

Kuran’s Private Truths, Public Lies contains three case studies. They involve the trajectory of East European communism, India’s caste system, and racial inequality and related policies in the United States. Many other scholars have applied the concept of preference falsification in myriad contexts. Some prominent cases are summarised here, and additional cases are referenced.

Communism’s Persistence and Sudden Fall

Persistence of Communism

For many decades, the communist regimes of Eastern Europe, all established during or after World War II as “people’s democracies,” drew public support from millions of dissatisfied citizens. The reason is only partly that authorities punished dissenters. Citizens seeking to prove their loyalty to communism participated in the vilification of nonconformists, even of dissidents whose political positions they privately admired. This insincerity made it highly imprudent to oppose communism publicly. As such, it contributed to the survival of generally despised communist regimes. Vocal dissenters existed. They included Alexander Solzhenitsyn, Andrei Sakharov, and Václav Havel. But East European dissidents were far outnumbered by unhappy citizens who opted to appear supportive of the incumbent regime. By and large, dissidents were people with an enormous capacity for enduring social stigma, harassment, and even imprisonment. In terms of the Kuran model, they had uncommonly low thresholds for speaking their minds. Most East Europeans had much higher thresholds. Accordingly, for all the hardships of life under communism, they remained politically submissive for years on end.

Ideological Influence

One can privately despise a regime without loss of belief in the principles it stands for. By and large, people who came to disdain communist regimes continued, for decades, to believe in communism’s viability. Most attributed its shortcomings to corrupt leaders, remaining sympathetic to communism itself.

Kuran attributes communism’s ideological influence partly to preference falsification on the part of people who felt victimized by it. In concealing their grievances to avoid being punished as an “enemy of the people,” victims had to refrain from communicating their observations about communism’s failures; they also had to pay lip service to Marxist principles. Their knowledge falsification distorted public discourse enormously, sowing confusion about the shortcomings of communism. Not even outspoken dissidents came out unscathed. Until Mikhail Gorbachev’s reforms of the 1980s broke longstanding taboos, most East European dissidents remained committed to some form of socialism.

Well before the fall of communism, during the heyday of Soviet power and apparent invincibility, the dissident Alexander Solzhenitsyn pointed to this phenomenon of intellectual enfeeblement. He said that the Soviet people had become “mental cripples.”

The large dissident literature of the communist world provides evidence. Not even courageous social thinkers escaped the damage of intellectual impoverishment. Certain unusually gifted scholars and statesmen recognised that something essential was wrong. From the Khrushchev era (1953–64) onwards, they spearheaded reforms such as Hungarian market socialism and the Yugoslav labor-managed enterprise. But the architects of these reforms failed to recognize the fatal flaws of the system they tried to salvage. Well into the 1980s, most reformers continued to regard central planning as indispensable. They criticised black markets but rarely understood that communism made black markets inevitable. Likewise, the instigators of Hungary’s crushed revolution of 1956 and the Prague Spring of 1968 were all wedded to “scientific socialism” as a doctrine of emancipation and shared prosperity.

The Hungarian economist János Kornai struggled from the 1960s to the 1980s to reform the Hungarian economy. His history of reform communism characterises the reformers of the 1950s and 1960s (including himself) as naïve. It was ridiculous, he wrote in 1986, to think that the Soviet command system could be reformed in such a way as to ensure efficiency, growth, and equality all at once.

Diverse reformers helped expose communism’s unviability. But even their own thinking was warped by the distortions of socialist public discourse.

The Sudden Fall of East European Communism

Among the most stunning surprises of the twentieth century is the collapse of several communist regimes in 1989. Practically everyone was stunned by the communist collapse, including scholars, statesmen, futurologists, the CIA, the KGB, and other intelligence organisations, dissidents with great insight into their societies (such as Havel and Solzhenitsyn), and even Gorbachev, whose actions unintentionally triggered this momentous transformation.

A major trigger was the Soviet Union’s twin policies of perestroika (restructuring) and glasnost (openness). Perestroika amounted to an acknowledgment by the Soviet Communist Party that something was seriously wrong, that the Communist system was not about to overtake the West. Glasnost allowed Soviet citizens to participate in debates about the system, to propose changes, to speak the previously unspeakable, to admit that they had been thinking what had been considered unthinkable. Public discourse broadened, heightening disillusionment with communism and intensifying popular discontent. In the process, millions of East European citizens became increasingly willing to support an opposition movement publicly.

Few would step forward, though, so long as the opposition movement remained minuscule. Hence, no one, not even the East Europeans themselves, knew how ready Eastern Europe had become for regime changes.

In retrospect, a turning point was Gorbachev’s trip to Berlin on 7 October 1989 for celebrations marking the 40th anniversary of East Germany’s communist regime. Crowds filled the streets, chanting “Gorby! Gorby!” The East German police responded with restraint. TV scenes of the demonstrations and the police response signalled, on the one hand, that discontent was very broad and, on the other hand, that the regime was vulnerable. The result was an explosive growth in public opposition, with each demonstration sparking larger demonstrations. The fall of the Berlin Wall came on 9 November. Regimes considered unshakeable crumbled, in quick succession, under the weight of open opposition from the streets.

Preference falsification, for decades a source of communism’s durability, now made the anti-regime movement in public opinion feed on itself. As public opposition grew, East Europeans relatively satisfied with the status quo jumped on the public-preference cascade to secure a place in the emerging new order. Though the world was caught by surprise, the East European revolutions are now easily understood. In line with Kuran’s theory of unanticipated revolutions, abundant information now points to the existence, all along, of massive hidden opposition to the region’s parties.

The data that have surfaced include classified opinion surveys found in Communist Party archives. Like rulers of dictatorial regimes throughout history, Party leaders understood that their support was partly feigned. For self-preservation, they conducted anonymous opinion surveys whose results were treated as state secrets. The once-classified data show little variation until 1985. They indicate substantial belief in the efficiency of socialist institutions, but also far more doubt than public discourse suggested. After 1985, faith in communism plummeted and the perception that communism was unworkable spread.

A puzzle is why the East European leaders who had access to this information, and who thus knew that disillusionment with communism was growing, did not block the explosive rise in opposition. Through massive force early on, they probably could have prevented the cascades that were to unfold. Certain communist leaders thought that reforms would ultimately reverse the process. Others did not realise how quickly private discontent could produce self-reinforcing public opposition.

At the very end, fear simply changed sides: party functionaries who had helped foster repression came to fear ending up on the wrong side of history. In retrospect, it appears that their reluctance to respond forcefully at the outset enabled public opposition to grow explosively in country after country, through a domino effect. Each successful revolution lowered the perceived risks of joining the opposition in other countries.

Religious Preference Falsification

Preference falsification has played a role in the growth and survival of religions. It has contributed also to the shaping of religious institutions and beliefs. Some case studies are summarised here.

India’s Caste System

For several millennia, Indian society has been divided into ranked occupational units, or castes, whose membership is determined primarily by descent. In practice, the caste system became an integral part of Hinduism in most parts of the Indian subcontinent. Over the ages, this system survived anti-Hindu movements, foreign invasions, colonisation, even the challenges of aggressive conversions from Islam and Christianity. Although discrimination against the lower castes became illegal in post-colonial India, caste remains a powerful force in Indian life. Most marriages take place between members of the same caste.

Persistence of Caste System

The extraordinary durability of the caste system has puzzled social scientists, especially because, in most times and places, the system has perpetuated itself with little use of force. A related enigma has been the support given to the system by groups at the foot of the Hindu social hierarchy, namely the Untouchables (Dalits). Because of their deprivations, the Untouchables might be expected to have resisted the caste system en masse.

George Akerlof offers an explanation that hinges on two observations: (1) castes are economically interdependent, and (2) traditional Indian society penalises people who neglect or refuse to abide by caste codes. For example, if a firm hires an Untouchable to fill a post traditionally reserved for an upper caste, the firm loses customers, and the hired Untouchable endures social punishments. Because of these conditions, no individual can break away from the caste system unilaterally. To succeed, he must break away as part of a coalition. But free riding blocks the formation of viable coalitions. Because the rule breaker would suffer negative consequences immediately and without any guarantee of success, no firm and no Untouchable initiates a break.
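Akerlof’s free-rider logic can be illustrated with a toy payoff function (the numbers and parameter names here are hypothetical, chosen only to exhibit the structure of the trap): breaking caste codes pays off only if enough others break them at the same time, so from a starting point where no one defects, no single firm or Untouchable gains by moving first.

```python
def payoff_from_defecting(others_share, benefit=10.0, sanction=-5.0,
                          critical_mass=0.5):
    """Hypothetical payoff to one person who breaks caste codes, given the
    share of everyone else already breaking them.  Below the critical mass
    the defector bears social sanctions; above it, the coalition succeeds."""
    return benefit if others_share >= critical_mass else sanction

print(payoff_from_defecting(0.0))   # lone defector: -5.0 (punished)
print(payoff_from_defecting(0.6))   # large coalition: 10.0 (succeeds)
```

Since conforming yields a payoff of zero while unilateral defection yields the sanction, the no-defection status quo is self-enforcing even though everyone would be better off if a large coalition formed.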

In Private Truths, Public Lies, Kuran observes that Indians were penalised not just for actions against the caste system but also for expressions of opposition. The caste system discouraged inquiries into its rationale. It also discouraged open criticism of caste rules. By and large, the reservations of Indians remained suppressed. Preference falsification with respect to the system was common, as was knowledge falsification. Based on these findings and focusing on processes that shape public opinion and public knowledge, Kuran extends Akerlof’s theory.

Reticence to publicise preferences and knowledge honestly kept Indians in the dark, Kuran argues, about opportunities for forming anti-caste coalitions. It made them perceive the caste system as inescapable, even in contexts where, collectively, they had the power to change, even overthrow, the system. Hence, before the 1800s, negotiations for a more egalitarian social contract did not get off the ground. Reform-minded Indians could not find each other, let alone initiate discussions leading to reforms.

Caste Ideology

The caste system was legitimised through a tenet of Hinduism, the doctrine of karma. According to this doctrine, an individual’s behaviour in one life affects his social status in his later lives. If a person accepts his caste of birth and fulfils the tasks expected of him without making a fuss, he gets reincarnated into a higher caste. If instead he neglects the duties of the caste into which he was born or challenges the caste system, he gets demoted in his next life. Accordingly, the karma doctrine treats prevailing status differences as the fair and merited consequences of past conduct.

Many ethnographies find that, to close friends, low-ranked Indians will confess doubts about karma, if not outright lack of belief in it. But preference and knowledge falsification by Indians, even by substantial numbers, does not imply that the doctrine is merely a façade. Over countless generations, many Indians internalised the doctrine of karma. Belief in social mobility through reincarnation has been common; so has belief in ritual impurity, which Hinduism treats as both a source and a manifestation of social inferiority.

These concepts emerged so long ago that their origins are poorly understood. It is clear, though, that, once the caste system got established, the highest-status castes, the brahmins, had incentives to perpetuate the caste system by punishing Indians who misbehaved or expressed disapproval. As Kuran explains, preference and knowledge falsification made some Indians obey the system for fear of reprisals and others out of conviction. In either case, public discourse facilitated the acceptance of karma-based status differences. Most Indians remained ignorant of concepts critical to treating their conditions as unacceptable. Insofar as Indians genuinely believed in caste ideology, the caste system strengthened.

In the 19th century, the caste system came to be questioned widely in public. The key trigger was that growing numbers of Indians became acquainted with egalitarian European movements, such as democratisation, liberalism, and socialism. The ensuing Indian reform movement led, in the second half of the 20th century, to a system of caste-based education and job quotas meant to assist the most disadvantaged groups within Indian society, including the Untouchables.

Shii Islam’s Taqiyya Doctrine

After Islam’s Sunni-Shii schism in 661 CE, Sunni leaders took to persecuting Shiis living in their domains. Their campaign to extinguish Shiism included requiring suspected Shiis to insult the founders of Shiism. Refusal to comply could result in imprisonment, torture, even death. In response, Shii leaders adopted a doctrine that allowed individual Shiis to conceal their Shii beliefs in the face of danger, provided they met two criteria. First, the preference falsifiers would stay devoted, in their hearts, to Shii tenets; and second, they would intend, as soon as the danger passed, to return to practicing Shiism openly. This form of religious preference falsification was known as taqiyya.

Shii leaders gave taqiyya religious legitimacy through Quran verses that speak of God’s omniscience; God saw, on the one hand, people’s private and public preferences, and, on the other hand, the conditions making religious preference falsification a matter of survival. He would sympathise with taqiyya exercised for legitimate reasons by people who, at least privately, retained the correct faith.

Gradually, the taqiyya doctrine turned into a justification for Shii political passivity. Stretching the meaning of this doctrine, many Shiis living under an oppressive regime used it to rationalise inaction, even apathy. In the 20th century, growing numbers of Shii leaders took to telling their followers that taqiyya had been a key source of Shii political and economic weakness. The mastermind of Iran’s Revolution of 1979, Ayatollah Ruhollah Khomeini, opened his campaign to topple the Pahlavi Monarchy by proclaiming: “The time for taqiyya is over. Now is the time for us to stand up and proclaim the things we believe in.” The success of Khomeini’s campaign involved millions of Iranians joining street protests against the Pahlavi regime, at the risk of being caught, if not killed on the spot, by the regime’s widely feared security forces.

Once it had consolidated power, the Islamic Republic founded by Khomeini’s team did not institute religious freedoms. In forcing Iranians to live according to its specific interpretation of Shiism, it effectively induced a new form of taqiyya. Having the regime’s morality police enforce a conservative dress code for women (hijab) resulted in rampant religious preference falsification of a new kind. Against their will, millions of Iranian women started covering their hair and abiding by the regime’s modesty standards in myriad ways, simply to avoid punishment. Evidence lies in the commonness of mini headscarves that cover just enough hair to pass as veiled. These headscarves are known pejoratively as “bad hijab” or “slutty hijab.” They represent attempts by Iranian women to minimise the extent of their religious preference falsification.

Crypto-Protestantism in France, 1685-1787

Between the Edict of Nantes (1598) and the Edict of Fontainebleau (1685), Protestantism enjoyed toleration in France. The latter edict inaugurated a period when Protestantism was officially proscribed, except in Alsace and Lorraine. Many French Protestants emigrated to Switzerland, Great Britain, British North America, Prussia, and other predominantly Protestant territories. At least officially, the Protestants who stayed behind converted to Roman Catholicism. Of these converts, some practiced Catholicism in public even as they performed Protestant rites privately. Such crypto-Protestantism is a form of religious preference falsification.

During the Revocation period, appearing as a Catholic was a matter of survival. But the required public performances varied across groups and by location, as did crypto-Protestant practices. For example, the Jaucourt family, a noble crypto-Protestant house, discreetly fulfilled its religious commitments at the Protestant chapels of Scandinavian embassies. Catholic authorities, both state officials and Catholic clergy, looked the other way.

For most of the crypto-Protestant population, however, the medium for performing Protestant rites was the Désert Church. This was a clandestine network of congregations that operated throughout France with the help of lay crypto-Protestants and Reformed Protestant clerics. Initially, these clerics were domestically trained. Eventually, they were all foreign-trained.

Public performance of Catholic rites gave crypto-Protestants access to civil status deeds as well as official registrations of births, baptisms, and marriages. These incentives for religious preference falsification were not trivial. For example, legal marriage gave offspring legitimacy and inheritance rights. Désert marriages had no legal standing.

French Protestants regained the right to legal marriage as Protestants 102 years after the Edict of Fontainebleau, with the Edict of Tolerance (1787).

Covert Judaism and Islam during the Portuguese Inquisition

In 1496, King Manuel I of Portugal decreed the expulsion of Jews and free Moors from his kingdom and dominions, unless they converted to Christianity. Four decades later, during the reign of John III, Portugal’s Holy Office of the Inquisition was established. It began to persecute people accused of crypto-Judaism, crypto-Islam, or some other form of religious preference falsification.

The Portuguese Inquisition functioned as a persecutory organisation against covert practices of other religions, but also against heresies and deviations from sexual mores considered un-Christian, such as bigamy and sodomy. The Inquisition pursued these missions until at least the 1770s, when the government of the Marquis of Pombal repurposed this institution. The Portuguese Inquisition was terminated in 1821.

Iberian-Jewish (Sephardic) converts to Catholicism and their descendants were all known as “New Christians.” The Islamic converts and their descendants were known collectively as Mouriscos. On pain of social rejection and inquisitorial persecution, they were required to display, convincingly enough, their adherence to Roman Catholicism.

Blood purity (limpeza de sangue) statutes regulated access to Portugal’s public posts and honorific distinctions, denying a broad range of privileges to New Christians on account of their heredity. But New Christians could move upward by a combination of marrying “Old Christians” and having records of their roots altered. Diverse entities, including the Inquisition, were used to help New Christians whitewash their heritage through “blood purity” certificates, invariably in return for fees. This whitewashing process involved knowledge falsification by both sides.

The credibility of blood purity certificates depended on the issuing entity’s place in Portugal’s hierarchy. Accordingly, New Christians could keep rising in social status through blood purity certificates of increasing rigor; more rigorous certificates, obtained from higher-level investigations, could also be used to counter rumours of impure ancestry. Many Portuguese families with New Christian roots moved upwards in social status by creating availability cascades of positive blood purity certifications. Such families also bolstered the availability of information pointing to blood purity by placing relatives in the Roman Catholic clergy. These placements served as certifications in themselves, for the clergy was closed to Christians of “impure” ancestry (namely Jewish, Moorish, and Sub-Saharan African).

Gender Norms

According to a 2020 study by Leonardo Bursztyn, Alessandra González, and David Yanagizawa-Drott, the vast majority of young married men in Saudi Arabia privately support women working outside the home. At the same time, they substantially underestimate the degree to which other similar men support it. Once informed of how widespread this support is, they become increasingly willing to help their wives obtain jobs.

Ethnic Conflict

In “Ethnic norms and their transformation through reputational cascades,” Kuran applies the concept of preference falsification to ethnic conflict. The article focuses on ethnification, the process whereby ethnic origins, ethnic symbols, and ethnic ties gain salience and practical significance.

Ethnicity often serves as a source of identity without preventing cooperation, exchanges, socialising and intermarriage across ethnic boundaries. In such contexts, social forces may preserve that condition indefinitely. People who harbour ill-will toward other ethnic groups will keep their hatreds in check to avoid being punished for divisiveness. But if political and economic shocks weaken those forces, a process of ethnification may get under way. Specifically, people may start highlighting their ethnic particularities and discriminating against ethnic others. The emerging social pressures will then generate further ethnification through a self-reinforcing process, possibly leading to spiralling ethnic conflict.

An implication of Kuran’s analysis is that culturally, politically, economically, and demographically similar countries may exhibit very different levels of ethnic activity. Another is that ethnically based hatreds may constitute by-products of ethnification rather than its mainspring.

Yugoslav Civil War

Kuran uses the above argument to illuminate how the former Yugoslavia, once touted as the model of a civilised multi-ethnic nation, became ethnically segregated over a short period and dissolved into ethnically based enclaves at war with one another. Preference falsification increased the intensity of the Yugoslav Civil War, he suggests; also, it accelerated Yugoslavia’s break-up into ethnically based independent republics.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Preference_falsification >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Emotional Contagion

Introduction

Emotional contagion is a form of social contagion that involves the spontaneous spread of emotions and related behaviours. Such emotional convergence can happen from one person to another, or in a larger group. Emotions can be shared across individuals in many ways, both implicitly and explicitly. For instance, conscious reasoning, analysis, and imagination have all been found to contribute to the phenomenon. The behaviour has been found in humans, other primates, dogs, and chickens.

Emotional contagion is important to personal relationships because it fosters emotional synchrony between individuals. A broader definition of the phenomenon suggested by Schoenewolf is:

“a process in which a person or group influences the emotions or behavior of another person or group through the conscious or unconscious induction of emotion states and behavioral attitudes.”

One view developed by Elaine Hatfield, et al., is that this can be done through automatic mimicry and synchronisation of one’s expressions, vocalisations, postures, and movements with those of another person. When people unconsciously mirror their companions’ expressions of emotion, they come to feel reflections of those companions’ emotions.

In a 1993 paper, psychologists Elaine Hatfield, John Cacioppo, and Richard Rapson define emotional contagion as “the tendency to automatically mimic and synchronize expressions, vocalizations, postures, and movements with those of another person’s [sic] and, consequently, to converge emotionally”.

Hatfield, et al., theorise emotional contagion as a two-step process: First, we imitate people (e.g. if someone smiles at you, you smile back). Second, our own emotional experiences change based on the non-verbal signals of emotion that we give off. For example, smiling makes one feel happier, and frowning makes one feel worse. Mimicry seems to be one foundation of emotional movement between people.

Emotional contagion and empathy share similar characteristics, with the exception of the ability to differentiate between personal and pre-personal experiences, a process known as individuation. In The Art of Loving (1956), social psychologist Erich Fromm explores these differences, suggesting that autonomy is necessary for empathy, which is not found in emotional contagion.

Etymology

James Baldwin addressed “emotional contagion” in his 1897 work Social and Ethical Interpretations in Mental Development, though using the term “contagion of feeling”. Various 20th-century scholars discussed the phenomenon under the heading “social contagion”. The term “emotional contagion” first appeared in Arthur S. Reber’s 1985 The Penguin Dictionary of Psychology.

Influencing Factors

Several factors determine the rate and extent of emotional convergence in a group, including membership stability, mood-regulation norms, task interdependence, and social interdependence. Besides these event-structure properties, there are personal properties of the group’s members, such as openness to receive and transmit feelings, demographic characteristics, and dispositional affect that influence the intensity of emotional contagion.

Research

Research on emotional contagion has been conducted from a variety of perspectives, including organisational, social, familial, developmental, and neurological. While early research suggested that conscious reasoning, analysis, and imagination accounted for emotional contagion, some forms of more primitive emotional contagion are far more subtle, automatic, and universal.

Hatfield, Cacioppo, and Rapson’s 1993 research into emotional contagion reported that people’s conscious assessments of others’ feelings were heavily influenced by what others said. People’s own emotions, however, were more influenced by others’ nonverbal clues as to what they were really feeling. Recognizing emotions and acknowledging their origin can be one way to avoid emotional contagion. Transference of emotions has been studied in a variety of situations and settings, with social and physiological causes being two of the largest areas of research.

In addition to the social contexts discussed above, emotional contagion has been studied within organisations. Schrock, Leaf, and Rohr (2008) say organisations, like societies, have emotion cultures that consist of languages, rituals, and meaning systems, including rules about the feelings workers should, and should not, feel and display. They state that emotion culture is quite similar to “emotion climate”, otherwise known as morale, organisational morale, and corporate morale. Furthermore, Worline, Wrzesniewski, and Rafaeli (2002) mention that organisations have an overall “emotional capability”, while McColl-Kennedy and Smith (2006) examine “emotional contagion” in customer interactions. These terms arguably all attempt to describe a similar phenomenon; each term differs in subtle and somewhat indistinguishable ways.

Controversy

A controversial experiment demonstrating emotional contagion by using the social media platform Facebook was carried out in 2014 on 689,000 users by filtering positive or negative emotional content from their news feeds. The experiment sparked uproar among people who felt the study violated personal privacy. The 2014 publication of a research paper resulting from this experiment, “Experimental evidence of massive-scale emotional contagion through social networks”, a collaboration between Facebook and Cornell University, is described by Tony D. Sampson, Stephen Maddison, and Darren Ellis (2018) as a “disquieting disclosure that corporate social media and Cornell academics were so readily engaged with unethical experiments of this kind.” Tony D. Sampson et al. criticise the notion that “academic researchers can be insulated from ethical guidelines on the protection for human research subjects because they are working with a social media business that has ‘no obligation to conform’ to the principle of ‘obtaining informed consent and allowing participants to opt out’.” A subsequent study confirmed the presence of emotional contagion on Twitter without manipulating users’ timelines.

Beyond the ethical concerns, some scholars criticised the methods and reporting of the Facebook findings. John Grohol, writing for Psych Central, argued that despite its title and claims of “emotional contagion,” this study did not look at emotions at all. Instead, its authors used an application (called “Linguistic Inquiry and Word Count” or LIWC 2007) that simply counted positive and negative words in order to infer users’ sentiments. A shortcoming of the LIWC tool is that it does not understand negations. Hence, a post such as “I am not happy” would be scored as positive: “Since the LIWC 2007 ignores these subtle realities of informal human communication, so do the researchers.” Grohol concluded that given these subtleties, the effect size of the findings is little more than a “statistical blip”.

As Grohol put it: “Kramer et al. (2014) found a 0.07% — that’s not 7 percent, that’s 1/15th of one percent!! — decrease in negative words in people’s status updates when the number of negative posts on their Facebook news feed decreased. Do you know how many words you’d have to read or write before you’ve written one less negative word due to this effect? Probably thousands.”
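Grohol’s point about negation can be made concrete with a toy word-counting scorer in the spirit of LIWC. The word lists below are invented for illustration; this is not the actual LIWC lexicon or algorithm.

```python
POSITIVE_WORDS = {"happy", "good", "great"}
NEGATIVE_WORDS = {"sad", "bad", "angry"}

def naive_sentiment(text):
    """Count positive minus negative words, with no handling of
    negation -- the shortcoming Grohol attributes to LIWC 2007."""
    words = text.lower().split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return pos - neg

print(naive_sentiment("I am not happy"))   # → 1: positive, despite the negation
print(naive_sentiment("I am sad"))         # → -1
```

Because the scorer only sees the word “happy” and never the “not” in front of it, a plainly negative sentence is tallied as positive, which is exactly the kind of noise that makes tiny effect sizes hard to interpret.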

Types

Emotions can be shared and mimicked in many ways. Taken broadly, emotional contagion can be either: implicit, undertaken by the receiver through automatic or self-evaluating processes; or explicit, undertaken by the transmitter through a purposeful manipulation of emotional states, to achieve a desired result.

Implicit

Unlike cognitive contagion, emotional contagion is less conscious and more automatic. It relies mainly on non-verbal communication, although emotional contagion can and does occur via telecommunication. For example, people interacting through e-mails and chats are affected by the other’s emotions, without being able to perceive the non-verbal cues.

One view, proposed by Hatfield and colleagues, describes emotional contagion as a primitive, automatic, and unconscious behaviour that takes place through a series of steps. When a receiver is interacting with a sender, he perceives the emotional expressions of the sender. The receiver automatically mimics those emotional expressions. Through the process of afferent feedback, these new expressions are translated into feeling the emotions the sender feels, thus leading to emotional convergence.

Another view, emanating from social comparison theories, sees emotional contagion as demanding more cognitive effort and being more conscious. According to this view, people engage in social comparison to see if their emotional reaction is congruent with the persons around them. The recipient uses the emotion as a type of social information to understand how he or she should be feeling. People respond differently to positive and negative stimuli; negative events tend to elicit stronger and quicker emotional, behavioural, and cognitive responses than neutral or positive events. So unpleasant emotions are more likely to lead to mood contagion than are pleasant emotions. Another variable is the energy level at which the emotion is displayed. Higher energy draws more attention to it, so the same emotional valence (pleasant or unpleasant) expressed with high energy is likely to lead to more contagion than if expressed with low energy.

Explicit

Aside from the automatic catching of feelings described above, there are also times when others’ emotions are manipulated by a person or a group in order to achieve something. This can be a result of intentional affective influence by a leader or team member. If this person wants to convince the others of something, he may do so by sweeping them up in his enthusiasm; in such a case, his positive emotions are an act with the purpose of “contaminating” the others’ feelings. A different kind of intentional mood contagion would be, for instance, giving the group a reward or treat in order to improve their mood.

The discipline of organisational psychology researches aspects of emotional labour, including the need to manage emotions so that they are consistent with organisational or occupational display rules, regardless of whether they are discrepant with internal feelings. In work settings that require a certain display of emotions, one finds oneself obligated to display, and consequently feel, those emotions. If surface acting develops into deep acting, emotional contagion is the byproduct of intentional affective impression management.

In Workplaces and Organisations

Intra-Group

Many organisations and workplaces encourage teamwork, and studies conducted by organisational psychologists highlight the benefits of work teams. When people work together, emotions come into play and a group emotion is formed.

The group’s emotional state influences factors such as cohesiveness, morale, rapport, and the team’s performance. For this reason, organisations need to take into account the factors that shape the emotional state of the work-teams, in order to harness the beneficial sides and avoid the detrimental sides of the group’s emotion. Managers and team leaders should be cautious with their behaviour, since their emotional influence is greater than that of a “regular” team member: leaders are more emotionally “contagious” than others.

Employee/Customer

The interaction between service employees and customers affects both customers’ assessments of service quality and their relationship with the service provider. Positive affective displays in service interactions are positively associated with important customer outcomes, such as intention to return and to recommend the store to a friend. It is in the interest of organisations that their customers be happy, since a happy customer is a satisfied one. Research has shown that the emotional state of the customer is directly influenced by the emotions displayed by the employee/service provider via emotional contagion. But this influence depends on the authenticity of the employee’s emotional display: if the employee is only surface-acting, the contagion is weak and the beneficial effects do not occur.

Neurological Basis

Vittorio Gallese posits that mirror neurons are responsible for intentional attunement in relation to others. Gallese and colleagues at the University of Parma found a class of neurons in the premotor cortex that discharge either when macaque monkeys execute goal-related hand movements or when they watch others doing the same action. One class of these neurons fires with action execution and observation, and with sound production of the same action. Research in humans shows an activation of the premotor cortex and parietal area of the brain for action perception and execution.

Gallese says humans understand emotions through a simulated shared body state. The observers’ neural activation enables a direct experiential understanding. “Unmediated resonance” is a similar theory by Goldman and Sripada (2004). Empathy can be a product of the functional mechanism in our brain that creates embodied simulation. The other we see or hear becomes the “other self” in our minds. Other researchers have shown that observing someone else’s emotions recruits brain regions involved in:

  1. Experiencing similar emotions; and
  2. Producing similar facial expressions.

This combination indicates that the observer activates:

  1. A representation of the emotional feeling of the other individual which leads to emotional contagion; and
  2. A motor representation of the observed facial expression that could lead to facial mimicry.

In the brain, understanding and sharing other individuals’ emotions would thus be a combination of emotional contagion and facial mimicry. Importantly, more empathic individuals experience more brain activation in emotional regions while witnessing the emotions of other individuals.

Amygdala

The amygdala is one part of the brain that underlies empathy; it allows for emotional attunement and creates the pathway for emotional contagion. The basal areas, including the brain stem, form a tight loop of biological connectedness, re-creating in one person the physiological state of the other. Psychologist Howard Friedman thinks this is why some people can move and inspire others: a speaker’s facial expressions, voice, gestures, and body movements transmit emotions to an audience.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Emotional_contagion >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Social Emotions

Introduction

Social emotions are emotions that depend upon the thoughts, feelings or actions of other people, “as experienced, recalled, anticipated or imagined at first hand”. Examples are embarrassment, guilt, shame, jealousy, envy, coolness, elevation, empathy, and pride. In contrast, basic emotions such as happiness and sadness only require the awareness of one’s own physical state. Therefore, the development of social emotions is tightly linked with the development of social cognition, the ability to imagine other people’s mental states, which generally develops in adolescence. Studies have found that children as young as 2 to 3 years of age can express emotions resembling guilt and remorse. However, while five-year-old children are able to imagine situations in which basic emotions would be felt, the ability to describe situations in which social emotions might be experienced does not appear until seven years of age.

People may not only share emotions with others, but may also experience similar physiological arousal to others if they feel a sense of social connectedness to the other person. A laboratory-based study by Cwir, Car, Walton, and Spencer (2011) showed that, when a participant felt a sense of social connectedness to a stranger (research confederate), the participant experienced similar emotional states and physiological responses to that of the stranger while observing the stranger perform a stressful task.

Social emotions are sometimes called moral emotions, because they play an important role in morality and moral decision making. In neuroeconomics, the role social emotions play in game theory and economic decision-making is just starting to be investigated.

Behavioural Neuroscience

After functional imaging—functional magnetic resonance imaging (fMRI) in particular—became popular roughly a decade ago, researchers began to study economic decision-making with this new technology, which allows them to investigate, on a neurological level, the role emotions play in decision-making.

Developmental Picture

The ability to describe situations in which a social emotion will be experienced emerges at around age 7, and, by adolescence, the experience of social emotion permeates everyday social exchange. Studies using fMRI have found that different brain regions are involved in different age groups when performing social-cognitive and social-emotional tasks. While brain areas such as the medial prefrontal cortex (MPFC), superior temporal sulcus (STS), temporal poles (TP), and the precuneus bordering the posterior cingulate cortex are activated in both adults and adolescents when they reason about the intentionality of others, the medial PFC is more activated in adolescents and the right STS more in adults. Similar age effects were found with younger participants: when participants perform tasks that involve theory of mind, an increase in age correlates with an increase in activation in the dorsal part of the MPFC and a decrease in activity in the ventral part of the MPFC.

Studies that compare adults with adolescents in their processing of basic and social emotions also suggest developmental shifts in the brain areas involved. Compared with adolescents, adults show stronger activity in the left temporal pole when they read stories that elicit social emotions. The temporal poles are thought to store abstract social knowledge, which suggests that adults might use social semantic knowledge more often than adolescents when thinking about social-emotional situations.

Neuroeconomics

To investigate the function of social emotions in economic behaviour, researchers are interested in the differences in the brain regions involved when participants play with, or think that they are playing with, another person as opposed to a computer. An fMRI study found that, for participants who tend to cooperate in two-person “trust and reciprocity” games, believing that they were playing with another participant activated the prefrontal cortex, while believing that they were playing with a computer did not. This difference was not seen in players who tend not to cooperate. The authors interpret this difference as reflecting the theory of mind that co-operators employ to anticipate their opponents’ strategies, and it is an example of the way social decision making differs from other forms of decision making.

In behavioural economics, a heavy criticism is that people do not always act in a fully rational way, as many economic models assume. For example, in the ultimatum game, two players are asked to divide a certain amount of money, say x. One player, called the proposer, decides the ratio by which the money gets divided. The other player, called the responder, decides whether or not to accept this offer. If the responder accepts the offer of, say, y amount of money, then the proposer gets x − y and the responder gets y; but if the responder refuses the offer, both players get nothing. This game is widely studied in behavioural economics. According to the rational agent model, the most rational way for the proposer to act is to make y as small as possible, and the most rational way for the responder to act is to accept the offer, since a small amount of money is better than no money. However, experiments tend to find that proposers offer around 40% of x, and that offers below 20% get rejected by responders.

Using fMRI scans, researchers found that social emotions elicited by the offers may play a role in explaining this result. When offers are unfair as opposed to fair, three regions of the brain are more active: the dorsolateral prefrontal cortex (DLPFC), the anterior cingulate cortex (ACC), and the insula. The insula is an area active in registering body discomfort; it is activated when people feel, among other things, social exclusion. The authors interpret activity in the insula as the aversive reaction one feels when faced with unfairness, activity in the DLPFC as processing the future reward from keeping the money, and the ACC as an arbiter that weighs these two conflicting inputs to make a decision. Whether or not an offer gets rejected can be predicted (with a correlation of 0.45) from the level of the responder’s insula activity.
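The payoff structure of the ultimatum game described above can be sketched in a few lines. This is an illustrative sketch only: the 20% rejection threshold below is the empirical regularity reported in the experiments, used here as a hypothetical decision rule, not a model drawn from the literature.

```python
# Ultimatum game: a proposer offers y out of a pot x; the responder
# either accepts (payoffs split as proposed) or rejects (both get nothing).

def ultimatum_payoffs(x, y, accept):
    """Return (proposer, responder) payoffs for a pot of x and an offer of y."""
    if accept:
        return (x - y, y)   # responder accepts: split as proposed
    return (0, 0)           # responder rejects: both players get nothing

def typical_responder(x, y, threshold=0.2):
    """Hypothetical rule mirroring the empirical finding: reject offers
    below `threshold` of the pot, even though rejecting costs the responder."""
    return y >= threshold * x

# A near-fair offer of 40 out of 100 is accepted:
assert ultimatum_payoffs(100, 40, typical_responder(100, 40)) == (60, 40)
# A 10-out-of-100 offer is rejected, so both players get nothing,
# contrary to the rational agent model's prediction of acceptance:
assert ultimatum_payoffs(100, 10, typical_responder(100, 10)) == (0, 0)
```

The second assertion is the point of the behavioural critique: a responder following the empirically observed rule forgoes money to punish an unfair split.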

Neuroeconomics and social emotions are also tightly linked in the study of punishment. Research using PET scans has found that, when players punish other players, activity increases in the nucleus accumbens (part of the striatum), a region known for processing rewards derived from actions. This shows that we not only feel hurt when we become victims of unfairness, but also find it psychologically rewarding to punish the wrongdoer, even at a cost to our own utility.

Social or Moral Aspect

Some social emotions are also referred to as moral emotions because of the fundamental role they play in morality. For example, guilt is the discomfort and regret one feels over one’s wrongdoing. It is a social emotion because it requires the perception that another person is being hurt by the act, and it also has implications for morality: the guilty actor, in virtue of feeling distressed and guilty, accepts responsibility for the wrongdoing, which might prompt a desire to make amends or to punish the self.

Not all social emotions are moral emotions. Pride, for instance, is a social emotion which involves the perceived admiration of other people, but research on the role it plays in moral behaviours yields problematic results.

Empathic Response

Empathy is defined by Eisenberg and colleagues as an affective response that stems from the apprehension or comprehension of another’s emotional state or condition and is similar to what the other person is feeling or would be expected to feel. Guilt, which is a social emotion with strong moral implication, is also strongly correlated with empathic responsiveness; whereas shame, an emotion with less moral flavour, is negatively correlated with empathic responsiveness, when controlling for guilt.

Perceived controllability also plays an important role in modulating people’s socio-emotional reactions and empathic responses. For example, participants who are asked to evaluate other people’s academic performance are more likely to assign punishments when low performance is attributed to low effort rather than to low ability. Stigmas also elicit more empathic response when they are perceived as uncontrollable (e.g. having a biological origin, such as a disease) rather than controllable (e.g. having a behavioural origin, such as obesity).

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Social_emotions >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Social Stigma

Introduction

Stigma, originally referring to the visible marking of people considered inferior, has evolved in modern society into a social concept that applies to different groups or individuals based on certain characteristics such as socioeconomic status, culture, gender, race, religion or health status. Social stigma can take different forms and depends on the specific time and place in which it arises. Once a person is stigmatised, they are often associated with stereotypes that lead to discrimination, marginalisation, and psychological problems.

This process of stigmatisation not only affects the social status and behaviour of stigmatised persons, but also shapes their own self-perception, which can lead to psychological problems such as depression and low self-esteem. Stigmatized people are often aware that they are perceived and treated differently, which can start at an early age. Research shows that children are aware of cultural stereotypes at an early age, which affects their perception of their own identity and their interactions with the world around them.

Description

Stigma (plural stigmas or stigmata) is a Greek word that in its origins referred to a type of marking or the tattoo that was cut or burned into the skin of people with criminal records, slaves, or those seen as traitors in order to visibly identify them as supposedly blemished or morally polluted persons. These individuals were to be avoided particularly in public places.

Social stigmas can occur in many different forms. The most common deal with culture, gender, race, religion, illness and disease. Individuals who are stigmatized usually feel different and devalued by others.

Stigma may also be described as a label that associates a person with a set of unwanted characteristics that form a stereotype. Once the label is affixed and people identify and label one’s differences, others will assume that is just how things are, and the person will remain stigmatised until the stigmatising attribute is undetectable. A considerable amount of generalisation is required to create groups, meaning that people will put someone in a general group regardless of how well the person actually fits into it. However, the attributes that society selects differ according to time and place; what is considered out of place in one society could be the norm in another. When society categorises individuals into certain groups, the labelled person is subjected to status loss and discrimination. Society will start to form expectations about those groups once the cultural stereotype is secured.

Stigma may affect the behaviour of those who are stigmatised. Those who are stereotyped often start to act in the ways their stigmatisers expect of them. Stigma not only changes their behaviour but also shapes their emotions and beliefs. Members of stigmatised social groups often face prejudice that causes depression (i.e. deprejudice). These stigmas threaten a person’s social identity and can lead to outcomes such as low self-esteem. Because of this, identity threat theories have become highly researched; they can go hand-in-hand with labelling theory.

Members of stigmatised groups start to become aware that they are not being treated the same way and know they are likely being discriminated against. Studies have shown that “by 10 years of age, most children are aware of cultural stereotypes of different groups in society, and children who are members of stigmatized groups are aware of cultural types at an even younger age.”

Main Theories and Contributions

Émile Durkheim

French sociologist Émile Durkheim was the first to explore stigma as a social phenomenon in 1895. He wrote:

Imagine a society of saints, a perfect cloister of exemplary individuals. Crimes or deviance, properly so-called, will there be unknown; but faults, which appear venial to the layman, will there create the same scandal that the ordinary offense does in ordinary consciousnesses. If then, this society has the power to judge and punish, it will define these acts as criminal (or deviant) and will treat them as such.

Erving Goffman

Erving Goffman described stigma as a phenomenon whereby an individual with an attribute which is deeply discredited by their society is rejected as a result of the attribute. Goffman saw stigma as a process by which the reaction of others spoils normal identity.

More specifically, he explained that what constituted this attribute would change over time. “It should be seen that a language of relationships, not attributes, is really needed. An attribute that stigmatizes one type of possessor can confirm the usualness of another, and therefore is neither credible nor discreditable as a thing in itself.”

In Goffman’s theory of social stigma, a stigma is an attribute, behavior, or reputation which is socially discrediting in a particular way: it causes an individual to be mentally classified by others in an undesirable, rejected stereotype rather than in an accepted, normal one. Goffman defined stigma as a special kind of gap between virtual social identity and actual social identity:

While a stranger is present before us, evidence can arise of his possessing an attribute that makes him different from others in the category of persons available for him to be, and of a less desirable kind—in the extreme, a person who is quite thoroughly bad, or dangerous, or weak. He is thus reduced in our minds from a whole and usual person to a tainted discounted one. Such an attribute is a stigma, especially when its discrediting effect is very extensive […] It constitutes a special discrepancy between virtual and actual social identity.

The Stigmatised, The Normal, and The Wise

Goffman divides the individual’s relation to a stigma into three categories:

  • The stigmatised being those who bear the stigma;
  • The normals being those who do not bear the stigma; and
  • The wise being those among the normals who are accepted by the stigmatised as understanding and accepting of their condition (borrowing the term from the homosexual community).

The wise normals are not merely those who are in some sense accepting of the stigma; they are, rather, “those whose special situation has made them intimately privy to the secret life of the stigmatized individual and sympathetic with it, and who find themselves accorded a measure of acceptance, a measure of courtesy membership in the clan.” That is, they are accepted by the stigmatised as “honorary members” of the stigmatised group. “Wise persons are the marginal men before whom the individual with a fault need feel no shame nor exert self-control, knowing that in spite of his failing he will be seen as an ordinary other.” Goffman notes that the wise may in certain social situations also bear the stigma with respect to other normals: that is, they may also be stigmatised for being wise. An example is a parent of a homosexual; another is a white woman who is seen socialising with a black man (assuming social milieus in which homosexuals and dark-skinned people are stigmatised).

A 2012 study showed empirical support for the existence of the own, the wise, and normals as separate groups; but the wise appeared in two forms: active wise and passive wise. The active wise encouraged challenging stigmatization and educating stigmatisers, but the passive wise did not.

Ethical Considerations

Goffman emphasizes that the stigma relationship is one between an individual and a social setting with a given set of expectations; thus, everyone at different times will play both roles of stigmatised and stigmatiser (or, as he puts it, “normal”). Goffman gives the example that “some jobs in America cause holders without the expected college education to conceal this fact; other jobs, however, can lead to the few of their holders who have a higher education to keep this a secret, lest they are marked as failures and outsiders. Similarly, a middle-class boy may feel no compunction in being seen going to the library; a professional criminal, however, writes [about keeping his library visits secret].” He also gives the example of blacks being stigmatised among whites, and whites being stigmatised among blacks.

Individuals actively cope with stigma in ways that vary across stigmatised groups, across individuals within stigmatised groups, and within individuals across time and situations.

The Stigmatised

The stigmatised are ostracised, devalued, scorned, shunned and ignored. They experience discrimination in the realms of employment and housing. Perceived prejudice and discrimination is also associated with negative physical and mental health outcomes. Young people who experience stigma associated with mental health difficulties may face negative reactions from their peer group. Those who perceive themselves to be members of a stigmatised group, whether it is obvious to those around them or not, often experience psychological distress and many view themselves contemptuously.

Although the experience of being stigmatised may take a toll on self-esteem, academic achievement, and other outcomes, many people with stigmatised attributes have high self-esteem, perform at high levels, are happy and appear to be quite resilient to their negative experiences.

There are also “positive stigmas”: it is possible to be too rich, or too smart. This is noted by Goffman (1963:141) in his discussion of leaders, who are given license to deviate from some behavioural norms because they have contributed far above the expectations of the group; such deviance can itself result in social stigma.

The Stigmatiser

From the perspective of the stigmatiser, stigmatisation involves threat, aversion and sometimes the depersonalisation of others into stereotypic caricatures. Stigmatizing others can serve several functions for an individual, including self-esteem enhancement, control enhancement, and anxiety buffering, through downward-comparison—comparing oneself to less fortunate others can increase one’s own subjective sense of well-being and therefore boost one’s self-esteem.

21st-century social psychologists consider stigmatising and stereotyping to be a normal consequence of people’s cognitive abilities and limitations, and of the social information and experiences to which they are exposed.

Current views of stigma, from the perspectives of both the stigmatiser and the stigmatised person, consider the process of stigma to be highly situationally specific, dynamic, complex and nonpathological.

Gerhard Falk

German-born sociologist and historian Gerhard Falk wrote:

All societies will always stigmatize some conditions and some behaviors because doing so provides for group solidarity by delineating “outsiders” from “insiders”.

Falk describes stigma based on two categories: existential stigma and achieved stigma. He defines existential stigma as “stigma deriving from a condition which the target of the stigma either did not cause or over which he has little control.” He defines achieved stigma as “stigma that is earned because of conduct and/or because they contributed heavily to attaining the stigma in question.”

Falk concludes that “we and all societies will always stigmatize some condition and some behavior because doing so provides for group solidarity by delineating ‘outsiders’ from ‘insiders’”. Stigmatisation, at its essence, is a challenge to one’s humanity – for both the stigmatised person and the stigmatiser. The majority of stigma researchers have found that the process of stigmatisation has a long history and is cross-culturally ubiquitous.

Link and Phelan Stigmatisation Model

Bruce Link and Jo Phelan propose that stigma exists when four specific components converge:

  1. Individuals differentiate and label human variations.
  2. Prevailing cultural beliefs tie those labeled to adverse attributes.
  3. Labelled individuals are placed in distinguished groups that serve to establish a sense of disconnection between “us” and “them”.
  4. Labelled individuals experience “status loss and discrimination” that leads to unequal circumstances.

In this model stigmatisation is also contingent on “access to social, economic, and political power that allows the identification of differences, construction of stereotypes, the separation of labeled persons into distinct groups, and the full execution of disapproval, rejection, exclusion, and discrimination.” Subsequently, in this model, the term stigma is applied when labelling, stereotyping, disconnection, status loss, and discrimination all exist within a power situation that facilitates stigma to occur.

Differentiation and Labelling

Identifying which human differences are salient, and therefore worthy of labelling, is a social process. There are two primary factors to examine when considering the extent to which this process is a social one. The first issue is that significant oversimplification is needed to create groups. The broad groups of black and white, homosexual and heterosexual, the sane and the mentally ill, and young and old are all examples of this. Secondly, the differences that are socially judged to be relevant differ vastly according to time and place. An example of this is the emphasis that was put on the size of the forehead and faces of individuals in the late 19th century, which was believed to be a measure of a person’s criminal nature.

Linking to Stereotypes

The second component of this model centres on the linking of labelled differences with stereotypes. Goffman’s 1963 work made this aspect of stigma prominent and it has remained so ever since. This process of applying certain stereotypes to differentiated groups of individuals has attracted a large amount of attention and research in recent decades.

Us and Them

Thirdly, linking negative attributes to groups facilitates separation into “us” and “them”. Seeing the labelled group as fundamentally different causes stereotyping with little hesitation. “Us” and “them” implies that the labelled group is slightly less human in nature and at the extreme not human at all.

Disadvantage

The fourth component of stigmatisation in this model includes “status loss and discrimination”. Many definitions of stigma do not include this aspect; however, these authors believe that this loss occurs inherently as individuals are “labeled, set apart, and linked to undesirable characteristics.” The members of the labelled groups are subsequently disadvantaged in the most common life chances, including income, education, mental well-being, housing status, health, and medical treatment. Thus, stigmatisation by the majorities, the powerful, or the “superior” leads to the Othering of the minorities, the powerless, and the “inferior”, whereby the stigmatised individuals become disadvantaged due to the ideology created by “the self,” which is the opposing force to “the Other.” As a result, the others become socially excluded, and those in power justify the exclusion based on the original characteristics that led to the stigma.

Necessity of Power

The authors also emphasize the role of power (social, economic, and political power) in stigmatisation. While the use of power is clear in some situations, in others it can become masked as the power differences are less stark. An extreme example of a situation in which the power role was explicitly clear was the treatment of Jewish people by the Nazis. On the other hand, an example of a situation in which individuals of a stigmatised group have “stigma-related processes” occurring would be the inmates of a prison. It is imaginable that each of the steps described above would occur regarding the inmates’ thoughts about the guards. However, this situation cannot involve true stigmatisation, according to this model, because the prisoners do not have the economic, political, or social power to act on these thoughts with any serious discriminatory consequences.

“Stigma Allure” and Authenticity

Sociologist Matthew W. Hughey explains that prior research on stigma has emphasized individual and group attempts to reduce stigma by “passing as normal”, by shunning the stigmatised, or through selective disclosure of stigmatized attributes. Yet, some actors may embrace particular markings of stigma (e.g.: social markings like dishonour or select physical dysfunctions and abnormalities) as signs of moral commitment and/or cultural and political authenticity. Hence, Hughey argues that some actors do not simply desire to “pass into normal” but may actively pursue a stigmatised identity formation process in order to experience themselves as causal agents in their social environment. Hughey calls this phenomenon “stigma allure”.

The “Six dimensions of Stigma”

While often incorrectly attributed to Goffman, the “six dimensions of stigma” were not his invention. They were developed to augment Goffman’s two levels – the discredited and the discreditable. Goffman considered individuals whose stigmatising attributes are not immediately evident. In that case, the individual can encounter two distinct social atmospheres. In the first, he is discreditable—his stigma has yet to be revealed but may be revealed either intentionally by him (in which case he will have some control over how) or by some factor he cannot control. Of course, it also might be successfully concealed; Goffman called this passing. In this situation, the analysis of stigma is concerned only with the behaviours adopted by the stigmatised individual to manage his identity: the concealing and revealing of information. In the second atmosphere, he is discredited—his stigma has been revealed and thus it affects not only his behaviour but the behaviour of others. Jones et al. (1984) added the “six dimensions” and correlated them to Goffman’s two types of stigma, discredited and discreditable.

There are six dimensions that match these two types of stigma:

  1. Concealable – the extent to which others can see the stigma
  2. Course of the mark – whether the stigma’s prominence increases, decreases, or disappears
  3. Disruptiveness – the degree to which the stigma and/or others’ reaction to it impedes social interactions
  4. Aesthetics – the subset of others’ reactions to the stigma comprising reactions that are positive/approving or negative/disapproving but represent estimations of qualities other than the stigmatised person’s inherent worth or dignity
  5. Origin – whether others think the stigma is present at birth, accidental, or deliberate
  6. Peril – the danger that others perceive (whether accurately or inaccurately) the stigma to pose to them

Types

In Unravelling the contexts of stigma, authors Campbell and Deacon describe Goffman’s universal and historical forms of stigma as the following.

  • Overt or external deformities – such as leprosy, clubfoot, cleft lip or palate and muscular dystrophy.
  • Known deviations in personal traits – being perceived, rightly or wrongly, as weak-willed, domineering, or having unnatural passions, treacherous or rigid beliefs, or being dishonest, e.g. mental disorders, imprisonment, addiction, homosexuality, unemployment, suicide attempts, and radical political behaviour.
  • Tribal stigma – affiliation with a specific nationality, religion, or race that constitutes a deviation from the normative, e.g. being African American, or being of Arab descent in the United States after the 9/11 attacks.

Deviance

Stigma occurs when an individual is identified as deviant, linked with negative stereotypes that engender prejudiced attitudes, which are acted upon in discriminatory behaviour. Goffman illuminated how stigmatised people manage their “spoiled identity” (meaning the stigma disqualifies the stigmatised individual from full social acceptance) before audiences of normals. He focused on stigma not as a fixed or inherent attribute of a person, but rather as the experience and meaning of difference.

Gerhard Falk expounds upon Goffman’s work by redefining deviant as “others who deviate from the expectations of a group” and by categorising deviance into two types:

  • Societal deviance refers to a condition widely perceived, in advance and in general, as deviant and hence stigmatised. “Homosexuality is, therefore, an example of societal deviance because there is such a high degree of consensus to the effect that homosexuality is different, and a violation of norms or social expectation”.
  • Situational deviance refers to a deviant act that is labelled as deviant in a specific situation, and may not be labelled deviant by society. Similarly, a socially deviant action might not be considered deviant in specific situations. “A robber or other street criminal is an excellent example. It is the crime which leads to the stigma and stigmatization of the person so affected.”

The physically disabled, mentally ill, homosexuals, and a host of others who are labelled deviant because they deviate from the expectations of a group, are subject to stigmatisation – the social rejection of numerous individuals, and often entire groups of people who have been labelled deviant.

Stigma Communication

Communication is involved in creating, maintaining, and diffusing stigmas, and enacting stigmatisation. The model of stigma communication explains how and why particular content choices (marks, labels, peril, and responsibility) can create stigmas and encourage their diffusion. A recent experiment using health alerts tested the model of stigma communication, finding that content choices indeed predicted stigma beliefs, intentions to further diffuse these messages, and agreement with regulating infected persons’ behaviours.

More recently, scholars have highlighted the role of social media channels, such as Facebook and Instagram, in stigma communication. These platforms can serve as safe spaces for stigmatised individuals to express themselves more freely. However, social media can also reinforce and amplify stigmatisation, as stigmatised attributes are amplified and remain virtually available to anyone indefinitely.

Challenging

Stigma, though powerful and enduring, is not inevitable, and can be challenged. There are two important aspects to challenging stigma: challenging the stigmatisation on the part of stigmatisers and challenging the internalized stigma of the stigmatised. To challenge stigmatisation, Campbell et al. 2005 summarise three main approaches.

  1. There are efforts to educate individuals about non-stigmatising facts and why they should not stigmatise.
  2. There are efforts to legislate against discrimination.
  3. There are efforts to mobilise the participation of community members in anti-stigma efforts, to maximise the likelihood that the anti-stigma messages have relevance and effectiveness, according to local contexts.

In relation to challenging the internalised stigma of the stigmatised, Paulo Freire’s theory of critical consciousness is particularly suitable. Cornish provides an example of how sex workers in Sonagachi, a red light district in India, have effectively challenged internalised stigma by establishing that they are respectable women, who admirably take care of their families, and who deserve rights like any other worker. This study argues that it is not only the force of the rational argument that makes the challenge to the stigma successful, but concrete evidence that sex workers can achieve valued aims, and are respected by others.

Stigmatised groups often harbour cultural tools to respond to stigma and to create a positive self-perception among their members. For example, advertising professionals have been shown to suffer from negative portrayal and low approval rates. However, the advertising industry collectively maintains narratives describing how advertising is a positive and socially valuable endeavour, and advertising professionals draw on these narratives to respond to stigma.

Another effort to mobilise communities exists in the gaming community through organisations like:

  • Take This – which provides AFK rooms at gaming conventions and runs a Streaming Ambassador Programme that reaches more than 135,000 viewers each week with positive messages about mental health, and
  • NoStigmas – whose mission “is to ensure that no one faces mental health challenges alone”, which envisions “a world without shame or discrimination related to mental health, brain disease, behavioral disorders, trauma, suicide and addiction”, and which offers workplaces a NoStigmas Ally course and individual certifications.

Organisational Stigma

In 2008, an article by Hudson coined the term “organisational stigma”, which was then further developed in a theory-building article by Devers and colleagues. This literature brought the concept of stigma to the organisational level, considering how organisations might be perceived as deeply flawed and cast away by audiences in the same way individuals are. Hudson differentiated core-stigma (a stigma related to the very nature of the organisation) from event-stigma (an isolated occurrence which fades away with time). A large literature has debated how organisational stigma relates to other constructs in the literature on social evaluations. A 2020 book by Roulet reviews this literature and disentangles the different concepts – in particular differentiating stigma, dirty work, and scandals – and explores their positive implications.

Current Research

Research undertaken to determine the effects of social stigma primarily focuses on disease-associated stigmas. Disabilities, psychiatric disorders, and sexually transmitted diseases are among the conditions currently scrutinised by researchers. In studies of such conditions, both positive and negative effects of social stigma have been discovered.

Stigma in Healthcare Settings

Recent research suggests that addressing perceived and enacted stigma in clinical settings is critical to ensuring delivery of high-quality patient-centred care. Specifically, perceived stigma by patients was associated with longer periods of poor physical or mental health. Additionally, perceived stigma in healthcare settings was associated with higher odds of reporting a depressive disorder. Among other findings, individuals who were married, younger, had higher income, had college degrees, and were employed reported significantly fewer poor physical and mental health days and had lower odds of self-reported depressive disorder. A complementary study conducted in New York City (as opposed to nationwide), found similar outcomes. The researchers’ objectives were to assess rates of perceived stigma in clinical settings reported by racially diverse New York City residents and to examine if this perceived stigma was associated with poorer physical and mental health outcomes. They found that perceived stigma was associated with poorer healthcare access, depression, diabetes, and poor overall general health.

Research on Self-Esteem

Members of stigmatised groups may have lower self-esteem than those of non-stigmatised groups. However, overall self-esteem cannot be compared across races with a simple test: researchers would have to take into account whether people are optimistic or pessimistic, whether they are male or female, and what kind of place they grew up in. Over the last two decades, many studies have reported that African Americans show higher global self-esteem than whites even though, as a group, African Americans tend to receive poorer outcomes in many areas of life and experience significant discrimination and stigma.

Mental Disorder

Empirical research on the stigma associated with mental disorders points to a surprising attitude among the general public. Those who were told that mental disorders have a genetic basis were more prone to increase their social distance from the mentally ill, and to assume that the ill were dangerous individuals, in contrast with those members of the general public who were told that the illnesses could be explained by social and environmental factors. Furthermore, those informed of the genetic basis were also more likely to stigmatise the entire family of the ill. Although the specific social categories that become stigmatised can vary over time and place, the three basic forms of stigma (physical deformity, poor personal traits, and tribal outgroup status) are found in most cultures and eras, leading some researchers to hypothesise that the tendency to stigmatise may have evolutionary roots.

The impact of the stigma is significant, leading many individuals to not seek out treatment. For example, evidence from a refugee camp in Jordan suggests that providing mental health care comes with a dilemma: between the clinical desire to make mental health issues visible and actionable through datafication and the need to keep mental health issues hidden and out of the view of the community to avoid stigma. That is, in spite of their suffering the refugees were hesitant to receive mental health care as they worried about stigma.

Currently, several researchers believe that mental disorders are caused by a chemical imbalance in the brain. Therefore, this biological rationale suggests that individuals struggling with a mental illness do not have control over the origin of the disorder. Much like cancer or another type of physical disorder, persons suffering from mental disorders should be supported and encouraged to seek help. The Disability Rights Movement recognises that while there is considerable stigma towards people with physical disabilities, the negative social stigma surrounding mental illness is significantly worse, with those suffering being perceived to have control of their disabilities and being responsible for causing them. “Furthermore, research respondents are less likely to pity persons with mental illness, instead of reacting to the psychiatric disability with anger and believing that help is not deserved.” Although there are effective mental health interventions available across the globe, many persons with mental illnesses do not seek out the help that they need. Only 59.6% of individuals with a mental illness, including conditions such as depression, anxiety, schizophrenia, and bipolar disorder, reported receiving treatment in 2011.

Reducing the negative stigma surrounding mental disorders may increase the probability of affected individuals seeking professional help from a psychiatrist or a non-psychiatric physician. How particular mental disorders are represented in the media can vary, as can the stigma associated with each. On the social media platform YouTube, depression is commonly presented as a condition that is caused by biological or environmental factors, is more chronic than short-lived, and is different from sadness, all of which may contribute to how people think about depression.

Causes

Arikan found that a stigmatising attitude to psychiatric patients is associated with narcissistic personality traits.

In Taiwan, strengthening the psychiatric rehabilitation system has been one of the primary goals of the Department of Health since 1985. This endeavour has not been successful. It was hypothesised that one barrier was social stigma towards the mentally ill. Accordingly, a study was conducted to explore the attitudes of the general population towards patients with mental disorders. A survey was administered to 1,203 subjects nationwide. The results revealed that the general population held high levels of benevolence, tolerance of rehabilitation in the community, and non-social restrictiveness; essentially, benevolent attitudes favoured acceptance of rehabilitation in the community. It could then be inferred that residents of Taiwan hold the mentally ill in high regard, and that the progress of psychiatric rehabilitation may be hindered by factors other than social stigma.

Artists

In the music industry, specifically in the genre of hip-hop or rap, those who speak out about mental illness are heavily criticised. However, according to an article in The Huffington Post, there is a significant increase in rappers who are breaking their silence on depression and anxiety.

Addiction and Substance Use Disorders

Throughout history, addiction has largely been seen as a moral failing or character flaw, as opposed to an issue of public health. Substance use has been found to be more stigmatised than smoking, obesity, and mental illness. Research has shown stigma to be a barrier to treatment-seeking behaviours among individuals with addiction, creating a “treatment gap”. A systematic review of all epidemiological studies on treatment rates of people with alcohol use disorders found that over 80% had not accessed any treatment for their disorder. The study also found that the treatment gap was larger in low and lower-middle-income countries.

Research shows that the words used to talk about addiction can contribute to stigmatisation, and that the commonly used terms “abuse” and “abuser” actually increase stigma. Behavioural addictions (e.g. gambling, sex) are found to be more likely to be attributed to character flaws than substance-use addictions. Stigma is reduced when substance use disorders are portrayed as treatable conditions. Acceptance and Commitment Therapy has been used effectively to help people reduce the shame associated with cultural stigma around substance use treatment.

The use of the drug methamphetamine has been strongly stigmatised. An Australian national population study has shown that the proportion of Australians who nominated methamphetamine as a “drug problem” increased between 2001 and 2019. The epidemiological study provided evidence that levels of under-reporting increased over the period, which coincided with the deployment of public health campaigns on the dangers of ice that contained stigmatising elements portraying persons who used the drug in a negative way. The level of under-reporting of methamphetamine use is strongly associated with increasingly negative attitudes towards its use over the same period.

Poverty

Recipients of public assistance programs are often scorned as unwilling to work. The intensity of poverty stigma is positively correlated with inequality: as inequality increases, so does the societal propensity to stigmatise. This is, in part, a result of societal norms of reciprocity – the expectation that people earn what they receive rather than receiving assistance in the form of what people tend to view as a gift.

Poverty is often perceived as a result of failures and poor choices rather than the result of socioeconomic structures that suppress individual abilities. Disdain for the impoverished can be traced back to its roots in Anglo-American culture, where poor people have been blamed and ostracised for their misfortune for hundreds of years. The concept of deviance is at the bedrock of stigma towards the poor. Deviants are people who break important norms of society that everyone shares. In the case of poverty, it is breaking the norm of reciprocity that paves the path for stigmatisation.

Public Assistance

Social stigma is prevalent towards recipients of public assistance programs. This includes programmes frequently utilised by families struggling with poverty, such as Head Start and AFDC (Aid to Families with Dependent Children). The value of self-reliance is often at the centre of feelings of shame; the less people value self-reliance, the less stigma affects them psychologically. Stigma towards welfare recipients has been shown to increase passivity and dependency in poor people and has further solidified their status and feelings of inferiority.

Caseworkers frequently treat recipients of welfare disrespectfully and make assumptions about deviant behaviour and reluctance to work. Many single mothers cited stigma as the primary reason they wanted to exit welfare as quickly as possible. They often feel the need to conceal food stamps to escape the judgement associated with welfare programs. Stigma is a major factor contributing to the duration and breadth of poverty in developed societies, and it largely affects single mothers. Recipients of public assistance are viewed as objects of the community rather than members, allowing them to be perceived as enemies of the community; this is how stigma enters collective thought. Amongst single mothers in poverty, lack of health care benefits is one of the greatest challenges in terms of exiting poverty. Traditional values of self-reliance increase feelings of shame amongst welfare recipients, making them more susceptible to being stigmatised.

Epilepsy

Hong Kong

Epilepsy, a common neurological disorder characterised by recurring seizures, is associated with various social stigmas. Chung-yan Guardian Fong and Anchor Hung conducted a study in Hong Kong which documented public attitudes towards individuals with epilepsy. Of the 1,128 subjects interviewed, only 72.5% considered epilepsy to be acceptable; 11.2% would not let their children play with others with epilepsy; 32.2% would not allow their children to marry persons with epilepsy; and 22.5% of employers would terminate an employment contract after an epileptic seizure occurred in an employee with unreported epilepsy. Suggestions were made that more effort be made to improve public awareness, attitudes, and understanding of epilepsy through school education and epilepsy-related organisations.

Media

In the early 21st century, technology has a large impact on the lives of people in multiple countries and has shaped social norms. Many people own a television, computer, and smartphone. The media can be helpful in keeping people up to date on news and world issues, and it is very influential; because of this influence, the portrayal of minority groups can affect the attitudes of other groups toward them. Much media coverage concerns other parts of the world, and a lot of this coverage has to do with war and conflict, which people may associate with anyone from the country involved. There is a tendency to focus more on the positive behaviour of one’s own group and the negative behaviours of other groups. This promotes negative thoughts about people belonging to those other groups, reinforcing stereotypical beliefs.

“Viewers seem to react to violence with emotions such as anger and contempt. They are concerned about the integrity of the social order and show disapproval of others. Emotions such as sadness and fear are shown much more rarely.” (Unz, Schwab & Winterhoff-Spurk, 2008, p. 141).

In a study testing the effects of stereotypical advertisements on students, 75 high school students viewed magazine advertisements with stereotypical female images such as a woman working on a holiday dinner, while 50 others viewed non-stereotypical images such as a woman working in a law office. These groups then responded to statements about women in a “neutral” photograph. In this photo, a woman was shown in a casual outfit not doing any obvious task. The students that saw the stereotypical images tended to answer the questionnaires with more stereotypical responses in 6 of the 12 questionnaire statements. This suggests that even brief exposure to stereotypical ads reinforces stereotypes. (Lafky, Duffy, Steinmaus & Berkowitz, 1996).

Education and Culture

The aforementioned stigmas (associated with their respective diseases) illustrate the effects that these stereotypes have on individuals. Whether the effects are negative or positive in nature, “labelling” people causes a significant change in how individuals with the disease are perceived. Perhaps a mutual understanding of stigma, achieved through education, could eliminate social stigma entirely.

Laurence J. Coleman first adapted Erving Goffman’s (1963) social stigma theory to gifted children, providing a rationale for why children may hide their abilities and present alternate identities to their peers. The stigma of giftedness theory was further elaborated by Laurence J. Coleman and Tracy L. Cross in their book entitled, Being Gifted in School, which is a widely cited reference in the field of gifted education. In the chapter on Coping with Giftedness, the authors expanded on the theory first presented in a 1988 article. According to Google Scholar, this article has been cited over 300 times in the academic literature (as of 2022).

Coleman and Cross were the first to identify intellectual giftedness as a stigmatising condition and they created a model based on Goffman’s (1963) work, research with gifted students, and a book that was written and edited by 20 teenage, gifted individuals. Being gifted sets students apart from their peers and this difference interferes with full social acceptance. Varying expectations that exist in the different social contexts which children must navigate, and the value judgements that may be assigned to the child result in the child’s use of social coping strategies to manage his or her identity. Unlike other stigmatising conditions, giftedness is unique because it can lead to praise or ridicule depending on the audience and circumstances.

Gifted children learn when it is safe to display their giftedness and when they should hide it to better fit in with a group. These observations led to the development of the Information Management Model, which describes the process by which children decide to employ coping strategies to manage their identities. In situations where the child feels different, she or he may decide to manage the information that others know about him or her. Coping strategies include disidentification with giftedness, attempting to maintain low visibility, or creating a high-visibility identity (playing a stereotypical role associated with giftedness). This range of strategies is called the Continuum of Visibility.

Abortion

While abortion is very common throughout the world, people may choose not to disclose their use of such services, in part due to the stigma associated with having had an abortion. Keeping abortion experiences secret has been found to be associated with increased isolation and psychological distress. Abortion providers are also subject to stigma.

Stigmatisation of Prejudice

Cultural norms can prevent displays of prejudice as such views are stigmatised and thus people will express non-prejudiced views even if they believe otherwise (preference falsification). However, if the stigma against such views is lessened, people will be more willing to express prejudicial sentiments. For example, following the 2008 economic crisis, anti-immigration sentiment seemingly increased amongst the US population when in reality the level of sentiment remained the same and instead it simply became more acceptable to openly express opposition to immigration.

Spatial Stigma

Spatial stigma refers to stigma linked to one’s geographic location. It can be applied to neighbourhoods, towns, cities, or any defined geographical space. A person’s geographic location or place of origin can be a source of stigma, and this type of stigma can lead to negative health outcomes.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Social_stigma >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Emotional Isolation

Introduction

Emotional isolation is a state of isolation where one may have a well-functioning social network but still feels emotionally separated from others.

Population-based research indicates that one in five middle-aged and elderly men (50–80 years) in Sweden is emotionally isolated (defined as having no one in whom one can confide). Of those who do have someone in whom they can confide, eight out of ten confide only in their partner. People who have no one in whom they can confide are less likely to feel alert, strong, calm, energetic, and happy. Instead, they are more likely to feel depressed, sad, tired, and worn out. Many people suffering from this kind of isolation have strong social networks but lack a significant bond with their friends. While they can build superficial friendships, they are often not able to confide in many people. People who are isolated emotionally usually feel lonely and unable to relate to others.

In Relationships

Emotional isolation can occur as a result of social isolation, or when a person lacks any close confidant or intimate partner. Even though social relationships are necessary for emotional well-being, they can trigger negative feelings and thoughts and emotional isolation can act as a defence mechanism to protect a person from emotional distress. When people are emotionally isolated, they keep their feelings completely to themselves, are unable to receive emotional support from others, feel “shut down” or numb, and are reluctant or unwilling to communicate with others, except perhaps for the most superficial matters. Emotional isolation can occur within an intimate relationship, particularly as a result of infidelity, abuse, or other trust issues. One or both partners may feel alone within the relationship, rather than supported and fulfilled. Identifying the source of the distress and working with a therapist to improve communication and rebuild trust can help couples re-establish their emotional bond.

Effects on the Mind

Cacioppo and his team have found that the brains of lonely people react differently than those with strong social networks. The University of Chicago researchers showed lonely and non-lonely subjects photographs of people in both pleasant settings and unpleasant settings. When viewing the pleasant pictures, non-lonely subjects showed much more activity in a section of the brain known as the ventral striatum than the lonely subjects. The ventral striatum plays an important role in learning. It is also part of the brain’s reward centre, and can be stimulated by rewards like food and love. The lonely subjects displayed far less activity in this region while viewing pleasant pictures, and they also had less brain activity when shown the unpleasant pictures. When non-lonely subjects viewed the unpleasant pictures, they demonstrated activity in the temporoparietal junction, an area of the brain associated with empathy; the lonely subjects had a lesser response.

Social withdrawal is avoiding people and activities one would usually enjoy. For some people, this can progress to a point of social isolation, where people may even want to avoid contact with family and close friends most of the time. They may want to be alone because they feel it is tiring or upsetting to be with other people. Sometimes a cycle can develop where the more time they spend alone, the less they feel like people understand them. When people withdraw themselves from social interaction they tend to stay inside a set place (like a bedroom).

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Emotional_isolation >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.