What is a Serotonin-Norepinephrine-Dopamine Reuptake Inhibitor

Introduction

A serotonin–norepinephrine–dopamine reuptake inhibitor (SNDRI), also known as a triple reuptake inhibitor (TRI), is a type of drug that acts as a combined reuptake inhibitor of the monoamine neurotransmitters serotonin, norepinephrine, and dopamine. It does this by concomitantly inhibiting the serotonin transporter (SERT), norepinephrine transporter (NET), and dopamine transporter (DAT). Inhibition of the reuptake of these neurotransmitters increases their extracellular concentrations and therefore results in an increase in serotonergic, adrenergic, and dopaminergic neurotransmission. The naturally occurring and potent SNDRI cocaine is widely used recreationally, and often illegally, for the euphoric effects it produces.

Other SNDRIs were developed as potential antidepressants and treatments for other disorders, such as obesity, cocaine addiction, attention-deficit hyperactivity disorder (ADHD), and chronic pain. They are an extension of selective serotonin reuptake inhibitors (SSRIs) and serotonin–norepinephrine reuptake inhibitors (SNRIs), the addition of dopaminergic action being thought to offer the possibility of greater therapeutic benefit. However, increased side effects and abuse potential are concerns with these agents relative to their SSRI and SNRI counterparts.

The SNDRIs are similar to non-selective monoamine oxidase inhibitors (MAOIs) such as phenelzine and tranylcypromine in that they increase the action of all three of the major monoamine neurotransmitters. They are also similar to serotonin–norepinephrine–dopamine releasing agents (SNDRAs) like MDMA (“ecstasy”) and α-ethyltryptamine (αET) for the same reason, although they act via a different mechanism and have differing physiological and qualitative effects.

Although their primary mechanisms of action are as NMDA receptor antagonists, ketamine and phencyclidine are also SNDRIs and are similarly encountered as drugs of abuse.

Indications

Depression

Major depressive disorder (MDD) is the foremost reason supporting the need for development of an SNDRI. According to the World Health Organisation (WHO), depression was the leading cause of disability and the fourth leading contributor to the global burden of disease in 2000, and it was projected to rise to second place in the ranking of DALYs (disability-adjusted life years) by 2020.

About 16% of the population is estimated to be affected by major depression, and another 1% by bipolar disorder, one or more times over an individual’s lifetime. The common symptoms of these disorders are collectively called ‘depressive syndrome’ and include a long-lasting depressed mood, feelings of guilt, anxiety, and recurrent thoughts of death and suicide. Other symptoms, including poor concentration, a disturbance of sleep rhythms (insomnia or hypersomnia), and severe fatigue, may also occur. Individual patients present differing subsets of symptoms, which may change over the course of the disease, highlighting its multifaceted and heterogeneous nature. Depression is often highly comorbid with other diseases, e.g. cardiovascular disease (myocardial infarction, stroke), diabetes, and cancer. Depressed subjects are prone to smoking, substance abuse, eating disorders, obesity, high blood pressure, pathological gambling and internet addiction, and on average have a lifespan 15 to 30 years shorter than that of the general population.

Major depression can strike at virtually any time of life as a function of genetic and developmental predisposition in interaction with adverse life events. Although common in the elderly, over the course of the last century the average age for a first episode has fallen to roughly 30 years. Depressive states (with subtly different characteristics) are also now frequently identified in adolescents and even children. The differential diagnosis (and management) of depression in young populations requires considerable care and experience; for example, apparent depression in teenagers may later turn out to represent a prodromal phase of schizophrenia.

In depression, the ability to work, familial relationships, social integration, and self-care are all severely disrupted.

The genetic contribution has been estimated as 40-50%. However, combinations of multiple genetic factors may be involved because a defect in a single gene usually fails to induce the multifaceted symptoms of depression.

Pharmacotherapy

There remains a need for more efficacious antidepressant agents. Although two-thirds of patients will ultimately respond to antidepressant treatment, one-third of patients respond to placebo, and remission is frequently sub-maximal (residual symptoms). In addition to post-treatment relapse, depressive symptoms can even recur in the course of long-term therapy (tachyphylaxis). Also, currently available antidepressants all elicit undesirable side-effects, and new agents should be divested of the distressing side-effects of both first and second-generation antidepressants.

Another serious drawback of all antidepressants is the requirement for long-term administration prior to maximal therapeutic efficacy. Although some patients show a partial response within 1–2 weeks, in general one must reckon with a delay of 3–6 weeks before full efficacy is attained. In general, this delay to onset of action is attributed to a spectrum of long-term adaptive changes. These include receptor desensitisation, alterations in intracellular transduction cascades and gene expression, the induction of neurogenesis, and modifications in synaptic architecture and signalling.

Depression has been associated with impaired neurotransmission in serotonergic (5-HT), noradrenergic (NE), and dopaminergic (DA) pathways, although most pharmacologic treatment strategies directly enhance only 5-HT and NE neurotransmission. In some patients with depression, DA-related disturbances improve with antidepressant treatment, presumably because these drugs act on serotonergic or noradrenergic circuits, which in turn affect DA function. However, most antidepressant treatments do not directly enhance DA neurotransmission, which may contribute to residual symptoms, including impaired motivation, concentration, and pleasure.

Preclinical and clinical research indicates that drugs inhibiting the reuptake of all three of these neurotransmitters can produce a more rapid onset of action and greater efficacy than traditional antidepressants.

DA may promote neurotrophic processes in the adult hippocampus, as 5-HT and NA do. It is thus possible that the stimulation of multiple signalling pathways resulting from the elevation of all three monoamines may account, in part, for an accelerated and/or greater antidepressant response.

Dense connections exist between monoaminergic neurons. Dopaminergic neurotransmission regulates the activity of 5-HT and NE in the dorsal raphe nucleus (DR) and locus coeruleus (LC), respectively. In turn, the ventral tegmental area (VTA) is sensitive to 5-HT and NE release.

In the case of SSRIs, promiscuity among transporters means that there may be more than a single type of neurotransmitter to consider (e.g. 5-HT, DA, NE) as mediating the therapeutic actions of a given medication, since monoamine transporters (MATs) are able to transport monoamines other than their “native” neurotransmitter. It has also been advised to consider the role of the organic cation transporters (OCTs) and the plasma membrane monoamine transporter (PMAT).

To examine the role of monoamine transporters in models of depression, DAT, NET, and SERT knockout (KO) mice and wild-type littermates were studied in the forced swim test (FST), the tail suspension test, and a sucrose-consumption test. The effects of DAT KO in animal models of depression are larger than those produced by NET or SERT KO, and are unlikely to be simply the result of the confounding effects of locomotor hyperactivity; thus, these data support re-evaluation of the role that DAT expression could play in depression and the potential antidepressant effects of DAT blockade.

The SSRIs were intended to be highly selective at binding to their molecular targets. However, it may be an oversimplification, or at least controversial, to think that complex psychiatric (and neurological) diseases are easily solved by such monotherapy. While it may be inferred that dysfunction of 5-HT circuits is likely to be a part of the problem, it is only one of many neurotransmitter systems whose signalling can be affected by suitably designed medicines attempting to alter the course of the disease state.

Most common CNS disorders are highly polygenic in nature; that is, they are controlled by complex interactions between numerous gene products. As such, these conditions do not exhibit the single-gene-defect basis that is so attractive for the development of highly specific drugs largely free of major undesirable side-effects (“the magic bullet”). Second, the exact nature of the interactions that occur between the numerous gene products typically involved in CNS disorders remains elusive, and the biological mechanisms underlying mental illnesses are poorly understood.

Clozapine is an example of a drug used in the treatment of certain CNS disorders, such as schizophrenia, that has superior efficacy precisely because of its broad-spectrum mode of action. Likewise, in cancer chemotherapeutics, it has been recognised that drugs active at more than one target have a higher probability of being efficacious.

In addition, the nonselective MAOIs and the TCA SNRIs are widely believed to have an efficacy superior to that of the SSRIs, which are nonetheless normally picked as the first-line agents in the treatment of MDD and related disorders. The reason is that SSRIs are safer than nonselective MAOIs and TCAs: there is less mortality in the event of overdose, and less risk in terms of dietary restrictions (in the case of the nonselective MAOIs), hepatotoxicity (MAOIs), or cardiotoxicity (TCAs).

Applications other than Depression

  • Alcoholism (cf. DOV 102,677)
  • Cocaine addiction (e.g., indatraline)
  • Obesity (e.g., amitifadine, tesofensine)
  • Attention-deficit hyperactivity disorder (ADHD) (cf. NS-2359, EB-1020)
  • Chronic pain (cf. bicifadine)
  • Parkinson’s disease

List of SNDRIs

Approved pharmaceuticals

  • Mazindol (Mazanor, Sanorex) — anorectic; Ki is 50 nM for SERT, 18 nM for NET, 45 nM for DAT[38]
  • Nefazodone (Serzone, Nefadar, Dutonin) — antidepressant; non-selective; Ki is 200 nM at SERT, 360 nM at NET, 360 nM at DAT
  • Nefopam — Ki is 29 nM at SERT, 33 nM at NET, 531 nM at DAT

Sibutramine (Meridia) is a withdrawn anorectic that is an SNDRI in vitro, with Ki values of 298 nM at SERT, 5451 nM at NET, and 943 nM at DAT. However, it appears to act as a prodrug in vivo, being converted to metabolites that are considerably more potent and possess different ratios of monoamine reuptake inhibition. Accordingly, sibutramine behaves instead as an SNRI in human volunteers (73% and 54% inhibition of norepinephrine and serotonin reuptake, respectively), with only very weak and probably inconsequential inhibition of dopamine reuptake (16%).

Venlafaxine (Effexor) is sometimes referred to as an SNDRI, but is extremely imbalanced, with Ki values of 82 nM for SERT, 2480 nM for NET, and 7647 nM for DAT, a ratio of 1:30:93. It may weakly inhibit the reuptake of dopamine at high doses.
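As an illustration of how such a selectivity ratio can be derived from the quoted Ki values (a minimal sketch, not from the source; the helper function below is hypothetical):

```python
# Illustrative only: deriving transporter selectivity ratios from Ki values,
# normalised to the most potently inhibited transporter (lower Ki = more potent).
def selectivity_ratios(ki_values_nm):
    """ki_values_nm: mapping of transporter name -> Ki in nM."""
    reference = min(ki_values_nm.values())
    return {name: round(ki / reference) for name, ki in ki_values_nm.items()}

venlafaxine_ki = {"SERT": 82, "NET": 2480, "DAT": 7647}   # nM, as quoted above
print(selectivity_ratios(venlafaxine_ki))   # {'SERT': 1, 'NET': 30, 'DAT': 93}
```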

Coincidental

  • Esketamine (Ketanest S) — anesthetic; S-enantiomer of ketamine; weak SNDRI action likely contributes to effects and abuse potential
  • Ketamine (Ketalar) — anesthetic and dissociative drug of abuse; weak SNDRI action likely contributes to effects and abuse potential
  • Phencyclidine (Sernyl) — discontinued anaesthetic and dissociative psychostimulant drug of abuse; SNDRI action likely contributes to effects and abuse potential
  • Tripelennamine (Pyribenzamine) — antihistamine; weak SNDRI; sometimes abused for this reason
  • Mepiprazole

Undergoing Clinical Trials

  • Ansofaxine (LY03005, LPM570065) — completed phase 2 and 3 trials; NDA application accepted by the FDA
  • Centanafadine (EB-1020) — 1:6:14 ratio for NE, DA, and 5-HT (NDS); completed phase 3 trials for ADHD
  • OPC-64005 — in phase 2 trials (2022)
  • Lu AA37096 — SNDRI and 5-HT6 modulator
  • NS-2360 — principal metabolite of tesofensine
  • Tesofensine (NS-2330) (2001) — in trials for obesity

Failed Clinical Trials

  • Bicifadine (DOV-220,075) (1981)
  • BMS-866,949
  • Brasofensine (NS-2214, BMS-204,756) (1995)
  • Diclofensine (Ro 8–4650) (1982)
  • DOV-216,303 (2004)
  • EXP-561 (1965)
  • Liafensine (BMS-820,836)
  • NS-2359 (GSK-372,475)
  • RG-7166 (2009–2012)
  • SEP-227,162
  • SEP-228,425
  • SEP-432 (SEP-228432; PubChem CID 58954867)
  • Amitifadine (DOV-21,947, EB-1010) (2003)
  • Dasotraline (SEP-225,289)
  • Lu AA34893 (SNDRI and 5-HT2A, α1, and 5-HT6 modulator)
  • Tedatioxetine (Lu AA24530) — SNDRI and 5-HT2C, 5-HT3, 5-HT2A, and α1 modulator

Designer Drugs

  • 3-Methyl-PCPy
  • Naphyrone (O-2482, naphthylpyrovalerone, NRG-1) (2006)
  • 5-APB

Toxicological

Toxicological screening is important to ensure the safety of drug molecules. In this regard, the p,m-dichlorophenyl analogue of venlafaxine was dropped from further development after its potential mutagenicity was called into question. The mutagenicity of this compound is still doubtful, though, and it was likely dropped for other reasons, related to the speed at which it could be brought to market relative to the more developed compound venlafaxine. More recently, the carcinogenicity of PRC200-SS was likewise reported.

(+)-CPCA (“nocaine”) is the 3R,4S piperidine stereoisomer of the (phenyltropane-based) RTI-31. It is non-addictive, although this might be because it is an NDRI rather than an SNDRI. The β-naphthyl analogue of nocaine, however, is an SNDRI in the case of both the SS and RR enantiomers. Consider the piperidine analogues of brasofensine and tesofensine, prepared at NeuroSearch (in Denmark) by the chemists Peter Moldt (2002) and Frank Wätjen (2004–2009). There are four separate isomers to consider (SS, RR, S/R and R/S), because there are two chiral carbons; in general there are 2^n stereoisomers, where n is the number of chiral centres. The four isomers therefore form a diastereomeric pair of racemates, and with such a pair there is still the question of syn (cis) versus anti (trans) arrangement. In the case of the phenyltropanes, although there are four chiral carbons, there are only eight possible isomers to consider, because the compound is bicyclic and therefore does not obey the simple 2^n rule given above.
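As a minimal illustration of the 2^n counting rule mentioned above (a sketch, not from the source; the bicyclic constraint is only noted in a comment rather than modelled):

```python
# Illustrative sketch: enumerating R/S stereodescriptor combinations for n
# chiral centres, giving the 2**n upper bound mentioned in the text. Ring
# fusion in bicyclic systems such as the phenyltropanes rules some
# combinations out, so the actual number of isomers can be lower
# (8 rather than 16 for four chiral centres).
from itertools import product

def stereoisomer_labels(n_chiral_centres):
    """Return every R/S descriptor combination for n chiral centres."""
    return ["".join(combo) for combo in product("RS", repeat=n_chiral_centres)]

print(stereoisomer_labels(2))        # ['RR', 'RS', 'SR', 'SS'] -> the four isomers discussed
print(len(stereoisomer_labels(4)))   # 16 in the unconstrained case
```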

It is complicated to explain which isomers are desired. For example, although Alan P. Kozikowski showed that R/S nocaine is less addictive than SS nocaine, studies on variously substituted phenyltropanes by F. Ivy Carroll et al. revealed that the ββ isomers (more specifically, the 1R,2R,3S isomers) were less likely to cause convulsions, tremor and death than the corresponding trans isomers. While it does have to be conceded that RTI-55 caused death at a dosage of 100 mg/kg, its therapeutic index is still much better than that of the corresponding trans isomers because it is a more potent compound.

In discussing cocaine and related compounds such as the amphetamines, it is clear that these psychostimulants cause increased blood pressure, decreased appetite (and hence weight loss), increased locomotor activity (LMA), and so on. In the United States, cocaine overdose is one of the leading causes of emergency-room admissions due to drug overdose each year. Users are at increased risk of heart attack and stroke and also present with an array of psychiatric symptoms, including anxiety and paranoia. On removal of the two-carbon tropane bridge, going from RTI-31 to the simpler SS and RS nocaine, the resulting compounds still possessed activity as NDRIs but were not powerful psychostimulants. Hence, this might be viewed as a strategy for increasing the safety of such compounds, and would also be preferable in patients who are not looking to achieve weight loss.

In light of the above, another way of reducing the psychomotor stimulant and addictive qualities of phenyltropane stimulants is to pick one that is relatively serotonergic. This strategy was employed with success for RTI-112.

Another important consideration is the risk of serotonin syndrome when incorporating 5-HT transporter inhibition into a compound that is already fully active as an NDRI (or vice versa). The causes of serotonin syndrome are complicated and not fully understood.

Addiction

Drug addiction may be regarded as a disease of the brain reward system. This system, closely related to the system of emotional arousal, is located predominantly in the limbic structures of the brain. Its existence was demonstrated by the discovery of “pleasure centres”, locations from which electrical self-stimulation is readily evoked. The main neurotransmitter involved in reward is dopamine, but other monoamines and acetylcholine may also participate. The anatomical core of the reward system is the dopaminergic neurons of the ventral tegmental area that project to the nucleus accumbens, amygdala, prefrontal cortex and other forebrain structures.

There are several groups of substances that activate the reward system and they may produce addiction, which in humans is a chronic, recurrent disease, characterised by absolute dominance of drug-seeking behaviour.

According to various studies, the relative likelihood of rodents and non-human primates self-administering various psychostimulants that modulate monoaminergic neurotransmission is lessened as the dopaminergic compounds become more serotonergic.

This has been observed for amphetamine and some of its variously substituted analogues, including PAL-287.

RTI-112 is a good example of a dopaminergic compound that is less likely to be self-administered by the test subject because it also has a marked affinity for the serotonin transporter.

WIN 35428, RTI-31, RTI-51 and RTI-55 were all compared and it was found that there was a negative correlation between the size of the halogen atom and the rate of self-administration (on moving across the series). Rate of onset was held partly accountable for this, although increasing the potency of the compounds for the serotonin transporter also played a role.

Further evidence that 5-HT dampens the reinforcing actions of dopaminergic medications comes from the co-administration of psychostimulants with SSRIs, and the phen/fen combination was also shown to have limited abuse potential relative to administration of phentermine only.

NET blockade is unlikely to play a major role in mediating addictive behaviour. This is based on the premise that desipramine is not self-administered, and on the fact that the NRI atomoxetine was not reinforcing. However, NET blockade was still shown to facilitate dopaminergic neurotransmission in certain brain regions, such as the prefrontal cortex (PFC).

Relation to Cocaine

Cocaine is a short-acting SNDRI that also exerts auxiliary pharmacological actions on other receptors. Cocaine is a relatively “balanced” inhibitor, although facilitation of dopaminergic neurotransmission is what has been linked to the reinforcing and addictive effects. In addition, cocaine has some serious limitations in terms of its cardiotoxicity due to its local anaesthetic activity. Thousands of cocaine users are admitted to emergency units in the USA every year because of this; thus, development of safer substitute medications for cocaine abuse could potentially have significant benefits for public health.

Many of the SNDRIs currently being developed have varying degrees of similarity to cocaine in terms of their chemical structure. There has been speculation over whether the new SNDRIs will have an abuse potential like cocaine does. However, for pharmacotherapeutical treatment of cocaine addiction it is advantageous if a substitute medication is at least weakly reinforcing because this can serve to retain addicts in treatment programmes:

… limited reinforcing properties in the context of treatment programs may be advantageous, contributing to improved patient compliance and enhanced medication effectiveness.

However, not all SNDRIs are reliably self-administered by animals. Examples include:

  • PRC200-SS was not reliably self-administered.
  • RTI-112 was not self-administered because at low doses the compound preferentially occupies the SERT and not the DAT.
  • Tesofensine was also not reliably self-administered by human stimulant addicts.
  • The nocaine analogue JZAD-IV-22 only partly substituted for cocaine in animals, but produced none of the psychomotor activation of cocaine, which is a trait marker for stimulant addiction.

Legality

Cocaine is a controlled drug (Class A in the UK; Schedule II in the USA); it has not been entirely outlawed in most countries because, despite its abuse potential, it is recognised to have medical uses.

Brasofensine was made a Class A drug in the UK under the Misuse of Drugs Act. The semi-synthetic procedure for making brasofensine uses cocaine as the starting material.

Naphyrone first appeared in 2006 as one of a large number of analogues of pyrovalerone designed by the well-known medicinal chemist P. Meltzer et al. Mephedrone and methylone, designer drugs that had become quite popular, affect the same neurotransmitters as an SNDRI, although they are thought to act as monoamine releasers rather than through a reuptake-inhibition mechanism. When mephedrone and methylone were banned in the United Kingdom, vendors of these chemicals needed to find a suitable replacement, and a short time later naphyrone appeared under the trade name NRG-1. NRG-1 was promptly made illegal as well, although it is not known whether its use resulted in any hospitalisations or deaths.

Role of Monoamine Neurotransmitters

Monoamine Hypothesis

The original monoamine hypothesis postulates that depression is caused by a deficiency or imbalances in the monoamine neurotransmitters (5-HT, NE, and DA). This has been the central topic of depression research for approximately the last 50 years; it has since evolved into the notion that depression arises through alterations in target neurons (specifically, the dendrites) in monoamine pathways.

When reserpine (an alkaloid with uses in the treatment of hypertension and psychosis) was first introduced to the West from India in 1953, the drug was unexpectedly shown to produce depression-like symptoms. Further testing revealed that reserpine causes a depletion of monoamine concentrations in the brain. Reserpine’s effect on monoamine concentrations results from blockade of the vesicular monoamine transporter, leading to their increased catabolism by monoamine oxidase. However, not everyone has been convinced by claims that reserpine is depressogenic; some authors (David Healy in particular) have even claimed that it has antidepressant properties.

Tetrabenazine, an agent similar to reserpine that also depletes catecholamine stores and, to a lesser degree, 5-HT, was shown to induce depression in many patients.

Iproniazid, an inhibitor of MAO, was noted to elevate mood in depressed patients in the early 1950s, and soon thereafter was shown to lead to an increase in NA and 5-HT.

Hertting et al. demonstrated that the first TCA, imipramine, inhibited cellular uptake of NA in peripheral tissues. Moreover, both antidepressant agents were demonstrated to prevent reserpine-induced sedation. Likewise, administration of DOPA to laboratory animals was shown to reverse reserpine-induced sedation, a finding reproduced in humans. Amphetamine, which releases NA from vesicles and prevents its re-uptake, was also used in the treatment of depression at the time, with varying success.

In 1965 Schildkraut formulated the catecholamine theory of depression. This was subsequently the most widely cited article in the American Journal of Psychiatry. The theory stated that “some, if not all, depressions are associated with an absolute or relative deficiency of catecholamines, in particular noradrenaline (NA), at functionally important adrenergic receptor sites in the brain. However, elation may be associated with an excess of such amines.”

Shortly after Schildkraut’s catecholamine hypothesis was published, Coppen proposed that 5-HT, rather than NA, was the more important neurotransmitter in depression. This was based on evidence similar to that which produced the NA theory, as reserpine, imipramine, and iproniazid affect the 5-HT system in addition to the noradrenergic system. It was also supported by work demonstrating that if catecholamine levels were depleted by up to 20% but 5-HT neurotransmission remained unaltered, there was no sedation in animals. Alongside this, the main observation promoting the 5-HT theory was that administration of an MAOI in conjunction with tryptophan (the precursor of 5-HT) elevated mood in control patients and potentiated the antidepressant effect of the MAOI. Set against this, combination of an MAOI with DOPA did not produce a therapeutic benefit.

Inserting a chlorine atom into imipramine leads to clomipramine, a drug that is much more SERT selective than the parent compound.

Clomipramine was a predecessor to the development of the more recent SSRIs. There was, in fact, a time prior to the SSRIs when selective NRIs were being considered (cf. talopram and melitracen), and it is believed that the selective NRI nisoxetine was discovered before fluoxetine. However, the selective NRIs were not promoted in the same way as the SSRIs, possibly because of an increased risk of suicide, which was accounted for on the basis of the energising effect that these agents have. Moreover, NRIs carry the additional adverse safety risk of hypertension, which is not seen with SSRIs. Nevertheless, NRIs have still found uses.

Further support for the monoamine hypothesis came from monoamine depletion studies:

  • Alpha-methyl-p-tyrosine (AMPT) is a tyrosine hydroxylase enzyme inhibitor that serves to inhibit catecholamine synthesis. AMPT led to a resurgence of depressive symptoms in patients improved by the NE reuptake inhibitor (NRI) desipramine, but not by the SSRI fluoxetine. The mood changes induced by AMPT may be mediated by decreases in norepinephrine, while changes in selective attention and motivation may be mediated by dopamine.
  • Dietary depletion of the DA precursors phenylalanine and tyrosine does not result in the relapse of formerly depressed patients off their medication.
  • Administration of fenclonine (para-chlorophenylalanine) is able to bring about a depletion of 5-HT. The mechanism of action for this is via tryptophan hydroxylase inhibition. In the 1970s administration of parachlorophenylalanine produced a relapse in depressive symptoms of treated patients, but it is considered too toxic for use today.
  • Although depletion of tryptophan — the rate-limiting factor of serotonin synthesis — does not influence the mood of healthy volunteers and untreated patients with depression, it does produce a rapid relapse of depressive symptoms in about 50% of remitted patients who are being, or have recently been treated with serotonin selective antidepressants.

Dopaminergic

There appears to be a pattern of symptoms that are currently inadequately addressed by serotonergic antidepressants — loss of pleasure (anhedonia), reduced motivation, loss of interest, fatigue and loss of energy, motor retardation, apathy and hypersomnia. Addition of a pro-dopaminergic component to a serotonin-based therapy would be expected to address some of these shortcomings.

Several lines of evidence suggest that an attenuated function of the dopaminergic system may play an important role in depression:

  • Mood disorders are highly prevalent in pathologies characterized by a deficit in central DA transmission such as Parkinson’s disease (PD). The prevalence of depression can reach up to 50% of individuals with PD.
  • Patients taking strong dopaminergic antagonists such as those used in the treatment of psychosis are more likely than the general population to develop symptoms of depression.
  • Data from clinical studies have shown that DA agonists, such as bromocriptine, pramipexole and ropinirole, exhibit antidepressant properties.
  • Amineptine, a TCA derivative that predominantly inhibits DA re-uptake and has minimal noradrenergic and serotonergic activity, has also been shown to possess antidepressant activity. A number of studies have suggested that amineptine has similar efficacy to the TCAs, MAOIs and SSRIs. However, amineptine is no longer available as a treatment for depression due to reports of an abuse potential.
  • The B-subtype-selective MAOI selegiline (a drug developed for the treatment of PD) has now been approved for the treatment of depression in the form of a transdermal patch (Emsam). There have also been numerous reports of users taking this drug in conjunction with β-phenethylamine.
  • Taking psychostimulants for the alleviation of depression is a well-proven strategy, although in a clinical setting the use of such drugs is usually avoided because of their strong addiction propensity.
  • When users withdraw from psychostimulant drugs of abuse (in particular, amphetamine), they experience symptoms of depression. This is likely because the brain enters into a hypodopaminergic state, although there might be a role for noradrenaline also.

For these drugs to be reinforcing, they must block more than 50% of the DAT within a relatively short time period (<15 minutes from administration) and clear the brain rapidly to enable fast repeated administration.

In addition to mood, they may also improve cognitive performance, although this remains to be demonstrated in humans.

The rate of clearance from the body is faster for methylphenidate (Ritalin) than for regular amphetamine.

Noradrenergic

The decreased levels of NA proposed by Schildkraut suggested that there would be a compensatory upregulation of β-adrenoceptors. Despite inconsistent findings supporting this, more consistent evidence demonstrates that chronic treatment with antidepressants and electroconvulsive therapy (ECT) decreases β-adrenoceptor density in the rat forebrain. This led to the theory that β-adrenoceptor downregulation was required for clinical antidepressant efficacy. However, some of the newly developed antidepressants do not alter, or even increase, β-adrenoceptor density.

Another adrenoceptor implicated in depression is the presynaptic α2-adrenoceptor. Chronic desipramine treatment in rats decreased the sensitivity of α2-adrenoceptors, a finding supported by the fact that clonidine administration caused a significant increase in growth hormone (an indirect measure of α2-adrenoceptor activity), although platelet studies proved inconsistent. Supersensitivity of α2-adrenoceptors was postulated to decrease the noradrenergic activity of the locus coeruleus (the main source of NA projections in the central nervous system, CNS), leading to depression.

In addition to enhancing NA release, α2-adrenoceptor antagonism also increases serotonergic neurotransmission due to blockade of α2-adrenoceptors present on 5-HT nerve terminals.

Serotonergic

5-Hydroxytryptamine (5-HT or serotonin) is an important cell-to-cell signalling molecule found in all animal phyla. In mammals, substantial concentrations of 5-HT are present in the central and peripheral nervous systems, gastrointestinal tract and cardiovascular system. 5-HT is capable of exerting a wide variety of biological effects by interacting with specific membrane-bound receptors, and at least 13 distinct 5-HT receptor subtypes have been cloned and characterised. With the exception of the 5-HT3 receptor subtype, which is a transmitter-gated ion channel, 5-HT receptors are members of the 7-transmembrane G protein-coupled receptor superfamily. In humans, the serotonergic system is implicated in various physiological processes such as sleep-wake cycles, maintenance of mood, control of food intake and regulation of blood pressure. In accordance with this, drugs that affect 5-HT-containing cells or 5-HT receptors are effective treatments for numerous indications, including depression, anxiety, obesity, nausea, and migraine.

Because serotonin and the related hormone melatonin are involved in promoting sleep, they counterbalance the wake-promoting action of increased catecholaminergic neurotransmission. This may account for the lethargy that some SSRIs can produce, although TCAs and antipsychotics can also cause lethargy, albeit through different mechanisms.

Appetite suppression is related to 5-HT2C receptor activation, as was recently reported for PAL-287, for example.

Activation of the 5-HT2C receptor has been described as panicogenic by users of ligands for this receptor (e.g. mCPP). Antagonism of the 5-HT2C receptor is known to augment dopaminergic output. Although SSRIs with 5-HT2C antagonist actions have been recommended for the treatment of depression, 5-HT2C receptor agonists have been suggested for treating cocaine addiction, since this action would be anti-addictive. Nevertheless, the 5-HT2C receptor is known to be rapidly downregulated upon repeated administration of an agonist, after which it is effectively antagonised.

Azapirone-type drugs (e.g. buspirone), which act as 5-HT1A receptor agonists and partial agonists have been developed as anxiolytic agents that are not associated with the dependence and side-effect profile of the benzodiazepines. The hippocampal neurogenesis produced by various types of antidepressants, likewise, is thought to be mediated by 5-HT1A receptors. Systemic administration of a 5-HT1A agonist also induces growth hormone and adrenocorticotropic hormone (ACTH) release through actions in the hypothalamus.

Current Antidepressants

Most antidepressants on the market today target the monoaminergic system.

SSRIs

The most commonly prescribed class of antidepressants in the USA today is the selective serotonin reuptake inhibitors (SSRIs). These drugs inhibit the uptake of the neurotransmitter 5-HT by blocking the SERT, thus increasing its synaptic concentration, and have been shown to be efficacious in the treatment of depression. However, sexual dysfunction and weight gain are two very common side-effects that result in discontinuation of treatment.

Although many patients benefit from SSRIs, it is estimated that approximately 50% of depressive individuals do not respond adequately to these agents. Even in remitters, a relapse is often observed following drug discontinuation. The major limitation of SSRIs concerns their delay of action. It appears that the clinical efficacy of SSRIs becomes evident only after a few weeks.

SSRIs can be combined with a host of other drugs, including bupropion, α2-adrenergic antagonists (e.g. yohimbine), and some of the atypical antipsychotics. The augmentation agents are said to behave synergistically with the SSRI, although such combinations are considered of less value than a single compound that contains all of the necessary pharmacophoric elements. The reasons for this are not entirely clear, although ease of dosing is likely to be a considerable factor. In addition, single compounds are more likely to be approved by the FDA than drugs that contain more than one pharmaceutical ingredient (polytherapies).

A number of SRIs with auxiliary interactions at other receptors were under development. Particularly notable were agents behaving as conjoint SSRIs with additional antagonist activity at 5-HT1A receptors. 5-HT1A receptors are located presynaptically as well as postsynaptically, and it is the presynaptic receptors that are believed to function as autoreceptors (cf. studies done with pindolol). In in vivo microdialysis studies, these agents were shown to elicit a more robust elevation of extracellular 5-HT relative to baseline than SSRIs alone.

NRIs

Norepinephrine reuptake inhibitors (NRIs) such as reboxetine prevent the reuptake of norepinephrine, providing a different mechanism of action for treating depression. However, reboxetine is no more effective than the SSRIs in treating depression. In addition, atomoxetine has found use in the treatment of ADHD as a non-addictive alternative to methylphenidate (Ritalin). The chemical structure of atomoxetine is closely related to that of fluoxetine (an SSRI) and duloxetine (an SNRI).

NDRIs

Bupropion is a commonly prescribed antidepressant that acts as a norepinephrine–dopamine reuptake inhibitor (NDRI). It prevents the reuptake of NA and (weakly) DA by blocking the corresponding transporters, leading to increased noradrenergic and dopaminergic neurotransmission. This drug does not cause sexual dysfunction or weight gain like the SSRIs, but has a higher incidence of nausea. Methylphenidate is a much more reliable example of an NDRI, its action on the DAT usually predominating. Methylphenidate is used in the treatment of ADHD; its use in treating depression is not known to have been reported, but it might be presumed to be of benefit owing to its psychomotor-activating effects and its functioning as a positive reinforcer. There are also reports of methylphenidate being used in the treatment of psychostimulant addiction, in particular cocaine addiction, since the addictive actions of cocaine are believed to be mediated by dopamine.

SNRIs

Serotonin–norepinephrine reuptake inhibitors (SNRIs) such as venlafaxine (Effexor), its active metabolite desvenlafaxine (Pristiq), and duloxetine (Cymbalta) prevent the reuptake of both serotonin and norepinephrine; however, their efficacy appears to be only marginally greater than that of the SSRIs.

Sibutramine is an SNRI-based appetite suppressant used in the treatment of obesity. It was explored in the treatment of depression, but was not shown to be effective.

Both sibutramine and venlafaxine are phenethylamine-based. At high doses, both venlafaxine and sibutramine will start producing dopaminergic effects. The inhibition of DA reuptake is unlikely to be relevant at clinically approved doses.

MAOIs

Monoamine oxidase inhibitors (MAOIs) were the first antidepressants to be introduced, and they were discovered entirely by serendipity. Iproniazid (the first MAOI) was originally developed as an antitubercular agent but was then unexpectedly found to display antidepressant activity.

Isoniazid also displayed activity as an antidepressant, even though it is not an MAOI. This led some to question whether some property of the hydrazine moiety was responsible for mediating the antidepressant effect, even going as far as to suggest that the MAOI activity could be a secondary side-effect. However, the discovery of tranylcypromine (the first non-hydrazine MAOI) supported the view that MAO inhibition underlies the antidepressant activity of these agents. Etryptamine is another example of a non-hydrazine MAOI that was introduced.

The MAOIs work by inhibiting the monoamine oxidase enzymes that, as the name suggests, break down the monoamine neurotransmitters. This leads to increased concentrations of most of the monoamine neurotransmitters in the human brain: serotonin, norepinephrine, dopamine and melatonin. The fact that they are more efficacious than the newer-generation antidepressants is what led scientists to develop newer antidepressants that target a greater range of neurotransmitters. The problem with MAOIs is that they have many potentially dangerous side-effects, such as hypotension, and there is a risk of food and drug interactions that can result in potentially fatal serotonin syndrome or a hypertensive crisis. Although selective MAOIs can reduce, if not eliminate, these risks, their efficacy tends to be lower.

MAOIs may preferentially treat TCA-resistant depression, especially in patients with features such as fatigue, volition inhibition, motor retardation and hypersomnia. This may be a function of the ability of MAOIs to increase synaptic levels of DA in addition to 5-HT and NE. The MAOIs also seem to be effective in the treatment of fatigue associated with fibromyalgia (FM) or chronic fatigue syndrome (CFS).

Although a substantial number of MAOIs were approved in the 1960s, many of these were taken off the market as rapidly as they were introduced. The reason for this is that they were hepatotoxic and could cause jaundice.

TCAs

The first tricyclic antidepressant (TCA), imipramine (Tofranil), was derived from the antipsychotic drug chlorpromazine, which was developed as a useful antihistaminergic agent with possible use as a hypnotic sedative. Imipramine is an iminodibenzyl (dibenzazepine).

The TCAs such as imipramine and amitriptyline typically prevent the reuptake of serotonin or norepinephrine.

It is the histaminergic (H1), muscarinic acetylcholinergic (M1), and alpha-adrenergic (α1) blockade that is responsible for the side-effects of TCAs. These include somnolence and lethargy, anticholinergic side-effects, and hypotension. Due to the narrow gap between their ability to block the biogenic amine uptake pumps and their inhibition of fast sodium channels, even a modest overdose of one of the TCAs can be lethal. TCAs were, for 25 years, the leading cause of death from overdose in many countries. Patients being treated with antidepressants are prone to attempt suicide, and one method they use is to take an overdose of their medications.

Another example of a TCA is amineptine, which is the only one believed to function as a dopamine reuptake inhibitor. It is no longer available.

Failure of SNDRIs for Depression

SNDRIs have been under investigation for the treatment of major depressive disorder for a number of years but, as of 2015, have failed to meet effectiveness expectations in clinical trials. In addition, the augmentation of a selective serotonin reuptake inhibitor (SSRI) or serotonin-norepinephrine reuptake inhibitor with lisdexamfetamine, a norepinephrine–dopamine releasing agent, recently failed to separate from placebo in phase III clinical trials of individuals with treatment-resistant depression, and clinical development was subsequently discontinued. These occurrences have shed doubt on the potential benefit of dopaminergic augmentation of conventional serotonergic and noradrenergic antidepressant therapy. As such, scepticism has been cast on the promise of the remaining SNDRIs that are still being trialled, such as ansofaxine (currently in phase II trials), in the treatment of depression. Despite being a weak SNDRI, nefazodone has been successful in treating major depressive disorder.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Serotonin%E2%80%93norepinephrine%E2%80%93dopamine_reuptake_inhibitor >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Preference Falsification

Introduction

Preference falsification is the act of misrepresenting a preference under perceived public pressures. It involves the selection of a publicly expressed preference that differs from the underlying privately held preference (or simply, a public preference at odds with one’s private preference). People frequently convey to each other preferences that differ from what they would communicate privately under credible cover of anonymity (such as in opinion surveys to researchers or pollsters). Pollsters can use techniques such as list experiments to uncover preference falsification.
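As a minimal sketch of how a list experiment works (an illustration under simple assumptions, not from the source; the function name and data are hypothetical): respondents report only how many items on a list apply to them, and the prevalence of the sensitive preference is estimated as the difference in mean counts between a treatment group (whose list includes the sensitive item) and a control group (whose list does not).

```python
# Illustrative sketch of a list-experiment (item-count) estimator, assuming a
# simple randomised design; the data below are hypothetical.
from statistics import mean

def list_experiment_estimate(control_counts, treatment_counts):
    """Estimate the share holding the sensitive preference as the difference in
    mean item counts between the treatment group (innocuous items + sensitive
    item) and the control group (innocuous items only)."""
    return mean(treatment_counts) - mean(control_counts)

# Respondents never say WHICH items apply, only how many, so no individual
# reveals the sensitive preference directly.
control = [1, 2, 2, 3, 1, 2]      # counts over 4 innocuous items
treatment = [2, 3, 2, 3, 2, 3]    # counts over the same 4 items plus 1 sensitive item
print(f"Estimated prevalence: {list_experiment_estimate(control, treatment):.2f}")
```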

The term preference falsification was coined by Timur Kuran in a 1987 article, “Chameleon voters and public choice.” He showed there that, on controversial matters that induce preference falsification, widely disliked policies may appear popular. The distribution of public preferences, which Kuran defines as public opinion, may differ greatly from private opinion, which is the distribution of private preferences known only to individuals themselves.

Kuran developed the implications of this observation in a 1995 book, Private Truths, Public Lies: The Social Consequences of Preference Falsification. This book argues that preference falsification is not only ubiquitous but has huge social and political consequences. It provides a theory of how preference falsification shapes collective illusions, sustains social stability, distorts human knowledge, and conceals political possibilities. A collective illusion arises when most people in a group go along with an idea or preference that they do not privately hold, because they incorrectly believe that most others in the group agree with it.

Specific Form of Lying

Preference falsification aims specifically at moulding the perceptions others hold about one’s motivations. As such, not all forms of lying entail preference falsification. To withhold bad medical news from a terminally ill person is a charitable lie. But it is not preference falsification, because the motivation is not to conceal a wish.

Preference falsification is not synonymous with self-censorship, which is simply the withholding of information. Whereas self-censorship is a passive act, preference falsification is performative. It entails actions meant to project a contrived preference.

Strategic voting occurs when, in the privacy of an election booth, one votes for candidate B because A, one’s favourite, cannot win. This entails preference manipulation but not preference falsification, which is a response to social pressures. In a private polling booth, there are no social pressures to accommodate and no social reactions to control.

Private Opinion vs. Public Opinion

The term public opinion is commonly used in two senses. The first is the distribution of people’s genuine preferences, often measured through surveys that provide anonymity. The second meaning is the distribution of preferences that people convey in public settings, which is measured through survey techniques that allow the pairing of responses with specific respondents. Kuran distinguishes between the two meanings for analytic clarity, reserving public opinion only for the latter. He uses the term private opinion to describe the distribution of a society’s private preferences, known only to individuals themselves.

On socially controversial issues, preference falsification is often pervasive, and ordinarily public opinion differs from private opinion.

Private Knowledge vs. Public Knowledge

Private preferences over a set of options rest on private knowledge, which consists of the understandings that individuals carry in their own minds. A person who privately favours reforming the educational system does so in the belief that, say, schools are failing students and a new curriculum would serve them better. But this person need not convey her sympathy towards a new curriculum to others. To avoid alienating powerful political groups, she could pretend to consider the prevailing curriculum optimal. In other words, her public knowledge could be a distorted, if not completely fabricated, version of what she really perceives and understands.

Knowledge falsification causes public knowledge to differ from private knowledge.

Three Main Claims of Kuran’s Theory

Private Truths, Public Lies identifies three basic social consequences of preference falsification:

  1. Distortion of social decisions;
  2. Distortion of private knowledge; and
  3. Unanticipated social discontinuities.

1. Distortion of Social Decisions

Among the social consequences of preference falsification is the distortion of social decisions. In misrepresenting public opinion, it corrupts a society’s collective policy choices. One manifestation is collective conservatism, which Kuran defines as the retention of policies that would be rejected in a vote taken by secret ballot and the implicit rejection of alternative policies that, if voted on, would command stable support.

For an illustration, suppose that a vocal minority within a society takes to shaming the supporters of a certain reform. Simply to protect their personal reputations, people privately favouring the reform might start pretending to be satisfied with the status quo. In falsifying their preferences, they would inflate the perceived share of reform opponents, discouraging other reform sympathisers from publicising their own desires for change. With enough reform sympathisers opting for comfort through preference falsification, a clear majority privately favouring reform could co-exist with an equally clear majority publicly opposing the same reform. In other words, private opinion could support reform even as public opinion opposes it.

A democracy has a built-in mechanism for correcting distortions in public opinion: periodic elections by secret ballot. On issues where preference falsification is rampant, elections allow hidden majorities to make themselves heard and exert influence through the ballot box. The privacy afforded by secret balloting allows voters to cast ballots aligned with their private preferences. As private opinion gets revealed through the ballot box, preference falsifiers may discover, to their delight, that they form a majority. They may infer that they have little to fear from vocalising honestly what they want. That is the expectation underlying secret balloting.

In practice, however, secret-ballot elections serve their intended corrective function imperfectly. For one thing, on issues that induce rampant preference falsification, elections may offer little choice. All serious contestants will often take the same position, partly to avoid being shamed and partly to position themselves optimally in policy space to maximise their appeal to the electorate. For another, in periodic elections citizens of a democracy vote for representatives or political parties that stand for policy packages; they do not vote on individual policies directly. Therefore, the messages that a democratic citizenry conveys through secret balloting are necessarily subject to interpretation. A party opposed to a particular reform may win because of its stands on other issues, yet its victory may be interpreted as a rejection of reform.

Nevertheless, periodic secret balloting limits the harms of preference falsification. It keeps public opinion from straying too far from private opinion on matters critical to citizens. By contrast, in nondemocratic political regimes no legal mechanism exists for uncovering hidden sentiments. Therefore, serious distortions of public opinion are correctable only through extra-legal means, such as rioting, a coup, or a revolution.

2. Distortion of Private Knowledge

Private preferences may change through learning. We learn from our personal experiences, and we can think for ourselves. Yet, because our cognitive powers are bounded, we can reflect comprehensively on only a small fraction of the issues on which we decide, or are forced, to express a preference. However much we might want to think independently on every issue, our private knowledge unavoidably rests partly on the public knowledge that enters public discourse—the corpus of suppositions, observations, assertions, arguments, theories, and opinions in the public domain. For example, most people’s private preferences concerning international trade are based, to one degree or another, on the public communications of others, whether through publications, TV, social media, gatherings of friends, or some other medium.

Preference falsification shapes or reshapes private knowledge by distorting the substance of public discourse. The reason is that, to conceal our private preferences successfully, we must control the impressions we convey. Effective control requires careful management not only of our body language but also of the knowledge that we convey publicly. In other words, credible preference falsification requires engaging in appropriately tailored knowledge falsification as well. To convince an audience that we favour trade quotas, facts and arguments supportive of quotas must accompany our pro-quota public preference.

Knowledge falsification corrupts and impoverishes the knowledge in the public domain, Kuran argues. It exposes others to facts that knowledge falsifiers know to be false. It reinforces the credibility of falsehoods. And it conceals information that the knowledge falsifier considers true.

Preference falsification is thus a source of avoidable misperceptions, even ignorance, about the range of policy options and about their relative merits. This generally harmful effect of preference falsification works largely through the knowledge falsification that accompanies it. The disadvantages of a particular policy, custom, or regime might have been appreciated widely in the past. However, insofar as public discourse excludes criticism of the publicly fashionable options, the objections will tend to be forgotten. Among the mechanisms producing such collective amnesia is population replacement through births and deaths. New generations are exposed not to the unfiltered knowledge in their elders’ heads but, rather, to the reconstructed knowledge that their elders feel safe to communicate. Suppose that an aging generation had disliked a particular institution but refrained from challenging it. Absent experiences that make the young dislike that institution, they will preserve it to avoid social sanctions but also, perhaps mainly, because the impoverishment of public discourse has blinded them to the flaws of the status quo and blunted their capacity to imagine better alternatives. The preference and knowledge falsification of their parents will have left them intellectually handicapped.

Over the long run, then, preference falsification brings intellectual narrowness and ossification. Insofar as it leaves people unequipped to criticise inherited social structures, current preference falsification ceases to be a source of political stability. People support the status quo genuinely, because past preference falsification has removed their inclinations to want something different.

The possibility of such socially induced intellectual incapacitation is highest in contexts where private knowledge is drawn largely from others. It is low, though not nil, on matters where the primary source of private knowledge is personal experience. Two other factors influence the level of ignorance generated by preference falsification. Individuals are more likely to lose touch with alternatives to the status quo if public opinion reaches an equilibrium devoid of dissent than if some dissenters keep publicising the advantages of change. Likewise, widespread ignorance is more likely in a closed society than in one open to outside influences.

3. Generating Surprise

If public discourse were the only determinant of private knowledge, a public consensus, once in place, would be immutable. In fact, private knowledge has other determinants as well, and changes in them can make a public consensus unravel. But this unravelling need not occur in tandem with growing private opposition to the status quo. For a while, its effect may simply be to accentuate preference falsification (for the underlying logic, see also works by Mark Granovetter, Thomas Schelling, Chien-Chun Yin, and Jared Rubin). Just as underground stresses can build up for decades without shaking the ground above, so discontents endured silently may make private opinion keep moving against the status quo without altering public opinion. And just as an earthquake can hit suddenly in response to an intrinsically minor tectonic shift, so public opinion may change explosively in response to an intrinsically minor event that shifts personal political incentives. Summarising Kuran’s logic requires consideration of the incentives and disincentives to express a preference likely to draw adverse reactions from others.

In Kuran’s basic theory, preference falsification imposes a cost on the falsifier in the form of resentment, anger, and humiliation for compromising his individuality, and this psychological cost grows with the extent of preference falsification. Accordingly, a citizen will find it harder to feign approval of the established policy if he favours massive reform than if he favours mild reform. In choosing a public preference with respect to the status quo, the individual must also consider the reputational consequences of the preference he conveys to others. If reformists are stigmatised and ostracised, and establishmentarians are rewarded, then solely from a reputational standpoint he would find it more advantageous to appear as an establishmentarian. The reputational payoff from any given choice of a public preference depends on the relative shares of society publicly supporting each political option. That is because each camp’s rewarding and punishing is done by its own members. The camps thus form pressure groups. All else equal, the larger a pressure group, the greater the pressure it exerts on members of society.

Unless the established policy happens to coincide with an individual’s private ideal, he thus faces a trade-off between the internal benefits of expressing himself truthfully and the external advantages of being known as an establishmentarian. To any issue, observes Kuran, individuals can bring different wants, different needs for social approval, and different needs to express themselves truthfully. These possibilities imply that people can differ in their responses to prevailing social pressures. Of two reform-minded individuals, one may resist social pressures and express her preference truthfully while the other opts to accommodate the pressures through preference falsification. A further implication is that individuals can differ in terms of the social incentives necessary to make them abandon one public preference for another. The switchover points define individuals’ political thresholds. Political thresholds can vary across individuals for the reasons given above.

We are ready now to explain how, when private opinion and public opinion are far apart, a shock of the right kind can make a critical number of disgruntled individuals reach their thresholds for expressing themselves truthfully and put in motion a public-preference cascade (also known as a public-preference bandwagon, or, when the form of preference is clear from the context, a preference cascade). Until the critical mass is reached, changes in individual dispositions are invisible to outsiders, even to one another. Once it is reached, switches in public preferences impel people with thresholds a bit higher than those of the people within the critical mass to add their own voices to the chorus for reform. And support for reform then keeps feeding on itself through growing pro-reform pressure and diminishing pressure favouring the status quo. Each addition to the reformist camp induces further additions until a much larger share of society stands for change. This preference cascade ends when no one is left whose threshold is sufficiently low to be tipped into the reformist camp by one more individual’s prior switch.
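The threshold logic lends itself to a simple numerical illustration. The sketch below is a minimal, hypothetical simulation in the spirit of the Granovetter-style threshold models that Kuran builds on; the threshold values and the update rule are invented for the example and are not taken from Kuran’s own formal model.

    # Minimal threshold-cascade sketch (illustrative; thresholds are invented).
    # threshold[i] is the share of society that must already back reform publicly
    # before person i switches from feigned support of the status quo to open dissent.

    def cascade(thresholds, initial_share=0.0):
        """Return the final share of the population publicly supporting reform."""
        n = len(thresholds)
        share = initial_share
        while True:
            # Everyone whose threshold is at or below the current public share switches.
            new_share = sum(t <= share for t in thresholds) / n
            if new_share == share:        # no one else is tipped; the cascade stops
                return share
            share = new_share

    # A hypothetical ten-person society: one person with a zero threshold tips the rest.
    print(cascade([0.0, 0.1, 0.1, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7]))  # 1.0
    # Raising a couple of thresholds slightly blocks the cascade: the lone dissenter stays isolated.
    print(cascade([0.0, 0.2, 0.2, 0.3, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7]))  # 0.1

The two runs differ only marginally in their threshold distributions, which is one way to see why outwardly similar societies can respond so differently to the same shock, and why the outcome is so hard to predict from public behaviour alone.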

This explosive growth in public support for reform amounts to a political revolution. The revolution will not have been anticipated, because preference falsification had concealed political currents flowing under the visible political landscape. Despite the lack of foresight, the revolution will easily be explained with the benefit of hindsight. Its occurrence lowers the personal risk of publicising past acts of preference falsification. Tales of expressive repression expose the vulnerability of the pre-revolutionary social order. Though many of these tales will be completely true, others will be exaggerated, and still others will be outright lies. Indeed, the revolution creates incentives for people who were long satisfied genuinely with the status quo to pretend that, at heart, they were always reformists waiting for a prudent time to speak out.

Good hindsight does not imply good foresight, Kuran insists. To understand why we were fooled in the past does not provide immunity to being surprised by future social discontinuities. Wherever preference falsification exists, an unanticipated social break is possible.

Kuran developed his theory of “unanticipated revolution” in an April 1989 article that gave the French Revolution of 1789, the Russian Revolution of February 1917, and the Iranian Revolution of 1978-79 as examples of earth-shattering events that caught the world by surprise. When the Berlin Wall fell in November 1989, and several East European communist regimes fell in quick succession, he interpreted the surprise through an illustrative form of his theory. Both articles predict that revolutionary political surprises are a fact of political life; no amount of modelling and empirical research will provide full predictability as long as public preferences are interdependent and preference falsification exists. In a 1995 article, he emphasized that his unpredictability prediction is falsifiable. He stated as a proposition: “The ubiquity of preference falsification makes more revolutionary surprises inevitable.” This proposition “can be debunked,” he wrote, “by constructing a theory that predicts future revolutions accurately,” illustrating through examples that the predictions would need to specify the timing.

Case Studies

Kuran’s Private Truths, Public Lies contains three case studies. They involve the trajectory of East European communism, India’s caste system, and racial inequality and related policies in the United States. Many other scholars have applied the concept of preference falsification in myriad contexts. Some prominent cases are summarised here, and additional cases are referenced.

Communism’s Persistence and Sudden Fall

Persistence of Communism

For many decades, the communist regimes of Eastern Europe, all established during or after World War II as “people’s democracies,” drew public support from millions of dissatisfied citizens. The reason is only partly that authorities punished dissenters. Citizens seeking to prove their loyalty to communism participated in the vilification of nonconformists, even of dissidents whose political positions they privately admired. This insincerity made it highly imprudent to oppose communism publicly. As such, it contributed to the survival of generally despised communist regimes. Vocal dissenters existed. They included Alexander Solzhenitsyn, Andrei Sakharov, and Václav Havel. But East European dissidents were far outnumbered by unhappy citizens who opted to appear supportive of the incumbent regime. By and large, dissidents were people with an enormous capacity for enduring social stigma, harassment, and even imprisonment. In terms of the Kuran model, they had uncommonly low thresholds for speaking their minds. Most East Europeans had much higher thresholds. Accordingly, for all the hardships of life under communism, they remained politically submissive for years on end.

Ideological Influence

One can privately despise a regime without loss of belief in the principles it stands for. By and large, people who came to disdain communist regimes continued, for decades, to believe in communism’s viability. Most attributed its shortcomings to corrupt leaders, remaining sympathetic to communism itself.

Kuran attributes communism’s ideological influence partly to preference falsification on the part of people who felt victimized by it. In concealing their grievances to avoid being punished as an “enemy of the people,” victims had to refrain from communicating their observations about communism’s failures; they also had to pay lip service to Marxist principles. Their knowledge falsification distorted public discourse enormously, sowing confusion about the shortcomings of communism. Not even outspoken dissidents came out unscathed. Until Mikhail Gorbachev’s reforms of the 1980s broke longstanding taboos, most East European dissidents remained committed to some form of socialism.

Well before the fall of communism, during the heyday of Soviet power and apparent invincibility, the dissident Alexander Solzhenitsyn pointed to this phenomenon of intellectual enfeeblement. He said that the Soviet people had become “mental cripples.”

The large dissident literature of the communist world provides evidence. Not even courageous social thinkers escaped the damage of intellectual impoverishment. Certain unusually gifted scholars and statesmen recognised that something essential was wrong. From the Khrushchev era (1953–64) onwards, they spearheaded reforms such as Hungarian market socialism and the Yugoslav labor-managed enterprise. But the architects of these reforms failed to recognize the fatal flaws of the system they tried to salvage. Well into the 1980s, most reformers continued to regard central planning as indispensable. They criticised black markets but rarely understood that communism made black markets inevitable. Likewise, the instigators of Hungary’s crushed revolution of 1956 and the Prague Spring of 1968 were all wedded to “scientific socialism” as a doctrine of emancipation and shared prosperity.

The Hungarian economist János Kornai struggled from the 1960s to the 1980s to reform the Hungarian economy. His history of reform communism characterises the reformers of the 1950s and 1960s (including himself) as naïve. It was ridiculous, he wrote in 1986, to think that the Soviet command system could be reformed in such a way as to ensure efficiency, growth, and equality all at once.

Diverse reformers helped expose communism’s unviability. But the biases of socialist public discourse handicapped even them; their own thinking was warped by its distortions.

The Sudden Fall of East European Communism

Among the most stunning surprises of the twentieth century is the collapse of several communist regimes in 1989. Practically everyone was stunned by the communist collapse, including scholars, statesmen, futurologists, the CIA, the KGB, and other intelligence organisations, dissidents with great insight into their societies (such as Havel and Solzhenitsyn), and even Gorbachev, whose actions unintentionally triggered this momentous transformation.

A major trigger was the Soviet Union’s twin policies of perestroika (restructuring) and glasnost (openness). Perestroika amounted to an acknowledgment, by the Soviet Communist Party, that something was seriously wrong, that the Communist system was not about to overtake the West. Glasnost allowed Soviet citizens to participate in debates about the system, to propose changes, to speak the previously unspeakable, to admit that they had been thinking what had been considered unthinkable. Public discourse broadened, heightening disillusionment with communism and intensifying popular discontent. In the process, millions of East European citizens became increasingly willing to support an opposition movement publicly.

Few would step forward, though, so long as the opposition movement remained minuscule. Hence, no one, not even the East Europeans themselves, knew how ready Eastern Europe had become for regime changes.

In retrospect, a turning point was Gorbachev’s trip to Berlin on 7 October 1989 for celebrations marking the 40th anniversary of East Germany’s communist regime. Crowds filled the streets, chanting “Gorby! Gorby!” The East German police responded with restraint. TV scenes of the demonstrations and the police response signalled, on the one hand, that discontent was very broad and, on the other hand, that the regime was vulnerable. The result was an explosive growth in public opposition, with each demonstration sparking larger demonstrations. The fall of the Berlin Wall came on 9 November. Regimes considered unshakeable crumbled, in quick succession, under the weight of open opposition from the streets.

Preference falsification, for decades a source of communism’s durability, now made the anti-regime movement in public opinion feed on itself. As public opposition grew, East Europeans relatively satisfied with the status quo joined the public-preference cascade to secure a place in the emerging new order. Though the world was caught by surprise, the East European revolutions are now easily understood. In line with Kuran’s theory of unanticipated revolutions, abundant information now points to the existence, all along, of massive hidden opposition to the region’s communist parties.

The data that have surfaced include classified opinion surveys found in Communist Party archives. Like rulers of dictatorial regimes throughout history, Party leaders understood that their support was partly feigned. For self-preservation, they conducted anonymous opinion surveys whose results were treated as state secrets. The once-classified data show little variation until 1985. They indicate substantial belief in the efficiency of socialist institutions, but also far more doubt than public discourse suggested. After 1985, faith in communism plummeted and the perception that communism was unworkable spread.

A puzzle is why the East European leaders who had access to this information, and who thus knew that disillusionment with communism was growing, did not block the explosive rise in opposition. Through massive force early on, they probably could have prevented the cascades that were to unfold. Certain communist leaders thought that reforms would ultimately reverse the process. Others did not realise how quickly private discontent could produce self-reinforcing public opposition.

At the very end, fear simply changed sides: party functionaries who had helped foster repression came to fear ending up on the wrong side of history. In retrospect, it appears that their reluctance to respond forcefully at the outset enabled public opposition to grow explosively in country after country, through a domino effect. Each successful revolution lowered the perceived risks of joining the opposition in other countries.

Religious Preference Falsification

Preference falsification has played a role in the growth and survival of religions. It has contributed also to the shaping of religious institutions and beliefs. Some case studies are summarised here.

India’s Caste System

For several millennia, Indian society has been divided into ranked occupational units, or castes, whose membership is determined primarily by descent. In practice, the caste system became an integral part of Hinduism in most parts of the Indian subcontinent. Over the ages, this system survived anti-Hindu movements, foreign invasions, colonisation, even the challenges of aggressive conversions from Islam and Christianity. Although discrimination against the lower castes became illegal in post-colonial India, caste remains a powerful force in Indian life. Most marriages take place between members of the same caste.

Persistence of Caste System

The extraordinary durability of the caste system has puzzled social scientists, especially because, in most times and places, the system has perpetuated itself with little use of force. A related enigma has been the support given to the system by groups at the foot of the Hindu social hierarchy, namely the Untouchables (Dalits). Because of their deprivations, the Untouchables might be expected to have resisted the caste system en masse.

George Akerlof offers an explanation that hinges on two observations: (1) castes are economically interdependent, and (2) traditional Indian society penalises people who neglect or refuse to abide by caste codes. For example, if a firm hires an Untouchable to fill a post traditionally reserved for an upper caste, the firm loses customers, and the hired Untouchable endures social punishments. Because of these conditions, no individual can break away from the caste system unilaterally. To succeed, he must break away as part of a coalition. But free riding blocks the formation of viable coalitions. Because the rule breaker would suffer negative consequences immediately and without any guarantee of success, no firm and no Untouchable initiates a break.
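A stylised payoff calculation makes the coordination problem concrete. The numbers below are invented for illustration and are not drawn from Akerlof’s paper; they merely encode the idea that social punishment falls as the share of fellow rule-breakers rises.

    # Illustrative payoffs only (invented numbers, not from Akerlof's model).
    # Breaking caste rules alone is punished immediately, so defection pays off
    # only if enough others defect at the same time.

    def payoff_of_breaking(coalition_share, gain=10.0, sanction=15.0):
        """Net payoff to one rule-breaker when a given share of society breaks with him."""
        return gain - sanction * (1 - coalition_share)

    print(payoff_of_breaking(0.0))   # -5.0: a lone defector is strictly worse off
    print(payoff_of_breaking(0.5))   #  2.5: a sufficiently large coalition would gain
    # Yet each prospective member would rather free-ride on others' risk-taking,
    # so the profitable coalition never assembles and no one moves first.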

In Private Truths, Public Lies, Kuran observes that Indians were penalised not just for actions against the caste system but also for expressions of opposition. The caste system discouraged inquiries into its rationale. It also discouraged open criticism of caste rules. By and large, the reservations of Indians remained suppressed. Preference falsification with respect to the system was common, as was knowledge falsification. Based on these findings and focusing on processes that shape public opinion and public knowledge, Kuran extends Akerlof’s theory.

Reticence to publicise preferences and knowledge honestly kept Indians in the dark, Kuran argues, about opportunities for forming anti-caste coalitions. It made them perceive the caste system as inescapable, even in contexts where, collectively, they had the power to change, even overthrow, the system. Hence, before the 1800s, negotiations for a more egalitarian social contract did not get off the ground. Reform-minded Indians could not find each other, let alone initiate discussions leading to reforms.

Caste Ideology

The caste system was legitimised through a tenet of Hinduism, the doctrine of karma. According to this doctrine, an individual’s behaviour in one life affects his social status in his later lives. If a person accepts his caste of birth and fulfils the tasks expected of him without making a fuss, he gets reincarnated into a higher caste. If instead he neglects the duties of the caste into which he was born or challenges the caste system, he gets demoted in his next life. Accordingly, the karma doctrine treats prevailing status differences as the fair and merited consequences of past conduct.

Many ethnographies find that, to close friends, low-ranked Indians will confess doubts about karma, if not outright lack of belief in it. But preference and knowledge falsification by Indians, even by substantial numbers, does not imply that the doctrine is merely a façade. Over countless generations, many Indians internalised the doctrine of karma. Belief in social mobility through reincarnation has been common; so has belief in ritual impurity, which Hinduism treats as both a source and a manifestation of social inferiority.

These concepts emerged so long ago that their origins are poorly understood. It is clear, though, that, once the caste system got established, the highest-status castes, the brahmins, had incentives to perpetuate the caste system by punishing Indians who misbehaved or expressed disapproval. As Kuran explains, preference and knowledge falsification made some Indians obey the system for fear of reprisals and others out of conviction. In either case, public discourse facilitated the acceptance of karma-based status differences. Most Indians remained ignorant of concepts critical to treating their conditions as unacceptable. Insofar as Indians genuinely believed in caste ideology, the caste system strengthened.

In the 19th century, the caste system came to be questioned widely in public. The key trigger was that growing numbers of Indians became acquainted with egalitarian European movements, such as democratisation, liberalism, and socialism. The ensuing Indian reform movement led, in the second half of the 20th century, to a system of caste-based education and job quotas meant to assist the most disadvantaged groups within Indian society, including the Untouchables.

Shii Islam’s Taqiyya Doctrine

After Islam’s Sunni-Shii schism in 661 CE, Sunni leaders took to persecuting Shiis living in their domains. Their campaign to extinguish Shiism included requiring suspected Shiis to insult the founders of Shiism. Refusal to comply could result in imprisonment, torture, even death. In response, Shii leaders adopted a doctrine that allowed individual Shiis to conceal their Shii beliefs in the face of danger, provided they met two criteria. First, the preference falsifiers would stay devoted, in their hearts, to Shii tenets; and second, they would intend, as soon as the danger passed, to return to practicing Shiism openly. This form of religious preference falsification was known as taqiyya.

Shii leaders gave taqiyya religious legitimacy through Quran verses that speak of God’s omniscience; God saw, on the one hand, people’s private and public preferences, and, on the other hand, the conditions making religious preference falsification a matter of survival. He would sympathise with taqiyya exercised for legitimate reasons by people who, at least privately, retained the correct faith.

Gradually, the taqiyya doctrine turned into a justification for Shii political passivity. Stretching the meaning of this doctrine, many Shiis living under an oppressive regime used it to rationalise inaction, even apathy. In the 20th century, growing numbers of Shii leaders took to telling their followers that taqiyya had been a key source of Shii political and economic weakness. The mastermind of Iran’s Revolution of 1979, Ayatollah Ruhollah Khomeini, opened his campaign to topple the Pahlavi Monarchy by proclaiming: “The time for taqiyya is over. Now is the time for us to stand up and proclaim the things we believe in.” The success of Khomeini’s campaign involved millions of Iranians joining street protests against the Pahlavi regime, at the risk of being caught, if not killed on the spot, by the regime’s widely feared security forces.

Once it consolidated power, Iran’s Islamic Republic founded by Khomeini’s team did not institute religious freedoms. In forcing Iranians to live according to its specific interpretation of Shiism, it effectively induced a new form of taqiyya. Having the regime’s morality police enforce a conservative dress code for women (hijab) resulted in rampant religious preference falsification of a new kind. Against their will, millions of Iranian women started covering their hair and abiding by the regime’s modesty standards in myriad ways, simply to avoid punishment. Evidence lies in the commonness of mini headscarves that cover just enough hair to pass as veiled. These headscarves are known pejoratively as “bad hijab” or “slutty hijab.” They represent attempts by Iranian women to minimise the extent of their religious preference falsification.

Crypto-Protestantism in France, 1685-1787

Between the Edict of Nantes (1598) and the Edict of Fontainebleau (1685), Protestantism enjoyed toleration in France. The latter edict inaugurated a period when Protestantism was officially proscribed, except in Alsace and Lorraine. Many French Protestants emigrated to Switzerland, Great Britain, British North America, Prussia, and other predominantly Protestant territories. At least officially, the Protestants who stayed behind converted to Roman Catholicism. Of these converts, some practiced Catholicism in public even as they performed Protestant rites privately. Such crypto-Protestantism is a form of religious preference falsification.

During the Revocation period, appearing as a Catholic was a matter of survival. But the required public performances varied across groups and by location, as did crypto-Protestant practices. For example, the Jaucourt family, a noble crypto-Protestant house, discreetly fulfilled its religious commitments at the Protestant chapels of Scandinavian embassies. Catholic authorities, both state officials and Catholic clergy, looked the other way.

For most of the crypto-Protestant population, however, the medium for performing Protestant rites was the Désert Church. This was a clandestine network of congregations that operated throughout France with the help of lay crypto-Protestants and Reformed Protestant clerics. Initially, these clerics were domestically trained. Eventually, they were all foreign-trained.

Public performance of Catholic rites gave crypto-Protestants access to civil status deeds as well as official registrations of births, baptisms, and marriages. These incentives for religious preference falsification were not trivial. For example, legal marriage gave offspring legitimacy and inheritance rights. Désert marriages had no legal standing.

French Protestants regained the right to legal marriage as Protestants 102 years after the Edict of Fontainebleau, with the Edict of Tolerance (1787).

Covert Judaism and Islam during the Portuguese Inquisition

In 1496, King Manuel I of Portugal decreed the expulsion of Jews and free Moors from his kingdom and dominions, unless they converted to Christianity. Four decades later, during the reign of John III, Portugal’s Holy Office of the Inquisition was established. It began to persecute people accused of crypto-Judaism, crypto-Islam, or some other form of religious preference falsification.

The Portuguese Inquisition functioned as a persecutory organisation against covert practices of other religions, but also against heresies and deviations from sexual mores considered un-Christian, such as bigamy and sodomy. The Inquisition pursued these missions until at least the 1770s, when the government of the Marquis of Pombal repurposed this institution. The Portuguese Inquisition was terminated in 1821.

Iberian-Jewish (Sephardic) converts to Catholicism and their descendants were all known as “New Christians.” The Islamic converts and their descendants were known collectively as Mouriscos. On pain of social rejection and inquisitorial persecution, they were required to display, convincingly enough, their adherence to Roman Catholicism.

Blood purity (limpeza de sangue) statutes regulated access to Portugal’s public posts and honorific distinctions, denying a broad range of privileges to New Christians on account of their heredity. But they could move upward by a combination of marrying “Old Christians” and having records of their roots altered. Diverse entities, including the Inquisition, were used to help New Christians whitewash their heritage through “blood purity” certificates, invariably in return for fees. This whitewashing process involved knowledge falsification by both sides.

The credibility of blood purity certificates depended on the issuing entity’s place in Portugal’s hierarchy. Accordingly, New Christians could keep rising in social status through blood purity certificates of increasing rigor. Certificates backed by more rigorous, higher-level investigations could also be obtained to counter rumours of impure ancestry. Many Portuguese families with New Christian roots progressed upwards in social status by creating availability cascades of positive blood purity certifications. Such families bolstered the availability of information pointing to blood purity also by placing relatives in the Roman Catholic clergy. These placements themselves served as certifications, for the clergy was closed to Christians of “impure” ancestry (namely Jewish, Moorish, and Sub-Saharan African).

Gender Norms

According to a 2020 study, by Leonardo Bursztyn, Alessandra González, and David Yanagizawa-Drott, the vast majority of young married men in Saudi Arabia express private beliefs in support of women working outside the home. At the same time, they substantially underestimate the degree to which other similar men support it. Once they become informed about the widespread nature of the support, they increasingly help their wives obtain jobs.

Ethnic Conflict

In “Ethnic norms and their transformation through reputational cascades,” Kuran applies the concept of preference falsification to ethnic conflict. The article focuses on ethnification, the process whereby ethnic origins, ethnic symbols, and ethnic ties gain salience and practical significance.

Ethnicity often serves as a source of identity without preventing cooperation, exchanges, socialising and intermarriage across ethnic boundaries. In such contexts, social forces may preserve that condition indefinitely. People who harbour ill-will toward other ethnic groups will keep their hatreds in check to avoid being punished for divisiveness. But if political and economic shocks weaken those forces, a process of ethnification may get under way. Specifically, people may start highlighting their ethnic particularities and discriminating against ethnic others. The emerging social pressures will then generate further ethnification through a self-reinforcing process, possibly leading to spiralling ethnic conflict.

An implication of Kuran’s analysis is that culturally, politically, economically, and demographically similar countries may exhibit very different levels of ethnic activity. Another is that ethnically based hatreds may constitute by-products of ethnification rather than its mainspring.

Yugoslav Civil War

Kuran uses the above argument to illuminate how the former Yugoslavia, once touted as the model of a civilised multi-ethnic nation, became ethnically segregated over a short period and dissolved into ethnically based enclaves at war with one another. Preference falsification increased the intensity of the Yugoslav Civil War, he suggests; also, it accelerated Yugoslavia’s break-up into ethnically based independent republics.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Preference_falsification >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Emotional Contagion

Introduction

Emotional contagion is a form of social contagion that involves the spontaneous spread of emotions and related behaviours. Such emotional convergence can happen from one person to another, or in a larger group. Emotions can be shared across individuals in many ways, both implicitly and explicitly. For instance, conscious reasoning, analysis, and imagination have all been found to contribute to the phenomenon. The behaviour has been found in humans, other primates, dogs, and chickens.

Emotional contagion is important to personal relationships because it fosters emotional synchrony between individuals. A broader definition of the phenomenon suggested by Schoenewolf is:

“a process in which a person or group influences the emotions or behavior of another person or group through the conscious or unconscious induction of emotion states and behavioral attitudes.”

One view developed by Elaine Hatfield, et al., is that this can be done through automatic mimicry and synchronisation of one’s expressions, vocalisations, postures, and movements with those of another person. When people unconsciously mirror their companions’ expressions of emotion, they come to feel reflections of those companions’ emotions.

In a 1993 paper, psychologists Elaine Hatfield, John Cacioppo, and Richard Rapson define emotional contagion as “the tendency to automatically mimic and synchronize expressions, vocalizations, postures, and movements with those of another person’s [sic] and, consequently, to converge emotionally”.

Hatfield, et al., theorise emotional contagion as a two-step process: First, we imitate people (e.g. if someone smiles at you, you smile back). Second, our own emotional experiences change based on the non-verbal signals of emotion that we give off. For example, smiling makes one feel happier, and frowning makes one feel worse. Mimicry seems to be one foundation of emotional movement between people.

Emotional contagion and empathy share similar characteristics, with the exception of the ability to differentiate between personal and pre-personal experiences, a process known as individuation. In The Art of Loving (1956), social psychologist Erich Fromm explores these differences, suggesting that autonomy is necessary for empathy, which is not found in emotional contagion.

Etymology

James Mark Baldwin addressed “emotional contagion” in his 1897 work Social and Ethical Interpretations in Mental Development, though using the term “contagion of feeling”. Various 20th century scholars discussed the phenomenon under the heading “social contagion”. The term “emotional contagion” first appeared in Arthur S. Reber’s 1985 The Penguin Dictionary of Psychology.

Influencing Factors

Several factors determine the rate and extent of emotional convergence in a group, including membership stability, mood-regulation norms, task interdependence, and social interdependence. Besides these event-structure properties, there are personal properties of the group’s members, such as openness to receive and transmit feelings, demographic characteristics, and dispositional affect that influence the intensity of emotional contagion.

Research

Research on emotional contagion has been conducted from a variety of perspectives, including organisational, social, familial, developmental, and neurological. While early research suggested that conscious reasoning, analysis, and imagination accounted for emotional contagion, some forms of more primitive emotional contagion are far more subtle, automatic, and universal.

Hatfield, Cacioppo, and Rapson’s 1993 research into emotional contagion reported that people’s conscious assessments of others’ feelings were heavily influenced by what others said. People’s own emotions, however, were more influenced by others’ nonverbal clues as to what they were really feeling. Recognizing emotions and acknowledging their origin can be one way to avoid emotional contagion. Transference of emotions has been studied in a variety of situations and settings, with social and physiological causes being two of the largest areas of research.

In addition to the social contexts discussed above, emotional contagion has been studied within organisations. Schrock, Leaf, and Rohr (2008) say organizations, like societies, have emotion cultures that consist of languages, rituals, and meaning systems, including rules about the feelings workers should, and should not, feel and display. They state that emotion culture is quite similar to “emotion climate”, otherwise known as morale, organisational morale, and corporate morale. Furthermore, Worline, Wrzesniewski, and Rafaeli (2002, p. 318) mention that organizations have an overall “emotional capability”, while McColl-Kennedy and Smith (2006) examine “emotional contagion” in customer interactions. These terms arguably all attempt to describe a similar phenomenon; each term differs in subtle and somewhat indistinguishable ways.

Controversy

A controversial experiment demonstrating emotional contagion by using the social media platform Facebook was carried out in 2014 on 689,000 users by filtering positive or negative emotional content from their news feeds. The experiment sparked uproar among people who felt the study violated personal privacy. The 2014 publication of a research paper resulting from this experiment, “Experimental evidence of massive-scale emotional contagion through social networks”, a collaboration between Facebook and Cornell University, is described by Tony D. Sampson, Stephen Maddison, and Darren Ellis (2018) as a “disquieting disclosure that corporate social media and Cornell academics were so readily engaged with unethical experiments of this kind.” Tony D. Sampson et al. criticise the notion that “academic researchers can be insulated from ethical guidelines on the protection for human research subjects because they are working with a social media business that has ‘no obligation to conform’ to the principle of ‘obtaining informed consent and allowing participants to opt out’.” A subsequent study confirmed the presence of emotional contagion on Twitter without manipulating users’ timelines.

Beyond the ethical concerns, some scholars criticised the methods and reporting of the Facebook findings. John Grohol, writing for Psych Central, argued that despite its title and claims of “emotional contagion,” this study did not look at emotions at all. Instead, its authors used an application (called “Linguistic Inquiry and Word Count” or LIWC 2007) that simply counted positive and negative words in order to infer users’ sentiments. A shortcoming of the LIWC tool is that it does not understand negations. Hence, the tweet “I am not happy” would be scored as positive: “Since the LIWC 2007 ignores these subtle realities of informal human communication, so do the researchers.” Grohol concluded that, given these subtleties, the effect size of the findings is little more than a “statistical blip”:

Kramer et al. (2014) found a 0.07%—that’s not 7 percent, that’s 1/15th of one percent!!—decrease in negative words in people’s status updates when the number of negative posts on their Facebook news feed decreased. Do you know how many words you’d have to read or write before you’ve written one less negative word due to this effect? Probably thousands.
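Both points can be made concrete with a toy example. The sketch below uses a made-up word list (not LIWC’s actual dictionaries) to show how bare word counting misreads a negated sentence, and it spells out the arithmetic behind the “statistical blip” remark.

    # Toy word-count "sentiment" scorer in the style criticised above.
    # The word lists are invented for the example; they are not LIWC's dictionaries.

    POSITIVE = {"happy", "great", "love"}
    NEGATIVE = {"sad", "angry", "hate"}

    def naive_score(text):
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    print(naive_score("I am not happy"))   # +1: the negation is simply ignored

    # Rough arithmetic behind the effect size: a 0.07% (0.0007) drop in the rate of
    # negative words is roughly one fewer negative word per 1 / 0.0007 words written.
    print(round(1 / 0.0007))               # 1429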

Types

Emotions can be shared and mimicked in many ways. Taken broadly, emotional contagion can be either: implicit, undertaken by the receiver through automatic or self-evaluating processes; or explicit, undertaken by the transmitter through a purposeful manipulation of emotional states, to achieve a desired result.

Implicit

Unlike cognitive contagion, emotional contagion is less conscious and more automatic. It relies mainly on non-verbal communication, although emotional contagion can and does occur via telecommunication. For example, people interacting through e-mails and chats are affected by the other’s emotions, without being able to perceive the non-verbal cues.

One view, proposed by Hatfield and colleagues, describes emotional contagion as a primitive, automatic, and unconscious behaviour that takes place through a series of steps. When a receiver is interacting with a sender, he perceives the emotional expressions of the sender. The receiver automatically mimics those emotional expressions. Through the process of afferent feedback, these new expressions are translated into feeling the emotions the sender feels, thus leading to emotional convergence.

Another view, emanating from social comparison theories, sees emotional contagion as demanding more cognitive effort and being more conscious. According to this view, people engage in social comparison to see if their emotional reaction is congruent with that of the persons around them. The recipient uses the emotion as a type of social information to understand how he or she should be feeling. People respond differently to positive and negative stimuli; negative events tend to elicit stronger and quicker emotional, behavioural, and cognitive responses than neutral or positive events. So unpleasant emotions are more likely to lead to mood contagion than are pleasant emotions. Another variable is the energy level at which the emotion is displayed. Higher energy draws more attention to it, so the same emotional valence (pleasant or unpleasant) expressed with high energy is likely to lead to more contagion than if expressed with low energy.

Explicit

Aside from the automatic infection of feelings described above, there are also times when others’ emotions are being manipulated by a person or a group in order to achieve something. This can be a result of intentional affective influence by a leader or team member. If this person wants to convince the others of something, he may do so by sweeping them up in his enthusiasm. In such a case, his positive emotions are an act with the purpose of “contaminating” the others’ feelings. A different kind of intentional mood contagion would be, for instance, giving the group a reward or treat in order to improve their mood.

The discipline of organisational psychology researches aspects of emotional labour. This includes the need to manage emotions so that they are consistent with organisational or occupational display rules, regardless of whether they are discrepant with internal feelings. In regard to emotional contagion, in work settings that require a certain display of emotions, one finds oneself obligated to display, and consequently feel, these emotions. If surface acting develops into deep acting, emotional contagion is the byproduct of intentional affective impression management.

In Workplaces and Organisations

Intra-Group

Many organisations and workplaces encourage teamwork. Studies conducted by organisational psychologists highlight the benefits of work teams. When people work together in teams, emotions come into play and a group emotion is formed.

The group’s emotional state influences factors such as cohesiveness, morale, rapport, and the team’s performance. For this reason, organisations need to take into account the factors that shape the emotional state of the work-teams, in order to harness the beneficial sides and avoid the detrimental sides of the group’s emotion. Managers and team leaders should be cautious with their behaviour, since their emotional influence is greater than that of a “regular” team member: leaders are more emotionally “contagious” than others.

Employee/Customer

The interaction between service employees and customers affects both customers’ assessments of service quality and their relationship with the service provider. Positive affective displays in service interactions are positively associated with important customer outcomes, such as intention to return and to recommend the store to a friend. It is in the interest of organisations that their customers be happy, since a happy customer is a satisfied one. Research has shown that the emotional state of the customer is directly influenced by the emotions displayed by the employee/service provider via emotional contagion. But this influence depends on the authenticity of the employee’s emotional display: if the employee is only surface-acting, the contagion is poor, and the beneficial effects will not occur.

Neurological Basis

Vittorio Gallese posits that mirror neurons are responsible for intentional attunement in relation to others. Gallese and colleagues at the University of Parma found a class of neurons in the premotor cortex that discharge either when macaque monkeys execute goal-related hand movements or when they watch others doing the same action. One class of these neurons fires during action execution and observation, and also in response to the sound of the same action. Research in humans shows an activation of the premotor cortex and parietal area of the brain during both action perception and execution.

Gallese says humans understand emotions through a simulated shared body state. The observers’ neural activation enables a direct experiential understanding. “Unmediated resonance” is a similar theory by Goldman and Sripada (2004). Empathy can be a product of the functional mechanism in our brain that creates embodied simulation. The other we see or hear becomes the “other self” in our minds. Other researchers have shown that observing someone else’s emotions recruits brain regions involved in:

  1. Experiencing similar emotions; and
  2. Producing similar facial expressions.

This combination indicates that the observer activates:

  1. A representation of the emotional feeling of the other individual which leads to emotional contagion; and
  2. A motor representation of the observed facial expression that could lead to facial mimicry.

In the brain, understanding and sharing other individuals’ emotions would thus be a combination of emotional contagion and facial mimicry. Importantly, more empathic individuals experience more brain activation in emotional regions while witnessing the emotions of other individuals.

Amygdala

The amygdala is one part of the brain circuitry that underlies empathy, allows for emotional attunement, and creates a pathway for emotional contagion. The basal areas, including the brain stem, form a tight loop of biological connectedness, re-creating in one person the physiological state of the other. Psychologist Howard Friedman thinks this is why some people can move and inspire others. The use of facial expressions, voices, gestures, and body movements transmits emotions to an audience from a speaker.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Emotional_contagion >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Social Emotions

Introduction

Social emotions are emotions that depend upon the thoughts, feelings or actions of other people, “as experienced, recalled, anticipated or imagined at first hand”. Examples are embarrassment, guilt, shame, jealousy, envy, coolness, elevation, empathy, and pride. In contrast, basic emotions such as happiness and sadness only require the awareness of one’s own physical state. Therefore, the development of social emotions is tightly linked with the development of social cognition, the ability to imagine other people’s mental states, which generally develops in adolescence. Studies have found that children as young as 2 to 3 years of age can express emotions resembling guilt and remorse. However, while five-year-old children are able to imagine situations in which basic emotions would be felt, the ability to describe situations in which social emotions might be experienced does not appear until seven years of age.

People may not only share emotions with others, but may also experience similar physiological arousal to others if they feel a sense of social connectedness to the other person. A laboratory-based study by Cwir, Carr, Walton, and Spencer (2011) showed that, when a participant felt a sense of social connectedness to a stranger (a research confederate), the participant experienced similar emotional states and physiological responses to those of the stranger while observing the stranger perform a stressful task.

Social emotions are sometimes called moral emotions, because they play an important role in morality and moral decision making. In neuroeconomics, the role social emotions play in game theory and economic decision-making is just starting to be investigated.

Behavioural Neuroscience

After functional imaging, functional magnetic resonance imaging (fMRI) in particular, became popular, researchers began to study economic decision-making with this technology. It allows researchers to investigate, on a neurological level, the role emotions play in decision-making.

Developmental Picture

The ability to describe situations in which a social emotion will be experienced emerges at around age 7, and, by adolescence, the experience of social emotion permeates everyday social exchange. Studies using fMRI have found that different brain regions are involved in different age groups when performing social-cognitive and social-emotional tasks. While brain areas such as the medial prefrontal cortex (MPFC), superior temporal sulcus (STS), temporal poles (TP) and precuneus bordering the posterior cingulate cortex are activated in both adults and adolescents when they reason about the intentionality of others, the medial PFC is more activated in adolescents and the right STS more in adults. Similar age effects were found with younger participants: in tasks that involve theory of mind, an increase in age was correlated with increased activation in the dorsal part of the MPFC and decreased activity in its ventral part.

Studies that compare adults with adolescents in their processing of basic and social emotions also suggest developmental shifts in the brain areas involved. Compared with adolescents, adults show stronger activity in the left temporal pole when they read stories that elicit social emotions. The temporal poles are thought to store abstract social knowledge. This suggests that adults might use social semantic knowledge more often than adolescents when thinking about social-emotional situations.

Neuroeconomics

To investigate the function of social emotions in economic behaviour, researchers are interested in the differences in the brain regions involved when participants are playing with, or think that they are playing with, another person as opposed to a computer. A study with fMRI found that, for participants who tend to cooperate in two-person “trust and reciprocity” games, believing that they are playing with another participant activated the prefrontal cortex, while believing that they are playing with a computer did not. This difference was not seen in players who tend not to cooperate. The authors interpret this difference as reflecting the theory of mind that co-operators employ to anticipate their opponents’ strategies. This is an example of the way social decision making differs from other forms of decision making.

A central criticism from behavioural economics is that people do not always act in the fully rational way that many economic models assume. For example, in the ultimatum game, two players are asked to divide a certain amount of money, say x. One player, called the proposer, decides the ratio by which the money gets divided. The other player, called the responder, decides whether or not to accept this offer. If the responder accepts the offer, say, y amount of money, then the proposer gets x-y and the responder gets y. But if the responder refuses the offer, both players get nothing. This game is widely studied in behavioural economics. According to the rational agent model, the most rational way for the proposer to act is to make y as small as possible, and the most rational way for the responder to act is to accept the offer, since a small amount of money is better than no money. However, these experiments tend to find that proposers offer around 40% of x, and that offers below about 20% get rejected by responders. Using fMRI scans, researchers found that social emotions elicited by the offers may play a role in explaining the result. When offers are unfair as opposed to fair, three regions of the brain are active: the dorsolateral prefrontal cortex (DLPFC), the anterior cingulate cortex (ACC), and the insula. The insula is an area active in registering body discomfort; it is activated when people feel, among other things, social exclusion. The authors interpret activity in the insula as the aversive reaction one feels when faced with unfairness, activity in the DLPFC as processing the future reward from keeping the money, and activity in the ACC as that of an arbiter weighing these two conflicting inputs to make a decision. Whether or not the offer gets rejected can be predicted (with a correlation of 0.45) from the level of the responder’s insula activity.
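As a concrete illustration of the game described above, the sketch below encodes the ultimatum game’s payoffs. The “empirical” rejection rule is a stylised assumption based on the roughly 20%/40% figures mentioned, not a model taken from the cited fMRI study.

    # Minimal ultimatum-game sketch. The "empirical" rejection rule below is a
    # stylised stand-in for the observed behaviour described above, not a model
    # from the cited study.

    def ultimatum(x, y, responder_accepts):
        """Split a pot of size x: proposer keeps x - y, responder gets y, if accepted."""
        if responder_accepts(y, x):
            return x - y, y        # offer accepted: both get their shares
        return 0, 0                # offer rejected: both get nothing

    rational = lambda y, x: y > 0            # rational-agent benchmark: accept anything positive
    empirical = lambda y, x: y >= 0.2 * x    # stylised finding: offers below ~20% are refused

    print(ultimatum(100, 1, rational))       # (99, 1): the rational-agent prediction
    print(ultimatum(100, 1, empirical))      # (0, 0): an unfair offer is rejected
    print(ultimatum(100, 40, empirical))     # (60, 40): the typical ~40% offer is accepted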

Neuroeconomics and social emotions are also tightly linked in the study of punishment. Research using PET scans has found that, when players punish other players, the nucleus accumbens (part of the striatum), a region known for processing rewards derived from actions, becomes active. This shows that we not only feel hurt when we become victims of unfairness, but we also find it psychologically rewarding to punish the wrongdoer, even at a cost to our own utility.

Social or Moral Aspect

Some social emotions are also referred to as moral emotions because of the fundamental role they play in morality. For example, guilt is the discomfort and regret one feels over one’s wrongdoing. It is a social emotion, because it requires the perception that another person is being hurt by the act; and it also has implications for morality, in that the guilty actor, in virtue of feeling distressed and guilty, accepts responsibility for the wrongdoing, which might create a desire to make amends or to punish the self.

Not all social emotions are moral emotions. Pride, for instance, is a social emotion which involves the perceived admiration of other people, but research on the role it plays in moral behaviours yields problematic results.

Empathic Response

Empathy is defined by Eisenberg and colleagues as an affective response that stems from the apprehension or comprehension of another’s emotional state or condition and is similar to what the other person is feeling or would be expected to feel. Guilt, which is a social emotion with strong moral implication, is also strongly correlated with empathic responsiveness; whereas shame, an emotion with less moral flavour, is negatively correlated with empathic responsiveness, when controlling for guilt.

Perceived controllability also plays an important role in modulating people’s socio-emotional reactions and empathic responses. For example, participants who are asked to evaluate other people’s academic performance are more likely to assign punishments when the low performance is interpreted as reflecting low effort, as opposed to low ability. Stigmas also elicit more empathic response when they are perceived as uncontrollable (e.g. having a biological origin, such as a disease), as opposed to controllable (e.g. having a behavioural origin, such as obesity).

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Social_emotions >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Social Stigma

Introduction

Stigma, originally referring to the visible marking of people considered inferior, has evolved in modern society into a social concept that applies to different groups or individuals based on certain characteristics such as socioeconomic status, culture, gender, race, religion or health status. Social stigma can take different forms and depends on the specific time and place in which it arises. Once a person is stigmatised, they are often associated with stereotypes that lead to discrimination, marginalisation, and psychological problems.

This process of stigmatisation not only affects the social status and behaviour of stigmatised persons, but also shapes their own self-perception, which can lead to psychological problems such as depression and low self-esteem. Stigmatized people are often aware that they are perceived and treated differently, which can start at an early age. Research shows that children are aware of cultural stereotypes at an early age, which affects their perception of their own identity and their interactions with the world around them.

Description

Stigma (plural stigmas or stigmata) is a Greek word that in its origins referred to a type of marking or tattoo that was cut or burned into the skin of people with criminal records, slaves, or those seen as traitors in order to visibly identify them as supposedly blemished or morally polluted persons. These individuals were to be avoided, particularly in public places.

Social stigmas can occur in many different forms. The most common deal with culture, gender, race, religion, illness and disease. Individuals who are stigmatized usually feel different and devalued by others.

Stigma may also be described as a label that associates a person with a set of unwanted characteristics that form a stereotype. The label is also affixed to the person: once people identify and label one’s differences, others will assume that is just how things are, and the person will remain stigmatised until the stigmatising attribute is undetectable. A considerable amount of generalisation is required to create groups, meaning that people will put someone in a general group regardless of how well the person actually fits into that group. However, the attributes that society selects differ according to time and place. What is considered out of place in one society could be the norm in another. When society categorises individuals into certain groups, the labelled person is subjected to status loss and discrimination. Society will start to form expectations about those groups once the cultural stereotype is secured.

Stigma may affect the behaviour of those who are stigmatised. Those who are stereotyped often start to act in ways that their stigmatisers expect of them. It not only changes their behaviour, but it also shapes their emotions and beliefs. Members of stigmatised social groups often face prejudice that causes depression (i.e. deprejudice). These stigmas place a person’s social identity in threatening situations, which can result in low self-esteem. Because of this, identity theories have become highly researched. Identity threat theories can go hand-in-hand with labelling theory.

Members of stigmatised groups start to become aware that they are not being treated the same way and know they are likely being discriminated against. Studies have shown that “by 10 years of age, most children are aware of cultural stereotypes of different groups in society, and children who are members of stigmatized groups are aware of cultural types at an even younger age.”

Main Theories and Contributions

Émile Durkheim

French sociologist Émile Durkheim was the first to explore stigma as a social phenomenon in 1895. He wrote:

Imagine a society of saints, a perfect cloister of exemplary individuals. Crimes or deviance, properly so-called, will there be unknown; but faults, which appear venial to the layman, will there create the same scandal that the ordinary offense does in ordinary consciousnesses. If then, this society has the power to judge and punish, it will define these acts as criminal (or deviant) and will treat them as such.

Erving Goffman

Erving Goffman described stigma as a phenomenon whereby an individual with an attribute which is deeply discredited by their society is rejected as a result of the attribute. Goffman saw stigma as a process by which the reaction of others spoils normal identity.

More specifically, he explained that what constituted this attribute would change over time. “It should be seen that a language of relationships, not attributes, is really needed. An attribute that stigmatizes one type of possessor can confirm the usualness of another, and therefore is neither creditable nor discreditable as a thing in itself.”

In Goffman’s theory of social stigma, a stigma is an attribute, behavior, or reputation which is socially discrediting in a particular way: it causes an individual to be mentally classified by others in an undesirable, rejected stereotype rather than in an accepted, normal one. Goffman defined stigma as a special kind of gap between virtual social identity and actual social identity:

While a stranger is present before us, evidence can arise of his possessing an attribute that makes him different from others in the category of persons available for him to be, and of a less desirable kind—in the extreme, a person who is quite thoroughly bad, or dangerous, or weak. He is thus reduced in our minds from a whole and usual person to a tainted discounted one. Such an attribute is a stigma, especially when its discrediting effect is very extensive […] It constitutes a special discrepancy between virtual and actual social identity.

The Stigmatised, The Normal, and The Wise

Goffman divides the individual’s relation to a stigma into three categories:

  • The stigmatised being those who bear the stigma;
  • The normals being those who do not bear the stigma; and
  • The wise being those among the normals who are accepted by the stigmatised as understanding and accepting of their condition (borrowing the term from the homosexual community).

The wise normals are not merely those who are in some sense accepting of the stigma; they are, rather, “those whose special situation has made them intimately privy to the secret life of the stigmatized individual and sympathetic with it, and who find themselves accorded a measure of acceptance, a measure of courtesy membership in the clan.” That is, they are accepted by the stigmatised as “honorary members” of the stigmatised group. “Wise persons are the marginal men before whom the individual with a fault need feel no shame nor exert self-control, knowing that in spite of his failing he will be seen as an ordinary other.” Goffman notes that the wise may in certain social situations also bear the stigma with respect to other normals: that is, they may also be stigmatised for being wise. An example is a parent of a homosexual; another is a white woman who is seen socialising with a black man (assuming social milieus in which homosexuals and dark-skinned people are stigmatised).

A 2012 study showed empirical support for the existence of the own, the wise, and normals as separate groups; the wise, however, appeared in two forms: active wise and passive wise. The active wise encouraged challenging stigmatisation and educating stigmatisers, but the passive wise did not.

Ethical Considerations

Goffman emphasizes that the stigma relationship is one between an individual and a social setting with a given set of expectations; thus, everyone at different times will play both roles of stigmatised and stigmatiser (or, as he puts it, “normal”). Goffman gives the example that “some jobs in America cause holders without the expected college education to conceal this fact; other jobs, however, can lead the few of their holders who have a higher education to keep this a secret, lest they are marked as failures and outsiders. Similarly, a middle-class boy may feel no compunction in being seen going to the library; a professional criminal, however, writes [about keeping his library visits secret].” He also gives the example of blacks being stigmatised among whites, and whites being stigmatised among blacks.

Individuals actively cope with stigma in ways that vary across stigmatised groups, across individuals within stigmatised groups, and within individuals across time and situations.

The Stigmatised

The stigmatised are ostracised, devalued, scorned, shunned and ignored. They experience discrimination in the realms of employment and housing. Perceived prejudice and discrimination are also associated with negative physical and mental health outcomes. Young people who experience stigma associated with mental health difficulties may face negative reactions from their peer group. Those who perceive themselves to be members of a stigmatised group, whether it is obvious to those around them or not, often experience psychological distress and many view themselves contemptuously.

Although the experience of being stigmatised may take a toll on self-esteem, academic achievement, and other outcomes, many people with stigmatised attributes have high self-esteem, perform at high levels, are happy and appear to be quite resilient to their negative experiences.

There is also “positive stigma”: it is possible to be too rich or too smart. This is noted by Goffman (1963:141) in his discussion of leaders, who are given license to deviate from some behavioural norms because they have contributed far above the expectations of the group; this very excess can itself result in social stigma.

The Stigmatiser

From the perspective of the stigmatiser, stigmatisation involves threat, aversion and sometimes the depersonalisation of others into stereotypic caricatures. Stigmatising others can serve several functions for an individual, including self-esteem enhancement, control enhancement, and anxiety buffering, through downward comparison: comparing oneself to less fortunate others can increase one’s own subjective sense of well-being and therefore boost one’s self-esteem.

21st-century social psychologists consider stigmatising and stereotyping to be a normal consequence of people’s cognitive abilities and limitations, and of the social information and experiences to which they are exposed.

Current views of stigma, from the perspectives of both the stigmatiser and the stigmatised person, consider the process of stigma to be highly situationally specific, dynamic, complex and nonpathological.

Gerhard Falk

German-born sociologist and historian Gerhard Falk wrote:

All societies will always stigmatize some conditions and some behaviors because doing so provides for group solidarity by delineating “outsiders” from “insiders”.

Falk describes stigma based on two categories, existential stigma and achieved stigma. He defines existential stigma as “stigma deriving from a condition which the target of the stigma either did not cause or over which he has little control.” He defines achieved stigma as “stigma that is earned because of conduct and/or because they contributed heavily to attaining the stigma in question.”

Falk concludes that “we and all societies will always stigmatize some condition and some behavior because doing so provides for group solidarity by delineating ‘outsiders’ from ‘insiders’”. Stigmatisation, at its essence, is a challenge to one’s humanity, for both the stigmatised person and the stigmatiser. The majority of stigma researchers have found that the process of stigmatisation has a long history and is cross-culturally ubiquitous.

Link and Phelan Stigmatisation Model

Bruce Link and Jo Phelan propose that stigma exists when four specific components converge:

  1. Individuals differentiate and label human variations.
  2. Prevailing cultural beliefs tie those labeled to adverse attributes.
  3. Labelled individuals are placed in distinct groups that serve to establish a sense of disconnection between “us” and “them”.
  4. Labelled individuals experience “status loss and discrimination” that leads to unequal circumstances.

In this model stigmatisation is also contingent on “access to social, economic, and political power that allows the identification of differences, construction of stereotypes, the separation of labeled persons into distinct groups, and the full execution of disapproval, rejection, exclusion, and discrimination.” Subsequently, in this model, the term stigma is applied when labelling, stereotyping, disconnection, status loss, and discrimination all exist within a power situation that facilitates stigma to occur.

Differentiation and Labelling

Identifying which human differences are salient, and therefore worthy of labelling, is a social process. There are two primary factors to examine when considering the extent to which this process is a social one. The first issue is that significant oversimplification is needed to create groups. The broad groups of black and white, homosexual and heterosexual, the sane and the mentally ill, and young and old are all examples of this. Secondly, the differences that are socially judged to be relevant differ vastly according to time and place. An example of this is the emphasis that was put on the size of the forehead and faces of individuals in the late 19th century, which was believed to be a measure of a person’s criminal nature.

Linking to Stereotypes

The second component of this model centres on the linking of labelled differences with stereotypes. Goffman’s 1963 work made this aspect of stigma prominent and it has remained so ever since. This process of applying certain stereotypes to differentiated groups of individuals has attracted a large amount of attention and research in recent decades.

Us and Them

Thirdly, linking negative attributes to groups facilitates separation into “us” and “them”. Seeing the labelled group as fundamentally different causes stereotyping with little hesitation. “Us” and “them” implies that the labelled group is slightly less human in nature and at the extreme not human at all.

Disadvantage

The fourth component of stigmatisation in this model is “status loss and discrimination”. Many definitions of stigma do not include this aspect; however, these authors believe that this loss occurs inherently as individuals are “labeled, set apart, and linked to undesirable characteristics.” The members of the labelled groups are subsequently disadvantaged in the most common life chances, including income, education, mental well-being, housing status, health, and medical treatment. Thus, stigmatisation by the majorities, the powerful, or the “superior” leads to the Othering of the minorities, the powerless, and the “inferior”, whereby the stigmatised individuals become disadvantaged due to the ideology created by “the self,” which is the opposing force to “the Other.” As a result, the others become socially excluded, and those in power justify the exclusion on the basis of the original characteristics that led to the stigma.

Necessity of Power

The authors also emphasize the role of power (social, economic, and political power) in stigmatisation. While the use of power is clear in some situations, in others it can become masked as the power differences are less stark. An extreme example of a situation in which the power role was explicitly clear was the treatment of Jewish people by the Nazis. On the other hand, an example of a situation in which individuals of a stigmatised group have “stigma-related processes” occurring would be the inmates of a prison. It is imaginable that each of the steps described above would occur regarding the inmates’ thoughts about the guards. However, this situation cannot involve true stigmatisation, according to this model, because the prisoners do not have the economic, political, or social power to act on these thoughts with any serious discriminatory consequences.

“Stigma Allure” and Authenticity

Sociologist Matthew W. Hughey explains that prior research on stigma has emphasised individual and group attempts to reduce stigma by “passing as normal”, by shunning the stigmatised, or through selective disclosure of stigmatised attributes. Yet, some actors may embrace particular markings of stigma (e.g. social markings like dishonour or select physical dysfunctions and abnormalities) as signs of moral commitment and/or cultural and political authenticity. Hence, Hughey argues that some actors do not simply desire to “pass into normal” but may actively pursue a stigmatised identity formation process in order to experience themselves as causal agents in their social environment. Hughey calls this phenomenon “stigma allure”.

The “Six dimensions of Stigma”

While often incorrectly attributed to Goffman, the “six dimensions of stigma” were not his invention. They were developed to augment Goffman’s two levels – the discredited and the discreditable. Goffman considered individuals whose stigmatising attributes are not immediately evident. In that case, the individual can encounter two distinct social atmospheres. In the first, he is discreditable—his stigma has yet to be revealed but may be revealed either intentionally by him (in which case he will have some control over how) or by some factor he cannot control. Of course, it also might be successfully concealed; Goffman called this passing. In this situation, the analysis of stigma is concerned only with the behaviours adopted by the stigmatised individual to manage his identity: the concealing and revealing of information. In the second atmosphere, he is discredited—his stigma has been revealed and thus it affects not only his behaviour but the behaviour of others. Jones et al. (1984) added the “six dimensions” and correlated them to Goffman’s two types of stigma, discredited and discreditable.

There are six dimensions that match these two types of stigma:

  1. Concealable – the extent to which others can see the stigma
  2. Course of the mark – whether the stigma’s prominence increases, decreases, or disappears
  3. Disruptiveness – the degree to which the stigma and/or others’ reaction to it impedes social interactions
  4. Aesthetics – the subset of others’ reactions to the stigma comprising reactions that are positive/approving or negative/disapproving but represent estimations of qualities other than the stigmatised person’s inherent worth or dignity
  5. Origin – whether others think the stigma is present at birth, accidental, or deliberate
  6. Peril – the danger that others perceive (whether accurately or inaccurately) the stigma to pose to them

Types

In Unravelling the contexts of stigma, authors Campbell and Deacon describe Goffman’s universal and historical forms of stigma as follows.

  • Overt or external deformities – such as leprosy, clubfoot, cleft lip or palate and muscular dystrophy.
  • Known deviations in personal traits – being perceived, rightly or wrongly, as weak-willed, domineering or having unnatural passions, treacherous or rigid beliefs, and being dishonest, e.g., mental disorders, imprisonment, addiction, homosexuality, unemployment, suicide attempts and radical political behaviour.
  • Tribal stigma – affiliation with a specific nationality, religion, or race that constitute a deviation from the normative, e.g. being African American, or being of Arab descent in the United States after the 9/11 attacks.

Deviance

Stigma occurs when an individual is identified as deviant, linked with negative stereotypes that engender prejudiced attitudes, which are acted upon in discriminatory behaviour. Goffman illuminated how stigmatised people manage their “Spoiled identity” (meaning the stigma disqualifies the stigmatised individual from full social acceptance) before audiences of normals. He focused on stigma, not as a fixed or inherent attribute of a person, but rather as the experience and meaning of difference.

Gerhard Falk expounds upon Goffman’s work by redefining deviant as “others who deviate from the expectations of a group” and by categorising deviance into two types:

  • Societal deviance refers to a condition widely perceived, in advance and in general, as being deviant and hence stigmatised. “Homosexuality is, therefore, an example of societal deviance because there is such a high degree of consensus to the effect that homosexuality is different, and a violation of norms or social expectation”.
  • Situational deviance refers to a deviant act that is labelled as deviant in a specific situation, and may not be labelled deviant by society. Similarly, a socially deviant action might not be considered deviant in specific situations. “A robber or other street criminal is an excellent example. It is the crime which leads to the stigma and stigmatization of the person so affected.”

The physically disabled, mentally ill, homosexuals, and a host of others who are labelled deviant because they deviate from the expectations of a group, are subject to stigmatisation – the social rejection of numerous individuals, and often entire groups of people who have been labelled deviant.

Stigma Communication

Communication is involved in creating, maintaining, and diffusing stigmas, and enacting stigmatisation. The model of stigma communication explains how and why particular content choices (marks, labels, peril, and responsibility) can create stigmas and encourage their diffusion. A recent experiment using health alerts tested the model of stigma communication, finding that content choices indeed predicted stigma beliefs, intentions to further diffuse these messages, and agreement with regulating infected persons’ behaviours.

More recently, scholars have highlighted the role of social media channels, such as Facebook and Instagram, in stigma communication. These platforms serve as safe spaces for stigmatised individuals to express themselves more freely. However, social media can also reinforce and amplify stigmatisation, as the stigmatised attributes are amplified and virtually available to anyone indefinitely.

Challenging

Stigma, though powerful and enduring, is not inevitable, and can be challenged. There are two important aspects to challenging stigma: challenging the stigmatisation on the part of stigmatisers and challenging the internalised stigma of the stigmatised. To challenge stigmatisation, Campbell et al. (2005) summarise three main approaches.

  1. There are efforts to educate individuals about non-stigmatising facts and why they should not stigmatise.
  2. There are efforts to legislate against discrimination.
  3. There are efforts to mobilise the participation of community members in anti-stigma efforts, to maximise the likelihood that the anti-stigma messages have relevance and effectiveness, according to local contexts.

In relation to challenging the internalised stigma of the stigmatised, Paulo Freire’s theory of critical consciousness is particularly suitable. Cornish provides an example of how sex workers in Sonagachi, a red light district in India, have effectively challenged internalised stigma by establishing that they are respectable women, who admirably take care of their families, and who deserve rights like any other worker. This study argues that it is not only the force of the rational argument that makes the challenge to the stigma successful, but concrete evidence that sex workers can achieve valued aims, and are respected by others.

Stigmatised groups often harbour cultural tools to respond to stigma and to create a positive self-perception among their members. For example, advertising professionals have been shown to suffer from negative portrayal and low approval rates. However, the advertising industry collectively maintains narratives describing how advertising is a positive and socially valuable endeavour, and advertising professionals draw on these narratives to respond to stigma.

Another effort to mobilise communities exists in the gaming community through organisations like:

  • Take This – which provides AFK rooms at gaming conventions and runs a Streaming Ambassador Programme reaching more than 135,000 viewers each week with positive messages about mental health, and
  • NoStigmas – whose mission “is to ensure that no one faces mental health challenges alone” and envisions “a world without shame or discrimination related to mental health, brain disease, behavioral disorders, trauma, suicide and addiction” plus offers workplaces a NoStigmas Ally course and individual certifications.

Organisational Stigma

In 2008, an article by Hudson coined the term “organizational stigma”, which was then further developed in a theory-building article by Devers and colleagues. This literature brought the concept of stigma to the organisational level, considering how organisations might be perceived as deeply flawed and cast away by audiences in the same way individuals can be. Hudson differentiated core-stigma (a stigma related to the very nature of the organisation) from event-stigma (an isolated occurrence which fades away with time). A large literature has debated how organisational stigma relates to other constructs in the literature on social evaluations. A 2020 book by Roulet reviews this literature and disentangles the different concepts, in particular differentiating stigma, dirty work, and scandals, and explores their positive implications.

Current Research

Research undertaken to determine the effects of social stigma primarily focuses on disease-associated stigmas. Disabilities, psychiatric disorders, and sexually transmitted diseases are among the conditions currently scrutinised by researchers. In studies involving such conditions, both positive and negative effects of social stigma have been discovered.

Stigma in Healthcare Settings

Recent research suggests that addressing perceived and enacted stigma in clinical settings is critical to ensuring delivery of high-quality patient-centred care. Specifically, perceived stigma by patients was associated with longer periods of poor physical or mental health. Additionally, perceived stigma in healthcare settings was associated with higher odds of reporting a depressive disorder. Among other findings, individuals who were married, younger, had higher income, had college degrees, and were employed reported significantly fewer poor physical and mental health days and had lower odds of self-reported depressive disorder. A complementary study conducted in New York City (as opposed to nationwide) found similar outcomes. The researchers’ objectives were to assess rates of perceived stigma in clinical settings reported by racially diverse New York City residents and to examine whether this perceived stigma was associated with poorer physical and mental health outcomes. They found that perceived stigma was associated with poorer healthcare access, depression, diabetes, and poor overall general health.

Research on Self-Esteem

Members of stigmatised groups may have lower self-esteem than those of non-stigmatised groups. The overall self-esteem of different races cannot be compared with a single test; researchers would have to take into account whether these people are optimistic or pessimistic, whether they are male or female, and what kind of place they grew up in. Over the last two decades, many studies have reported that African Americans show higher global self-esteem than whites even though, as a group, African Americans tend to receive poorer outcomes in many areas of life and experience significant discrimination and stigma.

Mental Disorder

Empirical research on the stigma associated with mental disorders has pointed to a surprising attitude among the general public. Those who were told that mental disorders had a genetic basis were more prone to increase their social distance from the mentally ill, and also to assume that the ill were dangerous individuals, in contrast with those members of the general public who were told that the illnesses could be explained by social and environmental factors. Furthermore, those informed of the genetic basis were also more likely to stigmatise the entire family of the ill. Although the specific social categories that become stigmatised can vary over time and place, the three basic forms of stigma (physical deformity, poor personal traits, and tribal outgroup status) are found in most cultures and eras, leading some researchers to hypothesise that the tendency to stigmatise may have evolutionary roots.

The impact of the stigma is significant, leading many individuals not to seek out treatment. For example, evidence from a refugee camp in Jordan suggests that providing mental health care comes with a dilemma between the clinical desire to make mental health issues visible and actionable through datafication and the need to keep mental health issues hidden and out of the view of the community to avoid stigma. That is, in spite of their suffering, the refugees were hesitant to receive mental health care as they worried about stigma.

Currently, several researchers believe that mental disorders are caused by a chemical imbalance in the brain. This biological rationale suggests that individuals struggling with a mental illness do not have control over the origin of the disorder and, as with cancer or other physical disorders, should be supported and encouraged to seek help. The Disability Rights Movement recognises that while there is considerable stigma towards people with physical disabilities, the negative social stigma surrounding mental illness is significantly worse, with those suffering being perceived to have control of their disabilities and being responsible for causing them. “Furthermore, research respondents are less likely to pity persons with mental illness, instead reacting to the psychiatric disability with anger and believing that help is not deserved.” Although there are effective mental health interventions available across the globe, many persons with mental illnesses do not seek out the help that they need. Only 59.6% of individuals with a mental illness, including conditions such as depression, anxiety, schizophrenia, and bipolar disorder, reported receiving treatment in 2011.

Reducing the negative stigma surrounding mental disorders may increase the probability of affected individuals seeking professional help from a psychiatrist or a non-psychiatric physician. How particular mental disorders are represented in the media can vary, as can the stigma associated with each. On the social media platform YouTube, depression is commonly presented as a condition that is caused by biological or environmental factors, is more chronic than short-lived, and is different from sadness, all of which may contribute to how people think about depression.

Causes

Arikan found that a stigmatising attitude to psychiatric patients is associated with narcissistic personality traits.

In Taiwan, strengthening the psychiatric rehabilitation system has been one of the primary goals of the Department of Health since 1985. This endeavour has not been successful. It was hypothesised that one of the barriers was social stigma towards the mentally ill. Accordingly, a study was conducted to explore the attitudes of the general population towards patients with mental disorders. A survey method was utilised on 1,203 subjects nationally. The results revealed that the general population held high levels of benevolence, tolerance of rehabilitation in the community, and non-social restrictiveness. Essentially, benevolent attitudes favoured the acceptance of rehabilitation in the community. It could then be inferred that the residents of Taiwan hold the mentally ill in relatively high regard, and that the progress of psychiatric rehabilitation may be hindered by factors other than social stigma.

Artists

In the music industry, specifically in the genre of hip-hop or rap, those who speak out on mental illness are heavily criticised. However, according to an article by The Huffington Post, there is a significant increase in rappers who are breaking their silence on depression and anxiety.

Addiction and Substance Use Disorders

Throughout history, addiction has largely been seen as a moral failing or character flaw, as opposed to an issue of public health. Substance use has been found to be more stigmatised than smoking, obesity, and mental illness. Research has shown stigma to be a barrier to treatment-seeking behaviours among individuals with addiction, creating a “treatment gap”. A systematic review of all epidemiological studies on treatment rates of people with alcohol use disorders found that over 80% had not accessed any treatment for their disorder. The study also found that the treatment gap was larger in low and lower-middle-income countries.

Research shows that the words used to talk about addiction can contribute to stigmatisation, and that the commonly used terms “abuse” and “abuser” actually increase stigma. Behavioural addictions (e.g. gambling, sex) are more likely to be attributed to character flaws than substance-use addictions. Stigma is reduced when substance use disorders are portrayed as treatable conditions. Acceptance and Commitment Therapy has been used effectively to help people reduce the shame associated with cultural stigma around substance use treatment.

The use of the drug methamphetamine has been strongly stigmatised. An Australian national population study has shown that the proportion of Australians who nominated methamphetamine as a “drug problem” increased between 2001 and 2019. The epidemiological study provided evidence that levels of under-reporting increased over the same period, which coincided with the deployment of public health campaigns on the dangers of ice that contained stigmatising elements portraying people who used the drug in a negative way. The level of under-reporting of methamphetamine use is strongly associated with increasingly negative attitudes towards its use over the same period.

Poverty

Recipients of public assistance programmes are often scorned as unwilling to work. The intensity of poverty stigma is positively correlated with inequality: as inequality increases, society’s propensity to stigmatise increases. This is, in part, a result of societal norms of reciprocity, i.e. the expectation that people earn what they receive rather than receive assistance in the form of what people tend to view as a gift.

Poverty is often perceived as a result of failures and poor choices rather than the result of socioeconomic structures that suppress individual abilities. Disdain for the impoverished can be traced back to its roots in Anglo-American culture, where poor people have been blamed and ostracised for their misfortune for hundreds of years. The concept of deviance is at the bedrock of stigma towards the poor. Deviants are people who break important norms that everyone in society shares. In the case of poverty, it is breaking the norm of reciprocity that paves the path for stigmatisation.

Public Assistance

Social stigma is prevalent towards recipients of public assistance programmes. This includes programmes frequently used by families struggling with poverty, such as Head Start and AFDC (Aid to Families with Dependent Children). The value of self-reliance is often at the centre of feelings of shame: the less people value self-reliance, the less stigma affects them psychologically. Stigma towards welfare recipients has been shown to increase passivity and dependency in poor people and has further solidified their status and feelings of inferiority.

Caseworkers frequently treat recipients of welfare disrespectfully and make assumptions about deviant behaviour and reluctance to work. Many single mothers cite stigma as the primary reason they want to exit welfare as quickly as possible, and they often feel the need to conceal food stamps to escape the judgement associated with welfare programmes. Stigma is a major factor contributing to the duration and breadth of poverty in developed societies, and it largely affects single mothers. Recipients of public assistance are viewed as objects of the community rather than members, allowing them to be perceived as enemies of the community; this is how stigma enters collective thought. Amongst single mothers in poverty, lack of health care benefits is one of the greatest challenges in terms of exiting poverty. Traditional values of self-reliance increase feelings of shame amongst welfare recipients, making them more susceptible to stigmatisation.

Epilepsy

Hong Kong

Epilepsy, a common neurological disorder characterised by recurring seizures, is associated with various social stigmas. Chung-yan Guardian Fong and Anchor Hung conducted a study in Hong Kong which documented public attitudes towards individuals with epilepsy. Of the 1,128 subjects interviewed, only 72.5% of them considered epilepsy to be acceptable; 11.2% would not let their children play with others with epilepsy; 32.2% would not allow their children to marry persons with epilepsy; additionally, some employers (22.5% of them) would terminate an employment contract after an epileptic seizure occurred in an employee with unreported epilepsy. Suggestions were made that more effort be made to improve public awareness of, attitude toward, and understanding of epilepsy through school education and epilepsy-related organisations.

Media

In the early 21st century, technology has a large impact on the lives of people in multiple countries and has shaped social norms. Many people own a television, a computer, and a smartphone. The media can be helpful in keeping people up to date on news and world issues, and it is very influential. Because it is so influential, the way minority groups are portrayed sometimes affects other groups’ attitudes towards them. Much media coverage concerns other parts of the world, and a lot of that coverage has to do with war and conflict, which people may associate with anyone coming from that country. There is a tendency to focus more on the positive behaviour of one’s own group and the negative behaviour of other groups. This promotes negative thoughts about people belonging to those other groups, reinforcing stereotypical beliefs.

“Viewers seem to react to violence with emotions such as anger and contempt. They are concerned about the integrity of the social order and show disapproval of others. Emotions such as sadness and fear are shown much more rarely.” (Unz, Schwab & Winterhoff-Spurk, 2008, p.141).

In a study testing the effects of stereotypical advertisements on students, 75 high school students viewed magazine advertisements with stereotypical female images such as a woman working on a holiday dinner, while 50 others viewed non-stereotypical images such as a woman working in a law office. These groups then responded to statements about women in a “neutral” photograph. In this photo, a woman was shown in a casual outfit not doing any obvious task. The students who saw the stereotypical images tended to answer the questionnaires with more stereotypical responses in 6 of the 12 questionnaire statements. This suggests that even brief exposure to stereotypical ads reinforces stereotypes. (Lafky, Duffy, Steinmaus & Berkowitz, 1996).

Education and Culture

The aforementioned stigmas (associated with their respective diseases) illustrate the effects that such stereotypes have on individuals. Whether the effects are negative or positive in nature, ‘labelling’ people causes a significant change in how persons with the disease are perceived. Perhaps a mutual understanding of stigma, achieved through education, could eliminate social stigma entirely.

Laurence J. Coleman first adapted Erving Goffman’s (1963) social stigma theory to gifted children, providing a rationale for why children may hide their abilities and present alternate identities to their peers. The stigma of giftedness theory was further elaborated by Laurence J. Coleman and Tracy L. Cross in their book entitled, Being Gifted in School, which is a widely cited reference in the field of gifted education. In the chapter on Coping with Giftedness, the authors expanded on the theory first presented in a 1988 article. According to Google Scholar, this article has been cited over 300 times in the academic literature (as of 2022).

Coleman and Cross were the first to identify intellectual giftedness as a stigmatising condition and they created a model based on Goffman’s (1963) work, research with gifted students, and a book that was written and edited by 20 teenage, gifted individuals. Being gifted sets students apart from their peers and this difference interferes with full social acceptance. Varying expectations that exist in the different social contexts which children must navigate, and the value judgements that may be assigned to the child result in the child’s use of social coping strategies to manage his or her identity. Unlike other stigmatising conditions, giftedness is unique because it can lead to praise or ridicule depending on the audience and circumstances.

Gifted children learn when it is safe to display their giftedness and when they should hide it to better fit in with a group. These observations led to the development of the Information Management Model that describes the process by which children decide to employ coping strategies to manage their identities. In situations where the child feels different, she or he may decide to manage the information that others know about him or her. Coping strategies include disidentification with giftedness, attempting to maintain low visibility, or creating a high-visibility identity (playing a stereotypical role associated with giftedness). These ranges of strategies are called the Continuum of Visibility.

Abortion

While abortion is very common throughout the world, people may choose not to disclose their use of such services, in part due to the stigma associated with having had an abortion. Keeping abortion experiences secret has been found to be associated with increased isolation and psychological distress. Abortion providers are also subject to stigma.

Stigmatisation of Prejudice

Cultural norms can prevent displays of prejudice as such views are stigmatised and thus people will express non-prejudiced views even if they believe otherwise (preference falsification). However, if the stigma against such views is lessened, people will be more willing to express prejudicial sentiments. For example, following the 2008 economic crisis, anti-immigration sentiment seemingly increased amongst the US population when in reality the level of sentiment remained the same and instead it simply became more acceptable to openly express opposition to immigration.

Spatial Stigma

Spatial stigma refers to stigmas that are linked to one’s geographic location. This can apply to neighbourhoods, towns, cities or any defined geographical space. A person’s geographic location or place of origin can be a source of stigma, and this type of stigma can lead to negative health outcomes.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Social_stigma >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

Who was M. Scott Peck (1936-2005)?

Introduction

Morgan Scott Peck (1936–2005) was an American psychiatrist and best-selling author who wrote the book The Road Less Traveled (see below), published in 1978.

Early Life

Peck was born on May 22, 1936, in New York City, the son of Zabeth (née Saville) and David Warner Peck, an attorney and judge. His parents were Quakers. Peck was raised a Protestant (his paternal grandmother was from a Jewish family, but Peck’s father identified himself as a WASP (White Anglo-Saxon Protestant) and not as Jewish).

His parents sent him to the prestigious boarding school Phillips Exeter Academy in Exeter, New Hampshire, when he was 13. In his book, The Road Less Traveled, he confides the story of his brief stay at Exeter, and admits that it was a most miserable time. Finally, at age 15, during the spring holiday of his third year, he came home and refused to return to the school, whereupon his parents sought psychiatric help for him and he was (much to his amusement in later life) diagnosed with depression and recommended for a month’s stay in a psychiatric hospital (unless he chose to return to school). He then transferred to Friends Seminary (a private K–12 school) in late 1952, and graduated in 1954, after which he received a BA from Harvard in 1958, and an MD degree from Case Western Reserve University in 1963.

Career

Peck served in administrative posts in the government during his career as a psychiatrist. He also served in the US Army and rose to the rank of lieutenant colonel. His army assignments included stints as chief of psychology at the Army Medical Centre in Okinawa, Japan, and assistant chief of psychiatry and neurology in the office of the surgeon general in Washington, DC. He was the medical director of the New Milford Hospital Mental Health Clinic and a psychiatrist in private practice in New Milford, Connecticut. His first and best-known book, The Road Less Traveled, sold more than 10 million copies.

Peck’s works combined his experiences from his private psychiatric practice with a distinctly religious point of view. In his second book, People of the Lie, he wrote, “After many years of vague identification with Buddhist and Islamic mysticism, I ultimately made a firm Christian commitment – signified by my non-denominational baptism on the ninth of March 1980…” (Peck, 1983/1988, p.11). One of his views was that people who are evil attack others rather than face their own failures.

In December 1984, Peck co-founded the Foundation for Community Encouragement (FCE), a tax-exempt, non-profit, public educational foundation, whose stated mission is “to teach the principles of community to individuals and organizations.” FCE ceased day-to-day operations from 2002 to 2009. In late 2009, almost 25 years after FCE was first founded, the organisation resumed functioning, and began offering community building and training events in 2010.

Personal Life

Peck married Lily Ho in 1959, and they had three children. In 1994, they jointly received the Community of Christ International Peace Award.

While Peck’s writings emphasized the virtues of a disciplined life and delayed gratification, his personal life was far more turbulent. For example, in his book In Search of Stones, Peck acknowledged having extramarital affairs and being estranged from two of his children. In 2004, just a year before his death, Peck was divorced by Lily and married Kathleen Kline Yeates.

Death

Peck died at his home in Connecticut on 25 September 2005. He had had Parkinson’s disease and pancreatic and liver duct cancer. Fuller Theological Seminary houses the archives of his publications, awards, and correspondence.

The Road Less Traveled

The Road Less Traveled, published in 1978, is Peck’s best-known work, and the one that made his reputation. It is, in short, a description of the attributes that make for a fulfilled human being, based largely on his experiences as a psychiatrist and a person.

The book consists of four parts. In the first part Peck examines the notion of discipline, which he considers essential for emotional, spiritual, and psychological health, and which he describes as “the means of spiritual evolution”. The elements of discipline that make for such health include the ability to delay gratification, accepting responsibility for oneself and one’s actions, a dedication to truth, and “balancing”. “Balancing” refers to the problem of reconciling multiple, complex, possibly conflicting factors that impact an important decision—on one’s own behalf or on behalf of another.

In the second part, Peck addresses the nature of love, which he considers the driving force behind spiritual growth. He contrasts his own views on the nature of love against a number of common misconceptions about love, including:

  • That love is identified with romantic love (he considers it a very destructive myth when it relies solely on “falling in love”),
  • That love is related to dependency,
  • That true love is linked with the feeling of “falling in love”.

Peck argues that “true” love is rather an action that one undertakes consciously in order to extend one’s ego boundaries by including others or humanity, and is therefore spiritual nurturing, which can be directed toward oneself as well as toward one’s beloved.

In the third part Peck deals with religion, and the commonly accepted views and misconceptions concerning religion. He recounts experiences from several patient case histories, and the evolution of the patients’ notion of God, religion, atheism—especially of their own “religiosity” or atheism—as their therapy with Peck progressed.

The fourth and final part concerns “grace”, the powerful force originating outside human consciousness that nurtures spiritual growth in human beings. To focus on the topic, he describes the miracles of health, the unconscious, and serendipity—phenomena which Peck says:

  • Nurture human life and spiritual growth,
  • Are incompletely understood by scientific thinking,
  • Are commonplace among humanity,
  • Originate outside the conscious human will.

He concludes that “the miracles described indicate that our growth as human beings is being assisted by a force other than our conscious will”. (Peck, 1978/1992, p.281)

Random House, where the then little-known psychiatrist first tried to publish his original manuscript, turned him down, saying the final section was “too Christ-y.” Thereafter, Simon & Schuster published the work for $7,500 and printed a modest hardback run of 5,000 copies. The book took off only after Peck hit the lecture circuit and personally sought reviews in key publications. Later reprinted in paperback in 1980, The Road first made best-seller lists in 1984 – six years after its initial publication.

People of the Lie

First published in 1983, People of the Lie: Toward a Psychology of Evil (subsequent volumes subtitled The Hope For Healing Human Evil and Possession and Group Evil; ISBN 0-7126-1857-0) followed on from Peck’s first book. Peck describes the stories of several people who came to him whom he found particularly resistant to any form of help. He came to think of them as evil and goes on to describe the characteristics of evil in psychological terms, proposing that it could become a psychiatric diagnosis. Peck points to narcissism as a type of evil in this context.

Theories

Love

His perspective on love (in The Road Less Traveled) is that love is not a feeling but an activity and an investment. He defines love as “The will to extend one’s self for the purpose of nurturing one’s own or another’s spiritual growth” (Peck, 1978/1992, p.85). Peck expands on work by Thomas Aquinas from over 700 years earlier, holding that love consists primarily of actions towards nurturing the spiritual growth of another.

Peck seeks to differentiate between love and cathexis. Cathexis is what explains sexual attraction, the instinct for cuddling pets and pinching babies’ cheeks. However, cathexis is not love. All the same, love cannot begin in isolation; a certain amount of cathexis is necessary to get sufficiently close to be able to love.

Once through the cathexis stage, the work of love begins. It is not a feeling. It consists of what you do for another person. As Peck says in The Road Less Traveled, “Love is as love does.” It is about giving yourself and the other person what they need to grow.

Discipline

The Road Less Traveled begins with the statement “Life is difficult”. In Peck’s view, life was never meant to be easy; it is essentially a series of problems which can either be solved or ignored. Peck wrote of the importance of discipline, describing four aspects of it:

  • Delaying gratification: Sacrificing present comfort for future gains.
  • Acceptance of responsibility: Accepting responsibility for one’s own decisions.
  • Dedication to truth: Honesty, both in word and deed.
  • Balancing: Handling conflicting requirements.

Peck argues that these are techniques of suffering that enable the pain of problems to be worked through and systematically solved, producing growth. He argues that most people avoid the pain of dealing with their problems and suggests that it is through facing the pain of problem-solving that life becomes more meaningful.

Neurotic and Legitimate Suffering

Peck believes that it is only through suffering and agonizing using the four aspects of discipline (delaying gratification, acceptance of responsibility, dedication to truth, and balancing) that we can resolve the many puzzles and conflicts that we face. This is what he calls undertaking legitimate suffering. Peck argues that by trying to avoid legitimate suffering, people actually ultimately end up suffering more. This extra unnecessary suffering is what Scott Peck terms neurotic suffering. He references Carl Jung: ‘Neurosis is always a substitute for legitimate suffering’. Peck says that our aim must be to eliminate neurotic suffering and to work through our legitimate suffering to achieve our individual goals.

Evil

Peck discusses evil in his three-volume book People of the Lie, as well as in a chapter of The Road Less Traveled. Peck characterises evil as a malignant type of self-righteousness in which there is an active rather than passive refusal to tolerate imperfection (sin) and its consequent guilt. This syndrome results in a projection of evil onto selected specific innocent victims (often children), which is the paradoxical mechanism by which the People of the Lie commit their evil. Peck argues that these people are the most difficult of all to deal with, and extremely hard to identify. He describes in some detail several individual cases involving his patients. In one case, which Peck considers the most typical because of its subtlety, he describes Roger, a depressed teenage son of respected, well-off parents. In a series of parental decisions justified by often subtle distortions of the truth, they exhibit a consistent disregard for their son’s feelings, and a consistent willingness to destroy his growth. With false rationality and normality, they aggressively refuse to consider that they are in any way responsible for his resultant depression, eventually suggesting his condition must be incurable and genetic.

Peck makes a distinction between those who are on their way to becoming evil and those who have already crossed the line and are irretrievably evil. In the first instance, he describes George. Peck says, “Basically, George, you’re a kind of a coward. Whenever the going gets a little bit rough, you sell out.” Of note, this is the kind of evil that inspired the film Session 9, in which the character Simon, when asked where evil lives, concludes, “I live in the weak and the wounded.” On the other hand, those who have crossed the line and are irretrievably evil are described as having malignant narcissism.

Some of Peck’s conclusions about the psychiatric condition that he designates as “evil” are derived from his close study of one patient he names Charlene. Although Charlene is not dangerous, she is ultimately unable to have empathy for others in any way. According to Peck, people like her see others as playthings or tools to be manipulated for their own uses or entertainment. Peck states that these people are rarely seen by psychiatrists, and have never been treated successfully.

Evil is described by Peck as “militant ignorance”. The original Judeo-Christian concept of “sin” is as a process that leads us to “miss the mark” and fall short of perfection. Peck argues that while most people are conscious of this, at least on some level, those who are evil actively and militantly refuse this consciousness. Peck considers those he calls evil to be attempting to escape and hide from their own conscience (through self-deception), and views this as being quite distinct from the apparent absence of conscience evident in sociopathy.

According to Peck, an evil person:

  • is consistently self-deceiving, with the intent of avoiding guilt and maintaining a self-image of perfection
  • deceives others as a consequence of their own self-deception
  • projects his or her evils and sins onto very specific targets (scapegoats) while being apparently normal with everyone else (“their insensitivity toward him was selective” (Peck, 1983/1988, p.105))
  • commonly hates with the pretence of love, for the purposes of self-deception as much as deception of others
  • abuses political (emotional) power (“the imposition of one’s will upon others by overt or covert coercion” (Peck, 1978/1992, p.298))
  • maintains a high level of respectability, and lies incessantly to do so
  • is consistent in his or her sins. Evil persons are characterised not so much by the magnitude of their sins, but by their consistency (of destructiveness)
  • is unable to think from the viewpoint of their victim (scapegoating)
  • has a covert intolerance to criticism and other forms of narcissistic injury

Most evil people realise the evil deep within themselves, but are unable to tolerate the pain of introspection, or admit to themselves that they are evil. Thus, they constantly run away from their evil by putting themselves in a position of moral superiority and putting the focus of evil on others. Evil is an extreme form of what Peck, in The Road Less Traveled, calls a character and personality disorder.

Using the My Lai massacre as a case study, Peck also examines group evil, discussing how human group morality is strikingly less than individual morality. Partly, he considers this to be a result of specialization, which allows people to avoid individual responsibility and pass the buck, resulting in a reduction of group conscience.

Though the topic of evil has historically been the domain of religion, Peck makes great efforts to keep much of his discussion on a scientific basis, explaining the specific psychological mechanisms by which evil operates. He was also particularly conscious of the danger of a psychology of evil being misused for personal or political ends. Peck considered that such a psychology should be used with great care, as falsely labelling people as evil is one of the very characteristics of evil. He argued that a diagnosis of evil should come from the standpoint of healing and safety for its victims, but also with the possibility even if remote, that the evil themselves may be cured.

Ultimately, Peck says that evil arises out of free choice. He describes it thus: Every person stands at a crossroads, with one path leading to God, and the other path leading to the devil. The path of God is the right path, and accepting this path is akin to submission to a higher power. However, if a person wants to convince himself and others that he has free choice, he would rather take a path which cannot be attributed to its being the right path. Thus, he chooses the path of evil.

Peck also discussed the question of the devil. Initially he believed, as with “99% of psychiatrists and the majority of clergy” (Peck, 1983/1988, p.182), that the devil did not exist; but, after starting to believe in the reality of human evil, he then began to contemplate the reality of spiritual evil. Eventually, after having been referred several possible cases of possession and being involved in two exorcisms, he was converted to a belief in the existence of Satan. Peck considered people who are possessed as being victims of evil, but of not being evil themselves. Peck, however, considered possession to be rare, and human evil common. He did believe there was some relationship between Satan and human evil, but was unsure of its exact nature. Peck’s writings and views on possession and exorcism are to some extent influenced and based on specific accounts by Malachi Martin; however, the veracity of these accounts and Peck’s own diagnostic approach to possession have both since been questioned by a Catholic priest who is a professor of theology. It has been argued that it is not possible to find formal records to establish the veracity of Father Malachi Martin’s described cases of possession, as all exorcism files are sealed by the Archdiocese of New York, where all but one of the cases took place.

The Four Stages of Spiritual Development

Peck postulates that there are four stages of human spiritual development:

  • Stage I is chaotic, disordered, and reckless. Very young children are in Stage I. They may defy and disobey and are unwilling to accept a will greater than their own. They are egoistical and lack empathy for others. Criminals are often people who have never grown out of Stage I.
  • Stage II is the stage at which a person has blind faith in authority figures and sees the world as divided simply into good and evil, right and wrong, us and them. Once children learn to obey their parents and other authority figures (often out of fear or shame), they reach Stage II. Many religious people are Stage II. With blind faith comes humility and a willingness to obey and serve. The majority of conventionally moralistic, law-abiding citizens never move out of Stage II.
  • Stage III is the stage of scientific scepticism and questioning. A Stage III person does not accept claims based on faith, but is only convinced with logic. Many people working in scientific and technological research are in Stage III. Often they reject the existence of spiritual or supernatural forces, since these are difficult to measure or prove scientifically. Those who do retain their spiritual beliefs move away from the simple, official doctrines of fundamentalism.
  • Stage IV is the stage at which an individual enjoys the mystery and beauty of nature and existence. While retaining scepticism, s/he starts perceiving grand patterns in nature and develops a deeper understanding of good and evil, forgiveness and mercy, compassion and love. His/her religiousness and spirituality differ from those of a Stage II person, in the sense that s/he does not accept things through blind faith or out of fear, but from genuine belief. S/he does not judge people harshly or seek to inflict punishment on them for their transgressions. This is the stage of loving others as yourself, losing your attachment to your ego, and forgiving your enemies. Stage IV people are labelled mystics.

Peck argues that while transitions from Stage I to Stage II are sharp, transitions from Stage III to Stage IV are gradual. Nonetheless, these changes are noticeable and mark a significant difference in the personality of the individual.

Community Building

In his book The Different Drum: Community Making and Peace, Peck says that community has three essential ingredients:

  • Inclusivity
  • Commitment
  • Consensus

Based on his experience with community building workshops, Peck says that community building typically goes through four stages:

  1. Pseudocommunity: In the first stage, well-intentioned people try to demonstrate their ability to be friendly and sociable, but they do not really delve beneath the surface of each other’s ideas or emotions. They use obvious generalities and mutually established stereotypes in speech. Instead of conflict resolution, pseudocommunity involves conflict avoidance, which maintains the appearance or facade of true community. It also serves only to maintain positive emotions, instead of creating a safe space for honesty and love through negative emotions as well. While members remain in this phase, they will never really grow or change, either as individuals or as a group.
  2. Chaos: The first step towards real positivity is, paradoxically, a period of negativity. Once the mutually sustained façade of bonhomie is shed, negative emotions flood through: members start to vent their mutual frustrations, annoyances, and differences. It is a chaotic stage, but Peck describes it as a “beautiful chaos” because it is a sign of healthy growth (this relates closely to Dabrowski’s concept of disintegration).
  3. Emptiness: To transcend the stage of “Chaos”, members are forced to shed that which prevents real communication. Biases and prejudices, the need for power and control, self-superiority, and other similar motives, which are only mechanisms of self-validation and/or ego-protection, must yield to empathy, openness to vulnerability, attention, and trust. Hence, this stage does not mean people should be “empty” of thoughts, desires, ideas or opinions. Rather, it refers to emptiness of all mental and emotional distortions which reduce one’s ability to really share, listen to, and build on those thoughts, ideas, etc. It is often the hardest step in the four-level process, as it necessitates the release of patterns which people develop over time in a subconscious attempt to maintain self-worth and positive emotion. While this is therefore a stage of fanā (the Sufi notion of annihilation of the self) in a certain sense, it should be viewed not merely as a “death”, but as a rebirth of one’s true self at the individual level and, at the social level, of genuine and true community.
  4. True community: Having worked through emptiness, the people in the community enter a place of complete empathy with one another. There is a great level of tacit understanding. People are able to relate to each other’s feelings. Discussions, even when heated, never turn sour, and motives are not questioned. A deeper and more sustainable level of happiness prevails among the members, one that does not have to be forced. Even, and perhaps especially, when conflicts arise, it is understood that they are part of positive change.

The four stages of community formation are somewhat related to a model in organization theory for the five stages that a team goes through during development. These five stages are:

  1. Forming, where the team members have some initial discomfort with each other, but nothing comes out in the open. They are insecure about their role and position with respect to the team. This corresponds to the initial stage of pseudocommunity.
  2. Storming, where the team members start arguing heatedly, and differences and insecurities come out in the open. This corresponds to the second stage given by Scott Peck, namely chaos.
  3. Norming, where the team members lay out rules and guidelines for interaction that help define the roles and responsibilities of each person. This corresponds to emptiness, where the community members look inward and empty themselves of their obsessions so that they can accept and listen to others.
  4. Performing, where the team finally starts working as a cohesive whole and effectively achieves the tasks it has set for itself. In this stage individuals are aided by the group as a whole, where necessary, to move further collectively than they could as a collection of separate individuals.
  5. Transforming, which corresponds to the stage of true community. This is the stage of celebration, and when individuals leave, as they invariably must, there is a genuine feeling of grief and a desire to meet again. Traditionally, this stage was often called “Mourning”.

It is in this third stage that Peck’s community-building methods differ in principle from team development. While teams in business organisations need to develop explicit rules, guidelines and protocols during the norming stage, the emptiness stage of community building is characterised, not by laying down the rules explicitly, but by shedding the resistance within the minds of the individuals.

Peck started the Foundation for Community Encouragement (FCE) to promote the formation of communities, which, he argues, are a first step towards uniting humanity and saving us from self-destruction.

The Blue Heron Farm is an intentional community in central North Carolina, whose founders stated that they were inspired by Peck’s writings on community. Peck himself had no involvement with this project, however.

The Exosphere Academy of Science & the Arts uses community building in its teaching methodology to help students practise deeper communication, remove their “masks”, and feel more comfortable collaborating and building innovative projects and startups.

Based on research by Robert E. Roberts (1943–2013), Chattanooga Endeavors has used Community Building since 1996 as a group intervention to improve the learning experience of former offenders participating in work-readiness training. Roberts’ research demonstrates that groups that are exposed to Community Building achieve significantly better training outcomes.

Characteristics of True Community

Peck describes what he considers to be the most salient characteristics of a true community:

  • Inclusivity, commitment, and consensus: members accept and embrace each other, celebrating their individuality and transcending their differences. They commit themselves to the effort and the people involved. They make decisions and reconcile their differences through consensus.
  • Realism: members bring together multiple perspectives to better understand the whole context of the situation. Decisions are more well-rounded and humble, rather than one-sided and arrogant.
  • Contemplation: members examine themselves. They are individually and collectively self-aware of the world outside themselves, the world inside themselves, and the relationship between the two.
  • A safe place: members allow others to share their vulnerability, heal themselves, and express who they truly are.
  • A laboratory for personal disarmament: members experientially discover the rules for peacemaking and embrace its virtues. They feel and express compassion and respect for each other as fellow human beings.
  • A group that can fight gracefully: members resolve conflicts with wisdom and grace. They listen and understand, respect each other’s gifts, accept each other’s limitations, celebrate their differences, bind each other’s wounds, and commit to a struggle together rather than against each other.
  • A group of all leaders: members harness the “flow of leadership” to make decisions and set a course of action. It is the spirit of community itself that leads, and not any single individual.
  • A spirit: The true spirit of community is the spirit of peace, love, wisdom and power. Members may view the source of this spirit as an outgrowth of the collective self or as the manifestation of a Higher Will.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/M._Scott_Peck#Love >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

Who is Lyn Yvonne Abramson (1950-Present)?

Introduction

Lyn Yvonne Abramson (born 7 February 1950) is a professor of psychology at the University of Wisconsin–Madison. She was born in Benson, Minnesota. She took her undergraduate degree at the University of Wisconsin–Madison in 1972 before attaining her Ph.D. in clinical psychology at the University of Pennsylvania in 1978.

Refer to Depressive Realism.

Achievements

As a clinical psychologist, her main areas of research interest have been vulnerability to major depressive disorder and psychobiological and cognitive approaches to depression, bipolar disorder, and eating disorders. She was the senior author of the paper “Learned Helplessness in Humans: Critique and Reformulation”, published in the Journal of Abnormal Psychology in 1978, which proposed a link between a particular explanatory style and depression.

With her co-authors William T.L. Cox, Patricia Devine, and Steven D. Hollon, she proposed the integrated perspective on prejudice and depression, which combines cognitive theories of depression with cognitive theories of prejudice. Lyn and her co-authors propose that many cases of depression may be caused by prejudice from the self or from another person.

“This depression caused by prejudice – which the researchers call deprejudice — can occur at many levels. In the classic case, prejudice causes depression at the societal level (e.g., Nazis’ prejudice causing Jews’ depression), but this causal chain can also occur at the interpersonal level (e.g., an abuser’s prejudice causing an abusee’s depression), or even at the intrapersonal level, within a single person (e.g., a man’s prejudice against himself causing his depression).”

Along with her frequent collaborator Lauren Alloy, Abramson was awarded the James McKeen Cattell Fellow Award for 2008–2009 by the Association for Psychological Science. She is on the Institute for Scientific Information list of highly cited researchers.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Lyn_Yvonne_Abramson >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Depressive Realism?

Introduction

Depressive realism is the hypothesis developed by Lauren Alloy and Lyn Yvonne Abramson that depressed individuals make more realistic inferences than non-depressed individuals. Although depressed individuals are thought to have a negative cognitive bias that results in recurrent, negative automatic thoughts, maladaptive behaviours, and dysfunctional world beliefs, depressive realism argues not only that this negativity may reflect a more accurate appraisal of the world but also that non-depressed individuals’ appraisals are positively biased.

Evidence

For

When participants were asked to press a button and rate the control they perceived they had over whether or not a light turned on, depressed individuals made more accurate ratings of control than non-depressed individuals.

Among participants asked to complete a task and rate their performance without any feedback, depressed individuals made more accurate self-ratings than non-depressed individuals.

For participants asked to complete a series of tasks, given feedback on their performance after each task, and who self-rated their overall performance after completing all the tasks, depressed individuals were again more likely to give an accurate self-rating than non-depressed individuals.

When asked to evaluate their performance both immediately and some time after completing a task, depressed individuals made accurate appraisals both immediately after the task and after time had passed.
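In judgement-of-contingency tasks such as the light-and-button experiment above, objective control is commonly formalised as the difference between the probability of the outcome when the response is made and when it is withheld, often written as the ΔP statistic. The following is a minimal sketch of that standard definition, offered here for clarity rather than taken from the original studies:

\[
\Delta P = P(\text{light on} \mid \text{button pressed}) - P(\text{light on} \mid \text{button not pressed}), \qquad -1 \le \Delta P \le 1 .
\]

A participant’s rated control can then be compared with this objective value: when \(\Delta P = 0\) (no actual contingency), an “illusion of control” corresponds to a judgement substantially above zero, the pattern reported for non-depressed participants, whereas depressed participants’ judgements lie closer to the objective value.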

In a functional magnetic resonance imaging study of the brain, depressed patients were shown to be more accurate in their causal attributions of positive and negative social events than non-depressed participants, who demonstrated a positive bias. This difference was also reflected in differential activation of the fronto-temporal network: higher activation for non-self-serving attributions in non-depressed participants and for self-serving attributions in depressed patients, together with reduced coupling between the dorsomedial prefrontal cortex seed region and limbic areas when depressed patients made self-serving attributions.

Against

When asked to rate both their performance and the performance of others, non-depressed individuals demonstrated positive bias when rating themselves but no bias when rating others. Depressed individuals conversely showed no bias when rating themselves but a positive bias when rating others.

When participants’ thoughts were assessed in public versus private settings, the thoughts of non-depressed individuals were more optimistic in public than in private, while depressed individuals’ thoughts were less optimistic in public.

When asked to rate their performance immediately after a task and again after some time had passed, depressed individuals were more accurate immediately after the task but became more negative once time had passed, whereas non-depressed individuals gave positively biased ratings both immediately afterwards and later.

Although depressed individuals make accurate judgments about having no control in situations where they in fact have no control, this appraisal also carries over to situations where they do have control, suggesting that the depressed perspective is not more accurate overall.

One study suggested that in real-world settings, depressed individuals are actually less accurate and more overconfident in their predictions than their non-depressed peers. Participants’ attributional accuracy may also be more related to their overall attributional style rather than the presence and severity of their depressive symptoms.

Criticism of the Evidence

Some have argued that the evidence is inconclusive because no objective standard for reality exists, the diagnoses of depression are questionable, and the results may not apply to the real world. Because many studies rely on self-report of depressive symptoms, and self-reports are known to be biased, the diagnosis of depression in these studies may not be valid, necessitating the use of other objective measures. Since most of these studies use designs that do not closely approximate real-world phenomena, the external validity of the depressive realism hypothesis is unclear. There is also concern that the depressive realism effect is merely a by-product of the depressed person being in a situation that agrees with their negative bias.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Depressive_realism >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

Who is Lauren B. Alloy (1953-Present)?

Introduction

Lauren B. Alloy (born Lauren Helene Bersh; 22 November 1953) is an American psychologist, recognised for her research on mood disorders. Along with colleagues Lyn Abramson and Gerald Metalsky, she developed the hopelessness theory of depression. With Abramson, she also developed the depressive realism hypothesis. Alloy is a professor of psychology at Temple University in Philadelphia, Pennsylvania.

Biography

Alloy was born in Philadelphia in 1953. She earned her B.A. in Psychology in 1974 and her Ph.D. in experimental and clinical psychology in 1979, both from the University of Pennsylvania. Her graduate school mentors were psychologists Martin Seligman and Richard Solomon.

Alloy was a faculty member at Northwestern University from 1979 to 1989. She has been a professor of psychology in the Department of Psychology at Temple University since 1989. Her research focuses on cognitive, interpersonal, and biopsychosocial processes in the onset and maintenance of depression and bipolar disorder. She is the author of over 250 scholarly publications.

In the late 1970s, Alloy and her long-time collaborator Abramson demonstrated that depressed individuals held a more accurate view than their non-depressed counterparts in a test which measured illusion of control. This finding, termed “depressive realism”, held true even when the depression was manipulated experimentally.

Selected Awards

  • 2014 – Association for Behavioral and Cognitive Therapies Lifetime Achievement Award (jointly with Lyn Abramson)
  • 2014 – Society for Research in Psychopathology Joseph Zubin Award
  • 2009 – Association for Psychological Science James McKeen Cattell Award for Lifetime Achievement in Applied Psychological Research (jointly with Lyn Abramson)
  • 2003 – Society for a Science of Clinical Psychology Distinguished Scientist Award (jointly with Lyn Abramson)
  • 2002 – American Psychological Association Master Lecturer Award in Psychopathology (jointly with Lyn Abramson)
  • 1984 – American Psychological Association Young Psychologist Award

Selected Works

  • Alloy, L. B., & Abramson, L. Y. (2007). Depressive realism. In R. Baumeister & K. Vohs (Eds.), Encyclopedia of Social Psychology (pp. 242–243). New York: Sage Publications.
  • Alloy, L. B., Kelly, K. A., Mineka, S., & Clements, C. M. (1990). Comorbidity of anxiety and depressive disorders: A helplessness–hopelessness perspective.
  • Abramson, L. Y., Metalsky, G. I., & Alloy, L. B. (1989). Hopelessness depression: A theory-based subtype of depression. Psychological Review, 96(2), 358.
  • Alloy, L. B., & Abramson, L. Y. (1988). Depressive realism: Four theoretical perspectives. In L. B. Alloy (Ed.), Cognitive processes in depression. New York: Guilford.
  • Alloy, L. B., & Tabachnik, N. (1984). Assessment of covariation by humans and animals: The joint influence of prior expectations and current situational information. Psychological Review, 91(1), 112.
  • Alloy, L. B., & Abramson, L. Y. (1979). Judgment of contingency in depressed and nondepressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108(4), 441.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Lauren_Alloy >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

Who was Karl Jaspers (1883-1969)?

Introduction

Karl Theodor Jaspers (23 February 1883 to 26 February 1969) was a German-Swiss psychiatrist and philosopher who had a strong influence on modern theology, psychiatry, and philosophy. His 1913 work General Psychopathology influenced many later diagnostic criteria, and argued for a distinction between “primary” and “secondary” delusions.

After being trained in and practising psychiatry, Jaspers turned to philosophical inquiry and attempted to develop an innovative philosophical system. He was often viewed as a major exponent of existentialism in Germany, though he did not accept the label.

Life

Jaspers was born in Oldenburg in 1883 to a mother from a local farming community and a jurist father. He showed an early interest in philosophy, but his father’s experience with the legal system influenced his decision to study law at Heidelberg University. Jaspers first studied law in Heidelberg and later in Munich for three semesters. It soon became clear that Jaspers did not particularly enjoy law, and he switched to studying medicine in 1902, writing a thesis on criminology. In 1910 he married Gertrud Mayer (1879–1974), the sister of his close friends Gustav Mayer and Ernst Mayer.

Jaspers earned his medical doctorate from the Heidelberg University medical school in 1908 and began work at a psychiatric hospital in Heidelberg under Franz Nissl (the successor of Emil Kraepelin and Karl Bonhoeffer) and Karl Wilmanns. Jaspers became dissatisfied with the way the medical community of the time approached the study of mental illness and gave himself the task of improving the psychiatric approach. In 1913 Jaspers habilitated at the philosophical faculty of Heidelberg University and in 1914 gained a post there as a psychology teacher. The post later became a permanent philosophical one, and Jaspers never returned to clinical practice. During this time Jaspers was a close friend of the Weber family (Max Weber also having held a professorship at Heidelberg).

In 1921, at the age of 38, Jaspers turned from psychology to philosophy, expanding on themes he had developed in his psychiatric works. He became a well-known philosopher across Germany and Europe.

After the Nazi seizure of power in 1933, Jaspers was considered to have a “Jewish taint” (jüdische Versippung, in the jargon of the time) because of his Jewish wife, Gertrud Mayer, and was forced to retire from teaching in 1937. In 1938 he fell under a publication ban as well. Many of his long-time friends stood by him, however, and he was able to continue his studies and research without being totally isolated. But he and his wife were under constant threat of removal to a concentration camp until 30 March 1945, when Heidelberg was occupied by American troops.

In 1948 Jaspers moved to the University of Basel in Switzerland. In 1963 he was awarded honorary citizenship of the city of Oldenburg in recognition of his outstanding scientific achievements and services to occidental culture. He remained prominent in the philosophical community, became a naturalised citizen of Switzerland, and lived in Basel until his death on his wife’s 90th birthday in 1969.

Contributions to Psychiatry

Jaspers’s dissatisfaction with the popular understanding of mental illness led him to question both the diagnostic criteria and the methods of clinical psychiatry. He published a paper in 1910 in which he addressed the problem of whether paranoia was an aspect of personality or the result of biological changes. Although it did not broach new ideas, this article introduced a rather unusual method of study, at least according to the norms then prevalent. Not unlike Freud, Jaspers studied patients in detail, giving biographical information about the patients as well as notes on how the patients themselves felt about their symptoms. This has become known as the biographical method and now forms a mainstay of psychiatric and above all psychotherapeutic practice.

Jaspers set down his views on mental illness in a book which he published in 1913, General Psychopathology. This work has become a classic in the psychiatric literature and many modern diagnostic criteria stem from ideas found within it. One of Jaspers’s central tenets was that psychiatrists should diagnose symptoms of mental illness (particularly of psychosis) by their form rather than by their content. For example, in diagnosing a hallucination, it is more important to note that a person experiences visual phenomena when no sensory stimuli account for them than to note what the patient sees. What the patient sees is the “content”, but the discrepancy between visual perception and objective reality is the “form”.

Jaspers thought that psychiatrists could diagnose delusions in the same way. He argued that clinicians should not consider a belief delusional based on the content of the belief, but only based on the way in which a patient holds such a belief. (See delusion for further discussion.) Jaspers also distinguished between primary and secondary delusions. He defined primary delusions as autochthonous, meaning that they arise without apparent cause, appearing incomprehensible in terms of a normal mental process. (This is a slightly different use of the word autochthonous than the ordinary medical or sociological use as a synonym for indigenous.) Secondary delusions, on the other hand, he defined as those influenced by the person’s background, current situation or mental state.

Jaspers considered primary delusions to be ultimately “un-understandable” since he believed no coherent reasoning process existed behind their formation. This view has caused some controversy, and the likes of R.D. Laing and Richard Bentall (1999, p.133–135) have criticised it, stressing that this stance can lead therapists into the complacency of assuming that because they do not understand a patient, the patient is deluded and further investigation on the part of the therapist will have no effect. For instance, Huub Engels (2009) argues that schizophrenic disordered speech may be understandable, just as Emil Kraepelin’s dream speech is understandable.

Contributions to Philosophy and Theology

Most commentators associate Jaspers with the philosophy of existentialism, in part because he draws largely upon the existentialist roots of Nietzsche and Kierkegaard, and in part because the theme of individual freedom permeates his work. In Philosophy (3 vols, 1932), Jaspers gave his view of the history of philosophy and introduced his major themes. Beginning with modern science and empiricism, Jaspers points out that as people question reality, they confront borders that an empirical (or scientific) method simply cannot transcend. At this point, the individual faces a choice: sink into despair and resignation, or take a leap of faith toward what Jaspers calls Transcendence. In making this leap, individuals confront their own limitless freedom, which Jaspers calls Existenz, and can finally experience authentic existence.

Transcendence (paired with the term The Encompassing in later works) is, for Jaspers, that which exists beyond the world of time and space. Jaspers’s formulation of Transcendence as ultimate non-objectivity (or no-thing-ness) has led many philosophers to argue that ultimately, Jaspers became a monist, though Jaspers himself continually stressed the necessity of recognising the validity of the concepts both of subjectivity and of objectivity.

Although he rejected explicit religious doctrines, including the notion of a personal God, Jaspers influenced contemporary theology through his philosophy of transcendence and the limits of human experience. Mystic Christian traditions influenced Jaspers himself tremendously, particularly those of Meister Eckhart and of Nicholas of Cusa. He also took an active interest in Eastern philosophies, particularly Buddhism, and developed the theory of an Axial Age, a period of substantial philosophical and religious development. Jaspers also entered public debates with Rudolf Bultmann, wherein Jaspers roundly criticized Bultmann’s “demythologizing” of Christianity.

Jaspers wrote extensively on the threat to human freedom posed by modern science and modern economic and political institutions. During World War II, he had to abandon his teaching post because his wife was Jewish. After the war, he resumed his teaching position, and in his work The Question of German Guilt he unabashedly examined the culpability of Germany as a whole in the atrocities of Hitler’s Third Reich.

The following quote about the Second World War and its atrocities was used at the end of the sixth episode of the BBC documentary series The Nazis: A Warning from History: “That which has happened is a warning. To forget it is guilt. It must be continually remembered. It was possible for this to happen, and it remains possible for it to happen again at any minute. Only in knowledge can it be prevented.”

Jaspers’s major works, lengthy and detailed, can seem daunting in their complexity. His last great attempt at a systematic philosophy of Existenz – Von der Wahrheit (On Truth) – has not yet appeared in English. However, he also wrote shorter works, most notably Philosophy Is for Everyman. The two major proponents of phenomenological hermeneutics, namely Paul Ricœur (a student of Jaspers) and Hans-Georg Gadamer (Jaspers’s successor at Heidelberg), both display Jaspers’s influence in their works.

Political Views

Jaspers identified with the liberal political philosophy of Max Weber, although he rejected Weber’s nationalism. He valued humanism and cosmopolitanism and, influenced by Immanuel Kant, advocated an international federation of states with shared constitutions, laws, and international courts. He strongly opposed totalitarian despotism and warned about the increasing tendency towards technocracy, or a regime that regards humans as mere instruments of science or of ideological goals. He was also sceptical of majoritarian democracy. Thus, he supported a form of governance that guaranteed individual freedom and limited government, and shared Weber’s belief that democracy needed to be guided by an intellectual elite. His views were seen as anti-communist.

Influences

Jaspers held Kierkegaard and Nietzsche to be two of the most important figures in post-Kantian philosophy. In his compilation, The Great Philosophers (Die großen Philosophen), he wrote: “I approach the presentation of Kierkegaard with some trepidation. Next to Nietzsche, or rather, prior to Nietzsche, I consider him to be the most important thinker of our post-Kantian age. With Goethe and Hegel, an epoch had reached its conclusion, and our prevalent way of thinking – that is, the positivistic, natural-scientific one – cannot really be considered as philosophy.” Jaspers also questions whether the two philosophers could be taught. For Kierkegaard, at least, Jaspers felt that Kierkegaard’s whole method of indirect communication precludes any attempts to properly expound his thought into any sort of systematic teaching.

Though Jaspers was certainly indebted to Kierkegaard and Nietzsche, he also owed much to Kant and Plato. Indeed, Walter Kaufmann argues in From Shakespeare to Existentialism that Jaspers was closest to Kant’s philosophy:

Jaspers is too often seen as the heir of Nietzsche and Kierkegaard to whom he is in many ways less close than to Kant … the Kantian antinomies and Kant’s concern with the realm of decision, freedom, and faith have become exemplary for Jaspers. And even as Kant “had to do away with knowledge to make room for faith,” Jaspers values Nietzsche in large measure because he thinks that Nietzsche did away with knowledge, thus making room for Jaspers’ “philosophic faith”.

In his essay “On My Philosophy”, Jaspers states: “While I was still at school Spinoza was the first. Kant then became the philosopher for me and has remained so … Nietzsche gained importance for me only late as the magnificent revelation of nihilism and the task of overcoming it.” Jaspers is also indebted to his contemporaries, such as Heinrich Blücher, from whom he borrowed the term, “the anti-political principle” to describe totalitarianism’s destruction of a space of resistance.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Karl_Jaspers >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.