What is the Scale of Protective Factors?

Introduction

The Scale of Protective Factors (SPF) is a measure of aspects of social relationships, planning behaviours and confidence. These factors contribute to psychological resilience in emerging adults and adults.

Brief History

The SPF was developed by Dr. Elisabeth Ponce-Garcia at the Science of Protective Factors Laboratory (SPF Lab) to capture multiple aspects of adult resilience. A confirmatory factor analysis was subsequently published as collaborative research. The SPF was found to assess resilience effectively in both men and women, across levels of risk and socio-economic status, and across ethnic/racial categories.

In order to verify effectiveness in comparison to other measures, Madewell and Ponce-Garcia (2016) analysed the SPF and four other commonly used measures of adult resilience. They found that the SPF was the only measure that assessed social and cognitive aspects and that it outperformed three other measures and performed comparably with a fourth.

The structure of the SPF in comparison to four other adult resilience measures, as well as comparison data, is available as a Data in Brief article. Noticing the absence of research examining the effectiveness of adult resilience measures in child or adult sexual assault, Ponce-Garcia, Madewell and Brown (2016) demonstrated the SPF’s effectiveness in that domain. An investigation of the effectiveness of the SPF in the Southern Plains Tribes of the Native American and American Indian community was also conducted in 2016.

A brief version of the 24-item SPF was developed in 2019, resulting in a 12-item measure that can be taken as a self-assessment. The SPF-24 and the SPF-12 have been used throughout the United States and in several other countries, including Saudi Arabia, Pakistan, India, Australia, Malaysia, Paraguay, Mexico, and Canada. It is listed as a resource by Harvard University, was included in the United States Army Substance Abuse Programme (ASAP-Fort Sill, OK), and is provided by the State of Oklahoma ReEntry Programme.

Contents

The SPF consists of twenty-four statements that individuals rate according to the degree to which each statement describes them. It assesses a wider range of protective factors than other scales and is the only measure that has been shown to assess both social and cognitive protective factors. Its four sub-scales indicate the strengths and weaknesses that contribute to overall resilience, and it is the only measure to have been used to assess resilience in sexual assault survivors within the United States.

Properties

The SPF consists of four sub-scales: two measuring social protective factors and two measuring cognitive protective factors.

Social Subscales

Social support measures the availability of social resources in the form of family and/or friends. Social skill measures the ability to make and maintain relationships. The two should be positively correlated. Higher scores on the social sub-scales indicate unity with friends and/or family, friend/family group optimism and general friend/family support.

Cognitive Subscales

The goal efficacy sub-scale measures confidence in the ability to achieve goals. The planning and prioritising behaviour sub-scale measures the ability to recognise the relative importance of tasks, the tendency to approach tasks in order of importance, and the use of lists for organisation.

Scoring

Adding the scores from the four sub-scales results in an overall resilience score. Adding scores from either the two social sub-scales or the two cognitive sub-scales results in a social resilience or cognitive resilience score, respectively. The sub-scale scores can also be viewed as an individual profile of strengths and deficits to indicate priorities for therapeutic plans.
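As a sketch, the additive scoring described above might look like the following. The sub-scale names follow the article, but the item-to-sub-scale grouping and the rating values shown are illustrative assumptions, not the published scoring key.

```python
# Illustrative sketch of SPF-style additive scoring. The sub-scale names
# follow the article; the item groupings and rating values are invented
# for demonstration and are NOT the published scoring key.
def score_spf(ratings):
    """ratings: dict mapping sub-scale name -> list of item ratings."""
    subscales = {name: sum(items) for name, items in ratings.items()}
    social = subscales["social_support"] + subscales["social_skill"]
    cognitive = subscales["goal_efficacy"] + subscales["planning_prioritising"]
    return {
        "subscales": subscales,          # individual profile of strengths/deficits
        "social_resilience": social,
        "cognitive_resilience": cognitive,
        "overall_resilience": social + cognitive,
    }

example = {
    "social_support":        [6, 5, 7, 6, 5, 6],
    "social_skill":          [4, 5, 4, 5, 4, 5],
    "goal_efficacy":         [6, 6, 7, 5, 6, 6],
    "planning_prioritising": [3, 4, 3, 4, 3, 4],
}
profile = score_spf(example)
```

A practitioner could read the `subscales` entry as the strengths-and-deficits profile mentioned above, with the lower planning score flagging a priority for a therapeutic plan.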

This additive approach could theoretically allow divergent sub-scale scores to cancel each other out and yield a misleading overall resilience score. However, research shows that social and cognitive characteristics work together to support resilience. Nor is this concern supported by the characteristics of the SPF: rather than assessing the number of friends or the frequency of social interaction, the SPF assesses the level of comfort in interacting socially. Similarly, rather than assessing the number of goals or tasks, the SPF assesses confidence in reaching goals once set.

The sub-scales are moderately positively correlated, and all of them contribute to overall resilience.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Scale_of_Protective_Factors >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Neuropsychopharmacology?

Introduction

Neuropsychopharmacology, an interdisciplinary science related to psychopharmacology (study of effects of drugs on the mind) and fundamental neuroscience, is the study of the neural mechanisms that drugs act upon to influence behaviour.

It entails research on the mechanisms of neuropathology, pharmacodynamics (drug action), psychiatric illness, and states of consciousness. These studies are conducted at a detailed level involving neurotransmission/receptor activity, biochemical processes, and neural circuitry. Neuropsychopharmacology supersedes psychopharmacology in the areas of “how” and “why”, and additionally addresses other issues of brain function. Accordingly, the clinical aspect of the field includes psychiatric (psychoactive) as well as neurologic (non-psychoactive) pharmacology-based treatments. Developments in neuropsychopharmacology may directly impact the studies of anxiety disorders, affective disorders, psychotic disorders, degenerative disorders, eating behaviour, and sleep behaviour.

Brief History

Drugs such as opium, alcohol, and certain plants have been used for millennia by humans to ease suffering or change awareness, but until the modern scientific era knowledge of how the substances actually worked was quite limited, most pharmacological knowledge being a series of observations rather than a coherent model. The first half of the 20th century saw psychology and psychiatry as largely phenomenological, in that behaviours or themes observed in patients could often be correlated to a limited variety of factors such as childhood experience, inherited tendencies, or injury to specific brain areas. Models of mental function and dysfunction were based on such observations. Indeed, the behavioural branch of psychology dispensed altogether with what actually happened inside the brain, regarding most mental dysfunction as what could be dubbed “software” errors. In the same era, the nervous system was progressively being studied at the microscopic and chemical level, but there was virtually no mutual benefit with clinical fields until several developments after World War II began to bring them together.

Neuropsychopharmacology may be regarded as having begun in the early 1950s with the discovery of drugs such as MAO inhibitors, tricyclic antidepressants, chlorpromazine (Thorazine), and lithium, which showed some clinical specificity for mental illnesses such as depression and schizophrenia. Until that time, treatments that actually targeted these complex illnesses were practically non-existent. The prominent methods that could directly affect brain circuitry and neurotransmitter levels were the prefrontal lobotomy and electroconvulsive therapy, the latter of which was conducted without muscle relaxants; both often caused the patient great physical and psychological injury.

The field now known as neuropsychopharmacology has resulted from the growth and extension of many previously isolated fields which have met at the core of psychiatric medicine, and engages a broad range of professionals from psychiatrists to researchers in genetics and chemistry. The use of the term has gained popularity since 1990 with the founding of several journals and institutions such as the Hungarian College of Neuropsychopharmacology. This rapidly maturing field shows some degree of flux, as research hypotheses are often restructured based on new information.

Overview

An implicit premise in neuropsychopharmacology with regard to the psychological aspects is that all states of mind, including both normal and drug-induced altered states, and diseases involving mental or cognitive dysfunction, have a neurochemical basis at the fundamental level, and certain circuit pathways in the central nervous system at a higher level. Thus the understanding of nerve cells or neurons in the brain is central to understanding the mind. It is reasoned that the mechanisms involved can be elucidated through modern clinical and research methods such as genetic manipulation in animal subjects, imaging techniques such as functional magnetic resonance imaging (fMRI), and in vitro studies using selective binding agents on live tissue cultures. These allow neural activity to be monitored and measured in response to a variety of test conditions. Other important observational tools include radiological imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). These imaging techniques are extremely sensitive and can image tiny molecular concentrations on the order of 10⁻¹⁰ M, such as found with the extrastriatal D1 receptor for dopamine.

One of the ultimate goals is to devise and develop prescriptions of treatment for a variety of neuropathological conditions and psychiatric disorders. More profoundly, though, the knowledge gained may provide insight into the very nature of human thought, mental abilities like learning and memory, and perhaps consciousness itself. A direct product of neuropsychopharmacological research is the knowledge base required to develop drugs which act on very specific receptors within a neurotransmitter system. These “hyperselective-action” drugs would allow the direct targeting of specific sites of relevant neural activity, thereby maximising the efficacy (or technically the potency) of the drug within the clinical target and minimising adverse effects. However, there are some cases when some degree of pharmacological promiscuity is tolerable and even desirable, producing more desirable results than a more selective agent would. An example of this is Vortioxetine, a drug which is not particularly selective as a serotonin reuptake inhibitor, having a significant degree of serotonin modulatory activity, but which has demonstrated reduced discontinuation symptoms (and reduced likelihood of relapse) and greatly reduced incidence of sexual dysfunction, without loss in antidepressant efficacy.

The groundwork is currently being laid for the next generation of pharmacological treatments, which will improve quality of life with increasing efficiency. For example, contrary to previous thought, it is now known that the adult brain does to some extent grow new neurons – the study of which, in addition to neurotrophic factors, may hold hope for neurodegenerative diseases like Alzheimer’s, Parkinson’s, ALS, and types of chorea. All of the proteins involved in neurotransmission are a small fraction of the more than 100,000 proteins in the brain. Thus there are many proteins which are not even in the direct path of signal transduction, any of which may still be a target for specific therapy. At present, novel pharmacological approaches to diseases or conditions are reported at a rate of almost one per week.

Neurotransmission

So far as we know, everything we perceive, feel, think, know, and do is the result of neurons firing and resetting. When a cell in the brain fires, a small chemical and electrical swing called the action potential may affect the firing of as many as a thousand other neurons in a process called neurotransmission. In this way signals are generated and carried through networks of neurons, the bulk electrical effect of which can be measured directly on the scalp by an EEG device.

By the last decade of the 20th century, the essential knowledge of all the central features of neurotransmission had been gained. These features are:

  • The synthesis and storage of neurotransmitter substances;
  • The transport of synaptic vesicles and subsequent release into the synapse;
  • Receptor activation and cascade function; and
  • Transport mechanisms (reuptake) and/or enzyme degradation.
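The four stages above can be caricatured as a single update step. The quantities and rate constants below are arbitrary illustrative values, not physiological measurements.

```python
# Toy model of one neurotransmission cycle, following the four stages
# listed above. All numbers are arbitrary illustrative values.
def synapse_step(state, fired, synthesis=10, reuptake_rate=0.5):
    vesicles, cleft = state
    vesicles += synthesis              # 1. synthesis and storage of transmitter
    if fired:                          # 2. vesicle transport and release
        cleft += vesicles
        vesicles = 0
    bound = cleft > 0                  # 3. receptor activation (crudely: any
                                       #    transmitter in the cleft binds)
    cleft -= reuptake_rate * cleft     # 4. reuptake / enzymatic degradation
    return (vesicles, cleft), bound

state, bound = synapse_step((0, 0.0), fired=False)   # transmitter accumulates
state, bound = synapse_step(state, fired=True)       # release and binding
```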

The more recent advances involve understanding at the organic molecular level: the biochemical action of the endogenous ligands, enzymes, receptor proteins, and so on. The critical changes affecting cell firing occur when the signalling neurotransmitters from one neuron, acting as ligands, bind to receptors of another neuron. Many neurotransmitter systems and receptors are well known, and research continues toward the identification and characterisation of a large number of very specific subtypes of receptors. For the six most important neurotransmitters (Glu, GABA, ACh, NE, DA, and 5HT) there are at least 29 major subtypes of receptor. Further “sub-subtypes” exist together with variants, totalling in the hundreds for just these six transmitters (the serotonin receptors, for example). It is often found that receptor subtypes have differentiated function, which in principle opens up the possibility of refined intentional control over brain function.

It has long been known that ultimate control over the membrane voltage or potential of a nerve cell, and thus the firing of the cell, resides with the transmembrane ion channels, which control the membrane currents via the ions K⁺, Na⁺, and Ca²⁺ and, of lesser importance, Mg²⁺ and Cl⁻. The concentration differences between the inside and outside of the cell determine the membrane voltage.
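The dependence of the membrane voltage on concentration differences is captured quantitatively by the Nernst equation. A quick sketch, using typical textbook concentrations for a mammalian neuron rather than values from this article:

```python
import math

# Nernst equation: E = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside).
# The concentrations (in mM) are typical textbook values for a mammalian
# neuron, used here purely for illustration.
R = 8.314     # gas constant, J/(mol K)
T = 310.0     # body temperature, K
F = 96485.0   # Faraday constant, C/mol

def nernst_mv(z, out_mM, in_mM):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(out_mM / in_mM)

E_K  = nernst_mv(+1, out_mM=5.0,   in_mM=140.0)   # K+ is concentrated inside
E_Na = nernst_mv(+1, out_mM=145.0, in_mM=12.0)    # Na+ is concentrated outside
```

The opposite signs of `E_K` (about −89 mV) and `E_Na` (about +67 mV) are what allow opening K⁺ or Na⁺ channels to pull the membrane voltage in opposite directions.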

Precisely how these currents are controlled has become much clearer with the advances in receptor structure and G-protein coupled processes. Many receptors are found to be pentameric clusters of five transmembrane proteins (not necessarily the same) or receptor subunits, each a chain of many amino acids. Transmitters typically bind at the junction between two of these proteins, on the parts that protrude from the cell membrane. If the receptor is of the ionotropic type, a central pore or channel in the middle of the proteins will be mechanically moved to allow certain ions to flow through, thus altering the ion concentration difference. If the receptor is of the metabotropic type, G-proteins will trigger metabolic processes inside the cell that may eventually change other ion channels. Researchers are coming to understand precisely how these changes occur based on the protein structure shapes and chemical properties.

The scope of this activity has been stretched even further to the very blueprint of life since the clarification of the mechanism underlying gene transcription. The synthesis of cellular proteins from nuclear DNA has the same fundamental machinery for all cells; the exploration of which now has a firm basis thanks to the Human Genome Project, which has enumerated the entire human DNA sequence, although many of the estimated 35,000 genes remain to be identified. The complete neurotransmission process extends to the genetic level. Gene expression determines protein structures through RNA polymerase II. So the enzymes which synthesise or break down neurotransmitters, the receptors, and the ion channels are each made from mRNA via the DNA transcription of their respective gene or genes. But neurotransmission, in addition to controlling ion channels either directly or otherwise through metabotropic processes, also actually modulates gene expression. This is most prominently achieved through modification of the transcription initiation process by a variety of transcription factors produced from receptor activity.

Aside from the important pharmacological possibilities of gene expression pathways, the correspondence of a gene with its protein allows the important analytical tool of gene knockout. Living specimens can be created using homologous recombination in which a specific gene cannot be expressed. The organism will then be deficient in the associated protein, which may be a specific receptor. This method avoids chemical blockade, which can produce confusing or ambiguous secondary effects, so that the effects of a lack of receptor can be studied in a purer sense.

Drugs

The inception of many classes of drugs is in principle straightforward: any chemical that can enhance or diminish the action of a target protein could be investigated further for such use. The trick is to find a chemical that is receptor-specific (cf. “dirty drug”) and safe to consume. The 2005 Physicians’ Desk Reference lists twice the number of prescription drugs as the 1990 version. Many people are by now familiar with “selective serotonin reuptake inhibitors”, or SSRIs, which exemplify modern pharmaceuticals. These SSRI antidepressants, such as Paxil and Prozac, selectively and therefore primarily inhibit the transport of serotonin, prolonging its activity in the synapse. There are numerous categories of selective drugs, and transport blockage is only one mode of action. The FDA has approved drugs which selectively act on each of the major neurotransmitters, such as NE reuptake inhibitor antidepressants, DA blocker antipsychotics, and GABA agonist tranquilisers (benzodiazepines).

New endogenous chemicals are continually identified. Specific receptors have been found for the drugs THC (cannabis) and GHB, with endogenous transmitters anandamide and GHB. Another major discovery occurred in 1999 when orexin, or hypocretin, was found to have a role in arousal, since the lack of orexin receptors mirrors the condition of narcolepsy. Orexin agonism may explain the antinarcoleptic action of the drug modafinil, which had come into use only a year earlier.

The next step, which major pharmaceutical companies are currently working hard to develop, is receptor subtype-specific drugs and other specific agents. An example is the push for better anti-anxiety agents (anxiolytics) based on GABAA(α2) agonists, CRF1 antagonists, and 5HT2c antagonists. Another is the proposal of new routes of exploration for antipsychotics such as glycine reuptake inhibitors. Although the capabilities exist for receptor-specific drugs, a shortcoming of drug therapy is the lack of ability to provide anatomical specificity. By altering receptor function in one part of the brain, abnormal activity can be induced in other parts of the brain due to the same type of receptor changes. A common example is the effect of D2-altering drugs (neuroleptics), which can help schizophrenia but cause a variety of dyskinesias through their action on the motor cortex.

Modern studies are revealing details of mechanisms of damage to the nervous system such as apoptosis (programmed cell death) and free-radical disruption. Phencyclidine has been found to cause cell death in striatopallidal cells and abnormal vacuolisation in hippocampal and other neurons. Hallucinogen persisting perception disorder (HPPD), also known as post-psychedelic perception disorder, has been observed in patients as long as 26 years after LSD use. A plausible cause of HPPD is damage to the inhibitory GABA circuit in the visual pathway (GABA agonists such as midazolam can decrease some effects of LSD intoxication). The damage may be the result of an excitotoxic response of 5HT2 interneurons (note that the vast majority of LSD users do not experience HPPD; its manifestation may depend as much on individual brain chemistry as on the drug use itself). As for MDMA, aside from persistent losses of 5HT and SERT, long-lasting reduction of serotonergic axons and terminals is found from short-term use, and regrowth may be of compromised function.

Neural Circuits

It has long been known that many functions of the brain are somewhat localised to associated areas, such as those for motor and speech ability. Functional associations of brain anatomy are now being complemented with clinical, behavioural, and genetic correlates of receptor action, completing the knowledge of neural signalling (refer to the Human Cognome Project). The signal paths of neurons are organised beyond the cellular scale into often complex neural circuit pathways. Knowledge of these pathways is perhaps the easiest to interpret, being most recognisable from a systems analysis point of view, as may be seen in the following abstracts.

Almost all drugs with a known potential for abuse have been found to modulate activity (directly or indirectly) in the mesolimbic dopamine system, which includes and connects the ventral tegmental area in the midbrain to the hippocampus, medial prefrontal cortex, and amygdala in the forebrain, as well as the nucleus accumbens in the ventral striatum of the basal ganglia. In particular, the nucleus accumbens (NAc) plays an important role in integrating experiential memory from the hippocampus, emotion from the amygdala, and contextual information from the PFC to help associate particular stimuli or behaviours with feelings of pleasure and reward. Continuous activation of this reward system by an addictive drug can also cause previously neutral stimuli to be encoded as cues that the brain is about to receive a reward. This happens via the selective release of dopamine, a neurotransmitter responsible for feelings of euphoria and pleasure. The use of dopaminergic drugs alters the amount of dopamine released throughout the mesolimbic system, and regular or excessive use of the drug can result in a long-term downregulation of dopamine signalling, even after an individual stops ingesting the drug. This can lead the individual to engage in mild to extreme drug-seeking behaviours as the brain begins to regularly expect the increased presence of dopamine and the accompanying feelings of euphoria, but how problematic this is depends highly on the drug and the situation.

Significant progress has been made on the central mechanisms of certain hallucinogenic drugs. It is at this point known with relative certainty that the primary shared effects of a broad pharmacological group of hallucinogens, sometimes called the “classical psychedelics”, can be attributed largely to agonism of serotonin receptors. The 5HT2A receptor, which seems to be the most critical receptor for psychedelic activity, and the 5HT2C receptor, a significant target of most psychedelics with no clear role in hallucinogenesis, act by releasing glutamate in the frontal cortex; simultaneously, in the locus coeruleus, sensory information is promoted and spontaneous activity decreases. 5HT2A activity has a net pro-dopaminergic effect, whereas 5HT2C receptor agonism has an inhibitory effect on dopaminergic activity, particularly in the prefrontal cortex. One hypothesis suggests that in the frontal cortex, 5HT2A promotes late asynchronous excitatory postsynaptic potentials, a process antagonised by serotonin itself through 5HT1 receptors, which may explain why SSRIs and other serotonin-affecting drugs do not normally cause a patient to hallucinate. However, the fact that many classical psychedelics do in fact have significant affinity for 5HT1 receptors throws this claim into question. The head twitch response, a test used for assessing classical psychedelic activity in rodents, is produced by serotonin itself only in the presence of β-arrestins, but is triggered by classical psychedelics independent of β-arrestin recruitment. This may better explain the difference between the pharmacology of serotonergic neurotransmission (even when promoted by drugs such as SSRIs) and that of classical psychedelics. Newer findings, however, indicate that binding to the 5HT2A-mGlu2 heterodimer is also necessary for classical psychedelic activity; this, too, may be relevant to the pharmacological differences between the two.

While early in the history of psychedelic drug research it was assumed that these hallucinations were comparable to those produced by psychosis, and thus that classical psychedelics could serve as a model of psychosis, modern neuropsychopharmacological knowledge of psychosis has progressed significantly since then, and we now know that psychosis shows little similarity to the effects of classical psychedelics in mechanism, reported experience, or most other respects aside from the surface similarity of “hallucination”.

Circadian rhythm, or sleep/wake cycling, is centred in the suprachiasmatic nucleus (SCN) within the hypothalamus, and is marked by melatonin levels 2,000-4,000% higher during sleep than in the day. A circuit is known to start with the melanopsin cells in the eye, which stimulate the SCN through glutamate neurons of the hypothalamic tract. GABAergic neurons from the SCN inhibit the paraventricular nucleus, which signals the superior cervical ganglion (SCG) through sympathetic fibres. The output of the SCG stimulates NE receptors (β) in the pineal gland, which produces N-acetyltransferase, causing production of melatonin from serotonin. Inhibitory melatonin receptors in the SCN then provide a positive feedback pathway. Therefore, light inhibits the production of melatonin, which “entrains” the 24-hour cycle of SCN activity. The SCN also receives signals from other parts of the brain, and its (approximately) 24-hour cycle does not depend only on light patterns. In fact, sectioned tissue from the SCN will exhibit a daily cycle in vitro for many days. Additionally, the basal nucleus provides GABAergic inhibitory input to the pre-optic anterior hypothalamus (PAH). When adenosine builds up from the metabolism of ATP throughout the day, it binds to adenosine receptors, inhibiting the basal nucleus. The PAH is then activated, generating slow-wave sleep activity. Caffeine is known to block adenosine receptors, thereby inhibiting sleep among other things.
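The net effect of the light-entrainment loop described above is that light suppresses pineal melatonin output. A minimal sketch: the daylight window and the roughly 30-fold night-time elevation below are illustrative stand-ins for the 2,000-4,000% figure, not measured values.

```python
# Minimal sketch of the light -> SCN -> pineal chain described above:
# light suppresses melatonin synthesis. The daylight hours and the ~30x
# night-time multiplier are illustrative assumptions only.
def melatonin_level(hour, daytime=1.0, night_multiplier=30.0):
    light_on = 7 <= hour < 22            # rough daylight window (assumption)
    # Light -> melanopsin -> SCN; via the PVN/SCG chain, pineal
    # N-acetyltransferase is suppressed, so melatonin stays low.
    return daytime if light_on else daytime * night_multiplier

day_peak   = melatonin_level(12)   # midday
night_peak = melatonin_level(2)    # middle of the night
```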

Research

Research in the field of neuropsychopharmacology encompasses a wide range of objectives. These might include the study of a new chemical compound for potentially beneficial cognitive or behavioural effects, or the study of an old chemical compound in order to better understand its mechanism of action at the cell and neural circuit levels. For example, the addictive stimulant drug cocaine has long been known to act upon the reward system in the brain, increasing dopamine and norepinephrine levels and inducing euphoria for a short time. More recently published studies, however, have gone deeper than the circuit level and found that a particular G-protein coupled receptor complex called A2AR-D2R-Sigma1R is formed in the NAc following cocaine usage; this complex reduces D2R signalling in the mesolimbic pathway and may be a contributing factor to cocaine addiction. Other cutting-edge studies have focused on genetics to identify specific biomarkers that may predict an individual’s specific reactions or degree of response to a drug, or their tendency to develop addictions in the future. These findings are important because they provide detailed insight into the neural circuitry involved in drug use and help refine old as well as develop new treatment methods for disorders or addictions. Different treatment-related studies are investigating the potential role of peptide nucleic acids in treating Parkinson’s disease and schizophrenia, while still others are attempting to establish previously unknown neural correlates underlying certain phenomena.

Research in neuropsychopharmacology draws on a wide range of activities in neuroscience and clinical research. This has motivated organisations such as the American College of Neuropsychopharmacology (ACNP), the European College of Neuropsychopharmacology (ECNP), and the Collegium Internationale Neuro-psychopharmacologicum (CINP) to be established as focal points for the field. The ECNP publishes European Neuropsychopharmacology; the ACNP publishes the journal Neuropsychopharmacology (as part of the Reed Elsevier Group); and the CINP publishes the International Journal of Neuropsychopharmacology with Cambridge University Press. In 2002, the ACNP compiled the comprehensive collected work “Neuropsychopharmacology: The Fifth Generation of Progress”. It is one measure of the state of knowledge in 2002, and might be said to represent a landmark in the century-long goal to establish the basic neurobiological principles which govern the actions of the brain.

Many other journals exist which contain relevant information such as Neuroscience. Some of them are listed at Brown University Library.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Neuropsychopharmacology >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is the Scale of Protective Factors?

Introduction

The Scale of Protective Factors (SPF) is a measure of aspects of social relationships, planning behaviours and confidence.

These factors contribute to psychological resilience in emerging adults and adults.

Brief History

The SPF was developed by Dr. Elisabeth Ponce-Garcia at the science of protective factors laboratory (SPF Lab) to capture multiple aspects of adult resilience. A Confirmatory Factor Analysis was subsequently published as collaborative research. The SPF was found to assess resilience effectively in both men and women, across risk and socio-economic status, and ethnic/racial categories.

In order to verify effectiveness in comparison to other measures, Madewell and Ponce-Garcia (2016) analysed the SPF and four other commonly used measures of adult resilience. They found that the SPF was the only measure that assessed social and cognitive aspects and that it outperformed three other measures and performed comparably with a fourth.

The structure of the SPF in comparison to four other adult resilience measures, as well as comparison data, is available as a Data in Brief article. Noticing the absence of research examining the effectiveness of adult resilience measures in child or adult sexual assault, Ponce-Garcia, Madewell and Brown (2016) demonstrated SPF’s effectiveness in that domain. An investigation of the effectiveness of the SPF in the Southern Plains Tribes of the Native American and American Indian community in 2016.

A brief version of the 24 item SPF was developed in 2019 to result in 12 item measure that can be taken as a self-assessment. The SPF-24 and the SPF-12 have been used throughout the United States and in several other countries to include Saudi Arabia, Pakistan, India, Australia, Malesia, Paraguay, Mexico, and Canada. It is listed as a resource by Harvard University, was included in the United States Army Substance Abuse Programme (ASAP-Fort Sill, OK), and is provided by the State of Oklahoma ReEntry Programme.

Contents

The SPF consists of twenty-four statements for which individuals are asked to rate the degree to which each statement describes them. The SPF assesses a wider range of protective factors than other scales. The SPF is the only measure that has been shown to assess social and cognitive protective factors. The SPF includes four sub-scales that indicate the strengths and weaknesses that contribute to overall resilience. The SPF is the only measure to have been used in measuring resilience in sexual assault survivors within the United States.

Properties

The SPF consists of four sub-scales, two social protective factors and two cognitive protective factors.

Social Subscales

Social support measures the availability of social resources in the form of family and/or friends. Social skill measures the ability to make and maintain relationships. The two should be positively correlated. Higher scores on the social sub-scales indicate unity with friends and/or family, friend/family group optimism and general friend/family support.

Cognitive Subscales

The goal efficacy sub-scale measures confidence in the ability to achieve goals. The planning and prioritising behaviour sub-scale measures the ability to recognise the relative importance of tasks, the tendency to approach tasks in order of importance, and the use of lists for organisation.

Scoring

Adding the scores from the four sub-scales results in an overall resilience score. Adding scores from either the two social sub-scales or the two cognitive sub-scales results in a social resilience or cognitive resilience score, respectively. The sub-scale scores can also be viewed as an individual profile of strengths and deficits to indicate priorities for therapeutic plans.
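The additive scoring described above can be sketched in a few lines of Python. This is an illustrative sketch only: the sub-scale names follow the article, but the example score values are invented for demonstration and do not reflect the published SPF scoring key.

```python
# Illustrative sketch of the additive SPF scoring described above.
# Sub-scale names follow the article; the numeric values below are
# hypothetical examples, not real SPF data.

def score_spf(subscale_scores):
    """Combine the four sub-scale totals into composite scores.

    subscale_scores: dict with keys 'social_support', 'social_skill',
    'goal_efficacy', and 'planning_prioritising'.
    """
    social = subscale_scores["social_support"] + subscale_scores["social_skill"]
    cognitive = (subscale_scores["goal_efficacy"]
                 + subscale_scores["planning_prioritising"])
    return {
        "social_resilience": social,        # sum of the two social sub-scales
        "cognitive_resilience": cognitive,  # sum of the two cognitive sub-scales
        "overall_resilience": social + cognitive,
    }

# Hypothetical profile: stronger social than cognitive protective factors.
scores = score_spf({
    "social_support": 25,
    "social_skill": 22,
    "goal_efficacy": 18,
    "planning_prioritising": 15,
})
print(scores)  # overall = 80, social = 47, cognitive = 33
```

Keeping the four sub-scale totals alongside the composite, as above, preserves the individual profile of strengths and deficits the article describes.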

This additive approach could theoretically allow varying subscale scores to cancel each other out and incorrectly indicate low overall resilience. However, research shows that social and cognitive characteristics work together to support resilience. This concern is also not supported by the characteristics of the SPF. Rather than assessing the number of friends or the frequency of social interaction, the SPF assesses the level of comfort in interacting socially. Similarly, rather than assessing the number of goals or tasks, the SPF assesses confidence in reaching goals once set.

The sub-scales are moderately positively correlated, and all four contribute to overall resilience.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Scale_of_Protective_Factors >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is the Health Belief Model?

Introduction

The health belief model (HBM) is a social psychological health behaviour change model developed to explain and predict health-related behaviours, particularly in regard to the uptake of health services.

The HBM was developed in the 1950s by social psychologists at the US Public Health Service and remains one of the best known and most widely used theories in health behaviour research. The HBM suggests that people’s beliefs about health problems, perceived benefits of action and barriers to action, and self-efficacy explain engagement (or lack of engagement) in health-promoting behaviour. A stimulus, or cue to action, must also be present in order to trigger the health-promoting behaviour.

Original Health Belief Model.

Brief History

One of the first theories of health behaviour, the HBM was developed in the 1950s by social psychologists Irwin M. Rosenstock, Godfrey M. Hochbaum, S. Stephen Kegeles, and Howard Leventhal at the US Public Health Service. At that time, researchers and health practitioners were worried because few people were getting screened for tuberculosis (TB), even when mobile X-ray cars came to their neighbourhoods. The HBM has been applied to predict a wide variety of health-related behaviours, such as being screened for the early detection of asymptomatic diseases and receiving immunisations. More recently, the model has been applied to understand intentions to vaccinate (e.g. COVID-19), responses to symptoms of disease, compliance with medical regimens, lifestyle behaviours (e.g. sexual risk behaviours), and behaviours related to chronic illnesses, which may require long-term behaviour maintenance in addition to initial behaviour change. Amendments to the model were made as late as 1988 to incorporate emerging evidence within the field of psychology about the role of self-efficacy in decision-making and behaviour.

Health Belief Model in action.

Theoretical Constructs

The HBM theoretical constructs originate from theories in cognitive psychology. In the early twentieth century, cognitive theorists believed that reinforcements operated by affecting expectations rather than by affecting behaviour directly. Cognitive theories of this kind are seen as expectancy-value models, because they propose that behaviour is a function of the degree to which people value a result and their expectation that a particular action will lead to that result. In terms of health-related behaviours, the value is avoiding sickness. The expectation is that a certain health action could prevent the condition for which people consider themselves at risk.
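The expectancy-value idea can be illustrated with a minimal sketch in which motivation to act is modelled as the product of the value placed on an outcome and the expectancy that the action produces it. The 0-to-1 scales and the multiplicative combination rule here are illustrative assumptions, not a formal part of the HBM.

```python
# Minimal expectancy-value sketch: motivation to act is modelled as
# (value of the outcome) x (expectancy that the action yields it).
# The [0, 1] scales and the multiplicative rule are assumptions for
# illustration, not part of the published HBM.

def motivation(value, expectancy):
    """Return an expectancy-value motivation score in [0, 1]."""
    if not (0.0 <= value <= 1.0 and 0.0 <= expectancy <= 1.0):
        raise ValueError("value and expectancy must lie in [0, 1]")
    return value * expectancy

# A person who highly values avoiding illness (0.9) but doubts the
# action will help (expectancy 0.2) comes out less motivated than one
# with moderate value (0.6) but high expectancy (0.8).
print(round(motivation(0.9, 0.2), 2))  # 0.18
print(round(motivation(0.6, 0.8), 2))  # 0.48
```

The point of the sketch is only that both components matter: a high value with a near-zero expectancy (or vice versa) yields little motivation, which mirrors the model's claim that behaviour depends on both valuing the result and expecting the action to produce it.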

The following constructs of the HBM are proposed to vary between individuals and predict engagement in health-related behaviours.

Perceived Susceptibility

Perceived susceptibility refers to subjective assessment of risk of developing a health problem. The HBM predicts that individuals who perceive that they are susceptible to a particular health problem will engage in behaviours to reduce their risk of developing the health problem. Individuals with low perceived susceptibility may deny that they are at risk for contracting a particular illness. Others may acknowledge the possibility that they could develop the illness, but believe it is unlikely. Individuals who believe they are at low risk of developing an illness are more likely to engage in unhealthy, or risky, behaviours. Individuals who perceive a high risk that they will be personally affected by a particular health problem are more likely to engage in behaviours to decrease their risk of developing the condition.

The combination of perceived severity and perceived susceptibility is referred to as perceived threat. Perceived severity and perceived susceptibility to a given health condition depend on knowledge about the condition. The HBM predicts that higher perceived threat leads to a higher likelihood of engagement in health-promoting behaviours.

Perceived Severity

Perceived severity refers to the subjective assessment of the severity of a health problem and its potential consequences. The HBM proposes that individuals who perceive a given health problem as serious are more likely to engage in behaviours to prevent the health problem from occurring (or reduce its severity). Perceived seriousness encompasses beliefs about the disease itself (e.g. whether it is life-threatening or may cause disability or pain) as well as broader impacts of the disease on functioning in work and social roles. For instance, an individual may perceive that influenza is not medically serious, but if he or she perceives that there would be serious financial consequences as a result of being absent from work for several days, then he or she may perceive influenza to be a particularly serious condition.

Through studying Australians’ 2019 self-reports of receiving the influenza vaccine, researchers found that perceived severity predicted the likelihood of vaccination. To measure perceived severity, they asked, “On a scale from 0 to 10, how severe do you think the flu would be if you got it?” They found that 31% perceived the severity of getting the flu as low, 44% as moderate, and 25% as high. Those with high perceived severity were significantly more likely to have received the vaccine than those with moderate perceived severity, while self-reported vaccination was similar for individuals with low and moderate perceived severity.

Perceived Benefits

Health-related behaviours are also influenced by the perceived benefits of taking action. Perceived benefits refer to an individual’s assessment of the value or efficacy of engaging in a health-promoting behaviour to decrease risk of disease. If an individual believes that a particular action will reduce susceptibility to a health problem or decrease its seriousness, then he or she is likely to engage in that behaviour regardless of objective facts regarding the effectiveness of the action. For example, individuals who believe that wearing sunscreen prevents skin cancer are more likely to wear sunscreen than individuals who believe that wearing sunscreen will not prevent the occurrence of skin cancer.

Perceived Barriers

Health-related behaviours are also a function of perceived barriers to taking action. Perceived barriers refer to an individual’s assessment of the obstacles to behaviour change. Even if an individual perceives a health condition as threatening and believes that a particular action will effectively reduce the threat, barriers may prevent engagement in the health-promoting behaviour. In other words, the perceived benefits must outweigh the perceived barriers in order for behaviour change to occur. Perceived barriers to taking action include the perceived inconvenience, expense, danger (e.g. side effects of a medical procedure) and discomfort (e.g. pain, emotional upset) involved in engaging in the behaviour. For instance, lack of access to affordable health care and the perception that a flu vaccine shot will cause significant pain may act as barriers to receiving the flu vaccine. In a study of breast and cervical cancer screening among Hispanic women, perceived barriers, such as fear of cancer, embarrassment, fatalistic views of cancer, and language, were shown to impede screening.

Modifying Variables

Individual characteristics, including demographic, psychosocial, and structural variables, can affect perceptions (i.e. perceived seriousness, susceptibility, benefits, and barriers) of health-related behaviours. Demographic variables include age, sex, race, ethnicity, and education, among others. Psychosocial variables include personality, social class, and peer and reference group pressure, among others. Structural variables include knowledge about a given disease and prior contact with the disease, among other factors. The HBM suggests that modifying variables affect health-related behaviours indirectly by affecting perceived seriousness, susceptibility, benefits, and barriers.

Cues to Action

The HBM posits that a cue, or trigger, is necessary for prompting engagement in health-promoting behaviours. Cues to action can be internal or external. Physiological cues (e.g. pain, symptoms) are an example of internal cues to action. External cues include events or information from close others, the media, or health care providers promoting engagement in health-related behaviours. Examples of cues to action include a reminder postcard from a dentist, the illness of a friend or family member, mass media campaigns on health issues, and product health warning labels. The intensity of cues needed to prompt action varies between individuals by perceived susceptibility, seriousness, benefits, and barriers. For example, individuals who believe they are at high risk for a serious illness and who have an established relationship with a primary care doctor may be easily persuaded to get screened for the illness after seeing a public service announcement, whereas individuals who believe they are at low risk for the same illness and also do not have reliable access to health care may require more intense external cues in order to get screened.

Self-Efficacy

Self-efficacy was added to the four components of the HBM (i.e. perceived susceptibility, severity, benefits, and barriers) in 1988. Self-efficacy refers to an individual’s perception of his or her competence to successfully perform a behaviour. Self-efficacy was added to the HBM in an attempt to better explain individual differences in health behaviours. The model was originally developed in order to explain engagement in one-time health-related behaviours such as being screened for cancer or receiving an immunisation. Eventually, the HBM was applied to more substantial, long-term behaviour change such as diet modification, exercise, and smoking. Developers of the model recognised that confidence in one’s ability to effect change in outcomes (i.e. self-efficacy) was a key component of health behaviour change. For example, Schmiege et al. found that when dealing with calcium consumption and weight-bearing exercises, self-efficacy was a more powerful predictor than beliefs about future negative health outcomes.

Rosenstock et al. argued that self-efficacy could be added to the other HBM constructs without elaboration of the model’s theoretical structure. However, this was considered short-sighted because related studies indicated that key HBM constructs have indirect effects on behaviour as a result of their effect on perceived control and intention, which might be regarded as more proximal factors of action.

Empirical Support

The HBM has gained substantial empirical support since its development in the 1950s. It remains one of the most widely used and well-tested models for explaining and predicting health-related behaviour. A 1984 review of 18 prospective and 28 retrospective studies suggests that the evidence for each component of the HBM is strong. The review reports that empirical support for the HBM is particularly notable given the diverse populations, health conditions, and health-related behaviours examined and the various study designs and assessment strategies used to evaluate the model. A more recent meta-analysis found strong support for perceived benefits and perceived barriers predicting health-related behaviours, but weak evidence for the predictive power of perceived seriousness and perceived susceptibility. The authors of the meta-analysis suggest that examination of potential moderated and mediated relationships between components of the model is warranted.

Several studies have provided empirical support from the chronic illness perspective. Becker et al. used the model to predict and explain mothers’ adherence to diets prescribed for their obese children. Cerkoney et al. interviewed insulin-treated diabetic individuals after diabetic classes at a community hospital, empirically testing the HBM’s association with the compliance levels of persons chronically ill with diabetes mellitus.

Applications

The HBM has been used to develop effective interventions to change health-related behaviours by targeting various aspects of the model’s key constructs. Interventions based on the HBM may aim to increase perceived susceptibility to and perceived seriousness of a health condition by providing education about prevalence and incidence of disease, individualised estimates of risk, and information about the consequences of disease (e.g. medical, financial, and social consequences). Interventions may also aim to alter the cost-benefit analysis of engaging in a health-promoting behaviour (i.e. increasing perceived benefits and decreasing perceived barriers) by providing information about the efficacy of various behaviours to reduce risk of disease, identifying common perceived barriers, providing incentives to engage in health-promoting behaviours, and engaging social support or other resources to encourage health-promoting behaviours. Furthermore, interventions based on the HBM may provide cues to action to remind and encourage individuals to engage in health-promoting behaviours. Interventions may also aim to boost self-efficacy by providing training in specific health-promoting behaviours, particularly for complex lifestyle changes (e.g. changing diet or physical activity, adhering to a complicated medication regimen). Interventions can be aimed at the individual level (i.e. working one-on-one with individuals to increase engagement in health-related behaviours) or the societal level (e.g. through legislation, changes to the physical environment, mass media campaigns).

Multiple studies have used the Health Belief Model to understand an individual’s intention to change a particular behaviour and the factors that influence their ability to do so. Researchers analysed the correlation between young adult women’s intention to stop smoking and the perceived factors within the constructs of the HBM. The intention to stop smoking among young adult women had a significant correlation with the perceived factors of the Health Belief Model.

Another use of the HBM was a 2016 study examining the factors associated with physical activity among people with mental illness (PMI) in Hong Kong (Mo et al., 2016). The study used the HBM because it is one of the most frequently used models for explaining health behaviours, and it served as a framework for understanding PMI physical activity levels. The study had 443 PMI complete the survey, with a mean age of 45 years. The survey found that among the HBM variables, perceived barriers were significant in predicting physical activity. Additionally, the research demonstrated that self-efficacy had a positive correlation with physical activity among PMI. These findings support previous literature that self-efficacy and perceived barriers play a significant role in physical activity and should be included in interventions. The study also noted that participants acknowledged that most of their attention is focused on their psychiatric conditions, with little focus on their physical health needs.

This study is important to note in regard to the HBM because it illustrates how culture can play a role in the model. Chinese culture holds different health beliefs than those common in the United States, placing a greater emphasis on fate and the balance of spiritual harmony than on physical fitness. Since the HBM does not consider these outside variables, this highlights a limitation of the model: multiple factors can impact health decisions, not just the ones noted in the model.

Applying the Health Belief Model to Women’s Safety Movements

Movements such as the #MeToo movement and current political tensions surrounding abortion laws have moved women’s rights and violence against women to the forefront of topical conversation. Additionally, many organisations, such as Women On Guard, have begun to place emphasis on educating women about what measures to take in order to increase their safety when walking alone at night. The murder of Sarah Everard on 3 March 2021 placed further attention on the need for women to protect themselves and stay vigilant when walking alone at night. Everard was kidnapped and murdered while walking home from work in South London, England. The health belief model can provide insight into the steps that need to be taken in order to reach more women and convince them to take the necessary steps to increase safety when walking alone.

Perceived Susceptibility

As stated, perceived susceptibility refers to how susceptible an individual perceives themselves to be to a given risk. In the case of encountering violence while walking alone, research shows that many women perceive themselves as highly susceptible to the risk of being attacked. Studies show that around 50% of women feel unsafe when walking alone at night. Since women may already have increased perceived susceptibility to night-time violence, according to the health belief model they may be more apt to engage in behaviour changes that increase their safety or help them defend themselves.

Perceived Severity

As the statistics on perceived susceptibility demonstrate, many women feel they are at risk of encountering night-time violence. Women also tend to have a higher perception of the severity of such violence, as stories like the tragic death of Sarah Everard demonstrate that night-time attacks can be not only severe but fatal.

Perceived Benefits and Barriers

As the health belief model states, individuals must consider the potential benefits of adopting the change in behaviour that is being suggested to them. In the case of night-time violence against women, organisations that seek to prevent it use advertising to demonstrate that the benefits of tools such as pocket knives, pepper spray, self-defence classes, alarm systems, and travelling with a “buddy” can outweigh barriers such as the cost, time, and other inconveniences that pursuing these changes in behaviour may require. The benefit of implementing these behaviours is that women could feel safer when walking alone at night.

Modifying Variables

It is no surprise that the modifying variable of gender plays a large role in applying the health belief model to women’s safety movements. While studies show that around 50% of women feel unsafe walking at night, fewer than one fifth of men report the same fear and discomfort. Gender thus strongly shapes how night-time violence is perceived, and according to the model, women may be more likely than men to change their behaviour to prevent night-time violence.

Cues to Action

Cues to action are perhaps the most powerful part of the health belief model and of getting individuals to change their behaviour. In regard to preventing night-time violence against women, stories of horrific violent acts committed against women walking at night serve as external cues to action that can spur individuals to take the necessary precautions and make the necessary changes to their behaviour in order to reduce the likelihood of encountering night-time violence. Cues to action also feed into increased perceived susceptibility and severity of the given risk.

Self-Efficacy

Self-efficacy is another important factor, both in the health belief model and in behaviour change in general. When people believe that they actually have the power to prevent a given risk, they are more likely to take the appropriate measures to do so. When individuals believe that they cannot change their behaviour or prevent the risk no matter what they do, they are less likely to engage in behaviour to stop the risk. This concept factors greatly into initiatives to help women defend themselves against night-time violence because, based on the statistics, many women do feel that if they carry items such as tasers, pepper spray, or alarms they will be able to defend themselves against attackers. Organisations, such as community centres, also offer self-defence classes, which teach individuals that they have the power to defend themselves and help them acquire the proper skills, thereby increasing self-efficacy.

Night-time violence against women is an issue of safety and wellness, which makes it applicable to a health belief model approach. Defence and preparation for night-time violence can require behavioural changes on the part of women if they feel that doing so will help them protect themselves should they ever be attacked.

Limitations

The HBM attempts to predict health-related behaviours by accounting for individual differences in beliefs and attitudes. However, it does not account for other factors that influence health behaviours. For instance, habitual health-related behaviours (e.g. smoking, seatbelt buckling) may become relatively independent of conscious health-related decision-making processes. Additionally, individuals engage in some health-related behaviours for reasons unrelated to health (e.g. exercising for aesthetic reasons). Environmental factors outside an individual’s control may prevent engagement in desired behaviours. For example, an individual living in a dangerous neighbourhood may be unable to go for a jog outdoors due to safety concerns. Furthermore, the HBM does not consider the impact of emotions on health-related behaviour. Evidence suggests that fear may be a key factor in predicting health-related behaviour.

Alternative factors may predict health behaviour, such as outcome expectancy (i.e. whether the person feels they will be healthier as a result of their behaviour) and self-efficacy (i.e. the person’s belief in their ability to carry out preventive behaviour).

The theoretical constructs that constitute the HBM are broadly defined. Furthermore, the HBM does not specify how constructs of the model interact with one another. Therefore, different operationalisations of the theoretical constructs may not be strictly comparable across studies.

Research assessing the contribution of cues to action in predicting health-related behaviours is limited. Cues to action are often difficult to assess, limiting research in this area. For instance, individuals may not accurately report cues that prompted behaviour change. Cues such as a public service announcement on television or on a billboard may be fleeting and individuals may not be aware of their significance in prompting them to engage in a health-related behaviour. Interpersonal influences are also particularly difficult to measure as cues.

Another reason why research does not always support the HBM is that factors other than health beliefs also heavily influence health behaviour practices. These factors may include social influences, cultural factors, socioeconomic status, and previous experiences. Scholars have extended the HBM by adding four more variables (self-identity, perceived importance, consideration of future consequences, and concern for appearance) as possible determinants of healthy behaviour. They found that consideration of future consequences, self-identity, concern for appearance, perceived importance, self-efficacy, and perceived susceptibility are significant determinants of healthy eating behaviour that can be manipulated by healthy eating intervention design.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Health_belief_model >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Social Inhibition?

Introduction

Social inhibition is a conscious or subconscious avoidance of a situation or social interaction.

With a high level of social inhibition, individuals avoid situations because of the possibility that others will disapprove of their feelings or expressions. Social inhibition is related to behaviour, appearance, social interactions, or a subject matter for discussion. Related processes that deal with social inhibition are social evaluation concerns, anxiety in social interaction, social avoidance, and withdrawal.

Also related are components such as cognitive brain patterns, anxious apprehension during social interactions, and internalising problems. It also describes those who suppress anger, restrict social behaviour, withdraw in the face of novelty, and have a long latency to interact with strangers. Individuals can also have a low level of social inhibition, but certain situations may generally cause people to be more or less inhibited. Social inhibition can sometimes be reduced by the short-term use of drugs including alcohol or benzodiazepines.

Major signs of social inhibition in children are cessation of play, long latencies to approaching the unfamiliar person, signs of fear and negative affect, and security seeking. Also in high level cases of social inhibition, other social disorders can emerge through development, such as social anxiety disorder and social phobia.

Background

Social inhibition can range from normal reactions to social situations to a pathological level, associated with psychological disorders like social anxiety or social phobia. Life events are important and are related to our well-being and inhibition levels. In a lab study, Buck and colleagues examined social inhibition in everyday life, observing how individuals interacted and communicated about different stimuli. In this study, female participants called “senders” viewed twelve emotionally loaded stimuli, while other participants, called “receivers”, had to guess which stimulus was viewed by the senders. The senders were either alone, with a friend, or with a stranger while viewing the slides. The results of the study revealed that being with a stranger had inhibitory effects on communication, whereas being with a friend had facilitative effects with some stimuli and inhibitory effects with others. The results show how anyone can be inhibited in daily life, with strangers or even friends. Inhibition can also be determined by one’s sensitivity to different social cues throughout the day. Gable and colleagues conducted a study in which they examined different events participants recorded at the end of their day. Participants were also measured on the behavioural activation system and the behavioural inhibition system. The results revealed that individuals with more sensitivity on the behavioural inhibition system reported more negative effects from daily events.

Expression can also be inhibited or suppressed because of anxiety about social situations or simple display rules. Yarczower and Daruns’ study of social inhibition of expression defined inhibition of expression as a suppression of one’s facial behaviour in the presence of someone or in a perceived anxiety-provoking situation. They addressed the display rules we all learn as children: we are told what expressions are suitable for what situations, and as age increases we are socialised into not expressing strong facial emotions. However, a reduced facial expression hinders communication, which in turn makes the face a less reliable social cue during social interactions. Friedman and Miller-Herringer extend this work on nonverbal expression by studying individuals who have a greater level of emotional suppression. They state that without proper emotional expression, social interactions can be much more difficult because others may not understand another individual’s emotional state.

That being said, there are four commonly seen irrational cognitive patterns involved in social inhibition. The first pattern centres on self-esteem and perfectionism: individuals inhibit themselves through self-criticism because they want to do everything the “right” way. The second pattern deals with unrealistic approval needs; here individuals want to gain the approval of others and fear rejection if they express too much. In the third pattern, unrealistic labelling of aggressive and assertive behaviour, individuals who inhibit themselves may feel that aggression or assertiveness is bad, believing that if they express these behaviours they will receive a negative label. The last pattern, criticism of others, is a spin-off from the first: such individuals are highly critical of others, much as they are of themselves. Shyness is another factor that contributes to social inhibition. Shyness is associated with low emotional regulation and high negative emotion, and in many cases shy individuals have a greater chance of social inhibition.

Although social inhibition is a common part of life, individuals can also have high levels of inhibition. Social inhibition at higher levels can sometimes be a precursor to disorders such as social anxiety disorder. Essex and colleagues found that some early risk factors may play a role in chronically high inhibition. In this study, mothers, teachers, and the children themselves reported on the child’s behavioural inhibition. The factors found to contribute to social inhibition were female gender, exposure to maternal stress during infancy and the preschool period, and early manifestation of behavioural inhibition. In severe cases, clinical treatment, such as therapy, may be necessary to help with social inhibition or the resulting social disorder.

Over the Lifespan

Social inhibition can develop over a lifespan. Children can be withdrawn, adolescents can feel anxiety in social situations, and adults may have a hard time adjusting to social situations that they have to initiate on their own. The degree of inhibition can change over time and differs between individuals. In many cases, inhibition can lead to other social disorders and phobias.

Infants and Children

In infants and children, social inhibition is characterised by a temperament style in which children respond negatively to, and withdraw from, unfamiliar people, situations and objects. In addition to cessation of play, inhibited children may display long latencies in approaching an unfamiliar person, signs of fear and negative affect, and security seeking. Avoidant behaviour can be seen at a very young age. In one study, Fox and colleagues found that even at four months of age some infants had negative responses to unfamiliar visual and auditory stimuli. The study was longitudinal; follow-ups revealed that half the infants who had high negative responses continued to show behavioural inhibition through the age of two. Fox’s longitudinal study reported that the expression of behavioural inhibition showed a small degree of continuity: over time, the toddlers who were quiet and restrained continued the trend into childhood by being cautious, quiet, and socially withdrawn, while the uninhibited control group of the same ages continued to interact easily with unfamiliar people and situations. There is also a link between inhibition in childhood and social disorders in adolescence and adulthood. In a longitudinal study following children from ages two to thirteen, Schwartz and Kagan found that sixty-one percent of teens who had been inhibited as toddlers reported social anxiety symptoms as adolescents, compared with twenty-seven percent of adolescents who were uninhibited in earlier life. However, not every child who shows withdrawn or inhibited behaviour will be inhibited as an adolescent or develop a social disorder.

The caregiver is not solely responsible for inhibition in children, although in some cases caregiving can be a factor. Caregivers can affect a child’s inhibition levels by exposing the child to maternal stress during infancy and the preschool period. In other situations, the child may simply show early manifestation of behavioural inhibition. There is no parenting style that researchers agree is best for combating social inhibition. Park and Crinic say that a sensitive, accepting, overprotective parenting style is best for reducing the negative behaviours because it allows the child to be themselves without judgement. Kagan, however, hypothesized that firm parenting styles are better suited to socially inhibited children. Researchers supporting sensitive parenting believe that too firm a parenting style sends children the message that they need to change.

Adolescence

Social inhibition has been widely studied in children; research on how it develops through adolescence and adulthood is less prevalent, although anxiety-related social problems are most commonly seen in adolescents. Many of the behavioural traits are the same in adolescence as in childhood: withdrawal from unfamiliar people, situations and objects. However, research has shown that adolescents are more aware of their social situations and are more likely to be inhibited in public settings. Researchers found younger individuals more likely to differentiate between public and private settings when asked about potentially embarrassing issues. It is also thought that inhibition is in many ways addressed in childhood and adolescence simply because schools facilitate interactions with others; in adulthood, the same facilitating circumstances may not occur unless the individual prompts them. Gest states that adults do not have as many casual peer interactions and friendship opportunities to guide and support relationships unless they facilitate them on their own. Adolescent research has also shown that social inhibition is associated with a more negative emotional state in young men than in young women.

This is in contrast to a study that measured inhibition levels through self-reports from adolescents and their parents. West and Newman found that young American Indian women and their parents reported higher levels of inhibition than young American Indian men; the parental reports also predicted social anxiety in young American Indian women more than in young American Indian men. In the same study, relationship development with peers was investigated over time. West and Newman stated that low levels of behavioural inhibition were associated with early social and school situations that were related to greater levels of socially mediated anxiety, especially fear of negative evaluation by peers. The study then speculates that adolescents and children who have a generally positive social experience may be more aware of the status of these positive relationships, and therefore more anxious about failure in their social domain. Other studies have also discussed how, in many cases, early behavioural inhibition is a risk factor for the development of chronically high school-age inhibition and possible social anxiety disorder. Although social inhibition can be a predictor of other social disorders, only a small portion of adolescents who develop an anxiety disorder also have a history of inhibition in childhood.

Besic and Kerr believe that appearance can be a factor in social inhibition. In their study, they hypothesized that one way to handle difficult situations involving behavioural inhibition is to present an off-putting appearance. They examined “radical” crowds, such as those labelled as goths and punks, and whether their appearance fulfilled a function for their inhibition. They state that a radical style could be used to draw social boundaries and relieve such individuals of pressures or expectations to interact in unfamiliar situations with unfamiliar peers. Another possibility is that an individual may be self-handicapping to ensure that they will not have to interact with unfamiliar peers. The results revealed that radicals were significantly more inhibited than other groups. However, there are inhibited individuals in other social classifications as well: the most inhibited radical was no more inhibited than the most inhibited individual in other groups.

Adulthood

Adult cases of social inhibition are hard to come by, simply because many see it as something that develops earlier in life. Although research is lacking, developmental considerations suggest there may be a stronger association between behavioural inhibition and peer relations in adulthood. One researcher suggests this lack of information may be because adults are not put in as many socially interactive situations that would guide them through an interaction. Adults have an increased responsibility to initiate or structure their own social peer relationships, and this is where social inhibition could play a more problematic role in adulthood than in childhood. One study that did contribute to adult research used questionnaires to study both clinical and nonclinical adults. As in adolescence, behavioural inhibition was found to be associated with anxiety disorders in adulthood; in addition, the study found that childhood inhibition was specifically a factor in a lifetime diagnosis of social phobia. Gest also measured adult peer relations and the degree to which participants had a positive and active social life: for example, whether they participated in recreational activities with others, how often they met with others, and whether they had any close confiding relationships. Participants were rated on a 5-point scale for each peer relationship they disclosed. The results revealed that social inhibition had no relationship to popularity; however, it was correlated with peer relations in both genders and with emotional stress in men only.

A similar study found that some shy men had a low occupational status at age forty because they entered their careers later in life. However, another researcher has commented on this with the example that remaining at home longer may allow young adults to accumulate educational and financial resources before moving out and becoming more independent. Additionally, it was found that young adults who were inhibited as children were less likely to move away from their families. There is also some discussion of inhibition across generations and of children mirroring their parents: results indicated that children whose birth mothers met criteria for a diagnosis of social phobia showed elevated levels of observed behavioural inhibition. Social inhibition can also decrease with age, due to cognitive deficits that can occur in old age. Age-related deficits affect older adults’ ability to differentiate between public and private settings when discussing potentially embarrassing issues, leading them to discuss personal issues in inappropriately public situations. This suggests that deficits in inhibitory ability that lead to such inappropriateness are out of the individual’s control.

In Different Contexts

In Schools

Schools can be a place for children to engage in varied social interactions; however, they can also uncover social and school adjustment problems. Coplan claims that Western children with inhibition problems may be at a higher risk of developmental problems in school. Although social inhibition may be a predictor of social and school adjustment problems in children, Chen argues that the effect of social inhibition on school adjustment differs between Western cultures and Chinese culture. Chen found that in Chinese children, behavioural inhibition was associated with greater peer liking, social interaction, positive school attitudes, and school competence, and with fewer later learning problems, which differs from findings in Western cultures. In other studies, researchers such as Oysterman found difficulties in adjustment in children experiencing inhibition. In Western cultures these difficulties are seen more often because social assertiveness and self-expression are emphasised as valued traits in development; in other cultures, by contrast, children are sometimes expected to be inhibited. Despite these differences, there are also similarities in terms of gender. Boys were more antagonistic in peer interaction and seemed to have more learning problems in school. Girls were more cooperative in peer interaction, had a more positive outlook on school, formed more affiliations with peers, and performed more competently in school.

Other researchers, such as Geng, have looked to understand social inhibition, effortful control, and attention in school. In Geng’s study, gender came into play, with highly socially inhibited girls being extremely aware of their surroundings, possibly paying too much attention to potentially anxiety-provoking situations. A large body of research has linked social inhibition to other anxiety disorders. However, Degnan and colleagues believe that being able to regulate one’s effortful control may serve to reduce the anxiety that comes from inhibition. Nesdale and Dalton investigated inhibition of social group norms in school children between the ages of seven and nine. In schools, social in-groups and out-groups become more prevalent as children increase in age. The study created different in-groups, or exclusive groups, and out-groups, or inclusive groups. The results showed that students in the inclusive group liked all students more, while students in the exclusive group liked their own group over other groups. This work could help in the future to facilitate school peer groups more effectively.

In the Workplace

Social inhibition can manifest in all social situations and relationships. One place where the effects of social inhibition can be seen is the workplace. Research has shown that social inhibition can affect the way one completes a given amount of work. In one experiment, participants completed a task in a laboratory setting, and the researchers varied whether another individual was present in the room while participants attempted the task. The results showed that when another individual was present, the person completing the experimental task decreased their body movements, hand movements, and vocalisation, even though the other person did not speak to or even look at the participant. This suggests that the mere presence of another person in a social situation can inhibit an individual. However, although the individual completing the experimental task was socially inhibited by the presence of another person in the laboratory, there were no significant links between this social inhibition and improved performance on the task. These findings suggest that an individual may inhibit themselves in the workplace if another person is in the room, but such inhibition does not mean that the inhibited individual is performing their assigned duties with more accuracy or focus.

In Psychological Disorders

Depression

Links between social inhibition and depression can be found in individuals who displayed socially inhibited behaviours during childhood. Researchers from the UK conducted a study in an attempt to explain possible links between social inhibition in infancy and later signs of depression. The researchers based their study on previous literature acknowledging that there are social and non-social forms of inhibition, and that social inhibition is significantly related to early social fears. They hypothesized that social inhibition in childhood would be linked to higher levels of depression in later years. Participants completed a number of questionnaires about their experiences of social inhibition in childhood and their current levels of depression. Results showed a significant relationship between depression and recalled social fears, or social inhibitions, during childhood. Furthermore, the researchers related their findings to another study, conducted by Muris et al. in 2001, which found an association between social inhibition and depression in adolescents. That study compared adolescents who were not inhibited to those who were, and found that:

“adolescents experiencing high levels of behavioral inhibition were more depressed than their counterparts who experienced intermediate or low levels of behavioral inhibition”.

Another study set out to examine the link between social inhibition and depression, on the basis that social inhibition (which the authors describe as part of type D, or distressed, personality) is related to emotional distress. The researchers explain that a major factor related to social inhibition is the inhibited individual not expressing their emotions and feelings, a factor that they cite in relation to the link between social inhibition and depression. Overall, the results of the study show that social inhibition (as a facet of type D personality) predicts depression, regardless of the individual’s baseline depression level. Significantly, this study was conducted with young, healthy adults, rather than with self-help groups or individuals with a pre-existing medical or psychological condition.

Fear

Social inhibition can be affected by fear responses in the early “toddler years” of life. In 2011, researchers Elizabeth J. Kiel and Kristin A. Buss examined “how attention toward an angry-looking gorilla mask in a room with alternative opportunities for play in 24-month-old toddlers predicted social inhibition when children entered kindergarten”. In the study, the researchers looked specifically at the toddlers’ attention to threat and their fear of novelty in other situations. They paid special attention to these two factors because previous research suggested that “sustained attention to putatively threatening novelty relates to anxious behavior in the first 2 years of life”. Earlier research by Buss and colleagues had also found that individual responses to novelty during early childhood can be related to later social inhibition. These results link fear responses, particularly in children, to social inhibition, mainly inhibition that manifests later in the individual’s life. Overall, the researchers based their experiment on the notion that the more time a toddler spends attending to a novel potential threat, the greater the chance that they will experience issues with the regulation of distress, which can predict anxious behaviour such as social inhibition.

To further connect and understand the links between fear and later social inhibition, the researchers conducted a study with 24-month-old toddlers. They placed the toddlers in a room called the “risk room”, set up with a number of play areas for the toddlers to interact with, one of which was a potentially threatening stimulus: an angry-looking gorilla mask. The children were left alone, with only their primary caregiver sitting in the corner of the room, to explore the play areas for three minutes; the experimenter then returned and instructed the toddler to interact with each of the play areas. The purpose of this was to allow other experimenters to code the toddler’s reactions to the surrounding stimuli, paying special attention to attention to threat, proximity to the threat, and fear of novelty.

The results of this study indicate that attention to threat (the attention given by the toddler to the feared stimulus) predicts social inhibition in kindergarten. Further, if the child approaches the feared stimulus, the relation to later social inhibition is not significant; when a child’s behaviour is to keep more than two feet away from the threatening stimulus, that behaviour can be seen as linked to later social inhibition. Another important factor the researchers found in predicting social inhibition is the child paying a significant amount of attention to a feared or threatening stimulus in the presence of other, enjoyable activities. If the child’s attention to the threatening stimulus persists even when other enjoyable activities are available, the link to later social inhibition is stronger, because “toddler-aged children have increased motoric skill and independence in exploring their environments; so they are capable of using more sophisticated distraction techniques, such as involvement with other activities”.

In another study looking at social inhibition and fear, the researchers distinguished between different forms of inhibition. Focusing mainly on behavioural inhibition, they separated the category into two subcategories: social behavioural inhibition and non-social behavioural inhibition. The researchers cite an experiment by Majdandzic and Van den Boom that used a laboratory setting to attempt to elicit fear in children, using both social and non-social stimuli. What Majdandzic and Van den Boom found was variability in the way fear was elicited depending on whether social or non-social stimuli were used. Essentially, the study found a correlation between social stimuli and fear expressions in children, whereas non-social stimuli were not correlated with fear. This can be taken as evidence of social inhibition, given that it was the social stimuli that resulted in fear expressions in children.

The researchers of the current study took the results from the Majdandzic and Van den Boom study and expanded on that work by looking at variability in fear expressions in both socially inhibited and non-socially inhibited children. What they found was that socially inhibited children mainly showed effects such as shyness, inhibition with peers, adults, and in performance situations, as well as social phobia and separation anxiety. The stronger link with fear reactions came mainly from children who were non-socially behaviourally inhibited. While these results go against previous findings, the researchers were eager to stipulate that “the normative development of fear in children have indicated that many specific fears (e.g. fear of animals) decline with age, whereas social fears increase as children get older”.

Social Phobia

Social inhibition is linked to social phobia insofar as social inhibition during childhood can be a contributing factor to developing social phobia later in life. While social inhibition is also linked to social anxiety, it is important to point out the difference between social anxiety and social phobia: social anxiety is marked by a tendency to feel high anxiety before a social interaction, without the avoidance of the social activity that is associated with social phobia. Social phobia and social inhibition are linked in a few different ways, one being physiological. When experiencing extreme levels of inhibition, an individual can suffer from symptoms such as an accelerated heart rate, increased morning salivary cortisol levels, and muscle tension in the vocal cords. These symptoms are also reported by those with social phobia, which indicates that both social inhibition and social phobia engage the sympathetic nervous system when the individual encounters a stressful situation.

Further, it is suggested throughout the literature that social inhibition during childhood is linked to later social phobia. Beyond that, research has indicated that continuity in inhibition plays an important role in the later development of social phobia; continuity of social inhibition means experiencing social inhibition continuously over a number of years. The research describes work with young teenagers which found that teenagers who had been classified as inhibited 12 years earlier were significantly more likely to develop social phobia than those who had not been classified as inhibited. This research pertains to the link between social inhibition and generalised social phobia, rather than specific phobias. Some research on continuity in social inhibition offers reasoning as to why the inhibition may continue long enough to predict social phobia. Researchers have suggested that unsatisfactory early childhood relationships can influence the child to respond to situations in certain inhibitory ways; this is often associated with poor self-evaluation in the child, which can lead to increased social inhibition and social phobia. Also, if a child is neglected or rejected by their peers, rather than by their caregiver, they often develop a sense of social failure, which frequently extends into social inhibition and later social phobia. The link between social inhibition and social phobia is somewhat exclusive: when testing for a possible link between non-social inhibition and social phobia, no predictive elements were found. It is specifically social inhibition that is linked to social phobia.

The research also suggests that social inhibition can be divided among different kinds of social fears, or that different patterns of inhibition can be seen in individuals. The researchers suggest that certain patterns, or certain social fears, can be better predictors of social phobia than others. Mainly, they suggest that there can be different patterns of social inhibition in relation to an unfamiliar object or encounter. These specific patterns should be examined in conjunction with motivation and the psychophysiological reaction to the object or encounter, to determine which patterns are the better predictors of social phobia.

Another study aiming to examine the link between social inhibition and social phobia found that social phobia is linked to the socially phobic individual’s ability to recall their own encounters with social inhibition during childhood. The socially phobic participants were able to recall social and school fears from their childhood, and they were also able to recall sensory-processing sensitivity, indicating that these participants could recall having increased sensitivity to the situations and behaviours around them.

Another study explains that social phobia itself can manifest in a few different ways. The study aimed at understanding the link between social inhibition and social phobia, as well as depression in social phobia. What the study found was an important link between the severity of social inhibition during childhood and the severity of social phobia and its associated factors in later years. Severe social inhibition during childhood can be related to lifetime social phobia. Further, the researchers point out that childhood inhibition is significantly linked to avoidant personality disorder in social phobia, as well as to major depressive disorder in social phobia across the individual’s lifetime. A major suggestion arising from the results is that while inhibition can be a general predictor of risk factors related to social phobia, it may not be a specific predictor of social phobia alone.

Social Anxiety Disorder

Social anxiety disorder is characterised by a fear of scrutiny or disapproval from others; individuals believe this negative reaction will bring about rejection. Individuals with social anxiety disorder have stronger anxious feelings over a long period of time and are anxious more often. In many cases, researchers have found that social inhibition can be a factor in developing other disorders such as social anxiety disorder. Being inhibited does not mean that an individual will develop another disorder; however, Clauss and colleagues conducted a study to measure the association between behavioural inhibition and social anxiety disorder. The study found that about 15% of all children show behavioural inhibition and that about half of those children will eventually develop social anxiety disorder. This is why behavioural inhibition is seen as a major risk factor.

That being said, Lim and colleagues researched the differences between early and late onset of social anxiety disorder and its relation to social inhibition. Over the course of their study, they found that those diagnosed with early onset had complaints beyond social anxiety symptoms; early-onset individuals frequently had more severe symptoms and higher levels of behavioural inhibition. Additionally, behavioural inhibition was more severe in social and school situations only in the early-onset cases. Lorian and Grisham researched the relationship between behavioural inhibition, risk-avoidance, and social anxiety symptoms. They found that all three factors correlated with one another and that risk avoidance is potentially a mechanism linked to anxiety pathology.

Reduction

Alcohol Consumption

Social inhibition can be lowered by a few different factors, one of them being alcohol. Alcohol consumption lowers inhibitions in both men and women. Social inhibitions generally act to control or shape the way one conducts oneself in a social setting; by lowering inhibitions, alcohol can work to increase social behaviours, either negatively or positively. Importantly, the higher the dose of alcohol, the greater its impairment of inhibitory control.

By lowering inhibitions, alcohol can give rise to social behaviours such as aggression, self-disclosure, and violent acts. Researchers have suggested that the situational cues used to inhibit social behaviours are not perceived the same way after someone consumes enough alcohol to become drunk:

“interacting parties who are impaired by alcohol are less likely to see justifications for the other’s behavior, are thus more likely to interpret the behavior as arbitrary and provocative, and then, having less access to inhibiting cues and behavioral standards, are more likely to react extremely.”

This idea of increased extreme social behaviours is believed to result from lowered inhibitions after consuming alcohol. Alcohol can lower inhibitions for a number of reasons: it can reduce one’s self-awareness, impair perceptual and cognitive functioning, allow instigator pressures to have more influence over an individual, and reduce one’s ability to read inhibitory social cues and standards of conduct.

When examining the effects of alcohol consumption on social inhibition, researchers found that, after being provoked, sober individuals used inhibiting cues, such as the innocence of the instigator and the severity of the retaliation, to control their response to the aggressive provocation. However, intoxicated individuals did not have these same inhibitions and, as a result, exhibited more extreme retaliatory aggression to the provocation without processing information they would normally consider about the situation. On average, drunken individuals exhibited more aggression, self-disclosure, risk-taking behaviours, and laughter than sober individuals. Extreme behaviours are less common in sober individuals because they are able to read inhibitory cues and social conduct norms that drunken individuals are not as inclined to consider. These negative social behaviours, then, are a result of lowered social inhibitions.

Alcohol consumption can also lower inhibitions in a positive way. Research has examined the way an intoxicated person is more inclined to be helpful. The researchers shared the view that alcohol lowers inhibitions and allows for more extreme behaviours, but tested whether this would hold in more socially acceptable situations, such as helping another person. They acknowledged that, generally, an impulse to help another is initiated but inhibitions then cause the potential helper to weigh the factors going into the decision to help or not to help, such as lost time, boredom, fatigue, monetary costs, and the possibility of personal harm. The researchers suggest that while one may be inhibited, and therefore less likely to offer help when completely sober, consuming alcohol may impair inhibitory functioning enough to actually increase helping. While this differs from the socially negative behaviours seen after social inhibitions have been lowered, it is consistent with the idea that alcohol consumption can lower inhibitions and, as a result, produce more socially extreme behaviours compared with a sober counterpart.

Alcohol consumption can lower social inhibitions in both men and women, producing social behaviours not typical of the individual’s day-to-day sober life. For example, in social settings women tend to be uncomfortable with sexual acts and provocations, as well as in settings that are generally male dominated, such as strip clubs or bars. However, consumption of alcohol has been seen to lower these inhibitions, making women feel freer and more ready to participate in events and behaviours that they would normally feel inhibited from joining if sober. As an example, women participating in bachelorette parties generally consume copious amounts of alcohol for the event; as a result, they feel less inhibited and are more likely to engage in behaviour they would normally view as deviant or inappropriate. In an examination of bachelorette parties, it was found that when those attending consumed only a couple of drinks, behaviour minimally reflected any alcohol consumption, suggesting that the party guests were still socially inhibited and less inclined to perform deviant behaviours. Similarly, “levels of intoxication were correlated with the atmosphere of the party, such that parties with little or no alcohol were perceived as less ‘wild’ than parties [with] a lot of alcohol consumption.” Conceivably, the bachelorette parties show tendencies toward “wild” behaviour after excessive alcohol consumption, which consequently lowers the consumers’ inhibitions.

When surveyed, a number of women who had attended a bachelorette party, or had one held in their honour, in the past year reported that their behaviour under the influence of alcohol was different from their behaviour when sober. One party guest reported:

“People drink … to lose inhibitions and stuff that is done… I would never do sober. It lowers inhibitions – that is the main point of it.”

These reports suggest that “alcohol was used to lower inhibitions about being too sexual, about the risk of being perceived as promiscuous, or about being sexual in public. Women commented that they felt freer to talk about sex while under the influence of alcohol, to flirt with male strangers, or to dance with a male stripper.” The research collected on women and their alcohol consumption in these settings provides examples of the reduction of social inhibitions in relation to excess alcohol consumption.

Power

Social inhibitions can also be reduced by means unrelated to any substance. One such means is the attainment of power. Research has examined how having elevated or reduced power affects social interactions and well-being in social situations, and has shown a relationship between elevated power and decreased social inhibition. This relationship between those with elevated power and those with reduced power can be seen in all forms of social interaction, and is marked by elevated-power individuals often having access to resources that reduced-power individuals do not. Decreased social inhibition is seen in those with elevated power for two main reasons: first, they have more access to resources, providing them with comfort and stability; second, their status as high-power individuals often gives them a sense of being above social consequences, allowing them to act in ways that a reduced-power individual may not.

Elevated-power individuals experience reduced social inhibition in various ways: they are more likely to approach, rather than avoid, another person; to initiate physical contact with another person; to enter into their personal space; and to indicate interest in intimacy. High-power people also tend to be socially disinhibited when it comes to sexual behaviour and sexual concepts. Consistent with this expectation, a study of male and female participants found that when both felt equally powerful they tended to interact socially with one another in a disinhibited manner.

Further, the research suggests that, as a result of their reduced social inhibition, powerful individuals behave in ways that fit their personality traits in social situations in which they feel powerful. Similarly, a laboratory study found that when one person in a group feels powerful, their reduced social inhibition can result in decreased manners: when offered food, the powerful individual is more likely to take more than the other individuals in the room. This can be seen as the powerful individual exhibiting reduced social inhibition, paying less attention to common social niceties such as manners and sharing.

Increase

Power

Certain factors can increase social inhibition in individuals. Increased inhibition can occur in different situations and for different reasons. One major factor that contributes to the increase of social inhibition is power. Reduced power is linked to an array of negative outcomes, one of which is increased social inhibition. Power, in this instance, can be defined as a fundamental factor in social relationships that is central to interactions, influencing behaviour and emotional display. Further, power is such an essential factor in social relationships because it determines who is the giver and who is the receiver in the exchange of rewards and resources. Power is present in all social relationships, not just typical hierarchical establishments such as employment or school settings. Power, then, is related to increased social inhibition when an individual feels they are in a powerless or diminished-power position. Those deemed high in power are generally rich in resources and freedom and show decreased levels of social inhibition, whereas those deemed low in power are generally low in resources, constrained, and prone to experiencing increased social inhibition.

Research shows that individuals who are considered low in power experience more social threats and punishments, and generally have less access to social resources. As a result, these individuals are prone to developing more sensitivity to criticism from others and are more likely to accept constraint from others. These factors contribute to increasing social inhibition in those individuals. Similarly, studies have shown that the absence of power can heighten the processes associated with social inhibition. Experiments on the interaction between power and inhibition have shown that when participants are in a situation where they perceive more punishments and threats, their cognition and behaviour show more signs of affect related to social inhibition. Environments which emphasise the differences between the powerful and the powerless can lead to the social inhibition of the power-reduced individuals as a response to their social interactions with the heightened-power individuals.

Some of the socially inhibited behaviours that a low-power individual will experience in these social situations are embarrassment and fear, and they may even go on to feel guilt, sadness, and shame. Further, low-power individuals can be seen inhibiting themselves in ways that ultimately favour the high-power individuals. These include withholding input on ideas, hesitating in normal speech, and even increasing their facial muscle control in order to keep themselves from displaying emotions. When low-power individuals are in a social situation with a high-power individual, they also commonly exhibit social inhibition by constricting their posture and reducing their gestures. Researchers have generalised these observations of interaction between a high-power individual and low-power individuals to suggest that such expressions of social inhibition carry over into all areas of social interaction for the low-power individual. That is to say, low-power individuals will not only exhibit social inhibition when in the presence of a high-power individual; they will continue to be socially inhibited in all social aspects of their lives as a result of their low-power status. Further, low-power individuals tend to devote increased attention to the actions and behaviours of others.

Biological Factors

Another possible explanation for increased social inhibition involves biological factors. A study of brain activity in those who rate high on a scale of social inhibition identified a number of brain areas related to the heightened inhibition. The researchers aimed to find a link between socially inhibited individuals and over-activation of the cortical social brain network. They did this by examining the brain activity of individuals who rate high in social inhibition as they responded to video clips of potentially threatening facial and bodily expressions. They found that those who rate high in social inhibition show an overactive orbitofrontal cortex, left temporo-parietal junction, and right extrastriate body area: when the threat-related material was presented, these areas showed increased activity compared to those who do not rate high for social inhibition. The researchers speculate that, in this instance, hyperactivity in these brain structures does not mean better functioning. Further, “the orbitofrontal cortex is connected with areas that underlie emotional function and empathy”, which relates to one’s ability to simulate how another person feels in one’s own facial displays. The over-activity and decreased function of these brain structures can affect individuals by increasing social inhibition and behaviours related to it.

Personality Traits

Further, there is speculation that social inhibition can also be increased by an individual’s personality type and the behaviours those individuals inherently display. Namely, those who are dependent and reassurance-seeking are more likely to display increased social inhibition.

Clinical Levels

Although social inhibition can occur as part of ordinary social situations, a chronically high level of social inhibition may lead some individuals to develop other social or anxiety disorders that need to be handled clinically. Clinical levels of social inhibition can be measured through childhood, adolescence, and adulthood. Social inhibition can be a precursor to other social disorders that develop in adolescence or adulthood.

Measures

Diagnosing social inhibition has many implications; however, there are many cost-efficient ways to measure and treat it. One measure that has reliably assessed the traits of social inhibition is the seven-item inhibition subscale of the Type D Scale-14. Another measure is the Behavioural Inhibition Observation System (BIOS), which in clinical use is completed for children by parents, teachers, and clinicians. Other scales are the:

  • Behavioural Inhibition Questionnaire (BIQ);
  • Behavioural Inhibition Instrument (BII);
  • Behavioural Inhibition Scale (BIS);
  • Preschool Behavioural Inhibition Scale (P-BIS); and
  • Behavioural Inhibition Scale for children ages 3-6.
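As a rough illustration of how Likert-type inhibition subscales such as the seven-item Type D Scale-14 subscale mentioned above are typically totalled, the sketch below sums seven items with one reverse-keyed item. The keying, response range, and example responses are assumptions for illustration, not the published scoring key of any of these instruments.

```python
# Hypothetical scoring sketch for a 7-item Likert inhibition subscale.
# Item keying and response range are illustrative, not a published key.

def score_inhibition(responses, reverse_keyed=(0,), max_score=4):
    """Sum 0..max_score Likert responses, flipping reverse-keyed items."""
    if len(responses) != 7:
        raise ValueError("expected 7 item responses")
    total = 0
    for i, r in enumerate(responses):
        if not 0 <= r <= max_score:
            raise ValueError(f"item {i} response out of range")
        # Reverse-keyed items are flipped so a high total always
        # means higher inhibition.
        total += (max_score - r) if i in reverse_keyed else r
    return total

# Example: mixed responses, with the first item reverse-keyed
print(score_inhibition([1, 2, 3, 2, 2, 1, 3]))  # 16
```

A real scoring key would also specify a clinical cut-off; none is assumed here.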

There are also versions of these scales designed specifically for parents, teachers, or the child (or another potentially inhibited individual) to complete. These measures are sometimes grouped together; in many cases the Behavioural Inhibition System scale and the Behavioural Activation System scale are used together. These two measures are the most widely used, and together they comprise behavioural inhibition and behavioural activation scales that deal with reward response and fun-seeking. The Behavioural Paradigm System is an observation system that allows systematic measurement of behavioural inhibition in natural environments. With this system, researchers observe cessation of play and vocalisation, long latencies in approaching an unfamiliar person, signs of fear and negative affect, and security-seeking in environments such as classrooms, playgrounds, and homes. This paradigm was followed by many adaptations, one being the adaptation of the Observational Paradigm: in a further study by Ballespi and colleagues, the paradigm was changed to be more suitable for a school environment. The adapted paradigm met three important criteria: the tests were suitable for a school environment, the test materials could be transported easily, and signs of behavioural inhibition could potentially be observed in a short period of time.

Ballespi and colleagues discussed one of the most recent measurement systems, the Behavioural Inhibition Observation System, which allows clinicians to take a quick measure of behavioural inhibition. The system is used during the first meeting with the child, in which the child is exposed to a strange, unfamiliar situation; the scale is then completed after the therapist has had time to observe the child in an interview setting. Researchers want to find a true measure of inhibition, but this is difficult because observations differ: a parent or teacher observes the child over long periods in several natural situations, yet rates the child’s behavioural inhibition largely on the ideas they have formed about the child, whereas the clinician lacks this background and bases a first measure on observation alone. In effect, clinicians measure state while parents and teachers measure traits. This is where the differences in measurement arise; however, after several visits the measures of clinicians, teachers, and parents become more similar.

Treatments

Treatments used for social inhibition are primarily forms of assertiveness training introduced in therapy. These treatments teach the inhibited individual to express and assert their feelings instead of inhibiting them. Assertiveness training is an important tool for behavioural therapists because it can help with behavioural issues, interpersonal inadequacies, and anxiety in adults. In some cases this training goes by a different name: because assertiveness is sometimes categorised as aggression, it can also be called appropriate expression training.

In one study of assertiveness training, Ludwig and Lazarus identified irrational cognitive patterns that inhibited individuals must deal with, and how to overcome them. The four patterns are self-criticism/perfectionism, unrealistic approval needs, unrealistic labelling of aggressive/assertive behaviour, and criticism of others. Three phases work to combat these irrational cognitive patterns and inhibitory actions in social situations. The phases are meant to be actively practised: the individual receives homework assignments and does role-playing exercises to overcome their inhibitions. The first phase is about talking more. Ludwig states that there cannot simply be an increase in talking; there must also be an increase in expressing and talking about how one feels. The point of this phase is to get the individual talking, no matter how ridiculous or trivial the content may seem. Phase two deals with the responses that come from talking more. When an inhibited individual starts talking more they may become embarrassed; however, with positive reactions from others they learn that being embarrassed about some of the comments made is not devastating, and in turn may talk and act more freely. In addition to the positive feedback, the individual reviews particularly embarrassing moments to assess why they were embarrassed, to help combat those thoughts. If the inhibited person can understand the irrational thoughts, they will eventually feel less embarrassed and act more freely. Role-playing also helps the individual understand different social behaviours, and mirroring is a way some therapists show clients their own behaviour. The last phase deals with additional strategies that can help in social situations, such as expressing disagreement, dealing with interruptions, initiating more conversation topics, and more self-disclosure.
Ludwig and colleagues also make clear that no one should compulsively apply these behavioural techniques in all situations. An individual should not go overboard using them; there are times when initiating certain conversation topics, or talking more, is inappropriate.

Group therapies also use assertiveness in treatment. Hedquist and Weinhold investigated two group counselling strategies with socially anxious and unassertive college students. The first was a behavioural rehearsal group, which aimed to help members learn more effective responses in social situations by rehearsing several difficult ones. The second was a social learning group premised on honesty about everything: any withholding behaviour was seen as dishonest, and every individual had to take responsibility for everything they said. The results of this study showed that both strategies helped significantly in treating the anxiety and unassertiveness.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Social_inhibition >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What are Life Skills?

Introduction

Life skills are abilities for adaptive and positive behaviour that enable humans to deal effectively with the demands and challenges of life.

This concept is also termed psychosocial competency. The subject varies greatly depending on social norms and community expectations, but skills that support well-being and help individuals develop into active and productive members of their communities are considered life skills.

Enumeration and Categorisation

The UNICEF Evaluation Office suggests that “there is no definitive list” of psychosocial skills; nevertheless UNICEF enumerates psychosocial and interpersonal skills that are generally well-being oriented and essential alongside literacy and numeracy skills. Because its meaning shifts across cultures and life positions, the concept is considered elastic in nature. UNICEF does, however, acknowledge the social and emotional life skills identified by the Collaborative for Academic, Social and Emotional Learning (CASEL). Life skills are a product of synthesis: many skills are developed simultaneously through practice, like humour, which allows a person to feel in control of a situation and make it more manageable in perspective. It allows the person to release fears, anger, and stress and achieve a qualitative life.

For example, decision-making often involves critical thinking (“what are my options?”) and values clarification (“what is important to me?”, “how do I feel about this?”). Ultimately, the interplay between the skills is what produces powerful behavioural outcomes, especially where this approach is supported by other strategies.

Life skills can vary from financial literacy, through substance-abuse prevention, to therapeutic techniques to deal with disabilities such as autism.

Core Skills

The World Health Organisation (WHO) in 1999 identified the following core cross-cultural areas of life skills:

  • Decision-making and problem-solving;
  • Creative thinking (see also: lateral thinking) and critical thinking;
  • Communication and interpersonal skills;
  • Self-awareness and empathy;
  • Assertiveness and equanimity; and
  • Resilience and coping with emotions and coping with stress.

UNICEF listed similar skills and related categories in its 2012 report.

Life skills curricula designed for K-12 often emphasise communication and the practical skills needed for successful independent living, as well as serving developmental-disabilities/special-education students with an Individualised Education Programme (IEP).

Various courses based on the WHO’s list are run with the support of UNFPA. In Madhya Pradesh, India, the programme is run with the government to teach these skills through government schools.

Skills for Work and Life

Skills for work and life, known as technical and vocational education and training (TVET), comprise education, training and skills development relating to a wide range of occupational fields, production, services and livelihoods. TVET, as part of lifelong learning, can take place at secondary, post-secondary and tertiary levels, and includes work-based learning and continuing training and professional development which may lead to qualifications. TVET also includes a wide range of skills development opportunities attuned to national and local contexts. Learning to learn and the development of literacy and numeracy skills, transversal skills and citizenship skills are integral components of TVET.

Parenting: A Venue of Life Skills Nourishment

Life skills are often taught in the domain of parenting, either indirectly through the observation and experience of the child, or directly with the purpose of teaching a specific skill. Parenting itself can be considered a set of life skills which can be taught or come naturally to a person. Educating a person in skills for dealing with pregnancy and parenting can also coincide with additional life skills development for the child and enable the parents to guide their children in adulthood.

Many life skills programmes are offered when traditional family structures and healthy relationships have broken down, whether due to parental lapses, divorce, psychological disorders or issues with the children (such as substance abuse or other risky behaviour). For example, the International Labour Organisation teaches life skills to ex-child labourers and at-risk children in Indonesia to help them avoid, and recover from, the worst forms of child labour.

Models: Behaviour Prevention vs. Positive Development

While certain life skills programmes focus on teaching the prevention of certain behaviours, such prevention-focused teaching can be relatively ineffective. Based upon its research, the Family and Youth Services Bureau, a division of the US Department of Health and Human Services, advocates the theory of positive youth development (PYD) as a replacement for these less effective prevention programmes. PYD focuses on the strengths of an individual, as opposed to the older deficit models which tend to focus on “potential” weaknesses that have yet to be shown. The Family and Youth Services Bureau has found that individuals trained in life skills under the positive development model identified themselves with a greater sense of confidence, usefulness, sensitivity and openness than those trained under the preventive model.

What is Functional Analysis (Psychology)?

Introduction

Functional analysis in behavioural psychology is the application of the laws of operant and respondent conditioning to establish the relationships between stimuli and responses.

To establish the function of operant behaviour, one typically examines the “four-term contingency”: first by identifying the motivating operations (EO or AO), then identifying the antecedent or trigger of the behaviour, identifying the behaviour itself as it has been operationalised, and identifying the consequence of the behaviour which continues to maintain it.
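The four-term contingency above can be sketched as a simple record; the field names and the example observation below are illustrative assumptions, not a clinical standard:

```python
# Sketch of the four-term contingency as a data record.
# Field names and the example observation are illustrative only.
from dataclasses import dataclass

@dataclass
class FourTermContingency:
    motivating_operation: str  # the EO or AO in effect
    antecedent: str            # the trigger preceding the behaviour
    behaviour: str             # the operationally defined response
    consequence: str           # the event maintaining the behaviour

# A hypothetical classroom observation
obs = FourTermContingency(
    motivating_operation="long period without attention (EO)",
    antecedent="teacher turns to another student",
    behaviour="calls out loudly",
    consequence="teacher responds (attention)",
)
print(obs.behaviour)  # calls out loudly
```

Collecting many such records per behaviour is what lets an analyst look for the consequence that consistently follows, and so maintains, the response.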

Functional assessment in behaviour analysis employs principles derived from the natural science of behaviour analysis to determine the “reason”, purpose, or motivation for a behaviour. The most robust form of functional assessment is functional analysis, which involves the direct manipulation, using some experimental design (e.g. a multielement design or a reversal design), of various antecedent and consequent events, and measurement of their effects on the behaviour of interest; this is the only method of functional assessment that allows for demonstration of a clear cause of behaviour.
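A minimal sketch of how response rates from a multielement functional analysis might be compared across conditions follows; the condition labels reflect common practice, but the session data and the simple decision rule are invented for illustration:

```python
# Toy comparison of response rates across functional-analysis
# conditions in a multielement design. All data are invented.
from statistics import mean

# Responses per minute observed in each condition's sessions
sessions = {
    "attention":     [4.0, 5.5, 6.0],
    "demand/escape": [1.0, 0.5, 1.5],
    "alone":         [0.5, 0.0, 0.5],
    "play/control":  [0.0, 0.5, 0.0],
}

# Average each condition, then flag the condition with the
# highest mean rate as the likely maintaining function.
means = {cond: mean(rates) for cond, rates in sessions.items()}
likely_function = max(means, key=means.get)
print(likely_function)  # attention
```

In practice, analysts inspect the full session-by-session graph for differentiation between conditions rather than relying on a single summary statistic.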

Applications in Clinical Psychology

Functional analysis and consequence analysis are commonly used in certain types of psychotherapy to better understand, and in some cases change, behaviour. It is particularly common in behavioural therapies such as behavioural activation, although it is also part of Aaron Beck’s cognitive therapy. In addition, functional analysis modified into a behaviour chain analysis is often used in dialectical behaviour therapy.

There are several advantages to using functional analysis over traditional assessment methods. Firstly, behavioural observation is more reliable than traditional self-report methods, because observing the individual from an objective standpoint in their regular environment allows the observer to see both the antecedent and the consequence of the problem behaviour. Secondly, functional analysis allows for the development of behavioural interventions, either antecedent control or consequence control, specifically designed to reduce a problem behaviour. Thirdly, functional analysis is advantageous for interventions with young children or developmentally delayed children with problem behaviours, who may not be able to answer self-report questions about the reasons for their actions.

Despite these benefits, functional analysis also has some disadvantages. The first is that no standard methods for determining function have been established, and meta-analysis shows that different methodologies appear to bias results toward particular functions and are not effective in improving outcomes. Second, Gresham and colleagues (2004), in a meta-analytic review of JABA articles, found that functional assessment did not produce greater effect sizes compared to simple contingency management programmes; however, Gresham et al. combined the three types of functional assessment, of which descriptive assessment and indirect assessment have reliably been found to produce results with limited validity. Third, although functional assessment has been conducted with a variety of populations, much of the current functional assessment research has been limited to children with developmental disabilities.

Professional Organisations

The Association for Behavioural and Cognitive Therapies (ABCT) also has an interest group in behaviour analysis, which focuses on the use of behaviour analysis in the school setting including functional analysis.

Doctoral level behaviour analysts who are psychologists belong to the American Psychological Association’s division 25 – Behaviour analysis. APA offers a diplomate in behavioural psychology and school psychology both of which focus on the use of functional analysis in the school setting.

The World Association for Behaviour Analysis offers a certification for clinical behaviour therapy and behavioural consultation, which covers functional analysis.

The UK Society for Behaviour Analysis also provides a forum for behaviour analysts for accreditation, professional development, continuing education and networking, and serves as an advocacy body in public debate on issues relating to behaviour analysis. The UK-SBA promotes the ethical and effective application of the principles of behaviour and learning to a wide range of areas, including education, rehabilitation and health care, business and the community, and is committed to maintaining the availability of high-quality, evidence-based professional behaviour analysis practice in the UK. The society also promotes and supports the academic field of behaviour analysis within the UK, both in terms of university-based training and research and theoretical development.

What is Self-Regulation Theory?

Introduction

Self-regulation theory (SRT) is a system of conscious personal management that involves the process of guiding one’s own thoughts, behaviours and feelings to reach goals.

Self-regulation consists of several stages and individuals must function as contributors to their own motivation, behaviour and development within a network of reciprocally interacting influences.

Background

Roy Baumeister, one of the leading social psychologists who have studied self-regulation, claims it has four components:

  • Standards of desirable behaviour;
  • Motivation to meet standards;
  • Monitoring of situations and thoughts that precede breaking said standards; and
  • Willpower (internal strength to control urges).

Baumeister, along with other colleagues, developed three models of self-regulation designed to explain its cognitive accessibility: self-regulation as a knowledge structure, as a strength, or as a skill. Studies generally support the strength model: self-regulation draws on a limited resource in the brain, and only a given amount of self-regulation can occur before that resource is depleted.

SRT can be applied to:

  • Impulse control, the management of short-term desires.
    • People with low impulse control are prone to acting on immediate desires.
    • This is one route for such people to find their way to jail as many criminal acts occur in the heat of the moment.
    • For non-violent people it can lead to losing friends through careless outbursts, or financial problems caused by making too many impulsive purchases.
  • The cognitive bias known as illusion of control.
    • To the extent that people are driven by internal goals concerned with the exercise of control over their environment, they will seek to reassert control in conditions of chaos, uncertainty or stress.
    • Failing genuine control, one coping strategy will be to fall back on defensive attributions of control – leading to illusions of control (Fenton-O’Creevy et al., 2003).
  • Goal attainment and motivation.
  • Sickness behaviour.

SRT consists of several stages. First, the patient deliberately monitors their own behaviour and evaluates how this behaviour affects their health. If the desired effect is not realised, the patient changes the behaviour. If the desired effect is realised, the patient reinforces the effect by continuing the behaviour (Kanfer, 1970; 1971; 1980).

Another approach is for the patient to realise a personal health issue and understand the factors involved in that issue. The patient must decide upon an action plan for resolving the health issue. The patient will need to deliberately monitor the results in order to appraise the effects, checking for any necessary changes in the action plan (Leventhal & Nerenz, 1984).

Another factor that can help the patient reach their own goal of personal health is to relate to the patient the following:

  • Help them figure out the personal/community views of the illness;
  • Appraise the risks involved; and
  • Give them potential problem-solving/coping skills.

Brief History and Contributors

Albert Bandura

There have been numerous researchers, psychologists and scientists who have studied self-regulatory processes. Albert Bandura, a cognitive psychologist, made significant contributions on the acquisition of behaviours, leading to social cognitive theory and social learning theory. His work brought together behavioural and cognitive components, from which he concluded that “humans are able to control their behaviour through a process known as self-regulation.” This led to his well-known process comprising self-observation, judgement and self-response. Self-observation (also known as introspection) involves assessing one’s own thoughts and feelings in order to inform and motivate the individual to work towards goal-setting and be influenced by behavioural changes. Judgement involves an individual comparing their performance to their personal or created standards. Lastly, self-response is applied, in which an individual may reward or punish themselves for success or failure in meeting their standard(s). An example of self-response would be rewarding oneself with an extra slice of pie for doing well on an exam.

Dale Schunk

According to Schunk (2012), Lev Vygotsky, a Russian psychologist and a major influence on the rise of constructivism, believed that self-regulation involves the coordination of cognitive processes such as planning, synthesising and formulating concepts (Henderson & Cunningham, 1994); however, such coordination does not proceed independently of the individual’s social environment and culture. In fact, self-regulation includes the gradual internalisation of language and concepts.

Roy Baumeister

As a widely studied theory, SRT was also greatly influenced by the well-known social psychologist Roy Baumeister. He described the ability to self-regulate as limited in capacity, and coined the term ego depletion to describe this limit. The four components of self-regulation theory described by Baumeister are standards of desirable behaviour, motivation to meet standards, monitoring of situations and thoughts that precede breaking standards, and willpower, the internal strength to control urges. In his paper Self-Regulation Failure: An Overview, Baumeister expresses that self-regulation is complex and multifaceted, and lays out his “three ingredients” of self-regulation as a case for self-regulation failure.

Research

Many studies have been done to test different variables regarding self-regulation. Albert Bandura studied self-regulation before, during and after the response. He created the triangle of reciprocal determinism, in which behaviour, environment and the person (cognitive, emotional and physical factors) all influence one another. Bandura concluded that the processes of goal attainment and motivation stem from an equal interaction of self-observation, self-reaction, self-evaluation and self-efficacy.

In addition to Bandura’s work, the psychologists Muraven, Tice and Baumeister conducted a study of self-control as a limited resource. They suggested there were three competing models of self-regulation: as a strength, as a knowledge structure and as a skill. In the strength model, they indicated that self-regulation could be considered a strength because it requires willpower and is thus a limited resource; failure to self-regulate could then be explained by depletion of this resource. In the knowledge-structure model, they theorised that exerting self-control requires a certain amount of knowledge, so, as with any learned technique, failure to self-regulate could be explained by insufficient knowledge. Lastly, the skill model held that self-regulation is built up over time and cannot be diminished; failure to self-regulate would therefore be explained by a lack of skill. They found that self-regulation as a strength is the most feasible model, based on studies suggesting that self-regulation is a limited resource.

DeWall, Baumeister, Gailliot and Maner performed a series of experiments in which participants completed ego-depletion tasks designed to diminish the self-regulatory resource in the brain, which they theorised to be glucose. One task required participants to break a familiar habit: they first read an essay and circled words containing the letter ‘e’, then were asked to break that habit in a second task by circling words containing ‘e’ and/or ‘a’. Following this trial, participants were randomly assigned either to the glucose condition, in which they drank a glass of lemonade made with sugar, or to the control group, given lemonade made with Splenda. They were then asked how likely they would be to help certain people, both kin and non-kin, in hypothetical situations. The researchers found that, excluding kin, people were much less likely to help a person in need if they were in the control group (with Splenda) than if they had replenished their brain’s glucose supply with the lemonade containing real sugar. This study supports the strength model of self-regulation because it suggests that self-regulation draws on a limited resource.

Baumeister and colleagues expanded on this work and identified four components of self-regulation: standards of desirable behaviour, motivation to meet these standards, monitoring of situations and thoughts that precede breaking standards, and willpower.

Applications and Examples

Impulse control in self-regulation involves separating our immediate impulses from our long-term desires. We can plan, evaluate our actions and refrain from doing things we will regret. Research shows that self-regulation is a strength necessary for emotional well-being; violation of one’s deepest values results in feelings of guilt, which undermine well-being. The illusion of control involves people overestimating their own ability to control events: when an event occurs, an individual may feel a sense of control over an outcome that they demonstrably do not influence. This emphasises the importance of the perception of control over life events.

Self-regulated learning is the process of taking control of and evaluating one’s own learning and behaviour. It emphasises control by the individual, who monitors, directs and regulates actions toward goals of information acquisition. In goal attainment, self-regulation is generally described in terms of the four components: standards (the desirable behaviour), motivation (to meet the standards), monitoring (of situations and thoughts that precede breaking standards) and willpower (the internal strength to control urges).

Illness behaviour in self-regulation deals with the tension that arises between holding on to and letting go of important values and goals as they are threatened by disease processes. People with poor self-regulatory skills also tend to struggle in relationships and to have difficulty holding jobs. Sayette (2004) divides failures of self-regulation into two categories: under-regulation and misregulation. Under-regulation occurs when people fail to control themselves, whereas misregulation involves exerting control in a way that does not bring about the desired goal (Sayette, 2004).

Criticisms/Challenges

One challenge of self-regulation is that researchers often struggle with conceptualising and operationalising it (Carver & Scheier, 1990). The system of self-regulation comprises a complex set of functions, including cognition, problem solving, decision making and metacognition.

Ego depletion refers to self-control or willpower drawing from a limited pool of mental resources. When these resources are low, self-control is typically impaired, which may lead to ego depletion. Self-control plays a valuable role in the functioning of the self. The illusion of control involves the overestimation of an individual’s ability to control certain events: it occurs when someone feels a sense of control over outcomes that they may not actually possess. Psychologists have consistently emphasised the importance of perceptions of control over life events. Heider proposed that humans have a strong motive to control their environment.

Reciprocal determinism is a theory proposed by Albert Bandura, stating that a person’s behaviour is influenced both by personal factors and by the social environment. Bandura acknowledges that an individual’s behaviour and personal factors may in turn affect the environment. These influences can involve skills that either under- or overcompensate for the ego and so do not benefit the outcome of the situation.

Recently, Baumeister’s strength model of ego depletion has been criticised in multiple ways. Meta-analyses found little evidence for the strength model of self-regulation or for glucose as the limited resource that is depleted. A pre-registered trial did not find any evidence for ego depletion, although several commentaries have criticised that particular study. In summary, many central assumptions of the strength model of self-regulation appear to need revision; in particular, the view of self-regulation as a limited resource that can be depleted, with glucose as the fuel, seems hardly defensible without major revision.

Conclusion

Self-regulation can be applied to many aspects of everyday life, including social situations, personal health management, impulse control and more. To the extent that the strength model holds, ego-depletion tasks can temporarily tax a person’s self-regulatory capacity. Self-regulatory depletion has been theorised to reduce willingness to help people in need, excluding members of an individual’s kin. Many researchers have contributed to these findings, including Albert Bandura, Roy Baumeister and Robert Wood.

What is Rationalisation (Psychology)?

Introduction

Rationalisation is a defence mechanism (ego defence) in which apparent logical reasons are given to justify behaviour that is motivated by unconscious instinctual impulses.

It is an attempt to find reasons for behaviours, especially one’s own. Rationalisations are used to defend against feelings of guilt, maintain self-respect, and protect oneself from criticism.

Rationalisation happens in two steps:

  • A decision, action or judgement is made for a given reason, or for no (known) reason at all.
  • A rationalisation is performed, constructing a seemingly good or logical reason, as an attempt to justify the act after the fact (for oneself or others).

Rationalisation encourages irrational or unacceptable behaviour, motives, or feelings and often involves ad hoc hypothesising. This process ranges from fully conscious (e.g. presenting an external defence against ridicule from others) to mostly unconscious (e.g. creating a block against internal feelings of guilt or shame). People rationalise for various reasons, sometimes because we think we know ourselves better than we do. The rationalisation may diverge from the original deterministic explanation of the behaviour or feeling in question.

Many conclusions individuals come to do not fall under the definition of rationalisation as the term is denoted above.

Brief History

Quintilian and classical rhetoric used the term colour for the presenting of an action in the most favourable possible perspective. Laurence Sterne in the eighteenth century took up the point, arguing that, were a man to consider his actions, “he will soon find, that such of them, as strong inclination and custom have prompted him to commit, are generally dressed out and painted with all the false beauties [colour] which, a soft and flattering hand can give them”.

DSM Definition

According to the DSM-IV, rationalisation occurs “when the individual deals with emotional conflict or internal or external stressors by concealing the true motivations for their own thoughts, actions, or feelings through the elaboration of reassuring or self serving but incorrect explanations”.

Examples

Individual

  • Rationalisation can be used to avoid admitting disappointment: “I didn’t get the job that I applied for, but I really didn’t want it in the first place.”

Egregious rationalisations intended to deflect blame can also take the form of ad hominem attacks or DARVO (deny, attack, and reverse victim and offender). Some rationalisations take the form of a comparison. Commonly, this is done to lessen the perception of an action’s negative effects, to justify an action, or to excuse culpability:

  • “At least [what occurred] is not as bad as [a worse outcome].”
  • In response to an accusation: “At least I didn’t [worse action than accused action].”
  • As a form of false choice: “Doing [undesirable action] is a lot better than [a worse action].”
  • In response to unfair or abusive behaviour: “I must have done something wrong if they treat me like this.”

Based on anecdotal and survey evidence, John Banja states that the medical field features a disproportionate amount of rationalisation invoked in the “covering up” of mistakes. Common excuses made are:

  • “Why disclose the error? The patient was going to die anyway.”
  • “Telling the family about the error will only make them feel worse.”
  • “It was the patient’s fault. If he wasn’t so (sick, etc.), this error wouldn’t have caused so much harm.”
  • “Well, we did our best. These things happen.”
  • “If we’re not totally and absolutely certain the error caused the harm, we don’t have to tell.”
  • “They’re dead anyway, so there’s no point in blaming anyone.”

In 2018 Muel Kaptein and Martien van Helvoort developed a model, called the Amoralisations Alarm Clock, that covers all existing amoralisations in a logical way. Amoralisations, also called neutralisations, or rationalisations, are defined as justifications and excuses for deviant behaviour. Amoralisations are important explanations for the rise and persistence of deviant behaviour. There exist many different and overlapping techniques of amoralisations.

Collective

  • Collective rationalisations are regularly constructed for acts of aggression, based on exaltation of the in-group and demonisation of the opposite side: as Fritz Perls put it, “Our own soldiers take care of the poor families; the enemy rapes them”.
  • Celebrity culture can be seen as rationalising the gap between rich and poor, powerful and powerless, by offering participation to both dominant and subaltern views of reality.

Criticism

Some scientists criticise the notion that brains are wired to rationalise irrational decisions, arguing that evolution would select against spending nutrients on mental processes that do not improve decisions, such as rationalising decisions that would have been taken anyway. These scientists argue that learning from mistakes would be decreased rather than increased by rationalisation. They also criticise the hypothesis that rationalisation evolved as a means of social manipulation, noting that if rational arguments were deceptive, there would be no evolutionary advantage to breeding individuals who responded to such arguments, rendering them ineffective and incapable of being selected for by evolution.

Psychoanalysis

Ernest Jones introduced the term “rationalisation” to psychoanalysis in 1908, defining it as “the inventing of a reason for an attitude or action the motive of which is not recognized” – an explanation which (though false) could seem plausible. The term (Rationalisierung in German) was taken up almost immediately by Sigmund Freud to account for the explanations offered by patients for their own neurotic symptoms.

As psychoanalysts continued to explore the glossing over of unconscious motives, Otto Fenichel distinguished different sorts of rationalisation: both the justifying of irrational instinctive actions on the grounds that they were reasonable or normatively validated, and the rationalising of defensive structures, whose purpose is unknown, on the grounds that they have some quite different but somehow logical meaning.

Later psychoanalysts are divided between a positive view of rationalisation as a stepping-stone on the way to maturity, and a more destructive view of it as splitting feeling from thought, and so undermining the powers of reason.

Cognitive Dissonance

Leon Festinger highlighted in 1957 the discomfort caused to people by awareness of their inconsistent thought. Rationalisation can reduce such discomfort by explaining away the discrepancy in question, as when people who take up smoking after previously quitting decide that the evidence for it being harmful is less than they previously thought.

What is Personality Psychology?

Introduction

Personality psychology is a branch of psychology that examines personality and its variation among individuals. It aims to show how people are individually different due to psychological forces. Its areas of focus include:

  • Construction of a coherent picture of the individual and their major psychological processes;
  • Investigation of individual psychological differences; and
  • Investigation of human nature and psychological similarities between individuals.

“Personality” is a dynamic and organised set of characteristics possessed by an individual that uniquely influences their environment, cognition, emotions, motivations, and behaviours in various situations. The word personality originates from the Latin persona, which means “mask”.

Personality also pertains to the pattern of thoughts, feelings, social adjustments, and behaviours persistently exhibited over time that strongly influences one’s expectations, self-perceptions, values, and attitudes. Personality also predicts human reactions to other people, problems, and stress. Gordon Allport (1937) described two major ways to study personality: the nomothetic and the idiographic. Nomothetic psychology seeks general laws that can be applied to many different people, such as the principle of self-actualisation or the trait of extraversion. Idiographic psychology is an attempt to understand the unique aspects of a particular individual.

The study of personality has a broad and varied history in psychology, with an abundance of theoretical traditions. The major theories include dispositional (trait) perspective, psychodynamic, humanistic, biological, behaviourist, evolutionary, and social learning perspective. Many researchers and psychologists do not explicitly identify themselves with a certain perspective and instead take an eclectic approach. Research in this area is empirically driven – such as dimensional models, based on multivariate statistics such as factor analysis – or emphasizes theory development, such as that of the psychodynamic theory. There is also a substantial emphasis on the applied field of personality testing. In psychological education and training, the study of the nature of personality and its psychological development is usually reviewed as a prerequisite to courses in abnormal psychology or clinical psychology.

Philosophical Assumptions

Many of the ideas conceptualised by historical and modern personality theorists stem from the basic philosophical assumptions they hold. The study of personality is not a purely empirical discipline, as it brings in elements of art, science, and philosophy to draw general conclusions. The following five categories are some of the most fundamental philosophical assumptions on which theorists disagree:

Freedom versus Determinism: This is the question of whether humans have control over their own behaviour and understand the motives behind it, or whether their behaviour is causally determined by forces beyond their control. Behaviour is categorised as being either unconscious, environmental or biological by various theories.

Heredity (Nature) versus Environment (Nurture): Personality is thought to be determined largely either by genetics and biology, or by environment and experiences. Contemporary research suggests that most personality traits are based on the joint influence of genetics and environment. One of the forerunners in this arena is C. Robert Cloninger, who pioneered the Temperament and Character model.

Uniqueness versus Universality: This question discusses the extent of each human’s individuality (uniqueness) or similarity in nature (universality). Gordon Allport, Abraham Maslow, and Carl Rogers were all advocates of the uniqueness of individuals. Behaviourists and cognitive theorists, in contrast, emphasize the importance of universal principles, such as reinforcement and self-efficacy.

Active versus Reactive: This question explores whether humans primarily act through individual initiative (active) or through outside stimuli (reactive). Traditional behavioural theorists typically believed that humans are passively shaped by their environments, whereas humanistic and cognitive theorists believe that humans play a more active role. Most modern theorists agree that both are important, with aggregate behaviour being primarily determined by traits and situational factors being the primary predictor of behaviour in the short term.

Optimistic versus Pessimistic: Personality theories differ with regard to whether humans are integral in the changing of their own personalities. Theories that place a great deal of emphasis on learning are often more optimistic than those that do not.

Personality Theories

Type Theories

Personality type refers to the psychological classification of people into different classes. Personality types are distinguished from personality traits, which come in different degrees. There are many theories of personality, but each one contains several and sometimes many sub-theories. A “theory of personality” constructed by any given psychologist will contain multiple related sub-theories, often expanding as more psychologists explore the theory. For example, according to type theories, there are two types of people, introverts and extroverts. According to trait theories, introversion and extroversion are part of a continuous dimension with many people in the middle. The idea of psychological types originated in the theoretical work of Carl Jung, specifically in his 1921 book Psychologische Typen (Psychological Types), and of William Marston.

Building on the writings and observations of Jung during World War II, Isabel Briggs Myers and her mother, Katharine C. Briggs, delineated personality types by constructing the Myers-Briggs Type Indicator. This model was later used by David Keirsey with a different understanding from Jung, Briggs and Myers. In the former Soviet Union, Lithuanian Aušra Augustinavičiūtė independently derived a model of personality type from Jung’s called socionics. Later on many other tests were developed on this model e.g. Golden, PTI-Pro and JTI.

A theory could also be considered an “approach” to personality or psychology and is generally referred to as a model. Jung’s is an older and more theoretical approach to personality, accepting extroversion and introversion as basic psychological orientations in connection with two pairs of psychological functions:

  • Perceiving functions: sensing and intuition (trust in concrete, sensory-oriented facts vs. trust in abstract concepts and imagined possibilities).
  • Judging functions: thinking and feeling (basing decisions primarily on logic vs. deciding based on emotion).

Briggs and Myers also added another personality dimension to their type indicator to measure whether a person prefers to use a judging or perceiving function when interacting with the external world. Therefore, they included questions designed to indicate whether someone wishes to come to conclusions (judgement) or to keep options open (perception).

This personality typology has some aspects of a trait theory: it explains people’s behaviour in terms of opposite fixed characteristics. In these more traditional models, the sensing/intuition preference is considered the most basic, dividing people into “N” (intuitive) or “S” (sensing) personality types. An “N” is further assumed to be guided either by thinking or feeling and divided into the “NT” (scientist, engineer) or “NF” (author, humanitarian) temperament. An “S”, in contrast, is assumed to be guided more by the judgment/perception axis and thus divided into the “SJ” (guardian, traditionalist) or “SP” (performer, artisan) temperament. These four are considered basic, with the other two factors in each case (including always extraversion/introversion) less important. Critics of this traditional view have observed that the types can be quite strongly stereotyped by professions (although neither Myers nor Keirsey engaged in such stereotyping in their type descriptions), and thus may arise more from the need to categorise people for purposes of guiding their career choice. This among other objections led to the emergence of the five-factor view, which is less concerned with behaviour under work conditions and more concerned with behaviour in personal and emotional circumstances (The MBTI is not designed to measure the “work self”, but rather what Myers and McCaulley called the “shoes-off self.”).

Type A and Type B personality theory: During the 1950s, Meyer Friedman and his co-workers defined what they called Type A and Type B behaviour patterns. They theorised that intense, hard-driving Type A personalities had a higher risk of coronary disease because they are “stress junkies.” Type B people, on the other hand, tended to be relaxed, less competitive, and lower in risk. There was also a Type AB mixed profile.

John L. Holland’s RIASEC vocational model, commonly referred to as the Holland Codes, stipulates that six personality types lead people to choose their career paths. In this circumplex model, the six types are represented as a hexagon, with adjacent types more closely related than those more distant. The model is widely used in vocational counselling.

Eduard Spranger’s personality model, consisting of six (or, by some revisions, 6+1) basic types of value attitudes, described in his book Types of Men (Lebensformen; Halle (Saale): Niemeyer, 1914; English translation by P.J.W. Pigors – New York: G. E. Stechert Company, 1928).

The Enneagram of Personality, a model of human personality which is principally used as a typology of nine interconnected personality types. It has been criticised as being subject to interpretation, making it difficult to test or validate scientifically.

Perhaps the most ancient attempt at personality psychology is the personality typology outlined by the Indian Buddhist Abhidharma schools. This typology mostly focuses on negative personal traits (greed, hatred, and delusion) and the corresponding positive meditation practices used to counter those traits.

Psychoanalytical Theories

Psychoanalytic theories explain human behaviour in terms of the interaction of various components of personality. Sigmund Freud was the founder of this school of thought. He drew on the physics of his day (thermodynamics) to coin the term psychodynamics. Based on the idea of converting heat into mechanical energy, Freud proposed psychic energy could be converted into behaviour. His theory places central importance on dynamic, unconscious psychological conflicts.

Freud divides human personality into three significant components: the id, ego and super-ego. The id acts according to the pleasure principle, demanding immediate gratification of its needs regardless of external environment; the ego then must emerge in order to realistically meet the wishes and demands of the id in accordance with the outside world, adhering to the reality principle. Finally, the superego (conscience) inculcates moral judgment and societal rules upon the ego, thus forcing the demands of the id to be met not only realistically but morally. The superego is the last function of the personality to develop, and is the embodiment of parental/social ideals established during childhood. According to Freud, personality is based on the dynamic interactions of these three components.

The channelling and release of sexual (libidinal) and aggressive energies, which ensue from the “Eros” (sex; instinctual self-preservation) and “Thanatos” (death; instinctual self-annihilation) drives respectively, are major components of his theory. It is important to note that Freud’s broad understanding of sexuality included all kinds of pleasurable feelings experienced by the human body.

Freud proposed five psychosexual stages of personality development. He believed adult personality is dependent upon early childhood experiences and largely determined by age five. Fixations that develop during the infantile stage contribute to adult personality and behaviour.

One of Sigmund Freud’s earlier associates, Alfred Adler, agreed with Freud that early childhood experiences are important to development, and believed birth order may influence personality development. Adler believed that the oldest child was the individual who would set high achievement goals in order to gain attention lost when the younger siblings were born. He believed the middle children were competitive and ambitious. He reasoned that this behaviour was motivated by the idea of surpassing the firstborn’s achievements. He added, however, that the middle children were often not as concerned about the glory attributed to their behaviour. He also believed the youngest would be more dependent and sociable. Adler finished by surmising that an only child loves being the centre of attention and matures quickly but in the end fails to become independent.

Heinz Kohut thought similarly to Freud’s idea of transference. He used narcissism as a model of how people develop their sense of self. Narcissism is the exaggerated sense of self in which one is believed to exist in order to protect one’s low self-esteem and sense of worthlessness. Kohut had a significant impact on the field by extending Freud’s theory of narcissism and introducing what he called the ‘self-object transferences’ of mirroring and idealisation. In other words, children need to idealize and emotionally “sink into” and identify with the idealised competence of admired figures such as parents or older siblings. They also need to have their self-worth mirrored by these people. Such experiences allow them to thereby learn the self-soothing and other skills that are necessary for the development of a healthy sense of self.

Another important figure in the world of personality theory is Karen Horney. She is credited with the development of “Feminist Psychology”. She disagreed with Freud on some key points, one being that women’s personalities are not just a function of “Penis Envy”, and that girl children have separate and different psychic lives unrelated to how they feel about their fathers or primary male role models. She discusses three basic neurotic concepts: “Basic Anxiety”, “Basic Hostility” and “Basic Evil”. She posits that individuals respond to any anxiety they experience with one of three approaches: moving toward people, moving away from people or moving against people. It is these three that give us varying personality types and characteristics. She also places a high premium on concepts such as the overvaluation of love and of romantic partners.

Behaviourist Theories

Behaviourists explain personality in terms of the effects external stimuli have on behaviour. The approaches used to evaluate the behavioural aspect of personality are known as behavioural theories or learning-conditioning theories. These approaches were a radical shift away from Freudian philosophy. One of the major tenets of this concentration of personality psychology is a strong emphasis on scientific thinking and experimentation. This school of thought was developed by B.F. Skinner, who put forth a model emphasizing the mutual interaction of the person, or “the organism”, with its environment. Skinner believed children do bad things because the behaviour obtains attention that serves as a reinforcer. For example, a child cries because the child’s crying in the past has led to attention. Here, the response is the child crying, and the attention the child receives is the reinforcing consequence. According to this theory, people’s behaviour is formed by processes such as operant conditioning. Skinner put forward a “three-term contingency model” which helped promote analysis of behaviour based on the “Stimulus – Response – Consequence Model”, in which the critical question is: “Under which circumstances or antecedent ‘stimuli’ does the organism engage in a particular behaviour or ‘response’, which in turn produces a particular ‘consequence’?”

Richard Herrnstein extended this theory by accounting for attitudes and traits. An attitude develops as the response strength (the tendency to respond) in the presence of a group of stimuli becomes stable. Rather than describing conditionable traits in non-behavioural language, response strength in a given situation accounts for the environmental portion. Herrnstein also saw traits as having a large genetic or biological component, as do most modern behaviourists.

Ivan Pavlov is another notable influence. He is well known for his classical conditioning experiments involving dogs, which led him to discover the foundation of behaviourism.

Social Cognitive Theories

In cognitive theory, behaviour is explained as guided by cognitions (e.g. expectations) about the world, especially those about other people. Cognitive theories are theories of personality that emphasize cognitive processes, such as thinking and judging.

Albert Bandura, a social learning theorist, suggested that the forces of memory and emotion work in conjunction with environmental influences. Bandura is known mostly for his “Bobo doll experiment”. In these experiments, Bandura videotaped a college student kicking and verbally abusing a Bobo doll. He then showed this video to a class of kindergarten children who were getting ready to go out to play. When they entered the play room, they saw Bobo dolls and some hammers. The people observing these children at play saw a group of children beating the doll. He called this phenomenon observational learning, or modelling.

Early examples of approaches to cognitive style are listed by Baron (1982). These include Witkin’s (1965) work on field dependency, Gardner’s (1953) discovery that people have consistent preferences for the number of categories they use to categorise heterogeneous objects, and Block and Petersen’s (1955) work on confidence in line discrimination judgments. Baron relates early development of cognitive approaches of personality to ego psychology. More central to this field have been:

  • Attributional style theory dealing with different ways in which people explain events in their lives. This approach builds upon locus of control, but extends it by stating that we also need to consider whether people attribute events to stable causes or variable causes, and to global causes or specific causes.

Various scales have been developed to assess both attributional style and locus of control. Locus of control scales include those used by Rotter and later by Duttweiler, the Nowicki and Strickland (1973) Locus of Control Scale for Children, and various locus of control scales specifically in the health domain, most famously Kenneth Wallston and colleagues’ Multidimensional Health Locus of Control Scale. Attributional style has been assessed by the Attributional Style Questionnaire, the Expanded Attributional Style Questionnaire, the Attributions Questionnaire, the Real Events Attributional Style Questionnaire and the Attributional Style Assessment Test.

  • Achievement style theory focuses upon identification of an individual’s locus of control tendency, for example through Rotter’s evaluations, and was found by Cassandra Bolyard Whyte to provide valuable information for improving students’ academic performance. Individuals with internal control tendencies are likely to persist to better academic performance levels, presenting an achievement personality.

Recognition that believing hard work and persistence often results in the attainment of life and academic goals has influenced formal educational and counselling efforts with students of various ages and in various settings since the achievement research of the 1970s. Counselling aimed at encouraging individuals to set ambitious goals and work toward them, while recognising that external factors may intervene, often results in students and employees adopting a more positive achievement style, whatever the setting: higher education, the workplace, or justice programming.

Walter Mischel (1999) has also defended a cognitive approach to personality. His work refers to “Cognitive Affective Units”, and considers factors such as encoding of stimuli, affect, goal-setting, and self-regulatory beliefs. The term “Cognitive Affective Units” shows how his approach considers affect as well as cognition.

Cognitive-Experiential Self-Theory (CEST) is another cognitive personality theory. Developed by Seymour Epstein, CEST argues that humans operate by way of two independent information processing systems: the experiential system and the rational system. The experiential system is fast and emotion-driven. The rational system is slow and logic-driven. These two systems interact to determine our goals, thoughts, and behaviour.

Personal construct psychology (PCP) is a theory of personality developed by the American psychologist George Kelly in the 1950s. Kelly’s fundamental view of personality was that people are like naïve scientists who see the world through a particular lens, based on their uniquely organised systems of construction, which they use to anticipate events. But because people are naïve scientists, they sometimes employ systems for construing the world that are distorted by idiosyncratic experiences not applicable to their current social situation. A system of construction that chronically fails to characterise and/or predict events, and is not appropriately revised to comprehend and predict one’s changing social world, is considered to underlie psychopathology (or mental illness). From the theory, Kelly derived a psychotherapy approach and also a technique called The Repertory Grid Interview that helped his patients to uncover their own “constructs” with minimal intervention or interpretation by the therapist. The repertory grid was later adapted for various uses within organisations, including decision-making and interpretation of other people’s world-views.

Humanistic Theories

Humanistic psychology emphasizes that people have free will and that this plays an active role in determining how they behave. Accordingly, humanistic psychology focuses on subjective experiences of persons as opposed to forced, definitive factors that determine behaviour. Abraham Maslow and Carl Rogers were proponents of this view, which is based on the “phenomenal field” theory of Combs and Snygg (1949). Rogers and Maslow were among a group of psychologists who worked together for a decade to produce the Journal of Humanistic Psychology. This journal was primarily focused on viewing individuals as a whole, rather than focusing solely on separate traits and processes within the individual.

Robert W. White wrote the book The Abnormal Personality that became a standard text on abnormal psychology. He also investigated the human need to strive for positive goals like competence and influence, to counterbalance the emphasis of Freud on the pathological elements of personality development.

Maslow spent much of his time studying what he called “self-actualizing persons”, those who are “fulfilling themselves and doing the best they are capable of doing”. Maslow believed that all who are interested in growth move toward self-actualizing (growth, happiness, satisfaction) views. Many of these people demonstrate a trend in dimensions of their personalities. According to Maslow, the characteristics of self-actualisers include four key dimensions:

  • Awareness – maintaining constant enjoyment and awe of life. These individuals often experienced a “peak experience”. Maslow defined a peak experience as an “intensification of any experience to the degree there is a loss or transcendence of self”. A peak experience is one in which an individual perceives an expansion of themselves, and detects a unity and meaningfulness in life. Intense concentration on an activity one is involved in, such as running a marathon, may invoke a peak experience.
  • Reality and Problem Centred – having a tendency to be concerned with “problems” in their surroundings.
  • Acceptance/Spontaneity – accepting their surroundings and what cannot be changed.
  • Unhostile Sense of Humour/Democratic – not taking kindly to joking about others, which can be viewed as offensive. They have friends of all backgrounds and religions and hold very close friendships.

Maslow and Rogers emphasized a view of the person as an active, creative, experiencing human being who lives in the present and subjectively responds to current perceptions, relationships, and encounters. They disagreed with the dark, pessimistic outlook of those in the Freudian psychoanalysis ranks, instead presenting humanistic theories as positive and optimistic proposals which stress the tendency of the human personality toward growth and self-actualization. This progressing self will remain the centre of its constantly changing world; a world that will help mould the self but not necessarily confine it. Rather, the self has opportunity for maturation based on its encounters with this world. This understanding attempts to reduce the acceptance of hopeless redundancy.

Humanistic therapy typically relies on the client for information about the past and its effect on the present, so the client dictates the type of guidance the therapist may initiate. This allows for an individualised approach to therapy. Rogers found that patients differ in how they respond to other people. Rogers tried to model a particular approach to therapy – he stressed the reflective or empathetic response. This response type takes the client’s viewpoint and reflects back their feeling and the context for it. An example of a reflective response would be, “It seems you are feeling anxious about your upcoming marriage”. This response type seeks to clarify the therapist’s understanding while also encouraging the client to think more deeply and seek to fully understand the feelings they have expressed.

Biopsychological Theories

Biology plays a very important role in the development of personality. The study of the biological level in personality psychology focuses primarily on identifying the role of genetic determinants and how they mould individual personalities. Some of the earliest thinking about possible biological bases of personality grew out of the case of Phineas Gage. In an 1848 accident, a large iron rod was driven through Gage’s head, and his personality apparently changed as a result, although descriptions of these psychological changes are usually exaggerated.

In general, patients with brain damage have been difficult to find and study. In the 1990s, researchers began to use electroencephalography (EEG), positron emission tomography (PET), and more recently functional magnetic resonance imaging (fMRI), which is now the most widely used imaging technique to help localise personality traits in the brain.

Genetic Basis of Personality

Ever since the Human Genome Project allowed for a much more in-depth comprehension of genetics, there has been an ongoing controversy involving heritability, personality traits, and environmental versus genetic influence on personality. The human genome is known to play a role in the development of personality.

Previously, genetic personality studies focused on specific genes correlating to specific personality traits. Today’s view of the gene-personality relationship focuses primarily on the activation and expression of genes related to personality and forms part of what is referred to as behavioural genetics. Genes provide numerous options for varying cells to be expressed; however, the environment determines which of these are activated. Many studies have noted this relationship in varying ways in which our bodies can develop, but the interaction between genes and the shaping of our minds and personality is also relevant to this biological relationship.

DNA-environment interactions are important in the development of personality because this relationship determines what part of the DNA code is actually made into proteins that will become part of an individual. While different choices are made available by the genome, in the end, the environment is the ultimate determinant of what becomes activated. Small changes in DNA in individuals are what leads to the uniqueness of every person as well as differences in looks, abilities, brain functioning, and all the factors that culminate to develop a cohesive personality.

Cattell and Eysenck have proposed that genetics have a powerful influence on personality. A large part of the evidence linking genetics and the environment to personality has come from twin studies. This “twin method” compares levels of similarity in personality between genetically identical (monozygotic) twins and fraternal (dizygotic) twins. One of the first of these twin studies measured 800 pairs of twins, studied numerous personality traits, and determined that identical twins are most similar in their general abilities. Personality similarities were found to be weaker for self-concepts, goals, and interests.

Twin studies have also been important in the creation of the five factor personality model: neuroticism, extraversion, openness, agreeableness, and conscientiousness. Neuroticism and extraversion are the two most widely studied traits. Individuals scoring high in trait extraversion more often display characteristics such as impulsiveness, sociability, and activeness. Individuals scoring high in trait neuroticism are more likely to be moody, anxious, or irritable. Identical twins have higher correlations in personality traits than fraternal twins. One study measuring genetic influence on twins in five different countries found that the correlations for identical twins were .50, while for fraternal twins they were about .20. It is suggested that heredity and environment interact to determine one’s personality.
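The twin correlations above imply a rough quantitative split between genetic and environmental influence. As an illustration only (the text does not name an estimation method, so the use of Falconer’s formula here is an assumption), a minimal sketch in Python:

```python
def falconer_heritability(r_mz, r_dz):
    """Rough heritability estimate from twin correlations.

    Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ
    are the identical- and fraternal-twin correlations for a trait.
    """
    return 2 * (r_mz - r_dz)

# Approximate correlations cited in the text: .50 (identical), .20 (fraternal).
h2 = falconer_heritability(0.50, 0.20)  # share of variance attributed to genes
e2 = 1 - 0.50  # variance not shared even by identical twins: nonshared environment + error
```

Under this rough model about 60% of trait variance would be attributed to genes and the rest to environment and measurement error, consistent with the text’s conclusion that heredity and environment interact to determine personality.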

Evolutionary Theory

Charles Darwin founded the theory of evolution by natural selection, and the evolutionary approach to personality psychology is based on this theory. It examines how individual personality differences arise through natural selection. Through natural selection, organisms change over time through adaptation and selection. Traits develop and certain genes come into expression based on an organism’s environment and on how these traits aid in the organism’s survival and reproduction.

Polymorphisms, such as gender and blood type, are forms of diversity which evolve to benefit a species as a whole. The theory of evolution has wide-ranging implications for personality psychology. Personality viewed through the lens of evolutionary psychology places a great deal of emphasis on specific traits that are most likely to aid in survival and reproduction, such as conscientiousness, sociability, emotional stability, and dominance. The social aspects of personality can be seen through an evolutionary perspective. Specific character traits develop and are selected for because they play an important and complex role in the social hierarchy of organisms. Such characteristics of this social hierarchy include the sharing of important resources, family and mating interactions, and the harm or help organisms can bestow upon one another.

Drive Theories

In the 1930s, John Dollard and Neal Elgar Miller met at Yale University and began an attempt to integrate drives into a theory of personality, basing themselves on the work of Clark Hull. They began with the premise that personality could be equated with the habitual responses exhibited by an individual – their habits. From there, they determined that these habitual responses were built on secondary, or acquired, drives.

Secondary drives are internal needs directing the behaviour of an individual that result from learning. Acquired drives are learned, by and large in the manner described by classical conditioning: when we are in a certain environment and experience a strong response to a stimulus, we internalise cues from that environment. When we later find ourselves in an environment with similar cues, we begin to act in anticipation of a similar stimulus. Thus, we are likely to experience anxiety in an environment with cues similar to one where we have experienced pain or fear – such as the dentist’s office.

Secondary drives are built on primary drives, which are biologically driven and motivate us to act with no prior learning process – such as hunger, thirst or the need for sexual activity. However, secondary drives are thought to represent more specific elaborations of primary drives, behind which the functions of the original primary drive continue to exist. Thus, the primary drives of fear and pain exist behind the acquired drive of anxiety. Secondary drives can be based on multiple primary drives and even on other secondary drives, which is said to give them strength and persistence. Examples include the need for money, which was conceptualised as arising from multiple primary drives such as the drive for food and warmth, as well as from secondary drives such as imitativeness (the drive to do as others do) and anxiety.

Secondary drives vary based on the social conditions under which they were learned – such as culture. Dollard and Miller used the example of food, stating that the primary drive of hunger manifested itself behind the learned secondary drive of an appetite for a specific type of food, which was dependent on the culture of the individual.

Secondary drives are also explicitly social, representing a manner in which we convey our primary drives to others. Indeed, many primary drives are actively repressed by society (such as the sexual drive). Dollard and Miller believed that the acquisition of secondary drives was essential to childhood development. As children develop, they learn not to act on their primary drives, such as hunger, but acquire secondary drives through reinforcement. Friedman and Schustack describe an example of such developmental changes, stating that if an infant’s active orientation towards others brings about the fulfilment of primary drives, such as being fed or having their diaper changed, the infant will develop a secondary drive to pursue similar interactions with others – perhaps leading to a more gregarious individual. Dollard and Miller’s belief in the importance of acquired drives led them to reconceive Sigmund Freud’s theory of psychosexual development. They found themselves in agreement with the timing Freud used but believed that these periods corresponded to the successful learning of certain secondary drives.

Dollard and Miller gave many examples of how secondary drives impact our habitual responses – and by extension our personalities, including anger, social conformity, imitativeness or anxiety, to name a few. In the case of anxiety, Dollard and Miller note that people who generalise the situation in which they experience the anxiety drive will experience anxiety far more than they should. These people are often anxious all the time, and anxiety becomes part of their personality. This example shows how drive theory can have ties with other theories of personality – many of them look at the trait of neuroticism or emotional stability in people, which is strongly linked to anxiety.

Personality Tests

There are two major types of personality tests, projective and objective.

Projective tests assume personality is primarily unconscious and assess individuals by how they respond to an ambiguous stimulus, such as an ink blot. Projective tests have been in use for about 60 years and continue to be used today. Examples of such tests include the Rorschach test and the Thematic Apperception Test.

The Rorschach test involves showing an individual a series of note cards with ambiguous ink blots on them. The individual being tested is asked to provide interpretations of the blots on the cards by stating everything that the ink blot may resemble based on their personal interpretation. The therapist then analyses their responses. Rules for scoring the test are set out in manuals covering a wide variety of characteristics such as content, originality of response, location of “perceived images” and several other factors. Using these specific scoring methods, the therapist will then attempt to relate test responses to attributes of the individual’s personality and their unique characteristics. The idea is that unconscious needs will come out in the person’s response, e.g. an aggressive person may see images of destruction.

The Thematic Apperception Test (TAT) involves presenting individuals with vague pictures/scenes and asking them to tell a story based on what they see. Common examples of these “scenes” include images that may suggest family relationships or specific situations, such as a father and son or a man and a woman in a bedroom. Responses are analysed for common themes. Responses unique to an individual are theoretically meant to indicate underlying thoughts, processes, and potentially conflicts present within the individual. Responses are believed to be directly linked to unconscious motives. There is very little empirical evidence available to support these methods.

Objective tests assume personality is consciously accessible and that it can be measured by self-report questionnaires. Research on psychological assessment has generally found objective tests to be more valid and reliable than projective tests. Critics have pointed to the Forer effect to suggest some of these appear to be more accurate and discriminating than they really are. Issues with these tests include false reporting because there is no way to tell if an individual is answering a question honestly or accurately.

The Myers-Briggs Type Indicator (also known as the MBTI) is a self-report questionnaire based on Carl Jung’s type theory. However, the MBTI’s developers modified Jung’s theory into their own by disregarding certain processes held in the unconscious mind and the impact these have on personality.

Personality Theory Assessment Criteria

  • Verifiability – the theory should be formulated in such a way that the concepts, suggestions and hypotheses involved in it are defined clearly and unambiguously, and logically related to each other.
  • Heuristic value – to what extent the theory stimulates scientists to conduct further research.
  • Internal consistency – the theory should be free from internal contradictions.
  • Economy – the fewer concepts and assumptions the theory requires to explain a phenomenon, the better (Hjelle, 1992, Personality Theories: Basic Assumptions, Research, and Applications).

Psychology has traditionally defined personality through its behavioural patterns, and more recently with neuroscientific studies of the brain. In recent years, some psychologists have turned to the study of inner experiences for insight into personality as well as individuality. Inner experiences are the thoughts and feelings in response to an immediate phenomenon. Another term used for inner experiences is qualia. Being able to understand inner experiences assists in understanding how humans behave, act, and respond. Defining personality using inner experiences has been expanding because relying solely on behavioural principles to explain one’s character may seem incomplete. Behavioural methods allow the subject to be observed by an observer, whereas with inner experiences the subject is their own observer.

Methods Measuring Inner Experience

  • Descriptive Experience Sampling (DES) – Developed by psychologist Russell Hurlburt, this is an idiographic method used to help examine inner experiences. It relies on an introspective technique that allows an individual’s inner experiences and characteristics to be described and measured. A beep notifies the subject to record their experience at that exact moment, and 24 hours later an interview is given based on all the experiences recorded. DES has been used with subjects diagnosed with schizophrenia and depression, and has been crucial to studying the inner experiences of those diagnosed with common psychiatric diseases.
  • Articulated Thoughts in Simulated Situations (ATSS) – ATSS is a paradigm created as an alternative to the TA (think aloud) method. It assumes that people have continuous internal dialogues that can be naturally attended to, and it assesses a person’s inner thoughts as they verbalise their cognitions. In this procedure, subjects listen to a scenario via a video or audio player and are asked to imagine that they are in that specific situation. Later, they are asked to articulate their thoughts as they occur in reaction to the playing scenario. This method is useful in studying emotional experience, given that the scenarios used can evoke specific emotions. Most importantly, the method has contributed to the study of personality. In a study conducted by Rayburn and Davison (2002), subjects’ thoughts about and empathy regarding anti-gay hate crimes were evaluated. The researchers found that participants showed more aggressive intentions towards the offender in scenarios which mimicked hate crimes.
  • Experimental Method – An experimental paradigm used to study human experiences involved in the studies of sensation and perception, learning and memory, motivation, and biological psychology. The experimental psychologist usually deals with intact organisms, although studies are often conducted with organisms modified by surgery, radiation, drug treatment, or long-standing deprivations of various kinds, or with organisms that naturally present organic abnormalities or emotional disorders. Economists and psychologists have developed a variety of experimental methodologies to elicit and assess individual attitudes, where each emotion differs for each individual. The results are then gathered and quantified to conclude whether specific experiences have any common factors. This method is used to seek clarity about the experience, removing biases, in order to understand the meaning behind the experience and to see whether it can be generalised.