An Overview of Situationism (in Psychology)

Introduction

Within the person–situation debate, situationism is the theory that human behaviour is largely a product of the situation rather than of the traits a person possesses: behaviour is held to be shaped by external, situational factors rather than by internal traits or motivations. Situationism therefore challenges the positions of trait theorists, such as Hans Eysenck or Raymond B. Cattell. The debate remains ongoing, with experimental evidence offered in support of both positions.

Brief History and Conceptions

Situationists hold that thoughts, feelings, dispositions, and past experiences and behaviours do not determine what someone will do in a given situation; the situation itself does. Situationists do not completely disregard the idea of traits, but they suggest that situations have a greater impact on behaviour than traits do. Situationism is also influenced by culture, in that the extent to which people believe that situations impact behaviours varies between cultures. Situationism has been perceived as arising in response to trait theories, correcting the notion that everything we do is because of our traits; however, it has also been criticised for ignoring individuals’ inherent influences on behaviour. Many experiments, including those described below and in the sources cited, support the view, although these experiments do not test what people would do in situations that are forced or rushed, and many mistakes are made through rushing or through lapses of concentration. Because situationism can be looked at in many different ways, it also needs to be tested and experimented on in many different ways.

Criticisms of Situationism

While situationism has become an increasingly popular theory in philosophy, it has never quite garnered the same attention in psychology. One reason may be the criticism, put forward by psychologists, that just because a personality effect does not account for the entirety of an observed behaviour, there is no reason to conclude that the remainder is determined by situational effects. Rather, many psychologists believe that trait–situation interactions are more likely responsible for observed behaviours; that is, behaviour cannot be attributed to personality traits alone or to situational effects alone, but rather to an interaction between the two. Additionally, the popularity of the Big Five (five-factor) model of personality within psychology has overshadowed situationism: because this model identifies specific personality traits and claims that they can explain an individual’s behaviour and decisions, situationism has come to seem somewhat obsolete.

Experimental Evidence

Evidence For Situationism

Many studies have produced evidence supporting situationism. One notable situationist study is Philip Zimbardo’s Stanford prison experiment. The study is considered one of the most unethical in psychology because the participants were deceived and were physically and psychologically abused. Zimbardo wanted to discover two things: whether prison guards abuse prisoners because of their nature or because of the power and authority they are given in the situation, and whether prisoners act violently and savagely because of their nature or because of being placed in a secluded and violent environment. To carry out the experiment, Zimbardo gathered 24 college men and paid them 15 dollars a day to live for two weeks in a mock prison. The participants were told that they had been chosen to be guards or prisoners because of their personality traits, but they were in fact randomly assigned. The prisoners were booked, given prison clothes and no possessions, and assigned a number by which they were referred to, with the intent of further dehumanising them. Within the first night, the prisoner and guard dynamics began to take hold: the guards started waking the prisoners in the middle of the night for counts, yelling at and ridiculing them, and the prisoners began developing hostility towards the guards and having prison-related conversations. By the second day, the guards were abusing the prisoners by forcing them to do push-ups, and the prisoners rebelled by removing their caps and numbers and hiding in their cells with their mattresses blocking the doors. As the days passed, the relationship between the guards and prisoners became extremely hostile: the prisoners fought for their independence, and the guards fought to strip them of it.

There were many cases where prisoners began breaking down psychologically, and it started with prisoner 8612. A day after the experiment started, prisoner 8612 had anxiety attacks and asked to leave. He was told, “You can’t leave. You can’t quit.” He then went back to the prison and “began to act ‘crazy,’ to scream, to curse, to go into a rage that seemed out of control,” after which he was sent home. Another prisoner who broke down was 819, who was told to rest in a room. When Zimbardo went to check on him, he recalled, “what I found was a boy crying hysterically while in the background his fellow prisoners were yelling and chanting that he was a bad prisoner, that they were being punished because of him.” Zimbardo then allowed him to leave, but 819 said he could not because he had been labelled a bad prisoner, to which Zimbardo responded, “Listen, you are not 819. My name is Dr. Zimbardo, I am a psychologist, and this is not a prison. This is just an experiment and those are students, just like you. Let’s go.” According to Zimbardo, the prisoner stopped crying suddenly, looked up “just like a small child awakened from a nightmare” and said, “OK, let’s go.”

The guards also developed extremely abusive relationships with the prisoners. Zimbardo identified three types of guards: the first followed all the rules but got the job done, the second felt bad for the prisoners, and the third were extremely hostile and treated the prisoners like animals. This last type behaved like actual guards and seemed to have forgotten they were college students; they got into their roles fastest and seemed to enjoy tormenting the prisoners. On the Thursday night, six days into the experiment, Zimbardo described the guards’ behaviour as “sadistic” and decided to close down the study early.

This study showed how ordinary people can completely dissociate from who they are when their environment changes: regular college boys turned into broken-down prisoners and sadistic guards.

Studies investigating bystander effects also support situationism. For example, in 1973, Darley and Batson conducted a study in which they asked students at a seminary school to give a presentation in a separate building. They gave each participant a topic, told them either that they were expected at the other building immediately or that they had a few minutes to spare, and sent them on their way. On the way, each participant encountered a confederate who was on the ground, clearly in need of medical attention. Darley and Batson observed that more of the participants who had extra time stopped to help the confederate than of those who were in a hurry. Helping was not predicted by religious personality measures, and the results therefore indicate that the situation influenced behaviour.

A third well-known study supporting situationism is an obedience study, the Milgram experiment. Stanley Milgram designed his obedience study to explain the phenomenon of obedience, prompted in particular by the Holocaust: he wanted to understand how people follow orders, and how likely people are to do immoral things when ordered to by an authority figure. Milgram recruited 40 men through a newspaper ad to take part in a study at Yale University; the men were between 20 and 50 years old and were paid $4.50 for showing up. Each participant was assigned to be a “teacher” and a confederate was assigned to be a “learner”. The teachers were told that the learners had to memorise word pairs and that every incorrect answer would be punished with a shock of increasing voltage. The voltages ranged from 15 to 450 volts, and to make the setup believable, each participant was given a real 45-volt sample shock. The participant was unaware that the learner was a confederate. The participant would test the learner and, for each incorrect answer, would administer a shock of increasing voltage; the shocks were not actually delivered, but the participant believed they were. When the shocks reached 300 volts, the learner began to protest and show distress. Milgram expected participants to stop the procedure, but 65% of them, 26 of the 40, continued to the end, administering shocks that could have been fatal, even while uncomfortable or upset. Although most participants continued, many showed distress while administering the shocks, such as nervous, hysterical laughter. Participants felt compelled to obey the experimenter, the authority figure present in the room, who continued to prod them throughout the study.

Evidence against Situationism

The core evidence cited for situationism is that personality traits have only a weak relationship to behaviour, whereas situational factors usually have a stronger impact. Against this, however, people are readily able to describe the character traits of those close to them, such as friends and family, which suggests that stable, recognisable traits do exist.

Other studies show related trends. Twin studies, for example, have found that identical twins share more traits than fraternal twins, implying a genetic basis for behaviour that directly contradicts the situationist view that behaviour is determined by the situation. At the same time, when people observe one instance of extroverted or honest behaviour, they tend to assume the person would behave in a similarly extroverted or honest way in other situations; yet when many people are observed across a range of situations, the actual correlation between trait-related behaviours is about .20 or less, whereas people assume it to be around .80. How a person behaves at a given moment thus depends on their characteristics and circumstances as well as on what is taking place at that point in time.
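
To see what these two correlation figures imply, consider a small simulation. This is a purely illustrative sketch: the simulated scores and the data-generating formula below are assumptions, not data from the studies discussed here. A correlation of .20 means a trait accounts for only about 4% of the variance in behaviour, whereas a correlation of .80 would account for about 64%.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    trait = rng.standard_normal(n)

    for r in (0.20, 0.80):
        # Build a behaviour score whose population correlation with the trait is r.
        behaviour = r * trait + np.sqrt(1 - r ** 2) * rng.standard_normal(n)
        observed_r = np.corrcoef(trait, behaviour)[0, 1]
        print(f"r = {r:.2f}: the trait explains about {observed_r ** 2:.0%} "
              f"of the variance in behaviour")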

These challenges to the traditional view that individuals possess robust character traits have not gone unnoticed. Some have attempted to modify the traditional view to insulate it from these challenges, while others have tried to show that the challenges fail to undermine it at all. For example, Dana Nelkin (2005), Christian Miller (2003), Gopal Sreenivasan (2002), and John Sabini and Maury Silver (2005), among others, have argued that the empirical evidence cited by the situationists does not show that individuals lack robust character traits.

Current Views: Interactionism

In addition to the debate between trait influences and situational influences on behaviour, a psychological model of “interactionism” exists, which is a view that both internal dispositions and external situational factors affect a person’s behaviour in a given situation. This model emphasizes both sides of the person-situation debate, and says that internal and external factors interact with each other to produce a behaviour. Interactionism is currently an accepted personality theory, and there has been sufficient empirical evidence to support interactionism. However, it is also important to note that both situationists and trait theorists contributed to explaining facets of human behaviour.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Situationism_(psychology) >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of the Cognitive Miser

Introduction

In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending cognitive effort. The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain when and why people are cognitive misers.

The term cognitive miser was first introduced by Susan Fiske and Shelley Taylor in 1984. It is an important concept in social cognition theory and has been influential in other social sciences such as economics and political science.

Simply put, people are limited in their capacity to process information, so they take shortcuts whenever they can.

Assumption

The metaphor of the cognitive miser assumes that the human mind is limited in time, knowledge, attention, and cognitive resources. Usually people do not think rationally or cautiously, but use cognitive shortcuts to make inferences and form judgements. These shortcuts include the use of schemas, scripts, stereotypes, and other simplified perceptual strategies instead of careful thinking. For example, people tend to draw correspondent inferences, assuming that behaviours are correlated with, or representative of, stable characteristics.

Background

The Naïve Scientist and Attribution Theory

Before Fiske and Taylor’s cognitive miser theory, the predominant model of social cognition was the naïve scientist. First proposed in 1958 by Fritz Heider in The Psychology of Interpersonal Relations, this theory holds that humans think and act with dispassionate rationality whilst engaging in detailed and nuanced thought processes for both complex and routine actions. In this way, humans were thought to think like scientists, albeit naïve ones, measuring and analysing the world around them. Applying this framework to human thought processes, naïve scientists seek the consistency and stability that comes from a coherent view of the world and need for environmental control.

In order to meet these needs, naïve scientists make attributions. Thus, attribution theory emerged from the study of the ways in which individuals assess causal relationships and mechanisms. Through the study of causal attributions, led by Harold Kelley and Bernard Weiner amongst others, social psychologists began to observe that subjects regularly demonstrate several attributional biases including but not limited to the fundamental attribution error.

The study of attributions had two effects: it created further interest in testing the naïve scientist and opened up a new wave of social psychology research that questioned its explanatory power. This second effect helped to lay the foundation for Fiske and Taylor’s cognitive miser.

Stereotypes

According to Walter Lippmann’s arguments in his classic book Public Opinion, people are not equipped to deal with complexity. Attempting to observe things freshly and in detail is mentally exhausting, especially in the midst of busy affairs. The term stereotype is thus introduced: people have to reconstruct a complex situation on a simpler model before they can cope with it, and that simpler model can be regarded as a stereotype. Stereotypes are formed from outside sources identified with people’s own interests, and they can be reinforced because people tend to be most impressed by facts that fit their existing philosophy.

On the other hand, in Lippmann’s view, people are told about the world before they see it: their behaviour is based not on direct and certain knowledge but on pictures made by them or given to them. The influence of external factors in shaping people’s stereotypes therefore cannot be neglected.

“The subtlest and most pervasive of all influences are those which create and maintain the repertory of stereotypes.”

That is to say, people live in a second-hand world of mediated reality, in which the simplified models for thinking (i.e. stereotypes) can be created and maintained by external forces. Lippmann suggested that the public “cannot be wise”, since people are easily misled by an oversimplified reality that is consistent with their pre-existing pictures in mind, and any disturbance of the existing stereotypes will seem like “an attack upon the foundation of the universe”.

Although Lippmann did not directly define the term cognitive miser, stereotypes serve an important function in simplifying people’s thinking processes. As a cognitive simplification, the stereotype supports an economical management of reality that would otherwise overwhelm people with its complexity. Stereotyping, as a phenomenon, has since become a standard topic in sociology and social psychology.

Heuristics

Much of the cognitive miser theory is built upon work on heuristics in judgement and decision-making, most notably the findings of Amos Tversky and Daniel Kahneman published in a series of influential articles. Heuristics can be defined as the “judgmental shortcuts that generally get us where we need to go—and quickly—but at the cost of occasionally sending us off course.” In their work, Kahneman and Tversky demonstrated that people rely upon different types of heuristics, or mental shortcuts, in order to save time and mental energy. However, relying upon heuristics instead of detailed analysis, such as the information processing employed by Heider’s naïve scientist, makes biased information processing more likely. Some of these heuristics include:

  • Representativeness heuristic (the inclination to assign attributes to an individual the more closely they match the prototype of that group).
  • Availability heuristic (the inclination to judge the likelihood of an event by the ease with which examples of it come to mind).
  • Anchoring and adjustment heuristic (the inclination to overweight an initial piece of information, the anchor, and then adjust one’s answer away from it, often insufficiently).

The frequency with which Kahneman, Tversky, and other attribution researchers found that individuals employ mental shortcuts to make decisions and assessments laid important groundwork for the overarching idea that individuals and their minds act efficiently rather than analytically.

Cognitive Miser Theory

The wave of research on attributional biases done by Kahneman, Tversky, and others effectively ended the dominance of Heider’s naïve scientist within social psychology. Fiske and Taylor, building upon the prevalence of heuristics in human cognition, offered their theory of the cognitive miser. It is, in many ways, a unifying theory of ad-hoc decision-making which suggests that humans engage in economically prudent thought processes instead of acting like scientists who rationally weigh cost and benefit data, test hypotheses, and update expectations based upon the results of the discrete experiments that are our everyday actions. In other words, humans are more inclined to act as cognitive misers, using mental shortcuts to make assessments and decisions about issues and ideas of which they know very little, including issues of great salience. Fiske and Taylor argue that it is rational to act as a cognitive miser because of the sheer volume and intensity of information and stimuli humans take in. Given the limited information-processing capabilities of individuals, people try to adopt strategies that simplify complex problems. Cognitive misers usually act in two ways: by disregarding part of the information to reduce their own cognitive load, or by overusing some kind of information to avoid the burden of finding and processing more information.

Other psychologists also argue that the cognitively miserly tendency of humans is a primary reason why “humans are often less than rational”. This view holds that evolution has made the brain extremely stingy in its allocation and use of cognitive resources. The basic principle is to save mental energy as much as possible, even when it is required to “use your head”. Unless the cognitive environment meets certain criteria, we will, by default, try to avoid thinking as much as possible.

Implications

The implications of this theory raise important questions about both cognition and human behaviour. In addition to streamlining cognition in complicated, analytical tasks, the cognitive miser approach is also used when dealing with unfamiliar issues and issues of great importance.

Politics

Voting behaviour in democracies is an arena in which the cognitive miser is at work. Acting as a cognitive miser should lead those with expertise in an area to more efficient information processing and streamlined decision-making. However, as Lau and Redlawsk note, acting as a cognitive miser who employs heuristics can have very different results for high-information and low-information voters. They write:

“…cognitive heuristics are at times employed by almost all voters, and that they are particularly likely to be used when the choice situation facing voters is complex… heuristic use generally increases the probability of a correct vote by political experts but decreases the probability of a correct vote by novices.”

In democracies, where no vote is weighted more or less because of the expertise behind its casting, low-information voters acting as cognitive misers can make choices with broad and potentially deleterious consequences for a society.

Samuel Popkin argues that voters make rational choices by using information shortcuts that they receive during campaigns, usually using something akin to a drunkard’s search. Voters use small amounts of personal information to construct a narrative about candidates. Essentially, they ask themselves this:

“Based on what I know about the candidate personally, what is the probability that this presidential candidate was a good governor? What is the probability that he will be a good president?”

Popkin’s analysis is based on one main premise: voters use low information rationality gained in their daily lives, through the media and through personal interactions, to evaluate candidates and facilitate electoral choices.

Economics

Cognitive miserliness could also be one of the contributors to the prisoner’s dilemma in game theory. To save cognitive energy, cognitive misers tend to assume that other people are similar to themselves: habitual co-operators assume that most others are co-operators, and habitual defectors assume that most others are defectors. Experimental research has shown that, because co-operators offer to play more often and fellow co-operators more often accept their offers, co-operators obtain a higher expected payoff than defectors when certain boundary conditions are met.
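
As a rough illustration of this mechanism, here is a minimal sketch in Python. The payoff values, the outside option for declining to play, and the rule that players project their own strategy onto partners are all assumptions chosen for demonstration, not figures taken from the experiments referred to above.

    # Assumed payoffs: R = mutual co-operation, P = mutual defection,
    # OUTSIDE = payoff for declining to play (boundary condition: R > OUTSIDE > P).
    R, P, OUTSIDE = 3.0, 1.0, 1.5

    def wants_to_play(strategy):
        # Cognitive-miser projection: assume the partner will make the same
        # move that you would make yourself.
        expected = R if strategy == "C" else P
        return expected > OUTSIDE

    def expected_payoff(strategy, share_cooperators):
        """Expected per-round payoff, given the actual share of co-operators."""
        if not wants_to_play(strategy):
            return OUTSIDE                 # habitual defectors decline to play
        # A co-operator offers to play; only fellow co-operators accept, so the
        # game is played with probability share_cooperators and then pays R.
        return share_cooperators * R + (1 - share_cooperators) * OUTSIDE

    for share in (0.2, 0.5, 0.8):
        print(f"co-operator share {share:.0%}: "
              f"C earns {expected_payoff('C', share):.2f}, "
              f"D earns {expected_payoff('D', share):.2f}")

In this toy model the boundary condition is simply that mutual co-operation pays more than staying out of the game, which in turn pays more than mutual defection; outside that range the co-operators’ advantage disappears.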

Mass Communication

Lack of public support for emerging technologies is commonly attributed to a lack of relevant information and to low scientific literacy among the public. Known as the knowledge deficit model, this point of view rests on the idealistic assumptions that education for science literacy can increase public support of science, and that the focus of science communication should be on increasing scientific understanding among the lay public. However, the relationship between information and attitudes towards scientific issues is not empirically supported.

Based on the assumption that human beings are cognitive misers who tend to minimise cognitive costs, low-information rationality was introduced as an empirically grounded alternative for explaining decision-making and attitude formation. Rather than using an in-depth understanding of scientific topics, people make decisions based on other shortcuts or heuristics, such as ideological predispositions or cues from mass media, due to the subconscious compulsion to use only as much information as necessary. The less initial expertise citizens have on an issue, the more likely they are to rely on these shortcuts. Further, people spend less cognitive effort in buying toothpaste than in picking a new car, and that difference in information-seeking is largely a function of the costs involved.

The cognitive miser theory thus has implications for persuading the public: attitude formation is a competition between people’s value systems and predispositions (or their own interpretive schemata) on a certain issue and the way public discourses frame it. Framing theory suggests that the same topic will result in different interpretations among an audience if the information is presented in different ways, and changes in audience attitudes are closely connected with relabelling or re-framing the issue in question. In this sense, effective communication can be achieved if the media provide audiences with cognitive shortcuts or heuristics that resonate with underlying audience schemata.

Risk Assessment

The metaphor of the cognitive miser can assist people in drawing lessons from risk, understood as the possibility that an undesirable state of reality may occur. People apply a number of shortcuts or heuristics in making judgements about the likelihood of an event, because the rapid answers provided by heuristics are often right. Yet certain pitfalls may be neglected in these shortcuts. A practical example of the cognitively miserly way of thinking, in the context of a risk assessment of the Deepwater Horizon explosion, is presented below:

  • People have trouble in imagining how small failings can pile up to form a catastrophe;
  • People tend to get accustomed to risk. Due to the seemingly smooth current situation, people unconsciously adjust their acceptance of risk;
  • People tend to over-express their faith and confidence in backup systems and safety devices;
  • People tend to match complicated technical systems with complicated governing structures;
  • When concerned with a certain issue, people tend to spread good news and hide bad news; and
  • People tend to think alike if they are in the same field (see also: echo chamber), regardless of their position in a project’s hierarchy.

Psychology

The theory that human beings are cognitive misers also sheds light on dual process theory in psychology. Dual process theory proposes that there are two types of cognitive processes in the human mind; Daniel Kahneman described these as intuitive (System 1) and reasoning (System 2) processes, respectively.

When processing with System 1, which starts automatically and without control, people expend little to no effort, but can generate complex patterns of ideas. When processing with System 2, people actively consider how best to distribute mental effort to accurately process data, and can construct thoughts in an orderly series of steps. These two cognitive processing systems are not separate and can have interactions with each other. Here is an example of how people’s beliefs are formed under the dual process model:

  • System 1 generates suggestions for System 2, with impressions, intuitions, intentions or feelings;
  • If System 1’s proposal is endorsed by System 2, those impressions and intuitions will turn into beliefs, and the sudden inspiration generated by System 1 will turn into voluntary actions;
  • When everything goes smoothly (as is often the case), System 2 adopts the suggestions of System 1 with little or no modification. Herein lies a window for bias to form, as System 2 may uncritically accept, or misjudge the accuracy of, impressions derived from observations gathered via System 1.

The reasoning process (System 2) can be activated to assist intuition when:

  • A question arises, but System 1 does not generate an answer
  • An event is detected that violates the model of the world that System 1 maintains.

Conflict also exists between the two processes. A brief example provided by Kahneman is that when we try not to stare at the oddly dressed couple at the neighbouring table in a restaurant, our automatic reaction (System 1) makes us stare at them, and conflict emerges as System 2 tries to control this behaviour.
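
The flow described above can be summarised in a small, purely illustrative sketch; the functions, the confidence threshold, and the example questions are assumptions used for demonstration, not part of Kahneman’s account.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Suggestion:
        content: str        # an impression, intuition, intention or feeling
        confidence: float   # how fluent and effortless the System 1 answer feels

    def system1(question: str) -> Optional[Suggestion]:
        # Fast, automatic, always on: returns a quick suggestion, or none at all.
        quick_answers = {"2 + 2": Suggestion("4", 0.99)}
        return quick_answers.get(question)      # "17 x 24" yields no intuition

    def system2(question: str, suggestion: Optional[Suggestion],
                surprising_event: bool = False) -> str:
        # Slow, effortful: usually endorses System 1 with little or no change,
        # but takes over when System 1 has no answer, the answer feels shaky,
        # or an event violates System 1's model of the world.
        if suggestion is None or surprising_event or suggestion.confidence < 0.5:
            return f"engage deliberate reasoning about '{question}'"
        return suggestion.content               # endorse the intuition as a belief

    for q in ("2 + 2", "17 x 24"):
        print(q, "->", system2(q, system1(q)))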

The dual processing system can produce cognitive illusions. System 1 operates automatically, taking the easiest shortcut and often erring, and System 2 may have no awareness of the error. Errors can be prevented only by enhanced monitoring by System 2, which costs a great deal of cognitive effort.

Limitations

Omission of Motivation

The cognitive miser theory did not originally specify the role of motivation. Fiske’s subsequent research acknowledged that the role of intent had been omitted from the metaphor of the cognitive miser; motivation does affect the activation and use of stereotypes and prejudices.

Updates and Later Research

Motivated Tactician

People tend to use heuristic shortcuts when making decisions, but the problem remains that, although these shortcuts cannot match effortful thought in accuracy, people need some criterion by which to select the most adequate shortcut. Kruglanski proposed that people are a combination of naïve scientists and cognitive misers: flexible social thinkers who choose between multiple cognitive strategies (i.e. speed/ease versus accuracy/logic) based on their current goals, motives, and needs.

Later models suggest that the cognitive miser and the naïve scientist form two poles of social cognition that are each too monolithic. Instead, Fiske, Taylor, Arie W. Kruglanski, and other social psychologists offer an alternative explanation of social cognition: the motivated tactician. According to this view, people employ either shortcuts or thoughtful analysis depending on the context and salience of a particular issue; in other words, humans are in fact both naïve scientists and cognitive misers. In this sense, people allocate their cognitive effort strategically rather than passively choosing the most effortless shortcuts, and they can decide to be naïve scientists or cognitive misers depending on their goals.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Cognitive_miser >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

An Overview of Schema (in Psychology)

Introduction

In psychology and cognitive science, a schema (pl.: schemata or schemas) describes a pattern of thought or behaviour that organises categories of information and the relationships among them. It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organising and perceiving new information, such as a mental schema or conceptual model. Schemata influence attention and the absorption of new knowledge: people are more likely to notice things that fit into their schema, while re-interpreting contradictions to the schema as exceptions or distorting them to fit. Schemata have a tendency to remain unchanged, even in the face of contradictory information. Schemata can help in understanding the world and the rapidly changing environment, and people can organise new perceptions into schemata quickly because most situations do not require complex thought; automatic thought is often all that is required.

People use schemata to organise current knowledge and provide a framework for future understanding. Examples of schemata include mental models, social schemas, stereotypes, social roles, scripts, worldviews, heuristics, and archetypes. In Piaget’s theory of development, children construct a series of schemata, based on the interactions they experience, to help them understand the world.

(See also the section on Schema Therapy below.)

Brief History

“Schema” comes from the Greek word schēma (stem schēmat-), meaning “figure” or “shape”.

Prior to its use in psychology, the term “schema” had primarily seen use in philosophy. For instance, “schemata” (especially “transcendental schemata”) are crucial to the architectonic system devised by Immanuel Kant in his Critique of Pure Reason.

Early developments of the idea in psychology emerged with the gestalt psychologists (founded originally by Max Wertheimer) and Jean Piaget. The term schéma was introduced by Piaget in 1923. In Piaget’s later publications, action (operative or procedural) schèmes were distinguished from figurative (representational) schémas, although together they may be considered a schematic duality. In subsequent discussions of Piaget in English, schema was often a mistranslation of Piaget’s original French schème. The distinction has been of particular importance in theories of embodied cognition and ecological psychology.

In psychology, the concept was first described in the 1932 work of British psychologist Frederic Bartlett, who drew on the term body schema used earlier by neurologist Henry Head. In 1952, Jean Piaget, credited with the first cognitive-developmental theory of schemas, popularised the concept. By 1977, it had been expanded into schema theory by educational psychologist Richard C. Anderson. Since then, other terms have been used to describe schemata, such as “frame”, “scene”, and “script”.

Schematic Processing

Through the use of schemata, a heuristic technique to encode and retrieve memories, the majority of typical situations do not require much strenuous processing. People can quickly organise new perceptions into schemata and act without effort. The process, however, is not always accurate, and people may develop illusory correlations, which is the tendency to form inaccurate or unfounded associations between categories, especially when the information is distinctive.

Nevertheless, schemata can influence and hamper the uptake of new information, such as when existing stereotypes, giving rise to limited or biased discourses and expectations, lead an individual to “see” or “remember” something that has not happened because it is more believable in terms of his or her schema. For example, if a well-dressed businessman draws a knife on a vagrant, the schemata of onlookers may (and often do) lead them to “remember” the vagrant pulling the knife. Such distortion of memory has been demonstrated (see Background Research below). Schemata have also been seen to affect the formation of episodic memory in humans; for instance, under certain recall conditions one is more likely to remember a pencil case in an office than a skull, even if both were present in the office.

Schemata are interrelated and multiple conflicting schemata can be applied to the same information. Schemata are generally thought to have a level of activation, which can spread among related schemata. Through different factors such as current activation, accessibility, priming, and emotion, a specific schema can be selected.

Accessibility is how easily a schema can come to mind, and is determined by personal experience and expertise. This can be used as a cognitive shortcut, meaning it allows the most common explanation to be chosen for new information.

With priming (an increased sensitivity to a particular schema due to a recent experience), a brief imperceptible stimulus temporarily provides enough activation to a schema so that it is used for subsequent ambiguous information. Although this may suggest the possibility of subliminal messages, the effect of priming is so fleeting that it is difficult to detect outside laboratory conditions.
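
As a rough way to picture activation spreading among related schemata, here is a small illustrative sketch; the schemata, link weights, priming amount, and decay factor are all assumed values rather than anything drawn from the research described in this article.

    # All values below (schemata, link weights, priming amount, decay) are assumed.
    activation = {"office": 0.0, "library": 0.0, "kitchen": 0.0}
    links = {("office", "library"): 0.6, ("office", "kitchen"): 0.1}   # relatedness

    def prime(schema, amount, decay=0.5):
        """Boost one schema and let activation spread to related schemata."""
        activation[schema] += amount
        for (a, b), weight in links.items():
            if a == schema:
                activation[b] += amount * weight * decay
            elif b == schema:
                activation[a] += amount * weight * decay

    prime("office", 1.0)    # e.g. a brief, recent office-related cue
    selected = max(activation, key=activation.get)   # the most active schema is applied
    print(activation, "->", selected)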

Background Research

Frederic Bartlett

The original concept of schemata is linked with that of reconstructive memory, as proposed and demonstrated in a series of experiments by Frederic Bartlett. Bartlett presented participants with information that was unfamiliar to their cultural backgrounds and expectations, and then monitored how they recalled these different items of information (stories, etc.). Bartlett was able to establish that individuals’ existing schemata and stereotypes influence not only how they interpret “schema-foreign” new information but also how they recall the information over time. One of his most famous investigations involved asking participants to read a Native American folk tale, “The War of the Ghosts”, and recall it several times up to a year later. All the participants transformed the details of the story in such a way that it reflected their cultural norms and expectations, i.e. in line with their schemata. The factors that influenced their recall were:

  • Omission of information that was considered irrelevant to a participant;
  • Transformation of some of the details, or of the order in which events, etc., were recalled; a shift of focus and emphasis in terms of what was considered the most important aspects of the tale;
  • Rationalisation: details and aspects of the tale that would not make sense would be “padded out” and explained in an attempt to render them comprehensible to the individual in question; and
  • Cultural shifts: the content and the style of the story were altered in order to appear more coherent and appropriate in terms of the cultural background of the participant.

Bartlett’s work was crucially important in demonstrating that long-term memories are neither fixed nor unchanging but are constantly being adjusted as schemata evolve with experience. His work contributed to a framework of memory retrieval in which people construct the past and present in a constant process of narrative/discursive adjustment. Much of what people “remember” is confabulated narrative (adjusted and rationalised) which allows them to think of the past as a continuous and coherent string of events, even though it is probable that large sections of memory (both episodic and semantic) are irretrievable or inaccurate at any given time.

An important step in the development of schema theory was taken by the work of D.E. Rumelhart describing the understanding of narrative and stories. Further work on the concept of schemata was conducted by W.F. Brewer and J.C. Treyens, who demonstrated that the schema-driven expectation of the presence of an object was sometimes sufficient to trigger its incorrect recollection. An experiment was conducted where participants were requested to wait in a room identified as an academic’s study and were later asked about the room’s contents. A number of the participants recalled having seen books in the study whereas none were present. Brewer and Treyens concluded that the participants’ expectations that books are present in academics’ studies were enough to prevent their accurate recollection of the scenes.

In the 1970s, computer scientist Marvin Minsky was trying to develop machines with human-like abilities. While seeking solutions to some of the difficulties he encountered, he came across Bartlett’s work and concluded that if he was ever going to get machines to act like humans, he needed them to use their stored knowledge to carry out processes. To meet this need he created the frame construct, a way of representing knowledge in machines that can be seen as an extension and elaboration of the schema construct. He proposed that fixed, broad information would be represented as the frame itself, which would be composed of slots accepting a range of values; if the world did not supply a value for a slot, it would be filled by a default value. Because of Minsky’s work, computers have since had a stronger impact on psychology. In the 1980s, David Rumelhart extended Minsky’s ideas, creating an explicitly psychological theory of the mental representation of complex knowledge.

Roger Schank and Robert Abelson developed the idea of a script, generic knowledge of a sequence of actions. This led to many new empirical studies, which found that providing a relevant schema can help improve comprehension and recall of passages.

Schemata have also been viewed from a sociocultural perspective, with contributions from Lev Vygotsky, in which there is a transactional relationship between the development of a schema and the environment that influences it, such that the schema does not develop independently as a construct in the mind but carries all the aspects of the historical, social, and cultural meaning that influence its development. Schemata are not just scripts or frameworks to be called upon, but active processes for solving problems and interacting with the world. However, schemata can also contribute to harmful sociocultural outcomes, such as the development of racist tendencies, disregard for marginalised communities, and cultural misconceptions.

Modification

New information that falls within an individual’s schema is easily remembered and incorporated into their worldview. However, when new information is perceived that does not fit a schema, many things can happen. One of the most common reactions is to simply ignore or quickly forget the new information. This can happen on an unconscious level: an individual may not even perceive the new information. People may also interpret the new information in a way that minimises how much they must change their schemata. For example, Bob thinks that chickens do not lay eggs. He then sees a chicken laying an egg. Instead of changing the part of his schema that says “chickens don’t lay eggs”, he is likely to adopt the belief that the animal in question is not a real chicken. This is an example of disconfirmation bias, the tendency to set higher standards for evidence that contradicts one’s expectations, and it is related to cognitive dissonance. However, when the new information cannot be ignored, existing schemata must be changed or new schemata must be created (accommodation).

Jean Piaget (1896–1980) was best known for his work on the development of human knowledge. He believed knowledge is constructed on cognitive structures, and that people develop cognitive structures by accommodating and assimilating information. Accommodation is creating new schemata, or adjusting old ones, to fit better with the new environment; it can also be interpreted as placing restrictions on a current schema, and it usually comes about when assimilation has failed. Assimilation is when people use a current schema to understand the world around them. Piaget thought that schemata are applied to everyday life and that people therefore accommodate and assimilate information naturally. For example, if the chicken Bob saw has red feathers, he can form a new schema that says “chickens with red feathers can lay eggs”. This schema may later be changed or removed entirely.

Assimilation is the reuse of schemata to fit new information. For example, when a person sees an unfamiliar dog, they will probably just integrate it into their existing dog schema. However, if the dog behaves strangely, in ways that do not seem dog-like, accommodation occurs as a new schema is formed for that particular dog. With accommodation and assimilation comes the idea of equilibrium. Piaget describes equilibrium as a balanced state of cognition in which schemata are capable of explaining what the person sees and perceives. When information is new and cannot fit into existing schemata, disequilibrium occurs: the person is frustrated and tries to restore the coherence of his or her cognitive structures through accommodation. If the new information is accepted, it is assimilated and the person returns to equilibrium until a further adjustment becomes necessary later on. The process of moving from equilibrium to disequilibrium and back to equilibrium is called equilibration.

In view of this, a person’s new schema may be an expansion of an existing schema into a subtype. This allows information to be incorporated into existing beliefs without contradicting them. An example in social psychology would be the combination of a person’s beliefs about women and their beliefs about business. If women are not generally perceived to be in business, but the person meets a woman who is, a new subtype of businesswoman may be created, and the information perceived will be incorporated into this subtype. Activation of either the woman or the business schema may then make the “businesswoman” schema more available. This also allows previous beliefs about women or about people in business to persist: rather than the schemata related to women or to business persons being modified, the subtype becomes its own category.

Self-schema

Schemata about oneself are considered to be grounded in the present and based on past experiences. Memories are framed in the light of one’s self-conception. For example, people who have positive self-schemata (i.e. most people) selectively attend to flattering information and ignore unflattering information, with the consequence that flattering information is subject to deeper encoding, and therefore superior recall. Even when encoding is equally strong for positive and negative feedback, positive feedback is more likely to be recalled. Moreover, memories may even be distorted to become more favourable: for example, people typically remember exam grades as having been better than they actually were. However, when people have negative self views, memories are generally biased in ways that validate the negative self-schema; people with low self-esteem, for instance, are prone to remember more negative information about themselves than positive information. Thus, memory tends to be biased in a way that validates the agent’s pre-existing self-schema.

There are three major implications of self-schemata. First, information about oneself is processed faster and more efficiently, especially consistent information. Second, one retrieves and remembers information that is relevant to one’s self-schema. Third, one will tend to resist information in the environment that is contradictory to one’s self-schema. For instance, students with a particular self-schema prefer roommates whose view of them is consistent with that schema. Students who end up with roommates whose view of them is inconsistent with their self-schema are more likely to try to find a new roommate, even if this view is positive. This is an example of self-verification.

As researched by Aaron Beck, automatically activated negative self-schemata are a large contributor to depression. According to Cox, Abramson, Devine, and Hollon (2012), these self-schemata are essentially the same type of cognitive structure as stereotypes studied by prejudice researchers (e.g. they are both well-rehearsed, automatically activated, difficult to change, influential toward behavior, emotions, and judgments, and bias information processing).

The self-schema can also be self-perpetuating. It can represent a particular role in society that is based on stereotype, for example: “If a mother tells her daughter she looks like a tom boy, her daughter may react by choosing activities that she imagines a tom boy would do. Conversely, if the mother tells her she looks like a princess, her daughter might choose activities thought to be more feminine.” This is an example of the self-schema becoming self-perpetuating when the person at hand chooses an activity that was based on an expectation rather than their desires.

Schema Therapy

Schema therapy was founded by Jeffrey Young and represents a development of cognitive behavioural therapy (CBT) specifically for treating personality disorders. Early maladaptive schemata are described by Young as broad and pervasive themes or patterns made up of memories, feelings, sensations, and thoughts regarding oneself and one’s relationships with others; they can be a contributing factor to treatment outcomes of mental disorders and the maintenance of ideas, beliefs, and behaviours towards oneself and others. They are considered to develop during childhood or adolescence, and to be dysfunctional in that they lead to self-defeating behaviour. Examples include schemata of abandonment/instability, mistrust/abuse, emotional deprivation, and defectiveness/shame.

Schema therapy blends CBT with elements of Gestalt therapy, object relations, constructivist and psychoanalytic therapies in order to treat the characterological difficulties which both constitute personality disorders and underlie many of the chronic depressive or anxiety-involving symptoms which present in the clinic. Young said that CBT may be an effective treatment for presenting symptoms, but without the conceptual or clinical resources for tackling the underlying structures (maladaptive schemata) which consistently organize the patient’s experience, the patient is likely to lapse back into unhelpful modes of relating to others and attempting to meet their needs. Young focused on pulling from different therapies equally when developing schema therapy. Cognitive behavioural methods work to increase the availability and strength of adaptive schemata while reducing the maladaptive ones. This may involve identifying an existing schema and then identifying an alternative to replace it. Difficulties arise because these types of schema often exist in absolutes; modification then requires the replacement to be in absolutes as well, otherwise the initial belief may persist. The difference between cognitive behavioural therapy and schema therapy, according to Young, is that the latter “emphasizes lifelong patterns, affective change techniques, and the therapeutic relationship, with special emphasis on limited reparenting”. He suggested that this therapy would be ideal for clients with difficult and chronic psychological disorders, such as eating disorders and personality disorders. He has also had success with this therapy in relation to depression and substance abuse.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Schema_(psychology) >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Reachback (in Psychotherapy)?

Introduction

Reachback is a psychological term coined by Eric Berne. Reachback, in Berne’s lexicon, is the period of time during which an impending event begins to influence an individual’s behaviour, including their level of stress.

Berne’s Formulation

Berne, the founder of transactional analysis, coined the term in his book What Do You Say After You Say Hello?. He considered that reachback “is most dramatically seen in people with phobias whose whole functioning may be disturbed for days ahead at the prospect of getting into a feared situation, such as a medical examination or a journey.”

For instance, a person expecting to take a trip on Monday starts getting irritable and worried on Friday. He may start trying to clear his overflowing inbox, cut short his evening relaxation, start preparing and packing for the trip, worry about what clothes to take, and so on. However, “for people who have unusual difficulties with anticipatory stress, the reach-back of an event such as a major vacation trip or a wedding may be several weeks.”

Berne differentiates reachback from forward planning, which is done to mitigate negative effects such as reachback.

The flip side of reachback is afterburn, which is defined as the effect a past atypical event continues to have on a person’s schedule, activities and mental state even after it is materially over. Berne considered that “each person has a sort of standard ‘reachback time’ and ‘afterburn time’ for various kinds of situations […] domestic quarrels, examination or hearings, work deadlines, travel, visits from or to relatives, etc.”

Prevention

Following William Osler’s prescription for equable living day-by-day, Berne explained that “living day by day means living a well-planned and organized life, and sleeping well between each day, so that the day ends without reachback, since tomorrow is well planned, and begins without afterburn, since yesterday was well-organized”.

Defence Usage

The term reachback is also used by the US Department of Defense to mean the process of obtaining products, services, applications, forces, equipment, or material from organisations that are not forward deployed.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Reachback >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Afterburn (in Psychotherapy)?

Introduction

Afterburn is a psychological term coined by Eric Berne, who defined it as “the period of time before a past event is assimilated”.

Berne’s Formulation

Eric Berne, the founding father of transactional analysis, used the term “afterburn” to indicate the effect an atypical past event continues to exert on a person’s daily schedule, activities and mental state even after it is over: to “those occasions when it disturbs normal patterns for an appreciable period, rather than being assimilated into them or excluded from them by repression and other psychological mechanisms”.

For Berne, afterburn is the flip side of reachback, which is the effect that the event, thanks to the stress of anticipation, has on the person’s life before it. He considered that “in most cases one or the other can be tolerated without serious consequences. It can be dangerous for almost anyone, however, if the after-burn of the last event overlaps with the reach-back from the next … this is a good definition of overwork”.

Remedies

Berne considered that “dreaming is probably the normal mechanism for adjusting after-burn and reach-back”, but that sex and holidays were also useful remedies. “Most normal after-burns and reach-backs run their courses in about six days, so that a two-week vacation allows the superficial after-burns to burn out, after which there are a few days of carefree living. …For the assimilation of more chronic after-burns and deeper, repressed reach-backs, however, a vacation of at least six weeks is probably necessary.”

Other Views

In terms of exam stress management, “afterburn is the time needed after the exam to… set it to rest”, a period of “afterburn time… [with] a host of unexpressed feelings and incomplete tasks”.

“Referring to soldiers recently returned from Iraq, Sara Corbett described this type of delayed reaction as ‘psychological afterburn’… [quoting soldiers who spoke of it to the effect of:] ‘My body’s here, but my mind is there.'”

With respect to therapy, some consider that “you are not ending well when you find that you are thinking about the person’s problems after sessions. This is called afterburn”. Others however see opportunity in such occasions: “You’re sorting out your countertransference, you’re owning your projections, you’re separating out you from the family”—in short, one is usefully employing “those lagging emotions that afterburn following a session”.

Goffman

Erving Goffman has a related but rather different usage of the term “to refer to a sotto voce comment, one meant not to be a ratified part of an encounter, an afterburn … a remonstrance conveyed collusively by virtue of the fact that its targets are in the process of leaving the field”.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Afterburn_(psychotherapy) >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Meant by “Fake It Till You Make It”?

Introduction

“Fake it till you make it” (or “Fake it until you make it”) is an aphorism that suggests that by imitating confidence, competence, and an optimistic mindset, a person can realise those qualities in their real life and achieve the results they seek.

The phrase is first attested some time before 1973. The earliest reference to a similar phrase occurs in the Simon & Garfunkel song “Fakin’ It”, released as a single in 1967 and included on their 1968 album Bookends. Simon sings, “And I know I’m fakin’ it, I’m not really makin’ it.”

Similar advice has been offered by a number of writers over time:

Action seems to follow feeling, but really action and feeling go together; and by regulating the action, which is under the more direct control of the will, we can indirectly regulate the feeling, which is not. Thus the sovereign voluntary path to cheerfulness, if our spontaneous cheerfulness be lost, is to sit up cheerfully, to look round cheerfully, and to act and speak as if cheerfulness were already there. If such conduct does not make you soon feel cheerful, nothing else on that occasion can. So to feel brave, act as if we were brave, use all our will to that end, and a courage-fit will very likely replace the fit of fear. (William James, “The Gospel of Relaxation”, On Vital Reserves, 1922).

In the law of attraction movement, “act as if you already have it”, or simply “act as if”, is a central concept:

How do you get yourself to a point of believing? Start make-believing. Be like a child, and make-believe. Act as if you have it already. As you make-believe, you will begin to believe you have received. (Rhonda Byrne, The Secret, 2006).

In Psychology

In the 1920s, Alfred Adler developed a therapeutic technique that he called “acting as if”, asserting that “if you want a quality, act as if you already have it”. This strategy gave his clients an opportunity to practice alternatives to dysfunctional behaviours. Adler’s method is still used today and is often described as role play.

“Faking it till you make it” is a psychological tool discussed in neuroscientific research. A 1988 experiment by Fritz Strack claimed to show that mood can be improved by holding a pen between the teeth to force a smile, but a later experiment failed to replicate it, and Strack was awarded the Ig Nobel Prize for psychology in 2019 as a result. A 2022 study of strategies to counter emotional distress found that forced smiling was no more effective than forced neutral expressions and other strategies of emotional regulation.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Fake_it_till_you_make_it >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Emotional Detachment?

Introduction

In psychology, emotional detachment, also known as emotional blunting, is a condition or state in which a person lacks emotional connectivity to others, whether due to an unwanted circumstance or as a positive means to cope with anxiety.

Such a coping strategy, also known as emotion-focused coping, is used when avoiding certain situations that might trigger anxiety. It refers to the evasion of emotional connections. Emotional detachment may be a temporary reaction to a stressful situation, or a chronic condition such as depersonalisation-derealisation disorder. It may also be caused by certain antidepressants. Emotional blunting, also known as reduced affect display, is one of the negative symptoms of schizophrenia.

Signs and Symptoms

Emotional detachment may not be as outwardly obvious as other psychiatric symptoms. Patients diagnosed with emotional detachment have reduced ability to express emotion, to empathize with others or to form powerful emotional connections. Patients are also at an increased risk for many anxiety and stress disorders. This can lead to difficulties in creating and maintaining personal relationships. The person may move elsewhere in their mind and appear preoccupied or “not entirely present”, or they may seem fully present but exhibit purely intellectual behaviour when emotional behaviour would be appropriate. They may have a hard time being a loving family member, or they may avoid activities, places, and people associated with past traumas. Their dissociation can lead to lack of attention and, hence, to memory problems and in extreme cases, amnesia. In some cases, they present an extreme difficulty in giving or receiving empathy which can be related to the spectrum of narcissistic personality disorder. Additionally, emotional blunting is negatively correlated with remission quality. The negative symptoms are far less likely to disappear when a patient is experiencing emotional blunting.

In a study of children ages 4–12, traits of aggression and antisocial behaviour were found to be correlated with emotional detachment. Researchers determined that these could be early signs of emotional detachment, and suggested that parents and clinicians evaluate children with these traits for more serious behavioural problems in order to prevent larger issues (such as emotional detachment) in the future.

Among patients treated for depression, higher emotional blunting was correlated with higher scores on the Hospital Anxiety and Depression Scale (HADS) and with being male (though the frequency difference was slight).

Emotional detachment in small amounts is normal. For example, being able to emotionally and psychologically detach from work when one is not in the workplace is a normal behaviour. Emotional detachment becomes an issue when it impairs a person’s ability to function on a day-to-day level.

Scales

While some depression severity scales provide insight into levels of emotional blunting, many symptoms are not adequately covered. An attempt to resolve this issue is the Oxford Depression Questionnaire (ODQ), a scale designed for the full assessment of emotional blunting symptoms. The ODQ is intended specifically for patients with Major Depressive Disorder (MDD) and assesses individual levels of emotional blunting.

Another scale, known as the Oxford Questionnaire on the Emotional Side-Effects of Antidepressants (OQESA), was developed using qualitative methods.

Causes

Emotional detachment and/or emotional blunting have multiple causes, which can vary from person to person. Emotional detachment or emotional blunting often arises from adverse childhood experiences, such as physical, sexual or emotional abuse. Emotional detachment is a maladaptive coping mechanism for trauma, especially in young children who have not yet developed other coping mechanisms. Emotional detachment can also result from psychological trauma in adulthood, such as abuse, or from traumatic experiences such as war or automobile accidents.

Emotional blunting is often caused by antidepressants, in particular the selective serotonin reuptake inhibitors (SSRIs) used in MDD and often as an add-on treatment in other psychiatric disorders. Individuals with MDD usually experience emotional blunting as well; it is a symptom of MDD, as depression is negatively correlated with both positive and negative emotional experiences.

Schizophrenia often occurs with negative symptoms, extrapyramidal signs (EPS), and depression; the latter overlaps with emotional blunting and is considered a core part of its presentation. Schizophrenia in general causes abnormalities in individuals’ emotional understanding, all of which are clinically considered symptoms of emotional blunting. Individuals with schizophrenia report reduced emotional experience, display fewer emotional expressions, and fail to recognise the emotional experiences and/or expressions of other individuals.

The changes in fronto-limbic activity in conjunction with depression succeeding a left hemisphere basal ganglia stroke (LBG stroke) may contribute to emotional blunting. LBG strokes are associated with depression and often caused by disorders of the basal ganglia (BG). Such disorders alter the emotional perception and experiences of the patient.

In many cases, people with eating disorders (EDs) show signs of emotional detachment. This is because many of the circumstances that lead to an eating disorder are the same as those that lead to emotional detachment; for example, people with EDs have often experienced childhood abuse. Eating disorders are themselves a maladaptive coping mechanism, and to cope with the effects of an eating disorder, people may turn to emotional detachment.

Bereavement, or losing a loved one, can also cause emotional detachment.

The prevalence of emotional blunting is not fully known.

Behavioural Mechanism

Emotional detachment is a maladaptive coping mechanism, which allows a person to react calmly to highly emotional circumstances. Emotional detachment in this sense is a decision to avoid engaging emotional connections, rather than an inability or difficulty in doing so, typically for personal, social, or other reasons. In this sense it can allow people to maintain boundaries, and avoid undesired impact by or upon others, related to emotional demands. As such it is a deliberate mental attitude which avoids engaging the emotions of others.

This detachment does not necessarily mean avoiding empathy; rather, it allows the person to rationally choose whether or not to be overwhelmed or manipulated by such feelings. Examples of its use in a positive sense include emotional boundary management, in which a person limits emotional engagement with people who are emotionally over-demanding, such as difficult co-workers or relatives, or adopts detachment in order to help others more effectively.

Emotional detachment can also take the form of “emotional numbing” or “emotional blunting”, i.e. dissociation, depersonalisation or, in its chronic form, depersonalisation disorder. This type of emotional numbing or blunting is a disconnection from emotion; it is frequently used as a coping or survival skill during traumatic childhood events such as abuse or severe neglect. After continual use, this coping mechanism can become a response to daily stresses.

Emotional detachment may allow acts of extreme cruelty and abuse, supported by the decision not to connect empathically with the person concerned. Social ostracism, such as shunning and parental alienation, is another example in which the decision to shut out a person creates psychological trauma for the shunned party.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Emotional_detachment >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is the Journal of Personality and Social Psychology?

Introduction

The Journal of Personality and Social Psychology is a monthly peer-reviewed scientific journal published by the American Psychological Association that was established in 1965. It covers the fields of social and personality psychology. The editors-in-chief are Shinobu Kitayama (University of Michigan; Attitudes and Social Cognition Section), Colin Wayne Leach (Barnard College; Interpersonal Relations and Group Processes Section), and Richard E. Lucas (Michigan State University; Personality Processes and Individual Differences Section).

The journal has implemented the Transparency and Openness Promotion (TOP) Guidelines. The TOP Guidelines provide structure to research planning and reporting and aim to make research more transparent, accessible, and reproducible.

Contents

The journal’s focus is on empirical research reports; however, specialized theoretical, methodological, and review papers are also published. For example, the journal’s most highly cited paper, cited over 90,000 times, is a statistical methods paper discussing mediation and moderation.

Articles typically involve a lengthy introduction and literature review, followed by several related studies that explore different aspects of a theory or test multiple competing hypotheses. Some researchers see the multiple-experiments requirement as an excessive burden that delays the publication of valuable work, but this requirement also helps maintain the impression that research that is published in JPSP has been thoroughly vetted and is less likely to be the result of a type I error or an unexplored confound.

The journal is divided into three independently edited sections. Attitudes and Social Cognition addresses those domains of social behaviour in which cognition plays a major role, including the interface of cognition with overt behaviour, affect, and motivation. Interpersonal Relations and Group Processes focuses on psychological and structural features of interaction in dyads and groups. Personality Processes and Individual Differences publishes research on all aspects of personality psychology. It includes studies of individual differences and basic processes in behaviour, emotions, coping, health, motivation, and other phenomena that reflect personality.

Replicability

JPSP is one of the journals analysed in the Open Science Collaboration’s Reproducibility Project, after its publication of questionable research on mental time travel.

The journal refused to publish refuting replications performed by Ritchie’s team of an earlier article it published in 2010, which had suggested that psychic abilities (backward causality) may have been involved.

In Popular Culture

Non-fiction author Malcolm Gladwell writes frequently about findings that are reported in the journal. Gladwell, upon being asked where he would like to be buried, replied “I’d like to be buried in the current-periodicals room, maybe next to the unbound volumes of the Journal of Personality and Social Psychology (my favorite journal).”

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Journal_of_Personality_and_Social_Psychology >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Reparation (In Psychology)?

Introduction

The term reparation was used by Melanie Klein (1921), an Austrian-British author and psychoanalyst, to indicate a psychological process of making mental repairs to a damaged internal world. In object relations theory, it represents a key part of the movement from the paranoid-schizoid position to the depressive position, with the pain of the latter helping to fuel the urge to reparation.

Melanie Klein

Melanie Klein considered the ability to recognise our destructive impulses towards those we love, and to make reparation for the damage we have caused them, to be an essential part of mental health. A key condition for that to take place is the recognition of one’s separateness from one’s parents, which makes possible the reparative attempt to restore their inner representations, however damaged they may be felt to be.

Acceptance of reality, inner and outer, forms a major part of the process and involves both abandoning fantasies of omnipotence and accepting the independent existence of one’s objects of attachment.

Where the damage done to the internal world is felt by a patient to be extreme, however, the task of reparation may seem too great, which is one of the obstacles facing the analytic attempt at cure.

Manic Reparation

Kleinian thought distinguishes between true reparation and manic reparation, the latter being driven by guilt rather than overcoming it. Manic reparation denies the pain and concern of feeling guilty by using magical methods of repair which maintain omnipotent control of the object in question, and refuse to allow it its separate existence. Thus manic reparation has to be endlessly repeated, since success would free the object from the manic person’s (contemptuous) power.

Donald Winnicott

Refer to Donald Winnicott.

Donald Winnicott made his own distinctive contribution to the role of reparation in the “personalising” of the individual, the move from the ruthless use of the external object to a sense of concern. Winnicott focused on the way that, at a certain stage of development, a feeling of guilt or concern begins to appear after the wholehearted instinctual experience of a feed. But once the reparative gesture—a smile, a gift—has been successfully acknowledged by the mother, Winnicott writes: “The breast (body, mother) is now mended and the day’s work is done. Tomorrow’s instincts can be awaited with limited fear”. The child’s contribution is a way of accepting the debt owed to the mother for their survival and their participation in the work of reparation. If, on the other hand, the reparative gesture is not accepted, the infant is left with a feeling of depression or meaninglessness.

A similar dynamic may later appear between patient and analyst, with the making of progress being offered as a means of reparation.

Art

Kleinians considered that artistic creation was driven by the phantasy of repairing the loved object (mother).

Marion Milner, in the Independent tradition, also saw art as a way of both symbolising and enacting inner reparation, but she was criticised by Kleinians for giving too large a role to the omnipotent feelings of the artist in reparation.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Reparation_(psychoanalysis) >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.

What is Immersion Therapy?

Introduction

Immersion therapy is a psychological technique which allows a patient to overcome fears (phobias), but it can also be used for anxiety and panic disorders.

Refer to Flooding.

Outline

First a fear-hierarchy is created: the patient is asked a series of questions to determine the level of discomfort the fear causes in various conditions. Can the patient talk about the object of their fear, can the patient tolerate a picture of it or watch a movie which has the object of their fear, can they be in the same room with the object of their fear, and/or can they be in physical contact with it?

Once these questions have been ordered beginning with least discomfort to most discomfort, the patient is taught a relaxation exercise. Such an exercise might be tensing all the muscles in the patient’s body then relaxing them and saying “relax”, and then repeating this process until the patient is calm.

Next, the patient is exposed to the object of their fear in a condition with which they are most comfortable – such as merely talking about the object of their fear. Then, while in such an environment, the patient performs the relaxation exercise until they are comfortable at that level.

After that, the patient moves up the hierarchy to the next condition, such as a picture or movie of the object of fear, and then to the next level in the hierarchy and so on until the patient is able to cope with the fear directly.
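For illustration only, the graded progression described above can be sketched as a short loop. The following Python snippet is not part of the source material: the hierarchy levels, the numeric discomfort scale, the comfort threshold, and the relaxation step are all hypothetical simplifications, used only to show the structure of the procedure (order the hierarchy, relax until comfortable at one level, then advance to the next).

```python
# Illustrative sketch only (not from the source): the hierarchy, the
# discomfort scale, and the relaxation step are hypothetical placeholders.

FEAR_HIERARCHY = [
    "talk about the feared object",
    "look at a picture of it",
    "watch a film featuring it",
    "be in the same room as it",
    "make physical contact with it",
]  # ordered from least to most discomfort

def relaxation_exercise(discomfort: int) -> int:
    """Stand-in for a tense-and-release relaxation cycle; assume each
    repetition lowers the patient's reported discomfort by one point."""
    return max(discomfort - 1, 0)

def graded_exposure(initial_discomfort: int = 5, comfort_threshold: int = 1) -> None:
    """Walk up the hierarchy, repeating the relaxation exercise at each
    level until the patient is comfortable, then move to the next level."""
    for level in FEAR_HIERARCHY:
        discomfort = initial_discomfort
        print(f"Exposure level: {level}")
        while discomfort > comfort_threshold:
            discomfort = relaxation_exercise(discomfort)
        print(f"  comfortable (discomfort = {discomfort}); advancing")

if __name__ == "__main__":
    graded_exposure()
```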

This therapy can create a safe space in which individuals become comfortable with their fears, anxieties or traumatic experiences. It is closely linked to exposure therapy, as the patient is immersed in an experience until they eventually become much more relaxed in it.

Although it may take several sessions to achieve a resolution, the technique is regarded as successful. Many research studies are investigating whether immersion therapy goals can be achieved through virtual, computer-based programmes, although results are not yet conclusive.

‘Immersive therapy through virtual reality represents a novel strategy used in psychological interventions, but there is still a need to strengthen the evidence on its effects on health professionals’ mental health’ (Linares-Chamorro et al., 2022).

Virtual Therapy

As mentioned previously, immersion therapy can take the form of virtual reality (VR) therapy. This usually involves transporting the user to a simulated environment that recreates a realistic, real-life setting, combining video, audio, haptic and motion sensory input to create an immersive experience. Virtual therapy may use videos in either a 2D or 3D immersion using a head-mounted display (Hodges et al., 2002).

There have been many studies of this type of therapy for combatting anxiety and phobias, such as acrophobia. The approach assesses a patient’s cognitive, emotional and physiological functioning, and can be useful for both the prevention and treatment of psychiatric conditions. It goes beyond simple exposure therapy, as it can offer a more comprehensive treatment than other interventions. A study conducted in Olot, Spain examined levels of anxiety and well-being among female hospital staff. A sample of 35 female health professionals undertook immersive therapy for 8 weeks; anxiety was measured with the Hamilton scale and well-being with the Eudemon scale. The immersive therapy was delivered through virtual reality, using a projection device with light and sound control to create an environment that enhanced self-awareness as an approach to anxiety management. Results suggested statistically and clinically significant improvements in both anxiety and well-being.

Another study, conducted in the UK, looked at treating acrophobia. Researchers recruited 100 adults with a fear of heights, defined as a score of more than 29 on the heights interpretation questionnaire. Participants were randomly allocated by computer either to an automated VR programme, delivered in roughly six 30-minute sessions about 2-3 times a week over 2 weeks, or to a control group that received no treatment. A virtual coach worked alongside the VR programme, saying things like “We’re discovering what happens when we venture into a situation we’d normally try to avoid.” The aim of the virtual coach was to put participants’ expectations to the test as they experienced situations in which they would usually feel anxious. The tasks then began, with participants undergoing different levels of height in different activities. Overall, participants in the VR group showed a greater reduction in fear of heights by the end of the treatment than those in the control group.

Although this is evidence suggesting that virtual, computer-based immersion therapy works, research in this area of psychology is scarce, so more testing needs to occur before this type of technology can be fully implemented.

Advantages

Immersive virtual reality has been identified as a potentially revolutionary tool for the psychological treatment of mental disorders, one that may gradually be adopted in regular clinical practice in the coming years (Geraets et al., 2021). Virtual reality has evolved significantly over the last few years due to advancements in technology, underscoring the constant need for new research.

Immersive virtual reality therapy could significantly enhance the effectiveness of psychological interventions. Treatments can be given automatically, without a therapist’s physical presence, resulting in a lower-cost route to care. Another benefit of VR is that it can offer ‘direct therapeutic intervention’, which is often lacking in conventional clinical settings, allowing treatments to be delivered faster and more efficiently. Patients can be placed in simulated environments while wearing a VR headset, teaching them how to react more effectively. Additionally, patients are more open to experimenting with new therapies because they know they are in a secure simulated setting, in which exposure to the stimuli can occur in stages rather than all at once.

VR has been used successfully over the past 25 years for assessment, understanding, and treatment of mental health disorders. The increased accessibility and affordability of VR mean that this technique is now ready to move from specialist laboratories into clinics (Freeman et al., 2018).

Immersive therapy can provide a distinctive and engaging experience that allows people to overcome fears, gain self-confidence and develop coping strategies. It allows people to experience real-life situations in a controlled and safe setting. It is also much more interactive: rather than just talking about their phobia or anxiety, patients can effectively relive it and overcome it, generating a greater sense of self-confidence, reducing feelings of anxiety and helping them manage their feelings during stressful situations.

This page is based on the copyrighted Wikipedia article < https://en.wikipedia.org/wiki/Immersion_therapy >; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.