
The ‘Triune Brain’ theory by neuroscientist Paul MacLean — an evolutionary perspective

The Concept of the "Triune Brain"

In the 1960s, American neuroscientist Paul MacLean formulated the ‘Triune Brain’ model, which divides the human brain into three distinct regions organized in a hierarchy that reflects their evolutionary development. The three regions are as follows:

  1. Reptilian or Primal Brain (Basal Ganglia)
  2. Paleomammalian or Emotional Brain (Limbic System)
  3. Neomammalian or Rational Brain (Neocortex)

At the most basic level, the brainstem (Primal Brain) helps us identify familiar and unfamiliar things. Familiar things are usually seen as safe and preferable, while unfamiliar things are treated with suspicion until we have assessed them and the context in which they appear. For this reason, designers, advertisers, and anyone else involved in selling products tend to use familiarity as a means of evoking pleasant emotions.


Inattentional Blindness: What else are we missing?

Inattentional Blindness is the failure to notice an unexpected object in a visual display.

Cognitive psychology is an approach to understanding human cognition by observing the behaviour of people performing cognitive tasks. It is concerned with the internal processes involved in making sense of our environment and deciding what behaviour is appropriate. These processes include attention, perception, learning, memory, language, problem-solving, reasoning, and thinking.


The most famous demonstration of inattentional blindness is the Simons and Chabris (1999) experiment, in which viewers watch two groups of people, one dressed in white and one in black, passing a basketball to one another. Partway through, a woman in a gorilla suit enters the frame, remains visible for nine seconds, then walks off. Roughly 50% of observers failed to notice the gorilla. In all honesty, when I saw the video for the first time at university, I did not see the gorilla either.

In everyday life, we are often aware of changes in our visual environment because we detect the motion cues that accompany them. The gorilla findings suggest, however, that motion alone is not enough to guarantee we notice a change. An obvious explanation is that the visual representations we form are sparse and incomplete because they depend on our limited attentional focus. Simons and Rensink (2005) point out that there are other explanations: detailed and complete representations may exist initially but may either decay rapidly or be overwritten by a subsequent stimulus. It should also be said that in the gorilla experiment, observers are instructed to count the passes, so attention is deliberately compromised. The real-life implications of inattentional blindness reveal the role of selective attention in human perception: it is a consequence of the critical process that allows us to remain focused on important aspects of our world without distraction from seemingly irrelevant objects and events.

Being present in the moment (mindfulness) can help our attention. Distractions such as mobile phones, advertising material, other people, “multi-tasking”, and internal emotional states all contribute to our lack of focus. Think of a magician’s ability to manipulate an audience’s attention in order to prevent them from seeing how a trick is performed. There are also safety implications, as you would know … if you’ve been paying attention, haha.

Just food for thought, my readers, and friends 🙂

Predicting behaviour: Social Psychological Models of Behaviour

Social psychological models of behaviour attempt to explain why individuals act the way they do in various social contexts. These models integrate individual, interpersonal, and societal factors to provide insights into behaviour. Here’s an overview of some key models:

1. Theory of Planned Behaviour (TPB) proposes that behaviour is influenced by:

– Attitudes toward the behaviour

– Subjective norms (perceptions of others’ approval)

– Perceived behavioural control (i.e., confidence in one’s ability to perform the behaviour [self-efficacy])

2. Social Cognitive Theory (SCT) suggests that behaviour is the result of:

– Reciprocal interaction between personal factors (beliefs, attitudes), environmental factors (social norms), and behaviour itself

– Concepts like self-efficacy (belief in one’s ability) play a major role.

3. The Health Belief Model (HBM) was designed to predict health-related behaviours. It proposes that behaviour is driven by perceived:

– Susceptibility (risk of harm)

– Severity (consequences of harm)

– Benefits (advantages of action)

– Barriers (obstacles to action)

4. Cognitive Dissonance Theory explains how people strive for consistency between their beliefs, attitudes, and behaviours. When inconsistency arises, they feel dissonance (mental discomfort) and are motivated to reduce it by changing their attitudes or actions.

5. Social Identity Theory examines how individuals define themselves within social groups. Behaviour is influenced by group membership, including in-group favouritism and out-group bias.

6. Attribution Theory focuses on how people explain their own and others’ behaviours, attributing them either to internal (dispositional) or external (situational) factors. For example, people commonly attribute negative outcomes in their lives to external rather than internal factors.

7. Elaboration Likelihood Model (ELM) explains how people process persuasive messages and what determines whether those messages will change attitudes or behaviour. It’s often applied in areas like marketing, communication, and public health campaigns. The ELM identifies two primary routes through which persuasion can occur:

– Central Route: this route involves deep, thoughtful consideration of the content and logic of a message. People are more likely to take the central route when they are motivated to process the message (e.g., the topic is personally relevant or important to them) and they can understand and evaluate the arguments (e.g., they aren’t distracted, and they have enough knowledge about the subject). Persuasion through the central route tends to result in long-lasting attitude change that is resistant to counterarguments. Example: A person researching the pros and cons of electric cars before deciding to buy one.

– Peripheral Route: this route relies on superficial cues or heuristics (mental shortcuts) rather than the message’s content. People are more likely to take the peripheral route when they are not highly motivated or lack the ability to process the message deeply, and when they focus on external factors like the attractiveness or credibility of the speaker, emotional appeals, or catchy slogans. Persuasion through this route tends to result in temporary attitude change that is less resistant to counterarguments. Example: A person choosing a product because their favourite celebrity endorsed it.

8. Self-Determination Theory (SDT) distinguishes intrinsic from extrinsic motivation, emphasizing the role of intrinsic motivation (doing something for its inherent satisfaction) over extrinsic motivation, which is driven by external rewards or pressures. It suggests that behaviour is influenced by the need for:

– Autonomy: control over one’s actions. When people perceive they have a choice and are acting in alignment with their values, their motivation and satisfaction increase.

– Competence: the need to feel effective, capable, and successful in achieving desired outcomes. People are motivated when tasks challenge them at an appropriate level and provide opportunities for growth and mastery. Example: A gamer progressing through increasingly difficult levels, gaining skills and confidence along the way.

– Relatedness: the need to feel connected to others and experience a sense of belonging. Supportive relationships and positive social interactions enhance motivation and well-being. Example: Employees feeling a bond with their colleagues in a collaborative work environment.

9. Social Learning Theory proposes that behaviour is learned through observation and imitation. Role models and reinforcement play a key role in shaping actions.

10. Transtheoretical Model (Stages of Change) explains behaviour change as a process occurring in stages: precontemplation, contemplation (ambivalence), preparation, action, and maintenance.

These models provide frameworks to understand behaviours in contexts like health, decision-making, group dynamics, and social influence.

When “Trauma” Became a Buzzword: What We Gain and What We Lose When Clinical Language Goes Mainstream

Not long ago, words like “triggered,” “gaslighting,” “narcissist,” and “neurodivergent” belonged almost exclusively to therapists’ offices and psychology textbooks. Now they’re everywhere: in workplace training sessions, community organisations, TikTok comment sections, and casual conversation between friends over coffee. That shift has brought some genuinely important changes. But it’s also introduced some problems worth taking seriously.

The real wins

It would be unfair to dismiss this cultural shift outright. There are meaningful gains. More people today can identify manipulation, coercive dynamics, and emotional harm than any previous generation. Mental health conversations have been destigmatised in ways that would have been hard to imagine twenty years ago. People who were historically silenced, particularly those from marginalised communities, finally have language that validates their experiences and gives them permission to leave harmful situations. That’s progress.

But then there’s “concept creep” (pathologising the ordinary or “diagnostic inflation”)

Psychologists use the term “concept creep” to describe what happens when a word originally defined by strict clinical boundaries starts expanding to cover increasingly ordinary experiences. And that’s precisely what happened with “trauma.”

Clinically, trauma refers to experiences that overwhelm the nervous system: genuine threats to safety, severe harm, events that exceed a person’s capacity to cope. These days, the same word is regularly applied to being disagreed with, having a relationship end, receiving criticism, or simply feeling uncomfortable. Events like relationship breakdowns, job loss, or failure can be genuinely devastating, and for some people, under some circumstances, they absolutely do meet the clinical threshold for trauma. The distinction isn’t really about the type of event. It’s about the impact on the nervous system and the person’s capacity to integrate the experience.

When everything qualifies as trauma, the word stops doing useful work. Worse, it can actually undermine the resilience people need to navigate a genuinely difficult world.

The nervous system problem

Here’s where it gets important. In actual “clinical” trauma, the brain’s threat-response systems activate intensely. Memory processing is disrupted. The body mobilises for survival in ways that can leave lasting marks.

Discomfort is different. It involves real emotional activation and is not pleasant, but cognitive flexibility remains available. The capacity to think, reflect, and choose a response is still intact.

When people learn to label ordinary emotional discomfort as trauma activation, the consequences compound. If discomfort feels equivalent to harm, avoidance becomes a logical response. But avoidance prevents the gradual building of tolerance. And without tolerance, the world gets smaller.

Trauma as identity and social currency

In some online communities, there’s an uncomfortable dynamic worth naming: being “highly traumatised,” “chronically triggered,” or “deeply misunderstood” can confer real social benefits — belonging, validation, moral authority, and attention.

This doesn’t mean the experiences aren’t real. But when distress becomes central to someone’s identity, letting go of that distress can start to feel like losing themselves. Recovery, paradoxically, becomes threatening.

The fragility trap

In certain environments, fragility functions as a kind of protection. If I am highly sensitive, others must accommodate me. Challenge becomes inappropriate. Accountability becomes unsafe. The person is shielded, but the cost is enormous.

Resilience, both psychologically and biologically, develops through graded exposure to stress. We become capable through encountering difficulty, not by avoiding it. Systems that never face adaptive pressure weaken over time. This is simply how human development works.

Why this moment matters

Several things are converging right now. Social media algorithms reward extreme emotional narratives. Identity formation increasingly happens in digital spaces that amplify distress. Institutions have frequently overcorrected towards protective language in ways that, whatever their intentions, can inadvertently signal that discomfort is dangerous. And while there’s been important growth in awareness of systemic injustice, the corresponding emphasis on individual agency has sometimes been lost.

We’ve swung from “suppress your emotions entirely” to “your emotions define reality.” Neither extreme serves people well.

Holding the middle ground

Good support doesn’t dismiss people’s experiences; it deepens them. The distinction that matters is between trauma-informed practice and what might be called trauma-indulgent practice.

Trauma-informed means understanding that harm genuinely impacts nervous systems, avoiding shame, recognising power imbalances, and creating safety. It’s grounded and necessary.

Trauma-indulgent means treating all discomfort as harm, reinforcing avoidance, allowing emotional reasoning to override reality, and quietly removing personal responsibility from the picture. It feels compassionate in the moment but tends to leave people worse off over time.

In practice, holding the middle ground means validating what someone feels while gently asking whether something was truly unsafe or simply hard. It means acknowledging difficulty while also reinforcing capacity. It means introducing a reality that doesn’t get much airtime in online spaces — that we can’t always control how those around us speak or behave, but we can build our own tolerance and capacity to regulate.

The question underneath everything

There’s a deeper ethical question running through all of this: are we reducing suffering in the long run, or just distress in the short term?

Protecting people from discomfort today, if it increases fragility tomorrow, is not a kindness. But exposing people to challenge without adequate safety and support risks re-traumatising those with genuine wounds.

The balance isn’t complicated to describe, even if it’s genuinely difficult to hold: safety, combined with graduated exposure, combined with a genuine sense of agency.

Anyone supporting others through difficulty needs a calm nervous system, a high personal tolerance for distress, and the capacity to sit with being perceived as insensitive when holding a difficult but necessary line. Clear values and genuine boundaries aren’t optional extras — they’re the model.

The world remains economically uncertain, socially polarised, and digitally relentless. People will encounter disagreement, rejection, imperfect institutions, and others who handle things badly. Preparing people for a world where everyone is perfectly considerate is not just unrealistic — it’s a disservice.