Wednesday, 1 April 2026

Cognitive Warfare in the Palm of your Hand

Have you ever been scrolling your social media accounts and wondered if what you were doing might be affecting your brain's attentional networks? Well, all indications currently are that it could be. 

AI bots are reportedly being trained to hijack the brain's attentional guidance system using "clickbait". This is increasingly being used not only to personalise sales and marketing content, but also to bias our perceptions, pitting "us" against "them" through political propaganda.

The cognitive science behind this technology connects the neuroscience of the brain with a digital record of our behaviour. This creates a feedback loop between person and machine, bypassing the normal social construction of reality, so that the personal data private companies hold about us can be used to distort it. In effect, our private property is being sequestered and used against us in covert psychological operations we are not aware of.

Powerful unelected and unaccountable interests will soon be able to condition what we fear; bias our purchasing priorities; manipulate who we trust; destabilise our emotional responses to specific stimuli; reduce our impulse control; and determine how an entire demographic votes at election time. That is an extremely powerful proposition.

Here's how it could work: the salience network is a brain system (mainly involving the anterior insula and anterior cingulate cortex) that detects what is important, novel, or emotionally relevant; switches attention toward those stimuli; and helps decide "this matters, focus on it now". It's essentially your brain's priority filter.

“Clickbait” is content engineered to trigger that priority filter repeatedly and reliably. AI systems (especially those used by platforms like YouTube, Facebook, or TikTok) optimize content using huge datasets of user behaviour. Over time, they learn which stimuli most strongly activate salience detection and can target individual patterns of consumption in the following ways:

1. HYPER-OPTIMISED NOVELTY
Your salience network is highly sensitive to unexpected or new information. AI learns to trigger this by cuing you with "You won't believe…", "This changes everything", or "Scientists hate this…" type hooks. These create prediction errors: your brain did not expect this, so attentional resources spike. The result is compulsive clicking driven by curiosity tension.
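As a toy illustration of the prediction-error idea, the sketch below ranks headlines by a crude "surprisal" score: words a baseline word-frequency model does not expect carry more surprisal, a simple stand-in for the unexpectedness that spikes attention. All frequencies here are invented for demonstration, not real language statistics.

```python
import math

# Invented baseline word frequencies (illustrative only).
BASELINE_FREQ = {
    "the": 0.05, "weather": 0.001, "today": 0.002, "report": 0.001,
    "you": 0.01, "wont": 0.0001, "believe": 0.0002, "this": 0.02,
    "changes": 0.0005, "everything": 0.0008,
}
UNSEEN = 1e-5  # probability assigned to words the model has never seen

def surprisal(headline: str) -> float:
    """Average negative log-probability per word: higher = more novel."""
    words = headline.lower().replace("'", "").split()
    return sum(-math.log(BASELINE_FREQ.get(w, UNSEEN)) for w in words) / len(words)

headlines = ["The weather report today", "You wont believe this"]
# The clickbait-style headline scores as more "surprising" and is ranked first.
ranked = sorted(headlines, key=surprisal, reverse=True)
```

In a real recommender the baseline would be learned from each user's history, so "novel" means novel to you specifically, not novel in general.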

2. EMOTIONAL AMPLIFICATION
The salience network can be made to target quite complex emotional needs. For example, AI can prioritise content that triggers outrage ("this is disgusting"), fear ("you're at risk"), or desire ("this will make you rich/attractive/successful"). This can be very destabilising, especially if the emotional schema being targeted was laid down during a difficult childhood, which makes it harder to switch off. Strong emotion = stronger salience tagging = harder to ignore.

3. PERSONALISED SALIENCE MAPPING
AI doesn’t just use general clickbait—it builds your specific salience profile over time. It learns what you stop scrolling for, what you hover over, and what you click, rewatch, or share. So instead of generic hooks, it creates individually tuned triggers that intentionally hook you into an addiction. Your salience network is being “reverse-engineered” by a machine in real time.
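A minimal sketch of what such a personalised profile might look like: engagement events are weighted by how strong a signal they are, accumulated per content tag, and new items are then scored against the result. The event weights and tag names are invented for illustration.

```python
from collections import defaultdict

# Invented weights: a share is a much stronger salience signal than a hover.
EVENT_WEIGHTS = {"pause": 1.0, "hover": 1.5, "click": 3.0, "rewatch": 4.0, "share": 5.0}

def build_profile(events):
    """events: list of (event_type, content_tags). Returns tag -> accumulated weight."""
    profile = defaultdict(float)
    for event_type, tags in events:
        for tag in tags:
            profile[tag] += EVENT_WEIGHTS.get(event_type, 0.0)
    return profile

def score(item_tags, profile):
    """Score a candidate item by how strongly its tags match the learned profile."""
    return sum(profile.get(tag, 0.0) for tag in item_tags)

events = [("click", ["outrage", "politics"]), ("share", ["outrage"]),
          ("hover", ["recipes"])]
profile = build_profile(events)
# An outrage-tagged item now outscores a neutral one for this user.
outrage_score = score({"outrage"}, profile)   # 8.0
recipes_score = score({"recipes"}, profile)   # 1.5
```

The point of the sketch is the asymmetry: a few high-weight interactions are enough to tilt every future ranking toward the same trigger.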

4. INTERMITTENT REWARD (The Dopamine Loop)
Borrowing from behavioural psychology (and similar to gambling mechanics), most content presented to you is selected for its mediocrity, but occasionally something is highly rewarding, tempting you to go further. This is a variable reward schedule, known to strongly reinforce behaviour: you keep clicking because the current item is good, but the next one might be better.
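The variable reward schedule can be simulated in a few lines. The jackpot rate and reward ranges below are arbitrary illustrative values; the essential feature is that high-value items arrive rarely and unpredictably among mostly dull ones.

```python
import random

random.seed(1)  # fixed seed so the simulated feed is reproducible

def next_item(jackpot_rate=0.1):
    """Return a reward value for the next feed item: usually low, occasionally high."""
    if random.random() < jackpot_rate:
        return random.uniform(8.0, 10.0)   # rare, highly rewarding item
    return random.uniform(0.0, 1.0)        # the mediocre default

rewards = [next_item() for _ in range(50)]
jackpots = sum(1 for r in rewards if r > 5.0)
# A handful of unpredictable "wins" scattered through mostly dull content is
# exactly the schedule that keeps a user pulling for the next item.
```

Because the user cannot predict which pull pays off, every scroll carries the possibility of a win, which is why variable schedules are harder to extinguish than fixed ones.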

5. COGNITIVE "OPEN LOOPS" 
Clickbait often creates unresolved questions: “What happened next shocked everyone…”. This exploits the brain’s drive for closure: unfinished information sets up a persistent salience signal, known as the Zeigarnik effect. Your brain keeps allocating attention to the task, creating its own unmet need, until the loop is closed.

6. THREAT PRIORITISATION BIAS
Humans have evolved to prioritise threats; this is intimately linked to our survival instincts and the fight-or-flight response. AI can surface conflict, scandal, and danger signals, the things news feeds have traditionally used to keep us glued to an unfolding story. Even when the relevance is low, your salience network lights up: “This might matter for survival, you'd better pay attention.”

Through these six principles, AI is clearly being trained to exploit the consumer while replacing an army of workers in the knowledge industry. It is not just grabbing our attention: it is systematically overriding our political and personal priorities. This is no longer science fiction. It is a global phenomenon that, in the wrong hands, risks a Third World War.

Our goals (work, relationships) are slowly being replaced by algorithmically selected stimuli. Over time, this could: fragment attention; increase compulsive checking; bias our perceptions of reality (e.g., thinking the world is more dangerous or extreme than it is); and re-shape our interpersonal relationships.

AI can unintentionally (or potentially deliberately) amplify trauma-linked salience triggers: if someone is sensitive to rejection, for instance, social comparisons may get prioritised; if they have been a victim of crime, neighbourhood police stories may get amplified; if they have suffered moral injury, legal injustices may dominate. 

Either way, the system locks onto pre-existing neural sensitivities and strengthens them in its favour. This starts to resemble a mind-altering substance that increases the risk of developing a mental illness. If it interacts with underlying neurodevelopmental traits, it could predispose young people to a life of disengagement from education or employment.

Ultimately, cognitive science can be used for good or for ill. AI-driven clickbait works because it aligns almost perfectly with how the salience network evolved. The same machinery could be used to promote love, peace and interpersonal resilience, or to force attention onto our trauma histories and the causes of our interpersonal stress.
