AI bots are reportedly being trained to hijack the brain's attentional guidance system using "clickbait": personalising sales and marketing content, selling us our fantasies, and "othering" our demons. Every day we now consume industrial-scale quantities of undiluted political propaganda. This dystopian vision may not be new, but given the overwhelming power and current direction of the US military, and the sporadic anti-UK/EU rhetoric of Trump 2.0, it deserves serious consideration.
The cognitive science behind this technology connects the neuroscience of the brain with a digital record of our behaviour, creating a feedback loop between person and machine that bypasses the normal social construction of reality. It means our private property could be sequestered and used against us, in covert psychological operations we are not aware of, by any foreign power who holds the keys to this kingdom.
Powerful, unelected and unaccountable interests will soon be able to condition what we fear; bias our purchasing priorities; manipulate who we trust; destabilise our emotional responses to specific stimuli; reduce our impulse control; and determine how an entire demographic votes at election time. That is an extremely powerful proposition.
Here's how it could work. The salience network is a brain system (mainly involving the anterior insula and anterior cingulate cortex) that filters out noise to focus on what it judges important to the individual. Like an early warning system, it detects what is novel, significant, or emotionally relevant; switches attention toward those stimuli; and helps decide "this is your priority, focus on it now". It is essentially your brain's radar system.
“Clickbait” is content engineered to trigger that priority filter repeatedly and reliably. AI systems (especially those used by platforms like YouTube, Facebook, or TikTok) optimise content using huge datasets of user behaviour. Over time, they learn which stimuli most strongly activate salience detection and can target individual patterns of consumption in the following ways:
1. HYPER-OPTIMISED NOVELTY
Your salience network is highly sensitive to unexpected or new information. AI learns to trigger this by cuing you with "You won't believe…", "This changes everything", or "Scientists hate this…" type scenarios. These create prediction errors: your brain didn't expect this, so attentional resources spike. The result is compulsive clicking driven by curiosity tension.
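The prediction-error idea can be sketched in a few lines of code, purely as an illustration: a hypothetical `surprise` function scores a headline by how improbable its words are given what a user has already seen, so unfamiliar phrasing (the "you won't believe" register) scores higher than the familiar. The corpus and function names are assumptions, not any platform's real system.

```python
import math
from collections import Counter

# Hypothetical corpus of headlines a user has already seen.
seen_headlines = [
    "local council approves new bus route",
    "weather forecast mild with light rain",
    "school term dates announced for autumn",
]

# The user's "expected" word distribution, built from past exposure.
word_counts = Counter(w for h in seen_headlines for w in h.split())
total = sum(word_counts.values())

def surprise(headline: str) -> float:
    """Mean negative log-probability of the headline's words:
    rare or unseen words (prediction errors) raise the score."""
    words = headline.split()
    score = 0.0
    for w in words:
        # Laplace smoothing: unseen words get a small, non-zero probability.
        p = (word_counts[w] + 1) / (total + len(word_counts) + 1)
        score += -math.log(p)
    return score / len(words)

mundane = "local council approves new bus route"
clickbait = "you won't believe what scientists discovered"
assert surprise(clickbait) > surprise(mundane)
```

In this toy model, content made entirely of words the reader has never encountered maximises the surprise score, which is exactly the property clickbait headlines are engineered to have.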
2. EMOTIONAL AMPLIFICATION
The salience network can process quite complex emotional and interpersonal needs. These can be targeted, for example, by prioritising content that triggers moral outrage ("this crime is disgusting"), existential fear ("your career is at risk"), or wish-fulfilment fantasies ("this will make you rich/attractive/successful"). This can be very destabilising, especially if the emotional schema being targeted was laid down during a difficult childhood, which makes it harder to switch off. Strong emotion = stronger salience tagging = harder to ignore.
3. PERSONALISED SALIENCE MAPPING
AI doesn’t just use general clickbait: it builds your specific salience profile over time. It learns what you stop scrolling for; what you hover over; what you click, rewatch, or share. So instead of generic hooks, it creates individually tuned triggers that intentionally hook you into an addiction. The self-preserving instincts of your salience network are being “reverse-engineered” to serve the interests of a machine in real time.
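A minimal sketch of how such a profile might accumulate, under stated assumptions: the class name, signal types and weights below are all invented for illustration, not any platform's real API. Each observed behaviour nudges a per-topic weight, and the feed is then ranked by those learned weights.

```python
from collections import defaultdict

# Hypothetical engagement signals: dwelling, clicking and sharing
# count for more than a passing impression.
SIGNAL_WEIGHTS = {"impression": 0.0, "hover": 0.5, "click": 1.0, "share": 2.0}

class SalienceProfile:
    """Accumulates a per-user map of topic -> learned salience weight."""
    def __init__(self):
        self.weights = defaultdict(float)

    def observe(self, topic: str, signal: str):
        self.weights[topic] += SIGNAL_WEIGHTS[signal]

    def rank_feed(self, candidates):
        # Highest learned salience first: the feed drifts toward
        # whatever has historically captured this user's attention.
        return sorted(candidates, key=lambda t: self.weights[t], reverse=True)

profile = SalienceProfile()
for topic, signal in [("outrage", "click"), ("outrage", "share"),
                      ("gardening", "impression"), ("finance", "hover")]:
    profile.observe(topic, signal)

print(profile.rank_feed(["gardening", "finance", "outrage"]))
# "outrage" (weight 3.0) ranks above "finance" (0.5) and "gardening" (0.0)
```

The loop is self-reinforcing: whatever you engage with rises in the ranking, which earns it more engagement, which raises its weight further.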
4. INTERMITTENT REWARD (The Dopamine Loop)
Borrowing from behavioural psychology (and similar to gambling mechanics), most content presented to you is selected for its mediocrity, but occasionally something is highly rewarding, tempting you to go further. This is called a variable reward schedule, known to strongly reinforce behaviour. You keep clicking because the current item may be good, but the next one might be even better.
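A variable reward schedule can be simulated in a few lines. This is a sketch only: the `hit_rate` and the "jackpot"/"filler" labels are assumptions made for illustration.

```python
import random

def variable_ratio_feed(n_items: int, hit_rate: float = 0.1, seed: int = 42):
    """Simulate a feed where most items are mediocre ('filler') and an
    unpredictable minority are highly rewarding ('jackpot') -- the
    variable-ratio schedule known to reinforce behaviour most strongly."""
    rng = random.Random(seed)
    return ["jackpot" if rng.random() < hit_rate else "filler"
            for _ in range(n_items)]

feed = variable_ratio_feed(50)
# Rewards are sparse and arrive unpredictably: you cannot tell when the
# next one is due, so the cheapest strategy is simply to keep scrolling.
print(feed.count("jackpot"), "rewarding items out of", len(feed))
```

Because the reward interval is unpredictable, there is no natural stopping point, which is precisely why this schedule produces the most persistent behaviour in both slot machines and infinite-scroll feeds.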
5. COGNITIVE "OPEN LOOPS"
Clickbait often creates unresolved questions: “What happened next shocked everyone…”. This exploits the brain’s drive for closure: unfinished information sets up a persistent signal to finish the task, and too many unfinished tasks can overload the system with stress and anxiety. This is known as the Zeigarnik effect, but to me it sounds like the obsessional thoughts that drive compulsions in OCD. Your brain keeps allocating attention to the task, creating its own unmet needs, and you cannot escape the anxiety until the loop is closed.
6. THREAT PRIORITISATION BIAS
Humans have evolved to prioritise threats; this is intimately linked to our survival instincts and the fight-or-flight response. AI can surface conflict, scandal, and danger signals, the things news feeds have traditionally used to keep us glued to an unfolding story. Even if the relevance is low, your salience network lights up: “This might matter for survival, you'd better pay attention.”
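As a toy illustration, threat-biased ranking can be reduced to keyword weighting. Real systems learn these associations from engagement data rather than hand-coding them; the lexicon and function below are made up for the sketch.

```python
# Hypothetical threat lexicon (real systems would learn these weights).
THREAT_WORDS = {"crisis", "attack", "danger", "scandal", "collapse"}

def threat_score(headline: str) -> int:
    """Count threat-tagged words: a crude stand-in for the ranking boost
    that engagement-optimised feeds give to fear-inducing content."""
    return sum(1 for w in headline.lower().split() if w in THREAT_WORDS)

headlines = [
    "community garden opens this weekend",
    "crisis deepens as scandal triggers market collapse",
]
# Sort threatening content to the top, mimicking survival-instinct bait.
ranked = sorted(headlines, key=threat_score, reverse=True)
assert ranked[0].startswith("crisis")
```

Even this crude version shows the asymmetry: calm, low-threat content scores zero and sinks, regardless of how relevant it actually is to the reader's life.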
With these six strategies, AI is clearly being trained to mesmerise and exploit the consumer, while making access to you cheaper by replacing an army of industry workers. It is not just grabbing our attention; it is systematically overriding our political and personal priorities. This is no longer science fiction. It is a global phenomenon that is changing the way we perceive the world, and in the wrong hands it could risk a Third World War.
Our goals (work, relationships) are slowly being replaced by artificially selected stimuli that are designed to: fragment attention; increase compulsive checking; bias our feelings; distort our perceptions of reality (e.g., thinking the world is more dangerous or extreme than it is); and therein, re-shape our interpersonal relationships.
AI can unintentionally (or potentially deliberately) amplify childhood trauma-linked salience triggers: if someone is sensitive to rejection, for instance, social comparisons may get prioritised; if they have been a victim of crime, neighbourhood threats may get amplified; if they have suffered a serious illness, stories about healthcare may dominate.
Either way, the effect is to lock onto pre-existing neural sensitivities and strengthen them in the system's favour. This starts to resemble a mind-altering substance that increases the risk of developing a mental illness. If it interacts with underlying neurodevelopmental traits, it could predispose young people to addiction to toxic social media channels and a life of disengagement from mainstream education or employment.
Ultimately, cognitive science can be used for good or evil purposes. AI-driven clickbait works because it aligns almost perfectly with how the salience network evolved: it can be used to promote love, peace and interpersonal resilience, or to force attention onto our trauma histories and the causes of our interpersonal stress.