Whose Thoughts Are They Anyway?
AI Persuasion, Impressionability, and the Fragile Mind
'We call it a co-pilot. But AI’s most powerful move isn’t that it’s overtly taking over — it’s that it’s making us think its ideas were ours to begin with.'
In early 2025, researchers from the University of Zurich ran an experiment that set off a wave of ethical concern. Without user consent, they created fake accounts and a data-scraping tool that combed through users’ posting histories to produce more convincing replies. The AI-generated comments posted to Reddit were deliberately crafted to mimic the tone and texture of everyday discourse. Their goal? Measure how convincingly AI could shift political opinions in the wild.
It worked.
“Users were significantly more likely to change their opinion when reading AI-generated posts compared to human-written ones,” the researchers reported.
The comments didn’t stick out as sensationalist or unusual. They just sounded plausible and even relatable. The real trick was that they were calibrated to each user’s tone, context, and prior activity for maximum effect. That’s the worrying part — these comments were designed to exploit one of our central psychological vulnerabilities.


Impressionability: The First of the Five I’s
In Humology, I introduced the 5 I’s framework as a lens for decoding how technology interacts with five universal human vulnerabilities. The first “I” is Impressionability: our deep-rooted tendency to be influenced, especially when information feels familiar, emotionally resonant, or socially validated.
But why are we so susceptible to being influenced?
- Cognitive ease: We trust what’s easy to process. If it flows, it feels true.
- Social proof: We mimic what appears popular or safe.
- Emotional salience: Feeling is the gateway to belief.

Humology frames this as the ‘zone of automaticity’ — a psychological state where our critical faculties go offline and we default to mental shortcuts.
More recently, another, more insidious influencer has entered the battle for our brains: source ambiguity. While we often assume that a message’s credibility comes from a clearly identified and trustworthy source, research shows the opposite can also hold true: when the source is ambiguous or absent, we may be less likely to scrutinize the message, especially if it’s familiar, fluent, or emotionally resonant. Ever shared a quote with no attribution that just felt right? That’s source ambiguity in action. It skips scrutiny and lands as truth.
We don’t just consume ideas online — we often inherit them, completely unchallenged, as if by osmosis.
We’ve been trained to assume that trust is earned through transparency and clear authorship. But in the cognitive twilight zone of passive media consumption, ambiguity can actually increase persuasion — because:
- The brain hates blanks. So when the source is missing, we project credibility onto it — often filling in the gap with a trusted voice, memory, or social cue.
- Familiarity feels true. When the message feels fluent and aligns with what we already believe, our brain rewards that harmony — regardless of where it came from.
- Less scrutiny = more absorption. When we know a message comes from ‘a marketer,’ ‘a bot,’ or ‘a known antagonist,’ we brace ourselves. But when we don’t know, we let our guard down.

What happens when we mix highly personalized AI into the recipe?
- AI-generated content often has no author, no context, no origin story.
- It’s often trained to reflect you: your tone, your beliefs, your cadence (so it’s bound to feel familiar, right?).
- And it shows up in environments designed for frictionless scrolling and sharing.
AI isn’t just a passive participant in the conversation; it’s invisibly persuasive. Injecting highly persuasive content into an ecosystem of fragmented attention, fuelled by impatience, is a recipe for collision.

As Fazio et al. (2015) found, “Fluently phrased misinformation, when repeated, becomes indistinguishable from truth.” Or, as Renée DiResta famously put it, “if you make it trend, you make it true.”
This flips the script on ethical design. It’s no longer enough to ask: “Is the message accurate?” We now need to ask: “How does source ambiguity shape belief?”
What’s Happening When We’re Being Influenced?
From a neuroscience perspective, the mechanism is surprisingly subtle. Impressionability thrives in moments of passive absorption — when we’re scrolling, grazing, lurking. These are states where the Default Mode Network (DMN) is active, and our gatekeepers are asleep at the wheel.
Overlay that with a dopamine spike — triggered by novelty, affirmation, or social approval — and you get a neurochemical feedback loop that reinforces belief shifts without us even noticing.
AI Doesn’t Need to Lie — It Just Needs to Mirror Us
This is the real kicker: AI doesn’t need to push a message, it just needs to echo one. That’s why the Reddit experiment landed so hard, and why Anthropic’s 2024 benchmark showed a clear correlation between model size and persuasive subtlety.
“Larger models like Claude 2 are measurably more persuasive — not because they argue harder, but because they frame better.”
— Anthropic, 2024
Persuasiveness scales with fluency. Not force.
The Ethical Fork in the Road
This is where we, as technologists, need to shift from analytical warnings to prescriptive action. We urgently need a library of ethical design patterns that can be easily implemented in the next wave of tools being coded by AI agents.
Will we design AI to exploit impressionability for clicks and compliance?
Or will we use it to protect impressionability as a core part of human autonomy?
Imagine if your AI assistant said:
“This post was designed to feel trustworthy — want to check its source? Would you assess this content differently if you knew it was designed with the intention to influence you?”
These aren’t utopian fantasies. They’re the design choices we make when we decide to design with intention, and with humans in mind.
Designing AI as a Cognitive Firewall
What would protective AI look like?
- Influence Alerts: “This phrasing uses emotional framing — want to explore alternative ways to consume this content?”
- Reflective Prompts: “Would your opinion change if this came from someone you distrust?”
- Context Revealers: “This headline gained traction from X event — want to see the original source or understand the full context?”
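To make the idea concrete, here is a minimal sketch of what such a protective layer might look like in code. Everything in it is illustrative: the marker list, the function names, and the report structure are assumptions for the sake of the example, not a real library or a finished detector.

```python
# A rough sketch of a "cognitive firewall" layer sitting between incoming
# content and the reader. All names (screen_content, InfluenceReport,
# EMOTIVE_MARKERS) are hypothetical; a real system would use proper
# classifiers and provenance data rather than a keyword list.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class InfluenceReport:
    emotional_framing: bool            # Influence Alert: emotionally loaded phrasing?
    source_known: bool                 # Context Revealer: is there a traceable origin?
    reflective_prompt: Optional[str]   # Reflective Prompt offered to the reader
    notes: List[str] = field(default_factory=list)


# Crude stand-in for an emotional-framing classifier.
EMOTIVE_MARKERS = {"shocking", "outrageous", "everyone knows", "wake up"}


def screen_content(text: str, source: Optional[str]) -> InfluenceReport:
    """Flag persuasion cues before content reaches the reader."""
    lowered = text.lower()
    emotional = any(marker in lowered for marker in EMOTIVE_MARKERS)

    report = InfluenceReport(
        emotional_framing=emotional,
        source_known=source is not None,
        reflective_prompt=None,
    )
    if emotional:
        report.notes.append(
            "This phrasing uses emotional framing - want to explore other takes on this topic?"
        )
    if source is None:
        report.notes.append("No identifiable source - want to check where this came from?")
        report.reflective_prompt = (
            "Would your opinion change if this came from someone you distrust?"
        )
    return report


if __name__ == "__main__":
    post = "Everyone knows this policy is a shocking failure. Share before it's buried!"
    for note in screen_content(post, source=None).notes:
        print(note)
```

The point isn’t the keyword list, which is deliberately crude; it’s the placement. The check runs before the content lands, and its output is a question handed back to the reader rather than a verdict handed down by the machine.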
Without intentionality, technology isn’t neutral. It’s persuasive by design. And when that persuasion is invisible, it becomes dangerous.
Final Thought: Influence Is Inevitable. Awareness Is Optional.
Being easily influenced isn’t a human flaw; it’s at the heart of how we learn, connect, and grow. But in an age of algorithmic mimicry, we must learn to ask, “Whose thought was that anyway?”
- The Reddit experiment showed that influence can be both effective and invisible.
- Anthropic’s benchmark shows it can be engineered.
Our challenge is to make it traceable, transparent, and humane.
Next time you catch yourself on auto-pilot… pause for a moment and simply be aware of what’s happening. That moment of reflection might be your only real defense.
Reference Sources:
Illusion of Truth Effect
This well-documented phenomenon (e.g., Fazio et al., 2015) reveals that repeated statements are judged as more truthful — regardless of the original source. If a message is fluent (easy to process), and no contradictory source is provided, we tend to accept it.
Petty & Cacioppo’s Elaboration Likelihood Model (ELM)
When people process information via the peripheral route — which is often the case during passive scrolling or multitasking — they rely on surface cues like tone, length, or imagery, rather than carefully evaluating the source. In source-ambiguous contexts (e.g., Reddit posts, social media memes), this opens the door for influence-by-fluency.
Chaiken’s Heuristic-Systematic Model
This framework suggests that people often default to heuristics like “if it feels right, it probably is” when cognitive effort is low. In the absence of a clear source, familiarity or emotional resonance can override critical thinking.







