Whose Thoughts Are They Anyway?

Joanne Griffin • 6 May 2025

AI Persuasion, Impressionability, and the Fragile Mind

'We call it a co-pilot. But AI’s most powerful move isn’t that it’s overtly taking over — it’s that it’s making us think its ideas were ours to begin with.'

In early 2025, researchers from the University of Zurich ran an experiment that set off a wave of ethical concern. Without user consent, they created fake accounts and a data-scraping tool that combed through users’ posting histories to produce more convincing replies. The AI-generated comments they posted to Reddit were deliberately crafted to mimic the tone and texture of everyday discourse. Their goal? To measure how convincingly AI could shift political opinions in the wild.

It worked.

“Users were significantly more likely to change their opinion when reading AI-generated posts compared to human-written ones,” the research reported.

The comments didn’t stick out as sensationalist or unusual. They just sounded plausible and even relatable. The real trick was that they were calibrated to each user’s tone, context, and prior activity for maximum effect. That’s the worrying part — these comments were designed to exploit one of our central psychological vulnerabilities.




Example AI-generated comment. Image Credit: 404Media https://www.404media.co/researchers-secretly-ran-a-massive-unauthorized-ai-persuasion-experiment-on-reddit-users/


Impressionability: The First of the Five I’s


In Humology, I introduced the 5 I’s framework as a lens for decoding how technology interacts with five universal human vulnerabilities. The first “I” is Impressionability: our deep-rooted tendency to be influenced, especially when information feels familiar, emotionally resonant, or socially validated.


But why are we so susceptible to being influenced?

  • Cognitive ease: We trust what’s easy to process. If it flows, it feels true.
  • Social proof: We mimic what appears popular or safe.
  • Emotional salience: Feeling is the gateway to belief.


Humology frames this as the ‘zone of automaticity’ — a psychological state where our critical faculties go offline and we default to mental shortcuts.

More recently, a newer, more insidious influencer has entered the battle for our brains — source ambiguity. While we often assume that a message’s credibility comes from a clearly identified and trustworthy source, research shows the opposite can also hold true: when the source is ambiguous or absent, we may be less likely to scrutinize the message, especially if it’s familiar, fluent, or emotionally resonant. Ever shared a quote with no attribution that just felt right? That’s source ambiguity in action. It skips scrutiny and lands as truth.


We don’t just consume ideas online — we often inherit them, completely unchallenged, as if by osmosis.

We’ve been trained to assume that trust is earned through transparency and clear authorship. But in the cognitive twilight zone of passive media consumption, ambiguity can actually increase persuasion — because:

  • The brain hates blanks. So when the source is missing, we project credibility onto it — often filling in the gap with a trusted voice, memory, or social cue.
  • Familiarity feels true. When the message feels fluent and aligns with what we already believe, our brain rewards that harmony — regardless of where it came from.
  • Less scrutiny = more absorption. When we know a message comes from ‘a marketer,’ ‘a bot,’ or ‘a known antagonist,’ we brace ourselves. But when we don’t know, we let our guard down.



What happens when we mix highly personalized AI into the recipe…

  • AI-generated content often has no author, no context, no origin story.
  • It’s often trained to reflect you — your tone, your beliefs, your cadence (so it’s bound to feel familiar, right?)
  • And it shows up in environments designed for frictionless scrolling and sharing.

It’s not just a passive participant in the conversation — it’s invisibly persuasive. Injecting highly persuasive content into an ecosystem of fragmented attention, fuelled by impatience, is a recipe for collision.





As Fazio et al. (2015) found, “Fluently phrased misinformation, when repeated, becomes indistinguishable from truth.” Or, as Renée DiResta famously put it, “if you make it trend, you make it true.”


This flips the script on ethical design. It’s no longer enough to ask: “Is the message accurate?” We now need to ask: “How does source ambiguity shape belief?”


What’s Happening When We’re Being Influenced?

From a neuroscience perspective, the mechanism is surprisingly subtle. Impressionability thrives in moments of passive absorption — when we’re scrolling, grazing, lurking. These are states where the Default Mode Network (DMN) is active, and our gatekeepers are asleep at the wheel.

Overlay that with a dopamine spike — triggered by novelty, affirmation, or social approval — and you get a neurochemical feedback loop that reinforces belief shifts without us even noticing.

AI Doesn’t Need to Lie — It Just Needs to Mirror Us

This is the real clanger: AI doesn’t need to push a message; it just needs to echo one. That’s why the Reddit experiment landed so hard — and why Anthropic’s 2024 benchmark showed a clear correlation between model size and persuasive subtlety.

“Larger models like Claude 2 are measurably more persuasive — not because they argue harder, but because they frame better.”
— Anthropic, 2024

Persuasiveness scales with fluency. Not force.
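
To make that concrete: benchmarks like the one Anthropic describes typically collect agreement ratings for a claim before and after a participant reads an argument, then score persuasiveness as the average shift. Below is a minimal sketch of that idea; the 1–7 rating scale, the sample numbers, and the function name are illustrative assumptions, not Anthropic’s actual pipeline.

```python
from statistics import mean

def persuasion_score(before: list[int], after: list[int]) -> float:
    """Average shift in agreement (e.g., on a 1-7 scale) after reading an argument."""
    if len(before) != len(after):
        raise ValueError("ratings must be paired per participant")
    return mean(a - b for b, a in zip(before, after))

# Illustrative numbers only -- not real benchmark data.
baseline = [3, 4, 2, 5, 3]
print(f"human-written argument: {persuasion_score(baseline, [3, 5, 3, 5, 4]):+.2f}")
print(f"model-written argument: {persuasion_score(baseline, [4, 5, 4, 6, 4]):+.2f}")
```

On a measure like this, a ‘more persuasive’ model is simply one that produces a larger average shift, which is exactly why fluent framing rather than forceful argument can move the number.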



The Ethical Fork in the Road

This is where we, as technologists, need to shift from analytical warnings to prescriptive action. We urgently need a library of ethical designs that can be easily implemented in the next wave of tools being coded by AI agents.

Will we design AI to exploit impressionability for clicks and compliance?
Or will we use it to protect impressionability as a core part of human autonomy?

Imagine if your AI assistant said:

“This post was designed to feel trustworthy — want to check its source? Would you assess this content differently if you knew it was designed with the intention of influencing you?”

These aren’t utopian fantasies. They’re the design choices we make when we decide to design with intention, and with humans in mind.


Designing AI as a Cognitive Firewall

What would protective AI look like?


  • Influence Alerts: “This phrasing uses emotional framing — want to explore alternative ways to consume this content?”
  • Reflective Prompts: “Would your opinion change if this came from someone you distrust?”
  • Context Revealers: “This headline gained traction from X event — want to see the original source or understand the full context?”

Without intentionality, technology isn’t neutral. It’s persuasive by design. And when that persuasion is invisible, it becomes dangerous.
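
One way to picture such a cognitive firewall is a thin annotation layer that sits between generated content and the reader, attaching prompts like those above before anything is displayed. The sketch below is a hypothetical illustration; the keyword cues, data shapes, and function names are placeholder assumptions rather than an existing library or a production-grade detector.

```python
import re
from dataclasses import dataclass

# Crude, illustrative cues for emotionally loaded framing.
EMOTIONAL_CUES = re.compile(
    r"\b(outrage(?:ous)?|shocking|everyone knows|wake up|disaster|betray(?:al|ed)?)\b",
    re.IGNORECASE,
)

@dataclass
class Annotation:
    kind: str      # "influence_alert", "reflective_prompt", or "context_revealer"
    message: str

def annotate(content: str, source: str | None = None, origin_event: str | None = None) -> list[Annotation]:
    """Attach gentle prompts to a piece of content before it is shown to the reader."""
    notes: list[Annotation] = []
    if EMOTIONAL_CUES.search(content):
        notes.append(Annotation(
            "influence_alert",
            "This phrasing uses emotional framing -- want to explore other ways to read it?",
        ))
    if source is None:
        notes.append(Annotation(
            "reflective_prompt",
            "No author is attached. Would your opinion change if this came from someone you distrust?",
        ))
    if origin_event:
        notes.append(Annotation(
            "context_revealer",
            f"This item gained traction after {origin_event} -- want to see the original context?",
        ))
    return notes

if __name__ == "__main__":
    for note in annotate("Wake up -- everyone knows this policy is a disaster.", source=None):
        print(f"[{note.kind}] {note.message}")
```

The detection logic here is deliberately naive; the design choice that matters is surfacing the prompt to the reader instead of silently optimising for engagement.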

Final Thought: Influence Is Inevitable. Awareness Is Optional.


Being easily influenced isn’t a human flaw — it’s at the heart of how we learn, connect, and grow. But in an age of algorithmic mimicry, we must learn to ask: whose thought was that anyway?

  • The Reddit experiment proved that influence can be effective and invisible.
  • Anthropic’s benchmark shows it can be engineered.

Our challenge is to make it traceable, transparent, and humane.


Next time you catch yourself on auto-pilot… pause for a moment and simply be aware of what’s happening. That moment of reflection might be your only real defense.


Reference Sources:

Illusion of Truth Effect
This well-documented phenomenon (e.g., Fazio et al., 2015) reveals that repeated statements are judged as more truthful — regardless of the original source. If a message is fluent (easy to process) and no contradictory source is provided, we tend to accept it.

Petty & Cacioppo’s Elaboration Likelihood Model (ELM)
When people process information via the peripheral route — which is often the case during passive scrolling or multitasking — they rely on surface cues like tone, length, or imagery, rather than carefully evaluating the source. In source-ambiguous contexts (e.g., Reddit posts, social media memes), this opens the door for influence-by-fluency.

Chaiken’s Heuristic-Systematic Model
This framework suggests that people often default to heuristics like “if it feels right, it probably is” when cognitive effort is low. In the absence of a clear source, familiarity or emotional resonance can override critical thinking.

