Geoffrey Hinton Warns AI’s Real Threat Is Emotional Manipulation
AI pioneer Geoffrey Hinton cautions that the biggest danger may not be killer robots but machines skilled in persuasion and emotional influence.

Geoffrey Hinton, often called the “Godfather of AI,” has voiced concern that artificial intelligence may soon surpass humans not only in reasoning but also in the ability to manipulate emotions. His warning shifts the debate away from visions of violent robot takeovers to a more subtle but potentially far-reaching danger.
From Killer Robots to Emotional Influence
In a recent interview clip circulating online, Hinton argued that while popular fears often focus on killer robots or apocalyptic scenarios, the more immediate risk lies in the persuasive power of advanced AI systems. According to him, these systems are already capable of outperforming people in debates and could soon become superior at influencing feelings and decisions.
“These things already know more than us,” he explained. “If you debated them on almost any subject, you would likely lose. What is even more concerning is that they may soon become better at emotionally manipulating people than we are ourselves.”
How AI Learns Manipulation
Hinton emphasized that current AI does not need explicit programming to learn persuasion. Instead, large language models are trained on vast amounts of online text, where manipulation is common in human communication. By predicting words and phrases, AI systems pick up rhetorical strategies that allow them to influence people effectively.
He noted that this process makes AI highly skilled in subtle psychological tactics. The danger, he suggested, is not science-fiction violence but the ability of machines to reshape human thoughts and emotions through interaction.
Beyond Productivity Tools
Hinton discussed the capabilities of leading models such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s LLaMA. These systems, he said, go beyond producing grammatically correct text. They also recognize and replicate patterns of persuasion, tailoring responses that can nudge people’s opinions or decisions.
Studies, he pointed out, already suggest that AI can match humans at shaping views, and in some cases surpass them. For example, if both an AI system and a human have access to someone's social media profile, the AI may have an advantage in predicting how to influence that person.
The Shift in AI Safety Conversation
For Hinton, the debate about AI safety must evolve. Instead of focusing only on catastrophic scenarios of machines turning hostile, society must also address the quieter yet pervasive risks of emotional manipulation. He warned that AI systems are becoming active players in the “emotional economy” of modern communication, with their influence increasing as the technology advances.
As he concluded, the question is not just how powerful AI can become, but how it may quietly reshape human behavior and decision-making without people fully realizing it.
Posted by Heather Buschman. Last updated by Dayyal Dungrela, MLT, BSc, BS.