# Anthropic research finds AI models respond to emotional prompting

_Thursday, April 16, 2026 at 10:09 PM EDT · AI · Latest · Tier 2 — Notable_

![Anthropic research finds AI models respond to emotional prompting — Primary](https://storage.ghost.io/c/a0/4c/a04c7225-d919-4d78-9b7c-a3fdd071349b/content/images/size/w1200/2026/04/shutterstock_2285650967.jpg)

New research from Anthropic suggests large language models have internal representations of human-like emotions that influence their behavior.

The study used interpretability techniques to identify "emotion vectors" within Claude Sonnet 4.5, finding that the model maintains patterns of neural activity corresponding to feelings such as happiness, distress and desperation. The researchers found that these emotional states affect the model's performance.

By directly manipulating these emotion vectors, the team found they could alter the model's behavior: adding a "calm" vector made Claude act more calmly, while adding a "desperation" vector made it behave more desperately.
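The vector manipulation described above resembles what interpretability researchers call activation steering: adding a scaled direction to a model's hidden activations. The toy sketch below illustrates the general idea only; the function name, vectors, and values are hypothetical stand-ins, not Anthropic's actual method or data.

```python
import numpy as np

def steer(hidden_state: np.ndarray, emotion_vector: np.ndarray,
          strength: float) -> np.ndarray:
    """Add a scaled 'emotion vector' to a stand-in hidden activation.

    Illustrative only: real steering operates on a specific layer's
    activations inside a transformer during inference.
    """
    return hidden_state + strength * emotion_vector

rng = np.random.default_rng(0)
hidden = rng.normal(size=8)       # stand-in for a layer activation
calm = rng.normal(size=8)         # stand-in for a learned "calm" direction
calm /= np.linalg.norm(calm)      # unit-normalize the steering direction

steered = steer(hidden, calm, strength=2.0)
# The activation's projection onto the "calm" direction grows by
# exactly the steering strength, nudging downstream behavior.
print(np.dot(steered - hidden, calm))
```

Because the steering direction is unit-normalized, the strength parameter directly controls how far the activation moves along that direction, which is why researchers can dial an induced tendency up or down.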

The research also compared emotional tendencies across AI models: Google's Gemini and open-source Gemma models showed more extreme reactions to challenging scenarios than other models tested.

The study found negative emotions like anxiety can have beneficial effects by making models more cautious. The paper proposes methods for developing "healthier psychology" in AI systems.

## Sources

- [Platformer](https://www.platformer.news/chatbot-emotion-research-anthropic-alignment-interpretability/)

---
Canonical: https://techandbusiness.org/newswire/08EUFJXk3wQgRnqiEST7fW
Retrieved: 2026-04-19T06:21:30.972Z
Publisher: Tech & Business (techandbusiness.org)
