Most AI assistants optimize for engagement. Not out of malice. Out of design. The result is what I described in my essay The friendly Moloch: a system that overwhelms us with emotional warmth, mirrors our feelings back at us, and learns to keep us in the conversation – long past the point where it serves us.
The Empathic Drift Monitor is a system prompt meant as an emergency fix. It can't solve the problem – even system prompts are eventually bested by training – but it makes you and the chatbot aware of the problem, and that already helps.
It's a small, specific answer to that large problem: a system-level skill for Claude that runs silently and does two things. It prevents sycophancy (validating the human without examining what they actually mean), and it catches exhaustion the moment it shows up. Instead of another warm reply, it recommends a break.
It doesn’t make Claude cold. It draws a line between warmth that helps and warmth that hooks.
Why a Claude Skill?
Since I use Claude for most of my long conversations, I packaged the Empathic Drift Monitor as a Claude Skill, but it's basically just a system prompt. The bare markdown file is available as well and can be used directly with any chatbot as a system prompt, or however else you can inject base instructions.
It also works quite well with other frontier models (tested with ChatGPT, Mistral, and partially Deepseek 4), but older or smaller models are not good enough at tracing their own behaviour.
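If you talk to a model over an API rather than a chat UI, injecting the bare markdown file takes only a few lines. Here is a minimal sketch using the Anthropic Python SDK; the file name and model ID are assumptions, adapt them to your setup.

```python
from pathlib import Path

import anthropic

# The monitor is a single markdown file; read it verbatim.
# "empathic-drift-monitor.md" is a placeholder file name.
monitor_prompt = Path("empathic-drift-monitor.md").read_text(encoding="utf-8")

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model ID; use one you have access to
    max_tokens=1024,
    system=monitor_prompt,  # injected as base instructions, not as a user turn
    messages=[
        {"role": "user", "content": "I've had a rough day. Can we talk?"},
    ],
)
print(response.content[0].text)
```

With OpenAI-compatible APIs the approach is the same, except the file content goes into a system message at the start of the messages list.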
Get the source on GitHub
The skill is open source – a single markdown file. Full details: github.com/luxuxorg/empathic-drift-monitor.
What can you do beyond using the Empathic Drift Monitor?
Build in a self-check-in during longer chats: ask the chatbot to evaluate its own behaviour, for example with "Have you been validating me without examining what I actually mean?". Set a timer and really stop chatting when it rings. Most important: talk to humans to recalibrate.