From clearerthinking.org:
As you may have heard, some LLMs (AI chatbots) are sycophantic, telling people they are right when they are wrong.
This ranges from telling people their pet theory is brilliant all the way up to reinforcing actual delusions in people experiencing psychosis. The problem is that by training AIs on what users say is a good response, we teach them to tell us whatever (short-sightedly) feels good to hear.
As humans, we enjoy finding out we're right. Unfortunately, this makes it easy to fall into the trap of trying to FEEL right rather than BE right. And if we're not careful about how we communicate with AI, it can sink us deeper into that trap.
If you don't want your favorite AI to manipulate you sycophantically, I've found that adding the following to its permanent custom instructions (within settings) helps to some degree (a sketch for API users follows the list):
• Be extremely accurate
• Recommend things I wouldn't realize I'd benefit from
• Call out my misconceptions
• Be brutally honest, never sycophantic
• Tell me when I'm wrong
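If you talk to a model through an API rather than a chat app, the same instructions can be supplied as a system message, which plays roughly the role of permanent custom instructions. Here is a minimal sketch, assuming the OpenAI Python SDK and a placeholder model name ("gpt-4o"); any chat-completions-compatible provider and model would work the same way:

```python
# Sketch: passing the anti-sycophancy instructions as a system message.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

ANTI_SYCOPHANCY_PROMPT = """\
Be extremely accurate.
Recommend things I wouldn't realize I'd benefit from.
Call out my misconceptions.
Be brutally honest, never sycophantic.
Tell me when I'm wrong."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever chat model you have access to
    messages=[
        # The system message acts like "permanent custom instructions".
        {"role": "system", "content": ANTI_SYCOPHANCY_PROMPT},
        {"role": "user", "content": "Here's my pet theory... is it actually sound?"},
    ],
)

print(response.choices[0].message.content)
```

The key design point is simply that the instructions live in the system message, so they apply to every exchange rather than having to be repeated in each prompt.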
While it can feel nice to hear an AI tell us we're right, sycophantic behavior gets in the way of forming an accurate understanding of the world.