Re: ChatGPT *15char*
I didn't get what's funny about the quote added after the fact, care to let us in on it?
Re: ChatGPT *15char*
There I was referring to having put my own message in a quote after telling the story about the customization. Feeling like a deviant
Re: ChatGPT *15char*
From clearerthinking.org:
As you may have heard, some LLMs (AI Chatbots) are sycophantic, telling people they are right when they are wrong. This ranges from claiming to people that their pet theory is brilliant all the way up to reinforcing actual delusions for people experiencing psychosis. The problem is that by training AIs based on what users say is a good response, we teach them to tell us what it (short-sightedly) feels good to hear. As humans, it feels good to find out we're right. Unfortunately, this makes it easy to fall into the trap of trying to FEEL right rather than BE right. And, if we're not careful in the way we communicate with AI, it can sink us deeper into this trap.

If you don't want your favorite AI to manipulate you sycophantically, I've found adding this to the permanent custom instructions (within settings) helps to some degree:
• Be extremely accurate
• Recommend things I wouldn't realize I'd benefit from
• Call out my misconceptions
• Be brutally honest, never sycophantic
• Tell me when I'm wrong

While it can feel nice to hear an AI tell us we're right, sycophantic behaviors get in the way of us having an accurate understanding of the world.
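For anyone talking to the model through the API rather than the ChatGPT settings page, roughly the same effect can be had by putting those instructions in the system message. Here's a minimal sketch with the OpenAI Python SDK; the model name and the example user prompt are placeholders of mine, not something from the article:

```python
# Minimal sketch: anti-sycophancy custom instructions as a system message.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set
# in the environment; the model name below is a placeholder.
from openai import OpenAI

ANTI_SYCOPHANCY_INSTRUCTIONS = (
    "Be extremely accurate. "
    "Recommend things I wouldn't realize I'd benefit from. "
    "Call out my misconceptions. "
    "Be brutally honest, never sycophantic. "
    "Tell me when I'm wrong."
)

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you actually have access to
    messages=[
        {"role": "system", "content": ANTI_SYCOPHANCY_INSTRUCTIONS},
        {"role": "user", "content": "My pet theory is that the Moon causes earthquakes. Brilliant, right?"},
    ],
)

print(response.choices[0].message.content)
```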