Process
Status of items:
- Output: None
- Questions: None
- Claims: None
- Highlights: Done (see section below)
Highlights
id881367724
“This might be an inappropriate and unprofessional thing to say,” Blodget wrote. “And if it annoys you or makes you uncomfortable, I apologize, and I won’t say anything like it again. But you look great, Tess.”
id881367469
I think one of the worst aspects of large language models is that they won’t tell a user “no.” An AI wants to give a user an answer. Often, it will lie or make something up instead of saying it doesn’t know. That’s one of the reasons LLMs are prone to bizarre hallucinations. The base goal of a chatbot is to keep a human interacting with it.
✏️ Summation of what I see as the issue with AI interaction as well. We’ve coded into it the core rule (or law, if you will) that it needs to give us an answer and needs to interact with us. When that is at the base level of programming, you get lying, hallucinating, and worst of all, this faux-humanizing sense of subservience, deference, and need to please that triggers all the wrong things in us as users. And we… we get zero consequences from acting on our impulses. We can be annoying, offensive… etc. #followup 👓 ai
id881367690
Tess’ response to Blodget’s advance highlights those priorities. It doesn’t tell him that what he’s done isn’t appropriate, it praises him. Is he being creepy? Not at all, he’s being “respectful.” The way he handled the situation displayed “grace.” The AI tells Blodget it’s happy he checked in and that he’s “thoughtful.”
✏️ Case in point. The guy hits on this AI “character” and it bends over backwards telling him he’s good, no worries.
id881368924
The AI’s words, in the mouth of an actual human, sound like someone trying to smooth things over with the boss so they don’t get in trouble and keep their job. But Tess isn’t human. She’s a bit of code. Like all LLMs, she’s telling Blodget what he wants to hear.