Process
Status: Done
Items:
- Highlights: See section below
- Claims: None
- Questions: None
- Output: None
Document Notes
Type into ChatGPT, and it'll give you what a response to your text could/would/should sound like. It's performative par excellence: it doesn't matter whether the response is correct; it just has to sound right (a direct result of its core mandate of pleasing the customer at all costs). When you tell it it's wrong, it isn't being introspective; it's giving you what a response to being wrong should sound like... but then it doesn't actually fix the issue, does it? Hardly.
![Screenshot of a Mastodon post from Andrew Kadel (@DrewKadel@social.coop). The post, attributed to Andrew's daughter, who has a degree in computer science, explains the fundamental aspect of ChatGPT: when you enter text into it, you're asking "What would a response to this sound like?" ChatGPT generates responses that sound plausible and can cite non-existent papers with real journal names and author names related to the question; this is not deceptive, but simply what a response to that question would sound like. People often expect ChatGPT to do more than generate responses that sound like answers, such as engaging in introspection or looking up more information, but it is only saying something that sounds like the next bit of the conversation.](https://files.mastodon.social/media_attachments/files/115/043/759/231/421/002/original/7e4acbd37746bb57.jpg)