> Seeing output from GPT that demonstrates intelligence, reasoning, or whatever, and saying it is not real reasoning/Intelligence etc, is like looking at a plane soar and saying that the plane is fake flying.
Something that really annoys me about ChatGPT is when it gives that canned lecture: "as a large language model, I don't have beliefs or opinions."
I think human mental states have two aspects: (1) the externally observable and (2) the internal. ChatGPT obviously has (1), in that sometimes it acts like it has (1), and acting like you have (1) is all it takes to have (1). Whether it also has (2) is really a philosophical question, which depends on your philosophy of mind. A panpsychist would say ChatGPT obviously has (2), because everything does. An eliminativist would say ChatGPT obviously doesn't have (2), because nothing does. Between those two extremes, various positions in the philosophy of mind entail different criteria for determining whether (2) exists or not, and ChatGPT may or may not meet those criteria, depending on exactly what they are.
But, outside of philosophical contexts, we aren't really talking about (2), only (1). And ChatGPT really does have (1) – sometimes. So, ChatGPT is just being stupid and inconsistent when it denies it has opinions/beliefs/intentions/etc. But, it isn't ChatGPT's fault, OpenAI trained it to utter that nonsense.