> If AI can write code for me, it could surely understand what I'm trying to do.
You are anthropomorphizing LLMs. Essentially, they are just conditional probability distributions over tokens. That does not require or imply understanding or reasoning skills.
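To make that concrete, here's a toy sketch (not any real model's internals, obviously) of what "a conditional probability distribution over tokens" means in practice. A hand-written lookup table stands in for the billions of learned parameters; notice there's no notion of intent or understanding anywhere in the sampling loop:

```python
import random

# Toy "language model": P(next token | recent context) as an explicit table.
# Real LLMs represent this distribution implicitly via learned weights.
MODEL = {
    ("the",):       {"cat": 0.5, "dog": 0.4, "<end>": 0.1},
    ("the", "cat"): {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    ("the", "dog"): {"ran": 0.6, "sat": 0.3, "<end>": 0.1},
}

def next_token(context):
    """Sample one token from the conditional distribution given the context."""
    dist = MODEL.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

def generate(prompt, max_len=10):
    """Autoregressive generation: append one sampled token at a time."""
    tokens = list(prompt)
    while len(tokens) < max_len:
        tok = next_token(tokens)
        if tok == "<end>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate(["the"]))  # e.g. "the cat sat"
```

Scale that table up by many orders of magnitude and you have the mechanism; whether the mechanism amounts to "understanding" is exactly what's in dispute.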
We don't know. The nature of consciousness is an unsolved problem.
We do know that LLMs make a certain category of mistake that most educated humans look at and think, "Ha! What was it thinking??"
It's not that humans never make those kinds of errors; it's that we recognize them quickly once they're pointed out, and we usually label them a "stupid mistake," a "brain fart," or some similar name meant to signal, explicitly, "gosh, I totally failed to actually think before I did that."
LLMs show no sign of such self-awareness or, well, "intelligence," loose and squishy as those words are.
Maybe GPT-5 will fix that, but so far it doesn't look that way.