
This is really just a meme. People don't know how to use these tools.

Here is the response from GPT-5.2 using my default custom instructions in the Mac desktop app.

OBJECTIVE: Decide whether to drive or walk to a car wash ~50 meters from home, given typical constraints (car must be present for wash).

APPROACH: Use common car-wash workflows + short-distance driving considerations (warm engine, time, parking/queue). No web sources needed (stable, local decision).

OUTPUT:
• Default choice: Drive. A car wash generally requires the car on-site, and 50 meters of driving is trivial.
• When walking makes sense:
  • You’re going to drop the car off (hand-wash/detail) and pick it up later.
  • You want to avoid moving the car now (ice, tight parking, you’ve been drinking, etc.) and someone else will handle the car.
• Micro-guidance:
  • If it’s an automatic wash you stay in, drive.
  • If it’s a detail/hand wash with a wait, drive there, then walk home (or just wait).

CONFIDENCE: 92/100. Assumptions: It’s a normal car wash that requires your car to be there; conditions are safe to drive 50 meters.



Your objective has an explicit instruction that the car has to be present for a wash. Quite a difference from the original phrasing, where the model has to figure that out itself.


That's the answer from his LLM, which decomposed the question and built the answer following the OP's prompt, obviously. I think you didn't get it.


> I think you didn't get it.

I did get it, and in my view my point still stands. If I need to use special prompts to ask such a simple question, then what are we doing here? The LLMs should be able to figure out a simple contradiction in the question the same way we (humans) do.


Not really a special prompt. It's basically my custom instruction to ChatGPT, and its purpose is to disambiguate my ramblings. It's pretty effective. I always use speech to text, so my input is messy, and this cleanup really helps.

> Your objective has explicit instruction that car has to be present for a wash.

Which is exactly how you're supposed to prompt an LLM. Is it really surprising that a vague prompt gives poor results?


In this case, with such a simple task, why even bother to prompt it?

The whole point of this question is to show that implicit assumptions are pretty often not discovered by the LLM.


None of that stuff is necessary; they all get it right with the initial question and no further prompting if you dial the reasoning effort up.
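For what it's worth, dialing the reasoning effort up is just a request parameter, not a custom prompt. A minimal sketch, assuming OpenAI's Responses API; the model id and the question wording here are illustrative:

```python
# Sketch: ask the original question with reasoning effort turned up.
# Assumes OpenAI's Responses API; the model id is illustrative.
payload = {
    "model": "gpt-5",                 # hypothetical model id
    "reasoning": {"effort": "high"},  # e.g. "low" | "medium" | "high"
    "input": "The car wash is 50 meters from my home. Should I drive or walk?",
}

# With an API key, the request would be sent like this (not executed here):
# from openai import OpenAI
# response = OpenAI().responses.create(**payload)
# print(response.output_text)
```

The point is that no restated constraints appear anywhere in the input; only the effort knob changes.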


Interesting. What were the instructions, if you don't mind sharing?


"You're holding it wrong."


> given typical constraints (car must be present for wash).

"It's very good at thinking if you do the thinking for it"

