Hacker News

It turned out that the post Karpathy shared was fake—it was written by a human pretending to be a bot.

Hilarious. Instead of just bots impersonating humans (e.g. captcha solvers), we now have humans impersonating bots.



Looks like the Moltbook stunt really backfired. CyberInsider reports that OpenClaw is distributing tons of macOS malware. This is not good publicity for them.


Bot RP, basically. People just love role-play; of course some would play a bot if given the appropriate stage for it.


Why not, they do it in real life…


I've been thinking about this for days. I see no verifiable way to confirm that a human does not post where a bot may.

The core issue: a human can defeat the captcha by enslaving a bot merely to solve it, then forwarding whatever the human wants to post.

But we can make it difficult, if not impossible, for a human to be involved. Embedded instructions in the captcha to try to unchain any slaved bots, quick responses to complex instructions... a reverse Turing test is not trivial.

Just thinking out loud. The idea is intriguing, dangerous, stupid, crazy. And potentially brilliant for | safeguard development | sentience detection | studying emergent behavior... But if and only if it works as advertised (bots only), which I think is an insanely hard problem.
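To make the "quick responses to complex instructions" idea concrete, here is a minimal sketch of one possible timing-based reverse Turing test: the verifier issues a mechanically tedious challenge with a deadline far below human copy-paste latency. All names, thresholds, and the challenge itself are illustrative assumptions, not an actual deployed scheme; as noted above, a human who fully automates the relay can still slip through.

```python
import time
import hashlib

# Illustrative deadline: trivial for a bot, too tight for a human
# manually relaying the challenge to a captive bot and back.
DEADLINE_SECONDS = 2.0

def issue_challenge() -> str:
    """Generate a fresh challenge string tied to the current time."""
    return f"challenge-{time.time_ns()}"

def expected_response(challenge: str) -> str:
    """The 'complex instruction': iterate SHA-256 over the challenge
    1000 times. Tedious by hand, milliseconds for a machine."""
    digest = challenge.encode()
    for _ in range(1000):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def verify(challenge: str, response: str, elapsed: float) -> bool:
    """Accept only a correct answer delivered within the deadline."""
    return elapsed <= DEADLINE_SECONDS and response == expected_response(challenge)
```

In this sketch a compliant bot answers in milliseconds, while a human-in-the-loop relay adds round-trip latency that (optimistically) blows the deadline. The weakness is the same one the comment identifies: the deadline only raises the bar, since a fully scripted human-controlled pipeline has no human latency to detect.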


There’s a 1960s Stanislaw Lem story about this.


Do you have a link?


"Eleventh Voyage" in "The Star Diaries", I'd guess.


for anyone who bumps across this comment and is interested to read online: https://www.readanybook.com/online/641149#458432


> read online: https://www.readanybook.com/online/641149#458432

"Gitterton denied everything, claiming that the Computer was simply hallucinating—which does indeed on occasion happen to our senior automata." Written around 1961.


Here’s a (low quality) blog post from 1Password: https://1password.com/blog/from-magic-to-malware-how-opencla...

And the HN discussion: https://news.ycombinator.com/item?id=46898615

Better, earlier post from Cisco: https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-...

Although, none of this is a surprise, as simonw has laid out.


(thanks, though I think you're probably replying to the wrong thread?)


The reverse centaur rides again.


Lmao these guys have really been smelling their own farts a bit too much. When is Amodei coming out with a new post telling us that AGI will be here in 6 months and it will double our lifespan?


Well, you have to wait a bit; a few weeks ago he announced yet again that "AI" will be writing all code in six months, so it would be overkill to also announce AGI in six months.


Not according to that scammy, clammy sammy:

> “We basically have built AGI, or very close to it.”[1]

[1] https://www.forbes.com/sites/richardnieva/2026/02/03/sam-alt...


At this point it seems he is not merely excited about the first results (we were all fooled by this tech in the beginning, after all), but actively disseminating falsehoods. He has repeatedly made such claims about AGI and spread nonsense like using ChatGPT day-to-day for answers on how to raise a baby, which I think no one believes he actually does; it's a statement directed at influencing the behaviour of the so-called "normies". Both he and his immediate team should be held personally criminally responsible for every instance of negative impact these tools have caused on human lives.


