I think the problem is that LLMs are good at producing plausible-looking text, while discerning whether a random post is good or bad takes effort. And it gets really bad when the signal-to-noise ratio is low, since slop is so much easier to make.
