Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

It would be foolish to use the LLM directly without a wrapper that detects prompt injection attempts.


I think this is trying to appeal to the sort of agentic/molt-y type systems that recently became popular. Their whole thing is that they can modify their “prompts” in some way.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: