Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I thought of doing a similar LLM in a AI evals teaching site to tell users to interact through it but was concerned with inducing users into a prompt injection friendly pattern.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: