LLM Unicode Prompt Injection
Be careful copying AI prompts… It has become common place on social media to see posts sharing “super prompts” or prompt templates. Researchers have discovered a technique that uses unicode to hide prompt injection as non-printable characters1. Prompt injection, a term coined by Simon Willison, is a type of attack that attempts to override a user or application prompt to either alter the results or to exfiltrate earlier elements of the prompt or used in retrieval augmented generation (RAG). It is a real challenge for LLM apps at the moment as there are no completely reliable mitigation techniques. ...