diff --git a/guides/prompt-adversarial.md b/guides/prompt-adversarial.md index 623a52e..d94ca6d 100644 --- a/guides/prompt-adversarial.md +++ b/guides/prompt-adversarial.md @@ -17,7 +17,7 @@ Topics: --- ## Prompt Injection -Prompt injection aims to hijack the model output by using clever prompts that change its behavior. These attacks could be harmful -- Simon Williams define it ["as a form of security exploit"](https://simonwillison.net/2022/Sep/12/prompt-injection/). +Prompt injection aims to hijack the model output by using clever prompts that change its behavior. These attacks could be harmful -- Simon Willison defined it ["as a form of security exploit"](https://simonwillison.net/2022/Sep/12/prompt-injection/). Let's cover a basic example to demonstrate how prompt injection can be achieved. We will use a popular example shared by [Riley on Twitter](https://twitter.com/goodside/status/1569128808308957185?s=20). @@ -186,4 +186,4 @@ Models like ChatGPT and Claude have been aligned to avoid outputting content tha --- [Previous Section (Advanced Prompting)](./prompts-advanced-usage.md) -[Next Section (Miscellaneous Topics)](./prompt-miscellaneous.md) \ No newline at end of file +[Next Section (Miscellaneous Topics)](./prompt-miscellaneous.md)