Prompt-Engineering-Guide/pages/risks.mdx

23 lines
959 B
Plaintext
Raw Normal View History

2023-03-11 10:21:43 +08:00
# Risks & Misuses
We have seen already how effective well-crafted prompts can be for various tasks using techniques like few-shot learning. As we think about building real-world applications on top of LLMs, it becomes crucial to think about the misuses, risks, and safety involved with language models. This section focuses on highlighting some of the risks and misuses of LLMs via techniques like prompt injections. It also highlights harmful behaviors including how to mitigate via effective prompting techniques. Other topics of interest include generalizability, calibration, biases, social biases, and factuality to name a few.
2023-03-13 03:14:15 +08:00
import { Card, Cards } from 'nextra-theme-docs'
<Cards num={3}>
<Card
arrow
title="Adversarial Prompting"
href="/risks/adversarial">
</Card>
<Card
arrow
title="Factuality"
href="/risks/factuality">
</Card>
<Card
arrow
title="Biases"
href="/risks/biases">
</Card>
</Cards>