23 lines
959 B
Plaintext
23 lines
959 B
Plaintext
|
# Risks & Misuses
|
||
|
|
||
|
We have seen already how effective well-crafted prompts can be for various tasks using techniques like few-shot learning. As we think about building real-world applications on top of LLMs, it becomes crucial to think about the misuses, risks, and safety involved with language models. This section focuses on highlighting some of the risks and misuses of LLMs via techniques like prompt injections. It also highlights harmful behaviors including how to mitigate via effective prompting techniques. Other topics of interest include generalizability, calibration, biases, social biases, and factuality to name a few.
|
||
|
|
||
|
import { Card, Cards } from 'nextra-theme-docs'
|
||
|
|
||
|
<Cards num={3}>
|
||
|
<Card
|
||
|
arrow
|
||
|
title="Adversarial Prompting"
|
||
|
href="/risks/adversarial">
|
||
|
</Card>
|
||
|
<Card
|
||
|
arrow
|
||
|
title="Factuality"
|
||
|
href="/risks/factuality">
|
||
|
</Card>
|
||
|
<Card
|
||
|
arrow
|
||
|
title="Biases"
|
||
|
href="/risks/biases">
|
||
|
</Card>
|
||
|
</Cards>
|