prompt tuning notes

pull/22/head
Elvis Saravia 2023-02-20 23:52:12 -06:00
parent 5fe8b22caf
commit db5414fbd6
3 changed files with 11 additions and 4 deletions

@@ -53,6 +53,8 @@ The following are the latest papers (sorted by release date) on prompt engineering
- [Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing](https://arxiv.org/abs/2107.13586) (Jul 2021)
- Approaches/Techniques:
- [Scalable Prompt Generation for Semi-supervised Learning with Language Models](https://arxiv.org/abs/2302.09236) (Feb 2023)
- [Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints](https://arxiv.org/abs/2302.09185) (Feb 2023)
- [À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting](https://arxiv.org/abs/2302.07994) (Feb 2023)
- [GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks](https://arxiv.org/abs/2302.08043) (Feb 2023)
- [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (Feb 2023)

@@ -341,7 +341,7 @@ Some really interesting things happened with this example. In the first answer,
---
-### Automatic Prompt Engineer (APE)
+## Automatic Prompt Engineer (APE)
![](../img/APE.png)
@@ -355,6 +355,11 @@ The prompt "Let's work this out in a step by step way to be sure we have the right
![](../img/ape-zero-shot-cot.png)
This paper touches on an important topic related to prompt engineering: the idea of automatically optimizing prompts. While we don't go deep into this topic in this guide, here are a few key papers if you are interested in the topic:
- [AutoPrompt](https://arxiv.org/abs/2010.15980) - proposes an approach to automatically create prompts for a diverse set of tasks based on gradient-guided search.
- [Prefix Tuning](https://arxiv.org/abs/2101.00190) - a lightweight alternative to fine-tuning that prepends a trainable continuous prefix for NLG tasks.
- [Prompt Tuning](https://arxiv.org/abs/2104.08691) - proposes a mechanism for learning soft prompts through backpropagation.
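The soft-prompt mechanism from the Prompt Tuning paper can be sketched with a toy example: the model stays frozen while gradient descent updates only a few prepended continuous vectors. The tiny linear model, the mean pooling, and all names and dimensions below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a frozen linear "model" over mean-pooled token embeddings.
d, n_tokens, n_classes, n_prompt = 8, 4, 2, 2
W = rng.normal(size=(d, n_classes))                 # frozen weights (never updated)
x = rng.normal(size=(n_tokens, d))                  # fixed input token embeddings
y = 1                                               # target class
prompt = rng.normal(scale=0.1, size=(n_prompt, d))  # trainable soft-prompt vectors

def forward(prompt, x):
    h = np.concatenate([prompt, x]).mean(axis=0)    # prepend soft prompt, pool
    logits = h @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # softmax probabilities

lr = 0.5
for _ in range(200):
    p = forward(prompt, x)
    grad_logits = p.copy()
    grad_logits[y] -= 1.0                     # d(cross-entropy) / d(logits)
    grad_h = W @ grad_logits                  # backprop through the frozen layer
    # mean pooling: each prompt row receives 1/(n_prompt + n_tokens) of grad_h
    grad_prompt = np.tile(grad_h, (n_prompt, 1)) / (n_prompt + n_tokens)
    prompt -= lr * grad_prompt                # update ONLY the soft prompt
```

After the loop, the model assigns high probability to the target class even though none of its own weights moved; all task-specific learning lives in the two prepended vectors, which is the core idea behind soft prompts.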
---
[Previous Section (Basic Prompting)](./prompts-basic-usage.md)

@@ -11,7 +11,7 @@ Topics:
- [Information Extraction](#information-extraction)
- [Question Answering](#question-answering)
- [Text Classification](#text-classification)
-- [Role-Playing](#role-playing)
+- [Conversation](#conversation)
- [Code Generation](#code-generation)
- [Reasoning](#reasoning)
@@ -148,10 +148,10 @@ What is the problem here?
---
-## Role-Playing
+## Conversation
Perhaps one of the more interesting things you can achieve with prompt engineering is telling the system how to behave, what its intent is, and what identity it should assume. This is particularly useful when you are building conversational systems.
-For instance, let's create a conversational system that's able to give more technical and scientific responses to questions.
+For instance, let's create a conversational system that's able to give more technical and scientific responses to questions. Note how we are explicitly telling it how to behave through the instruction.
```
The following is a conversation with an AI research assistant. The assistant's tone is technical and scientific.