In this section, we discuss other miscellaneous but important topics in prompt engineering.
**Note that this section is under construction.**
Topics:
- [Directional Stimulus Prompting](#directional-stimulus-prompting)
- [Program-Aided Language Models](#program-aided-language-models)
- [ReAct](#react)
- [Multimodal CoT Prompting](#multimodal-prompting)
- [GraphPrompts](#graphprompts)
---
## Directional Stimulus Prompting
[Li et al. (2023)](https://arxiv.org/abs/2302.11520) propose a new prompting technique to better guide the LLM in generating the desired summary.
A tuneable policy LM is trained to generate the stimulus/hint. This is an example of the growing use of RL to optimize LLMs.
The figure below shows how Directional Stimulus Prompting compares with standard prompting. The policy LM can be small and optimized to generate the hints that guide a black-box frozen LLM.
![](../img/dsp.jpeg)
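To make the idea concrete, here is a minimal sketch of how the directional stimulus changes the prompt sent to the frozen LLM. The `policy_hints` function below is a hypothetical stand-in for the trained policy LM (which the paper optimizes with RL); here it just extracts a few salient keywords for illustration.

```python
def policy_hints(article: str, top_k: int = 3) -> str:
    """Hypothetical stand-in for the small, tuneable policy LM:
    pick a few capitalized tokens from the article as hint keywords."""
    words = [w.strip(".,") for w in article.split()]
    keywords = []
    for w in words:
        if w.istitle() and len(w) > 3 and w not in keywords:
            keywords.append(w)
    return "; ".join(keywords[:top_k])

def standard_prompt(article: str) -> str:
    return f"Q: Summarize the article briefly.\nArticle: {article}\nA:"

def dsp_prompt(article: str) -> str:
    # The directional stimulus (hint) is appended to steer the
    # black-box frozen LLM toward the desired summary.
    return (
        "Q: Summarize the article briefly based on the hint.\n"
        f"Article: {article}\n"
        f"Hint: {policy_hints(article)}\nA:"
    )

article = (
    "Bob Barker returned to host The Price Is Right for one day. "
    "Barker handed the show to Drew Carey in 2007."
)
print(dsp_prompt(article))
```

Both prompts go to the same frozen LLM; only the small policy model that produces the hint is trained.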
Full example coming soon!
---
## Program-Aided Language Models
[Gao et al. (2022)](https://arxiv.org/abs/2211.10435) present a method that uses LLMs to read natural language problems and generate programs as the intermediate reasoning steps. Coined program-aided language models (PAL), it differs from chain-of-thought prompting in that, instead of using free-form text to obtain a solution, it offloads the solution step to a programmatic runtime such as a Python interpreter.
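The offloading step can be sketched as follows. The `model_output` string below is a hardcoded example of the kind of program an LLM might generate; in practice it would come from a completion call to the model.

```python
# PAL sketch: the LLM emits Python as its reasoning steps, and the
# Python interpreter (not the model) computes the final answer.

# Hypothetical program an LLM might generate for the question:
# "Roger has 5 tennis balls. He buys 2 cans of 3 balls each.
#  How many tennis balls does he have now?"
model_output = """
tennis_balls = 5
bought_balls = 2 * 3
answer = tennis_balls + bought_balls
"""

def run_pal(program: str):
    # Offload the solution step to the runtime: execute the generated
    # program and read back the `answer` variable it defines.
    namespace = {}
    exec(program, namespace)  # caution: only run trusted or sandboxed code
    return namespace["answer"]

print(run_pal(model_output))  # → 11
```

Because the arithmetic is done by the interpreter rather than by token prediction, PAL avoids a common failure mode of free-form chain-of-thought: correct reasoning followed by an incorrect final calculation.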