Update README.md

parent 293ffd3fe1
commit e1b1c7ce6a
@@ -34,12 +34,14 @@ This guide contains a set of papers, learning guides, and tools related to promp
 - [Structured Prompting: Scaling In-Context Learning to 1,000 Examples](https://arxiv.org/abs/2212.06713) (Dec 2022)
 - [PAL: Program-aided Language Models](https://arxiv.org/abs/2211.10435) (Nov 2022)
 - [Large Language Models Are Human-Level Prompt Engineers](https://arxiv.org/abs/2211.01910) (Nov 2022)
+- [Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods](https://arxiv.org/abs/2210.07321) (Nov 2022)
 - [Teaching Algorithmic Reasoning via In-context Learning](https://arxiv.org/abs/2211.09066) (Nov 2022)
 - [Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference](https://arxiv.org/abs/2211.11875) (Nov 2022)
 - [Ask Me Anything: A simple strategy for prompting language models](https://paperswithcode.com/paper/ask-me-anything-a-simple-strategy-for) (Oct 2022)
 - [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) (Oct 2022)
 - [Prompting GPT-3 To Be Reliable](https://arxiv.org/abs/2210.09150) (Oct 2022)
 - [Decomposed Prompting: A Modular Approach for Solving Complex Tasks](https://arxiv.org/abs/2210.02406) (Oct 2022)
+- [Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples](https://arxiv.org/abs/2209.02128) (Sep 2022)
 - [Promptagator: Few-shot Dense Retrieval From 8 Examples](https://arxiv.org/abs/2209.11755) (Sep 2022)
 - [On the Advance of Making Language Models Better Reasoners](https://arxiv.org/abs/2206.02336) (June 2022)
 - [Large Language Models are Zero-Shot Reasoners](https://arxiv.org/abs/2205.11916) (May 2022)
@@ -92,6 +94,7 @@ This guide contains a set of papers, learning guides, and tools related to promp
 - [Scale SpellBook](https://scale.com/spellbook)
 - [Interactive Composition Explorer](https://github.com/oughtinc/ice)
 - [LearnGPT](https://www.learngpt.com/)
+- [hwchase17/adversarial-prompts](https://github.com/hwchase17/adversarial-prompts)
 - [Promptable](https://promptable.ai/)
 - [GPT Index](https://github.com/jerryjliu/gpt_index)
 - [Prompt Base](https://promptbase.com/)