Compare commits

...

5 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ritvik Rastogi | d1ba792ab6 | Update papers.en.mdx | 2023-10-23 08:56:35 +05:30 |
| Ritvik Rastogi | c418be5dbd | Merge branch 'main' into ritvik-papers-update | 2023-10-20 09:07:55 +05:30 |
| Ritvik Rastogi | 1b7564beca | Merge branch 'main' into ritvik-papers-update | 2023-10-15 20:53:52 +05:30 |
| Ritvik19 | 2ff2d88f0f | Updated papers.en.mdx | 2023-10-15 20:52:11 +05:30 |
| Ritvik19 | 599620fcf7 | Updated papers.en.mdx | 2023-10-08 21:02:32 +05:30 |
1 changed file with 16 additions and 0 deletions


@@ -23,6 +23,7 @@ The following are the latest papers (sorted by release date) on prompt engineering
## Approaches
- [Large Language Models as Analogical Reasoners](https://arxiv.org/abs/2310.01714) (October 2023)
- [Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL](https://arxiv.org/abs/2309.06653) (September 2023)
- [Chain-of-Verification Reduces Hallucination in Large Language Models](https://arxiv.org/abs/2309.11495) (September 2023)
- [Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers](https://arxiv.org/abs/2309.08532) (September 2023)
@@ -175,6 +176,21 @@ The following are the latest papers (sorted by release date) on prompt engineering
## Applications
- [PromptRE: Weakly-Supervised Document-Level Relation Extraction via Prompting-Based Data Programming](https://arxiv.org/abs/2310.09265) (October 2023)
- [Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation](https://arxiv.org/abs/2310.08395) (October 2023)
- [Who Wrote it and Why? Prompting Large-Language Models for Authorship Verification](https://arxiv.org/abs/2310.08123) (October 2023)
- [Promptor: A Conversational and Autonomous Prompt Generation Agent for Intelligent Text Entry Techniques](https://arxiv.org/abs/2310.08101) (October 2023)
- [Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models](https://arxiv.org/abs/2310.03965) (October 2023)
- [From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting](https://arxiv.org/abs/2309.04269) (September 2023)
- [Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation](https://arxiv.org/abs/2310.02304) (October 2023)
- [Think before you speak: Training Language Models With Pause Tokens](https://arxiv.org/abs/2310.02226) (October 2023)
- [(Dynamic) Prompting might be all you need to repair Compressed LLMs](https://arxiv.org/abs/2310.00867) (October 2023)
- [In-Context Learning in Large Language Models: A Neuroscience-inspired Analysis of Representations](https://arxiv.org/abs/2310.00313) (September 2023)
- [Understanding In-Context Learning from Repetitions](https://arxiv.org/abs/2310.00297) (September 2023)
- [Investigating the Efficacy of Large Language Models in Reflective Assessment Methods through Chain of Thoughts Prompting](https://arxiv.org/abs/2310.00272) (September 2023)
- [Automatic Prompt Rewriting for Personalized Text Generation](https://arxiv.org/abs/2310.00152) (September 2023)
- [Efficient Streaming Language Models with Attention Sinks](https://arxiv.org/abs/2309.17453) (September 2023)
- [The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision)](https://arxiv.org/abs/2309.17421) (September 2023)
- [Graph Neural Prompting with Large Language Models](https://arxiv.org/abs/2309.15427) (September 2023)
- [Large Language Model Alignment: A Survey](https://arxiv.org/abs/2309.15025) (September 2023)
- [Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic](https://arxiv.org/abs/2309.13339) (September 2023)