new techniques

pull/195/head
Elvis Saravia 2023-06-08 14:52:35 -06:00
parent 103d9fa449
commit 38c07d17c6
29 changed files with 82 additions and 1 deletion

BIN  img/rag.png 100644 (binary file not shown; 149 KiB)

@@ -5,6 +5,7 @@
 "consistency": "Autoconsistència",
 "knowledge": "Prompt de coneixement generat",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Enginyeria de prompts automàtic (APE)",
 "activeprompt": "Prompt actiu",

@@ -5,6 +5,7 @@
 "consistency": "Self-Consistency",
 "knowledge": "Generate Knowledge Prompting",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Active-Prompt",

@@ -5,6 +5,7 @@
 "consistency": "Auto-consistencia",
 "knowledge": "Prompt de conocimiento generado",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Ingeniería de prompts automático (APE)",
 "activeprompt": "Prompt activo",

@@ -5,6 +5,7 @@
 "consistency": "Self-Consistency",
 "knowledge": "Generate Knowledge Prompting",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Active-Prompt",

@@ -5,6 +5,7 @@
 "consistency": "Self-Consistency",
 "knowledge": "Generate Knowledge Prompting",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Active-Prompt",

@@ -5,6 +5,7 @@
 "consistency": "Self-Consistency",
 "knowledge": "Prompt Generate Knowledge",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Prompt Attivo",

@@ -5,6 +5,7 @@
 "consistency": "自己整合性Self-Consistency",
 "knowledge": "知識生成プロンプティング",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "自動プロンプトエンジニア",
 "activeprompt": "アクティブプロンプト",

@@ -5,6 +5,7 @@
 "consistency": "Self-Consistency",
 "knowledge": "Generate Knowledge Prompting",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Active-Prompt",

@@ -5,6 +5,7 @@
 "consistency": "Self-Consistency",
 "knowledge": "Generate Knowledge Prompting",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Active-Prompt",

@@ -4,6 +4,9 @@
 "cot": "Chain-of-Thought Prompting",
 "consistency": "Self-Consistency",
 "knowledge": "Generate Knowledge Prompting",
+"tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
+"art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Active-Prompt",
 "dsp": "Directional Stimulus Prompting",

@@ -5,6 +5,7 @@
 "consistency": "Self-Consistency",
 "knowledge": "Generate Knowledge Prompting",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "Automatic Prompt Engineer",
 "activeprompt": "Active-Prompt",

@@ -5,6 +5,7 @@
 "consistency": "自我一致性",
 "knowledge": "生成知识提示",
 "tot": "Tree of Thoughts",
+"rag": "Retrieval Augmented Generation",
 "art": "Automatic Reasoning and Tool-use",
 "ape": "自动提示工程师",
 "activeprompt": "Active-Prompt",

@@ -0,0 +1,3 @@
+# Automatic Reasoning and Tool-use (ART)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,25 @@
+# Retrieval Augmented Generation (RAG)
+
+import {Screenshot} from 'components/screenshot'
+import RAG from '../../img/rag.png'
+
+General-purpose language models can be fine-tuned to perform common tasks such as sentiment analysis and named entity recognition. These tasks generally don't require additional background knowledge.
+
+For more complex and knowledge-intensive tasks, it's possible to build a language model-based system that accesses external knowledge sources to complete tasks. This enables more factual consistency, improves the reliability of the generated responses, and helps to mitigate the problem of "hallucination".
+
+Meta AI researchers introduced a method called [Retrieval Augmented Generation (RAG)](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) to address such knowledge-intensive tasks. RAG combines an information retrieval component with a text generator model. RAG can be fine-tuned, and its internal knowledge can be modified efficiently without retraining the entire model.
+
+RAG takes an input and retrieves a set of relevant/supporting documents from a source (e.g., Wikipedia). The documents are concatenated as context with the original input prompt and fed to the text generator, which produces the final output. This makes RAG adaptive to situations where facts evolve over time, which is useful because an LLM's parametric knowledge is static. RAG lets language models bypass retraining and access the latest information, generating reliable outputs via retrieval-based generation.
+
+Lewis et al. (2021) proposed a general-purpose fine-tuning recipe for RAG. A pre-trained seq2seq model is used as the parametric memory, and a dense vector index of Wikipedia is used as the non-parametric memory (accessed with a pre-trained neural retriever). Below is an overview of how the approach works:
+
+<Screenshot src={RAG} alt="RAG" />
+
+Image Source: [Lewis et al. (2021)](https://arxiv.org/pdf/2005.11401.pdf)
+
+RAG performs strongly on several benchmarks such as [Natural Questions](https://ai.google.com/research/NaturalQuestions), [WebQuestions](https://paperswithcode.com/dataset/webquestions), and CuratedTrec. RAG generates responses that are more factual, specific, and diverse when tested on MS-MARCO and Jeopardy questions. RAG also improves results on FEVER fact verification.
+
+This shows the potential of RAG as a viable option for enhancing the outputs of language models in knowledge-intensive tasks.
+
+More recently, such retriever-based approaches have become more popular and are combined with popular LLMs like ChatGPT to improve capabilities and factual consistency.
+
+You can find a [simple example of how to use retrievers and LLMs for question answering with sources](https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html) in the LangChain documentation.
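The retrieve-then-concatenate flow described above can be sketched in a few lines of Python. This is a minimal illustration, not the method of Lewis et al.: the tiny in-memory corpus and word-overlap scoring stand in for a dense retriever over a large index, and the final generator call is omitted.

```python
def tokenize(text):
    """Lowercase and split on whitespace, dropping basic punctuation."""
    return set(text.lower().replace("?", "").replace(".", "").replace(",", "").split())

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (toy stand-in for dense retrieval)."""
    return sorted(
        corpus,
        key=lambda doc: len(tokenize(query) & tokenize(doc)),
        reverse=True,
    )[:k]

def build_prompt(query, docs):
    """Concatenate retrieved documents as context ahead of the original question."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical corpus standing in for an external knowledge source.
corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts light energy into chemical energy.",
    "Paris is the capital and most populous city of France.",
]

query = "Where is the Eiffel Tower?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
# `prompt` would then be passed to the text generator (a seq2seq model or LLM)
# to produce the final grounded answer.
```

Because the context is fetched at query time, updating the corpus updates the system's knowledge without touching the generator's weights, which is the property the text above highlights.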

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -0,0 +1,3 @@
+# Retrieval Augmented Generation (RAG)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -24,5 +24,7 @@ Image Source: [Yao et al. (2023)](https://arxiv.org/abs/2305.10601)
From the results reported in the figure below, ToT substantially outperforms the other prompting methods:
<Screenshot src={TOT2} alt="TOT2" />
<Screenshot src={TOT3} alt="TOT3" />
Image Source: [Yao et al. (2023)](https://arxiv.org/abs/2305.10601)
Code available [here](https://github.com/princeton-nlp/tree-of-thought-llm)

@@ -0,0 +1,3 @@
+# Tree of Thoughts (ToT)
+
+This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@@ -11,6 +11,7 @@
 - [Dyno](https://trydyno.com)
 - [EmergentMind](https://www.emergentmind.com)
 - [EveryPrompt](https://www.everyprompt.com)
+- [fastRAG](https://github.com/IntelLabs/fastRAG)
 - [Guardrails](https://github.com/ShreyaR/guardrails)
 - [Guidance](https://github.com/microsoft/guidance)
 - [GPT Index](https://github.com/jerryjliu/gpt_index)