From 3ee137c9e6a551cc3e37ab706b17518076ba8d9d Mon Sep 17 00:00:00 2001
From: Omer Bensaadon
Date: Sun, 23 Apr 2023 19:11:54 -0400
Subject: [PATCH] Adding details

---
 pages/techniques/zeroshot.en.mdx | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/pages/techniques/zeroshot.en.mdx b/pages/techniques/zeroshot.en.mdx
index 86164ee..0e71ff3 100644
--- a/pages/techniques/zeroshot.en.mdx
+++ b/pages/techniques/zeroshot.en.mdx
@@ -1,5 +1,7 @@
 # Zero-Shot Prompting
 
-LLMs today trained on large amounts of data and tuned to follow instructions, are capable of performing tasks zero-shot. We tried a few zero-shot examples in the previous section. Here is one of the examples we used:
+Large language models (LLMs) today, such as GPT-3, are tuned to follow instructions and are trained on large amounts of data, so they are capable of performing some tasks "zero-shot."
+
+We tried a few zero-shot examples in the previous section. Here is one of the examples we used:
 *Prompt:*
 ```
@@ -14,7 +16,7 @@ Sentiment:
 Neutral
 ```
 
-Note that in the prompt above we didn't provide the model with any examples -- that's the zero-shot capabilities at work.
+Note that in the prompt above we didn't provide the model with any examples of text alongside their classifications; the LLM already understands "sentiment" -- that's the zero-shot capability at work.
 
 Instruction tuning has shown to improve zero-shot learning [Wei et al. (2022)](https://arxiv.org/pdf/2109.01652.pdf). Instruction tuning is essentially the concept of finetuning models on datasets described via instructions. Furthermore, [RLHF](https://arxiv.org/abs/1706.03741) (reinforcement learning from human feedback) has been adopted to scale instruction tuning wherein the model is aligned to better fit human preferences. This recent development powers models like ChatGPT. We will discuss all these approaches and methods in upcoming sections.
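
For readers who want to try the patched example end-to-end, below is a minimal sketch of sending the zero-shot sentiment prompt above to a model. It assumes the `openai` Python client (pre-1.0 interface) and the `gpt-3.5-turbo` model; neither is specified by the patch itself, and the API key is a placeholder.

```python
# Minimal sketch: run the zero-shot sentiment prompt from the patch.
# Assumptions (not part of the patch): the `openai` Python package
# (pre-1.0 interface) and the "gpt-3.5-turbo" model.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# The prompt contains only an instruction and the input -- no labeled
# examples -- which is what makes this zero-shot.
prompt = """Classify the text into neutral, negative or positive.
Text: I think the vacation is okay.
Sentiment:"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # reduce sampling randomness for a classification task
)

print(response["choices"][0]["message"]["content"])  # e.g. "Neutral"
```

Setting `temperature=0` keeps the classification output as close to deterministic as the API allows; adding labeled input/output pairs to the prompt would turn this into few-shot prompting, covered later in the guide.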