diff --git a/pages/techniques/ape.en.mdx b/pages/techniques/ape.en.mdx
index b6086b6..e613e66 100644
--- a/pages/techniques/ape.en.mdx
+++ b/pages/techniques/ape.en.mdx
@@ -14,7 +14,7 @@ The first step involves a large language model (as an inference model) that is g
 
 APE discovers a better zero-shot CoT prompt than the human engineered "Let's think step by step" prompt ([Kojima et al., 2022](https://arxiv.org/abs/2205.11916)).
 
-The prompt "Let’s work this out in a step by step way to be sure we have the right answer." elicits chain-of-though reasoning and improves performance on the MultiArith and GSM8K benchmarks:
+The prompt "Let's work this out in a step by step way to be sure we have the right answer." elicits chain-of-thought reasoning and improves performance on the MultiArith and GSM8K benchmarks:
 
 Image Source: [Zhou et al., (2022)](https://arxiv.org/abs/2211.01910)
 