removed cards

pull/46/head
Elvis Saravia 2023-03-12 14:43:31 -06:00
parent 48c081b892
commit bc1ebcd01f
6 changed files with 4 additions and 200 deletions

View File

@ -3,19 +3,3 @@
In this guide we will cover some advanced and interesting ways we can use prompt engineering to perform useful and more advanced tasks.
**Note that this section is under heavy development.**
import { Card, Cards } from 'nextra-theme-docs'
<Cards num={12}>
<Card
arrow
title="Generating Data"
href="/applications/generating">
</Card>
<Card
arrow
title="Program-Aided Language Models"
href="/applications/pal">
</Card>
</Cards>

View File

@ -3,53 +3,3 @@
Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs). Researchers use prompt engineering to improve the capacity of LLMs on a wide range of common and complex tasks such as question answering and arithmetic reasoning. Developers use prompt engineering to design robust and effective prompting techniques that interface with LLMs and other tools.
Motivated by the high interest in developing with LLMs, we have created this new prompt engineering guide that contains all the latest papers, learning guides, lectures, references, and tools related to prompt engineering.
import { Card, Cards } from 'nextra-theme-docs'
<Cards num={9}>
<Card
arrow
title="Introduction"
href="/introduction">
</Card>
<Card
arrow
title="Techniques"
href="/techniques">
</Card>
<Card
arrow
title="Applications"
href="/applications">
</Card>
<Card
arrow
title="Models"
href="/models">
</Card>
<Card
arrow
title="Risks & Misuses"
href="/risks">
</Card>
<Card
arrow
title="Papers"
href="/papers">
</Card>
<Card
arrow
title="Tools"
href="/tools">
</Card>
<Card
arrow
title="Datasets"
href="/datasets">
</Card>
<Card
arrow
title="Additional Readings"
href="/readings">
</Card>
</Cards>

View File

@ -5,38 +5,3 @@ Prompt engineering is a relatively new discipline for developing and optimizing
This guide covers the basics of standard prompts to provide a rough idea on how to use prompts to interact and instruct large language models (LLMs).
All examples are tested with `text-davinci-003` (using OpenAI's playground) unless otherwise specified. It uses the default configurations, e.g., `temperature=0.7` and `top-p=1`.
import { Card, Cards } from 'nextra-theme-docs'
<Cards num={6}>
<Card
arrow
title="Basic Prompts"
href="/introduction/basics">
</Card>
<Card
arrow
title="LLM Settings"
href="/introduction/settings">
</Card>
<Card
arrow
title="Standard Prompts"
href="/introduction/standard">
</Card>
<Card
arrow
title="Prompt Elements"
href="/introduction/elements">
</Card>
<Card
arrow
title="General Tips for Designing Prompts"
href="/introduction/tips">
</Card>
<Card
arrow
title="Examples of Prompts"
href="/introduction/examples">
</Card>
</Cards>

View File

@ -1,13 +1,3 @@
# Models
In this section, we will cover some of the capabilities of language models by applying the latest and most advanced prompting engineering techniques.
import { Card, Cards } from 'nextra-theme-docs'
<Cards num={1}>
<Card
arrow
title="ChatGPT"
href="/models/chatgpt">
</Card>
</Cards>

View File

@ -1,23 +1,3 @@
# Risks & Misuses
We have seen already how effective well-crafted prompts can be for various tasks using techniques like few-shot learning. As we think about building real-world applications on top of LLMs, it becomes crucial to think about the misuses, risks, and safety involved with language models. This section focuses on highlighting some of the risks and misuses of LLMs via techniques like prompt injections. It also highlights harmful behaviors including how to mitigate via effective prompting techniques. Other topics of interest include generalizability, calibration, biases, social biases, and factuality to name a few.
import { Card, Cards } from 'nextra-theme-docs'
<Cards num={3}>
<Card
arrow
title="Adversarial Prompting"
href="/risks/adversarial">
</Card>
<Card
arrow
title="Factuality"
href="/risks/factuality">
</Card>
<Card
arrow
title="Biases"
href="/risks/biases">
</Card>
</Cards>

View File

@ -3,68 +3,3 @@
By this point, it should be obvious that it helps to improve prompts to get better results on different tasks. That's the whole idea behind prompt engineering.
While those examples were fun, let's cover a few concepts more formally before we jump into more advanced concepts.
import { Card, Cards } from 'nextra-theme-docs'
<Cards num={12}>
<Card
arrow
title="Zero-shot Prompting"
href="/techniques/zeroshot">
</Card>
<Card
arrow
title="Few-shot Prompting"
href="/techniques/fewshot">
</Card>
<Card
arrow
title="Chain-of-Thought Prompting"
href="/techniques/cot">
</Card>
<Card
arrow
title="Zero-shot CoT"
href="/techniques/zerocot">
</Card>
<Card
arrow
title="Self-Consistency"
href="/techniques/consistency">
</Card>
<Card
arrow
title="Generate Knowledge Prompting"
href="/techniques/knowledge">
</Card>
<Card
arrow
title="Automatic Prompt Engineer"
href="/techniques/ape">
</Card>
<Card
arrow
title="Active-Prompt"
href="/techniques/activeprompt">
</Card>
<Card
arrow
title="Directional Stimulus Prompting"
href="/techniques/dsp">
</Card>
<Card
arrow
title="ReAct"
href="/techniques/react">
</Card>
<Card
arrow
title="Multimodal CoT"
href="/techniques/multimodalcot">
</Card>
<Card
arrow
title="Graph Prompting"
href="/techniques/graph">
</Card>
</Cards>