Merge pull request #300 from sarah-ahm/patch-1

Added more LLM settings
main
Elvis Saravia 2023-10-12 09:47:57 -06:00 committed by GitHub
commit 7b88cf1567
1 changed file with 11 additions and 1 deletion


@@ -6,6 +6,16 @@ When working with prompts, you interact with the LLM via an API or directly. You
**Top_p** - Similarly, with `top_p`, a sampling technique used together with temperature called nucleus sampling, you can control how deterministic the model is at generating a response. If you are looking for exact and factual answers, keep this low. If you are looking for more diverse responses, increase it to a higher value.
The general recommendation is to alter temperature or top_p, not both.
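To make the two settings concrete, here is a small, self-contained sketch of how they act on a model's output distribution (toy logits and a pure-Python softmax stand in for a real model; the function names are illustrative, not any provider's API):

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then softmax into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def nucleus(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return sorted(kept)

logits = [2.0, 1.0, 0.5, -1.0]

# Low temperature sharpens the distribution toward the top token...
cold = apply_temperature(logits, 0.2)
# ...while high temperature flattens it, making sampling more random.
hot = apply_temperature(logits, 2.0)
print(cold[0] > hot[0])  # True: the top token dominates more when temperature is low

# A low top_p restricts sampling to only the most likely tokens.
print(nucleus(apply_temperature(logits, 1.0), 0.5))  # [0]: only the top token survives
```

Lowering either setting narrows what the model can sample, which is why changing both at once makes the effect hard to reason about.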
**Max Length** - You can manage the number of tokens the model generates by adjusting the 'max length'. Specifying a max length helps you prevent long or irrelevant responses and control costs.
**Stop Sequences** - A 'stop sequence' is a string that stops the model from generating tokens. Specifying stop sequences is another way to control the length and structure of the model's response. For example, you can tell the model to generate lists that have no more than 10 items by adding "11" as a stop sequence.
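The two length controls can be sketched client-side (real APIs enforce them server-side during decoding; the token stream and function name below are illustrative):

```python
def generate(tokens, max_length=None, stop_sequences=()):
    """Emit tokens one at a time, halting at max_length or at a stop sequence.

    `tokens` stands in for the model's output stream.
    """
    out = []
    for tok in tokens:
        if any(stop in tok for stop in stop_sequences):
            break  # stop sequence reached: this token is not emitted
        out.append(tok)
        if max_length is not None and len(out) >= max_length:
            break  # token budget exhausted
    return out

stream = ["1.", " apples", "\n", "2.", " pears", "\n", "11.", " plums"]

# Adding "11." as a stop sequence keeps a numbered list to at most 10 items.
print(generate(stream, stop_sequences=["11."]))
# ['1.', ' apples', '\n', '2.', ' pears', '\n']

# max_length caps the number of generated tokens outright.
print(generate(stream, max_length=3))
# ['1.', ' apples', '\n']
```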
**Frequency Penalty** - The 'frequency penalty' applies a penalty on the next token proportional to how many times that token already appeared in the response and prompt. The higher the frequency penalty, the less likely a word is to appear again. This setting reduces the repetition of words in the model's response by giving tokens that appear more often a proportionally higher penalty.
**Presence Penalty** - The 'presence penalty' also applies a penalty on repeated tokens but, unlike the frequency penalty, the penalty is the same for all repeated tokens. A token that appears twice and a token that appears 10 times are penalized the same. This setting prevents the model from repeating phrases too often in its response. If you want the model to generate diverse or creative text, you might want to use a higher presence penalty. Or, if you need the model to stay focused, try using a lower presence penalty.
Similar to temperature and top_p, the general recommendation is to alter the frequency or presence penalty, not both.
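The difference between the two penalties can be shown with a small sketch of how they adjust next-token logits (the function name and toy values are illustrative; frequency scales with the count, presence is a flat deduction for any token seen at least once):

```python
from collections import Counter

def penalized_logits(logits, generated_ids, frequency_penalty=0.0, presence_penalty=0.0):
    """Apply frequency and presence penalties to next-token logits."""
    counts = Counter(generated_ids)
    adjusted = list(logits)
    for token_id, count in counts.items():
        adjusted[token_id] -= count * frequency_penalty  # grows with repetition
        adjusted[token_id] -= presence_penalty           # same whether seen 2 or 10 times
    return adjusted

logits = [1.0, 1.0, 1.0]
history = [0, 0, 0, 1]  # token 0 appeared 3 times, token 1 once

# Frequency penalty: token 0 is penalized three times as hard as token 1.
print(penalized_logits(logits, history, frequency_penalty=0.5))
# [-0.5, 0.5, 1.0]

# Presence penalty: tokens 0 and 1 get the same flat penalty.
print(penalized_logits(logits, history, presence_penalty=0.5))
# [0.5, 0.5, 1.0]
```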
Before starting with some basic examples, keep in mind that your results may vary depending on the version of the LLM you use.