import { Callout, FileTree } from 'nextra-theme-docs'

# LLM Collection

This section provides a collection and summary of notable and foundational LLMs.

## Models
| Model | Release Date | Size (B) | Checkpoints | Description |
| --- | --- | --- | --- | --- |
| [Falcon LLM](https://falconllm.tii.ae/) | Sep 2023 | 7, 40, 180 | [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b), [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B) | Falcon LLM is a family of foundational large language models (LLMs) from TII. The largest release, Falcon-180B, has 180 billion parameters and was trained on 3,500 billion tokens. |
| [Mistral-7B-v0.1](https://arxiv.org/abs/2310.06825) | Sep 2023 | 7 | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | Mistral-7B-v0.1 is a pretrained generative text model with 7 billion parameters. The model is based on a transformer architecture with features such as Grouped-Query Attention, a byte-fallback BPE tokenizer, and Sliding-Window Attention. |
| [CodeLlama](https://arxiv.org/abs/2308.12950) | Aug 2023 | 7, 13, 34 | [CodeLlama-7B](https://huggingface.co/codellama/CodeLlama-7b-hf), [CodeLlama-13B](https://huggingface.co/codellama/CodeLlama-13b-hf), [CodeLlama-34B](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | The Code Llama family is designed for general code synthesis and understanding, with instruction-tuned variants for instruction following and safer deployment. The models are auto-regressive and use an optimized transformer architecture. They are intended for commercial and research use in English and relevant programming languages. |
| [Llama-2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) | Jul 2023 | 7, 13, 70 | [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b), [Llama-2-13B](https://huggingface.co/meta-llama/Llama-2-13b), [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | LLaMA-2, developed by Meta AI, maintains a similar architecture to LLaMA-1 but was trained on 40% more data. It includes both foundational models and dialogue-fine-tuned models, known as LLaMA-2 Chat, and is available for many commercial uses, with some restrictions. |
| [XGen-7B-8K](https://arxiv.org/abs/2309.03450) | Jul 2023 | 7 | [XGen-7B-8K](https://huggingface.co/Salesforce/xgen-7b-8k-inst) | The XGen-7B-8K, developed by Salesforce AI Research, is a 7B parameter language model. |
| [Claude-2](https://www.anthropic.com/index/claude-2) | Jul 2023 | 130 | - | Claude 2 is a foundational LLM built by Anthropic, designed to be safer and more "steerable" than its previous version. It is conversational and can be used for a variety of tasks like customer support, Q&A, and more. It can process large amounts of text and is well-suited for applications that require handling extensive data, such as documents, emails, FAQs, and chat transcripts. |
| [Tulu](https://arxiv.org/abs/2306.04751) | Jun 2023 | 7, 13, 30, 65 | [Tulu-7B](https://huggingface.co/allenai/tulu-7b), [Tulu-13B](https://huggingface.co/allenai/tulu-13b), [Tulu-30B](https://huggingface.co/allenai/tulu-30b), [Tulu-65B](https://huggingface.co/allenai/tulu-65b) | Tulu is a family of models developed by the Allen Institute for AI. The models are LLaMA models fine-tuned on a mixture of instruction datasets, including FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT. They are designed to follow complex instructions across various NLP tasks. |
| [ChatGLM2-6B](https://arxiv.org/abs/2103.10360) | Jun 2023 | 6 | [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | ChatGLM2-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B. It has improved performance, longer context capabilities, more efficient inference, and an open license for academic and commercial use. The model uses a hybrid objective function and has been trained with 1.4T bilingual tokens. It shows substantial improvements in performance on various datasets compared to its first-generation counterpart. |
| [Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b) | Jun 2023 | 13 | [Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b) | Nous-Hermes-13B is a language model fine-tuned by Nous Research on over 300,000 instructions. |
| [Baize-v2](https://arxiv.org/pdf/2304.01196.pdf) | May 2023 | 7, 13 | [Baize-v2-13B](https://huggingface.co/project-baize/baize-v2-13b) | Baize-v2 is an open-source chat model developed by UCSD and Sun Yat-Sen University, fine-tuned with LoRA, and trained with supervised fine-tuning (SFT) and self-distillation with feedback (SDF). |
| [RWKV-4-Raven](https://arxiv.org/abs/2305.13048) | May 2023 | 1.5, 3, 7, 14 | [RWKV-4-Raven](https://huggingface.co/BlinkDL/rwkv-4-raven) | RWKV-4-Raven is a series of RWKV-4 models fine-tuned on datasets such as Alpaca, CodeAlpaca, Guanaco, GPT4All, and ShareGPT. The underlying RWKV architecture is a purely recurrent (RNN) language model. |
| [Guanaco](https://arxiv.org/abs/2305.14314) | May 2023 | 7, 13, 33, 65 | [Guanaco-7B](https://huggingface.co/timdettmers/guanaco-7b), [Guanaco-13B](https://huggingface.co/timdettmers/guanaco-13b), [Guanaco-33B](https://huggingface.co/timdettmers/guanaco-33b), [Guanaco-65B](https://huggingface.co/timdettmers/guanaco-65b) | Guanaco models are open-source chatbots fine-tuned through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are intended for research purposes and allow for cheap, local experimentation with high-quality chatbot systems. |
| [PaLM 2](https://arxiv.org/abs/2305.10403) | May 2023 | - | - | A Language Model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. |
| [Gorilla](https://arxiv.org/abs/2305.15334v1) | May 2023 | 7 | [Gorilla](https://github.com/ShishirPatil/gorilla) | Gorilla: Large Language Model Connected with Massive APIs |
| [RedPajama-INCITE](https://www.together.xyz/blog/redpajama-models-v1) | May 2023 | 3, 7 | [RedPajama-INCITE](https://huggingface.co/togethercomputer) | A family of models including base, instruction-tuned & chat models. |
| [LIMA](https://arxiv.org/abs/2305.11206v1) | May 2023 | 65 | - | A 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. |
| [Replit Code](https://huggingface.co/replit) | May 2023 | 3 | [Replit Code](https://huggingface.co/replit) | The replit-code-v1-3b model is a 2.7B-parameter LLM trained on 20 programming languages from the Stack Dedup v1.2 dataset. |
| [h2oGPT](https://arxiv.org/pdf/2306.08161.pdf) | May 2023 | 7, 12, 20, 40 | [h2oGPT](https://github.com/h2oai/h2ogpt) | h2oGPT is an LLM fine-tuning framework and chatbot UI with document question-answering capabilities. |
| [CodeGen2](https://arxiv.org/abs/2305.02309) | May 2023 | 1, 3, 7, 16 | [CodeGen2](https://github.com/salesforce/codegen2) | Code models for program synthesis. |
| [CodeT5 and CodeT5+](https://arxiv.org/abs/2305.07922) | May 2023 | 16 | [CodeT5](https://github.com/salesforce/codet5) | CodeT5 and CodeT5+ models for Code Understanding and Generation from Salesforce Research. |
| [StarCoder](https://huggingface.co/blog/starcoder) | May 2023 | 15 | [StarCoder](https://huggingface.co/bigcode/starcoder) | StarCoder: A State-of-the-Art LLM for Code |
| [MPT](https://www.mosaicml.com/blog/mpt-7b) | May 2023 | 7, 30 | [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) | MosaicML's MPT models are open-source, commercially licensed Large Language Models, offering customizable AI solutions optimized for various NLP tasks. |
| [DLite](https://medium.com/ai-squared/announcing-dlite-v2-lightweight-open-llms-that-can-run-anywhere-a852e5978c6e) | May 2023 | 0.124 - 1.5 | [DLite-v2-1.5B](https://huggingface.co/aisquared/dlite-v2-1_5b) | Lightweight instruction following models which exhibit ChatGPT-like interactivity. |
| [WizardLM](https://arxiv.org/abs/2304.12244) | Apr 2023 | 13, 30, 70 | [WizardLM-13B](https://huggingface.co/WizardLM/WizardLM-13B-V1.2), [WizardLM-30B](https://huggingface.co/WizardLM/WizardLM-30B-V1.0), [WizardLM-70B](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | WizardLM is a family of large language models designed to follow complex instructions. The models perform well in coding, mathematical reasoning, and open-domain conversation, are license-friendly, and adopt the Vicuna prompt format for multi-turn conversations. They are developed by the WizardLM Team for a wide range of NLP tasks. |
| [FastChat-T5-3B](https://arxiv.org/abs/2306.05685) | Apr 2023 | 3 | [FastChat-T5-3B](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) | FastChat-T5 is an open-source chatbot trained by fine-tuning Flan-t5-xl (3B parameters) on user-shared conversations collected from ShareGPT. It's based on an encoder-decoder transformer architecture and can autoregressively generate responses to users' inputs. |
| [GPT4All-13B-Snoozy](https://gpt4all.io/reports/GPT4All_Technical_Report_3.pdf) | Apr 2023 | 13 | [GPT4All-13B-Snoozy](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy) | GPT4All-13B-Snoozy is a GPL-licensed chatbot trained over a massive curated corpus of assistant interactions including word problems, multi-turn dialogue, code, poems, songs, and stories. It has been fine-tuned from LLaMA 13B and is developed by Nomic AI. The model is designed for assistant-style interaction data and is primarily in English. |
| [Koala-13B](https://bair.berkeley.edu/blog/2023/04/03/koala/) | Apr 2023 | 13 | [Koala-13B](https://huggingface.co/young-geng/koala) | Koala-13B is a chatbot created by Berkeley AI Research (BAIR). It is fine-tuned on Meta's LLaMA and focuses on dialogue data scraped from the web. The model aims to balance performance and cost, providing a lighter, open-source alternative to models like ChatGPT. It has been trained on interaction data that includes conversations with highly capable closed-source models such as ChatGPT. |
| [OpenAssistant (Llama family)](https://arxiv.org/abs/2304.07327) | Apr 2023 | 30, 70 | [Llama-30b-oasst](https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor), [Llama2-70b-oasst](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10) | OpenAssistant-LLaMA models are language models from OpenAssistant's work on the LLaMA models. They support CPU + GPU inference using the GGML format and aim to provide an open-source alternative for instruction-following tasks. |
| [Dolly](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm) | Apr 2023 | 3, 7, 12 | [Dolly-v2-3B](https://huggingface.co/databricks/dolly-v2-3b), [Dolly-v2-7B](https://huggingface.co/databricks/dolly-v2-7b), [Dolly-v2-12B](https://huggingface.co/databricks/dolly-v2-12b) | An instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use. |
| [StableLM](https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models) | Apr 2023 | 3, 7 | [StableLM-Alpha-3B](https://huggingface.co/stabilityai/stablelm-tuned-alpha-3b), [StableLM-Alpha-7B](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) | Stability AI's StableLM series of language models |
| [Pythia](https://arxiv.org/abs/2304.01373) | Apr 2023 | 0.070 - 12 | [Pythia](https://github.com/eleutherai/pythia) | A suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. |
| [Open Assistant (Pythia Family)](https://open-assistant.io/) | Mar 2023 | 12 | [Open Assistant](https://huggingface.co/OpenAssistant) | OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so. |
| [Med-PaLM 2](https://arxiv.org/abs/2305.09617v1) | Mar 2023 | - | - | Towards Expert-Level Medical Question Answering with Large Language Models |
| [ChatGLM-6B](https://chatglm.cn/blog) | Mar 2023 | 6 | [ChatGLM-6B](https://huggingface.co/THUDM/chatglm-6b) | ChatGLM-6B is an open-source, Chinese-English bilingual dialogue model based on the General Language Model (GLM) architecture with 6.2 billion parameters. Despite its small size causing some factual and mathematical-logic issues, it is adept at Chinese question answering, summarization, and conversational tasks thanks to its training on over 1 trillion English and Chinese tokens. |
| [GPT-3.5-turbo](https://openai.com/blog/chatgpt) | Mar 2023 | 175 | - | GPT-3.5-Turbo is OpenAI's advanced language model optimized for chat but also works well for traditional completion tasks. It offers better performance across all aspects compared to GPT-3 and is 10 times cheaper per token. |
| [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/) | Mar 2023 | 7, 13, 33 | [Vicuna-7B](https://huggingface.co/lmsys/vicuna-7b-v1.5), [Vicuna-13B](https://huggingface.co/lmsys/vicuna-13b-v1.5) | Vicuna is a family of auto-regressive language models based on the transformer architecture. It's fine-tuned from LLaMA and primarily intended for research on large language models and chatbots. It's developed by LMSYS and has a non-commercial license. |
| [Alpaca-13B](https://crfm.stanford.edu/2023/03/13/alpaca.html) | Mar 2023 | 13 | - | Alpaca is an instruction-following language model fine-tuned from Meta's LLaMA. It's designed for academic research to address issues like misinformation and toxicity. Alpaca is trained on 52K instruction-following demonstrations and aims to be a more accessible option for academic study. It is not intended for commercial use due to licensing and safety concerns. |
| [Claude-1](https://www.anthropic.com/index/introducing-claude) | Mar 2023 | 137 | - | Claude is a foundational large language model (LLM) built by Anthropic. It is designed to be a helpful, honest, and harmless AI assistant. It can perform a wide variety of conversational and text-processing tasks and is accessible through a chat interface and API. |
| [Cerebras-GPT](https://arxiv.org/abs/2304.03208) | Mar 2023 | 0.111 - 13 | [Cerebras-GPT](https://huggingface.co/cerebras) | Cerebras-GPT: Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
| [BloombergGPT](https://arxiv.org/abs/2303.17564v1) | Mar 2023 | 50 | - | BloombergGPT: A Large Language Model for Finance |
| [PanGu-Σ](https://arxiv.org/abs/2303.10845v1) | Mar 2023 | 1085 | - | PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing |
| [GPT-4](https://arxiv.org/abs/2303.08774v3) | Mar 2023 | - | - | GPT-4 Technical Report |
| [LLaMA](https://arxiv.org/abs/2302.13971v1) | Feb 2023 | 7, 13, 33, 65 | [LLaMA](https://github.com/facebookresearch/llama) | LLaMA: Open and Efficient Foundation Language Models |
| [OPT-IML](https://arxiv.org/abs/2212.12017v3) | Dec 2022 | 30, 175 | [OPT-IML](https://github.com/facebookresearch/metaseq/tree/main/projects/OPT-IML#pretrained-model-weights) | OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization |
| [ChatGPT](https://openai.com/blog/chatgpt) | Nov 2022 | - | - | A model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. |
| [Galactica](https://arxiv.org/abs/2211.09085v1) | Nov 2022 | 0.125 - 120 | [Galactica](https://huggingface.co/models?other=galactica) | Galactica: A Large Language Model for Science |
| [mT0](https://arxiv.org/abs/2211.01786v1) | Nov 2022 | 13 | [mT0-xxl](https://huggingface.co/bigscience/mt0-xxl) | Crosslingual Generalization through Multitask Finetuning |
| [BLOOM](https://arxiv.org/abs/2211.05100v3) | Nov 2022 | 176 | [BLOOM](https://huggingface.co/bigscience/bloom) | BLOOM: A 176B-Parameter Open-Access Multilingual Language Model |
| [U-PaLM](https://arxiv.org/abs/2210.11399v2) | Oct 2022 | 540 | - | Transcending Scaling Laws with 0.1% Extra Compute |
| [UL2](https://arxiv.org/abs/2205.05131v3) | Oct 2022 | 20 | [UL2, Flan-UL2](https://github.com/google-research/google-research/tree/master/ul2#checkpoints) | UL2: Unifying Language Learning Paradigms |
| [Flan-T5](https://arxiv.org/abs/2210.11416v5) | Oct 2022 | 11 | [Flan-T5-xxl](https://huggingface.co/google/flan-t5-xxl) | Scaling Instruction-Finetuned Language Models |
| [GLM-130B](https://arxiv.org/abs/2210.02414v1) | Oct 2022 | 130 | [GLM-130B](https://github.com/THUDM/GLM-130B) | GLM-130B: An Open Bilingual Pre-trained Model |
| [Sparrow](https://arxiv.org/abs/2209.14375) | Sep 2022 | 70 | - | Improving alignment of dialogue agents via targeted human judgements |
| [AlexaTM](https://arxiv.org/abs/2208.01448v2) | Aug 2022 | 20 | - | AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model |
| [OPT](https://arxiv.org/abs/2205.01068) | May 2022 | 175 | [OPT-13B](https://huggingface.co/facebook/opt-13b), [OPT-66B](https://huggingface.co/facebook/opt-66b) | OPT: Open Pre-trained Transformer Language Models |
| [PaLM](https://arxiv.org/abs/2204.02311v5) | Apr 2022 | 540 | - | PaLM: Scaling Language Modeling with Pathways |
| [Tk-Instruct](https://arxiv.org/abs/2204.07705v3) | Apr 2022 | 11 | [Tk-Instruct-11B](https://huggingface.co/allenai/tk-instruct-11b-def) | Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks |
| [GPT-NeoX-20B](https://arxiv.org/abs/2204.06745v1) | Apr 2022 | 20 | [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b) | GPT-NeoX-20B: An Open-Source Autoregressive Language Model |
| [Chinchilla](https://arxiv.org/abs/2203.15556) | Mar 2022 | 70 | - | Shows that for a compute budget, the best performances are not achieved by the largest models but by smaller models trained on more data. |
| [InstructGPT](https://arxiv.org/abs/2203.02155v1) | Mar 2022 | 175 | - | Training language models to follow instructions with human feedback |
| [CodeGen](https://arxiv.org/abs/2203.13474v5) | Mar 2022 | 0.350 - 16 | [CodeGen](https://huggingface.co/models?search=salesforce+codegen) | CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis |
| [AlphaCode](https://arxiv.org/abs/2203.07814v1) | Feb 2022 | 41 | - | Competition-Level Code Generation with AlphaCode |
| [MT-NLG](https://arxiv.org/abs/2201.11990v3) | Jan 2022 | 530 | - | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model |
| [LaMDA](https://arxiv.org/abs/2201.08239v3) | Jan 2022 | 137 | - | LaMDA: Language Models for Dialog Applications |
| [GLaM](https://arxiv.org/abs/2112.06905) | Dec 2021 | 1200 | - | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts |
| [Gopher](https://arxiv.org/abs/2112.11446v2) | Dec 2021 | 280 | - | Scaling Language Models: Methods, Analysis & Insights from Training Gopher |
| [WebGPT](https://arxiv.org/abs/2112.09332v3) | Dec 2021 | 175 | - | WebGPT: Browser-assisted question-answering with human feedback |
| [ERNIE 3.0 Titan](https://arxiv.org/abs/2112.12731v1) | Dec 2021 | 260 | - | ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation |
| [Yuan 1.0](https://arxiv.org/abs/2110.04725v2) | Oct 2021 | 245 | - | Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and Few-Shot Learning |
| [T0](https://arxiv.org/abs/2110.08207) | Oct 2021 | 11 | [T0](https://huggingface.co/bigscience/T0) | Multitask Prompted Training Enables Zero-Shot Task Generalization |
| [FLAN](https://arxiv.org/abs/2109.01652v5) | Sep 2021 | 137 | - | Finetuned Language Models Are Zero-Shot Learners |
| [HyperCLOVA](https://arxiv.org/abs/2109.04650) | Sep 2021 | 82 | - | What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers |
| [Jurassic-1](https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf) | Aug 2021 | 178 | - | Jurassic-1: Technical Details and Evaluation |
| [ERNIE 3.0](https://arxiv.org/abs/2107.02137v1) | Jul 2021 | 10 | - | ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation|
| [Codex](https://arxiv.org/abs/2107.03374v2) | Jul 2021 | 12 | - | Evaluating Large Language Models Trained on Code |
| [GPT-J-6B](https://arankomatsuzaki.wordpress.com/2021/06/04/gpt-j/) | Jun 2021 | 6 | [GPT-J-6B](https://github.com/kingoflolz/mesh-transformer-jax/#gpt-j-6b) | A 6 billion parameter, autoregressive text generation model trained on The Pile. |
| [CPM-2](https://arxiv.org/abs/2106.10715v3) | Jun 2021 | 198 | [CPM](https://github.com/TsinghuaAI/CPM) | CPM-2: Large-scale Cost-effective Pre-trained Language Models |
| [PanGu-α](https://arxiv.org/abs/2104.12369v1) | Apr 2021 | 13 | [PanGu-α](https://gitee.com/mindspore/models/tree/master/official/nlp/Pangu_alpha#download-the-checkpoint) | PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation |
| [mT5](https://arxiv.org/abs/2010.11934v3) | Oct 2020 | 13 | [mT5](https://github.com/google-research/multilingual-t5#released-model-checkpoints) | mT5: A massively multilingual pre-trained text-to-text transformer |
| [BART](https://arxiv.org/abs/1910.13461) | Jul 2020 | - | [BART](https://github.com/facebookresearch/fairseq) | Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension |
| [GShard](https://arxiv.org/abs/2006.16668v1) | Jun 2020 | 600 | - | GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding |
| [GPT-3](https://arxiv.org/abs/2005.14165) | May 2020 | 175 | - | Language Models are Few-Shot Learners |
| [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) | Nov 2019 | 1.5 | [GPT-2](https://github.com/openai/gpt-2) | Language Models are Unsupervised Multitask Learners |
| [T5](https://arxiv.org/abs/1910.10683) | Oct 2019 | 0.06 - 11 | [Flan-T5](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| [CTRL](https://arxiv.org/abs/1909.05858) | Sep 2019 | 1.63 | [CTRL](https://github.com/salesforce/ctrl) | CTRL: A Conditional Transformer Language Model for Controllable Generation |
| [ALBERT](https://arxiv.org/abs/1909.11942) | Sep 2019 | 0.235 | [ALBERT](https://github.com/google-research/ALBERT) | A Lite BERT for Self-supervised Learning of Language Representations |
| [RoBERTa](https://arxiv.org/abs/1907.11692) | Jul 2019 | 0.125 - 0.355 | [RoBERTa](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) | A Robustly Optimized BERT Pretraining Approach |
| [XLNet](https://arxiv.org/abs/1906.08237) | Jun 2019 | - | [XLNet](https://github.com/zihangdai/xlnet#released-models) | Generalized Autoregressive Pretraining for Language Understanding |
| [BERT](https://arxiv.org/abs/1810.04805)| Oct 2018 | - | [BERT](https://github.com/google-research/bert) | Bidirectional Encoder Representations from Transformers |
| [GPT](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) | Jun 2018 | - | [GPT](https://github.com/openai/finetune-transformer-lm) | Improving Language Understanding by Generative Pre-Training |
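
Most entries in the Checkpoints column point to open weights hosted on the Hugging Face Hub, so they can typically be loaded with the `transformers` library. The snippet below is a minimal sketch (assuming `transformers` and `torch` are installed, your hardware can hold the weights, and you have accepted any license the checkpoint requires on the Hub) that loads one of the listed checkpoints, Mistral-7B-v0.1, and generates a short completion. Any other checkpoint ID from the table could be substituted, though chat- and instruction-tuned models expect their own prompt formats.

```python
# Minimal sketch: load an open-weight checkpoint listed in the table above.
# Assumes `transformers` and `torch` are installed; swap in another
# checkpoint ID from the Checkpoints column if desired.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # from the Mistral-7B-v0.1 row
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    # Half precision on GPU to reduce memory use; full precision on CPU.
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

prompt = "A large language model is"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Greedy decoding keeps the example deterministic; sampling is also possible.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
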
<Callout emoji="⚠️">
This section is under development.
</Callout>

Data adapted from [Papers with Code](https://paperswithcode.com/methods/category/language-models) and the recent work by [Zhao et al. (2023)](https://arxiv.org/pdf/2303.18223.pdf).