Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Publication date:
This Biblionetz object has only existed since September 2024. It is therefore quite possible that many of the links to older Biblionetz objects that should exist have not yet been created. This page may therefore be quite incomplete.
Abstracts
Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
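The Zero-shot-CoT technique summarized above can be sketched as a two-stage prompting scheme: first append the trigger phrase "Let's think step by step" to elicit a reasoning chain, then feed that chain back to extract the final answer. A minimal sketch, assuming a hypothetical text-completion function; only the prompt construction follows the abstract, and the exact answer-extraction wording in the paper varies by task.

```python
# Sketch of the two-stage Zero-shot-CoT prompting scheme.
# `call_llm` is a hypothetical stand-in for any text-completion API;
# it is NOT part of the paper or of any specific library.

REASONING_TRIGGER = "Let's think step by step."


def zero_shot_cot_prompts(question: str, reasoning: str = "") -> tuple:
    """Return the (stage-1, stage-2) prompts for Zero-shot-CoT.

    Stage 1 appends the trigger phrase after the question to elicit a
    step-by-step reasoning chain. Stage 2 feeds the generated reasoning
    back and asks for the final answer.
    """
    stage1 = f"Q: {question}\nA: {REASONING_TRIGGER}"
    stage2 = f"{stage1} {reasoning}\nTherefore, the answer is"
    return stage1, stage2


def answer_zero_shot_cot(question: str, call_llm) -> str:
    """Run both stages against a completion function `call_llm(prompt) -> str`."""
    stage1, _ = zero_shot_cot_prompts(question)
    reasoning = call_llm(stage1)              # model produces the reasoning chain
    _, stage2 = zero_shot_cot_prompts(question, reasoning)
    return call_llm(stage2)                   # model produces the final answer
```

The key point made in the abstract is that this single, task-agnostic template replaces hand-crafted few-shot exemplars across arithmetic, symbolic, and logical reasoning benchmarks.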
By Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa in the text Large Language Models are Zero-Shot Reasoners (2022)

This text mentions ...
People: Sandhini Agarwal, Dario Amodei, Amanda Askell, Christopher Berner, Tom B. Brown, Mark Chen, Benjamin Chess, Rewon Child, Jack Clark, Kewal Dhariwal, Prafulla Dhariwal, Aidan N. Gomez, Scott Gray, Tom Henighan, Ariel Herbert-Voss, Christopher Hesse, Llion Jones, Lukasz Kaiser, Jared Kaplan, Gretchen Krueger, Mateusz Litwin, Benjamin Mann, Sam McCandlish, Arvind Neelakantan, Niki Parmar, Illia Polosukhin, Alec Radford, Aditya Ramesh, Nick Ryder, Girish Sastry, Noam Shazeer, Pranav Shyam, Eric Sigler, Melanie Subbiah, Ilya Sutskever, Jakob Uszkoreit, Ashish Vaswani, Clemens Winter, Jeffrey Wu, Daniel M. Ziegler
Terms: Chain of Thought, Generative Machine-Learning-Systeme (GMLS), computer-generated text
This text presumably does not mention ...
Terms not mentioned: Chat-GPT, GMLS & Bildung
Citation graph
4 mentions
- The End of Programming (Matt Welsh) (2023)
- Talking about Large Language Models (Murray Shanahan) (2024)
- Das müssen Sie über KI wissen - c't 11/2024 (2024)
- Bedürfnisartikulationskompetenz - Prompt-Engineering: Von der Kunst, die KI zu nutzen (Ben Danneberg)
- Generative KI-Systeme in der Lehre systematisch anleiten (Timon Rimensberger) (2024)
Find elsewhere
Full text of this document
Large Language Models are Zero-Shot Reasoners: article as full text (745 kByte)
Beat and this text
Beat added this text to Biblionetz only within the last 6 months. Beat does not own a physical copy, but does own a digital one. A digital version is available on the internet (see above). Judging by the few entries in Biblionetz, he does not appear to have actually read it. So far there are also only a few objects in Biblionetz that cite this work.