Verified claim · AI-ML · 100% confidence
Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
Last verified 2026-05-16 · Methodology veritas-v0.1 · a8503ad535423b54
Structured fields
- Subject: Chain-of-Thought (CoT)
- Predicate: introduced_in
- Object: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Confidence: 100%
- Tags: chain-of-thought · cot · wei · google-brain · prompting · reasoning · foundational · neurips · 2022 · introduced_in
Sources (2)
[1] preprint · arXiv / NeurIPS 2022 (Wei, Wang, Schuurmans, Bosma, Ichter, Xia, Chi, Le, Zhou / Google Brain) · 2022-01-28
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
“We explore how generating a chain of thought — a series of intermediate reasoning steps — significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting.”
[2] peer reviewed · NeurIPS 2022 · 2022-12-08
Chain-of-Thought Prompting — NeurIPS 2022 proceedings
Cite this claim
Ready-to-paste citation (Markdown / plain text):
Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. — SourceScore Claim a8503ad535423b54 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/a8503ad535423b54.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/a8503ad535423b54/" width="100%" height="360" frameborder="0" loading="lazy" title="Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models."></iframe>
Preview: open in new tab
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.
Chain-of-Thought prompting introduced in paper: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., 2022).
3af924da138ff84c · 100% confidence · shares 6 tags (chain-of-thought, cot, prompting…)
ReAct prompting pattern introduced in: Yao et al. 2022 — Synergizing Reasoning and Acting in Language Models.
95193a0b79c777e8 · 100% confidence · shares 4 tags (prompting, foundational, 2022…)
ReAct (Reasoning + Acting) introduced in paper: ReAct: Synergizing Reasoning and Acting in Language Models (Yao et al., 2022).
fceea64fa7d04d3a · 100% confidence · shares 3 tags (reasoning, foundational, 2022)
InstructGPT introduced in: Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT.
590b9de765b8126e · 100% confidence · shares 3 tags (foundational, 2022, introduced_in)
Tree of Thoughts introduced in: Yao et al. 2023 — deliberate problem solving with LLMs.
9d7676f71d1ee4f3 · 100% confidence · shares 3 tags (reasoning, prompting, introduced_in)
Frequently asked questions
Is the claim "Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." verified?
Yes — SourceScore verified this claim with 100% confidence as of 2026-05-16. The verification uses 2 primary sources cross-referenced against the SourceScore methodology (version veritas-v0.1). Full source list + signed JSON envelope linked below.
What is the evidence for "Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models."?
Evidence comes from 2 primary sources: arXiv / NeurIPS 2022 (Wei, Wang, Schuurmans, Bosma, Ichter, Xia, Chi, Le, Zhou / Google Brain), NeurIPS 2022. Each source is listed below with verbatim excerpts and URLs. The signed JSON envelope at https://sourcescore.org/api/v1/claims/a8503ad535423b54.json includes an HMAC-SHA256 signature for audit verification.
When was this claim last verified by SourceScore?
Last verified 2026-05-16 under methodology version veritas-v0.1. The signed JSON envelope is dated and cryptographically signed for audit trail. Re-verification cadence depends on the claim type and source freshness.
How can I cite this SourceScore claim in my code or article?
Fetch the signed JSON envelope from https://sourcescore.org/api/v1/claims/a8503ad535423b54.json which includes the verbatim claim, primary sources, confidence, methodology version, last-verified date, and HMAC-SHA256 signature for audit. The CC-BY-4.0 license permits commercial use with attribution to SourceScore.
Use this claim in your code
Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.
cURL
curl https://sourcescore.org/api/v1/claims/a8503ad535423b54.json
JavaScript / TypeScript
const r = await fetch("https://sourcescore.org/api/v1/claims/a8503ad535423b54.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models."
Python
import httpx
r = httpx.get("https://sourcescore.org/api/v1/claims/a8503ad535423b54.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models."
LangChain (retrieve-then-cite)
from langchain_core.tools import tool
import httpx

@tool
def get_chain_of_thought_cot_fact() -> dict:
    """Fetch the verified SourceScore claim for Chain-of-Thought (CoT)."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/a8503ad535423b54.json")
    return r.json()
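The signed envelope can also be checked offline once fetched. A minimal sketch of local HMAC-SHA256 verification follows; note that the field names (`signature`, `claim`), the canonicalization scheme (compact, sorted-key JSON of the `claim` object), and the existence of a per-account verification key are all assumptions for illustration, not details documented on this page:

```python
import hashlib
import hmac
import json


def verify_envelope(envelope: dict, key: bytes) -> bool:
    """Recompute the HMAC-SHA256 over the claim payload and compare.

    ASSUMPTIONS: the envelope carries a hex `signature` field, and the
    signed payload is the `claim` object serialized as compact,
    sorted-key JSON. Check SourceScore's actual envelope schema before
    relying on this.
    """
    payload = json.dumps(
        envelope["claim"], sort_keys=True, separators=(",", ":")
    ).encode()
    computed = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(computed, envelope["signature"])
```

With a key in hand, a fetched envelope would be checked as `verify_envelope(r.json(), key)` before its `claim.statement` is cited downstream.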