Verified claim · AI-ML · 100% confidence
ReAct prompting pattern introduced in: Yao et al. 2022 — synergizing reasoning and acting in language models.
Last verified 2026-05-16 · Methodology veritas-v0.1 · 95193a0b79c777e8
Structured fields
- Subject
- ReAct prompting pattern
- Predicate
- introduced_in
- Object
- Yao et al. 2022 — synergizing reasoning and acting in language models
- Confidence
- 100%
- Tags
- react · yao · princeton · prompting · agent · foundational · iclr · 2022 · introduced_in
Sources (2)
[1] preprint · arXiv / ICLR 2023 (Yao, Zhao, Yu, Du, Shafran, Narasimhan, Cao / Princeton + Google Brain) · 2022-10-06
ReAct: Synergizing Reasoning and Acting in Language Models
“We present an approach that uses LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions.”
[2] peer reviewed · ICLR 2023 · 2023-05-01
ReAct — ICLR 2023 OpenReview
Cite this claim
Ready-to-paste citation (Markdown / plain text):
ReAct prompting pattern introduced in: Yao et al. 2022 — synergizing reasoning and acting in language models. — SourceScore Claim 95193a0b79c777e8 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/95193a0b79c777e8.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/95193a0b79c777e8/" width="100%" height="360" frameborder="0" loading="lazy" title="ReAct prompting pattern introduced in: Yao et al. 2022 — synergizing reasoning and acting in language models."></iframe>
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.
ReAct (Reasoning + Acting) introduced in paper: ReAct: Synergizing Reasoning and Acting in Language Models (Yao et al., 2022).
fceea64fa7d04d3a · 100% confidence · shares 4 tags (react, foundational, 2022…)
Chain-of-Thought (CoT) introduced in: Wei et al. 2022 — Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
a8503ad535423b54 · 100% confidence · shares 4 tags (prompting, foundational, 2022…)
Chain-of-Thought prompting introduced in paper: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., 2022).
3af924da138ff84c · 100% confidence · shares 3 tags (prompting, foundational, 2022)
InstructGPT introduced in: Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT.
590b9de765b8126e · 100% confidence · shares 3 tags (foundational, 2022, introduced_in)
Tree of Thoughts introduced in: Yao et al. 2023 — deliberate problem solving with LLMs.
9d7676f71d1ee4f3 · 100% confidence · shares 3 tags (princeton, prompting, introduced_in)
Frequently asked questions
Is the claim "ReAct prompting pattern introduced in: Yao et al. 2022 — synergizing reasoning and acting in language models." verified?
Yes — SourceScore verified this claim with 100% confidence as of 2026-05-16. The verification uses 2 primary sources cross-referenced against the SourceScore methodology (version veritas-v0.1). Full source list + signed JSON envelope linked below.
What is the evidence for "ReAct prompting pattern introduced in: Yao et al. 2022 — synergizing reasoning and acting in language models."?
Evidence comes from 2 primary sources: arXiv / ICLR 2023 (Yao, Zhao, Yu, Du, Shafran, Narasimhan, Cao / Princeton + Google Brain), ICLR 2023. Each source is listed below with verbatim excerpts and URLs. The signed JSON envelope at https://sourcescore.org/api/v1/claims/95193a0b79c777e8.json includes an HMAC-SHA256 signature for audit verification.
When was this claim last verified by SourceScore?
Last verified 2026-05-16 under methodology version veritas-v0.1. The signed JSON envelope is dated and cryptographically signed for audit trail. Re-verification cadence depends on the claim type and source freshness.
How can I cite this SourceScore claim in my code or article?
Fetch the signed JSON envelope from https://sourcescore.org/api/v1/claims/95193a0b79c777e8.json which includes the verbatim claim, primary sources, confidence, methodology version, last-verified date, and HMAC-SHA256 signature for audit. The CC-BY-4.0 license permits commercial use with attribution to SourceScore.
Use this claim in your code
Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.
cURL
curl https://sourcescore.org/api/v1/claims/95193a0b79c777e8.json
JavaScript / TypeScript
const r = await fetch("https://sourcescore.org/api/v1/claims/95193a0b79c777e8.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "ReAct prompting pattern introduced in: Yao et al. 2022 — synergizing reasoning and acting in language models."
Python
import httpx
r = httpx.get("https://sourcescore.org/api/v1/claims/95193a0b79c777e8.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "ReAct prompting pattern introduced in: Yao et al. 2022 — synergizing reasoning and acting in language models."
LangChain (retrieve-then-cite)
from langchain_core.tools import tool
import httpx

@tool
def get_react_prompting_pattern_fact() -> dict:
    """Fetch the verified SourceScore claim for ReAct prompting pattern."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/95193a0b79c777e8.json")
    return r.json()
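The snippets above fetch the envelope; the FAQ also mentions verifying the HMAC-SHA256 signature locally. Below is a minimal sketch of what that local check could look like. The field names (`claim`, `signature`), the canonicalization (compact, sorted-key JSON over the claim payload), and the shared key are all assumptions for illustration; consult SourceScore's actual signing specification before relying on this.

```python
import hashlib
import hmac
import json

def verify_envelope(envelope: dict, shared_key: bytes) -> bool:
    """Recompute the HMAC-SHA256 over the claim payload and compare it
    to the envelope's signature in constant time.

    Assumes the payload is the envelope's "claim" object serialized as
    compact, sorted-key UTF-8 JSON — a hypothetical scheme, not the
    documented SourceScore format."""
    payload = json.dumps(
        envelope["claim"], sort_keys=True, separators=(",", ":")
    ).encode("utf-8")
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope.get("signature", ""))

# Self-contained demo with a made-up key and claim (not real SourceScore data).
key = b"demo-shared-key"
claim = {
    "id": "95193a0b79c777e8",
    "statement": "ReAct prompting pattern introduced in: Yao et al. 2022",
}
sig = hmac.new(
    key,
    json.dumps(claim, sort_keys=True, separators=(",", ":")).encode("utf-8"),
    hashlib.sha256,
).hexdigest()
print(verify_envelope({"claim": claim, "signature": sig}, key))  # True
```

`hmac.compare_digest` is used instead of `==` so the comparison does not leak timing information; tampering with any claim field changes the recomputed digest and the check returns `False`.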