Verified claim · AI-ML · 100% confidence
Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017).
Last verified 2026-05-16 · Methodology veritas-v0.1 · 67866330cd60e54d
Structured fields
- Subject
- Reinforcement Learning from Human Feedback (RLHF)
- Predicate
- introduced_in_paper
- Object
- Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017)
- Confidence
- 100%
- Tags
- rlhf · alignment · foundational · christiano · 2017 · nips
Sources (3)
[1] preprint · arXiv (Christiano, Leike, Brown, Martic, Legg, Amodei) · 2017-06-12
Deep Reinforcement Learning from Human Preferences
“For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. … We explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments.”
[2] peer reviewed · NeurIPS Foundation · 2017-12-04
Deep RL from Human Preferences (NeurIPS 2017 proceedings)
[3] official blog · OpenAI · 2017-06-13
Learning from human preferences
Cite this claim
Ready-to-paste citation (Markdown / plain text):
Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017). — SourceScore Claim 67866330cd60e54d (verified 2026-05-16). https://sourcescore.org/api/v1/claims/67866330cd60e54d.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/67866330cd60e54d/" width="100%" height="360" frameborder="0" loading="lazy" title="Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017)."></iframe>
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery. A minimal graph-building sketch follows the list.
Transformer architecture introduced in paper: Attention Is All You Need (Vaswani et al., 2017).
ad17e76a8baad7a1 · 100% confidence · shares 3 tags (foundational, 2017, nips)
Direct Preference Optimization (DPO) introduced in paper: Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al., 2023).
a3e691683a4577af · 100% confidence · shares 3 tags (alignment, foundational, nips)
Proximal Policy Optimization (PPO) introduced in paper: Proximal Policy Optimization Algorithms (Schulman et al., 2017).
00f224e1ccc158ef · 100% confidence · shares 3 tags (foundational, 2017, rlhf)
Retrieval-Augmented Generation (RAG) introduced in paper: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Lewis et al., 2020).
d15057ced937a103 · 100% confidence · shares 2 tags (foundational, nips)
InstructGPT methodology introduced in paper: Training language models to follow instructions with human feedback (Ouyang et al., 2022).
5da8f8dffc038b8e · 100% confidence · shares 2 tags (alignment, rlhf)
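As a rough illustration of the retrieval-graph use case mentioned above, the Python sketch below builds a tag-overlap graph from the claims listed on this page. The tag sets are transcribed from the entries above; for the related claims, only the tags they share with the current claim are listed here, so their full tag sets may be larger. Fetching tags live from the API would require the real response schema and is not shown.

from itertools import combinations

# Tags transcribed from this page: the current claim's full tag list,
# plus the shared-tag info listed under "Related claims".
# NOTE: related claims may carry additional tags not shown on this page.
claims = {
    "67866330cd60e54d": {"rlhf", "alignment", "foundational", "christiano", "2017", "nips"},
    "ad17e76a8baad7a1": {"foundational", "2017", "nips"},       # Transformer
    "a3e691683a4577af": {"alignment", "foundational", "nips"},  # DPO
    "00f224e1ccc158ef": {"foundational", "2017", "rlhf"},       # PPO
    "d15057ced937a103": {"foundational", "nips"},               # RAG
    "5da8f8dffc038b8e": {"alignment", "rlhf"},                  # InstructGPT
}

# Edge weight = number of shared tags; pairs with no overlap get no edge.
edges = {
    (a, b): len(claims[a] & claims[b])
    for a, b in combinations(claims, 2)
    if claims[a] & claims[b]
}

# Print edges sorted by weight, heaviest first.
for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {weight} shared tag(s)")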
Programmatic access
Fetch this claim as a signed envelope for verification:
curl https://sourcescore.org/api/v1/claims/67866330cd60e54d.json
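For consumers who want more than the raw curl output, a minimal Python sketch is shown below. The endpoint is the one documented above; the field names (claim, confidence, signature) are assumptions about the envelope schema, not confirmed by this page — inspect the actual response and adjust.

import json
import urllib.request

CLAIM_ID = "67866330cd60e54d"
API_URL = f"https://sourcescore.org/api/v1/claims/{CLAIM_ID}.json"

def fetch_claim(url: str) -> dict:
    """Fetch the signed claim envelope and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    envelope = fetch_claim(API_URL)
    # NOTE: the keys below are assumptions about the envelope schema;
    # they are not documented on this page.
    print(envelope.get("claim"))
    print(envelope.get("confidence"))
    print(envelope.get("signature"))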