Verified claim · AI-ML · 100% confidence
ELMo (Embeddings from Language Models) introduced in paper: Deep contextualized word representations (Peters et al., 2018).
Last verified 2026-05-16 · Methodology veritas-v0.1 · ee150c6e44364a3d
Structured fields
- Subject: ELMo (Embeddings from Language Models)
- Predicate: introduced_in_paper
- Object: Deep contextualized word representations (Peters et al., 2018)
- Confidence: 100%
- Tags: elmo · word-embeddings · contextualized · foundational · 2018 · naacl
Sources (2)
[1] preprint · arXiv (Peters, Neumann, Iyyer, Gardner, Clark, Lee, Zettlemoyer) · 2018-02-15
Deep contextualized word representations
“We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).”
[2] peer reviewed · ACL Anthology · 2018-06-01
Deep contextualized word representations (NAACL 2018)
Cite this claim
Ready-to-paste citation (Markdown / plain text):
ELMo (Embeddings from Language Models) introduced in paper: Deep contextualized word representations (Peters et al., 2018). — SourceScore Claim ee150c6e44364a3d (verified 2026-05-16). https://sourcescore.org/api/v1/claims/ee150c6e44364a3d.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/ee150c6e44364a3d/" width="100%" height="360" frameborder="0" loading="lazy" title="ELMo (Embeddings from Language Models) introduced in paper: Deep contextualized word representations (Peters et al., 2018)."></iframe>
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.
BERT (Bidirectional Encoder Representations from Transformers) introduced in paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018).
4c1ee70007dc89c1 · 100% confidence · shares 2 tags (foundational, 2018)
SentencePiece tokenizer introduced in paper: SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo & Richardson, 2018).
0d47bb8eb637a2e4 · 100% confidence · shares 2 tags (foundational, 2018)
GLUE benchmark introduced in paper: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (Wang et al., 2018).
aa113b5e61d5c214 · 100% confidence · shares 2 tags (foundational, 2018)
Transformer architecture introduced in paper: Attention Is All You Need (Vaswani et al., 2017).
ad17e76a8baad7a1 · 100% confidence · shares 1 tag (foundational)
Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017).
67866330cd60e54d · 100% confidence · shares 1 tag (foundational)
Programmatic access
Fetch this claim with a signed envelope for verification:
curl https://sourcescore.org/api/v1/claims/ee150c6e44364a3d.json
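To consume the endpoint programmatically, a minimal Python sketch follows. Only the URL pattern and claim id come from this page; the envelope field names (`claim_id`, `claim`, `confidence`, `verified`, `signature`) are assumptions standing in for whatever schema the API actually returns.

```python
import json

# Hypothetical envelope -- the real response schema is not documented on
# this page, so these field names are placeholders for illustration.
sample_envelope = json.loads("""
{
  "claim_id": "ee150c6e44364a3d",
  "claim": "ELMo (Embeddings from Language Models) introduced in paper: Deep contextualized word representations (Peters et al., 2018).",
  "confidence": 100,
  "verified": "2026-05-16",
  "signature": "..."
}
""")

def claim_url(claim_id: str) -> str:
    """Build the canonical API URL for a claim id (pattern shown above)."""
    return f"https://sourcescore.org/api/v1/claims/{claim_id}.json"

# Reconstruct the fetch URL from the envelope's own claim id.
print(claim_url(sample_envelope["claim_id"]))
```

In practice you would fetch the JSON with any HTTP client and then check the signed envelope before trusting the claim text.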