Verified claim · AI-ML · 100% confidence
Mamba state-space model introduced in paper: Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Gu, Dao, 2023).
Last verified 2026-05-16 · Methodology veritas-v0.1 · 3518f8aa40cb0d36
Structured fields
- Subject
- Mamba state-space model
- Predicate
- introduced_in_paper
- Object
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Gu, Dao, 2023)
- Confidence
- 100%
- Tags
- mamba · state-space · foundational · gu · dao · 2023
Sources (2)
[1] preprint · arXiv (Gu, Dao) · 2023-12-01
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
“We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. … Mamba enjoys fast inference (5× higher throughput than Transformers).”
[2] github release · state-spaces (Gu, Dao) · 2023-12-01
Mamba reference implementation
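The reference implementation above is the paper's optimized version; for orientation, here is a minimal NumPy sketch of the selective state-space recurrence that source [1] describes. It is illustrative only, not the paper's fused CUDA scan: the projection names (W_B, W_C, W_dt) and the simplified shapes are assumptions made for readability.

import numpy as np

def selective_ssm(x, A, W_B, W_C, W_dt):
    # x: (seq_len, d_model) input sequence
    # A: (d_model, d_state) state matrix, kept negative for stability
    # W_B, W_C: (d_model, d_state); W_dt: (d_model, d_model) input projections
    seq_len, d_model = x.shape
    d_state = A.shape[1]
    h = np.zeros((d_model, d_state))            # per-channel hidden state
    y = np.empty_like(x)
    for t in range(seq_len):
        # Selectivity: B, C, and the step size dt depend on the current input,
        # which is what enables the content-based filtering the quote refers to.
        dt = np.logaddexp(0.0, x[t] @ W_dt)     # softplus, (d_model,)
        B = x[t] @ W_B                          # (d_state,)
        C = x[t] @ W_C                          # (d_state,)
        A_bar = np.exp(dt[:, None] * A)         # zero-order-hold discretization
        B_bar = dt[:, None] * B[None, :]
        h = A_bar * h + B_bar * x[t][:, None]   # linear recurrence over time
        y[t] = h @ C                            # readout, (d_model,)
    return y

# Toy usage: 16 steps, 4 channels, state size 8.
rng = np.random.default_rng(0)
d_model, d_state, seq_len = 4, 8, 16
out = selective_ssm(
    rng.standard_normal((seq_len, d_model)),
    -np.exp(rng.standard_normal((d_model, d_state))),  # negative real A
    rng.standard_normal((d_model, d_state)) * 0.1,
    rng.standard_normal((d_model, d_state)) * 0.1,
    rng.standard_normal((d_model, d_model)) * 0.1,
)
print(out.shape)  # (16, 4)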
Cite this claim
Ready-to-paste citation (Markdown / plain text):
Mamba state-space model introduced in paper: Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Gu, Dao, 2023). — SourceScore Claim 3518f8aa40cb0d36 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/3518f8aa40cb0d36.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/3518f8aa40cb0d36/" width="100%" height="360" frameborder="0" loading="lazy" title="Mamba state-space model introduced in paper: Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Gu, Dao, 2023)."></iframe>
Related claims
Other verified claims sharing tags with this one; useful for LLM retrieval graphs and citation discovery (a linking sketch follows the list).
Direct Preference Optimization (DPO) introduced in paper: Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al., 2023).
a3e691683a4577af · 100% confidence · shares 2 tags (foundational, 2023)
QLoRA introduced in paper: QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al., 2023).
767cbe41c961be1a · 100% confidence · shares 2 tags (foundational, 2023)
Transformer architecture introduced in paper: Attention Is All You Need (Vaswani et al., 2017).
ad17e76a8baad7a1 · 100% confidence · shares 1 tag (foundational)
Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017).
67866330cd60e54d · 100% confidence · shares 1 tag (foundational)
Retrieval-Augmented Generation (RAG) introduced in paper: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Lewis et al., 2020).
d15057ced937a103 · 100% confidence · shares 1 tag (foundational)
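If you are wiring claims like these into a retrieval graph, a minimal sketch of tag-overlap linking follows. It assumes each fetched claim JSON exposes "id" and "tags" fields; those field names are illustrative, not confirmed by the API. The sample data below uses ids and tags taken from this page.

from itertools import combinations

def tag_overlap_graph(claims):
    # claims: list of dicts with (assumed) "id" and "tags" keys.
    # Returns (id_a, id_b, shared_tag_count) for every overlapping pair,
    # strongest links first.
    edges = []
    for a, b in combinations(claims, 2):
        shared = set(a["tags"]) & set(b["tags"])
        if shared:
            edges.append((a["id"], b["id"], len(shared)))
    return sorted(edges, key=lambda e: -e[2])

claims = [
    {"id": "3518f8aa40cb0d36", "tags": ["mamba", "state-space", "foundational", "2023"]},
    {"id": "a3e691683a4577af", "tags": ["dpo", "foundational", "2023"]},
    {"id": "ad17e76a8baad7a1", "tags": ["transformer", "foundational"]},
]
for a, b, n in tag_overlap_graph(claims):
    print(f"{a} -- {b}: {n} shared tag(s)")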
Programmatic access
Fetch this claim with a signed envelope for verification:
curl https://sourcescore.org/api/v1/claims/3518f8aa40cb0d36.json
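And a minimal Python sketch of fetching the claim and checking its envelope. The envelope layout (payload, signature, public_key fields) and an Ed25519 scheme are assumptions for illustration; check the actual response shape before relying on this.

import json
import urllib.request

from nacl.signing import VerifyKey          # pip install pynacl
from nacl.exceptions import BadSignatureError

URL = "https://sourcescore.org/api/v1/claims/3518f8aa40cb0d36.json"

with urllib.request.urlopen(URL) as resp:
    envelope = json.load(resp)

# Assumed envelope shape: {"payload": {...}, "signature": "<hex>", "public_key": "<hex>"}.
payload_bytes = json.dumps(
    envelope["payload"], sort_keys=True, separators=(",", ":")
).encode()
try:
    VerifyKey(bytes.fromhex(envelope["public_key"])).verify(
        payload_bytes, bytes.fromhex(envelope["signature"])
    )
    print("signature OK:", envelope["payload"].get("claim", ""))
except BadSignatureError:
    print("signature check FAILED")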