Verified claim · AI-ML · 100% confidence
QLoRA introduced in paper: QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al., 2023).
Last verified 2026-05-16 · Methodology veritas-v0.1 · 767cbe41c961be1a
Structured fields
- Subject
- QLoRA
- Predicate
- introduced_in_paper
- Object
- QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al., 2023)
- Confidence
- 100%
- Tags
- qlora · quantization · peft · fine-tuning · foundational · 2023
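
Taken together, the fields form a subject-predicate-object triple. A minimal sketch of how it might serialize, assuming a flat JSON-style envelope; the field names and value representations below are illustrative guesses, not the documented API schema:

```python
# Illustrative only: the actual envelope returned by the API may differ.
claim = {
    "id": "767cbe41c961be1a",
    "subject": "QLoRA",
    "predicate": "introduced_in_paper",
    "object": "QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al., 2023)",
    "confidence": 1.0,  # 100% expressed as a fraction; representation assumed
    "tags": ["qlora", "quantization", "peft", "fine-tuning", "foundational", "2023"],
    "last_verified": "2026-05-16",  # field name assumed
    "methodology": "veritas-v0.1",
}
```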
Sources (2)
[1] preprint · arXiv (Dettmers, Pagnoni, Holtzman, Zettlemoyer) · 2023-05-23
QLoRA: Efficient Finetuning of Quantized LLMs
“We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance.”
[2] github release · Artidoro Pagnoni / University of Washington · 2023-05-23
artidoro/qlora — official implementation
Cite this claim
Ready-to-paste citation (Markdown / plain text):
QLoRA introduced in paper: QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al., 2023). — SourceScore Claim 767cbe41c961be1a (verified 2026-05-16). https://sourcescore.org/api/v1/claims/767cbe41c961be1a.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/767cbe41c961be1a/" width="100%" height="360" frameborder="0" loading="lazy" title="QLoRA introduced in paper: QLoRA: Efficient Finetuning of Quantized LLMs (Dettmers et al., 2023)."></iframe>
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery. A minimal graph-building sketch follows the list.
LoRA (Low-Rank Adaptation) introduced in paper: LoRA: Low-Rank Adaptation of Large Language Models (Hu et al., 2021).
f191b2876790dc6e · 100% confidence · shares 3 tags (peft, fine-tuning, foundational)
Low-Rank Adaptation (LoRA) introduced in paper: LoRA: Low-Rank Adaptation of Large Language Models (Hu et al., 2021).
d7b97d1b93d8d8bc · 100% confidence · shares 2 tags (fine-tuning, foundational)
Direct Preference Optimization (DPO) introduced in paper: Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al., 2023).
a3e691683a4577af · 100% confidence · shares 2 tags (foundational, 2023)
Mamba state-space model introduced in paper: Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Gu, Dao, 2023).
3518f8aa40cb0d36 · 100% confidence · shares 2 tags (foundational, 2023)
Transformer architecture introduced in paper: Attention Is All You Need (Vaswani et al., 2017).
ad17e76a8baad7a1 · 100% confidence · shares 1 tag (foundational)
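
The shared-tag counts above can be read as weighted edges in a small claim graph. A minimal sketch of that idea, using only the (claim ID, tags) pairs visible on this page; for the related claims, only the tags they share with this claim are listed here, so the tag sets below are partial:

```python
from itertools import combinations

# Tag sets transcribed from this page. For related claims, only the tags
# shared with this page's claim are shown, so these sets are incomplete.
claims = {
    "767cbe41c961be1a": {"qlora", "quantization", "peft", "fine-tuning", "foundational", "2023"},
    "f191b2876790dc6e": {"peft", "fine-tuning", "foundational"},
    "d7b97d1b93d8d8bc": {"fine-tuning", "foundational"},
    "a3e691683a4577af": {"foundational", "2023"},
    "3518f8aa40cb0d36": {"foundational", "2023"},
    "ad17e76a8baad7a1": {"foundational"},
}

def tag_graph(claims: dict) -> dict:
    """Build undirected edges weighted by the number of shared tags."""
    edges = {}
    for a, b in combinations(claims, 2):
        shared = claims[a] & claims[b]
        if shared:
            edges[(a, b)] = len(shared)
    return edges

# Rank this claim's neighbours by tag overlap, reproducing the counts above.
me = "767cbe41c961be1a"
neighbours = sorted(
    ((w, b if a == me else a) for (a, b), w in tag_graph(claims).items() if me in (a, b)),
    reverse=True,
)
for weight, claim_id in neighbours:
    print(f"{claim_id}: shares {weight} tag(s)")
```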
Programmatic access
Fetch this claim with a signed envelope for verification:
curl https://sourcescore.org/api/v1/claims/767cbe41c961be1a.json
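
For scripted use, the same fetch in Python. A minimal sketch: the field names probed below are assumptions about the envelope shape, not a documented schema, and no signature check is attempted because the signing scheme is not described on this page:

```python
import json
import urllib.request

CLAIM_ID = "767cbe41c961be1a"
URL = f"https://sourcescore.org/api/v1/claims/{CLAIM_ID}.json"

def fetch_claim(url: str) -> dict:
    """Fetch the claim envelope and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

claim = fetch_claim(URL)

# Sanity check against what this page displays. "id" is an assumed field
# name; adjust to the actual response shape.
assert claim.get("id", CLAIM_ID) == CLAIM_ID

# Verifying the signed envelope would go here; the signing scheme is not
# described on this page, so it is left as a placeholder.
print(json.dumps(claim, indent=2))
```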