Verified claim · AI-ML · 100% confidence
vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention.
Last verified 2026-05-16 · Methodology veritas-v0.1 · 468a9e2c047d8f2f
Structured fields
- Subject
- vLLM
- Predicate
- introduced_in
- Object
- Kwon et al. 2023 — high-throughput LLM serving via PagedAttention
- Confidence
- 100%
- Tags
- vllm · paged-attention · uc-berkeley · inference · serving · open-source · 2023 · introduced_in
Sources (2)
[1] preprint · arXiv (Kwon, Li, Zhuang, Sheng, Zheng, Yu, Gonzalez, Zhang, Stoica / UC Berkeley) · 2023-09-12
Efficient Memory Management for Large Language Model Serving with PagedAttention
“We propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage.”
[2] github release · vLLM Project · 2023-06-20
vLLM — official GitHub repository
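The excerpt in source [1] describes the mechanism itself: KV-cache entries live in fixed-size blocks addressed through a per-sequence block table, the same indirection that OS paging uses for virtual memory. The sketch below illustrates that indirection only; the block size, allocator, and table layout are simplified stand-ins, not vLLM's actual data structures.
# Illustrative sketch of the paging idea behind PagedAttention.
# Not vLLM's real implementation: the allocator and table layout
# here are simplified stand-ins for exposition.
BLOCK_SIZE = 16  # tokens per KV-cache block

class BlockAllocator:
    """Hands out physical block IDs from a fixed pool, like a page allocator."""
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))

    def alloc(self) -> int:
        return self.free.pop()

class Sequence:
    """Maps a sequence's logical KV blocks to physical blocks via a block table."""
    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table: list[int] = []  # logical block index -> physical block ID
        self.num_tokens = 0

    def append_token(self) -> None:
        # A new physical block is allocated only when the last one fills,
        # so at most BLOCK_SIZE - 1 slots are ever wasted per sequence.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.alloc())
        self.num_tokens += 1

allocator = BlockAllocator(num_blocks=1024)
seq = Sequence(allocator)
for _ in range(40):  # 40 tokens -> ceil(40 / 16) = 3 physical blocks
    seq.append_token()
print(seq.block_table)  # e.g. [1023, 1022, 1021]; blocks need not be contiguous
The paper's second property, sharing KV cache across requests, comes from reference-counting these physical blocks; that bookkeeping is omitted from the sketch.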
Cite this claim
Ready-to-paste citation (Markdown / plain text):
vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention. — SourceScore Claim 468a9e2c047d8f2f (verified 2026-05-16). https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/468a9e2c047d8f2f/" width="100%" height="360" frameborder="0" loading="lazy" title="vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention."></iframe>
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.
SGLang introduced in: Zheng et al. 2024 — efficient LLM serving with structured outputs.
4244c11611a72550 · 100% confidence · shares 4 tags (uc-berkeley, inference, open-source…)
llama.cpp publicly released on: 2023-03-10 by Georgi Gerganov.
2c6ddc094019890c · 100% confidence · shares 3 tags (inference, open-source, 2023)
Chatbot Arena introduced in: Zheng et al. 2023 — LMSYS open platform for evaluating LLMs by human preference.
789ddc9bc9c3d688 · 100% confidence · shares 3 tags (uc-berkeley, 2023, introduced_in)
Triton inference server publicly released on: 2018-11 by NVIDIA — formerly TensorRT Inference Server.
78ec1ceed08a221c · 100% confidence · shares 3 tags (inference, serving, open-source)
Toolformer introduced in: Schick et al. 2023 — self-supervised LLM tool-use.
cd4387e16e2c3e3d · 100% confidence · shares 2 tags (2023, introduced_in)
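Each related claim above can be dereferenced through the same per-claim endpoint used elsewhere on this page. Here is a minimal sketch that walks the listed IDs to seed a retrieval graph; the ID list and URL pattern are taken verbatim from this page, and the envelope field access matches the Python example in the next section.
import httpx

# Claim IDs listed under "Related claims" above; the URL pattern and the
# envelope["claim"]["statement"] field match the examples on this page.
RELATED = [
    "4244c11611a72550",  # SGLang
    "2c6ddc094019890c",  # llama.cpp
    "789ddc9bc9c3d688",  # Chatbot Arena
    "78ec1ceed08a221c",  # Triton inference server
    "cd4387e16e2c3e3d",  # Toolformer
]

for claim_id in RELATED:
    r = httpx.get(f"https://sourcescore.org/api/v1/claims/{claim_id}.json")
    print(claim_id, "->", r.json()["claim"]["statement"])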
Use this claim in your code
Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.
cURL
curl https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json
JavaScript / TypeScript
const r = await fetch("https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention."
Python
import httpx
r = httpx.get("https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention."
LangChain (retrieve-then-cite)
import httpx
from langchain_core.tools import tool

@tool
def get_vllm_fact() -> dict:
    """Fetch the verified SourceScore claim for vLLM."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json")
    return r.json()
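The HMAC-SHA256 check promised above can be done with the standard library alone. A minimal verification sketch follows; note that the signature field name (envelope["signature"]), the canonical form being signed (compact, key-sorted JSON of envelope["claim"]), and how you obtain the shared key are assumptions, since this page does not document the signing scheme.
import hashlib
import hmac
import json

# Sketch of local signature verification for audit trails.
# ASSUMPTIONS: the hex signature lives at envelope["signature"] and is
# computed over compact, key-sorted JSON of envelope["claim"] with a
# shared key obtained out of band. The real field names and canonical
# form may differ; check the SourceScore API docs.
def verify_envelope(envelope: dict, key: bytes) -> bool:
    payload = json.dumps(envelope["claim"], separators=(",", ":"), sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # hmac.compare_digest gives a constant-time comparison.
    return hmac.compare_digest(expected, envelope["signature"])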