SourceScore

Verified claim · AI-ML · 100% confidence

vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention.

Last verified 2026-05-16 · Methodology veritas-v0.1 · 468a9e2c047d8f2f

Structured fields

Subject
vLLM
Predicate
introduced_in
Object
Kwon et al. 2023 — high-throughput LLM serving via PagedAttention
Confidence
100%
Tags
vllm · paged-attention · uc-berkeley · inference · serving · open-source · 2023 · introduced_in

Sources (2)

  1. [1] preprint · arXiv (Kwon, Li, Zhuang, Sheng, Zheng, Yu, Gonzalez, Zhang, Stoica / UC Berkeley) · 2023-09-12

    Efficient Memory Management for Large Language Model Serving with PagedAttention
    We propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests to further reduce memory usage.
  2. [2] github release · vLLM Project · 2023-06-20

    vLLM — official GitHub repository

Cite this claim

Ready-to-paste citation (Markdown / plain text):

vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention. — SourceScore Claim 468a9e2c047d8f2f (verified 2026-05-16). https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.

<iframe src="https://sourcescore.org/embed/claim/468a9e2c047d8f2f/" width="100%" height="360" frameborder="0" loading="lazy" title="vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention."></iframe>



Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "vLLM introduced in: Kwon et al. 2023 — high-throughput LLM serving via PagedAttention."

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx

@tool
def get_vllm_fact() -> dict:
    """Fetch the verified SourceScore claim for vLLM."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/468a9e2c047d8f2f.json")
    return r.json()
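
Local signature check (sketch)

The envelope's HMAC-SHA256 signature can be verified locally, as mentioned above. The sketch below is a hypothetical illustration only: it assumes the envelope exposes the signature as a hex digest in `envelope["signature"]`, that it is computed over the canonical JSON (sorted keys, compact separators) of `envelope["claim"]`, and that you hold the shared secret. None of these field names or canonicalization choices come from a documented SourceScore schema; check the API's actual envelope format before relying on this.

```python
# Hypothetical verification sketch. Field names ("claim", "signature") and the
# canonical-JSON convention are assumptions, not the documented API schema.
import hashlib
import hmac
import json

def verify_envelope(envelope: dict, secret: bytes) -> bool:
    # Assumed canonicalization: sorted keys, compact separators.
    payload = json.dumps(
        envelope["claim"], sort_keys=True, separators=(",", ":")
    ).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(expected, envelope["signature"])
```

If verification fails, treat the envelope as tampered or stale and re-fetch it from the canonical URL rather than trusting the cached copy.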