From 8eb56f466d75a015dea4b783483f52ff020e31d4 Mon Sep 17 00:00:00 2001
From: Yuan Tang
Date: Sun, 23 Feb 2025 22:09:45 -0500
Subject: [PATCH] docs: Add vLLM to the list of inference providers in
 concepts page

---
 docs/source/concepts/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/concepts/index.md b/docs/source/concepts/index.md
index 27eb74f00..c839266b6 100644
--- a/docs/source/concepts/index.md
+++ b/docs/source/concepts/index.md
@@ -25,7 +25,7 @@ We are working on adding a few more APIs to complete the application lifecycle.
 
 ## API Providers
 
 The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples for these include:
-- LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, etc.),
+- LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, vLLM, etc.),
 - Vector databases (e.g., ChromaDB, Weaviate, Qdrant, FAISS, PGVector, etc.),
 - Safety providers (e.g., Meta's Llama Guard, AWS Bedrock Guardrails, etc.)