Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-03 09:53:45 +00:00
Added Elasticsearch in docs + ci-test build
parent 9bfe5245a0
commit 98a3df09bb
9 changed files with 7 additions and 4 deletions
@@ -9,7 +9,7 @@ sidebar_position: 2
 The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples for these include:
 - LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, vLLM, etc.),
-- Vector databases (e.g., ChromaDB, Weaviate, Qdrant, Milvus, FAISS, PGVector, etc.),
+- Vector databases (e.g., ChromaDB, Weaviate, Qdrant, Milvus, FAISS, PGVector, Elasticsearch, etc.),
 - Safety providers (e.g., Meta's Llama Guard, AWS Bedrock Guardrails, etc.)

 Providers come in two flavors:
@@ -54,7 +54,7 @@ Llama Stack consists of a server (with multiple pluggable API providers) and Cli
 Llama Stack provides adapters for popular providers across all API categories:

 - **Inference**: Meta Reference, Ollama, Fireworks, Together, NVIDIA, vLLM, AWS Bedrock, OpenAI, Anthropic, and more
-- **Vector Databases**: FAISS, Chroma, Milvus, Postgres, Weaviate, Qdrant, and others
+- **Vector Databases**: FAISS, Chroma, Milvus, Postgres, Weaviate, Qdrant, Elasticsearch and others
 - **Safety**: Llama Guard, Prompt Guard, Code Scanner, AWS Bedrock
 - **Training & Evaluation**: HuggingFace, TorchTune, NVIDIA NEMO
@@ -9,7 +9,7 @@ sidebar_position: 1
 The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples for these include:
 - LLM inference providers (e.g., Meta Reference, Ollama, Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, vLLM, OpenAI, Anthropic, Gemini, WatsonX, etc.),
-- Vector databases (e.g., FAISS, SQLite-Vec, ChromaDB, Weaviate, Qdrant, Milvus, PGVector, etc.),
+- Vector databases (e.g., FAISS, SQLite-Vec, ChromaDB, Weaviate, Qdrant, Milvus, PGVector, Elasticsearch, etc.),
 - Safety providers (e.g., Meta's Llama Guard, Prompt Guard, Code Scanner, AWS Bedrock Guardrails, etc.),
 - Tool Runtime providers (e.g., RAG Runtime, Brave Search, etc.)
@@ -159,7 +159,8 @@ const sidebars: SidebarsConfig = {
           'providers/vector_io/remote_milvus',
           'providers/vector_io/remote_pgvector',
           'providers/vector_io/remote_qdrant',
-          'providers/vector_io/remote_weaviate'
+          'providers/vector_io/remote_weaviate',
+          'providers/vector_io/remote_elasticsearch'
         ],
       },
       {
@@ -323,6 +323,7 @@ exclude = [
     "^src/llama_stack/providers/remote/vector_io/qdrant/",
     "^src/llama_stack/providers/remote/vector_io/sample/",
     "^src/llama_stack/providers/remote/vector_io/weaviate/",
+    "^src/llama_stack/providers/remote/vector_io/elasticsearch/",
     "^src/llama_stack/providers/utils/bedrock/client\\.py$",
     "^src/llama_stack/providers/utils/bedrock/refreshable_boto_session\\.py$",
     "^src/llama_stack/providers/utils/inference/embedding_mixin\\.py$",
@@ -27,6 +27,7 @@ distribution_spec:
   - provider_type: remote::pgvector
   - provider_type: remote::qdrant
   - provider_type: remote::weaviate
+  - provider_type: remote::elasticsearch
   files:
   - provider_type: inline::localfs
   safety:
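The distribution-spec change registers `remote::elasticsearch` alongside the other remote vector_io providers. A minimal sketch of how that fragment might sit in a build spec, assuming the usual `providers` / `vector_io` nesting around the context lines shown in the diff (only the provider entries themselves are confirmed by this commit; the surrounding keys are an assumption):

```yaml
# Hypothetical build-spec fragment; the surrounding structure is assumed,
# only the provider_type entries appear in the diff above.
distribution_spec:
  providers:
    vector_io:
      - provider_type: remote::pgvector
      - provider_type: remote::qdrant
      - provider_type: remote::weaviate
      - provider_type: remote::elasticsearch  # added by this commit
    files:
      - provider_type: inline::localfs
```

At runtime the chosen provider would still need its own connection settings (e.g. the Elasticsearch endpoint) supplied in the distribution's run configuration.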