mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-10-18 07:18:53 +00:00
feat: Enable setting a default embedding model in the stack (#3803)
Some checks failed
SqlStore Integration Tests / test-postgres (3.12) (push) Failing after 0s
Integration Auth Tests / test-matrix (oauth2_token) (push) Failing after 1s
SqlStore Integration Tests / test-postgres (3.13) (push) Failing after 0s
Test External Providers Installed via Module / test-external-providers-from-module (venv) (push) Has been skipped
Python Package Build Test / build (3.12) (push) Failing after 1s
Python Package Build Test / build (3.13) (push) Failing after 1s
Integration Tests (Replay) / Integration Tests (, , , client=, ) (push) Failing after 3s
Vector IO Integration Tests / test-matrix (push) Failing after 4s
Unit Tests / unit-tests (3.12) (push) Failing after 4s
Test External API and Providers / test-external (venv) (push) Failing after 4s
Unit Tests / unit-tests (3.13) (push) Failing after 5s
API Conformance Tests / check-schema-compatibility (push) Successful in 11s
UI Tests / ui-tests (22) (push) Successful in 40s
Pre-commit / pre-commit (push) Successful in 1m28s
# What does this PR do?

Enables automatic embedding model detection for vector stores by using a `default_configured` boolean that can be defined in the `run.yaml`.

## Test Plan

- Unit tests
- Integration tests
- Simple example below:

Spin up the stack:

```bash
uv run llama stack build --distro starter --image-type venv --run
```

Then test with OpenAI's client:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")
vs = client.vector_stores.create()
```

Previously you needed:

```python
vs = client.vector_stores.create(
    extra_body={
        "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
        "embedding_dimension": 384,
    }
)
```

The `extra_body` is now unnecessary.

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
This commit is contained in:
parent
d875e427bf
commit
ef4bc70bbe
29 changed files with 553 additions and 403 deletions

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Retrieval Augmented Generation (RAG)

RAG enables your applications to reference and recall information from external documents. Llama Stack makes Agentic RAG available through OpenAI's Responses API.

## Quick Start

### 1. Start the Server

In one terminal, start the Llama Stack server:

```bash
uv run llama stack build --distro starter --image-type venv --run
```

### 2. Connect with OpenAI Client

In another terminal, use the standard OpenAI client with the Responses API:

```python
import io

import requests
from openai import OpenAI

url = "https://www.paulgraham.com/greatwork.html"
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

# Create vector store - auto-detects default embedding model
vs = client.vector_stores.create()

# Download the page and attach it to the vector store via the Files API
response = requests.get(url)
pseudo_file = io.BytesIO(response.content)
file_id = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants").id
client.vector_stores.files.create(vector_store_id=vs.id, file_id=file_id)

resp = client.responses.create(
    model="gpt-4o",
    input="How do you do great work? Use the existing knowledge_search tool.",
    tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
    include=["file_search_call.results"],
)

print(resp.output[-1].content[-1].text)
```

Which should give output like:

```
Doing great work is about more than just hard work and ambition; it involves combining several elements:

1. **Pursue What Excites You**: Engage in projects that are both ambitious and exciting to you. It's important to work on something you have a natural aptitude for and a deep interest in.

2. **Explore and Discover**: Great work often feels like a blend of discovery and creation. Focus on seeing possibilities and let ideas take their natural shape, rather than just executing a plan.

3. **Be Bold Yet Flexible**: Take bold steps in your work without over-planning. An adaptable approach that evolves with new ideas can often lead to breakthroughs.

4. **Work on Your Own Projects**: Develop a habit of working on projects of your own choosing, as these often lead to great achievements. These should be projects you find exciting and that challenge you intellectually.

5. **Be Earnest and Authentic**: Approach your work with earnestness and authenticity. Trying to impress others with affectation can be counterproductive, as genuine effort and intellectual honesty lead to better work outcomes.

6. **Build a Supportive Environment**: Work alongside great colleagues who inspire you and enhance your work. Surrounding yourself with motivating individuals creates a fertile environment for great work.

7. **Maintain High Morale**: High morale significantly impacts your ability to do great work. Stay optimistic and protect your mental well-being to maintain progress and momentum.

8. **Balance**: While hard work is essential, overworking can lead to diminishing returns. Balance periods of intensive work with rest to sustain productivity over time.

This approach shows that great work is less about following a strict formula and more about aligning your interests, ambition, and environment to foster creativity and innovation.
```

## Architecture Overview

Llama Stack provides OpenAI-compatible RAG capabilities through:

- **Vector Stores API**: OpenAI-compatible vector storage with automatic embedding model detection
- **Files API**: Document upload and processing using OpenAI's file format
- **Responses API**: Enhanced chat completions with agentic tool calling via file search

![RAG System Architecture](/img/rag.png)

## Configuring Default Embedding Models

To enable automatic vector store creation without specifying embedding models, configure a default embedding model in your `run.yaml` like so:

```yaml
models:
  - model_id: nomic-ai/nomic-embed-text-v1.5
    provider_id: inline::sentence-transformers
    metadata:
      embedding_dimension: 768
      default_configured: true
```

With this configuration:

- `client.vector_stores.create()` works without requiring embedding model parameters
- The system automatically uses the default model and its embedding dimension for any newly created vector store
- Only one model can be marked as `default_configured: true`

## Vector Store Operations

### Creating Vector Stores

You can create vector stores with automatic or explicit embedding model selection:

```python
# Automatic - uses the default configured embedding model
vs = client.vector_stores.create()

# Explicit - specify an embedding model when you need a specific one
vs = client.vector_stores.create(
    extra_body={
        "embedding_model": "nomic-ai/nomic-embed-text-v1.5",
        "embedding_dimension": 768,
    }
)
```

## Using the RAG Tool

:::danger[Deprecation Notice]
The RAG Tool is being deprecated in favor of directly using the OpenAI-compatible Search API. We recommend migrating to the OpenAI APIs for better compatibility and future support.
:::

The RAG Tool allows you to ingest documents from URLs, files, etc. and automatically chunks them into smaller pieces. More examples of how to format a `RAGDocument` can be found in the [appendix](#more-ragdocument-examples).

### OpenAI API Integration & Migration

The RAG Tool has been updated to use OpenAI-compatible APIs. This provides several benefits:

- **Files API Integration**: Documents are now uploaded using OpenAI's file upload endpoints
- **Vector Stores API**: Vector storage operations use OpenAI's vector store format with configurable chunking strategies
- **Error Resilience**: When processing multiple documents, individual failures are logged but don't crash the operation. Failed documents are skipped while successful ones continue processing.
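
The error-resilience behavior can be pictured as a skip-on-failure loop. This is a minimal sketch with a stand-in `ingest_one` function, not the RAG Tool's actual implementation:

```python
import logging

logger = logging.getLogger(__name__)


def ingest_documents(documents, ingest_one):
    """Ingest each document; log and skip failures instead of aborting the batch."""
    succeeded, failed = [], []
    for doc in documents:
        try:
            ingest_one(doc)
            succeeded.append(doc)
        except Exception as exc:
            # A single bad document is logged and skipped, not fatal
            logger.warning("Skipping document %r: %s", doc, exc)
            failed.append(doc)
    return succeeded, failed


# Stand-in ingest function that rejects empty documents
def ingest_one(doc):
    if not doc:
        raise ValueError("empty document")


ok, bad = ingest_documents(["doc-a", "", "doc-b"], ingest_one)
print(ok, bad)  # ['doc-a', 'doc-b'] ['']
```

The successful documents are fully ingested even though one document in the middle of the batch failed.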

### Migration Path

We recommend migrating to the OpenAI-compatible Search API for:

1. **Better OpenAI Ecosystem Integration**: Direct compatibility with OpenAI tools and workflows, including the Responses API
2. **Future-Proofing**: Continued support and feature development
3. **Full OpenAI Compatibility**: Vector Stores, Files, and Search APIs are fully compatible with OpenAI's Responses API

The OpenAI APIs are used under the hood, so you can continue to use your existing RAG Tool code with minimal changes. However, we recommend updating your code to use the new OpenAI-compatible APIs for better long-term support. If any documents fail to process, they are logged in the response but do not cause the entire operation to fail.

### RAG Tool Example

```python
from llama_stack_client import RAGDocument

urls = ["memory_optimizations.rst", "chat.rst", "llama3.rst"]
documents = [
    RAGDocument(
        document_id=f"num-{i}",
        content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
        mime_type="text/plain",
        metadata={},
    )
    for i, url in enumerate(urls)
]

client.tool_runtime.rag_tool.insert(
    documents=documents,
    vector_db_id=vector_db_id,
    chunk_size_in_tokens=512,
)

# Query documents
results = client.tool_runtime.rag_tool.query(
    vector_db_ids=[vector_db_id],
    content="What do you know about...",
)
```

### Custom Context Configuration

You can configure how the RAG Tool adds metadata to the context if you find it useful for your application:

```python
# Query documents with a custom chunk template
results = client.tool_runtime.rag_tool.query(
    vector_db_ids=[vector_db_id],
    content="What do you know about...",
    query_config={
        "chunk_template": "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n",
    },
)
```
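
To see what `chunk_template` controls, here is roughly how such a template expands for each retrieved chunk. This sketch uses plain `str.format` semantics and a stand-in chunk object; the real query path may differ in details:

```python
from types import SimpleNamespace

chunk_template = "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n"

# Stand-in for a retrieved chunk with content and metadata attributes
chunks = [
    SimpleNamespace(
        content="PyTorch supports activation checkpointing.",
        metadata={"document_id": "num-0"},
    ),
]

# Each chunk is rendered through the template, then concatenated into context
context = "".join(
    chunk_template.format(index=i + 1, chunk=c, metadata=c.metadata)
    for i, c in enumerate(chunks)
)
print(context)
```

Changing the template lets you control exactly what the model sees for each retrieved result, e.g. dropping metadata entirely or adding separators.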
## Building RAG-Enhanced Agents

One of the most powerful patterns is combining agents with RAG capabilities. Here's a complete example:

### Agent with Knowledge Search

```python
from llama_stack_client import Agent

# Create an agent with access to the knowledge_search tool
agent = Agent(
    client,
    model="meta-llama/Llama-3.3-70B-Instruct",
    instructions="You are a helpful assistant",
    tools=[
        {
            "name": "builtin::rag/knowledge_search",
            "args": {
                "vector_db_ids": [vector_db_id],
                # Defaults
                "query_config": {
                    "chunk_size_in_tokens": 512,
                    "chunk_overlap_in_tokens": 0,
                    "chunk_template": "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n",
                },
            },
        }
    ],
)
session_id = agent.create_session("rag_session")

# Ask questions about documents in the vector db; the agent will query the db to answer.
response = agent.create_turn(
    messages=[{"role": "user", "content": "How to optimize memory in PyTorch?"}],
    session_id=session_id,
)
```

:::tip[Agent Instructions]
The `instructions` field in the `AgentConfig` can be used to guide the agent's behavior. It is important to experiment with different instructions to see what works best for your use case.
:::

### Document-Aware Conversations

You can also pass documents along with the user's message and ask questions about them:

```python
# Initial document ingestion
response = agent.create_turn(
    messages=[
        {"role": "user", "content": "I am providing some documents for reference."}
    ],
    documents=[
        {
            "content": "https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/memory_optimizations.rst",
            "mime_type": "text/plain",
        }
    ],
    session_id=session_id,
)

# Query with RAG
response = agent.create_turn(
    messages=[{"role": "user", "content": "What are the key topics in the documents?"}],
    session_id=session_id,
)
```

### Viewing Agent Responses

You can print the response with the following:

```python
from llama_stack_client import AgentEventLogger

for log in AgentEventLogger().log(response):
    log.print()
```

## Vector Database Management

### Unregistering Vector DBs

If you need to clean up and unregister vector databases, you can do so as follows:

<Tabs>
<TabItem value="single" label="Single Database">

```python
# Unregister a specified vector database
vector_db_id = "my_vector_db_id"
print(f"Unregistering vector database: {vector_db_id}")
client.vector_dbs.unregister(vector_db_id=vector_db_id)
```

</TabItem>
<TabItem value="all" label="All Databases">

```python
# Unregister all vector databases
for vector_db_id in client.vector_dbs.list():
    print(f"Unregistering vector database: {vector_db_id.identifier}")
    client.vector_dbs.unregister(vector_db_id=vector_db_id.identifier)
```

</TabItem>
</Tabs>

## Best Practices

### 🎯 **Document Chunking**

- Use appropriate chunk sizes (512 tokens is often a good starting point)
- Consider overlap between chunks for better context preservation
- Experiment with different chunking strategies for your content type
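
The chunk-size and overlap advice above can be sketched as a simple word-based splitter. This is an illustrative approximation only; real chunkers (including the RAG Tool's) count tokens rather than whitespace-separated words:

```python
def chunk_words(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into word chunks where consecutive chunks share `overlap` words.

    Requires chunk_size > overlap.
    """
    words = text.split()
    step = chunk_size - overlap
    return [
        " ".join(words[i : i + chunk_size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]


# Tiny example: chunks of 4 words with 2 words of overlap
chunks = chunk_words("one two three four five six seven eight", chunk_size=4, overlap=2)
print(chunks)
# ['one two three four', 'three four five six', 'five six seven eight']
```

The shared words at each boundary are what preserve context across chunks: a sentence cut at a chunk edge still appears whole in one of the two neighbors.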

### 🔍 **Embedding Strategy**

- Choose embedding models that match your domain
- Consider the trade-off between embedding dimension and performance
- Test different embedding models for your specific use case

### 📊 **Query Optimization**

- Use specific, well-formed queries for better retrieval
- Experiment with different search strategies
- Consider hybrid approaches (keyword + semantic search)
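
One common way to combine keyword and semantic search is reciprocal rank fusion (RRF), which merges two ranked lists without needing their scores to be comparable. This is a generic sketch of the technique, not a Llama Stack API:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge ranked doc-id lists; each doc scores sum(1 / (k + rank))."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


keyword = ["doc2", "doc1", "doc3"]  # e.g. BM25 ranking
semantic = ["doc1", "doc4", "doc2"]  # e.g. embedding-similarity ranking
print(reciprocal_rank_fusion([keyword, semantic]))
# ['doc1', 'doc2', 'doc4', 'doc3']
```

Documents ranked highly by both retrievers float to the top, while documents seen by only one retriever are still represented.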

### 🛡️ **Error Handling**

- Implement proper error handling for failed document processing
- Monitor ingestion success rates
- Have fallback strategies for retrieval failures

## Appendix

### More RAGDocument Examples

Here are various ways to create `RAGDocument` objects for different content types:

```python
import base64

import requests
from llama_stack_client import RAGDocument

# File URI
RAGDocument(document_id="num-0", content={"uri": "file://path/to/file"})

# Plain text
RAGDocument(document_id="num-1", content="plain text")

# Explicit text input
RAGDocument(
    document_id="num-2",
    content={
        "type": "text",
        "text": "plain text input",
    },  # for inputs that should be treated as text explicitly
)

# Image from URL
RAGDocument(
    document_id="num-3",
    content={
        "type": "image",
        "image": {"url": {"uri": "https://mywebsite.com/image.jpg"}},
    },
)

# Base64 encoded image
B64_ENCODED_IMAGE = base64.b64encode(
    requests.get(
        "https://raw.githubusercontent.com/meta-llama/llama-stack/refs/heads/main/docs/_static/llama-stack.png"
    ).content
)
RAGDocument(
    document_id="num-4",
    content={"type": "image", "image": {"data": B64_ENCODED_IMAGE}},
)
```

For more strongly typed interaction use the typed dicts found [here](https://github.com/meta-llama/llama-stack-client-python/blob/38cd91c9e396f2be0bec1ee96a19771582ba6f17/src/llama_stack_client/types/shared_params/document.py).