docs: Add Files API and Vector Store integration documentation

- Add comprehensive Files API documentation with OpenAI-compatible endpoints
- Create file operations and vector store integration guide
- Add vector store provider docs for FAISS, SQLite-vec, Milvus, ChromaDB, Qdrant, Weaviate, PGVector
- Support for release 0.2.14 FileResponse and Vector Store API features
- Refactor documentation to focus on OpenAI APIs as primary interface
- Remove redundant 'OpenAI-compatible' qualifiers throughout docs
- Rename openai_file_operations_vector_stores.md to file_operations_vector_stores.md
- Update cross-references and documentation structure
Akram Ben Aissi 2025-08-29 19:07:56 +02:00
parent f7c5ef4ec0
commit 14c21aec67
6 changed files with 1248 additions and 2 deletions


@@ -7,7 +7,7 @@ sidebar_position: 1
# APIs
A Llama Stack API is described as a collection of REST endpoints. We currently support the following APIs:
A Llama Stack API is described as a collection of REST endpoints following OpenAI API standards. We currently support the following APIs:
- **Inference**: run inference with a LLM
- **Safety**: apply safety policies to the output at a Systems (not only model) level
@@ -16,13 +16,27 @@ A Llama Stack API is described as a collection of REST endpoints. We currently s
- **Scoring**: evaluate outputs of the system
- **Eval**: generate outputs (via Inference or Agents) and perform scoring
- **VectorIO**: perform operations on vector stores, such as adding documents, searching, and deleting documents
- **Files**: manage file uploads, storage, and retrieval
- **Telemetry**: collect telemetry data from the system
- **Post Training**: fine-tune a model
- **Tool Runtime**: interact with various tools and protocols
- **Responses**: generate responses from an LLM using this OpenAI compatible API.
- **Responses**: generate responses from an LLM
We are working on adding a few more APIs to complete the application lifecycle. These will include:
- **Batch Inference**: run inference on a dataset of inputs
- **Batch Agents**: run agents on a dataset of inputs
- **Synthetic Data Generation**: generate synthetic data for model development
- **Batches**: OpenAI-compatible batch management for inference
## OpenAI API Compatibility
We are working on adding OpenAI API compatibility to Llama Stack. This will allow you to use Llama Stack with OpenAI API clients and tools.
### File Operations and Vector Store Integration
The Files API and Vector Store APIs work together through file operations, enabling automatic document processing and search. This integration implements the [OpenAI Vector Store Files API specification](https://platform.openai.com/docs/api-reference/vector-stores-files) and allows you to:
- Upload documents through the Files API
- Automatically process and chunk documents into searchable vectors
- Store processed content in any of the [supported vector store providers](../../providers/index.mdx)
- Search through documents using natural language queries
For detailed information about this integration, see [File Operations and Vector Store Integration](../file_operations_vector_stores.md).
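
To make the flow concrete, here is a minimal sketch (assuming a locally running Llama Stack server and placeholder file names; the method names follow the integration guide linked above):

```python
import asyncio

from llama_stack import LlamaStackClient

client = LlamaStackClient("http://localhost:8000")


async def index_and_search():
    # Upload a document through the Files API
    with open("report.pdf", "rb") as f:
        file_info = await client.files.upload(file=f, purpose="assistants")

    # Create a vector store and attach the file;
    # chunking and embedding happen automatically
    vector_store = client.vector_stores.create(name="reports")
    await client.vector_stores.files.create(
        vector_store_id=vector_store.id, file_id=file_info.id
    )

    # Search the processed document with a natural-language query
    return await client.vector_stores.search(
        vector_store_id=vector_store.id,
        query="What are the key findings?",
        max_num_results=3,
    )


results = asyncio.run(index_and_search())
```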


@@ -0,0 +1,423 @@
# File Operations and Vector Store Integration
## Overview
Llama Stack provides seamless integration between the Files API and Vector Store APIs, enabling you to upload documents and automatically process them into searchable vector embeddings. This integration implements file operations following the [OpenAI Vector Store Files API specification](https://platform.openai.com/docs/api-reference/vector-stores-files).
## Enhanced Capabilities Beyond OpenAI
While Llama Stack maintains full compatibility with OpenAI's Vector Store API, it provides several additional capabilities that enhance functionality and flexibility:
### **Embedding Model Specification**
Unlike OpenAI's vector stores, which use a fixed embedding model, Llama Stack lets you specify which embedding model to use when creating a vector store:
```python
# Create vector store with specific embedding model
vector_store = client.vector_stores.create(
name="my_documents",
embedding_model="all-MiniLM-L6-v2", # Specify your preferred model
embedding_dimension=384,
)
```
### **Advanced Search Modes**
Llama Stack supports multiple search modes beyond basic vector similarity:
- **Vector Search**: Pure semantic similarity search using embeddings
- **Keyword Search**: Traditional keyword-based search for exact matches
- **Hybrid Search**: Combines both vector and keyword search for optimal results
```python
# Different search modes
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="machine learning algorithms",
    search_mode="hybrid",  # or "vector", "keyword"
    max_num_results=5,
)
```
### **Flexible Ranking Options**
For hybrid search, Llama Stack offers configurable ranking strategies:
- **RRF (Reciprocal Rank Fusion)**: Combines rankings with configurable impact factor
- **Weighted Ranker**: Linear combination of vector and keyword scores with adjustable weights
```python
# Custom ranking configuration
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="neural networks",
    search_mode="hybrid",
    ranking_options={
        "ranker": {"type": "weighted", "alpha": 0.7}  # 70% vector, 30% keyword
    },
)
```
### **Provider Selection**
Choose from multiple vector store providers based on your specific needs:
- **Inline Providers**: FAISS (fast in-memory), SQLite-vec (disk-based), Milvus (high-performance)
- **Remote Providers**: ChromaDB, Qdrant, Weaviate, Postgres (PGVector), Milvus
```python
# Specify provider when creating vector store
vector_store = client.vector_stores.create(
name="my_documents", provider_id="sqlite-vec" # Choose your preferred provider
)
```
## How It Works
File operations move through several key stages:
1. **File Upload**: Documents are uploaded through the Files API
2. **Automatic Processing**: Files are automatically chunked and converted to embeddings
3. **Vector Storage**: Chunks are stored in vector databases with metadata
4. **Search & Retrieval**: Users can search through processed documents using natural language
## Supported Vector Store Providers
The following vector store providers support file operations:
### Inline Providers (Single Node)
- **FAISS**: Fast in-memory vector similarity search
- **SQLite-vec**: Disk-based storage with hybrid search capabilities
- **Milvus**: High-performance vector database with advanced indexing
### Remote Providers (Hosted)
- **ChromaDB**: Vector database with metadata filtering
- **Qdrant**: Vector similarity search with payload filtering
- **Weaviate**: Vector database with GraphQL interface
- **Postgres (PGVector)**: Vector extensions for PostgreSQL
## File Processing Pipeline
### 1. File Upload
```python
from llama_stack import LlamaStackClient
client = LlamaStackClient("http://localhost:8000")
# Upload a document
with open("document.pdf", "rb") as f:
    file_info = await client.files.upload(file=f, purpose="assistants")
```
### 2. Attach to Vector Store
```python
# Create a vector store
vector_store = client.vector_stores.create(name="my_documents")
# Attach the file to the vector store
file_attach_response = await client.vector_stores.files.create(
    vector_store_id=vector_store.id, file_id=file_info.id
)
```
### 3. Automatic Processing
The system automatically:
- Detects the file type and extracts text content
- Splits content into chunks (default: 800 tokens with 400 token overlap)
- Generates embeddings for each chunk
- Stores chunks with metadata in the vector store
- Updates file status to "completed"
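
Because processing runs asynchronously, you typically poll the file's status after attaching it and proceed once it reports `completed`. A minimal sketch, reusing the `client`, vector store, and file from the previous steps:

```python
import asyncio
import time


async def wait_for_processing(vector_store_id, file_id, timeout_s=120.0):
    """Poll the vector store file until processing finishes or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while True:
        vs_file = await client.vector_stores.files.retrieve(
            vector_store_id=vector_store_id, file_id=file_id
        )
        if vs_file.status in ("completed", "failed", "cancelled"):
            return vs_file
        if time.monotonic() > deadline:
            raise TimeoutError(f"file {file_id} still {vs_file.status} after {timeout_s}s")
        await asyncio.sleep(2)  # back off between polls
```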
### 4. Search and Retrieval
```python
# Search through processed documents
search_results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="What is the main topic discussed?",
    max_num_results=5,
)
# Process results
for result in search_results.data:
print(f"Score: {result.score}")
for content in result.content:
print(f"Content: {content.text}")
```
## Supported File Types
File processing supports a variety of document formats:
- **Text Files**: `.txt`, `.md`, `.rst`
- **Documents**: `.pdf`, `.docx`, `.doc`
- **Code**: `.py`, `.js`, `.java`, `.cpp`, etc.
- **Data**: `.json`, `.csv`, `.xml`
- **Web Content**: HTML files
## Chunking Strategies
### Default Strategy
The default chunking strategy uses:
- **Max Chunk Size**: 800 tokens
- **Overlap**: 400 tokens
- **Method**: Semantic boundary detection
### Custom Chunking
You can customize chunking when attaching files:
```python
from llama_stack.apis.vector_io import VectorStoreChunkingStrategy
# Custom chunking strategy
chunking_strategy = VectorStoreChunkingStrategy(
type="custom", max_chunk_size_tokens=1000, chunk_overlap_tokens=200
)
# Attach file with custom chunking
file_attach_response = await client.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id=file_info.id,
    chunking_strategy=chunking_strategy,
)
```
**Note**: While Llama Stack is OpenAI-compatible, it also supports additional options beyond the standard OpenAI API. When creating vector stores, you can specify custom embedding models and embedding dimensions that will be used when processing chunks from attached files.
## File Management
### List Files in Vector Store
```python
# List all files in a vector store
files = await client.vector_stores.files.list(vector_store_id=vector_store.id)
for file in files:
print(f"File: {file.filename}, Status: {file.status}")
```
### File Status Tracking
Files go through several statuses:
- **in_progress**: File is being processed
- **completed**: File successfully processed and searchable
- **failed**: Processing failed (check `last_error` for details)
- **cancelled**: Processing was cancelled
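
For example, a small sketch (reusing the `client` from the earlier examples) that lists the files in a store and surfaces the error for any that failed:

```python
async def report_failed_files(vector_store_id):
    """Print processing status for each file, including details for failures."""
    files = await client.vector_stores.files.list(vector_store_id=vector_store_id)
    for vs_file in files:
        if vs_file.status == "failed":
            # last_error carries the failure details for this file
            print(f"{vs_file.id}: failed - {vs_file.last_error.message}")
        else:
            print(f"{vs_file.id}: {vs_file.status}")
```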
### Retrieve File Content
```python
# Get chunked content from vector store
content_response = await client.vector_stores.files.retrieve_content(
    vector_store_id=vector_store.id, file_id=file_info.id
)
for chunk in content_response.content:
print(f"Chunk {chunk.metadata.get('chunk_index', 0)}: {chunk.text}")
```
## Vector Store Management
### List Vector Stores
Retrieve a paginated list of all vector stores:
```python
# List all vector stores with default pagination
vector_stores = await client.vector_stores.list()
# Custom pagination and ordering
vector_stores = await client.vector_stores.list(
    limit=10,
    order="asc",  # or "desc"
    after="vs_12345678",  # cursor-based pagination
)
for store in vector_stores.data:
print(f"Store: {store.name}, Files: {store.file_counts.total}")
print(f"Created: {store.created_at}, Status: {store.status}")
```
### Retrieve Vector Store Details
Get detailed information about a specific vector store:
```python
# Get vector store details
store_details = await client.vector_stores.retrieve(vector_store_id="vs_12345678")
print(f"Name: {store_details.name}")
print(f"Status: {store_details.status}")
print(f"File Counts: {store_details.file_counts}")
print(f"Usage: {store_details.usage_bytes} bytes")
print(f"Created: {store_details.created_at}")
print(f"Metadata: {store_details.metadata}")
```
### Update Vector Store
Modify vector store properties such as name, metadata, or expiration settings:
```python
# Update vector store name and metadata
updated_store = await client.vector_stores.update(
vector_store_id="vs_12345678",
name="Updated Document Collection",
metadata={
"description": "Updated collection for research",
"category": "research",
"version": "2.0",
},
)
# Set expiration policy
expired_store = await client.vector_stores.update(
vector_store_id="vs_12345678",
expires_after={"anchor": "last_active_at", "days": 30},
)
print(f"Updated store: {updated_store.name}")
print(f"Last active: {updated_store.last_active_at}")
```
### Delete Vector Store
Remove a vector store and all its associated data:
```python
# Delete a vector store
delete_response = await client.vector_stores.delete(vector_store_id="vs_12345678")
if delete_response.deleted:
    print(f"Vector store {delete_response.id} successfully deleted")
else:
    print("Failed to delete vector store")
```
**Important Notes:**
- Deleting a vector store removes all files, chunks, and embeddings
- This operation cannot be undone
- The underlying vector database is also cleaned up
- Consider backing up important data before deletion
## Search Capabilities
### Vector Search
Pure similarity search using embeddings:
```python
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="machine learning algorithms",
    max_num_results=10,
)
```
### Filtered Search
Combine vector search with metadata filtering:
```python
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="machine learning algorithms",
    filters={"file_type": "pdf", "upload_date": "2024-01-01"},
    max_num_results=10,
)
```
### Hybrid Search
[SQLite-vec](../providers/vector_io/inline_sqlite-vec.md), [pgvector](../providers/vector_io/remote_pgvector.md), and [Milvus](../providers/vector_io/inline_milvus.md) support combining vector and keyword search.
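A hybrid query with the RRF ranker might look like the sketch below. The ranker option names (such as `impact_factor`) are illustrative assumptions; check your provider's documentation for the exact fields it accepts.

```python
# Hybrid search combining vector similarity with keyword matching,
# fused with Reciprocal Rank Fusion (RRF)
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="transformer architectures",
    search_mode="hybrid",
    ranking_options={
        "ranker": {"type": "rrf", "impact_factor": 60.0}  # illustrative parameter name
    },
    max_num_results=10,
)
```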
## Performance Considerations
> **Note**: For detailed performance optimization strategies, see [Performance Considerations](../providers/files/openai_file_operations_support.md#performance-considerations) in the provider documentation.
**Key Points:**
- **Chunk Size**: 400-600 tokens for precision, 800-1200 for context
- **Storage**: Choose provider based on your performance needs
- **Search**: Optimize for your specific use case
## Error Handling
> **Note**: For comprehensive troubleshooting and error handling, see [Troubleshooting](../providers/files/openai_file_operations_support.md#troubleshooting) in the provider documentation.
**Common Issues:**
- File processing failures (format, size limits)
- Search performance optimization
- Storage and memory issues
## Best Practices
> **Note**: For detailed best practices and recommendations, see [Best Practices](../providers/files/openai_file_operations_support.md#best-practices) in the provider documentation.
**Key Recommendations:**
- File organization and naming conventions
- Chunking strategy optimization
- Metadata and monitoring practices
- Regular cleanup and maintenance
## Integration Examples
### RAG Application
```python
# Build a RAG system with file uploads
async def build_rag_system():
    # Create vector store
    vector_store = client.vector_stores.create(name="knowledge_base")

    # Upload and process documents
    documents = ["doc1.pdf", "doc2.pdf", "doc3.pdf"]
    for doc in documents:
        with open(doc, "rb") as f:
            file_info = await client.files.create(file=f, purpose="assistants")
        await client.vector_stores.files.create(
            vector_store_id=vector_store.id, file_id=file_info.id
        )
    return vector_store


# Query the RAG system
async def query_rag(vector_store_id, question):
    results = await client.vector_stores.search(
        vector_store_id=vector_store_id, query=question, max_num_results=5
    )
    return results
```
### Document Analysis
```python
# Analyze document content through vector search
async def analyze_document(vector_store_id, file_id):
    # Get document content
    content = await client.vector_stores.files.retrieve_content(
        vector_store_id=vector_store_id, file_id=file_id
    )

    # Search for specific topics
    topics = ["introduction", "methodology", "conclusion"]
    analysis = {}
    for topic in topics:
        results = await client.vector_stores.search(
            vector_store_id=vector_store_id, query=topic, max_num_results=3
        )
        analysis[topic] = results.data
    return analysis
```
## Next Steps
- Explore the [Files API documentation](../apis/files.md) for detailed API reference
- Check [Vector Store Providers](../providers/vector_io/index.md) for specific implementation details
- Review [Getting Started](../getting_started/index.md) for quick setup instructions


@@ -0,0 +1,287 @@
# Files API
## Overview
The Files API provides file management capabilities for Llama Stack. It allows you to upload, store, retrieve, and manage files that can be used across various endpoints in your application.
## Features
- **File Upload**: Upload files with metadata and purpose classification
- **File Management**: List, retrieve, and delete files
- **Content Retrieval**: Access raw file content for processing
- **API Compatibility**: Full compatibility with OpenAI Files API endpoints
- **Flexible Storage**: Support for local filesystem and cloud storage backends
## API Endpoints
### Upload File
**POST** `/v1/openai/v1/files`
Upload a file that can be used across various endpoints.
**Request Body:**
- `file`: The file object to be uploaded (multipart form data)
- `purpose`: The intended purpose of the uploaded file
**Supported Purposes:**
- `batch`: Files for batch operations
**Response:**
```json
{
  "id": "file-abc123",
  "object": "file",
  "bytes": 140,
  "created_at": 1613779121,
  "filename": "mydata.jsonl",
  "purpose": "batch"
}
```
**Example:**
```python
import requests
with open("data.jsonl", "rb") as f:
files = {"file": f}
data = {"purpose": "batch"}
response = requests.post(
"http://localhost:8000/v1/openai/v1/files", files=files, data=data
)
file_info = response.json()
```
### List Files
**GET** `/v1/openai/v1/files`
Returns a list of files that belong to the user's organization.
**Query Parameters:**
- `after` (optional): A cursor for pagination
- `limit` (optional): Limit on number of objects (1-10,000, default: 10,000)
- `order` (optional): Sort order by created_at timestamp (`asc` or `desc`, default: `desc`)
- `purpose` (optional): Filter files by purpose
**Response:**
```json
{
  "object": "list",
  "data": [
    {
      "id": "file-abc123",
      "object": "file",
      "bytes": 140,
      "created_at": 1613779121,
      "filename": "mydata.jsonl",
      "purpose": "fine-tune"
    }
  ],
  "has_more": false
}
```
**Example:**
```python
import requests
# List all files
response = requests.get("http://localhost:8000/v1/openai/v1/files")
files = response.json()
# List files with pagination
response = requests.get(
"http://localhost:8000/v1/openai/v1/files",
params={"limit": 10, "after": "file-abc123"},
)
files = response.json()
# Filter by purpose
response = requests.get(
"http://localhost:8000/v1/openai/v1/files", params={"purpose": "fine-tune"}
)
files = response.json()
```
### Retrieve File
**GET** `/v1/openai/v1/files/{file_id}`
Returns information about a specific file.
**Path Parameters:**
- `file_id`: The ID of the file to retrieve
**Response:**
```json
{
  "id": "file-abc123",
  "object": "file",
  "bytes": 140,
  "created_at": 1613779121,
  "filename": "mydata.jsonl",
  "purpose": "fine-tune"
}
```
**Example:**
```python
import requests
file_id = "file-abc123"
response = requests.get(f"http://localhost:8000/v1/openai/v1/files/{file_id}")
file_info = response.json()
```
### Delete File
**DELETE** `/v1/openai/v1/files/{file_id}`
Delete a file.
**Path Parameters:**
- `file_id`: The ID of the file to delete
**Response:**
```json
{
  "id": "file-abc123",
  "object": "file",
  "deleted": true
}
```
**Example:**
```python
import requests
file_id = "file-abc123"
response = requests.delete(f"http://localhost:8000/v1/openai/v1/files/{file_id}")
result = response.json()
```
### Retrieve File Content
**GET** `/v1/openai/v1/files/{file_id}/content`
Returns the raw file content as a binary response.
**Path Parameters:**
- `file_id`: The ID of the file to retrieve content from
**Response:**
Binary file content with appropriate headers:
- `Content-Type`: `application/octet-stream`
- `Content-Disposition`: `attachment; filename="filename"`
**Example:**
```python
import requests
file_id = "file-abc123"
response = requests.get(f"http://localhost:8000/v1/openai/v1/files/{file_id}/content")
# Save content to file
with open("downloaded_file.jsonl", "wb") as f:
    f.write(response.content)
# Or process content directly
content = response.content
```
## Vector Store Integration
The Files API integrates with Vector Stores to enable document processing and search. For detailed information about this integration, see [File Operations and Vector Store Integration](../concepts/file_operations_vector_stores.md).
### Vector Store File Operations
**List Vector Store Files:**
- **GET** `/v1/openai/v1/vector_stores/{vector_store_id}/files`
**Retrieve Vector Store File Content:**
- **GET** `/v1/openai/v1/vector_stores/{vector_store_id}/files/{file_id}/content`
**Attach File to Vector Store:**
- **POST** `/v1/openai/v1/vector_stores/{vector_store_id}/files`
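
A short `requests` sketch against these endpoints (the base URL, vector store ID, and file ID are placeholders; response shapes follow the list format shown earlier):

```python
import requests

base = "http://localhost:8000/v1/openai/v1"
vector_store_id = "vs_12345678"

# Attach an uploaded file to the vector store
attach = requests.post(
    f"{base}/vector_stores/{vector_store_id}/files",
    json={"file_id": "file-abc123"},
)
attach.raise_for_status()

# List files attached to the vector store
listing = requests.get(f"{base}/vector_stores/{vector_store_id}/files")
for vs_file in listing.json()["data"]:
    print(vs_file["id"], vs_file.get("status"))

# Fetch the chunked content for one file
content = requests.get(
    f"{base}/vector_stores/{vector_store_id}/files/file-abc123/content"
)
print(content.json())
```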
## Error Handling
The Files API returns standard HTTP status codes and error responses:
- `400 Bad Request`: Invalid request parameters
- `404 Not Found`: File not found
- `429 Too Many Requests`: Rate limit exceeded
- `500 Internal Server Error`: Server error
**Error Response Format:**
```json
{
  "error": {
    "message": "Error description",
    "type": "invalid_request_error",
    "code": "file_not_found"
  }
}
```
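
In practice, branch on the HTTP status code and fall back gracefully if the body is not the standard error envelope. A sketch using `requests`:

```python
import requests

response = requests.get("http://localhost:8000/v1/openai/v1/files/file-does-not-exist")

if not response.ok:
    try:
        error = response.json()["error"]
        print(f"{response.status_code}: {error['type']}/{error['code']} - {error['message']}")
    except (ValueError, KeyError):
        # Response body was not the standard error envelope
        print(f"{response.status_code}: {response.text}")
else:
    file_info = response.json()
```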
## Rate Limits
The Files API implements rate limiting to ensure fair usage:
- File uploads: 100 files per minute
- File retrievals: 1000 requests per minute
- File deletions: 100 requests per minute
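
When a limit is exceeded the API responds with `429 Too Many Requests`, so clients should back off and retry. A simple sketch (retry counts and delays are illustrative):

```python
import time

import requests


def get_with_backoff(url, max_retries=5):
    """Retry a GET request with exponential backoff when rate limited."""
    delay = 1.0
    for _ in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            return response
        time.sleep(delay)  # wait before retrying
        delay *= 2
    raise RuntimeError(f"still rate limited after {max_retries} retries: {url}")
```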
## Best Practices
1. **File Organization**: Use descriptive filenames and appropriate purpose classifications
2. **Batch Operations**: For multiple files, consider using batch endpoints when available
3. **Error Handling**: Always check response status codes and handle errors gracefully
4. **Content Types**: Ensure files are uploaded with appropriate content types
5. **Cleanup**: Regularly delete unused files to manage storage costs
## Integration Examples
### With Python Client
```python
from llama_stack import LlamaStackClient
client = LlamaStackClient("http://localhost:8000")
# Upload a file
with open("data.jsonl", "rb") as f:
    file_info = await client.files.upload(file=f, purpose="fine-tune")
# List files
files = await client.files.list(purpose="fine-tune")
# Retrieve file content
content = await client.files.retrieve_content(file_info.id)
```
### With cURL
```bash
# Upload file
curl -X POST http://localhost:8000/v1/openai/v1/files \
  -F "file=@data.jsonl" \
  -F "purpose=fine-tune"

# List files
curl http://localhost:8000/v1/openai/v1/files

# Download file content
curl http://localhost:8000/v1/openai/v1/files/file-abc123/content \
  -o downloaded_file.jsonl
```
## Provider Support
The Files API supports multiple storage backends:
- **Local Filesystem**: Store files on local disk (inline provider)
- **S3**: Store files in AWS S3 or S3-compatible services (remote provider)
- **Custom Backends**: Extensible architecture for custom storage providers
See the [Files Providers](index.md) documentation for detailed configuration options.


@@ -0,0 +1,80 @@
# File Operations Quick Reference
## Overview
As of release 0.2.14, Llama Stack provides comprehensive file operations and Vector Store API integration, following the [OpenAI Vector Store Files API specification](https://platform.openai.com/docs/api-reference/vector-stores-files).
> **Note**: For detailed overview and implementation details, see [Overview](../openai_file_operations_support.md#overview) in the full documentation.
## Supported Providers
> **Note**: For complete provider details and features, see [Supported Providers](../openai_file_operations_support.md#supported-providers) in the full documentation.
- **Inline Providers**: FAISS, SQLite-vec, Milvus
- **Remote Providers**: ChromaDB, Qdrant, Weaviate, PGVector
## Quick Start
### 1. Upload File
```python
file_info = await client.files.upload(
    file=open("document.pdf", "rb"), purpose="assistants"
)
```
### 2. Create Vector Store
```python
vector_store = client.vector_stores.create(name="my_docs")
```
### 3. Attach File
```python
await client.vector_stores.files.create(
    vector_store_id=vector_store.id, file_id=file_info.id
)
```
### 4. Search
```python
results = await client.vector_stores.search(
    vector_store_id=vector_store.id, query="What is the main topic?", max_num_results=5
)
```
## File Processing & Search
- **Processing**: 800-token default chunk size with 400-token overlap
- **Formats**: PDF, DOCX, TXT, code files, etc.
- **Search**: vector similarity, hybrid (SQLite-vec), and metadata-filtered search
## Configuration
> **Note**: For detailed configuration examples and options, see [Configuration Examples](../openai_file_operations_support.md#configuration-examples) in the full documentation.
**Basic Setup**: Configure vector_io and files providers in your run.yaml
## Common Use Cases
- **RAG Systems**: Document Q&A with file uploads
- **Knowledge Bases**: Searchable document collections
- **Content Analysis**: Document similarity and clustering
- **Research Tools**: Literature review and analysis
## Performance Tips
> **Note**: For detailed performance optimization strategies, see [Performance Considerations](../openai_file_operations_support.md#performance-considerations) in the full documentation.
**Quick Tips**: Choose a provider based on your needs (speed vs. storage vs. scalability)
## Troubleshooting
> **Note**: For comprehensive troubleshooting, see [Troubleshooting](../openai_file_operations_support.md#troubleshooting) in the full documentation.
**Quick Fixes**: Check file format compatibility, optimize chunk sizes, monitor storage
## Resources
- [Full Documentation](openai_file_operations_support.md)
- [Integration Guide](../concepts/file_operations_vector_stores.md)
- [Files API](files_api.md)
- [Provider Details](../vector_io/index.md)


@@ -0,0 +1,292 @@
# File Operations Support in Vector Store Providers
## Overview
This document provides a comprehensive overview of file operations and Vector Store API support across all available vector store providers in Llama Stack. As of release 0.2.14, the following providers support full file operations integration.
## Supported Providers
### ✅ Full File Operations Support
The following providers support complete file operations integration, including file upload, automatic processing, and search:
#### Inline Providers (Single Node)
| Provider | File Operations | Key Features |
|----------|----------------|--------------|
| **FAISS** | ✅ Full Support | Fast in-memory search, GPU acceleration |
| **SQLite-vec** | ✅ Full Support | Hybrid search, disk-based storage |
| **Milvus** | ✅ Full Support | High-performance, scalable indexing |
#### Remote Providers (Hosted)
| Provider | File Operations | Key Features |
|----------|----------------|--------------|
| **ChromaDB** | ✅ Full Support | Metadata filtering, persistent storage |
| **Qdrant** | ✅ Full Support | Payload filtering, advanced search |
| **Weaviate** | ✅ Full Support | GraphQL interface, schema management |
| **Postgres (PGVector)** | ✅ Full Support | SQL integration, ACID compliance |
### 🔄 Partial Support
Some providers may support basic vector operations but lack full file operations integration:
| Provider | Status | Notes |
|----------|--------|-------|
| **Meta Reference** | 🔄 Basic | Core vector operations only |
## File Operations Features
All supported providers offer the following file operations capabilities:
### Core Functionality
- **File Upload & Processing**: Automatic document ingestion and chunking
- **Vector Storage**: Embedding generation and storage
- **Search & Retrieval**: Semantic search with metadata filtering
- **File Management**: List, retrieve, and manage files in vector stores
### Advanced Features
- **Automatic Chunking**: Configurable chunk sizes and overlap
- **Metadata Preservation**: File attributes and chunk metadata
- **Status Tracking**: Monitor file processing progress
- **Error Handling**: Comprehensive error reporting and recovery
## Implementation Details
### File Processing Pipeline
1. **Upload**: File uploaded via Files API
2. **Extraction**: Text content extracted from various formats
3. **Chunking**: Content split into optimal chunks (default: 800 tokens)
4. **Embedding**: Chunks converted to vector embeddings
5. **Storage**: Vectors stored with metadata in vector database
6. **Indexing**: Search index updated for fast retrieval
### Supported File Formats
- **Documents**: PDF, DOCX, DOC
- **Text**: TXT, MD, RST
- **Code**: Python, JavaScript, Java, C++, etc.
- **Data**: JSON, CSV, XML
- **Web**: HTML files
### Chunking Strategies
- **Default**: 800 tokens with 400 token overlap
- **Custom**: Configurable chunk sizes and overlap
- **Semantic**: Intelligent boundary detection
- **Static**: Fixed-size chunks with overlap
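
A custom strategy can be supplied when attaching a file, as in this sketch, which mirrors the example in the integration guide (parameter names follow that guide):

```python
from llama_stack.apis.vector_io import VectorStoreChunkingStrategy

# Larger chunks with a smaller overlap than the 800/400 default
chunking_strategy = VectorStoreChunkingStrategy(
    type="custom", max_chunk_size_tokens=1200, chunk_overlap_tokens=100
)

await client.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id=file_info.id,
    chunking_strategy=chunking_strategy,
)
```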
## Provider-Specific Features
### FAISS
- **Storage**: In-memory with optional persistence
- **Performance**: Optimized for speed and GPU acceleration
- **Use Case**: High-performance, memory-constrained environments
### SQLite-vec
- **Storage**: Disk-based with SQLite backend
- **Search**: Hybrid vector + keyword search
- **Use Case**: Large document collections, frequent updates
### Milvus
- **Storage**: Scalable distributed storage
- **Indexing**: Multiple index types (IVF, HNSW)
- **Use Case**: Production deployments, large-scale applications
### ChromaDB
- **Storage**: Persistent storage with metadata
- **Filtering**: Advanced metadata filtering
- **Use Case**: Applications requiring rich metadata
### Qdrant
- **Storage**: High-performance vector database
- **Filtering**: Payload-based filtering
- **Use Case**: Real-time applications, complex queries
### Weaviate
- **Storage**: GraphQL-native vector database
- **Schema**: Flexible schema management
- **Use Case**: Applications requiring complex data relationships
### Postgres (PGVector)
- **Storage**: SQL database with vector extensions
- **Integration**: ACID compliance, existing SQL workflows
- **Use Case**: Applications requiring transactional guarantees
## Configuration Examples
### Basic Configuration
```yaml
vector_io:
  - provider_id: faiss
    provider_type: inline::faiss
    config:
      kvstore:
        type: sqlite
        db_path: ~/.llama/faiss_store.db
```
### With FileResponse Support
```yaml
vector_io:
  - provider_id: faiss
    provider_type: inline::faiss
    config:
      kvstore:
        type: sqlite
        db_path: ~/.llama/faiss_store.db

files:
  - provider_id: local-files
    provider_type: inline::localfs
    config:
      storage_dir: ~/.llama/files
      metadata_store:
        type: sqlite
        db_path: ~/.llama/files_metadata.db
```
## Usage Examples
### Python Client
```python
from llama_stack import LlamaStackClient
client = LlamaStackClient("http://localhost:8000")
# Create vector store
vector_store = client.vector_stores.create(name="documents")
# Upload and process file
with open("document.pdf", "rb") as f:
    file_info = await client.files.upload(file=f, purpose="assistants")
# Attach to vector store
await client.vector_stores.files.create(
    vector_store_id=vector_store.id, file_id=file_info.id
)
# Search
results = await client.vector_stores.search(
    vector_store_id=vector_store.id, query="What is the main topic?", max_num_results=5
)
```
### cURL Commands
```bash
# Upload file
curl -X POST http://localhost:8000/v1/openai/v1/files \
  -F "file=@document.pdf" \
  -F "purpose=assistants"

# Create vector store
curl -X POST http://localhost:8000/v1/openai/v1/vector_stores \
  -H "Content-Type: application/json" \
  -d '{"name": "documents"}'

# Attach file to vector store
curl -X POST http://localhost:8000/v1/openai/v1/vector_stores/{store_id}/files \
  -H "Content-Type: application/json" \
  -d '{"file_id": "file-abc123"}'

# Search vector store
curl -X POST http://localhost:8000/v1/openai/v1/vector_stores/{store_id}/search \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the main topic?", "max_num_results": 5}'
```
## Performance Considerations
### Chunk Size Optimization
- **Small chunks (400-600 tokens)**: Better precision, more results
- **Large chunks (800-1200 tokens)**: Better context, fewer results
- **Overlap (50%)**: Maintains context between chunks
### Storage Efficiency
- **FAISS**: Fastest, but memory-limited
- **SQLite-vec**: Good balance of performance and storage
- **Milvus**: Scalable, production-ready
- **Remote providers**: Managed, but network-dependent
### Search Performance
- **Vector search**: Fastest for semantic queries
- **Hybrid search**: Best accuracy (SQLite-vec only)
- **Filtered search**: Fast with metadata constraints
## Troubleshooting
### Common Issues
1. **File Processing Failures**
- Check file format compatibility
- Verify file size limits
- Review error messages in file status
2. **Search Performance**
- Optimize chunk sizes for your use case
- Use filters to narrow search scope
- Monitor vector store metrics
3. **Storage Issues**
- Check available disk space
- Verify database permissions
- Monitor memory usage (for in-memory providers)
### Monitoring
```python
# Check file processing status
file_status = await client.vector_stores.files.retrieve(
    vector_store_id=vector_store.id, file_id=file_info.id
)
if file_status.status == "failed":
print(f"Error: {file_status.last_error.message}")
# Monitor vector store health
health = await client.vector_stores.health(vector_store_id=vector_store.id)
print(f"Status: {health.status}")
```
## Best Practices
1. **File Organization**: Use descriptive names and organize by purpose
2. **Chunking Strategy**: Test different sizes for your specific use case
3. **Metadata**: Add relevant attributes for better filtering
4. **Monitoring**: Track processing status and search performance
5. **Cleanup**: Regularly remove unused files to manage storage
## Future Enhancements
Planned improvements for file operations support:
- **Batch Processing**: Process multiple files simultaneously
- **Advanced Chunking**: More sophisticated chunking algorithms
- **Custom Embeddings**: Support for custom embedding models
- **Real-time Updates**: Live file processing and indexing
- **Multi-format Support**: Enhanced file format support
## Support and Resources
- **Documentation**: [File Operations and Vector Store Integration](../concepts/file_operations_vector_stores.md)
- **API Reference**: [Files API](files_api.md)
- **Provider Docs**: [Vector Store Providers](../vector_io/index.md)
- **Examples**: [Getting Started](../getting_started/index.md)
- **Community**: [GitHub Discussions](https://github.com/meta-llama/llama-stack/discussions)

docs/source/index.md Normal file

@@ -0,0 +1,150 @@
# Llama Stack
Welcome to Llama Stack, the open-source framework for building generative AI applications.
```{admonition} Llama 4 is here!
:class: tip
Check out [Getting Started with Llama 4](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started_llama4.ipynb)
```
```{admonition} News
:class: tip
Llama Stack {{ llama_stack_version }} is now available! See the {{ llama_stack_version_link }} for more details.
```
## What is Llama Stack?
Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of OpenAI-compatible APIs with implementations from leading service providers, enabling seamless transitions between development and production environments. More specifically, it provides
- **OpenAI-compatible API layer** for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry
- **Plugin architecture** to support the rich ecosystem of implementations of the different APIs in different environments like local development, on-premises, cloud, and mobile
- **Prepackaged verified distributions** which offer a one-stop solution for developers to get started quickly and reliably in any environment
- **Multiple developer interfaces** like CLI and SDKs for Python, Node, iOS, and Android
- **Standalone applications** as examples for how to build production-grade AI applications with Llama Stack
```{image} ../_static/llama-stack.png
:alt: Llama Stack
:width: 400px
```
Our goal is to provide pre-packaged implementations (aka "distributions") which can be run in a variety of deployment environments. Llama Stack can assist you across the entire app development lifecycle: start iterating locally, on mobile, or on desktop, and seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience are available.
## How does Llama Stack work?
Llama Stack consists of a [server](./distributions/index.md) (with multiple pluggable API [providers](./providers/index.md)) and Client SDKs (see below) meant to
be used in your applications. The server can be run in a variety of environments, including local (inline)
development, on-premises, and cloud. The client SDKs are available for Python, Swift, Node, and
Kotlin.
## Quick Links
- Ready to build? Check out the [Quick Start](getting_started/index) to get started.
- Want to contribute? See the [Contributing](contributing/index) guide.
## Supported Llama Stack Implementations
A number of "adapters" are available for some popular Inference and Vector Store providers. For other APIs (particularly Safety and Agents), we provide *reference implementations* you can use to get started. We expect this list to grow over time. We are slowly onboarding more providers to the ecosystem as we get more confidence in the APIs.
**Inference API**
| **Provider** | **Environments** |
| :----: | :----: |
| Meta Reference | Single Node |
| Ollama | Single Node |
| Fireworks | Hosted |
| Together | Hosted |
| NVIDIA NIM | Hosted and Single Node |
| vLLM | Hosted and Single Node |
| TGI | Hosted and Single Node |
| AWS Bedrock | Hosted |
| Cerebras | Hosted |
| Groq | Hosted |
| SambaNova | Hosted |
| PyTorch ExecuTorch | On-device iOS, Android |
| OpenAI | Hosted |
| Anthropic | Hosted |
| Gemini | Hosted |
| WatsonX | Hosted |
**Agents API**
| **Provider** | **Environments** |
| :----: | :----: |
| Meta Reference | Single Node |
| Fireworks | Hosted |
| Together | Hosted |
| PyTorch ExecuTorch | On-device iOS |
**Vector IO API**
| **Provider** | **Environments** |
| :----: | :----: |
| FAISS | Single Node |
| SQLite-Vec | Single Node |
| Chroma | Hosted and Single Node |
| Milvus | Hosted and Single Node |
| Postgres (PGVector) | Hosted and Single Node |
| Weaviate | Hosted |
| Qdrant | Hosted and Single Node |
**Files API (OpenAI-compatible)**
| **Provider** | **Environments** |
| :----: | :----: |
| Local Filesystem | Single Node |
| S3 | Hosted |
**Vector Store Files API (OpenAI-compatible)**
| **Provider** | **Environments** |
| :----: | :----: |
| FAISS | Single Node |
| SQLite-vec | Single Node |
| Milvus | Single Node |
| ChromaDB | Hosted and Single Node |
| Qdrant | Hosted and Single Node |
| Weaviate | Hosted |
| Postgres (PGVector) | Hosted and Single Node |
**Safety API**
| **Provider** | **Environments** |
| :----: | :----: |
| Llama Guard | Depends on Inference Provider |
| Prompt Guard | Single Node |
| Code Scanner | Single Node |
| AWS Bedrock | Hosted |
**Post Training API**
| **Provider** | **Environments** |
| :----: | :----: |
| Meta Reference | Single Node |
| HuggingFace | Single Node |
| TorchTune | Single Node |
| NVIDIA NEMO | Hosted |
**Eval API**
| **Provider** | **Environments** |
| :----: | :----: |
| Meta Reference | Single Node |
| NVIDIA NEMO | Hosted |
**Telemetry API**
| **Provider** | **Environments** |
| :----: | :----: |
| Meta Reference | Single Node |
**Tool Runtime API**
| **Provider** | **Environments** |
| :----: | :----: |
| Brave Search | Hosted |
| RAG Runtime | Single Node |
```{toctree}
:hidden:
:maxdepth: 3
self
getting_started/index
concepts/index
providers/index
distributions/index
advanced_apis/index
building_applications/index
deploying/index
contributing/index
references/index
```