Merge branch 'main' into llama_stack_how_to_documentation
Commit 3661d9f150
3489 changed files with 710200 additions and 535689 deletions
|
|
@ -13,6 +13,42 @@ npm run serve
|
|||
```
|
||||
You can open up the docs in your browser at http://localhost:3000
|
||||
|
||||
## File Import System
|
||||
|
||||
This documentation uses `remark-code-import` to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded during build time.
|
||||
|
||||
### Importing Code Files
|
||||
|
||||
To import Python code (or any code files) with syntax highlighting, use this syntax in `.mdx` files:
|
||||
|
||||
````markdown
```python file=./demo_script.py title="demo_script.py"
```
````
|
||||
|
||||
This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.
|
||||
|
||||
**Note:** Paths are relative to the current `.mdx` file location, not the repository root.
|
||||
|
||||
### Importing Markdown Files as Content
|
||||
|
||||
For importing and rendering markdown files (like CONTRIBUTING.md), use the raw-loader approach:
|
||||
|
||||
```jsx
|
||||
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
|
||||
import ReactMarkdown from 'react-markdown';
|
||||
|
||||
<ReactMarkdown>{Contributing}</ReactMarkdown>
|
||||
```
|
||||
|
||||
**Requirements:**
|
||||
- Install dependencies: `npm install --save-dev raw-loader react-markdown`
|
||||
|
||||
**Path Resolution:**
|
||||
- For `remark-code-import`: Paths are relative to the current `.mdx` file location
|
||||
- For `raw-loader`: Paths are relative to the current `.mdx` file location
|
||||
- Use `../` to navigate up directories as needed
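For example, a page nested two levels below the directory containing a script could import it with a relative path. The path and filename below are illustrative only, not real files in the repository:

````markdown
```python file=../../scripts/example_script.py title="example_script.py"
```
````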
|
||||
|
||||
## Content
|
||||
|
||||
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
|
||||
|
|
|
|||
|
|
@ -51,8 +51,8 @@ device: cpu
|
|||
You can access the HuggingFace trainer via the `starter` distribution:
|
||||
|
||||
```bash
llama stack list-deps starter | xargs -L1 uv pip install
llama stack run starter
```
|
||||
|
||||
### Usage Example
|
||||
|
|
|
|||
62
docs/docs/api-deprecated/index.mdx
Normal file
|
|
@ -0,0 +1,62 @@
|
|||
---
|
||||
title: Deprecated APIs
|
||||
description: Legacy APIs that are being phased out
|
||||
sidebar_label: Deprecated
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
# Deprecated APIs
|
||||
|
||||
This section contains APIs that are being phased out in favor of newer, more standardized implementations. These APIs are maintained for backward compatibility but are not recommended for new projects.
|
||||
|
||||
:::warning Deprecation Notice
|
||||
These APIs are deprecated and will be removed in future versions. Please migrate to the recommended alternatives listed below.
|
||||
:::
|
||||
|
||||
## Migration Guide
|
||||
|
||||
When using deprecated APIs, please refer to the migration guides provided for each API to understand how to transition to the supported alternatives.
|
||||
|
||||
## Deprecated API List
|
||||
|
||||
### Legacy Inference APIs
|
||||
Some older inference endpoints that have been superseded by the standardized Inference API.
|
||||
|
||||
**Migration Path:** Use the [Inference API](../api/) instead.
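As a rough sketch of what migration looks like, a legacy inference call typically becomes an OpenAI-compatible chat completion against your Llama Stack server. The base URL, API key, and model id below are assumptions that depend on your deployment:

```python
from openai import OpenAI

# Assumes a locally running Llama Stack server; adjust the port and model id for your distribution
client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

response = client.chat.completions.create(
    model="llama-3.1-8b",  # placeholder model id
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)
```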
|
||||
|
||||
### Legacy Vector Operations
|
||||
Older vector database operations that have been replaced by the Vector IO API.
|
||||
|
||||
**Migration Path:** Use the [Vector IO API](../api/) instead.
|
||||
|
||||
### Legacy File Operations
|
||||
Older file management endpoints that have been replaced by the Files API.
|
||||
|
||||
**Migration Path:** Use the [Files API](../api/) instead.
|
||||
|
||||
## Support Timeline
|
||||
|
||||
Deprecated APIs will be supported according to the following timeline:
|
||||
|
||||
- **Current Version**: Full support with deprecation warnings
|
||||
- **Next Major Version**: Limited support with migration notices
|
||||
- **Following Major Version**: Removal of deprecated APIs
|
||||
|
||||
## Getting Help
|
||||
|
||||
If you need assistance migrating from deprecated APIs:
|
||||
|
||||
1. Check the specific migration guides for each API
|
||||
2. Review the [API Reference](../api/) for current alternatives
|
||||
3. Consult the [Community Forums](https://github.com/llamastack/llama-stack/discussions) for migration support
|
||||
4. Open an issue on GitHub for specific migration questions
|
||||
|
||||
## Contributing
|
||||
|
||||
If you find issues with deprecated APIs or have suggestions for improving the migration process, please contribute by:
|
||||
|
||||
1. Opening an issue describing the problem
|
||||
2. Submitting a pull request with improvements
|
||||
3. Updating migration documentation
|
||||
|
||||
For more information on contributing, see our [Contributing Guide](../contributing/).
|
||||
128
docs/docs/api-experimental/index.mdx
Normal file
|
|
@ -0,0 +1,128 @@
|
|||
---
|
||||
title: Experimental APIs
|
||||
description: APIs in development with limited support
|
||||
sidebar_label: Experimental
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
# Experimental APIs
|
||||
|
||||
This section contains APIs that are currently in development and may have limited support or stability. These APIs are available for testing and feedback but should not be used in production environments.
|
||||
|
||||
:::warning Experimental Notice
|
||||
These APIs are experimental and may change without notice. Use with caution and provide feedback to help improve them.
|
||||
:::
|
||||
|
||||
## Current Experimental APIs
|
||||
|
||||
### Batch Inference API
|
||||
Run inference on a dataset of inputs in batch mode for improved efficiency.
|
||||
|
||||
**Status:** In Development
|
||||
**Provider Support:** Limited
|
||||
**Use Case:** Large-scale inference operations
|
||||
|
||||
**Features:**
|
||||
- Batch processing of multiple inputs
|
||||
- Optimized resource utilization
|
||||
- Progress tracking and monitoring
|
||||
|
||||
### Batch Agents API
|
||||
Run agentic workflows on a dataset of inputs in batch mode.
|
||||
|
||||
**Status:** In Development
|
||||
**Provider Support:** Limited
|
||||
**Use Case:** Large-scale agent operations
|
||||
|
||||
**Features:**
|
||||
- Batch agent execution
|
||||
- Parallel processing capabilities
|
||||
- Result aggregation and analysis
|
||||
|
||||
### Synthetic Data Generation API
|
||||
Generate synthetic data for model development and testing.
|
||||
|
||||
**Status:** Early Development
|
||||
**Provider Support:** Very Limited
|
||||
**Use Case:** Training data augmentation
|
||||
|
||||
**Features:**
|
||||
- Automated data generation
|
||||
- Quality control mechanisms
|
||||
- Customizable generation parameters
|
||||
|
||||
### Batches API (OpenAI-compatible)
|
||||
OpenAI-compatible batch management for inference operations.
|
||||
|
||||
**Status:** In Development
|
||||
**Provider Support:** Limited
|
||||
**Use Case:** OpenAI batch processing compatibility
|
||||
|
||||
**Features:**
|
||||
- OpenAI batch API compatibility
|
||||
- Job scheduling and management
|
||||
- Status tracking and monitoring
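Since this API mirrors OpenAI's batch endpoints, submitting a job with the standard OpenAI client might look like the sketch below. Provider support is limited, and the input file id is a placeholder for a previously uploaded JSONL file of requests:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# Submit a batch of chat completion requests described in an uploaded JSONL file (placeholder id)
batch = client.batches.create(
    input_file_id="file-abc123",
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```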
|
||||
|
||||
## Getting Started with Experimental APIs
|
||||
|
||||
### Prerequisites
|
||||
- Llama Stack server running with experimental features enabled
|
||||
- Appropriate provider configurations
|
||||
- Understanding of API limitations
|
||||
|
||||
### Configuration
|
||||
Experimental APIs may require special configuration flags or provider settings. Check the specific API documentation for setup requirements.
|
||||
|
||||
### Usage Guidelines
|
||||
1. **Testing Only**: Use experimental APIs for testing and development only
|
||||
2. **Monitor Changes**: Watch for updates and breaking changes
|
||||
3. **Provide Feedback**: Report issues and suggest improvements
|
||||
4. **Backup Data**: Always backup important data when using experimental features
|
||||
|
||||
## Feedback and Contribution
|
||||
|
||||
We encourage feedback on experimental APIs to help improve them:
|
||||
|
||||
### Reporting Issues
|
||||
- Use GitHub issues with the "experimental" label
|
||||
- Include detailed error messages and reproduction steps
|
||||
- Specify the API version and provider being used
|
||||
|
||||
### Feature Requests
|
||||
- Submit feature requests through GitHub discussions
|
||||
- Provide use cases and expected behavior
|
||||
- Consider contributing implementations
|
||||
|
||||
### Testing
|
||||
- Test experimental APIs in your environment
|
||||
- Report performance issues and optimization opportunities
|
||||
- Share success stories and use cases
|
||||
|
||||
## Migration to Stable APIs
|
||||
|
||||
As experimental APIs mature, they will be moved to the stable API section. When this happens:
|
||||
|
||||
1. **Announcement**: We'll announce the promotion in release notes
|
||||
2. **Migration Guide**: Detailed migration instructions will be provided
|
||||
3. **Deprecation Timeline**: Experimental versions will be deprecated with notice
|
||||
4. **Support**: Full support will be available for stable versions
|
||||
|
||||
## Provider Support
|
||||
|
||||
Experimental APIs may have limited provider support. Check the specific API documentation for:
|
||||
|
||||
- Supported providers
|
||||
- Configuration requirements
|
||||
- Known limitations
|
||||
- Performance characteristics
|
||||
|
||||
## Roadmap
|
||||
|
||||
Experimental APIs are part of our ongoing development roadmap:
|
||||
|
||||
- **Q1 2024**: Batch Inference API stabilization
|
||||
- **Q2 2024**: Batch Agents API improvements
|
||||
- **Q3 2024**: Synthetic Data Generation API expansion
|
||||
- **Q4 2024**: Batches API full OpenAI compatibility
|
||||
|
||||
For the latest updates, follow our [GitHub releases](https://github.com/llamastack/llama-stack/releases) and [roadmap discussions](https://github.com/llamastack/llama-stack/discussions).
|
||||
287
docs/docs/api-openai/index.mdx
Normal file
|
|
@ -0,0 +1,287 @@
|
|||
---
|
||||
title: OpenAI API Compatibility
|
||||
description: OpenAI-compatible APIs and features in Llama Stack
|
||||
sidebar_label: OpenAI Compatibility
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
# OpenAI API Compatibility
|
||||
|
||||
Llama Stack provides comprehensive OpenAI API compatibility, allowing you to use existing OpenAI API clients and tools with Llama Stack providers. This compatibility layer ensures seamless migration and interoperability.
|
||||
|
||||
## Overview
|
||||
|
||||
OpenAI API compatibility in Llama Stack includes:
|
||||
|
||||
- **OpenAI-compatible endpoints** for all major APIs
|
||||
- **Request/response format compatibility** with OpenAI standards
|
||||
- **Authentication and authorization** using OpenAI-style API keys
|
||||
- **Error handling** with OpenAI-compatible error codes and messages
|
||||
- **Rate limiting** and usage tracking compatible with OpenAI patterns
|
||||
|
||||
## Supported OpenAI APIs
|
||||
|
||||
### Chat Completions API
|
||||
OpenAI-compatible chat completions for conversational AI applications.
|
||||
|
||||
**Endpoint:** `/v1/chat/completions`
|
||||
**Compatibility:** Full OpenAI API compatibility
|
||||
**Providers:** All inference providers
|
||||
|
||||
**Features:**
|
||||
- Message-based conversations
|
||||
- System prompts and user messages
|
||||
- Function calling support
|
||||
- Streaming responses
|
||||
- Temperature and other parameter controls
|
||||
|
||||
### Completions API
|
||||
OpenAI-compatible text completions for general text generation.
|
||||
|
||||
**Endpoint:** `/v1/completions`
|
||||
**Compatibility:** Full OpenAI API compatibility
|
||||
**Providers:** All inference providers
|
||||
|
||||
**Features:**
|
||||
- Text completion generation
|
||||
- Prompt engineering support
|
||||
- Customizable parameters
|
||||
- Batch processing capabilities
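A minimal text completion request might look like the following sketch; the model id is a placeholder for whatever your distribution serves:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

completion = client.completions.create(
    model="llama-3.1-8b",  # placeholder model id
    prompt="Write a haiku about distributed systems.",
    max_tokens=64,
)
print(completion.choices[0].text)
```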
|
||||
|
||||
### Embeddings API
|
||||
OpenAI-compatible embeddings for vector operations.
|
||||
|
||||
**Endpoint:** `/v1/embeddings`
|
||||
**Compatibility:** Full OpenAI API compatibility
|
||||
**Providers:** All embedding providers
|
||||
|
||||
**Features:**
|
||||
- Text embedding generation
|
||||
- Multiple embedding models
|
||||
- Batch embedding processing
|
||||
- Vector similarity operations
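For example, a small batch of texts can be embedded with the standard client. This is a sketch, and the embedding model id depends on which embedding providers you have configured:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

result = client.embeddings.create(
    model="all-MiniLM-L6-v2",  # use an embedding model available in your deployment
    input=["Llama Stack provides OpenAI-compatible APIs.", "Embeddings power semantic search."],
)
print(len(result.data), len(result.data[0].embedding))
```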
|
||||
|
||||
### Files API
|
||||
OpenAI-compatible file management for document processing.
|
||||
|
||||
**Endpoint:** `/v1/files`
|
||||
**Compatibility:** Full OpenAI API compatibility
|
||||
**Providers:** Local Filesystem, S3
|
||||
|
||||
**Features:**
|
||||
- File upload and management
|
||||
- Document processing
|
||||
- File metadata tracking
|
||||
- Secure file access
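Uploading a document follows the familiar OpenAI pattern; a sketch, with a placeholder filename:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# Upload a local document for later use with vector stores or assistants-style tooling
with open("report.pdf", "rb") as f:
    uploaded = client.files.create(file=f, purpose="assistants")
print(uploaded.id)
```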
|
||||
|
||||
### Vector Store Files API
|
||||
OpenAI-compatible vector store file operations for RAG applications.
|
||||
|
||||
**Endpoint:** `/v1/vector_stores/{vector_store_id}/files`
|
||||
**Compatibility:** Full OpenAI API compatibility
|
||||
**Providers:** FAISS, SQLite-vec, Milvus, ChromaDB, Qdrant, Weaviate, Postgres (PGVector)
|
||||
|
||||
**Features:**
|
||||
- Automatic document processing
|
||||
- Vector store integration
|
||||
- File chunking and indexing
|
||||
- Search and retrieval operations
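A minimal sketch of attaching an uploaded file to a vector store, mirroring the flow used in the RAG quick start (ids and filename are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

vs = client.vector_stores.create()  # uses the default embedding model if one is configured
with open("notes.txt", "rb") as f:
    uploaded = client.files.create(file=f, purpose="assistants")

# Attach the file so it is chunked, embedded, and indexed for search
client.vector_stores.files.create(vector_store_id=vs.id, file_id=uploaded.id)
```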
|
||||
|
||||
### Batches API
|
||||
OpenAI-compatible batch processing for large-scale operations.
|
||||
|
||||
**Endpoint:** `/v1/batches`
|
||||
**Compatibility:** OpenAI API compatibility (experimental)
|
||||
**Providers:** Limited support
|
||||
|
||||
**Features:**
|
||||
- Batch job creation and management
|
||||
- Progress tracking
|
||||
- Result retrieval
|
||||
- Error handling
|
||||
|
||||
## Migration from OpenAI
|
||||
|
||||
### Step 1: Update API Endpoint
|
||||
Change your API endpoint from OpenAI to your Llama Stack server:
|
||||
|
||||
```python
|
||||
# Before (OpenAI)
|
||||
import openai
|
||||
client = openai.OpenAI(api_key="your-openai-key")
|
||||
|
||||
# After (Llama Stack)
|
||||
import openai
|
||||
client = openai.OpenAI(
|
||||
api_key="your-llama-stack-key",
|
||||
base_url="http://localhost:8000/v1" # Your Llama Stack server
|
||||
)
|
||||
```
|
||||
|
||||
### Step 2: Configure Providers
|
||||
Set up your preferred providers in the Llama Stack configuration:
|
||||
|
||||
```yaml
|
||||
# stack-config.yaml
|
||||
inference:
|
||||
providers:
|
||||
- name: "meta-reference"
|
||||
type: "inline"
|
||||
model: "llama-3.1-8b"
|
||||
```
|
||||
|
||||
### Step 3: Test Compatibility
|
||||
Verify that your existing code works with Llama Stack:
|
||||
|
||||
```python
|
||||
# Test chat completions
|
||||
response = client.chat.completions.create(
|
||||
model="llama-3.1-8b",
|
||||
messages=[
|
||||
{"role": "user", "content": "Hello, world!"}
|
||||
]
|
||||
)
|
||||
print(response.choices[0].message.content)
|
||||
```
|
||||
|
||||
## Provider-Specific Features
|
||||
|
||||
### Meta Reference Provider
|
||||
- Full OpenAI API compatibility
|
||||
- Local model execution
|
||||
- Custom model support
|
||||
|
||||
### Remote Providers
|
||||
- OpenAI API compatibility
|
||||
- Cloud-based execution
|
||||
- Scalable infrastructure
|
||||
|
||||
### Vector Store Providers
|
||||
- OpenAI vector store API compatibility
|
||||
- Automatic document processing
|
||||
- Advanced search capabilities
|
||||
|
||||
## Authentication
|
||||
|
||||
Llama Stack supports OpenAI-style authentication:
|
||||
|
||||
### API Key Authentication
|
||||
```python
|
||||
client = openai.OpenAI(
|
||||
api_key="your-api-key",
|
||||
base_url="http://localhost:8000/v1"
|
||||
)
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
```bash
|
||||
export OPENAI_API_KEY="your-api-key"
|
||||
export OPENAI_BASE_URL="http://localhost:8000/v1"
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
Llama Stack provides OpenAI-compatible error responses:
|
||||
|
||||
```python
|
||||
try:
|
||||
response = client.chat.completions.create(...)
|
||||
except openai.APIError as e:
|
||||
print(f"API Error: {e}")
|
||||
except openai.RateLimitError as e:
|
||||
print(f"Rate Limit Error: {e}")
|
||||
except openai.APIConnectionError as e:
|
||||
print(f"Connection Error: {e}")
|
||||
```
|
||||
|
||||
## Rate Limiting
|
||||
|
||||
OpenAI-compatible rate limiting is supported:
|
||||
|
||||
- **Requests per minute** limits
|
||||
- **Tokens per minute** limits
|
||||
- **Concurrent request** limits
|
||||
- **Usage tracking** and monitoring
|
||||
|
||||
## Monitoring and Observability
|
||||
|
||||
Track your API usage with OpenAI-compatible monitoring:
|
||||
|
||||
- **Request/response logging**
|
||||
- **Usage metrics** and analytics
|
||||
- **Performance monitoring**
|
||||
- **Error tracking** and alerting
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Provider Selection
|
||||
Choose providers based on your requirements:
|
||||
- **Local development**: Meta Reference, Ollama
|
||||
- **Production**: Cloud providers (Fireworks, Together, NVIDIA)
|
||||
- **Specialized use cases**: Custom providers
|
||||
|
||||
### 2. Model Configuration
|
||||
Configure models for optimal performance:
|
||||
- **Model selection** based on task requirements
|
||||
- **Parameter tuning** for specific use cases
|
||||
- **Resource allocation** for performance
|
||||
|
||||
### 3. Error Handling
|
||||
Implement robust error handling:
|
||||
- **Retry logic** for transient failures
|
||||
- **Fallback providers** for high availability
|
||||
- **Monitoring** and alerting for issues
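One simple approach, sketched below, is to lean on the client's built-in retry support and catch connection errors explicitly so a fallback provider or alert can take over; the model id is a placeholder:

```python
import openai

# The OpenAI client retries transient failures automatically; max_retries tunes that behavior
client = openai.OpenAI(
    base_url="http://localhost:8321/v1",
    api_key="none",
    max_retries=3,
)

try:
    response = client.chat.completions.create(
        model="llama-3.1-8b",  # placeholder model id
        messages=[{"role": "user", "content": "ping"}],
    )
except openai.APIConnectionError:
    # Switch to a fallback provider or raise an alert here
    raise
```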
|
||||
|
||||
### 4. Security
|
||||
Follow security best practices:
|
||||
- **API key management** and rotation
|
||||
- **Access control** and authorization
|
||||
- **Data privacy** and compliance
|
||||
|
||||
## Implementation Examples
|
||||
|
||||
For detailed code examples and implementation guides, see our [OpenAI Implementation Guide](../providers/openai.mdx).
|
||||
|
||||
## Known Limitations
|
||||
|
||||
### Responses API Limitations
|
||||
The Responses API is still in active development. For detailed information about current limitations and implementation status, see our [OpenAI Responses API Limitations](../providers/openai_responses_limitations.mdx).
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
**Connection Errors**
|
||||
- Verify server is running
|
||||
- Check network connectivity
|
||||
- Validate API endpoint URL
|
||||
|
||||
**Authentication Errors**
|
||||
- Verify API key is correct
|
||||
- Check key permissions
|
||||
- Ensure proper authentication headers
|
||||
|
||||
**Model Errors**
|
||||
- Verify model is available
|
||||
- Check provider configuration
|
||||
- Validate model parameters
|
||||
|
||||
### Getting Help
|
||||
|
||||
For OpenAI compatibility issues:
|
||||
|
||||
1. **Check Documentation**: Review provider-specific documentation
|
||||
2. **Community Support**: Ask questions in GitHub discussions
|
||||
3. **Issue Reporting**: Open GitHub issues for bugs
|
||||
4. **Professional Support**: Contact support for enterprise issues
|
||||
|
||||
## Roadmap
|
||||
|
||||
Upcoming OpenAI compatibility features:
|
||||
|
||||
- **Enhanced batch processing** support
|
||||
- **Advanced function calling** capabilities
|
||||
- **Improved error handling** and diagnostics
|
||||
- **Performance optimizations** for large-scale deployments
|
||||
|
||||
For the latest updates, follow our [GitHub releases](https://github.com/llamastack/llama-stack/releases) and [roadmap discussions](https://github.com/llamastack/llama-stack/discussions).
|
||||
144
docs/docs/api/index.mdx
Normal file
|
|
@ -0,0 +1,144 @@
|
|||
---
|
||||
title: API Reference
|
||||
description: Complete reference for Llama Stack APIs
|
||||
sidebar_label: Overview
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
# API Reference
|
||||
|
||||
Llama Stack provides a comprehensive set of APIs for building generative AI applications. All APIs follow OpenAI-compatible standards and can be used interchangeably across different providers.
|
||||
|
||||
## Core APIs
|
||||
|
||||
### Inference API
|
||||
Run inference with Large Language Models (LLMs) and embedding models.
|
||||
|
||||
**Supported Providers:**
|
||||
- Meta Reference (Single Node)
|
||||
- Ollama (Single Node)
|
||||
- Fireworks (Hosted)
|
||||
- Together (Hosted)
|
||||
- NVIDIA NIM (Hosted and Single Node)
|
||||
- vLLM (Hosted and Single Node)
|
||||
- TGI (Hosted and Single Node)
|
||||
- AWS Bedrock (Hosted)
|
||||
- Cerebras (Hosted)
|
||||
- Groq (Hosted)
|
||||
- SambaNova (Hosted)
|
||||
- PyTorch ExecuTorch (On-device iOS, Android)
|
||||
- OpenAI (Hosted)
|
||||
- Anthropic (Hosted)
|
||||
- Gemini (Hosted)
|
||||
- WatsonX (Hosted)
|
||||
|
||||
### Agents API
|
||||
Run multi-step agentic workflows with LLMs, including tool usage, memory (RAG), and complex reasoning.
|
||||
|
||||
**Supported Providers:**
|
||||
- Meta Reference (Single Node)
|
||||
- Fireworks (Hosted)
|
||||
- Together (Hosted)
|
||||
- PyTorch ExecuTorch (On-device iOS)
|
||||
|
||||
### Vector IO API
|
||||
Perform operations on vector stores, including adding documents, searching, and deleting documents.
|
||||
|
||||
**Supported Providers:**
|
||||
- FAISS (Single Node)
|
||||
- SQLite-Vec (Single Node)
|
||||
- Chroma (Hosted and Single Node)
|
||||
- Milvus (Hosted and Single Node)
|
||||
- Postgres (PGVector) (Hosted and Single Node)
|
||||
- Weaviate (Hosted)
|
||||
- Qdrant (Hosted and Single Node)
|
||||
|
||||
### Files API (OpenAI-compatible)
|
||||
Manage file uploads, storage, and retrieval with OpenAI-compatible endpoints.
|
||||
|
||||
**Supported Providers:**
|
||||
- Local Filesystem (Single Node)
|
||||
- S3 (Hosted)
|
||||
|
||||
### Vector Store Files API (OpenAI-compatible)
|
||||
Integrate file operations with vector stores for automatic document processing and search.
|
||||
|
||||
**Supported Providers:**
|
||||
- FAISS (Single Node)
|
||||
- SQLite-vec (Single Node)
|
||||
- Milvus (Single Node)
|
||||
- ChromaDB (Hosted and Single Node)
|
||||
- Qdrant (Hosted and Single Node)
|
||||
- Weaviate (Hosted)
|
||||
- Postgres (PGVector) (Hosted and Single Node)
|
||||
|
||||
### Safety API
|
||||
Apply safety policies to outputs at a systems level, not just model level.
|
||||
|
||||
**Supported Providers:**
|
||||
- Llama Guard (Depends on Inference Provider)
|
||||
- Prompt Guard (Single Node)
|
||||
- Code Scanner (Single Node)
|
||||
- AWS Bedrock (Hosted)
|
||||
|
||||
### Post Training API
|
||||
Fine-tune models for specific use cases and domains.
|
||||
|
||||
**Supported Providers:**
|
||||
- Meta Reference (Single Node)
|
||||
- HuggingFace (Single Node)
|
||||
- TorchTune (Single Node)
|
||||
- NVIDIA NEMO (Hosted)
|
||||
|
||||
### Eval API
|
||||
Generate outputs and perform scoring to evaluate system performance.
|
||||
|
||||
**Supported Providers:**
|
||||
- Meta Reference (Single Node)
|
||||
- NVIDIA NEMO (Hosted)
|
||||
|
||||
### Telemetry API
|
||||
Collect telemetry data from the system for monitoring and observability.
|
||||
|
||||
**Supported Providers:**
|
||||
- Meta Reference (Single Node)
|
||||
|
||||
### Tool Runtime API
|
||||
Interact with various tools and protocols to extend LLM capabilities.
|
||||
|
||||
**Supported Providers:**
|
||||
- Brave Search (Hosted)
|
||||
- RAG Runtime (Single Node)
|
||||
|
||||
## API Compatibility
|
||||
|
||||
All Llama Stack APIs are designed to be OpenAI-compatible, allowing you to:
|
||||
- Use existing OpenAI API clients and tools
|
||||
- Migrate from OpenAI to other providers seamlessly
|
||||
- Maintain consistent API contracts across different environments
|
||||
|
||||
## Getting Started
|
||||
|
||||
To get started with Llama Stack APIs:
|
||||
|
||||
1. **Choose a Distribution**: Select a pre-configured distribution that matches your environment
|
||||
2. **Configure Providers**: Set up the providers you want to use for each API
|
||||
3. **Start the Server**: Launch the Llama Stack server with your configuration
|
||||
4. **Use the APIs**: Make requests to the API endpoints using your preferred client
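Once the server is running, a first request can be as small as the sketch below; the port is the default used elsewhere in these docs, and the call assumes the `llama-stack-client` package is installed:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# List the models your distribution exposes before picking one for inference
models = client.models.list()
print([m.identifier for m in models])
```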
|
||||
|
||||
For detailed setup instructions, see our [Getting Started Guide](../getting_started/quickstart).
|
||||
|
||||
## Provider Details
|
||||
|
||||
For complete provider compatibility and setup instructions, see our [Providers Documentation](../providers/).
|
||||
|
||||
## API Stability
|
||||
|
||||
Llama Stack APIs are organized by stability level:
|
||||
- **[Stable APIs](./index.mdx)** - Production-ready APIs with full support
|
||||
- **[Experimental APIs](../api-experimental/)** - APIs in development with limited support
|
||||
- **[Deprecated APIs](../api-deprecated/)** - Legacy APIs being phased out
|
||||
|
||||
## OpenAI Integration
|
||||
|
||||
For specific OpenAI API compatibility features, see our [OpenAI Compatibility Guide](../api-openai/).
|
||||
|
|
@ -35,9 +35,6 @@ Here are the key topics that will help you build effective AI applications:
|
|||
- **[Telemetry](./telemetry.mdx)** - Monitor and analyze your agents' performance and behavior
|
||||
- **[Safety](./safety.mdx)** - Implement guardrails and safety measures to ensure responsible AI behavior
|
||||
|
||||
### 🎮 **Interactive Development**
|
||||
- **[Playground](./playground.mdx)** - Interactive environment for testing and developing applications
|
||||
|
||||
## Application Patterns
|
||||
|
||||
### 🤖 **Conversational Agents**
|
||||
|
|
|
|||
|
|
@ -1,299 +1,87 @@
|
|||
---
|
||||
title: Admin UI & Chat Playground
description: Web-based admin interface and chat playground for Llama Stack
|
||||
sidebar_label: Playground
|
||||
sidebar_position: 10
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
# Admin UI & Chat Playground
|
||||
|
||||
|
||||
The Llama Stack UI provides a comprehensive web-based admin interface for managing your Llama Stack server, with an integrated chat playground for interactive testing. This admin interface is the primary way to monitor, manage, and debug your Llama Stack applications.
|
||||
|
||||
:::note[Experimental Feature]
|
||||
The Llama Stack Playground is currently experimental and subject to change. We welcome feedback and contributions to help improve it.
|
||||
:::
|
||||
## Quick Start

Launch the admin UI with:

```bash
npx llama-stack-ui
```

Then visit `http://localhost:8322` to access the interface.

The Llama Stack Playground is a simple interface that aims to:
- **Showcase capabilities and concepts** of Llama Stack in an interactive environment
- **Demo end-to-end application code** to help users get started building their own applications
- **Provide a UI** to help users inspect and understand Llama Stack API providers and resources
|
||||
|
||||
## Key Features
|
||||
|
||||
### Interactive Playground Pages
|
||||
|
||||
The playground provides interactive pages for users to explore Llama Stack API capabilities:
|
||||
|
||||
#### Chatbot Interface
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/8d2ef802-5812-4a28-96e1-316038c84cbf" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="chat" label="Chat">
|
||||
|
||||
**Simple Chat Interface**
|
||||
- Chat directly with Llama models through an intuitive interface
|
||||
- Uses the `/chat/completions` streaming API under the hood
|
||||
- Real-time message streaming for responsive interactions
|
||||
- Perfect for testing model capabilities and prompt engineering
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rag" label="RAG Chat">
|
||||
|
||||
**Document-Aware Conversations**
|
||||
- Upload documents to create memory banks
|
||||
- Chat with a RAG-enabled agent that can query your documents
|
||||
- Uses Llama Stack's `/agents` API to create and manage RAG sessions
|
||||
- Ideal for exploring knowledge-enhanced AI applications
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Evaluation Interface
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/6cc1659f-eba4-49ca-a0a5-7c243557b4f5" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="scoring" label="Scoring Evaluations">
|
||||
|
||||
**Custom Dataset Evaluation**
|
||||
- Upload your own evaluation datasets
|
||||
- Run evaluations using available scoring functions
|
||||
- Uses Llama Stack's `/scoring` API for flexible evaluation workflows
|
||||
- Great for testing application performance on custom metrics
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="benchmarks" label="Benchmark Evaluations">
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%', marginBottom: '1rem'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/345845c7-2a2b-4095-960a-9ae40f6a93cf" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
|
||||
**Pre-registered Evaluation Tasks**
|
||||
- Evaluate models or agents on pre-defined tasks
|
||||
- Uses Llama Stack's `/eval` API for comprehensive evaluation
|
||||
- Combines datasets and scoring functions for standardized testing
|
||||
|
||||
**Setup Requirements:**
Register evaluation datasets and benchmarks first:

```bash
# Register evaluation dataset
llama-stack-client datasets register \
  --dataset-id "mmlu" \
  --provider-id "huggingface" \
  --url "https://huggingface.co/datasets/llamastack/evals" \
  --metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
  --schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string"}, "chat_completion_input": {"type": "string"}}'

# Register benchmark task
llama-stack-client benchmarks register \
  --eval-task-id meta-reference-mmlu \
  --provider-id meta-reference \
  --dataset-id mmlu \
  --scoring-functions basic::regex_parser_multiple_choice_answer
```

</TabItem>
</Tabs>
|
||||
|
||||
#### Inspection Interface
|
||||
## Admin Interface Features
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/01d52b2d-92af-4e3a-b623-a9b8ba22ba99" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
The Llama Stack UI is organized into three main sections:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="providers" label="API Providers">
|
||||
### 🎯 Create
|
||||
**Chat Playground** - Interactive testing environment
|
||||
- Real-time chat interface for testing agents and models
|
||||
- Multi-turn conversations with tool calling support
|
||||
- Agent SDK integration (will be migrated to Responses API)
|
||||
- Custom system prompts and model parameter adjustment
|
||||
|
||||
**Provider Management**
|
||||
- Inspect available Llama Stack API providers
|
||||
- View provider configurations and capabilities
|
||||
- Uses the `/providers` API for real-time provider information
|
||||
- Essential for understanding your deployment's capabilities
|
||||
### 📊 Manage
|
||||
**Logs & Resource Management** - Monitor and manage your stack
|
||||
- **Responses Logs**: View and analyze agent responses and interactions
|
||||
- **Chat Completions Logs**: Monitor chat completion requests and responses
|
||||
- **Vector Stores**: Create, manage, and monitor vector databases for RAG workflows
|
||||
- **Prompts**: Full CRUD operations for prompt templates and management
|
||||
- **Files**: Forthcoming file management capabilities
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="resources" label="API Resources">
|
||||
## Key Capabilities for Application Development
|
||||
|
||||
**Resource Exploration**
|
||||
- Inspect Llama Stack API resources including:
|
||||
- **Models**: Available language models
|
||||
- **Datasets**: Registered evaluation datasets
|
||||
- **Memory Banks**: Vector databases and knowledge stores
|
||||
- **Benchmarks**: Evaluation tasks and scoring functions
|
||||
- **Shields**: Safety and content moderation tools
|
||||
- Uses `/<resources>/list` APIs for comprehensive resource visibility
|
||||
- For detailed information about resources, see [Core Concepts](/docs/concepts)
|
||||
### Real-time Monitoring
|
||||
- **Response Tracking**: Monitor all agent responses and tool calls
|
||||
- **Completion Analysis**: View chat completion performance and patterns
|
||||
- **Vector Store Activity**: Track RAG operations and document processing
|
||||
- **Prompt Usage**: Analyze prompt template performance
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
### Resource Management
|
||||
- **Vector Store CRUD**: Create, update, and delete vector databases
|
||||
- **Prompt Library**: Organize and version control your prompts
|
||||
- **File Operations**: Manage documents and assets (forthcoming)
|
||||
|
||||
### Interactive Testing
|
||||
- **Chat Playground**: Test conversational flows before production deployment
|
||||
- **Agent Prototyping**: Validate agent behaviors and tool integrations
|
||||
|
||||
## Development Workflow Integration
|
||||
|
||||
The admin UI supports your development lifecycle:
|
||||
|
||||
1. **Development**: Use chat playground to prototype and test features
|
||||
2. **Monitoring**: Track system performance through logs and metrics
|
||||
3. **Management**: Organize prompts, vector stores, and other resources
|
||||
4. **Debugging**: Analyze logs to identify and resolve issues
|
||||
|
||||
## Architecture Notes
|
||||
|
||||
- **Current**: Chat playground uses Agents SDK
|
||||
- **Future**: Migration to Responses API for improved performance and consistency
|
||||
- **Admin Focus**: Primary emphasis on monitoring, logging, and resource management
|
||||
|
||||
## Getting Started
|
||||
|
||||
### Quick Start Guide
|
||||
1. **Launch the UI**: Run `npx llama-stack-ui`
|
||||
2. **Explore Logs**: Start with Responses and Chat Completions logs to understand your system activity
|
||||
3. **Test in Playground**: Use the chat interface to validate your agent configurations
|
||||
4. **Manage Resources**: Create vector stores and organize prompts through the UI
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="setup" label="Setup">
|
||||
For detailed setup and configuration, see the [Llama Stack UI documentation](/docs/distributions/llama_stack_ui).
|
||||
|
||||
**1. Start the Llama Stack API Server**
|
||||
|
||||
```bash
|
||||
# Build and run a distribution (example: together)
|
||||
llama stack build --distro together --image-type venv
|
||||
llama stack run together
|
||||
```
|
||||
|
||||
**2. Start the Streamlit UI**
|
||||
|
||||
```bash
|
||||
# Launch the playground interface
|
||||
uv run --with ".[ui]" streamlit run llama_stack.core/ui/app.py
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="usage" label="Usage Tips">
|
||||
|
||||
**Making the Most of the Playground:**
|
||||
|
||||
- **Start with Chat**: Test basic model interactions and prompt engineering
|
||||
- **Explore RAG**: Upload sample documents to see knowledge-enhanced responses
|
||||
- **Try Evaluations**: Use the scoring interface to understand evaluation metrics
|
||||
- **Inspect Resources**: Check what providers and resources are available
|
||||
- **Experiment with Settings**: Adjust parameters to see how they affect results
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Available Distributions
|
||||
|
||||
The playground works with any Llama Stack distribution. Popular options include:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="together" label="Together AI">
|
||||
|
||||
```bash
|
||||
llama stack build --distro together --image-type venv
|
||||
llama stack run together
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Cloud-hosted models
|
||||
- Fast inference
|
||||
- Multiple model options
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="ollama" label="Ollama (Local)">
|
||||
|
||||
```bash
|
||||
llama stack build --distro ollama --image-type venv
|
||||
llama stack run ollama
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Local model execution
|
||||
- Privacy-focused
|
||||
- No internet required
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="meta-reference" label="Meta Reference">
|
||||
|
||||
```bash
|
||||
llama stack build --distro meta-reference --image-type venv
|
||||
llama stack run meta-reference
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Reference implementation
|
||||
- All API features available
|
||||
- Best for development
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Use Cases & Examples
|
||||
|
||||
### Educational Use Cases
|
||||
- **Learning Llama Stack**: Hands-on exploration of API capabilities
|
||||
- **Prompt Engineering**: Interactive testing of different prompting strategies
|
||||
- **RAG Experimentation**: Understanding how document retrieval affects responses
|
||||
- **Evaluation Understanding**: See how different metrics evaluate model performance
|
||||
|
||||
### Development Use Cases
|
||||
- **Prototype Testing**: Quick validation of application concepts
|
||||
- **API Exploration**: Understanding available endpoints and parameters
|
||||
- **Integration Planning**: Seeing how different components work together
|
||||
- **Demo Creation**: Showcasing Llama Stack capabilities to stakeholders
|
||||
|
||||
### Research Use Cases
|
||||
- **Model Comparison**: Side-by-side testing of different models
|
||||
- **Evaluation Design**: Understanding how scoring functions work
|
||||
- **Safety Testing**: Exploring shield effectiveness with different inputs
|
||||
- **Performance Analysis**: Measuring model behavior across different scenarios
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 🚀 **Getting Started**
|
||||
- Begin with simple chat interactions to understand basic functionality
|
||||
- Gradually explore more advanced features like RAG and evaluations
|
||||
- Use the inspection tools to understand your deployment's capabilities
|
||||
|
||||
### 🔧 **Development Workflow**
|
||||
- Use the playground to prototype before writing application code
|
||||
- Test different parameter settings interactively
|
||||
- Validate evaluation approaches before implementing them programmatically
|
||||
|
||||
### 📊 **Evaluation & Testing**
|
||||
- Start with simple scoring functions before trying complex evaluations
|
||||
- Use the playground to understand evaluation results before automation
|
||||
- Test safety features with various input types
|
||||
|
||||
### 🎯 **Production Preparation**
|
||||
- Use playground insights to inform your production API usage
|
||||
- Test edge cases and error conditions interactively
|
||||
- Validate resource configurations before deployment
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Getting Started Guide](../getting_started/quickstart)** - Complete setup and introduction
|
||||
- **[Core Concepts](/docs/concepts)** - Understanding Llama Stack fundamentals
|
||||
- **[Agents](./agent)** - Building intelligent agents
|
||||
- **[RAG (Retrieval Augmented Generation)](./rag)** - Knowledge-enhanced applications
|
||||
- **[Evaluations](./evals)** - Comprehensive evaluation framework
|
||||
- **[API Reference](/docs/api/llama-stack-specification)** - Complete API documentation
|
||||
## Next Steps

- Set up your [first agent](/docs/building_applications/agent)
|
||||
- Implement [RAG functionality](/docs/building_applications/rag)
|
||||
- Add [evaluation metrics](/docs/building_applications/evals)
|
||||
- Configure [safety measures](/docs/building_applications/safety)
|
||||
|
|
|
|||
|
|
@ -10,358 +10,114 @@ import TabItem from '@theme/TabItem';
|
|||
|
||||
# Retrieval Augmented Generation (RAG)
|
||||
|
||||
RAG enables your applications to reference and recall information from external documents. Llama Stack makes Agentic RAG available through OpenAI's Responses API.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Start the Server
|
||||
|
||||
In one terminal, start the Llama Stack server:
|
||||
|
||||
```bash
|
||||
llama stack list-deps starter | xargs -L1 uv pip install
|
||||
llama stack run starter
|
||||
```
|
||||
|
||||
### 2. Connect with OpenAI Client
|
||||
|
||||
In another terminal, use the standard OpenAI client with the Responses API:
|
||||
|
||||
```python
|
||||
import io, requests
|
||||
from openai import OpenAI
|
||||
|
||||
url = "https://www.paulgraham.com/greatwork.html"
|
||||
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")
|
||||
|
||||
# Create vector store - auto-detects default embedding model
|
||||
vs = client.vector_stores.create()
|
||||
|
||||
response = requests.get(url)
|
||||
pseudo_file = io.BytesIO(str(response.content).encode('utf-8'))
|
||||
file_id = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants").id
|
||||
client.vector_stores.files.create(vector_store_id=vs.id, file_id=file_id)
|
||||
|
||||
resp = client.responses.create(
|
||||
model="gpt-4o",
|
||||
input="How do you do great work? Use the existing knowledge_search tool.",
|
||||
tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
|
||||
include=["file_search_call.results"],
|
||||
)
|
||||
|
||||
print(resp.output[-1].content[-1].text)
|
||||
```
|
||||
Which should give output like:
|
||||
```
|
||||
Doing great work is about more than just hard work and ambition; it involves combining several elements:
|
||||
|
||||
1. **Pursue What Excites You**: Engage in projects that are both ambitious and exciting to you. It's important to work on something you have a natural aptitude for and a deep interest in.
|
||||
|
||||
2. **Explore and Discover**: Great work often feels like a blend of discovery and creation. Focus on seeing possibilities and let ideas take their natural shape, rather than just executing a plan.
|
||||
|
||||
3. **Be Bold Yet Flexible**: Take bold steps in your work without over-planning. An adaptable approach that evolves with new ideas can often lead to breakthroughs.
|
||||
|
||||
4. **Work on Your Own Projects**: Develop a habit of working on projects of your own choosing, as these often lead to great achievements. These should be projects you find exciting and that challenge you intellectually.
|
||||
|
||||
5. **Be Earnest and Authentic**: Approach your work with earnestness and authenticity. Trying to impress others with affectation can be counterproductive, as genuine effort and intellectual honesty lead to better work outcomes.
|
||||
|
||||
6. **Build a Supportive Environment**: Work alongside great colleagues who inspire you and enhance your work. Surrounding yourself with motivating individuals creates a fertile environment for great work.
|
||||
|
||||
7. **Maintain High Morale**: High morale significantly impacts your ability to do great work. Stay optimistic and protect your mental well-being to maintain progress and momentum.
|
||||
|
||||
8. **Balance**: While hard work is essential, overworking can lead to diminishing returns. Balance periods of intensive work with rest to sustain productivity over time.
|
||||
|
||||
This approach shows that great work is less about following a strict formula and more about aligning your interests, ambition, and environment to foster creativity and innovation.
|
||||
```
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
Llama Stack provides OpenAI-compatible RAG capabilities through:

- **Vector Stores API**: OpenAI-compatible vector storage with automatic embedding model detection
- **Files API**: Document upload and processing using OpenAI's file format
- **Responses API**: Enhanced chat completions with agentic tool calling via file search

Under the hood, Llama Stack organizes the APIs that enable RAG into three layers:

1. **Lower-Level APIs**: Deal with raw storage and retrieval. These include Vector IO, KeyValue IO (coming soon) and Relational IO (also coming soon)
2. **RAG Tool**: A first-class tool as part of the [Tools API](./tools) that allows you to ingest documents (from URLs, files, etc) with various chunking strategies and query them smartly
3. **Agents API**: The top-level [Agents API](./agent) that allows you to create agents that can use the tools to answer questions, perform tasks, and more
|
||||
|
||||

|
||||
## Configuring Default Embedding Models
|
||||
|
||||
The RAG system uses lower-level storage for different types of data:
|
||||
- **Vector IO**: For semantic search and retrieval
|
||||
- **Key-Value and Relational IO**: For structured data storage
|
||||
To enable automatic vector store creation without specifying embedding models, configure a default embedding model in your run.yaml like so:
|
||||
|
||||
:::info[Future Storage Types]
|
||||
We may add more storage types like Graph IO in the future.
|
||||
:::
|
||||
|
||||
## Setting up Vector Databases
|
||||
|
||||
For this guide, we will use [Ollama](https://ollama.com/) as the inference provider. Ollama is an LLM runtime that allows you to run Llama models locally.
|
||||
|
||||
Here's how to set up a vector database for RAG:
|
||||
|
||||
```python
|
||||
# Create HTTP client
|
||||
import os
|
||||
from llama_stack_client import LlamaStackClient
|
||||
|
||||
client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")
|
||||
|
||||
# Register a vector database
|
||||
vector_db_id = "my_documents"
|
||||
response = client.vector_dbs.register(
|
||||
vector_db_id=vector_db_id,
|
||||
embedding_model="all-MiniLM-L6-v2",
|
||||
embedding_dimension=384,
|
||||
provider_id="faiss",
|
||||
)
|
||||
```yaml
|
||||
vector_stores:
|
||||
default_provider_id: faiss
|
||||
default_embedding_model:
|
||||
provider_id: sentence-transformers
|
||||
model_id: nomic-ai/nomic-embed-text-v1.5
|
||||
```
|
||||
|
||||
## Document Ingestion
|
||||
With this configuration:
|
||||
- `client.vector_stores.create()` works without requiring embedding model or provider parameters
|
||||
- The system automatically uses the default vector store provider (`faiss`) when multiple providers are available
|
||||
- The system automatically uses the default embedding model (`sentence-transformers/nomic-ai/nomic-embed-text-v1.5`) for any newly created vector store
|
||||
- The `default_provider_id` specifies which vector storage backend to use
|
||||
- The `default_embedding_model` specifies both the inference provider and model for embeddings
|
||||
|
||||
## Vector Store Operations

### Creating Vector Stores

You can create vector stores with automatic or explicit embedding model selection:

```python
# Automatic - uses default configured embedding model and vector store provider
vs = client.vector_stores.create()

# Explicit - specify embedding model and/or provider when you need specific ones
vs = client.vector_stores.create(
    extra_body={
        "provider_id": "faiss",  # Optional: specify vector store provider
        "embedding_model": "sentence-transformers/nomic-ai/nomic-embed-text-v1.5",
        "embedding_dimension": 768,  # Optional: will be auto-detected if not provided
    }
)
```

## Document Ingestion

You can ingest documents into the vector database using two methods: directly inserting pre-chunked documents or using the RAG Tool.

### Direct Document Insertion

<Tabs>
<TabItem value="basic" label="Basic Insertion">

```python
# You can insert a pre-chunked document directly into the vector db
chunks = [
    {
        "content": "Your document text here",
        "mime_type": "text/plain",
        "metadata": {
            "document_id": "doc1",
            "author": "Jane Doe",
        },
    },
]
client.vector_io.insert(vector_db_id=vector_db_id, chunks=chunks)
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="embeddings" label="With Precomputed Embeddings">
|
||||
|
||||
If you decide to precompute embeddings for your documents, you can insert them directly into the vector database by including the embedding vectors in the chunk data. This is useful if you have a separate embedding service or if you want to customize the ingestion process.
|
||||
|
||||
```python
|
||||
chunks_with_embeddings = [
|
||||
{
|
||||
"content": "First chunk of text",
|
||||
"mime_type": "text/plain",
|
||||
"embedding": [0.1, 0.2, 0.3, ...], # Your precomputed embedding vector
|
||||
"metadata": {"document_id": "doc1", "section": "introduction"},
|
||||
},
|
||||
{
|
||||
"content": "Second chunk of text",
|
||||
"mime_type": "text/plain",
|
||||
"embedding": [0.2, 0.3, 0.4, ...], # Your precomputed embedding vector
|
||||
"metadata": {"document_id": "doc1", "section": "methodology"},
|
||||
},
|
||||
]
|
||||
client.vector_io.insert(vector_db_id=vector_db_id, chunks=chunks_with_embeddings)
|
||||
```
|
||||
|
||||
:::warning[Embedding Dimensions]
|
||||
When providing precomputed embeddings, ensure the embedding dimension matches the `embedding_dimension` specified when registering the vector database.
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Document Retrieval
|
||||
|
||||
You can query the vector database to retrieve documents based on their embeddings.
|
||||
|
||||
```python
# You can then query for these chunks
chunks_response = client.vector_io.query(
    vector_db_id=vector_db_id,
    query="What do you know about...",
)
```
|
||||
|
||||
## Using the RAG Tool
|
||||
|
||||
:::danger[Deprecation Notice]
|
||||
The RAG Tool is being deprecated in favor of directly using the OpenAI-compatible Search API. We recommend migrating to the OpenAI APIs for better compatibility and future support.
|
||||
:::
|
||||
|
||||
A better way to ingest documents is to use the RAG Tool. This tool allows you to ingest documents from URLs, files, etc. and automatically chunks them into smaller pieces. More examples for how to format a RAGDocument can be found in the [appendix](#more-ragdocument-examples).
|
||||
|
||||
### OpenAI API Integration & Migration
|
||||
|
||||
The RAG tool has been updated to use OpenAI-compatible APIs. This provides several benefits:
|
||||
|
||||
- **Files API Integration**: Documents are now uploaded using OpenAI's file upload endpoints
|
||||
- **Vector Stores API**: Vector storage operations use OpenAI's vector store format with configurable chunking strategies
|
||||
- **Error Resilience**: When processing multiple documents, individual failures are logged but don't crash the operation. Failed documents are skipped while successful ones continue processing.
|
||||
|
||||
### Migration Path
|
||||
|
||||
We recommend migrating to the OpenAI-compatible Search API for:
|
||||
|
||||
1. **Better OpenAI Ecosystem Integration**: Direct compatibility with OpenAI tools and workflows including the Responses API
|
||||
2. **Future-Proof**: Continued support and feature development
|
||||
3. **Full OpenAI Compatibility**: Vector Stores, Files, and Search APIs are fully compatible with OpenAI's Responses API
|
||||
|
||||
The OpenAI APIs are used under the hood, so you can continue to use your existing RAG Tool code with minimal changes. However, we recommend updating your code to use the new OpenAI-compatible APIs for better long-term support. If any documents fail to process, they will be logged in the response but will not cause the entire operation to fail.
|
||||
|
||||
### RAG Tool Example
|
||||
|
||||
```python
|
||||
from llama_stack_client import RAGDocument
|
||||
|
||||
urls = ["memory_optimizations.rst", "chat.rst", "llama3.rst"]
|
||||
documents = [
|
||||
RAGDocument(
|
||||
document_id=f"num-{i}",
|
||||
content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
|
||||
mime_type="text/plain",
|
||||
metadata={},
|
||||
)
|
||||
for i, url in enumerate(urls)
|
||||
]
|
||||
|
||||
client.tool_runtime.rag_tool.insert(
|
||||
documents=documents,
|
||||
vector_db_id=vector_db_id,
|
||||
chunk_size_in_tokens=512,
|
||||
)
|
||||
|
||||
# Query documents
|
||||
results = client.tool_runtime.rag_tool.query(
|
||||
vector_db_ids=[vector_db_id],
|
||||
content="What do you know about...",
|
||||
)
|
||||
```
|
||||
|
||||
### Custom Context Configuration
|
||||
|
||||
You can configure how the RAG tool adds metadata to the context if you find it useful for your application:
|
||||
|
||||
```python
|
||||
# Query documents with custom template
|
||||
results = client.tool_runtime.rag_tool.query(
|
||||
vector_db_ids=[vector_db_id],
|
||||
content="What do you know about...",
|
||||
query_config={
|
||||
"chunk_template": "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n",
|
||||
},
|
||||
)
|
||||
```
|
||||
|
||||
## Building RAG-Enhanced Agents
|
||||
|
||||
One of the most powerful patterns is combining agents with RAG capabilities. Here's a complete example:
|
||||
|
||||
### Agent with Knowledge Search
|
||||
|
||||
```python
|
||||
from llama_stack_client import Agent
|
||||
|
||||
# Create agent with memory
|
||||
agent = Agent(
|
||||
client,
|
||||
model="meta-llama/Llama-3.3-70B-Instruct",
|
||||
instructions="You are a helpful assistant",
|
||||
tools=[
|
||||
{
|
||||
"name": "builtin::rag/knowledge_search",
|
||||
"args": {
|
||||
"vector_db_ids": [vector_db_id],
|
||||
# Defaults
|
||||
"query_config": {
|
||||
"chunk_size_in_tokens": 512,
|
||||
"chunk_overlap_in_tokens": 0,
|
||||
"chunk_template": "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n",
|
||||
},
|
||||
},
|
||||
}
|
||||
],
|
||||
)
|
||||
session_id = agent.create_session("rag_session")
|
||||
|
||||
# Ask questions about documents in the vector db, and the agent will query the db to answer the question.
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "How to optimize memory in PyTorch?"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
:::tip[Agent Instructions]
|
||||
The `instructions` field in the `AgentConfig` can be used to guide the agent's behavior. It is important to experiment with different instructions to see what works best for your use case.
|
||||
:::
|
||||
|
||||
### Document-Aware Conversations
|
||||
|
||||
You can also pass documents along with the user's message and ask questions about them:
|
||||
|
||||
```python
|
||||
# Initial document ingestion
|
||||
response = agent.create_turn(
|
||||
messages=[
|
||||
{"role": "user", "content": "I am providing some documents for reference."}
|
||||
],
|
||||
documents=[
|
||||
{
|
||||
"content": "https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/memory_optimizations.rst",
|
||||
"mime_type": "text/plain",
|
||||
}
|
||||
],
|
||||
session_id=session_id,
|
||||
)
|
||||
|
||||
# Query with RAG
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "What are the key topics in the documents?"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
### Viewing Agent Responses
|
||||
|
||||
You can print the response with the following:
|
||||
|
||||
```python
|
||||
from llama_stack_client import AgentEventLogger
|
||||
|
||||
for log in AgentEventLogger().log(response):
|
||||
log.print()
|
||||
```

## Vector Database Management

### Unregistering Vector DBs

If you need to clean up and unregister vector databases, you can do so as follows:

<Tabs>
<TabItem value="single" label="Single Database">

```python
# Unregister a specified vector database
vector_db_id = "my_vector_db_id"
print(f"Unregistering vector database: {vector_db_id}")
client.vector_dbs.unregister(vector_db_id=vector_db_id)
```

</TabItem>
<TabItem value="all" label="All Databases">

```python
# Unregister all vector databases
for vector_db_id in client.vector_dbs.list():
    print(f"Unregistering vector database: {vector_db_id.identifier}")
    client.vector_dbs.unregister(vector_db_id=vector_db_id.identifier)
```

</TabItem>
</Tabs>

## Best Practices

### 🎯 **Document Chunking**
- Use appropriate chunk sizes (512 tokens is often a good starting point)
- Consider overlap between chunks for better context preservation
- Experiment with different chunking strategies for your content type (see the sketch below)
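
As a concrete starting point, here is a minimal sketch of these knobs, mirroring the `query_config` defaults shown earlier (the 64-token overlap is an illustrative choice, not a recommendation):

```python
# Starting-point chunking configuration for RAG queries (sketch)
query_config = {
    "chunk_size_in_tokens": 512,    # good default for most prose
    "chunk_overlap_in_tokens": 64,  # illustrative: a small overlap preserves context
    "chunk_template": "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n",
}

results = client.tool_runtime.rag_tool.query(
    vector_db_ids=[vector_db_id],
    content="What do you know about...",
    query_config=query_config,
)
```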

### 🔍 **Embedding Strategy**
- Choose embedding models that match your domain
- Consider the trade-off between embedding dimension and performance
- Test different embedding models for your specific use case (see the sketch below)
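
One way to test alternatives is to pin the embedding model per store and compare stores side by side; a sketch (the model name and dimension follow the example used later in this document):

```python
# Create a store with an explicit embedding model to benchmark against others (sketch)
vector_store = client.vector_stores.create(
    name="domain_docs_minilm",
    embedding_model="all-MiniLM-L6-v2",  # swap in a domain-tuned model to compare
    embedding_dimension=384,
)
```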

### 📊 **Query Optimization**
- Use specific, well-formed queries for better retrieval
- Experiment with different search strategies
- Consider hybrid approaches (keyword + semantic search), as in the sketch below
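
A quick way to compare strategies is to run the same query under each search mode; this sketch uses the vector-store search API and `search_mode` parameter described in the file-operations guide later in this document:

```python
# Compare retrieval quality across search modes for one query (sketch)
for mode in ("vector", "keyword", "hybrid"):
    results = await client.vector_stores.search(
        vector_store_id=vector_store.id,
        query="How do I reduce GPU memory usage during training?",
        search_mode=mode,
        max_num_results=3,
    )
    print(mode, [r.score for r in results.data])
```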

### 🛡️ **Error Handling**
- Implement proper error handling for failed document processing (see the sketch below)
- Monitor ingestion success rates
- Have fallback strategies for retrieval failures
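
A minimal sketch of guarded ingestion, assuming `documents` is the list of `RAGDocument` objects created earlier and the standard `rag_tool.insert` call; prefer the client's specific exception types over a broad `Exception` where available:

```python
# Guarded ingestion: count failures instead of crashing the pipeline (sketch)
failures = 0
for document in documents:
    try:
        client.tool_runtime.rag_tool.insert(
            documents=[document],
            vector_db_id=vector_db_id,
            chunk_size_in_tokens=512,
        )
    except Exception as e:  # narrow this to the client's error types if available
        failures += 1
        print(f"Failed to ingest {document.document_id}: {e}")
print(f"Ingested {len(documents) - failures}/{len(documents)} documents")
```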

## Appendix

### More RAGDocument Examples

Here are various ways to create RAGDocument objects for different content types:

```python
import base64

import requests
from llama_stack_client import RAGDocument

# File URI
RAGDocument(document_id="num-0", content={"uri": "file://path/to/file"})

# Plain text
RAGDocument(document_id="num-1", content="plain text")

# Explicit text input
RAGDocument(
    document_id="num-2",
    content={
        "type": "text",
        "text": "plain text input",
    },  # for inputs that should be treated as text explicitly
)

# Image from URL
RAGDocument(
    document_id="num-3",
    content={
        "type": "image",
        "image": {"url": {"uri": "https://mywebsite.com/image.jpg"}},
    },
)

# Base64 encoded image
B64_ENCODED_IMAGE = base64.b64encode(
    requests.get(
        "https://raw.githubusercontent.com/meta-llama/llama-stack/refs/heads/main/docs/_static/llama-stack.png"
    ).content
)
RAGDocument(
    document_id="num-4",
    content={"type": "image", "image": {"data": B64_ENCODED_IMAGE}},
)
```

For more strongly typed interaction use the typed dicts found [here](https://github.com/meta-llama/llama-stack-client-python/blob/38cd91c9e396f2be0bec1ee96a19771582ba6f17/src/llama_stack_client/types/shared_params/document.py).

@ -391,5 +391,4 @@ client.shields.register(

- **[Agents](./agent)** - Integrating safety shields with intelligent agents
- **[Agent Execution Loop](./agent_execution_loop)** - Understanding safety in the execution flow
- **[Evaluations](./evals)** - Evaluating safety shield effectiveness
- **[Telemetry](./telemetry)** - Monitoring safety violations and metrics
- **[Llama Guard Documentation](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)** - Advanced safety model details

@ -10,58 +10,8 @@ import TabItem from '@theme/TabItem';

# Telemetry

The Llama Stack telemetry system provides comprehensive tracing, metrics, and logging capabilities. It supports multiple sink types including OpenTelemetry, SQLite, and Console output for complete observability of your AI applications.
The Llama Stack uses OpenTelemetry to provide comprehensive tracing, metrics, and logging capabilities.

## Event Types

The telemetry system supports three main types of events:

<Tabs>
<TabItem value="unstructured" label="Unstructured Logs">

Free-form log messages with severity levels for general application logging:

```python
unstructured_log_event = UnstructuredLogEvent(
    message="This is a log message",
    severity=LogSeverity.INFO,
)
```

</TabItem>
<TabItem value="metrics" label="Metric Events">

Numerical measurements with units for tracking performance and usage:

```python
metric_event = MetricEvent(
    metric="my_metric",
    value=10,
    unit="count",
)
```

</TabItem>
<TabItem value="structured" label="Structured Logs">

System events like span start/end that provide structured operation tracking:

```python
structured_log_event = SpanStartPayload(
    name="my_span",
    parent_span_id="parent_span_id",
)
```

</TabItem>
</Tabs>

## Spans and Traces

- **Spans**: Represent individual operations with timing information and hierarchical relationships
- **Traces**: Collections of related spans that form a complete request flow across your application

This hierarchical structure allows you to understand the complete execution path of requests through your Llama Stack application.
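
To make the hierarchy concrete, here is a minimal sketch using the OpenTelemetry Python API (the span names are illustrative; without a configured `TracerProvider` these spans are no-ops, but the parent/child structure is the same):

```python
from opentelemetry import trace

tracer = trace.get_tracer("llama-stack-demo")

# One trace = the tree of spans produced for a single request.
with tracer.start_as_current_span("handle_request"):
    with tracer.start_as_current_span("retrieve_documents"):
        pass  # child span: retrieval step
    with tracer.start_as_current_span("generate_answer"):
        pass  # sibling child span: inference step
```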

## Automatic Metrics Generation

@ -129,21 +79,6 @@ Send events to an OpenTelemetry Collector for integration with observability pla

- Compatible with all OpenTelemetry collectors
- Supports both traces and metrics

</TabItem>
<TabItem value="sqlite" label="SQLite">

Store events in a local SQLite database for direct querying:

**Use Cases:**
- Local development and debugging
- Custom analytics and reporting
- Offline analysis of application behavior

**Features:**
- Direct SQL querying capabilities
- Persistent local storage
- No external dependencies

</TabItem>
<TabItem value="console" label="Console">

@ -174,9 +109,8 @@ telemetry:

  provider_type: inline::meta-reference
  config:
    service_name: "llama-stack-service"
    sinks: ['console', 'sqlite', 'otel_trace', 'otel_metric']
    sinks: ['console', 'otel_trace', 'otel_metric']
    otel_exporter_otlp_endpoint: "http://localhost:4318"
    sqlite_db_path: "/path/to/telemetry.db"
```

### Environment Variables

@ -185,7 +119,7 @@ Configure telemetry behavior using environment variables:

- **`OTEL_EXPORTER_OTLP_ENDPOINT`**: OpenTelemetry Collector endpoint (default: `http://localhost:4318`)
- **`OTEL_SERVICE_NAME`**: Service name for telemetry (default: empty string)
- **`TELEMETRY_SINKS`**: Comma-separated list of sinks (default: `console,sqlite`)
- **`TELEMETRY_SINKS`**: Comma-separated list of sinks (default: `[]`)
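
For example, a launcher script might set these before starting the server; a sketch (the values simply mirror the defaults and sink names listed above):

```python
import os

# Telemetry configuration via environment variables (sketch)
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
os.environ["OTEL_SERVICE_NAME"] = "my-llama-stack-service"
os.environ["TELEMETRY_SINKS"] = "console,otel_trace,otel_metric"
```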

### Quick Setup: Complete Telemetry Stack

@ -248,37 +182,10 @@ Forward metrics to other observability systems:

</TabItem>
</Tabs>

## SQLite Querying

The `sqlite` sink allows you to query traces without an external system. This is particularly useful for development and custom analytics.

### Example Queries

```sql
-- Query recent traces
SELECT * FROM traces WHERE timestamp > datetime('now', '-1 hour');

-- Analyze span durations
SELECT name, AVG(duration_ms) as avg_duration
FROM spans
GROUP BY name
ORDER BY avg_duration DESC;

-- Find slow operations
SELECT * FROM spans
WHERE duration_ms > 1000
ORDER BY duration_ms DESC;
```

:::tip[Advanced Analytics]
Refer to the [Getting Started notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb) for more examples on querying traces and spans programmatically.
:::

## Best Practices

### 🔍 **Monitoring Strategy**
- Use OpenTelemetry for production environments
- Combine multiple sinks for development (console + SQLite)
- Set up alerts on key metrics like token usage and error rates

### 📊 **Metrics Analysis**

@ -293,45 +200,8 @@ Refer to the [Getting Started notebook](https://github.com/meta-llama/llama-stac

### 🔧 **Configuration Management**
- Use environment variables for flexible deployment
- Configure appropriate retention policies for SQLite
- Ensure proper network access to OpenTelemetry collectors

## Integration Examples

### Basic Telemetry Setup

```python
from llama_stack_client import LlamaStackClient

# Client with telemetry headers
client = LlamaStackClient(
    base_url="http://localhost:8000",
    extra_headers={
        "X-Telemetry-Service": "my-ai-app",
        "X-Telemetry-Version": "1.0.0",
    },
)

# All API calls will be automatically traced
response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

### Custom Telemetry Context

```python
from opentelemetry import trace

# Obtain a tracer; the original snippet assumed one was already configured
tracer = trace.get_tracer(__name__)

# Add custom span attributes for better tracking
with tracer.start_as_current_span("custom_operation") as span:
    span.set_attribute("user_id", "user123")
    span.set_attribute("operation_type", "chat_completion")

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.2-3B-Instruct",
        messages=[{"role": "user", "content": "Hello!"}],
    )
```

## Related Resources

@ -104,23 +104,19 @@ client.toolgroups.register(

)
```

Note that most of the more useful MCP servers need you to authenticate with them. Many of them use OAuth2.0 for authentication. You can provide authorization headers to send to the MCP server using the "Provider Data" abstraction provided by Llama Stack. When making an agent call,
Note that most of the more useful MCP servers need you to authenticate with them. Many of them use OAuth2.0 for authentication. You can provide the authorization token when creating the Agent:

```python
agent = Agent(
    ...,
    tools=["mcp::deepwiki"],
    extra_headers={
        "X-LlamaStack-Provider-Data": json.dumps(
            {
                "mcp_headers": {
                    "http://mcp.deepwiki.com/sse": {
                        "Authorization": "Bearer <your_access_token>",
                    },
                },
            }
        ),
    },
    tools=[
        {
            "type": "mcp",
            "server_url": "https://mcp.deepwiki.com/sse",
            "server_label": "mcp::deepwiki",
            "authorization": "<your_access_token>",  # OAuth token (without "Bearer " prefix)
        }
    ],
)
agent.create_turn(...)
```

@ -62,6 +62,10 @@ The new `/v2` API must be introduced alongside the existing `/v1` API and run in

When a `/v2` API is introduced, a clear and generous deprecation policy for the `/v1` API must be published simultaneously. This policy must outline the timeline for the eventual removal of the `/v1` API, giving users ample time to migrate.

### Deprecated APIs

Deprecated APIs are those that are no longer actively maintained or supported. Deprecated APIs are marked with the flag `deprecated = True` in the OpenAPI spec. These APIs will be removed in a future release.
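
For illustration, the flag surfaces in the spec roughly like this (a hand-written OpenAPI fragment expressed as a Python dict, not generated output; the path is hypothetical):

```python
# Sketch of how a deprecated operation appears in an OpenAPI document
openapi_fragment = {
    "paths": {
        "/v1/example-endpoint": {
            "get": {
                "summary": "Legacy endpoint kept for backward compatibility",
                "deprecated": True,  # clients and docs tooling key off this flag
            }
        }
    }
}
```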

### API Stability vs. Provider Stability

The leveling introduced in this document relates to the stability of the API and not specifically the providers within the API.

@ -58,7 +58,7 @@ External APIs must expose a `available_providers()` function in their module tha

```python
# llama_stack_api_weather/api.py
from llama_stack.providers.datatypes import Api, InlineProviderSpec, ProviderSpec
from llama_stack_api import Api, InlineProviderSpec, ProviderSpec


def available_providers() -> list[ProviderSpec]:

@ -79,7 +79,7 @@ A Protocol class like so:

# llama_stack_api_weather/api.py
from typing import Protocol

from llama_stack.schema_utils import webmethod
from llama_stack_api import webmethod


class WeatherAPI(Protocol):

@ -151,13 +151,12 @@ __all__ = ["WeatherAPI", "available_providers"]

# llama-stack-api-weather/src/llama_stack_api_weather/weather.py
from typing import Protocol

from llama_stack.providers.datatypes import (
from llama_stack_api import (
    Api,
    ProviderSpec,
    RemoteProviderSpec,
    webmethod,
)
from llama_stack.schema_utils import webmethod


def available_providers() -> list[ProviderSpec]:
    return [

@ -7,7 +7,7 @@ sidebar_position: 1

# APIs

A Llama Stack API is described as a collection of REST endpoints. We currently support the following APIs:
A Llama Stack API is described as a collection of REST endpoints following OpenAI API standards. We currently support the following APIs:

- **Inference**: run inference with an LLM
- **Safety**: apply safety policies to the output at a Systems (not only model) level

@ -16,13 +16,26 @@ A Llama Stack API is described as a collection of REST endpoints. We currently s

- **Scoring**: evaluate outputs of the system
- **Eval**: generate outputs (via Inference or Agents) and perform scoring
- **VectorIO**: perform operations on vector stores, such as adding documents, searching, and deleting documents
- **Files**: manage file uploads, storage, and retrieval
- **Telemetry**: collect telemetry data from the system
- **Post Training**: fine-tune a model
- **Tool Runtime**: interact with various tools and protocols
- **Responses**: generate responses from an LLM using this OpenAI-compatible API.
- **Responses**: generate responses from an LLM

We are working on adding a few more APIs to complete the application lifecycle. These will include:
- **Batch Inference**: run inference on a dataset of inputs
- **Batch Agents**: run agents on a dataset of inputs
- **Synthetic Data Generation**: generate synthetic data for model development
- **Batches**: OpenAI-compatible batch management for inference

## OpenAI API Compatibility

We are working on adding OpenAI API compatibility to Llama Stack. This will allow you to use Llama Stack with OpenAI API clients and tools.

### File Operations and Vector Store Integration

The Files API and Vector Store APIs work together through file operations, enabling automatic document processing and search. This integration implements the [OpenAI Vector Store Files API specification](https://platform.openai.com/docs/api-reference/vector-stores-files) and allows you to:
- Upload documents through the Files API
- Automatically process and chunk documents into searchable vectors
- Store processed content in vector databases based on the availability of [our providers](../../providers/index.mdx)
- Search through documents using natural language queries

For detailed information about this integration, see [File Operations and Vector Store Integration](../file_operations_vector_stores.md).

420 docs/docs/concepts/file_operations_vector_stores.mdx Normal file

@ -0,0 +1,420 @@

# File Operations and Vector Store Integration

## Overview

Llama Stack provides seamless integration between the Files API and Vector Store APIs, enabling you to upload documents and automatically process them into searchable vector embeddings. This integration implements file operations following the [OpenAI Vector Store Files API specification](https://platform.openai.com/docs/api-reference/vector-stores-files).

## Enhanced Capabilities Beyond OpenAI

While Llama Stack maintains full compatibility with OpenAI's Vector Store API, it provides several additional capabilities that enhance functionality and flexibility:

### **Embedding Model Specification**
Unlike OpenAI's vector stores, which use a fixed embedding model, Llama Stack allows you to specify which embedding model to use when creating a vector store:

```python
# Create vector store with specific embedding model
vector_store = client.vector_stores.create(
    name="my_documents",
    embedding_model="all-MiniLM-L6-v2",  # Specify your preferred model
    embedding_dimension=384,
)
```

### **Advanced Search Modes**
Llama Stack supports multiple search modes beyond basic vector similarity:

- **Vector Search**: Pure semantic similarity search using embeddings
- **Keyword Search**: Traditional keyword-based search for exact matches
- **Hybrid Search**: Combines both vector and keyword search for optimal results

```python
# Different search modes
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="machine learning algorithms",
    search_mode="hybrid",  # or "vector", "keyword"
    max_num_results=5,
)
```

### **Flexible Ranking Options**
For hybrid search, Llama Stack offers configurable ranking strategies:

- **RRF (Reciprocal Rank Fusion)**: Combines rankings with configurable impact factor
- **Weighted Ranker**: Linear combination of vector and keyword scores with adjustable weights

```python
# Custom ranking configuration
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="neural networks",
    search_mode="hybrid",
    ranking_options={
        "ranker": {"type": "weighted", "alpha": 0.7}  # 70% vector, 30% keyword
    },
)
```
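
For the RRF strategy, a comparable sketch might look like this (the `impact_factor` field name is an assumption based on the RRF description above; check your provider's reference for the exact key):

```python
# Reciprocal Rank Fusion ranking (sketch)
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="neural networks",
    search_mode="hybrid",
    ranking_options={
        "ranker": {"type": "rrf", "impact_factor": 60.0}  # assumed field name
    },
)
```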

### **Provider Selection**
Choose from multiple vector store providers based on your specific needs:

- **Inline Providers**: FAISS (fast in-memory), SQLite-vec (disk-based), Milvus (high-performance)
- **Remote Providers**: ChromaDB, Qdrant, Weaviate, Postgres (PGVector), Milvus

```python
# Specify provider when creating vector store
vector_store = client.vector_stores.create(
    name="my_documents", provider_id="sqlite-vec"  # Choose your preferred provider
)
```

## How It Works

The file operations work through several key components:

1. **File Upload**: Documents are uploaded through the Files API
2. **Automatic Processing**: Files are automatically chunked and converted to embeddings
3. **Vector Storage**: Chunks are stored in vector databases with metadata
4. **Search & Retrieval**: Users can search through processed documents using natural language

## Supported Vector Store Providers

The following vector store providers support file operations:

### Inline Providers (Single Node)

- **FAISS**: Fast in-memory vector similarity search
- **SQLite-vec**: Disk-based storage with hybrid search capabilities

### Remote Providers (Hosted)

- **ChromaDB**: Vector database with metadata filtering
- **Weaviate**: Vector database with GraphQL interface
- **Postgres (PGVector)**: Vector extensions for PostgreSQL

### Both Inline & Remote Providers
- **Milvus**: High-performance vector database with advanced indexing
- **Qdrant**: Vector similarity search with payload filtering

## File Processing Pipeline

### 1. File Upload

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8000")

# Upload a document
with open("document.pdf", "rb") as f:
    file_info = await client.files.upload(file=f, purpose="assistants")
```

### 2. Attach to Vector Store

```python
# Create a vector store
vector_store = client.vector_stores.create(name="my_documents")

# Attach the file to the vector store
file_attach_response = await client.vector_stores.files.create(
    vector_store_id=vector_store.id, file_id=file_info.id
)
```

### 3. Automatic Processing

The system automatically:
- Detects the file type and extracts text content
- Splits content into chunks (default: 800 tokens with 400 token overlap)
- Generates embeddings for each chunk
- Stores chunks with metadata in the vector store
- Updates file status to "completed"

### 4. Search and Retrieval

```python
# Search through processed documents
search_results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="What is the main topic discussed?",
    max_num_results=5,
)

# Process results
for result in search_results.data:
    print(f"Score: {result.score}")
    for content in result.content:
        print(f"Content: {content.text}")
```

## Supported File Types

The file processing pipeline supports various document formats:

- **Text Files**: `.txt`, `.md`, `.rst`
- **Documents**: `.pdf`, `.docx`, `.doc`
- **Code**: `.py`, `.js`, `.java`, `.cpp`, etc.
- **Data**: `.json`, `.csv`, `.xml`
- **Web Content**: HTML files

## Chunking Strategies

### Default Strategy

The default chunking strategy uses:
- **Max Chunk Size**: 800 tokens
- **Overlap**: 400 tokens
- **Method**: Semantic boundary detection

### Custom Chunking

You can customize chunking when attaching files:

```python
from llama_stack.apis.vector_io import VectorStoreChunkingStrategy

# Define the strategy first (the original snippet left this undefined);
# the static shape below follows the OpenAI vector-store chunking spec.
chunking_strategy = {
    "type": "static",
    "static": {"max_chunk_size_tokens": 600, "chunk_overlap_tokens": 200},
}

# Attach file with custom chunking
file_attach_response = await client.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id=file_info.id,
    chunking_strategy=chunking_strategy,
)
```

**Note**: While Llama Stack is OpenAI-compatible, it also supports additional options beyond the standard OpenAI API. When creating vector stores, you can specify custom embedding models and embedding dimensions that will be used when processing chunks from attached files.

## File Management

### List Files in Vector Store

```python
# List all files in a vector store
files = await client.vector_stores.files.list(vector_store_id=vector_store.id)

for file in files:
    print(f"File: {file.filename}, Status: {file.status}")
```

### File Status Tracking

Files go through several statuses:
- **in_progress**: File is being processed
- **completed**: File successfully processed and searchable
- **failed**: Processing failed (check `last_error` for details)
- **cancelled**: Processing was cancelled
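
Since processing is asynchronous, a small polling loop is a common pattern; a sketch, assuming an OpenAI-style `vector_stores.files.retrieve` endpoint is exposed:

```python
import asyncio

# Poll until the file reaches a terminal status (sketch)
while True:
    f = await client.vector_stores.files.retrieve(
        vector_store_id=vector_store.id, file_id=file_info.id
    )
    if f.status in ("completed", "failed", "cancelled"):
        break
    await asyncio.sleep(1)

if f.status == "failed":
    print(f"Processing failed: {f.last_error}")
```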

### Retrieve File Content

```python
# Get chunked content from vector store
content_response = await client.vector_stores.files.retrieve_content(
    vector_store_id=vector_store.id, file_id=file_info.id
)

for chunk in content_response.content:
    print(f"Chunk {chunk.metadata.get('chunk_index', 0)}: {chunk.text}")
```

## Vector Store Management

### List Vector Stores

Retrieve a paginated list of all vector stores:

```python
# List all vector stores with default pagination
vector_stores = await client.vector_stores.list()

# Custom pagination and ordering
vector_stores = await client.vector_stores.list(
    limit=10,
    order="asc",  # or "desc"
    after="vs_12345678",  # cursor-based pagination
)

for store in vector_stores.data:
    print(f"Store: {store.name}, Files: {store.file_counts.total}")
    print(f"Created: {store.created_at}, Status: {store.status}")
```

### Retrieve Vector Store Details

Get detailed information about a specific vector store:

```python
# Get vector store details
store_details = await client.vector_stores.retrieve(vector_store_id="vs_12345678")

print(f"Name: {store_details.name}")
print(f"Status: {store_details.status}")
print(f"File Counts: {store_details.file_counts}")
print(f"Usage: {store_details.usage_bytes} bytes")
print(f"Created: {store_details.created_at}")
print(f"Metadata: {store_details.metadata}")
```

### Update Vector Store

Modify vector store properties such as name, metadata, or expiration settings:

```python
# Update vector store name and metadata
updated_store = await client.vector_stores.update(
    vector_store_id="vs_12345678",
    name="Updated Document Collection",
    metadata={
        "description": "Updated collection for research",
        "category": "research",
        "version": "2.0",
    },
)

# Set expiration policy
expired_store = await client.vector_stores.update(
    vector_store_id="vs_12345678",
    expires_after={"anchor": "last_active_at", "days": 30},
)

print(f"Updated store: {updated_store.name}")
print(f"Last active: {updated_store.last_active_at}")
```

### Delete Vector Store

Remove a vector store and all its associated data:

```python
# Delete a vector store
delete_response = await client.vector_stores.delete(vector_store_id="vs_12345678")

if delete_response.deleted:
    print(f"Vector store {delete_response.id} successfully deleted")
else:
    print("Failed to delete vector store")
```

**Important Notes:**
- Deleting a vector store removes all files, chunks, and embeddings
- This operation cannot be undone
- The underlying vector database is also cleaned up
- Consider backing up important data before deletion

## Search Capabilities

### Vector Search

Pure similarity search using embeddings:

```python
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="machine learning algorithms",
    max_num_results=10,
)
```

### Filtered Search

Combine vector search with metadata filtering:

```python
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="machine learning algorithms",
    filters={"file_type": "pdf", "upload_date": "2024-01-01"},
    max_num_results=10,
)
```

### Hybrid Search

[SQLite-vec](../providers/vector_io/inline_sqlite-vec.mdx), [pgvector](../providers/vector_io/remote_pgvector.mdx), and [Milvus](../providers/vector_io/inline_milvus.mdx) support combining vector and keyword search.
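
A minimal sketch of a hybrid query against one of these providers, reusing the `search_mode` and `ranking_options` parameters introduced earlier (the query text and alpha value are illustrative):

```python
# Hybrid search combining semantic and keyword signals (sketch)
results = await client.vector_stores.search(
    vector_store_id=vector_store.id,
    query="transformer attention mechanisms",
    search_mode="hybrid",
    ranking_options={"ranker": {"type": "weighted", "alpha": 0.5}},
    max_num_results=5,
)
```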

## Performance Considerations

> **Note**: For detailed performance optimization strategies, see [Performance Considerations](../providers/files/openai_file_operations_support.md#performance-considerations) in the provider documentation.

**Key Points:**
- **Chunk Size**: 400-600 tokens for precision, 800-1200 for context
- **Storage**: Choose provider based on your performance needs
- **Search**: Optimize for your specific use case

## Error Handling

> **Note**: For comprehensive troubleshooting and error handling, see [Troubleshooting](../providers/files/openai_file_operations_support.md#troubleshooting) in the provider documentation.

**Common Issues:**
- File processing failures (format, size limits)
- Search performance optimization
- Storage and memory issues

## Best Practices

> **Note**: For detailed best practices and recommendations, see [Best Practices](../providers/files/openai_file_operations_support.md#best-practices) in the provider documentation.

**Key Recommendations:**
- File organization and naming conventions
- Chunking strategy optimization
- Metadata and monitoring practices
- Regular cleanup and maintenance

## Integration Examples

### RAG Application

```python
# Build a RAG system with file uploads
async def build_rag_system():
    # Create vector store
    vector_store = client.vector_stores.create(name="knowledge_base")

    # Upload and process documents
    documents = ["doc1.pdf", "doc2.pdf", "doc3.pdf"]
    for doc in documents:
        with open(doc, "rb") as f:
            file_info = await client.files.create(file=f, purpose="assistants")
        await client.vector_stores.files.create(
            vector_store_id=vector_store.id, file_id=file_info.id
        )

    return vector_store


# Query the RAG system
async def query_rag(vector_store_id, question):
    results = await client.vector_stores.search(
        vector_store_id=vector_store_id, query=question, max_num_results=5
    )
    return results
```

### Document Analysis

```python
# Analyze document content through vector search
async def analyze_document(vector_store_id, file_id):
    # Get document content
    content = await client.vector_stores.files.retrieve_content(
        vector_store_id=vector_store_id, file_id=file_id
    )

    # Search for specific topics
    topics = ["introduction", "methodology", "conclusion"]
    analysis = {}

    for topic in topics:
        results = await client.vector_stores.search(
            vector_store_id=vector_store_id, query=topic, max_num_results=3
        )
        analysis[topic] = results.data

    return analysis
```

## Next Steps

- Explore the [Files API documentation](../../providers/files/files.mdx) for detailed API reference
- Check [Vector Store Providers](../providers/vector_io/index.mdx) for specific implementation details
- Review [Getting Started](../getting_started/quickstart.mdx) for quick setup instructions

@ -1,233 +1,13 @@

# Contributing to Llama Stack
We want to make contributing to this project as easy and transparent as
possible.
---
title: Contributing
description: Contributing to Llama Stack
sidebar_label: Contributing to Llama Stack
sidebar_position: 3
hide_title: true
---

## Set up your development environment
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

We use [uv](https://github.com/astral-sh/uv) to manage python dependencies and virtual environments.
You can install `uv` by following this [guide](https://docs.astral.sh/uv/getting-started/installation/).

You can install the dependencies by running:

```bash
cd llama-stack
uv sync --group dev
uv pip install -e .
source .venv/bin/activate
```

```{note}
You can use a specific version of Python with `uv` by adding the `--python <version>` flag (e.g. `--python 3.12`).
Otherwise, `uv` will automatically select a Python version according to the `requires-python` section of the `pyproject.toml`.
For more info, see the [uv docs around Python versions](https://docs.astral.sh/uv/concepts/python-versions/).
```

Note that you can create a dotenv file `.env` that includes necessary environment variables:
```
LLAMA_STACK_BASE_URL=http://localhost:8321
LLAMA_STACK_CLIENT_LOG=debug
LLAMA_STACK_PORT=8321
LLAMA_STACK_CONFIG=<provider-name>
TAVILY_SEARCH_API_KEY=
BRAVE_SEARCH_API_KEY=
```

And then use this dotenv file when running client SDK tests via the following:
```bash
uv run --env-file .env -- pytest -v tests/integration/inference/test_text_inference.py --text-model=meta-llama/Llama-3.1-8B-Instruct
```

### Pre-commit Hooks

We use [pre-commit](https://pre-commit.com/) to run linting and formatting checks on your code. You can install the pre-commit hooks by running:

```bash
uv run pre-commit install
```

After that, pre-commit hooks will run automatically before each commit.

Alternatively, if you don't want to install the pre-commit hooks, you can run the checks manually by running:

```bash
uv run pre-commit run --all-files
```

```{caution}
Before pushing your changes, make sure that the pre-commit hooks have passed successfully.
```

## Discussions -> Issues -> Pull Requests

We actively welcome your pull requests. However, please read the following. This is heavily inspired by [Ghostty](https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTING.md).

If in doubt, please open a [discussion](https://github.com/meta-llama/llama-stack/discussions); we can always convert that to an issue later.

### Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.

Meta has a [bounty program](http://facebook.com/whitehat/info) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.

### Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Meta's open source projects.

Complete your CLA here: [https://code.facebook.com/cla](https://code.facebook.com/cla)

**I'd like to contribute!**

If you are new to the project, start by looking at the issues tagged with "good first issue". If you're interested,
leave a comment on the issue and a triager will assign it to you.

Please avoid picking up too many issues at once. This helps you stay focused and ensures that others in the community also have opportunities to contribute.
- Try to work on only 1–2 issues at a time, especially if you’re still getting familiar with the codebase.
- Before taking an issue, check if it’s already assigned or being actively discussed.
- If you’re blocked or can’t continue with an issue, feel free to unassign yourself or leave a comment so others can step in.

**I have a bug!**

1. Search the issue tracker and discussions for similar issues.
2. If you don't have steps to reproduce, open a discussion.
3. If you have steps to reproduce, open an issue.

**I have an idea for a feature!**

1. Open a discussion.

**I've implemented a feature!**

1. If there is an issue for the feature, open a pull request.
2. If there is no issue, open a discussion and link to your branch.

**I have a question!**

1. Open a discussion or use [Discord](https://discord.gg/llama-stack).

**Opening a Pull Request**

1. Fork the repo and create your branch from `main`.
2. If you've changed APIs, update the documentation.
3. Ensure the test suite passes.
4. Make sure your code lints using `pre-commit`.
5. If you haven't already, complete the Contributor License Agreement ("CLA").
6. Ensure your pull request follows the [conventional commits format](https://www.conventionalcommits.org/en/v1.0.0/).
7. Ensure your pull request follows the [coding style](#coding-style).

Please keep pull requests (PRs) small and focused. If you have a large set of changes, consider splitting them into logically grouped, smaller PRs to facilitate review and testing.

```{tip}
As a general guideline:
- Experienced contributors should try to keep no more than 5 open PRs at a time.
- New contributors are encouraged to have only one open PR at a time until they’re familiar with the codebase and process.
```

## Repository guidelines

### Coding Style

* Comments should provide meaningful insights into the code. Avoid filler comments that simply
  describe the next step, as they create unnecessary clutter; the same goes for docstrings.
* Prefer comments to clarify surprising behavior and/or relationships between parts of the code
  rather than explain what the next line of code does.
* When catching exceptions, prefer using a specific exception type rather than a broad catch-all like
  `Exception`.
* Error messages should be prefixed with "Failed to ..."
* 4 spaces for indentation rather than tabs
* When using `# noqa` to suppress a style or linter warning, include a comment explaining the
  justification for bypassing the check.
* When using `# type: ignore` to suppress a mypy warning, include a comment explaining the
  justification for bypassing the check.
* Don't use unicode characters in the codebase. ASCII-only is preferred for compatibility or
  readability reasons.
* Provider configuration classes should use Pydantic `Field`s and should have a `description` field
  that describes the configuration. These descriptions will be used to generate the provider
  documentation.
* When possible, use keyword arguments only when calling functions.
* Llama Stack utilizes custom Exception classes for certain Resources that should be used where applicable.

### License
By contributing to Llama, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree.

## Common Tasks

Some tips about common tasks you work on while contributing to Llama Stack:

### Using `llama stack build`

Building a stack image will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.

Example:
```bash
cd work/
git clone https://github.com/meta-llama/llama-stack.git
git clone https://github.com/meta-llama/llama-stack-client-python.git
cd llama-stack
LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --distro <...>
```

### Updating distribution configurations

If you have made changes to a provider's configuration in any form (introducing a new config key, or
changing models, etc.), you should run `./scripts/distro_codegen.py` to re-generate various YAML
files as well as the documentation. You should not change `docs/source/.../distributions/` files
manually as they are auto-generated.

### Updating the provider documentation

If you have made changes to a provider's configuration, you should run `./scripts/provider_codegen.py`
to re-generate the documentation. You should not change `docs/source/.../providers/` files manually
as they are auto-generated.
Note that the provider "description" field will be used to generate the provider documentation.

### Building the Documentation

If you are making changes to the documentation at [https://llamastack.github.io/](https://llamastack.github.io/), you can use the following command to build the documentation and preview your changes.

```bash
# This rebuilds the documentation pages and the OpenAPI spec.
npm install
npm run gen-api-docs all
npm run build

# This will start a local server (usually at http://127.0.0.1:3000).
npm run serve
```

### Update API Documentation

If you modify or add new API endpoints, update the API documentation accordingly. You can do this by running the following command:

```bash
uv run ./docs/openapi_generator/run_openapi_generator.sh
```

The generated API schema will be available in `docs/static/`. Make sure to review the changes before committing.

## Adding a New Provider

See:
- [Adding a New API Provider Page](./new_api_provider.mdx) which describes how to add new API providers to the Stack.
- [Vector Database Page](./new_vector_database.mdx) which describes how to add new vector databases with Llama Stack.
- [External Provider Page](/docs/providers/external/) which describes how to add external providers to the Stack.

## Testing

See the [Testing README](https://github.com/meta-llama/llama-stack/blob/main/tests/README.md) for detailed testing information.

## Advanced Topics

For developers who need deeper understanding of the testing system internals:

- [Record-Replay Testing](./testing/record-replay.mdx)

### Benchmarking

See the [Benchmarking README](https://github.com/meta-llama/llama-stack/blob/main/benchmarking/k8s-benchmark/README.md) for benchmarking information.
<ReactMarkdown>{Contributing}</ReactMarkdown>

@ -67,7 +67,7 @@ def get_base_url(self) -> str:

## Testing the Provider

Before running tests, you must have required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, you should install dependencies via `llama stack build --distro together`.
Before running tests, you must have required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, install its dependencies with `llama stack list-deps together | xargs -L1 uv pip install`.

### 1. Integration Testing

@ -68,7 +68,9 @@ recordings/

Direct API calls with no recording or replay:

```python
with inference_recording(mode=InferenceMode.LIVE):
from llama_stack.testing.api_recorder import api_recording, APIRecordingMode

with api_recording(mode=APIRecordingMode.LIVE):
    response = await client.chat.completions.create(...)
```

@ -79,7 +81,7 @@ Use for initial development and debugging against real APIs.

Captures API interactions while passing through real responses:

```python
with inference_recording(mode=InferenceMode.RECORD, storage_dir="./recordings"):
with api_recording(mode=APIRecordingMode.RECORD, storage_dir="./recordings"):
    response = await client.chat.completions.create(...)
    # Real API call made, response captured AND returned
```

@ -96,7 +98,7 @@ The recording process:

Returns stored responses instead of making API calls:

```python
with inference_recording(mode=InferenceMode.REPLAY, storage_dir="./recordings"):
with api_recording(mode=APIRecordingMode.REPLAY, storage_dir="./recordings"):
    response = await client.chat.completions.create(...)
    # No API call made, cached response returned instantly
```

@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem';

# Kubernetes Deployment Guide

Deploy Llama Stack and vLLM servers in a Kubernetes cluster instead of running them locally. This guide covers both local development with Kind and production deployment on AWS EKS.
Deploy Llama Stack and vLLM servers in a Kubernetes cluster instead of running them locally. This guide covers deployment using the Kubernetes operator to manage the Llama Stack server with Kind. The vLLM inference server is deployed manually.

## Prerequisites

@ -110,115 +110,176 @@ spec:

EOF
```

### Step 3: Configure Llama Stack
### Step 3: Install Kubernetes Operator

Update your run configuration:

```yaml
providers:
  inference:
  - provider_id: vllm
    provider_type: remote::vllm
    config:
      url: http://vllm-server.default.svc.cluster.local:8000/v1
      max_tokens: 4096
      api_token: fake
```

Build container image:
Install the Llama Stack Kubernetes operator to manage Llama Stack deployments:

```bash
tmp_dir=$(mktemp -d) && cat >$tmp_dir/Containerfile.llama-stack-run-k8s <<EOF
FROM distribution-myenv:dev
RUN apt-get update && apt-get install -y git
RUN git clone https://github.com/meta-llama/llama-stack.git /app/llama-stack-source
ADD ./vllm-llama-stack-run-k8s.yaml /app/config.yaml
EOF
podman build -f $tmp_dir/Containerfile.llama-stack-run-k8s -t llama-stack-run-k8s $tmp_dir
# Install from the latest main branch
kubectl apply -f https://raw.githubusercontent.com/llamastack/llama-stack-k8s-operator/main/release/operator.yaml

# Or install a specific version (e.g., v0.4.0)
# kubectl apply -f https://raw.githubusercontent.com/llamastack/llama-stack-k8s-operator/v0.4.0/release/operator.yaml
```

### Step 4: Deploy Llama Stack Server
Verify the operator is running:

```bash
kubectl get pods -n llama-stack-operator-system
```

For more information about the operator, see the [llama-stack-k8s-operator repository](https://github.com/llamastack/llama-stack-k8s-operator).

### Step 4: Deploy Llama Stack Server using Operator

Create a `LlamaStackDistribution` custom resource to deploy the Llama Stack server. The operator will automatically create the necessary Deployment, Service, and other resources.
You can optionally override the default `run.yaml` using `spec.server.userConfig` with a ConfigMap (see [userConfig spec](https://github.com/llamastack/llama-stack-k8s-operator/blob/main/docs/api-overview.md#userconfigspec)).

```yaml
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
apiVersion: llamastack.io/v1alpha1
kind: LlamaStackDistribution
metadata:
  name: llama-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llama-stack-server
  name: llamastack-vllm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: llama-stack
  template:
    metadata:
      labels:
        app.kubernetes.io/name: llama-stack
    spec:
      containers:
      - name: llama-stack
        image: localhost/llama-stack-run-k8s:latest
        imagePullPolicy: IfNotPresent
        command: ["llama", "stack", "run", "/app/config.yaml"]
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: llama-storage
          mountPath: /root/.llama
      volumes:
      - name: llama-storage
        persistentVolumeClaim:
          claimName: llama-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: llama-stack-service
spec:
  selector:
    app.kubernetes.io/name: llama-stack
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
  type: ClusterIP
  server:
    distribution:
      name: starter
    containerSpec:
      port: 8321
      env:
      - name: VLLM_URL
        value: "http://vllm-server.default.svc.cluster.local:8000/v1"
      - name: VLLM_MAX_TOKENS
        value: "4096"
      - name: VLLM_API_TOKEN
        value: "fake"
    # Optional: override run.yaml from a ConfigMap using userConfig
    userConfig:
      configMap:
        name: llama-stack-config
    storage:
      size: "20Gi"
      mountPath: "/home/lls/.lls"
EOF
```

**Configuration Options:**

- `replicas`: Number of Llama Stack server instances to run
- `server.distribution.name`: The distribution to use (e.g., `starter` for the starter distribution). See the [list of supported distributions](https://github.com/llamastack/llama-stack-k8s-operator/blob/main/distributions.json) in the operator repository.
- `server.distribution.image`: (Optional) Custom container image for non-supported distributions. Use this field when deploying a distribution that is not in the supported list. If specified, this takes precedence over `name`.
- `server.containerSpec.port`: Port on which the Llama Stack server listens (default: 8321)
- `server.containerSpec.env`: Environment variables to configure providers
- `server.userConfig`: (Optional) Override the default `run.yaml` using a ConfigMap. See [userConfig spec](https://github.com/llamastack/llama-stack-k8s-operator/blob/main/docs/api-overview.md#userconfigspec).
- `server.storage.size`: Size of the persistent volume for model and data storage
- `server.storage.mountPath`: Where to mount the storage in the container

**Note:** For a complete list of supported distributions, see [distributions.json](https://github.com/llamastack/llama-stack-k8s-operator/blob/main/distributions.json) in the operator repository. To use a custom or non-supported distribution, set the `server.distribution.image` field with your container image instead of `server.distribution.name`.

The operator automatically creates:
- A Deployment for the Llama Stack server
- A Service to access the server
- A PersistentVolumeClaim for storage
- All necessary RBAC resources

Check the status of your deployment:

```bash
kubectl get llamastackdistribution
kubectl describe llamastackdistribution llamastack-vllm
```

### Step 5: Test Deployment

Wait for the Llama Stack server pod to be ready:

```bash
# Port forward and test
kubectl port-forward service/llama-stack-service 5000:5000
llama-stack-client --endpoint http://localhost:5000 inference chat-completion --message "hello, what model are you?"
# Check the status of the LlamaStackDistribution
kubectl get llamastackdistribution llamastack-vllm

# Check the pods created by the operator
kubectl get pods -l app.kubernetes.io/name=llama-stack

# Wait for the pod to be ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=llama-stack --timeout=300s
```

Get the service name created by the operator (it typically follows the pattern `<llamastackdistribution-name>-service`):

```bash
# List services to find the service name
kubectl get services | grep llamastack

# Port forward and test (replace SERVICE_NAME with the actual service name)
kubectl port-forward service/llamastack-vllm-service 8321:8321
```

In another terminal, test the deployment:

```bash
llama-stack-client --endpoint http://localhost:8321 inference chat-completion --message "hello, what model are you?"
```

## Troubleshooting

**Check pod status:**
### vLLM Server Issues

**Check vLLM pod status:**
```bash
kubectl get pods -l app.kubernetes.io/name=vllm
kubectl logs -l app.kubernetes.io/name=vllm
```

**Test service connectivity:**
**Test vLLM service connectivity:**
```bash
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- curl http://vllm-server:8000/v1/models
```

### Llama Stack Server Issues

**Check LlamaStackDistribution status:**
```bash
# Get detailed status
kubectl describe llamastackdistribution llamastack-vllm

# Check for events
kubectl get events --sort-by='.lastTimestamp' | grep llamastack-vllm
```

**Check operator-managed pods:**
```bash
# List all pods managed by the operator
kubectl get pods -l app.kubernetes.io/name=llama-stack

# Check pod logs (replace POD_NAME with actual pod name)
kubectl logs -l app.kubernetes.io/name=llama-stack
```

**Check operator status:**
```bash
# Verify the operator is running
kubectl get pods -n llama-stack-operator-system

# Check operator logs if issues persist
kubectl logs -n llama-stack-operator-system -l control-plane=controller-manager
```

**Verify service connectivity:**
```bash
# Get the service endpoint
kubectl get svc llamastack-vllm-service

# Test connectivity from within the cluster
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- curl http://llamastack-vllm-service:8321/health
```

## Related Resources

- **[Deployment Overview](/docs/deploying/)** - Overview of deployment options
- **[Distributions](/docs/distributions)** - Understanding Llama Stack distributions
- **[Configuration](/docs/distributions/configuration)** - Detailed configuration options
- **[LlamaStack Operator](https://github.com/llamastack/llama-stack-k8s-operator)** - Overview of the llama-stack Kubernetes operator
- **[LlamaStackDistribution](https://github.com/llamastack/llama-stack-k8s-operator/blob/main/docs/api-overview.md)** - API spec of the llama-stack operator Custom Resource

@ -5,225 +5,80 @@ sidebar_label: Build your own Distribution
|
|||
sidebar_position: 3
|
||||
---
|
||||
|
||||
This guide will walk you through the steps to get started with building a Llama Stack distribution from scratch with your choice of API providers.
|
||||
This guide walks you through inspecting existing distributions, customising their configuration, and building runnable artefacts for your own deployment.
|
||||
|
||||
### Explore existing distributions
|
||||
|
||||
### Setting your log level
|
||||
All first-party distributions live under `llama_stack/distributions/`. Each directory contains:
|
||||
|
||||
In order to specify the proper logging level users can apply the following environment variable `LLAMA_STACK_LOGGING` with the following format:
|
||||
- `build.yaml` – the distribution specification (providers, additional dependencies, optional external provider directories).
|
||||
- `run.yaml` – sample run configuration (when provided).
|
||||
- Documentation fragments that power this site.
|
||||
|
||||
`LLAMA_STACK_LOGGING=server=debug;core=info`
|
||||
|
||||
Where each category is one of the following:
|
||||
|
||||
- all
|
||||
- core
|
||||
- server
|
||||
- router
|
||||
- inference
|
||||
- agents
|
||||
- safety
|
||||
- eval
|
||||
- tools
|
||||
- client
|
||||
|
||||
Can be set to any of the following log levels:
|
||||
|
||||
- debug
|
||||
- info
|
||||
- warning
|
||||
- error
|
||||
- critical
|
||||
|
||||
The default global log level is `info`. `all` sets the log level for all components.
|
||||
|
||||
A user can also set `LLAMA_STACK_LOG_FILE`, which writes the logs to the specified path in addition to the terminal. For example: `export LLAMA_STACK_LOG_FILE=server.log`
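For example, the two variables can be combined when launching a distribution (the distribution name below is illustrative):

```bash
# Verbose server logs, quieter core logs, mirrored to server.log
export LLAMA_STACK_LOGGING="server=debug;core=info"
export LLAMA_STACK_LOG_FILE=server.log
llama stack run starter
```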
|
||||
|
||||
### Llama Stack Build
|
||||
|
||||
In order to build your own distribution, we recommend you clone the `llama-stack` repository.
|
||||
|
||||
|
||||
```
|
||||
git clone git@github.com:meta-llama/llama-stack.git
|
||||
cd llama-stack
|
||||
pip install -e .
|
||||
```
|
||||
Use the CLI to build your distribution.
|
||||
The main points to consider are:
|
||||
1. **Image Type** - Do you want a venv environment or a Container (eg. Docker)
|
||||
2. **Template** - Do you want to use a template to build your distribution? or start from scratch ?
|
||||
3. **Config** - Do you want to use a pre-existing config file to build your distribution?
|
||||
|
||||
```
|
||||
llama stack build -h
|
||||
usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--distro DISTRIBUTION] [--list-distros] [--image-type {container,venv}] [--image-name IMAGE_NAME] [--print-deps-only]
|
||||
[--run] [--providers PROVIDERS]
|
||||
|
||||
Build a Llama stack container
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
--config CONFIG Path to a config file to use for the build. You can find example configs in llama_stack.cores/**/build.yaml. If this argument is not provided, you will be prompted to
|
||||
enter information interactively (default: None)
|
||||
--template TEMPLATE (deprecated) Name of the example template config to use for build. You may use `llama stack build --list-distros` to check out the available distributions (default:
|
||||
None)
|
||||
--distro DISTRIBUTION, --distribution DISTRIBUTION
|
||||
Name of the distribution to use for build. You may use `llama stack build --list-distros` to check out the available distributions (default: None)
|
||||
--list-distros, --list-distributions
|
||||
Show the available distributions for building a Llama Stack distribution (default: False)
|
||||
--image-type {container,venv}
|
||||
Image Type to use for the build. If not specified, will use the image type from the template config. (default: None)
|
||||
--image-name IMAGE_NAME
|
||||
[for image-type=container|venv] Name of the virtual environment to use for the build. If not specified, currently active environment will be used if found. (default:
|
||||
None)
|
||||
--print-deps-only Print the dependencies for the stack only, without building the stack (default: False)
|
||||
--run Run the stack after building using the same image type, name, and other applicable arguments (default: False)
|
||||
--providers PROVIDERS
|
||||
Build a config for a list of providers and only those providers. This list is formatted like: api1=provider1,api2=provider2. Where there can be multiple providers per
|
||||
API. (default: None)
|
||||
```
|
||||
|
||||
After this step is complete, a file named `<name>-build.yaml` and template file `<name>-run.yaml` will be generated and saved at the output file path specified at the end of the command.
|
||||
Browse that folder to understand available providers and copy a distribution to use as a starting point. When creating a new stack, duplicate an existing directory, rename it, and adjust the `build.yaml` file to match your requirements.
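For example, a minimal way to bootstrap a new distribution from the `starter` directory (paths and the `my-distro` name are illustrative):

```bash
cp -R llama_stack/distributions/starter llama_stack/distributions/my-distro
# Edit the spec to add or remove providers and extra dependencies
$EDITOR llama_stack/distributions/my-distro/build.yaml
```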
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="template" label="Building from a template">
|
||||
To build with alternative API providers, we provide distribution templates that let you get started with a distribution backed by different providers.
|
||||
<TabItem value="container" label="Building a container">
|
||||
|
||||
The following command will allow you to see the available templates and their corresponding providers.
|
||||
```
|
||||
llama stack build --list-templates
|
||||
Use the Containerfile at `containers/Containerfile`, which installs `llama-stack`, resolves distribution dependencies via `llama stack list-deps`, and sets the entrypoint to `llama stack run`.
|
||||
|
||||
```bash
|
||||
docker build . \
|
||||
-f containers/Containerfile \
|
||||
--build-arg DISTRO_NAME=starter \
|
||||
--tag llama-stack:starter
|
||||
```
|
||||
|
||||
```
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| Template Name | Description |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| watsonx | Use watsonx for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| vllm-gpu | Use a built-in vLLM engine for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| together | Use Together.AI for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| tgi | Use (an external) TGI server for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| starter | Quick start template for running Llama Stack with several popular providers |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| sambanova | Use SambaNova for running LLM inference and safety |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| remote-vllm | Use (an external) vLLM server for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| postgres-demo | Quick start template for running Llama Stack with several popular providers |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| passthrough | Use Passthrough hosted llama-stack endpoint for LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| open-benchmark | Distribution for running open benchmarks |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| ollama | Use (an external) Ollama server for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| nvidia | Use NVIDIA NIM for running LLM inference, evaluation and safety |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| meta-reference-gpu | Use Meta Reference for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| llama_api | Distribution for running e2e tests in CI |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| hf-serverless | Use (an external) Hugging Face Inference Endpoint for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| hf-endpoint | Use (an external) Hugging Face Inference Endpoint for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| groq | Use Groq for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| fireworks | Use Fireworks.AI for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| experimental-post-training | Experimental template for post training |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| dell | Dell's distribution of Llama Stack. TGI inference via Dell's custom |
|
||||
| | container |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| ci-tests | Distribution for running e2e tests in CI |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| cerebras | Use Cerebras for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| bedrock | Use AWS Bedrock for running LLM inference and safety |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
```
|
||||
Handy build arguments:
|
||||
|
||||
You may then pick a template to build your distribution with providers fitted to your liking.
|
||||
- `DISTRO_NAME` – distribution directory name (defaults to `starter`).
|
||||
- `RUN_CONFIG_PATH` – absolute path inside the build context for a run config that should be baked into the image (e.g. `/workspace/run.yaml`).
|
||||
- `INSTALL_MODE=editable` – install the repository copied into `/workspace` with `uv pip install -e`. Pair it with `--build-arg LLAMA_STACK_DIR=/workspace`.
|
||||
- `LLAMA_STACK_CLIENT_DIR` – optional editable install of the Python client.
|
||||
- `PYPI_VERSION` / `TEST_PYPI_VERSION` – pin specific releases when not using editable installs.
|
||||
- `KEEP_WORKSPACE=1` – retain `/workspace` in the final image if you need to access additional files (such as sample configs or provider bundles).
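Combining several of these arguments, an editable build that also bakes in a run config might look like the following sketch (it assumes the repository root is the build context and that `run.yaml` lives there):

```bash
docker build . \
  -f containers/Containerfile \
  --build-arg DISTRO_NAME=starter \
  --build-arg INSTALL_MODE=editable \
  --build-arg LLAMA_STACK_DIR=/workspace \
  --build-arg RUN_CONFIG_PATH=/workspace/run.yaml \
  --tag llama-stack:starter-dev
```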
|
||||
|
||||
For example, to build a distribution with TGI as the inference provider, you can run:
|
||||
```
|
||||
$ llama stack build --distro starter
|
||||
...
|
||||
You can now edit ~/.llama/distributions/llamastack-starter/starter-run.yaml and run `llama stack run ~/.llama/distributions/llamastack-starter/starter-run.yaml`
|
||||
```
|
||||
Make sure any custom `build.yaml`, run configs, or provider directories you reference are included in the Docker build context so the Containerfile can read them.
|
||||
|
||||
```{tip}
|
||||
The generated `run.yaml` file is a starting point for your configuration. For comprehensive guidance on customizing it for your specific needs, infrastructure, and deployment scenarios, see [Customizing Your run.yaml Configuration](customizing_run_yaml.md).
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="scratch" label="Building from Scratch">
|
||||
<TabItem value="external" label="Building with external providers">
|
||||
|
||||
If the provided templates do not fit your use case, you can run `llama stack build` without arguments, which launches an interactive wizard that prompts you for the build configuration.
|
||||
External providers live outside the main repository but can be bundled by pointing `external_providers_dir` to a directory that contains your provider packages.
|
||||
|
||||
It is best to start with a template and understand the structure of the config file and the various concepts (APIs, providers, resources, etc.) before starting from scratch.
|
||||
```
|
||||
llama stack build
|
||||
1. Copy providers into the build context, for example `cp -R path/to/providers providers.d`.
|
||||
2. Update `build.yaml` with the directory and provider entries.
|
||||
3. Adjust run configs to use the in-container path (usually `/.llama/providers.d`). Pass `--build-arg RUN_CONFIG_PATH=/workspace/run.yaml` if you want to bake the config.
|
||||
|
||||
> Enter a name for your Llama Stack (e.g. my-local-stack): my-stack
|
||||
> Enter the image type you want your Llama Stack to be built as (container or venv): venv
|
||||
|
||||
Llama Stack is composed of several APIs working together. Let's select
|
||||
the provider types (implementations) you want to use for these APIs.
|
||||
|
||||
Tip: use <TAB> to see options for the providers.
|
||||
|
||||
> Enter provider for API inference: inline::meta-reference
|
||||
> Enter provider for API safety: inline::llama-guard
|
||||
> Enter provider for API agents: inline::meta-reference
|
||||
> Enter provider for API memory: inline::faiss
|
||||
> Enter provider for API datasetio: inline::meta-reference
|
||||
> Enter provider for API scoring: inline::meta-reference
|
||||
> Enter provider for API eval: inline::meta-reference
|
||||
> Enter provider for API telemetry: inline::meta-reference
|
||||
|
||||
> (Optional) Enter a short description for your Llama Stack:
|
||||
|
||||
You can now edit ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml and run `llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml`
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="config" label="Building from a pre-existing build config file">
|
||||
- In addition to templates, you may customize the build by editing a config file and building from it with the following command.
|
||||
|
||||
- The config file has the same format as the ones in `llama_stack/distributions/*build.yaml`.
|
||||
|
||||
```
|
||||
llama stack build --config llama_stack/distributions/starter/build.yaml
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="external" label="Building with External Providers">
|
||||
|
||||
Llama Stack supports external providers that live outside of the main codebase. This allows you to create and maintain your own providers independently or use community-provided providers.
|
||||
|
||||
To build a distribution with external providers, you need to:
|
||||
|
||||
1. Configure the `external_providers_dir` in your build configuration file:
|
||||
Example `build.yaml` excerpt for a custom Ollama provider:
|
||||
|
||||
```yaml
|
||||
# Example my-external-stack.yaml with external providers
|
||||
version: '2'
|
||||
distribution_spec:
|
||||
description: Custom distro for CI tests
|
||||
providers:
|
||||
inference:
|
||||
- remote::custom_ollama
|
||||
# Add more providers as needed
|
||||
image_type: container
|
||||
image_name: ci-test
|
||||
# Path to external provider implementations
|
||||
external_providers_dir: ~/.llama/providers.d
|
||||
- remote::custom_ollama
|
||||
external_providers_dir: /workspace/providers.d
|
||||
```
|
||||
|
||||
Inside `providers.d/custom_ollama/provider.py`, define `get_provider_spec()` so the CLI can discover dependencies:
|
||||
|
||||
```python
|
||||
from llama_stack_api.providers.datatypes import ProviderSpec
|
||||
|
||||
|
||||
def get_provider_spec() -> ProviderSpec:
|
||||
return ProviderSpec(
|
||||
provider_type="remote::custom_ollama",
|
||||
module="llama_stack_ollama_provider",
|
||||
config_class="llama_stack_ollama_provider.config.OllamaImplConfig",
|
||||
pip_packages=[
|
||||
"ollama",
|
||||
"aiohttp",
|
||||
"llama-stack-provider-ollama",
|
||||
],
|
||||
)
|
||||
```
|
||||
|
||||
Here's an example for a custom Ollama provider:
|
||||
|
|
@ -232,9 +87,9 @@ Here's an example for a custom Ollama provider:
|
|||
adapter:
|
||||
adapter_type: custom_ollama
|
||||
pip_packages:
|
||||
- ollama
|
||||
- aiohttp
|
||||
- llama-stack-provider-ollama # This is the provider package
|
||||
- ollama
|
||||
- aiohttp
|
||||
- llama-stack-provider-ollama # This is the provider package
|
||||
config_class: llama_stack_ollama_provider.config.OllamaImplConfig
|
||||
module: llama_stack_ollama_provider
|
||||
api_dependencies: []
|
||||
|
|
@ -245,53 +100,22 @@ The `pip_packages` section lists the Python packages required by the provider, a
|
|||
provider package itself. The package must be available on PyPI or can be provided from a local
|
||||
directory or a git repository (git must be installed on the build environment).
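For instance, since these entries are ordinary pip requirement specifiers, a provider package hosted in a git repository could be referenced as follows (the URL is illustrative and this assumes the entries are passed to pip as-is):

```yaml
pip_packages:
- ollama
- aiohttp
- llama-stack-provider-ollama @ git+https://github.com/your-org/llama-stack-provider-ollama.git
```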
|
||||
|
||||
2. Build your distribution using the config file:
|
||||
For deeper guidance, see the [External Providers documentation](../providers/external/).
|
||||
|
||||
```
|
||||
llama stack build --config my-external-stack.yaml
|
||||
```
|
||||
|
||||
For more information on external providers, including directory structure, provider types, and implementation requirements, see the [External Providers documentation](../providers/external/).
|
||||
</TabItem>
|
||||
<TabItem value="container" label="Building Container">
|
||||
</Tabs>
|
||||
|
||||
:::tip Podman Alternative
|
||||
Podman is supported as an alternative to Docker. Set `CONTAINER_BINARY` to `podman` in your environment to use Podman.
|
||||
:::
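For example:

```bash
export CONTAINER_BINARY=podman
```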
|
||||
### Run your stack server
|
||||
|
||||
To build a container image, you may start off from a template and use the `--image-type container` flag to specify `container` as the build image type.
|
||||
|
||||
```
|
||||
llama stack build --distro starter --image-type container
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack build --distro starter --image-type container
|
||||
...
|
||||
Containerfile created successfully in /tmp/tmp.viA3a3Rdsg/Containerfile
FROM python:3.10-slim
|
||||
...
|
||||
```
|
||||
|
||||
You can now edit ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml and run `llama stack run ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml`
|
||||
```
|
||||
|
||||
Now set some environment variables for the inference model ID and the Llama Stack port, and create a local directory to mount into the container's file system.
|
||||
After building the image, launch it directly with Docker or Podman—the entrypoint calls `llama stack run` using the baked distribution or the bundled run config:
|
||||
|
||||
```bash
|
||||
export INFERENCE_MODEL="llama3.2:3b"
|
||||
export LLAMA_STACK_PORT=8321
|
||||
mkdir -p ~/.llama
|
||||
```
|
||||
|
||||
After this step is successful, you should be able to find the built container image and test it with the below Docker command:
|
||||
|
||||
```
|
||||
docker run -d \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v ~/.llama:/root/.llama \
|
||||
-e INFERENCE_MODEL=$INFERENCE_MODEL \
|
||||
-e OLLAMA_URL=http://host.docker.internal:11434 \
|
||||
localhost/distribution-ollama:dev \
|
||||
llama-stack:starter \
|
||||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
|
|
@ -311,131 +135,14 @@ Here are the docker flags and their uses:
|
|||
|
||||
* `--port $LLAMA_STACK_PORT`: Port number for the server to listen on
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
|
||||
### Running your Stack server
|
||||
Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end by the `llama stack build` step.
|
||||
If you prepared a custom run config, mount it into the container and reference it explicitly:
|
||||
|
||||
```bash
|
||||
docker run \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v $(pwd)/run.yaml:/app/run.yaml \
|
||||
llama-stack:starter \
|
||||
/app/run.yaml
|
||||
```
|
||||
llama stack run -h
|
||||
usage: llama stack run [-h] [--port PORT] [--image-name IMAGE_NAME]
|
||||
[--image-type {venv}] [--enable-ui]
|
||||
[config | distro]
|
||||
|
||||
Start the server for a Llama Stack Distribution. You should have already built (or downloaded) and configured the distribution.
|
||||
|
||||
positional arguments:
|
||||
config | distro Path to config file to use for the run or name of known distro (`llama stack list` for a list). (default: None)
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
--port PORT Port to run the server on. It can also be passed via the env var LLAMA_STACK_PORT. (default: 8321)
|
||||
--image-name IMAGE_NAME
|
||||
[DEPRECATED] This flag is no longer supported. Please activate your virtual environment before running. (default: None)
|
||||
--image-type {venv}
|
||||
[DEPRECATED] This flag is no longer supported. Please activate your virtual environment before running. (default: None)
|
||||
--enable-ui Start the UI server (default: False)
|
||||
```
|
||||
|
||||
**Note:** Container images built with `llama stack build --image-type container` cannot be run using `llama stack run`. Instead, they must be run directly using Docker or Podman commands as shown in the container building section above.
|
||||
|
||||
```
|
||||
# Start using template name
|
||||
llama stack run tgi
|
||||
|
||||
# Start using config file
|
||||
llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
|
||||
|
||||
Serving API inspect
|
||||
GET /health
|
||||
GET /providers/list
|
||||
GET /routes/list
|
||||
Serving API inference
|
||||
POST /inference/chat_completion
|
||||
POST /inference/completion
|
||||
POST /inference/embeddings
|
||||
...
|
||||
Serving API agents
|
||||
POST /agents/create
|
||||
POST /agents/session/create
|
||||
POST /agents/turn/create
|
||||
POST /agents/delete
|
||||
POST /agents/session/delete
|
||||
POST /agents/session/get
|
||||
POST /agents/step/get
|
||||
POST /agents/turn/get
|
||||
|
||||
Listening on ['::', '0.0.0.0']:8321
|
||||
INFO: Started server process [2935911]
|
||||
INFO: Waiting for application startup.
|
||||
INFO: Application startup complete.
|
||||
INFO: Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
|
||||
INFO: 2401:db00:35c:2d2b:face:0:c9:0:54678 - "GET /models/list HTTP/1.1" 200 OK
|
||||
```
|
||||
|
||||
### Listing Distributions
|
||||
Using the list command, you can view all existing Llama Stack distributions, including stacks built from templates, from scratch, or using custom configuration files.
|
||||
|
||||
```
|
||||
llama stack list -h
|
||||
usage: llama stack list [-h]
|
||||
|
||||
list the build stacks
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
```
|
||||
|
||||
Example Usage
|
||||
|
||||
```
|
||||
llama stack list
|
||||
```
|
||||
|
||||
```
|
||||
+------------------------------+-----------------------------------------------------------------+--------------+------------+
|
||||
| Stack Name | Path | Build Config | Run Config |
|
||||
+------------------------------+-----------------------------------------------------------------+--------------+------------+
|
||||
| together | ~/.llama/distributions/together | Yes | No |
|
||||
+------------------------------+-----------------------------------------------------------------+--------------+------------+
|
||||
| bedrock | ~/.llama/distributions/bedrock | Yes | No |
|
||||
+------------------------------+-----------------------------------------------------------------+--------------+------------+
|
||||
| starter | ~/.llama/distributions/starter | Yes | Yes |
|
||||
+------------------------------+-----------------------------------------------------------------+--------------+------------+
|
||||
| remote-vllm | ~/.llama/distributions/remote-vllm | Yes | Yes |
|
||||
+------------------------------+-----------------------------------------------------------------+--------------+------------+
|
||||
```
|
||||
|
||||
### Removing a Distribution
|
||||
Use the remove command to delete a distribution you've previously built.
|
||||
|
||||
```
|
||||
llama stack rm -h
|
||||
usage: llama stack rm [-h] [--all] [name]
|
||||
|
||||
Remove the build stack
|
||||
|
||||
positional arguments:
|
||||
name Name of the stack to delete (default: None)
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
--all, -a Delete all stacks (use with caution) (default: False)
|
||||
```
|
||||
|
||||
Example
|
||||
```
|
||||
llama stack rm llamastack-test
|
||||
```
|
||||
|
||||
To keep your environment organized and avoid clutter, consider using `llama stack list` to review old or unused distributions and `llama stack rm <name>` to delete them when they're no longer needed.
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
If you encounter any issues, ask questions on our Discord, search through our [GitHub Issues](https://github.com/meta-llama/llama-stack/issues), or file a new issue.
|
||||
|
|
|
|||
|
|
@ -21,7 +21,6 @@ apis:
|
|||
- inference
|
||||
- vector_io
|
||||
- safety
|
||||
- telemetry
|
||||
providers:
|
||||
inference:
|
||||
- provider_id: ollama
|
||||
|
|
@ -44,18 +43,36 @@ providers:
|
|||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
persistence_store:
|
||||
type: sqlite
|
||||
namespace: null
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/agents_store.db
|
||||
telemetry:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config: {}
|
||||
metadata_store:
|
||||
namespace: null
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/registry.db
|
||||
persistence:
|
||||
agent_state:
|
||||
backend: kv_default
|
||||
namespace: agents
|
||||
responses:
|
||||
backend: sql_default
|
||||
table_name: responses
|
||||
storage:
|
||||
backends:
|
||||
kv_default:
|
||||
type: kv_sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/kvstore.db
|
||||
sql_default:
|
||||
type: sql_sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/sqlstore.db
|
||||
stores:
|
||||
metadata:
|
||||
backend: kv_default
|
||||
namespace: registry
|
||||
inference:
|
||||
backend: sql_default
|
||||
table_name: inference_store
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
conversations:
|
||||
backend: sql_default
|
||||
table_name: openai_conversations
|
||||
prompts:
|
||||
backend: kv_default
|
||||
namespace: prompts
|
||||
models:
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
|
|
@ -78,7 +95,6 @@ apis:
|
|||
- inference
|
||||
- vector_io
|
||||
- safety
|
||||
- telemetry
|
||||
```
|
||||
|
||||
## Providers
|
||||
|
|
@ -205,7 +221,15 @@ models:
|
|||
```
|
||||
A Model is an instance of a "Resource" (see [Concepts](../concepts/)) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we always encourage clients to register models before using them, some Stack servers may come up with a list of "already known and available" models.
|
||||
|
||||
What's with the `provider_model_id` field? This is an identifier for the model inside the provider's model catalog. Contrast it with `model_id` which is the identifier for the same model for Llama Stack's purposes. For example, you may want to name "llama3.2:vision-11b" as "image_captioning_model" when you use it in your Stack interactions. When omitted, the server will set `provider_model_id` to be the same as `model_id`.
|
||||
What's with the `provider_model_id` field? This is an identifier for the model inside the provider's model catalog. The `model_id` field is provided for configuration purposes but is not used as part of the model identifier.
|
||||
|
||||
**Important:** Models are identified as `provider_id/provider_model_id` in the system and when making API calls. When `provider_model_id` is omitted, the server will set it to be the same as `model_id`.
|
||||
|
||||
Examples:
|
||||
- Config: `model_id: llama3.2`, `provider_id: ollama`, `provider_model_id: null`
|
||||
→ Access as: `ollama/llama3.2`
|
||||
- Config: `model_id: my-llama`, `provider_id: vllm-inference`, `provider_model_id: llama-3-2-3b`
|
||||
→ Access as: `vllm-inference/llama-3-2-3b` (the `model_id` is not used in the identifier)
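A client-side sketch of how such an identifier is used, assuming the OpenAI-compatible chat completions surface of `llama-stack-client` and the first example above:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Models are addressed as provider_id/provider_model_id
response = client.chat.completions.create(
    model="ollama/llama3.2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```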
|
||||
|
||||
If you need to conditionally register a model in the configuration, such as only when specific environment variable(s) are set, this can be accomplished by utilizing a special `__disabled__` string as the default value of an environment variable substitution, as shown below:
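A minimal sketch, assuming a `SAFETY_MODEL` environment variable controls whether the model gets registered (the provider names are illustrative):

```yaml
models:
- metadata: {}
  model_id: ${env.SAFETY_MODEL:=__disabled__}
  provider_id: vllm-safety
  provider_model_id: ${env.SAFETY_MODEL:=__disabled__}
  model_type: llm
```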
|
||||
|
||||
|
|
@ -575,24 +599,13 @@ created by users sharing a team with them:
|
|||
|
||||
In addition to resource-based access control, Llama Stack supports endpoint-level authorization using OAuth 2.0 style scopes. When authentication is enabled, specific API endpoints require users to have particular scopes in their authentication token.
|
||||
|
||||
**Scope-Gated APIs:**
|
||||
The following APIs are currently gated by scopes:
|
||||
|
||||
- **Telemetry API** (scope: `telemetry.read`):
|
||||
- `POST /telemetry/traces` - Query traces
|
||||
- `GET /telemetry/traces/{trace_id}` - Get trace by ID
|
||||
- `GET /telemetry/traces/{trace_id}/spans/{span_id}` - Get span by ID
|
||||
- `POST /telemetry/spans/{span_id}/tree` - Get span tree
|
||||
- `POST /telemetry/spans` - Query spans
|
||||
- `POST /telemetry/metrics/{metric_name}` - Query metrics
|
||||
|
||||
**Authentication Configuration:**
|
||||
|
||||
For **JWT/OAuth2 providers**, scopes should be included in the JWT's claims:
|
||||
```json
|
||||
{
|
||||
"sub": "user123",
|
||||
"scope": "telemetry.read",
|
||||
"scope": "<scope>",
|
||||
"aud": "llama-stack"
|
||||
}
|
||||
```
|
||||
|
|
@ -602,7 +615,7 @@ For **custom authentication providers**, the endpoint must return user attribute
|
|||
{
|
||||
"principal": "user123",
|
||||
"attributes": {
|
||||
"scopes": ["telemetry.read"]
|
||||
"scopes": ["<scope>"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -11,8 +11,8 @@ If you are planning to use an external service for Inference (even Ollama or TGI
|
|||
This avoids the overhead of setting up a server.
|
||||
```bash
|
||||
# setup
|
||||
uv pip install llama-stack
|
||||
llama stack build --distro starter --image-type venv
|
||||
uv pip install llama-stack llama-stack-client
|
||||
llama stack list-deps starter | xargs -L1 uv pip install
|
||||
```
|
||||
|
||||
```python
|
||||
|
|
|
|||
|
|
@ -19,3 +19,4 @@ This section provides an overview of the distributions available in Llama Stack.
|
|||
- **[Starting Llama Stack Server](./starting_llama_stack_server.mdx)** - How to run distributions
|
||||
- **[Importing as Library](./importing_as_library.mdx)** - Use distributions in your code
|
||||
- **[Configuration Reference](./configuration.mdx)** - Configuration file format details
|
||||
- **[Llama Stack UI](./llama_stack_ui.mdx)** - Web-based user interface for interacting with Llama Stack servers
|
||||
|
|
|
|||
|
|
@ -1,56 +1,163 @@
|
|||
apiVersion: v1
|
||||
data:
|
||||
stack_run_config.yaml: "version: '2'\nimage_name: kubernetes-demo\napis:\n- agents\n-
|
||||
inference\n- files\n- safety\n- telemetry\n- tool_runtime\n- vector_io\nproviders:\n
|
||||
\ inference:\n - provider_id: vllm-inference\n provider_type: remote::vllm\n
|
||||
\ config:\n url: ${env.VLLM_URL:=http://localhost:8000/v1}\n max_tokens:
|
||||
${env.VLLM_MAX_TOKENS:=4096}\n api_token: ${env.VLLM_API_TOKEN:=fake}\n tls_verify:
|
||||
${env.VLLM_TLS_VERIFY:=true}\n - provider_id: vllm-safety\n provider_type:
|
||||
remote::vllm\n config:\n url: ${env.VLLM_SAFETY_URL:=http://localhost:8000/v1}\n
|
||||
\ max_tokens: ${env.VLLM_MAX_TOKENS:=4096}\n api_token: ${env.VLLM_API_TOKEN:=fake}\n
|
||||
\ tls_verify: ${env.VLLM_TLS_VERIFY:=true}\n - provider_id: sentence-transformers\n
|
||||
\ provider_type: inline::sentence-transformers\n config: {}\n vector_io:\n
|
||||
\ - provider_id: ${env.ENABLE_CHROMADB:+chromadb}\n provider_type: remote::chromadb\n
|
||||
\ config:\n url: ${env.CHROMADB_URL:=}\n kvstore:\n type: postgres\n
|
||||
\ host: ${env.POSTGRES_HOST:=localhost}\n port: ${env.POSTGRES_PORT:=5432}\n
|
||||
\ db: ${env.POSTGRES_DB:=llamastack}\n user: ${env.POSTGRES_USER:=llamastack}\n
|
||||
\ password: ${env.POSTGRES_PASSWORD:=llamastack}\n files:\n - provider_id:
|
||||
meta-reference-files\n provider_type: inline::localfs\n config:\n storage_dir:
|
||||
${env.FILES_STORAGE_DIR:=~/.llama/distributions/starter/files}\n metadata_store:\n
|
||||
\ type: sqlite\n db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/starter}/files_metadata.db
|
||||
\ \n safety:\n - provider_id: llama-guard\n provider_type: inline::llama-guard\n
|
||||
\ config:\n excluded_categories: []\n agents:\n - provider_id: meta-reference\n
|
||||
\ provider_type: inline::meta-reference\n config:\n persistence_store:\n
|
||||
\ type: postgres\n host: ${env.POSTGRES_HOST:=localhost}\n port:
|
||||
${env.POSTGRES_PORT:=5432}\n db: ${env.POSTGRES_DB:=llamastack}\n user:
|
||||
${env.POSTGRES_USER:=llamastack}\n password: ${env.POSTGRES_PASSWORD:=llamastack}\n
|
||||
\ responses_store:\n type: postgres\n host: ${env.POSTGRES_HOST:=localhost}\n
|
||||
\ port: ${env.POSTGRES_PORT:=5432}\n db: ${env.POSTGRES_DB:=llamastack}\n
|
||||
\ user: ${env.POSTGRES_USER:=llamastack}\n password: ${env.POSTGRES_PASSWORD:=llamastack}\n
|
||||
\ telemetry:\n - provider_id: meta-reference\n provider_type: inline::meta-reference\n
|
||||
\ config:\n service_name: \"${env.OTEL_SERVICE_NAME:=\\u200B}\"\n sinks:
|
||||
${env.TELEMETRY_SINKS:=console}\n tool_runtime:\n - provider_id: brave-search\n
|
||||
\ provider_type: remote::brave-search\n config:\n api_key: ${env.BRAVE_SEARCH_API_KEY:+}\n
|
||||
\ max_results: 3\n - provider_id: tavily-search\n provider_type: remote::tavily-search\n
|
||||
\ config:\n api_key: ${env.TAVILY_SEARCH_API_KEY:+}\n max_results:
|
||||
3\n - provider_id: rag-runtime\n provider_type: inline::rag-runtime\n config:
|
||||
{}\n - provider_id: model-context-protocol\n provider_type: remote::model-context-protocol\n
|
||||
\ config: {}\nmetadata_store:\n type: postgres\n host: ${env.POSTGRES_HOST:=localhost}\n
|
||||
\ port: ${env.POSTGRES_PORT:=5432}\n db: ${env.POSTGRES_DB:=llamastack}\n user:
|
||||
${env.POSTGRES_USER:=llamastack}\n password: ${env.POSTGRES_PASSWORD:=llamastack}\n
|
||||
\ table_name: llamastack_kvstore\ninference_store:\n type: postgres\n host:
|
||||
${env.POSTGRES_HOST:=localhost}\n port: ${env.POSTGRES_PORT:=5432}\n db: ${env.POSTGRES_DB:=llamastack}\n
|
||||
\ user: ${env.POSTGRES_USER:=llamastack}\n password: ${env.POSTGRES_PASSWORD:=llamastack}\nmodels:\n-
|
||||
metadata:\n embedding_dimension: 384\n model_id: all-MiniLM-L6-v2\n provider_id:
|
||||
sentence-transformers\n model_type: embedding\n- metadata: {}\n model_id: ${env.INFERENCE_MODEL}\n
|
||||
\ provider_id: vllm-inference\n model_type: llm\n- metadata: {}\n model_id:
|
||||
${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}\n provider_id: vllm-safety\n
|
||||
\ model_type: llm\nshields:\n- shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}\nvector_dbs:
|
||||
[]\ndatasets: []\nscoring_fns: []\nbenchmarks: []\ntool_groups:\n- toolgroup_id:
|
||||
builtin::websearch\n provider_id: tavily-search\n- toolgroup_id: builtin::rag\n
|
||||
\ provider_id: rag-runtime\nserver:\n port: 8321\n auth:\n provider_config:\n
|
||||
\ type: github_token\n"
|
||||
stack_run_config.yaml: |
|
||||
version: '2'
|
||||
image_name: kubernetes-demo
|
||||
apis:
|
||||
- agents
|
||||
- inference
|
||||
- files
|
||||
- safety
|
||||
- telemetry
|
||||
- tool_runtime
|
||||
- vector_io
|
||||
providers:
|
||||
inference:
|
||||
- provider_id: vllm-inference
|
||||
provider_type: remote::vllm
|
||||
config:
|
||||
url: ${env.VLLM_URL:=http://localhost:8000/v1}
|
||||
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
|
||||
api_token: ${env.VLLM_API_TOKEN:=fake}
|
||||
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
|
||||
- provider_id: vllm-safety
|
||||
provider_type: remote::vllm
|
||||
config:
|
||||
url: ${env.VLLM_SAFETY_URL:=http://localhost:8000/v1}
|
||||
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
|
||||
api_token: ${env.VLLM_API_TOKEN:=fake}
|
||||
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
|
||||
- provider_id: sentence-transformers
|
||||
provider_type: inline::sentence-transformers
|
||||
config: {}
|
||||
vector_io:
|
||||
- provider_id: ${env.ENABLE_CHROMADB:+chromadb}
|
||||
provider_type: remote::chromadb
|
||||
config:
|
||||
url: ${env.CHROMADB_URL:=}
|
||||
kvstore:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
files:
|
||||
- provider_id: meta-reference-files
|
||||
provider_type: inline::localfs
|
||||
config:
|
||||
storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/distributions/starter/files}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/starter}/files_metadata.db
|
||||
safety:
|
||||
- provider_id: llama-guard
|
||||
provider_type: inline::llama-guard
|
||||
config:
|
||||
excluded_categories: []
|
||||
agents:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
persistence_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
responses_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
telemetry:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
service_name: "${env.OTEL_SERVICE_NAME:=\u200B}"
|
||||
sinks: ${env.TELEMETRY_SINKS:=console}
|
||||
tool_runtime:
|
||||
- provider_id: brave-search
|
||||
provider_type: remote::brave-search
|
||||
config:
|
||||
api_key: ${env.BRAVE_SEARCH_API_KEY:+}
|
||||
max_results: 3
|
||||
- provider_id: tavily-search
|
||||
provider_type: remote::tavily-search
|
||||
config:
|
||||
api_key: ${env.TAVILY_SEARCH_API_KEY:+}
|
||||
max_results: 3
|
||||
- provider_id: rag-runtime
|
||||
provider_type: inline::rag-runtime
|
||||
config: {}
|
||||
- provider_id: model-context-protocol
|
||||
provider_type: remote::model-context-protocol
|
||||
config: {}
|
||||
storage:
|
||||
backends:
|
||||
kv_default:
|
||||
type: kv_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
table_name: ${env.POSTGRES_TABLE_NAME:=llamastack_kvstore}
|
||||
sql_default:
|
||||
type: sql_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
stores:
|
||||
metadata:
|
||||
backend: kv_default
|
||||
namespace: registry
|
||||
inference:
|
||||
backend: sql_default
|
||||
table_name: inference_store
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
conversations:
|
||||
backend: sql_default
|
||||
table_name: openai_conversations
|
||||
prompts:
|
||||
backend: kv_default
|
||||
namespace: prompts
|
||||
models:
|
||||
- metadata:
|
||||
embedding_dimension: 768
|
||||
model_id: nomic-embed-text-v1.5
|
||||
provider_id: sentence-transformers
|
||||
model_type: embedding
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
provider_id: vllm-inference
|
||||
model_type: llm
|
||||
- metadata: {}
|
||||
model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
provider_id: vllm-safety
|
||||
model_type: llm
|
||||
shields:
|
||||
- shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
vector_dbs: []
|
||||
datasets: []
|
||||
scoring_fns: []
|
||||
benchmarks: []
|
||||
tool_groups:
|
||||
- toolgroup_id: builtin::websearch
|
||||
provider_id: tavily-search
|
||||
- toolgroup_id: builtin::rag
|
||||
provider_id: rag-runtime
|
||||
server:
|
||||
port: 8321
|
||||
auth:
|
||||
provider_config:
|
||||
type: github_token
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: llama-stack-config
|
||||
|
|
|
|||
|
|
@ -5,7 +5,6 @@ apis:
|
|||
- inference
|
||||
- files
|
||||
- safety
|
||||
- telemetry
|
||||
- tool_runtime
|
||||
- vector_io
|
||||
providers:
|
||||
|
|
@ -32,21 +31,17 @@ providers:
|
|||
provider_type: remote::chromadb
|
||||
config:
|
||||
url: ${env.CHROMADB_URL:=}
|
||||
kvstore:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
persistence:
|
||||
namespace: vector_io::chroma_remote
|
||||
backend: kv_default
|
||||
files:
|
||||
- provider_id: meta-reference-files
|
||||
provider_type: inline::localfs
|
||||
config:
|
||||
storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/distributions/starter/files}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/starter}/files_metadata.db
|
||||
table_name: files_metadata
|
||||
backend: sql_default
|
||||
safety:
|
||||
- provider_id: llama-guard
|
||||
provider_type: inline::llama-guard
|
||||
|
|
@ -56,26 +51,15 @@ providers:
|
|||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
persistence_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
responses_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
telemetry:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
service_name: "${env.OTEL_SERVICE_NAME:=\u200B}"
|
||||
sinks: ${env.TELEMETRY_SINKS:=console}
|
||||
persistence:
|
||||
agent_state:
|
||||
namespace: agents
|
||||
backend: kv_default
|
||||
responses:
|
||||
table_name: responses
|
||||
backend: sql_default
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
tool_runtime:
|
||||
- provider_id: brave-search
|
||||
provider_type: remote::brave-search
|
||||
|
|
@ -93,48 +77,73 @@ providers:
|
|||
- provider_id: model-context-protocol
|
||||
provider_type: remote::model-context-protocol
|
||||
config: {}
|
||||
metadata_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
table_name: llamastack_kvstore
|
||||
inference_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
models:
|
||||
- metadata:
|
||||
embedding_dimension: 384
|
||||
model_id: all-MiniLM-L6-v2
|
||||
provider_id: sentence-transformers
|
||||
model_type: embedding
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
provider_id: vllm-inference
|
||||
model_type: llm
|
||||
- metadata: {}
|
||||
model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
provider_id: vllm-safety
|
||||
model_type: llm
|
||||
shields:
|
||||
- shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
vector_dbs: []
|
||||
datasets: []
|
||||
scoring_fns: []
|
||||
benchmarks: []
|
||||
tool_groups:
|
||||
- toolgroup_id: builtin::websearch
|
||||
provider_id: tavily-search
|
||||
- toolgroup_id: builtin::rag
|
||||
provider_id: rag-runtime
|
||||
storage:
|
||||
backends:
|
||||
kv_default:
|
||||
type: kv_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
table_name: ${env.POSTGRES_TABLE_NAME:=llamastack_kvstore}
|
||||
sql_default:
|
||||
type: sql_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
stores:
|
||||
metadata:
|
||||
namespace: registry
|
||||
backend: kv_default
|
||||
inference:
|
||||
table_name: inference_store
|
||||
backend: sql_default
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
conversations:
|
||||
table_name: openai_conversations
|
||||
backend: sql_default
|
||||
prompts:
|
||||
namespace: prompts
|
||||
backend: kv_default
|
||||
registered_resources:
|
||||
models:
|
||||
- metadata:
|
||||
embedding_dimension: 768
|
||||
model_id: nomic-embed-text-v1.5
|
||||
provider_id: sentence-transformers
|
||||
model_type: embedding
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
provider_id: vllm-inference
|
||||
model_type: llm
|
||||
- metadata: {}
|
||||
model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
provider_id: vllm-safety
|
||||
model_type: llm
|
||||
shields:
|
||||
- shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
vector_dbs: []
|
||||
datasets: []
|
||||
scoring_fns: []
|
||||
benchmarks: []
|
||||
tool_groups:
|
||||
- toolgroup_id: builtin::websearch
|
||||
provider_id: tavily-search
|
||||
- toolgroup_id: builtin::rag
|
||||
provider_id: rag-runtime
|
||||
server:
|
||||
port: 8321
|
||||
auth:
|
||||
provider_config:
|
||||
type: github_token
|
||||
telemetry:
|
||||
enabled: true
|
||||
vector_stores:
|
||||
default_provider_id: chromadb
|
||||
default_embedding_model:
|
||||
provider_id: sentence-transformers
|
||||
model_id: nomic-ai/nomic-embed-text-v1.5
|
||||
|
|
|
|||
|
|
@ -44,7 +44,7 @@ spec:
|
|||
|
||||
# Navigate to the UI directory
|
||||
echo "Navigating to UI directory..."
|
||||
cd /app/llama_stack/ui
|
||||
cd /app/llama_stack_ui
|
||||
|
||||
# Check if package.json exists
|
||||
if [ ! -f "package.json" ]; then
|
||||
|
|
|
|||
109
docs/docs/distributions/llama_stack_ui.mdx
Normal file
109
docs/docs/distributions/llama_stack_ui.mdx
Normal file
|
|
@ -0,0 +1,109 @@
|
|||
---
|
||||
title: Llama Stack UI
|
||||
description: Web-based user interface for interacting with Llama Stack servers
|
||||
sidebar_label: Llama Stack UI
|
||||
sidebar_position: 8
|
||||
---
|
||||
|
||||
# Llama Stack UI
|
||||
|
||||
The Llama Stack UI is a web-based interface for interacting with Llama Stack servers. Built with Next.js and React, it provides a visual way to work with agents, manage resources, and view logs.
|
||||
|
||||
## Features
|
||||
|
||||
- **Logs & Monitoring**: View chat completions, agent responses, and vector store activity
|
||||
- **Vector Stores**: Create and manage vector databases for RAG (Retrieval-Augmented Generation) workflows
|
||||
- **Prompt Management**: Create and manage reusable prompts
|
||||
|
||||
## Prerequisites
|
||||
|
||||
You need a running Llama Stack server. The UI is a client that connects to the Llama Stack backend.
|
||||
|
||||
If you don't have a Llama Stack server running yet, see the [Starting Llama Stack Server](../getting_started/starting_llama_stack_server.mdx) guide.
|
||||
|
||||
## Running the UI
|
||||
|
||||
### Option 1: Using npx (Recommended for Quick Start)
|
||||
|
||||
The fastest way to get started is using `npx`:
|
||||
|
||||
```bash
|
||||
npx llama-stack-ui
|
||||
```
|
||||
|
||||
This will start the UI server on `http://localhost:8322` (default port).
|
||||
|
||||
### Option 2: Using Docker
|
||||
|
||||
Run the UI in a container:
|
||||
|
||||
```bash
|
||||
docker run -p 8322:8322 llamastack/ui
|
||||
```
|
||||
|
||||
Access the UI at `http://localhost:8322`.
|
||||
|
||||
## Environment Variables
|
||||
|
||||
The UI can be configured using the following environment variables:
|
||||
|
||||
| Variable | Description | Default |
|
||||
|----------|-------------|---------|
|
||||
| `LLAMA_STACK_BACKEND_URL` | URL of your Llama Stack server | `http://localhost:8321` |
|
||||
| `LLAMA_STACK_UI_PORT` | Port for the UI server | `8322` |
|
||||
|
||||
If the Llama Stack server is running with authentication enabled, you can configure the UI to use it by setting the following environment variables:
|
||||
|
||||
| Variable | Description | Default |
|
||||
|----------|-------------|---------|
|
||||
| `NEXTAUTH_URL` | NextAuth URL for authentication | `http://localhost:8322` |
|
||||
| `GITHUB_CLIENT_ID` | GitHub OAuth client ID (optional, for authentication) | - |
|
||||
| `GITHUB_CLIENT_SECRET` | GitHub OAuth client secret (optional, for authentication) | - |
|
||||
|
||||
### Setting Environment Variables
|
||||
|
||||
#### For npx:
|
||||
|
||||
```bash
|
||||
LLAMA_STACK_BACKEND_URL=http://localhost:8321 \
|
||||
LLAMA_STACK_UI_PORT=8080 \
|
||||
npx llama-stack-ui
|
||||
```
|
||||
|
||||
#### For Docker:
|
||||
|
||||
```bash
|
||||
docker run -p 8080:8080 \
|
||||
-e LLAMA_STACK_BACKEND_URL=http://localhost:8321 \
|
||||
-e LLAMA_STACK_UI_PORT=8080 \
|
||||
llamastack/ui
|
||||
```
|
||||
|
||||
## Using the UI
|
||||
|
||||
### Managing Resources
|
||||
|
||||
- **Vector Stores**: Create vector databases for RAG workflows, view stored documents and embeddings
|
||||
- **Prompts**: Create and manage reusable prompt templates
|
||||
- **Chat Completions**: View history of chat interactions
|
||||
- **Responses**: Browse detailed agent responses and tool calls
|
||||
|
||||
## Development
|
||||
|
||||
If you want to run the UI from source for development:
|
||||
|
||||
```bash
|
||||
# From the project root
|
||||
cd src/llama_stack_ui
|
||||
|
||||
# Install dependencies
|
||||
npm install
|
||||
|
||||
# Set environment variables
|
||||
export LLAMA_STACK_BACKEND_URL=http://localhost:8321
|
||||
|
||||
# Start the development server
|
||||
npm run dev
|
||||
```
|
||||
|
||||
The development server will start on `http://localhost:8322` with hot reloading enabled.
|
||||
|
|
@ -59,7 +59,7 @@ Start a Llama Stack server on localhost. Here is an example of how you can do th
|
|||
uv venv starter --python 3.12
|
||||
source starter/bin/activate # On Windows: starter\Scripts\activate
|
||||
pip install --no-cache llama-stack==0.2.2
|
||||
llama stack build --distro starter --image-type venv
|
||||
llama stack list-deps starter | xargs -L1 uv pip install
|
||||
export FIREWORKS_API_KEY=<SOME_KEY>
|
||||
llama stack run starter --port 5050
|
||||
```
|
||||
|
|
|
|||
|
|
@ -2,10 +2,10 @@
|
|||
|
||||
Remote-hosted distributions are hosted endpoints serving the Llama Stack API that you can connect to directly.
|
||||
|
||||
| Distribution | Endpoint | Inference | Agents | Memory | Safety | Telemetry |
|
||||
| Distribution | Endpoint | Inference | Agents | Memory | Safety |
|
||||
|-------------|----------|-----------|---------|---------|---------|------------|
|
||||
| Together | [https://llama-stack.together.ai](https://llama-stack.together.ai) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
| Fireworks | [https://llamastack-preview.fireworks.ai](https://llamastack-preview.fireworks.ai) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
|
||||
| Together | [https://llama-stack.together.ai](https://llama-stack.together.ai) | remote::together | meta-reference | remote::weaviate | meta-reference |
|
||||
| Fireworks | [https://llamastack-preview.fireworks.ai](https://llamastack-preview.fireworks.ai) | remote::fireworks | meta-reference | remote::weaviate | meta-reference |
|
||||
|
||||
## Connecting to Remote-Hosted Distributions
|
||||
|
||||
|
|
|
|||
143
docs/docs/distributions/remote_hosted_distro/oci.md
Normal file
143
docs/docs/distributions/remote_hosted_distro/oci.md
Normal file
|
|
@ -0,0 +1,143 @@
|
|||
---
|
||||
orphan: true
|
||||
---
|
||||
<!-- This file was auto-generated by distro_codegen.py, please edit source -->
|
||||
# OCI Distribution
|
||||
|
||||
The `llamastack/distribution-oci` distribution consists of the following provider configurations.
|
||||
|
||||
| API | Provider(s) |
|
||||
|-----|-------------|
|
||||
| agents | `inline::meta-reference` |
|
||||
| datasetio | `remote::huggingface`, `inline::localfs` |
|
||||
| eval | `inline::meta-reference` |
|
||||
| files | `inline::localfs` |
|
||||
| inference | `remote::oci` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
||||
### Environment Variables
|
||||
|
||||
The following environment variables can be configured:
|
||||
|
||||
- `OCI_AUTH_TYPE`: OCI authentication type (instance_principal or config_file) (default: `instance_principal`)
|
||||
- `OCI_REGION`: OCI region (e.g., us-ashburn-1, us-chicago-1, us-phoenix-1, eu-frankfurt-1) (default: ``)
|
||||
- `OCI_COMPARTMENT_OCID`: OCI compartment ID for the Generative AI service (default: ``)
|
||||
- `OCI_CONFIG_FILE_PATH`: OCI config file path (required if OCI_AUTH_TYPE is config_file) (default: `~/.oci/config`)
|
||||
- `OCI_CLI_PROFILE`: OCI CLI profile name to use from config file (default: `DEFAULT`)
|
||||
|
||||
|
||||
## Prerequisites
|
||||
### Oracle Cloud Infrastructure Setup
|
||||
|
||||
Before using the OCI Generative AI distribution, ensure you have:
|
||||
|
||||
1. **Oracle Cloud Infrastructure Account**: Sign up at [Oracle Cloud Infrastructure](https://cloud.oracle.com/)
|
||||
2. **Generative AI Service Access**: Enable the Generative AI service in your OCI tenancy
|
||||
3. **Compartment**: Create or identify a compartment where you'll deploy Generative AI models
|
||||
4. **Authentication**: Configure authentication using either:
|
||||
- **Instance Principal** (recommended for cloud-hosted deployments)
|
||||
- **API Key** (for on-premises or development environments)
|
||||
|
||||
### Authentication Methods
|
||||
|
||||
#### Instance Principal Authentication (Recommended)
|
||||
Instance Principal authentication allows OCI resources to authenticate using the identity of the compute instance they're running on. This is the most secure method for production deployments.
|
||||
|
||||
Requirements:
|
||||
- Instance must be running in an Oracle Cloud Infrastructure compartment
|
||||
- Instance must have appropriate IAM policies to access Generative AI services
|
||||
|
||||
#### API Key Authentication
|
||||
For development or on-premises deployments, follow [this doc](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm) to learn how to create your API signing key for your config file.
|
||||
|
||||
### Required IAM Policies
|
||||
|
||||
Ensure your OCI user or instance has the following policy statements:
|
||||
|
||||
```
|
||||
Allow group <group_name> to use generative-ai-inference-endpoints in compartment <compartment_name>
|
||||
Allow group <group_name> to manage generative-ai-inference-endpoints in compartment <compartment_name>
|
||||
```
|
||||
|
||||
## Supported Services
|
||||
|
||||
### Inference: OCI Generative AI
|
||||
Oracle Cloud Infrastructure Generative AI provides access to high-performance AI models through OCI's Platform-as-a-Service offering. The service supports:
|
||||
|
||||
- **Chat Completions**: Conversational AI with context awareness
|
||||
- **Text Generation**: Complete prompts and generate text content
|
||||
|
||||
#### Available Models
|
||||
Commonly available OCI Generative AI models include offerings from Meta, Cohere, OpenAI, Grok, and more.
|
||||
|
||||
### Safety: Llama Guard
|
||||
For content safety and moderation, this distribution uses Meta's Llama Guard model through the OCI Generative AI service to provide:
|
||||
- Content filtering and moderation
|
||||
- Policy compliance checking
|
||||
- Harmful content detection
|
||||
|
||||
### Vector Storage: Multiple Options
|
||||
The distribution supports several vector storage providers:
|
||||
- **FAISS**: Local in-memory vector search
|
||||
- **ChromaDB**: Distributed vector database
|
||||
- **PGVector**: PostgreSQL with vector extensions
|
||||
|
||||
### Additional Services
|
||||
- **Dataset I/O**: Local filesystem and Hugging Face integration
|
||||
- **Tool Runtime**: Web search (Brave, Tavily) and RAG capabilities
|
||||
- **Evaluation**: Meta reference evaluation framework
|
||||
|
||||
## Running Llama Stack with OCI
|
||||
|
||||
You can run the OCI distribution via Docker or local virtual environment.
|
||||
|
||||
### Via venv
|
||||
|
||||
If you've set up your local development environment, you can install the distribution dependencies and run the server using your local virtual environment.
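
A minimal way to install the dependencies, mirroring the other distributions (assuming the distribution name is `oci`):

```bash
llama stack list-deps oci | xargs -L1 uv pip install
```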
|
||||
|
||||
```bash
|
||||
OCI_AUTH_TYPE=$OCI_AUTH_TYPE OCI_REGION=$OCI_REGION OCI_COMPARTMENT_OCID=$OCI_COMPARTMENT_OCID llama stack run --port 8321 oci
|
||||
```
|
||||
|
||||
### Configuration Examples
|
||||
|
||||
#### Using Instance Principal (Recommended for Production)
|
||||
```bash
|
||||
export OCI_AUTH_TYPE=instance_principal
|
||||
export OCI_REGION=us-chicago-1
|
||||
export OCI_COMPARTMENT_OCID=ocid1.compartment.oc1..<your-compartment-id>
|
||||
```
|
||||
|
||||
#### Using API Key Authentication (Development)
|
||||
```bash
|
||||
export OCI_AUTH_TYPE=config_file
|
||||
export OCI_CONFIG_FILE_PATH=~/.oci/config
|
||||
export OCI_CLI_PROFILE=DEFAULT
|
||||
export OCI_REGION=us-chicago-1
|
||||
export OCI_COMPARTMENT_OCID=ocid1.compartment.oc1..your-compartment-id
|
||||
```
|
||||
|
||||
## Regional Endpoints
|
||||
|
||||
OCI Generative AI is available in multiple regions. The service automatically routes to the appropriate regional endpoint based on your configuration. For a full list of regional model availability, visit:
|
||||
|
||||
https://docs.oracle.com/en-us/iaas/Content/generative-ai/overview.htm#regions
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **Authentication Errors**: Verify your OCI credentials and IAM policies
|
||||
2. **Model Not Found**: Ensure the model OCID is correct and the model is available in your region
|
||||
3. **Permission Denied**: Check compartment permissions and Generative AI service access
|
||||
4. **Region Unavailable**: Verify the specified region supports Generative AI services
|
||||
|
||||
### Getting Help
|
||||
|
||||
For additional support:
|
||||
- [OCI Generative AI Documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)
|
||||
- [Llama Stack Issues](https://github.com/meta-llama/llama-stack/issues)
|
||||
|
|
@ -21,7 +21,6 @@ The `llamastack/distribution-watsonx` distribution consists of the following pro
|
|||
| inference | `remote::watsonx`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss` |
|
||||
|
||||
|
|
|
|||
|
|
@ -13,9 +13,9 @@ self
|
|||
The `llamastack/distribution-tgi` distribution consists of the following provider configurations.
|
||||
|
||||
|
||||
| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|
||||
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
|
||||
| **Provider(s)** | remote::tgi | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
|
||||
| **API** | **Inference** | **Agents** | **Memory** | **Safety** |
|
||||
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |
|
||||
| **Provider(s)** | remote::tgi | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference |
|
||||
|
||||
|
||||
The only difference vs. the `tgi` distribution is that it runs the Dell-TGI server for inference.
|
||||
|
|
|
|||
|
|
@ -22,7 +22,6 @@ The `llamastack/distribution-dell` distribution consists of the following provid
|
|||
| inference | `remote::tgi`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime` |
|
||||
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
@ -166,10 +165,10 @@ docker run \
|
|||
|
||||
### Via venv
|
||||
|
||||
Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
|
||||
Install the distribution dependencies before launching:
|
||||
|
||||
```bash
|
||||
llama stack build --distro dell --image-type venv
|
||||
llama stack list-deps dell | xargs -L1 uv pip install
|
||||
INFERENCE_MODEL=$INFERENCE_MODEL \
|
||||
DEH_URL=$DEH_URL \
|
||||
CHROMA_URL=$CHROMA_URL \
|
||||
|
|
|
|||
|
|
@ -21,7 +21,6 @@ The `llamastack/distribution-meta-reference-gpu` distribution consists of the fo
|
|||
| inference | `inline::meta-reference` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
@ -41,31 +40,7 @@ The following environment variables can be configured:
|
|||
|
||||
## Prerequisite: Downloading Models
|
||||
|
||||
Please use `llama model list --downloaded` to check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](../../references/llama_cli_reference/download_models.md) to download the models. Run `llama model list` to see the available models to download, and `llama model download` to download the checkpoints.
|
||||
|
||||
```
|
||||
$ llama model list --downloaded
|
||||
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
|
||||
┃ Model ┃ Size ┃ Modified Time ┃
|
||||
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
|
||||
│ Llama3.2-1B-Instruct:int4-qlora-eo8 │ 1.53 GB │ 2025-02-26 11:22:28 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Llama3.2-1B │ 2.31 GB │ 2025-02-18 21:48:52 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Prompt-Guard-86M │ 0.02 GB │ 2025-02-26 11:29:28 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Llama3.2-3B-Instruct:int4-spinquant-eo8 │ 3.69 GB │ 2025-02-26 11:37:41 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Llama3.2-3B │ 5.99 GB │ 2025-02-18 21:51:26 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Llama3.1-8B │ 14.97 GB │ 2025-02-16 10:36:37 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Llama3.2-1B-Instruct:int4-spinquant-eo8 │ 1.51 GB │ 2025-02-26 11:35:02 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Llama-Guard-3-1B │ 2.80 GB │ 2025-02-26 11:20:46 │
|
||||
├─────────────────────────────────────────┼──────────┼─────────────────────┤
|
||||
│ Llama-Guard-3-1B:int4 │ 0.43 GB │ 2025-02-26 11:33:33 │
|
||||
└─────────────────────────────────────────┴──────────┴─────────────────────┘
|
||||
Please check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](../../references/llama_cli_reference/download_models.md) to download the models using the Hugging Face CLI.
|
||||
```
|
||||
|
||||
## Running the Distribution
|
||||
|
|
@ -104,12 +79,39 @@ docker run \
|
|||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
### Via venv
|
||||
### Via Docker with Custom Run Configuration
|
||||
|
||||
Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.
|
||||
You can also run the Docker container with a custom run configuration file by mounting it into the container:
|
||||
|
||||
```bash
|
||||
llama stack build --distro meta-reference-gpu --image-type venv
|
||||
# Set the path to your custom run.yaml file
|
||||
CUSTOM_RUN_CONFIG=/path/to/your/custom-run.yaml
|
||||
LLAMA_STACK_PORT=8321
|
||||
|
||||
docker run \
|
||||
-it \
|
||||
--pull always \
|
||||
--gpu all \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v ~/.llama:/root/.llama \
|
||||
-v $CUSTOM_RUN_CONFIG:/app/custom-run.yaml \
|
||||
-e RUN_CONFIG_PATH=/app/custom-run.yaml \
|
||||
llamastack/distribution-meta-reference-gpu \
|
||||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
**Note**: The run configuration must be mounted into the container before it can be used. The `-v` flag mounts your local file into the container, and the `RUN_CONFIG_PATH` environment variable tells the entrypoint script which configuration to use.
|
||||
|
||||
Available run configurations for this distribution:
|
||||
- `run.yaml`
|
||||
- `run-with-safety.yaml`
|
||||
|
||||
### Via venv
|
||||
|
||||
Make sure you have the Llama Stack CLI available.
|
||||
|
||||
```bash
|
||||
llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install
|
||||
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
|
||||
llama stack run distributions/meta-reference-gpu/run.yaml \
|
||||
--port 8321
|
||||
|
|
|
|||
|
|
@ -16,7 +16,6 @@ The `llamastack/distribution-nvidia` distribution consists of the following prov
|
|||
| post_training | `remote::nvidia` |
|
||||
| safety | `remote::nvidia` |
|
||||
| scoring | `inline::basic` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `inline::rag-runtime` |
|
||||
| vector_io | `inline::faiss` |
|
||||
|
||||
|
|
@ -128,20 +127,46 @@ docker run \
|
|||
-it \
|
||||
--pull always \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v ./run.yaml:/root/my-run.yaml \
|
||||
-v ~/.llama:/root/.llama \
|
||||
-e NVIDIA_API_KEY=$NVIDIA_API_KEY \
|
||||
llamastack/distribution-nvidia \
|
||||
--config /root/my-run.yaml \
|
||||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
### Via Docker with Custom Run Configuration
|
||||
|
||||
You can also run the Docker container with a custom run configuration file by mounting it into the container:
|
||||
|
||||
```bash
|
||||
# Set the path to your custom run.yaml file
|
||||
CUSTOM_RUN_CONFIG=/path/to/your/custom-run.yaml
|
||||
LLAMA_STACK_PORT=8321
|
||||
|
||||
docker run \
|
||||
-it \
|
||||
--pull always \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v ~/.llama:/root/.llama \
|
||||
-v $CUSTOM_RUN_CONFIG:/app/custom-run.yaml \
|
||||
-e RUN_CONFIG_PATH=/app/custom-run.yaml \
|
||||
-e NVIDIA_API_KEY=$NVIDIA_API_KEY \
|
||||
llamastack/distribution-nvidia \
|
||||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
**Note**: The run configuration must be mounted into the container before it can be used. The `-v` flag mounts your local file into the container, and the `RUN_CONFIG_PATH` environment variable tells the entrypoint script which configuration to use.
|
||||
|
||||
Available run configurations for this distribution:
|
||||
- `run.yaml`
|
||||
- `run-with-safety.yaml`
|
||||
|
||||
### Via venv
|
||||
|
||||
If you've set up your local development environment, you can also build the image using your local virtual environment.
|
||||
If you've set up your local development environment, you can also install the distribution dependencies using your local virtual environment.
|
||||
|
||||
```bash
|
||||
INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
|
||||
llama stack build --distro nvidia --image-type venv
|
||||
llama stack list-deps nvidia | xargs -L1 uv pip install
|
||||
NVIDIA_API_KEY=$NVIDIA_API_KEY \
|
||||
INFERENCE_MODEL=$INFERENCE_MODEL \
|
||||
llama stack run ./run.yaml \
|
||||
|
|
|
|||
|
|
@ -21,7 +21,6 @@ The `llamastack/distribution-passthrough` distribution consists of the following
|
|||
| inference | `remote::passthrough`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `remote::wolfram-alpha`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
|
|||
|
|
@ -26,7 +26,6 @@ The starter distribution consists of the following provider configurations:
|
|||
| inference | `remote::openai`, `remote::fireworks`, `remote::together`, `remote::ollama`, `remote::anthropic`, `remote::gemini`, `remote::groq`, `remote::sambanova`, `remote::vllm`, `remote::tgi`, `remote::cerebras`, `remote::llama-openai-compat`, `remote::nvidia`, `remote::hf::serverless`, `remote::hf::endpoint`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss`, `inline::sqlite-vec`, `inline::milvus`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
@ -119,7 +118,7 @@ The following environment variables can be configured:
|
|||
|
||||
### Telemetry Configuration
|
||||
- `OTEL_SERVICE_NAME`: OpenTelemetry service name
|
||||
- `TELEMETRY_SINKS`: Telemetry sinks (default: `console,sqlite`)
|
||||
- `OTEL_EXPORTER_OTLP_ENDPOINT`: OpenTelemetry collector endpoint URL
|
||||
|
||||
## Enabling Providers
|
||||
|
||||
|
|
@ -164,12 +163,53 @@ docker run \
|
|||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
### Via venv
|
||||
The container will run the distribution with a SQLite store by default. This store is used for the following components:
|
||||
|
||||
- Metadata store: store metadata about the models, providers, etc.
|
||||
- Inference store: store responses from the inference provider
|
||||
- Agents store: store agent configurations (sessions, turns, etc.)
|
||||
- Agents Responses store: store responses from the agents
|
||||
|
||||
However, you can use PostgreSQL instead by running the `starter::run-with-postgres-store.yaml` configuration:
|
||||
|
||||
```bash
|
||||
docker run \
|
||||
-it \
|
||||
--pull always \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-e OPENAI_API_KEY=your_openai_key \
|
||||
-e FIREWORKS_API_KEY=your_fireworks_key \
|
||||
-e TOGETHER_API_KEY=your_together_key \
|
||||
-e POSTGRES_HOST=your_postgres_host \
|
||||
-e POSTGRES_PORT=your_postgres_port \
|
||||
-e POSTGRES_DB=your_postgres_db \
|
||||
-e POSTGRES_USER=your_postgres_user \
|
||||
-e POSTGRES_PASSWORD=your_postgres_password \
|
||||
llamastack/distribution-starter \
|
||||
starter::run-with-postgres-store.yaml
|
||||
```
|
||||
|
||||
Postgres environment variables:
|
||||
|
||||
- `POSTGRES_HOST`: Postgres host (default: `localhost`)
|
||||
- `POSTGRES_PORT`: Postgres port (default: `5432`)
|
||||
- `POSTGRES_DB`: Postgres database name (default: `llamastack`)
|
||||
- `POSTGRES_USER`: Postgres username (default: `llamastack`)
|
||||
- `POSTGRES_PASSWORD`: Postgres password (default: `llamastack`)
|
||||
|
||||
### Via Conda or venv
|
||||
|
||||
Ensure you have configured the starter distribution using the environment variables explained above.
|
||||
|
||||
```bash
|
||||
uv run --with llama-stack llama stack build --distro starter --image-type venv --run
|
||||
# Install dependencies for the starter distribution
|
||||
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
|
||||
|
||||
# Run the server (with SQLite - default)
|
||||
uv run --with llama-stack llama stack run starter
|
||||
|
||||
# Or run with PostgreSQL
|
||||
uv run --with llama-stack llama stack run starter::run-with-postgres-store.yaml
|
||||
```
|
||||
|
||||
## Example Usage
|
||||
|
|
@ -216,7 +256,6 @@ The starter distribution uses SQLite for local storage of various components:
|
|||
- **Files metadata**: `~/.llama/distributions/starter/files_metadata.db`
|
||||
- **Agents store**: `~/.llama/distributions/starter/agents_store.db`
|
||||
- **Responses store**: `~/.llama/distributions/starter/responses_store.db`
|
||||
- **Trace store**: `~/.llama/distributions/starter/trace_store.db`
|
||||
- **Evaluation store**: `~/.llama/distributions/starter/meta_reference_eval.db`
|
||||
- **Dataset I/O stores**: Various HuggingFace and local filesystem stores
|
||||
|
||||
|
|
|
|||
|
|
@ -23,6 +23,17 @@ Another simple way to start interacting with Llama Stack is to just spin up a co
|
|||
If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally. See [Kubernetes Deployment Guide](../deploying/kubernetes_deployment) for more details.
|
||||
|
||||
|
||||
## Configure logging
|
||||
|
||||
Control log output via environment variables before starting the server.
|
||||
|
||||
- `LLAMA_STACK_LOGGING` sets per-component levels, e.g. `LLAMA_STACK_LOGGING=server=debug;core=info`.
|
||||
- Supported categories: `all`, `core`, `server`, `router`, `inference`, `agents`, `safety`, `eval`, `tools`, `client`.
|
||||
- Levels: `debug`, `info`, `warning`, `error`, `critical` (default is `info`). Use `all=<level>` to apply globally.
|
||||
- `LLAMA_STACK_LOG_FILE=/path/to/log` mirrors logs to a file while still printing to stdout.
|
||||
|
||||
Export these variables prior to running `llama stack run`, launching a container, or starting the server through any other pathway.
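
For example, a setup that raises server verbosity while keeping core components at the default level, and mirrors output to a file, might look like:

```bash
export LLAMA_STACK_LOGGING="server=debug;core=info"
export LLAMA_STACK_LOG_FILE=~/.llama/logs/server.log
llama stack run starter
```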
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
:hidden:
|
||||
|
|
|
|||
|
|
@ -4,65 +4,24 @@
|
|||
# This source code is licensed under the terms described in the LICENSE file in
|
||||
# the root directory of this source tree.
|
||||
|
||||
from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient
|
||||
|
||||
vector_db_id = "my_demo_vector_db"
|
||||
client = LlamaStackClient(base_url="http://localhost:8321")
|
||||
import io, requests
|
||||
from openai import OpenAI
|
||||
|
||||
models = client.models.list()
|
||||
url="https://www.paulgraham.com/greatwork.html"
|
||||
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")
|
||||
|
||||
# Select the first LLM and first embedding models
|
||||
model_id = next(m for m in models if m.model_type == "llm").identifier
|
||||
embedding_model_id = (
|
||||
em := next(m for m in models if m.model_type == "embedding")
|
||||
).identifier
|
||||
embedding_dimension = em.metadata["embedding_dimension"]
|
||||
vs = client.vector_stores.create()
|
||||
response = requests.get(url)
|
||||
pseudo_file = io.BytesIO(response.text.encode('utf-8'))
|
||||
uploaded_file = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants")
|
||||
client.vector_stores.files.create(vector_store_id=vs.id, file_id=uploaded_file.id)
|
||||
|
||||
vector_db = client.vector_dbs.register(
|
||||
vector_db_id=vector_db_id,
|
||||
embedding_model=embedding_model_id,
|
||||
embedding_dimension=embedding_dimension,
|
||||
provider_id="faiss",
|
||||
)
|
||||
vector_db_id = vector_db.identifier
|
||||
source = "https://www.paulgraham.com/greatwork.html"
|
||||
print("rag_tool> Ingesting document:", source)
|
||||
document = RAGDocument(
|
||||
document_id="document_1",
|
||||
content=source,
|
||||
mime_type="text/html",
|
||||
metadata={},
|
||||
)
|
||||
client.tool_runtime.rag_tool.insert(
|
||||
documents=[document],
|
||||
vector_db_id=vector_db_id,
|
||||
chunk_size_in_tokens=100,
|
||||
)
|
||||
agent = Agent(
|
||||
client,
|
||||
model=model_id,
|
||||
instructions="You are a helpful assistant",
|
||||
tools=[
|
||||
{
|
||||
"name": "builtin::rag/knowledge_search",
|
||||
"args": {"vector_db_ids": [vector_db_id]},
|
||||
}
|
||||
],
|
||||
resp = client.responses.create(
|
||||
model="openai/gpt-4o",
|
||||
input="How do you do great work? Use the existing knowledge_search tool.",
|
||||
tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
|
||||
include=["file_search_call.results"],
|
||||
)
|
||||
|
||||
prompt = "How do you do great work?"
|
||||
print("prompt>", prompt)
|
||||
|
||||
use_stream = True
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": prompt}],
|
||||
session_id=agent.create_session("rag_session"),
|
||||
stream=use_stream,
|
||||
)
|
||||
|
||||
# Only call `AgentEventLogger().log(response)` for streaming responses.
|
||||
if use_stream:
|
||||
for log in AgentEventLogger().log(response):
|
||||
log.print()
|
||||
else:
|
||||
print(response)
|
||||
print(resp)
|
||||
|
|
|
|||
|
|
@ -58,15 +58,19 @@ Llama Stack is a server that exposes multiple APIs, you connect with it using th
|
|||
|
||||
<Tabs>
|
||||
<TabItem value="venv" label="Using venv">
|
||||
You can use Python to build and run the Llama Stack server, which is useful for testing and development.
|
||||
You can use Python to install dependencies and run the Llama Stack server, which is useful for testing and development.
|
||||
|
||||
Llama Stack uses a [YAML configuration file](../distributions/configuration) to specify the stack setup,
|
||||
which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml).
|
||||
Now let's build and run the Llama Stack config for Ollama.
|
||||
Now let's install dependencies and run the Llama Stack config for Ollama.
|
||||
We use `starter` as template. By default all providers are disabled, this requires enable ollama by passing environment variables.
|
||||
|
||||
```bash
|
||||
llama stack build --distro starter --image-type venv --run
|
||||
# Install dependencies for the starter distribution
|
||||
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
|
||||
|
||||
# Run the server
|
||||
llama stack run starter
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="container" label="Using a Container">
|
||||
|
|
@ -140,7 +144,7 @@ source .venv/bin/activate
|
|||
```bash
|
||||
uv venv client --python 3.12
|
||||
source client/bin/activate
|
||||
pip install llama-stack-client
|
||||
uv pip install llama-stack-client
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
|
@ -164,7 +168,7 @@ Available Models
|
|||
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
|
||||
┃ model_type ┃ identifier ┃ provider_resource_id ┃ metadata ┃ provider_id ┃
|
||||
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
|
||||
│ embedding │ ollama/all-minilm:l6-v2 │ all-minilm:l6-v2 │ {'embedding_dimension': 384.0} │ ollama │
|
||||
│ embedding │ ollama/nomic-embed-text:v1.5 │ nomic-embed-text:v1.5 │ {'embedding_dimension': 768.0} │ ollama │
|
||||
├─────────────────┼─────────────────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────────┼───────────────────────┤
|
||||
│ ... │ ... │ ... │ │ ... │
|
||||
├─────────────────┼─────────────────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────────┼───────────────────────┤
|
||||
|
|
@ -235,8 +239,13 @@ client = LlamaStackClient(base_url="http://localhost:8321")
|
|||
models = client.models.list()
|
||||
|
||||
# Select the first LLM
|
||||
llm = next(m for m in models if m.model_type == "llm" and m.provider_id == "ollama")
|
||||
model_id = llm.identifier
|
||||
llm = next(
|
||||
m for m in models
|
||||
if m.custom_metadata
|
||||
and m.custom_metadata.get("model_type") == "llm"
|
||||
and m.custom_metadata.get("provider_id") == "ollama"
|
||||
)
|
||||
model_id = llm.id
|
||||
|
||||
print("Model:", model_id)
|
||||
|
||||
|
|
@ -275,8 +284,13 @@ import uuid
|
|||
client = LlamaStackClient(base_url=f"http://localhost:8321")
|
||||
|
||||
models = client.models.list()
|
||||
llm = next(m for m in models if m.model_type == "llm" and m.provider_id == "ollama")
|
||||
model_id = llm.identifier
|
||||
llm = next(
|
||||
m for m in models
|
||||
if m.custom_metadata
|
||||
and m.custom_metadata.get("model_type") == "llm"
|
||||
and m.custom_metadata.get("provider_id") == "ollama"
|
||||
)
|
||||
model_id = llm.id
|
||||
|
||||
agent = Agent(client, model=model_id, instructions="You are a helpful assistant.")
|
||||
|
||||
|
|
@ -304,7 +318,7 @@ stream = agent.create_turn(
|
|||
for event in AgentEventLogger().log(stream):
|
||||
event.print()
|
||||
```
|
||||
### ii. Run the Script
|
||||
#### ii. Run the Script
|
||||
Let's run the script using `uv`
|
||||
```bash
|
||||
uv run python agent.py
|
||||
|
|
@ -446,8 +460,11 @@ import uuid
|
|||
client = LlamaStackClient(base_url="http://localhost:8321")
|
||||
|
||||
# Create a vector database instance
|
||||
embed_lm = next(m for m in client.models.list() if m.model_type == "embedding")
|
||||
embedding_model = embed_lm.identifier
|
||||
embed_lm = next(
|
||||
m for m in client.models.list()
|
||||
if m.custom_metadata and m.custom_metadata.get("model_type") == "embedding"
|
||||
)
|
||||
embedding_model = embed_lm.id
|
||||
vector_db_id = f"v{uuid.uuid4().hex}"
|
||||
# The VectorDB API is deprecated; the server now returns its own authoritative ID.
|
||||
# We capture the correct ID from the response's .identifier attribute.
|
||||
|
|
@ -485,9 +502,11 @@ client.tool_runtime.rag_tool.insert(
|
|||
llm = next(
|
||||
m
|
||||
for m in client.models.list()
|
||||
if m.model_type == "llm" and m.provider_id == "ollama"
|
||||
if m.custom_metadata
|
||||
and m.custom_metadata.get("model_type") == "llm"
|
||||
and m.custom_metadata.get("provider_id") == "ollama"
|
||||
)
|
||||
model = llm.identifier
|
||||
model = llm.id
|
||||
|
||||
# Create the RAG agent
|
||||
rag_agent = Agent(
|
||||
|
|
|
|||
|
|
@ -24,111 +24,44 @@ ollama run llama3.2:3b --keepalive 60m
|
|||
|
||||
#### Step 2: Run the Llama Stack server
|
||||
|
||||
We will use `uv` to run the Llama Stack server.
|
||||
```python file=./demo_script.py title="demo_script.py"
|
||||
```
|
||||
|
||||
We will use `uv` to install dependencies and run the Llama Stack server.
|
||||
```bash
|
||||
OLLAMA_URL=http://localhost:11434 \
|
||||
uv run --with llama-stack llama stack build --distro starter --image-type venv --run
|
||||
# Install dependencies for the starter distribution
|
||||
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
|
||||
|
||||
# Run the server
|
||||
OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter
|
||||
```
|
||||
#### Step 3: Run the demo
|
||||
Now open up a new terminal and copy the following script into a file named `demo_script.py`.
|
||||
|
||||
```python title="demo_script.py"
|
||||
# Copyright (c) Meta Platforms, Inc. and affiliates.
|
||||
# All rights reserved.
|
||||
#
|
||||
# This source code is licensed under the terms described in the LICENSE file in
|
||||
# the root directory of this source tree.
|
||||
|
||||
from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient
|
||||
|
||||
vector_db_id = "my_demo_vector_db"
|
||||
client = LlamaStackClient(base_url="http://localhost:8321")
|
||||
|
||||
models = client.models.list()
|
||||
|
||||
# Select the first LLM and first embedding models
|
||||
model_id = next(m for m in models if m.model_type == "llm").identifier
|
||||
embedding_model_id = (
|
||||
em := next(m for m in models if m.model_type == "embedding")
|
||||
).identifier
|
||||
embedding_dimension = em.metadata["embedding_dimension"]
|
||||
|
||||
vector_db = client.vector_dbs.register(
|
||||
vector_db_id=vector_db_id,
|
||||
embedding_model=embedding_model_id,
|
||||
embedding_dimension=embedding_dimension,
|
||||
provider_id="faiss",
|
||||
)
|
||||
vector_db_id = vector_db.identifier
|
||||
source = "https://www.paulgraham.com/greatwork.html"
|
||||
print("rag_tool> Ingesting document:", source)
|
||||
document = RAGDocument(
|
||||
document_id="document_1",
|
||||
content=source,
|
||||
mime_type="text/html",
|
||||
metadata={},
|
||||
)
|
||||
client.tool_runtime.rag_tool.insert(
|
||||
documents=[document],
|
||||
vector_db_id=vector_db_id,
|
||||
chunk_size_in_tokens=100,
|
||||
)
|
||||
agent = Agent(
|
||||
client,
|
||||
model=model_id,
|
||||
instructions="You are a helpful assistant",
|
||||
tools=[
|
||||
{
|
||||
"name": "builtin::rag/knowledge_search",
|
||||
"args": {"vector_db_ids": [vector_db_id]},
|
||||
}
|
||||
],
|
||||
)
|
||||
|
||||
prompt = "How do you do great work?"
|
||||
print("prompt>", prompt)
|
||||
|
||||
use_stream = True
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": prompt}],
|
||||
session_id=agent.create_session("rag_session"),
|
||||
stream=use_stream,
|
||||
)
|
||||
|
||||
# Only call `AgentEventLogger().log(response)` for streaming responses.
|
||||
if use_stream:
|
||||
for log in AgentEventLogger().log(response):
|
||||
log.print()
|
||||
else:
|
||||
print(response)
|
||||
```
|
||||
We will use `uv` to run the script:
|
||||
```
|
||||
uv run --with llama-stack-client,fire,requests demo_script.py
|
||||
```
|
||||
And you should see output like below.
|
||||
```python
|
||||
>print(resp.output[1].content[0].text)
|
||||
To do great work, consider the following principles:
|
||||
|
||||
1. **Follow Your Interests**: Engage in work that genuinely excites you. If you find an area intriguing, pursue it without being overly concerned about external pressures or norms. You should create things that you would want for yourself, as this often aligns with what others in your circle might want too.
|
||||
|
||||
2. **Work Hard on Ambitious Projects**: Ambition is vital, but it should be tempered by genuine interest. Instead of detailed planning for the future, focus on exciting projects that keep your options open. This approach, known as "staying upwind," allows for adaptability and can lead to unforeseen achievements.
|
||||
|
||||
3. **Choose Quality Colleagues**: Collaborating with talented colleagues can significantly affect your own work. Seek out individuals who offer surprising insights and whom you admire. The presence of good colleagues can elevate the quality of your work and inspire you.
|
||||
|
||||
4. **Maintain High Morale**: Your attitude towards work and life affects your performance. Cultivating optimism and viewing yourself as lucky rather than victimized can boost your productivity. It’s essential to care for your physical health as well since it directly impacts your mental faculties and morale.
|
||||
|
||||
5. **Be Consistent**: Great work often comes from cumulative effort. Daily progress, even in small amounts, can result in substantial achievements over time. Emphasize consistency and make the work engaging, as this reduces the perceived burden of hard labor.
|
||||
|
||||
6. **Embrace Curiosity**: Curiosity is a driving force that can guide you in selecting fields of interest, pushing you to explore uncharted territories. Allow it to shape your work and continually seek knowledge and insights.
|
||||
|
||||
By focusing on these aspects, you can create an environment conducive to great work and personal fulfillment.
|
||||
```
|
||||
rag_tool> Ingesting document: https://www.paulgraham.com/greatwork.html
|
||||
|
||||
prompt> How do you do great work?
|
||||
|
||||
inference> [knowledge_search(query="What is the key to doing great work")]
|
||||
|
||||
tool_execution> Tool:knowledge_search Args:{'query': 'What is the key to doing great work'}
|
||||
|
||||
tool_execution> Tool:knowledge_search Response:[TextContentItem(text='knowledge_search tool found 5 chunks:\nBEGIN of knowledge_search tool results.\n', type='text'), TextContentItem(text="Result 1:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 2:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 3:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 4:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 5:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text='END of knowledge_search tool results.\n', type='text')]
|
||||
|
||||
inference> Based on the search results, it seems that doing great work means doing something important so well that you expand people's ideas of what's possible. However, there is no clear threshold for importance, and it can be difficult to judge at the time.
|
||||
|
||||
To further clarify, I would suggest that doing great work involves:
|
||||
|
||||
* Completing tasks with high quality and attention to detail
|
||||
* Expanding on existing knowledge or ideas
|
||||
* Making a positive impact on others through your work
|
||||
* Striving for excellence and continuous improvement
|
||||
|
||||
Ultimately, great work is about making a meaningful contribution and leaving a lasting impression.
|
||||
```
|
||||
Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳
|
||||
|
||||
:::tip HuggingFace access
|
||||
|
|
|
|||
|
|
@ -29,7 +29,7 @@ Llama Stack is now available! See the [release notes](https://github.com/llamast
|
|||
|
||||
Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments. More specifically, it provides:
|
||||
|
||||
- **Unified API layer** for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry.
|
||||
- **Unified API layer** for Inference, RAG, Agents, Tools, Safety, Evals.
|
||||
- **Plugin architecture** to support the rich ecosystem of implementations of the different APIs in different environments like local development, on-premises, cloud, and mobile.
|
||||
- **Prepackaged verified distributions** which offer a one-stop solution for developers to get started quickly and reliably in any environment
|
||||
- **Multiple developer interfaces** like CLI and SDKs for Python, Node, iOS, and Android
|
||||
|
|
|
|||
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
description: "Agents
|
||||
description: |
|
||||
Agents
|
||||
|
||||
APIs for creating and interacting with agentic systems."
|
||||
APIs for creating and interacting with agentic systems.
|
||||
sidebar_label: Agents
|
||||
title: Agents
|
||||
---
|
||||
|
|
|
|||
|
|
@ -14,16 +14,18 @@ Meta's reference implementation of an agent system that can use tools, access ve
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `persistence_store` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `responses_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | |
|
||||
| `persistence` | `AgentPersistenceConfig` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
persistence_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/agents_store.db
|
||||
responses_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/responses_store.db
|
||||
persistence:
|
||||
agent_state:
|
||||
namespace: agents
|
||||
backend: kv_default
|
||||
responses:
|
||||
table_name: responses
|
||||
backend: sql_default
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
```
|
||||
|
|
|
|||
|
|
@ -1,14 +1,15 @@
|
|||
---
|
||||
description: "The Batches API enables efficient processing of multiple requests in a single operation,
|
||||
particularly useful for processing large datasets, batch evaluation workflows, and
|
||||
cost-effective inference at scale.
|
||||
description: |
|
||||
The Batches API enables efficient processing of multiple requests in a single operation,
|
||||
particularly useful for processing large datasets, batch evaluation workflows, and
|
||||
cost-effective inference at scale.
|
||||
|
||||
The API is designed to allow use of openai client libraries for seamless integration.
|
||||
The API is designed to allow use of openai client libraries for seamless integration.
|
||||
|
||||
This API provides the following extensions:
|
||||
- idempotent batch creation
|
||||
This API provides the following extensions:
|
||||
- idempotent batch creation
|
||||
|
||||
Note: This API is currently under active development and may undergo changes."
|
||||
Note: This API is currently under active development and may undergo changes.
|
||||
sidebar_label: Batches
|
||||
title: Batches
|
||||
---
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Reference implementation of batches API with KVStore persistence.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Configuration for the key-value store backend. |
|
||||
| `max_concurrent_batches` | `<class 'int'>` | No | 1 | Maximum number of concurrent batches to process simultaneously. |
|
||||
| `max_concurrent_requests_per_batch` | `<class 'int'>` | No | 10 | Maximum number of concurrent requests to process per batch. |
|
||||
| `kvstore` | `KVStoreReference` | No | | Configuration for the key-value store backend. |
|
||||
| `max_concurrent_batches` | `int` | No | 1 | Maximum number of concurrent batches to process simultaneously. |
|
||||
| `max_concurrent_requests_per_batch` | `int` | No | 10 | Maximum number of concurrent requests to process per batch. |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/batches.db
|
||||
namespace: batches
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,12 +14,12 @@ Local filesystem-based dataset I/O provider for reading and writing datasets to
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `kvstore` | `KVStoreReference` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/localfs_datasetio.db
|
||||
namespace: datasetio::localfs
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,12 +14,12 @@ HuggingFace datasets provider for accessing and managing datasets from the Huggi
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `kvstore` | `KVStoreReference` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/huggingface_datasetio.db
|
||||
namespace: datasetio::huggingface
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -17,7 +17,7 @@ NVIDIA's dataset I/O provider for accessing datasets from NVIDIA's data platform
|
|||
| `api_key` | `str \| None` | No | | The NVIDIA API key. |
|
||||
| `dataset_namespace` | `str \| None` | No | default | The NVIDIA dataset namespace. |
|
||||
| `project_id` | `str \| None` | No | test-project | The NVIDIA project ID. |
|
||||
| `datasets_url` | `<class 'str'>` | No | http://nemo.test | Base URL for the NeMo Dataset API |
|
||||
| `datasets_url` | `str` | No | http://nemo.test | Base URL for the NeMo Dataset API |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -1,5 +1,8 @@
|
|||
---
|
||||
description: "Llama Stack Evaluation API for running evaluations on model and agent candidates."
|
||||
description: |
|
||||
Evaluations
|
||||
|
||||
Llama Stack Evaluation API for running evaluations on model and agent candidates.
|
||||
sidebar_label: Eval
|
||||
title: Eval
|
||||
---
|
||||
|
|
@ -8,6 +11,8 @@ title: Eval
|
|||
|
||||
## Overview
|
||||
|
||||
Llama Stack Evaluation API for running evaluations on model and agent candidates.
|
||||
Evaluations
|
||||
|
||||
Llama Stack Evaluation API for running evaluations on model and agent candidates.
|
||||
|
||||
This section contains documentation for all available providers for the **eval** API.
|
||||
|
|
|
|||
|
|
@ -14,12 +14,12 @@ Meta's reference implementation of evaluation tasks with support for multiple la
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `kvstore` | `KVStoreReference` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db
|
||||
namespace: eval
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,7 +14,7 @@ NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `evaluator_url` | `<class 'str'>` | No | http://0.0.0.0:7331 | The url for accessing the evaluator service |
|
||||
| `evaluator_url` | `str` | No | http://0.0.0.0:7331 | The url for accessing the evaluator service |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -80,7 +80,7 @@ container_image: custom-vector-store:latest # optional
|
|||
All providers must contain a `get_provider_spec` function in their `provider` module. This is a standardized structure that Llama Stack expects and is necessary for getting things such as the config class. The `get_provider_spec` method returns a structure identical to the `adapter`. An example function may look like:
|
||||
|
||||
```python
|
||||
from llama_stack.providers.datatypes import (
|
||||
from llama_stack_api.providers.datatypes import (
|
||||
ProviderSpec,
|
||||
Api,
|
||||
RemoteProviderSpec,
|
||||
|
|
@ -240,6 +240,6 @@ additional_pip_packages:
|
|||
- sqlalchemy[asyncio]
|
||||
```
|
||||
|
||||
No other steps are required other than `llama stack build` and `llama stack run`. The build process will use `module` to install all of the provider dependencies, retrieve the spec, etc.
|
||||
No other steps are required beyond installing dependencies with `llama stack list-deps <distro> | xargs -L1 uv pip install` and then running `llama stack run`. The CLI will use `module` to install the provider dependencies, retrieve the spec, etc.
|
||||
|
||||
The provider will now be available in Llama Stack with the type `remote::ramalama`.
|
||||
|
|
|
|||
290
docs/docs/providers/files/files.mdx
Normal file
|
|
@ -0,0 +1,290 @@
|
|||
---
|
||||
sidebar_label: Files
|
||||
title: Files
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
The Files API provides file management capabilities for Llama Stack. It allows you to upload, store, retrieve, and manage files that can be used across various endpoints in your application.
|
||||
|
||||
## Features
|
||||
|
||||
- **File Upload**: Upload files with metadata and purpose classification
|
||||
- **File Management**: List, retrieve, and delete files
|
||||
- **Content Retrieval**: Access raw file content for processing
|
||||
- **API Compatibility**: Full compatibility with OpenAI Files API endpoints
|
||||
- **Flexible Storage**: Support for local filesystem and cloud storage backends
|
||||
|
||||
## API Endpoints
|
||||
|
||||
### Upload File
|
||||
|
||||
**POST** `/v1/openai/v1/files`
|
||||
|
||||
Upload a file that can be used across various endpoints.
|
||||
|
||||
**Request Body:**
|
||||
- `file`: The file object to be uploaded (multipart form data)
|
||||
- `purpose`: The intended purpose of the uploaded file
|
||||
|
||||
**Supported Purposes:**
|
||||
- `batch`: Files for batch operations
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"id": "file-abc123",
|
||||
"object": "file",
|
||||
"bytes": 140,
|
||||
"created_at": 1613779121,
|
||||
"filename": "mydata.jsonl",
|
||||
"purpose": "batch"
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
import requests
|
||||
|
||||
with open("data.jsonl", "rb") as f:
|
||||
files = {"file": f}
|
||||
data = {"purpose": "batch"}
|
||||
response = requests.post(
|
||||
"http://localhost:8000/v1/openai/v1/files", files=files, data=data
|
||||
)
|
||||
file_info = response.json()
|
||||
```
|
||||
|
||||
### List Files
|
||||
|
||||
**GET** `/v1/openai/v1/files`
|
||||
|
||||
Returns a list of files that belong to the user's organization.
|
||||
|
||||
**Query Parameters:**
|
||||
- `after` (optional): A cursor for pagination
|
||||
- `limit` (optional): Limit on number of objects (1-10,000, default: 10,000)
|
||||
- `order` (optional): Sort order by created_at timestamp (`asc` or `desc`, default: `desc`)
|
||||
- `purpose` (optional): Filter files by purpose
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"object": "list",
|
||||
"data": [
|
||||
{
|
||||
"id": "file-abc123",
|
||||
"object": "file",
|
||||
"bytes": 140,
|
||||
"created_at": 1613779121,
|
||||
"filename": "mydata.jsonl",
|
||||
"purpose": "fine-tune"
|
||||
}
|
||||
],
|
||||
"has_more": false
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
import requests
|
||||
|
||||
# List all files
|
||||
response = requests.get("http://localhost:8000/v1/openai/v1/files")
|
||||
files = response.json()
|
||||
|
||||
# List files with pagination
|
||||
response = requests.get(
|
||||
"http://localhost:8000/v1/openAi/v1/files",
|
||||
params={"limit": 10, "after": "file-abc123"},
|
||||
)
|
||||
files = response.json()
|
||||
|
||||
# Filter by purpose
|
||||
response = requests.get(
|
||||
"http://localhost:8000/v1/openAi/v1/files", params={"purpose": "fine-tune"}
|
||||
)
|
||||
files = response.json()
|
||||
```
|
||||
|
||||
### Retrieve File
|
||||
|
||||
**GET** `/v1/openai/v1/files/{file_id}`
|
||||
|
||||
Returns information about a specific file.
|
||||
|
||||
**Path Parameters:**
|
||||
- `file_id`: The ID of the file to retrieve
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"id": "file-abc123",
|
||||
"object": "file",
|
||||
"bytes": 140,
|
||||
"created_at": 1613779121,
|
||||
"filename": "mydata.jsonl",
|
||||
"purpose": "fine-tune"
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
import requests
|
||||
|
||||
file_id = "file-abc123"
|
||||
response = requests.get(f"http://localhost:8000/v1/openai/v1/files/{file_id}")
|
||||
file_info = response.json()
|
||||
```
|
||||
|
||||
### Delete File
|
||||
|
||||
**DELETE** `/v1/openai/v1/files/{file_id}`
|
||||
|
||||
Delete a file.
|
||||
|
||||
**Path Parameters:**
|
||||
- `file_id`: The ID of the file to delete
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"id": "file-abc123",
|
||||
"object": "file",
|
||||
"deleted": true
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
import requests
|
||||
|
||||
file_id = "file-abc123"
|
||||
response = requests.delete(f"http://localhost:8000/v1/openAi/v1/files/{file_id}")
|
||||
result = response.json()
|
||||
```
|
||||
|
||||
### Retrieve File Content
|
||||
|
||||
**GET** `/v1/openai/v1/files/{file_id}/content`
|
||||
|
||||
Returns the raw file content as a binary response.
|
||||
|
||||
**Path Parameters:**
|
||||
- `file_id`: The ID of the file to retrieve content from
|
||||
|
||||
**Response:**
|
||||
Binary file content with appropriate headers:
|
||||
- `Content-Type`: `application/octet-stream`
|
||||
- `Content-Disposition`: `attachment; filename="filename"`
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
import requests
|
||||
|
||||
file_id = "file-abc123"
|
||||
response = requests.get(f"http://localhost:8000/v1/openai/v1/files/{file_id}/content")
|
||||
|
||||
# Save content to file
|
||||
with open("downloaded_file.jsonl", "wb") as f:
|
||||
f.write(response.content)
|
||||
|
||||
# Or process content directly
|
||||
content = response.content
|
||||
```
|
||||
|
||||
## Vector Store Integration
|
||||
|
||||
The Files API integrates with Vector Stores to enable document processing and search. For detailed information about this integration, see [File Operations and Vector Store Integration](../concepts/file_operations_vector_stores.md).
|
||||
|
||||
### Vector Store File Operations
|
||||
|
||||
**List Vector Store Files:**
|
||||
- **GET** `/v1/openai/v1/vector_stores/{vector_store_id}/files`
|
||||
|
||||
**Retrieve Vector Store File Content:**
|
||||
- **GET** `/v1/openai/v1/vector_stores/{vector_store_id}/files/{file_id}/content`
|
||||
|
||||
**Attach File to Vector Store:**
|
||||
- **POST** `/v1/openai/v1/vector_stores/{vector_store_id}/files`
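
As a sketch, attaching an already-uploaded file to a vector store (the IDs below are placeholders) uses an OpenAI-style request body:

```bash
curl -X POST http://localhost:8000/v1/openai/v1/vector_stores/vs_abc123/files \
  -H "Content-Type: application/json" \
  -d '{"file_id": "file-abc123"}'
```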
|
||||
|
||||
## Error Handling
|
||||
|
||||
The Files API returns standard HTTP status codes and error responses:
|
||||
|
||||
- `400 Bad Request`: Invalid request parameters
|
||||
- `404 Not Found`: File not found
|
||||
- `429 Too Many Requests`: Rate limit exceeded
|
||||
- `500 Internal Server Error`: Server error
|
||||
|
||||
**Error Response Format:**
|
||||
```json
|
||||
{
|
||||
"error": {
|
||||
"message": "Error description",
|
||||
"type": "invalid_request_error",
|
||||
"code": "file_not_found"
|
||||
}
|
||||
}
|
||||
```
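
A minimal sketch of handling these responses with `requests`, assuming the error format above:

```python
import requests

response = requests.get("http://localhost:8000/v1/openai/v1/files/file-does-not-exist")
if response.status_code == 404:
    # Error body follows the format shown above
    error = response.json().get("error", {})
    print("File not found:", error.get("message"))
elif not response.ok:
    # Surface any other failure (400, 429, 500, ...)
    response.raise_for_status()
```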
|
||||
|
||||
## Rate Limits
|
||||
|
||||
The Files API implements rate limiting to ensure fair usage:
|
||||
- File uploads: 100 files per minute
|
||||
- File retrievals: 1000 requests per minute
|
||||
- File deletions: 100 requests per minute
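
When a limit is exceeded the API responds with `429 Too Many Requests`; a simple client-side strategy is to retry with exponential backoff. A sketch (retry counts and delays are illustrative):

```python
import time
import requests

def get_with_backoff(url: str, max_retries: int = 5) -> requests.Response:
    """Retry GET requests that hit the rate limit, doubling the wait each time."""
    delay = 1.0
    response = requests.get(url)
    for _ in range(max_retries):
        if response.status_code != 429:
            break
        time.sleep(delay)
        delay *= 2
        response = requests.get(url)
    return response

files = get_with_backoff("http://localhost:8000/v1/openai/v1/files").json()
```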
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **File Organization**: Use descriptive filenames and appropriate purpose classifications
|
||||
2. **Batch Operations**: For multiple files, consider using batch endpoints when available
|
||||
3. **Error Handling**: Always check response status codes and handle errors gracefully
|
||||
4. **Content Types**: Ensure files are uploaded with appropriate content types
|
||||
5. **Cleanup**: Regularly delete unused files to manage storage costs
|
||||
|
||||
## Integration Examples
|
||||
|
||||
### With Python Client
|
||||
|
||||
```python
|
||||
from llama_stack_client import LlamaStackClient
|
||||
|
||||
client = LlamaStackClient(base_url="http://localhost:8000")
|
||||
|
||||
# Upload a file
|
||||
with open("data.jsonl", "rb") as f:
|
||||
file_info = await client.files.upload(file=f, purpose="fine-tune")
|
||||
|
||||
# List files
|
||||
files = await client.files.list(purpose="fine-tune")
|
||||
|
||||
# Retrieve file content
|
||||
content = await client.files.retrieve_content(file_info.id)
|
||||
```
|
||||
|
||||
### With cURL
|
||||
|
||||
```bash
|
||||
# Upload file
|
||||
curl -X POST http://localhost:8000/v1/openai/v1/files \
|
||||
-F "file=@data.jsonl" \
|
||||
-F "purpose=fine-tune"
|
||||
|
||||
# List files
|
||||
curl http://localhost:8000/v1/openai/v1/files
|
||||
|
||||
# Download file content
|
||||
curl http://localhost:8000/v1/openai/v1/files/file-abc123/content \
|
||||
-o downloaded_file.jsonl
|
||||
```
|
||||
|
||||
## Provider Support
|
||||
|
||||
The Files API supports multiple storage backends:
|
||||
|
||||
- **Local Filesystem**: Store files on local disk (inline provider)
|
||||
- **S3**: Store files in AWS S3 or S3-compatible services (remote provider)
|
||||
- **Custom Backends**: Extensible architecture for custom storage providers
|
||||
|
||||
See the [Files Providers](index.md) documentation for detailed configuration options.
|
||||
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
description: "Files
|
||||
description: |
|
||||
Files
|
||||
|
||||
This API is used to upload documents that can be used with other Llama Stack APIs."
|
||||
This API is used to upload documents that can be used with other Llama Stack APIs.
|
||||
sidebar_label: Files
|
||||
title: Files
|
||||
---
|
||||
|
|
|
|||
|
|
@ -14,15 +14,15 @@ Local filesystem-based file storage provider for managing files and documents lo
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `storage_dir` | `<class 'str'>` | No | | Directory to store uploaded files |
|
||||
| `metadata_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | SQL store configuration for file metadata |
|
||||
| `ttl_secs` | `<class 'int'>` | No | 31536000 | |
|
||||
| `storage_dir` | `str` | No | | Directory to store uploaded files |
|
||||
| `metadata_store` | `SqlStoreReference` | No | | SQL store configuration for file metadata |
|
||||
| `ttl_secs` | `int` | No | 31536000 | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/dummy/files}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/files_metadata.db
|
||||
table_name: files_metadata
|
||||
backend: sql_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -0,0 +1,80 @@
|
|||
# File Operations Quick Reference
|
||||
|
||||
## Overview
|
||||
|
||||
As of release 0.2.14, Llama Stack provides comprehensive file operations and Vector Store API integration, following the [OpenAI Vector Store Files API specification](https://platform.openai.com/docs/api-reference/vector-stores-files).
|
||||
|
||||
> **Note**: For detailed overview and implementation details, see [Overview](../openai_file_operations_support.md#overview) in the full documentation.
|
||||
|
||||
## Supported Providers
|
||||
|
||||
> **Note**: For complete provider details and features, see [Supported Providers](../openai_file_operations_support.md#supported-providers) in the full documentation.
|
||||
|
||||
**Inline Providers**: FAISS, SQLite-vec, Milvus
|
||||
**Remote Providers**: ChromaDB, Qdrant, Weaviate, PGVector
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Upload File
|
||||
```python
|
||||
file_info = await client.files.upload(
|
||||
file=open("document.pdf", "rb"), purpose="assistants"
|
||||
)
|
||||
```
|
||||
|
||||
### 2. Create Vector Store
|
||||
```python
|
||||
vector_store = client.vector_stores.create(name="my_docs")
|
||||
```
|
||||
|
||||
### 3. Attach File
|
||||
```python
|
||||
await client.vector_stores.files.create(
|
||||
vector_store_id=vector_store.id, file_id=file_info.id
|
||||
)
|
||||
```
|
||||
|
||||
### 4. Search
|
||||
```python
|
||||
results = await client.vector_stores.search(
|
||||
vector_store_id=vector_store.id, query="What is the main topic?", max_num_results=5
|
||||
)
|
||||
```
|
||||
|
||||
## File Processing & Search
|
||||
|
||||
**Processing**: 800 tokens default chunk size, 400 token overlap
|
||||
**Formats**: PDF, DOCX, TXT, Code files, etc.
|
||||
**Search**: Vector similarity, Hybrid (SQLite-vec), Filtered with metadata
|
||||
|
||||
## Configuration
|
||||
|
||||
> **Note**: For detailed configuration examples and options, see [Configuration Examples](../openai_file_operations_support.md#configuration-examples) in the full documentation.
|
||||
|
||||
**Basic Setup**: Configure vector_io and files providers in your run.yaml
|
||||
|
||||
## Common Use Cases
|
||||
|
||||
- **RAG Systems**: Document Q&A with file uploads
|
||||
- **Knowledge Bases**: Searchable document collections
|
||||
- **Content Analysis**: Document similarity and clustering
|
||||
- **Research Tools**: Literature review and analysis
|
||||
|
||||
## Performance Tips
|
||||
|
||||
> **Note**: For detailed performance optimization strategies, see [Performance Considerations](../openai_file_operations_support.md#performance-considerations) in the full documentation.
|
||||
|
||||
**Quick Tips**: Choose a provider based on your needs (speed vs. storage vs. scalability)
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
> **Note**: For comprehensive troubleshooting, see [Troubleshooting](../openai_file_operations_support.md#troubleshooting) in the full documentation.
|
||||
|
||||
**Quick Fixes**: Check file format compatibility, optimize chunk sizes, monitor storage
|
||||
|
||||
## Resources
|
||||
|
||||
- [Full Documentation](openai_file_operations_support.md)
|
||||
- [Integration Guide](../concepts/file_operations_vector_stores.md)
|
||||
- [Files API](files_api.md)
|
||||
- [Provider Details](../vector_io/index.md)
|
||||
291
docs/docs/providers/files/openai_file_operations_support.md
Normal file
|
|
@ -0,0 +1,291 @@
|
|||
# File Operations Support in Vector Store Providers
|
||||
|
||||
## Overview
|
||||
|
||||
This document provides a comprehensive overview of file operations and Vector Store API support across all available vector store providers in Llama Stack. As of release 0.2.24, the following providers support full file operations integration.
|
||||
|
||||
## Supported Providers
|
||||
|
||||
### ✅ Full File Operations Support
|
||||
|
||||
The following providers support complete file operations integration, including file upload, automatic processing, and search:
|
||||
|
||||
#### Inline Providers (Single Node)
|
||||
|
||||
| Provider | File Operations | Key Features |
|
||||
|----------|----------------|--------------|
|
||||
| **FAISS** | ✅ Full Support | Fast in-memory search, GPU acceleration |
|
||||
| **SQLite-vec** | ✅ Full Support | Hybrid search, disk-based storage |
|
||||
| **Milvus** | ✅ Full Support | High-performance, scalable indexing |
|
||||
|
||||
#### Remote Providers (Hosted)
|
||||
|
||||
| Provider | File Operations | Key Features |
|
||||
|----------|----------------|--------------|
|
||||
| **ChromaDB** | ✅ Full Support | Metadata filtering, persistent storage |
|
||||
| **Qdrant** | ✅ Full Support | Payload filtering, advanced search |
|
||||
| **Weaviate** | ✅ Full Support | GraphQL interface, schema management |
|
||||
| **Postgres (PGVector)** | ✅ Full Support | SQL integration, ACID compliance |
|
||||
|
||||
### 🔄 Partial Support
|
||||
|
||||
Some providers may support basic vector operations but lack full file operations integration:
|
||||
|
||||
| Provider | Status | Notes |
|
||||
|----------|--------|-------|
|
||||
| **Meta Reference** | 🔄 Basic | Core vector operations only |
|
||||
|
||||
## File Operations Features
|
||||
|
||||
All supported providers offer the following file operations capabilities:
|
||||
|
||||
### Core Functionality
|
||||
|
||||
- **File Upload & Processing**: Automatic document ingestion and chunking
|
||||
- **Vector Storage**: Embedding generation and storage
|
||||
- **Search & Retrieval**: Semantic search with metadata filtering
|
||||
- **File Management**: List, retrieve, and manage files in vector stores
|
||||
|
||||
### Advanced Features
|
||||
|
||||
- **Automatic Chunking**: Configurable chunk sizes and overlap
|
||||
- **Metadata Preservation**: File attributes and chunk metadata
|
||||
- **Status Tracking**: Monitor file processing progress
|
||||
- **Error Handling**: Comprehensive error reporting and recovery
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### File Processing Pipeline
|
||||
|
||||
1. **Upload**: File uploaded via Files API
|
||||
2. **Extraction**: Text content extracted from various formats
|
||||
3. **Chunking**: Content split into optimal chunks (default: 800 tokens)
|
||||
4. **Embedding**: Chunks converted to vector embeddings
|
||||
5. **Storage**: Vectors stored with metadata in vector database
|
||||
6. **Indexing**: Search index updated for fast retrieval
|
||||
|
||||
### Supported File Formats
|
||||
|
||||
- **Documents**: PDF, DOCX, DOC
|
||||
- **Text**: TXT, MD, RST
|
||||
- **Code**: Python, JavaScript, Java, C++, etc.
|
||||
- **Data**: JSON, CSV, XML
|
||||
- **Web**: HTML files
|
||||
|
||||
### Chunking Strategies
|
||||
|
||||
- **Default**: 800 tokens with 400 token overlap (see the sketch after this list)
|
||||
- **Custom**: Configurable chunk sizes and overlap
|
||||
- **Static**: Fixed-size chunks with overlap
|
||||
|
||||
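A minimal sketch of the fixed-size strategy above, assuming whitespace-separated words as a stand-in for tokens (the providers use a real tokenizer, so exact counts will differ); the `chunk_text` helper is illustrative and not part of any Llama Stack API:

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 400) -> list[str]:
    """Split text into overlapping fixed-size chunks (default 800 units, 400 overlap)."""
    words = text.split()  # simplification: words instead of tokenizer tokens
    step = chunk_size - overlap  # 400-unit stride gives 50% overlap between chunks
    chunks: list[str] = []
    for start in range(0, len(words), step):
        window = words[start : start + chunk_size]
        if not window:
            break
        chunks.append(" ".join(window))
        if start + chunk_size >= len(words):
            break  # the last window already reaches the end of the document
    return chunks


# Example: a 2,000-word document yields 4 overlapping chunks with the defaults.
print(len(chunk_text("word " * 2000)))
```
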
## Provider-Specific Features
|
||||
|
||||
### FAISS
|
||||
|
||||
- **Storage**: In-memory with optional persistence
|
||||
- **Performance**: Optimized for speed and GPU acceleration
|
||||
- **Use Case**: High-performance, memory-constrained environments
|
||||
|
||||
### SQLite-vec
|
||||
|
||||
- **Storage**: Disk-based with SQLite backend
|
||||
- **Search**: Hybrid vector + keyword search
|
||||
- **Use Case**: Large document collections, frequent updates
|
||||
|
||||
### Milvus
|
||||
|
||||
- **Storage**: Scalable distributed storage
|
||||
- **Indexing**: Multiple index types (IVF, HNSW)
|
||||
- **Use Case**: Production deployments, large-scale applications
|
||||
|
||||
### ChromaDB
|
||||
|
||||
- **Storage**: Persistent storage with metadata
|
||||
- **Filtering**: Advanced metadata filtering
|
||||
- **Use Case**: Applications requiring rich metadata
|
||||
|
||||
### Qdrant
|
||||
|
||||
- **Storage**: High-performance vector database
|
||||
- **Filtering**: Payload-based filtering
|
||||
- **Use Case**: Real-time applications, complex queries
|
||||
|
||||
### Weaviate
|
||||
|
||||
- **Storage**: GraphQL-native vector database
|
||||
- **Schema**: Flexible schema management
|
||||
- **Use Case**: Applications requiring complex data relationships
|
||||
|
||||
### Postgres (PGVector)
|
||||
|
||||
- **Storage**: SQL database with vector extensions
|
||||
- **Integration**: ACID compliance, existing SQL workflows
|
||||
- **Use Case**: Applications requiring transactional guarantees
|
||||
|
||||
## Configuration Examples
|
||||
|
||||
### Basic Configuration
|
||||
|
||||
```yaml
|
||||
vector_io:
|
||||
- provider_id: faiss
|
||||
provider_type: inline::faiss
|
||||
config:
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ~/.llama/faiss_store.db
|
||||
```
|
||||
|
||||
### With FileResponse Support
|
||||
|
||||
```yaml
|
||||
vector_io:
|
||||
- provider_id: faiss
|
||||
provider_type: inline::faiss
|
||||
config:
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ~/.llama/faiss_store.db
|
||||
|
||||
files:
|
||||
- provider_id: local-files
|
||||
provider_type: inline::localfs
|
||||
config:
|
||||
storage_dir: ~/.llama/files
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ~/.llama/files_metadata.db
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Python Client
|
||||
|
||||
```python
|
||||
from llama_stack import LlamaStackClient
|
||||
|
||||
client = LlamaStackClient("http://localhost:8000")
|
||||
|
||||
# Create vector store
|
||||
vector_store = client.vector_stores.create(name="documents")
|
||||
|
||||
# Upload and process file
|
||||
with open("document.pdf", "rb") as f:
|
||||
file_info = await client.files.upload(file=f, purpose="assistants")
|
||||
|
||||
# Attach to vector store
|
||||
await client.vector_stores.files.create(
|
||||
vector_store_id=vector_store.id, file_id=file_info.id
|
||||
)
|
||||
|
||||
# Search
|
||||
results = await client.vector_stores.search(
|
||||
vector_store_id=vector_store.id, query="What is the main topic?", max_num_results=5
|
||||
)
|
||||
```
|
||||
|
||||
### cURL Commands
|
||||
|
||||
```bash
|
||||
# Upload file
|
||||
curl -X POST http://localhost:8000/v1/openai/v1/files \
|
||||
-F "file=@document.pdf" \
|
||||
-F "purpose=assistants"
|
||||
|
||||
# Create vector store
|
||||
curl -X POST http://localhost:8000/v1/openai/v1/vector_stores \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"name": "documents"}'
|
||||
|
||||
# Attach file to vector store
|
||||
curl -X POST http://localhost:8000/v1/openai/v1/vector_stores/{store_id}/files \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"file_id": "file-abc123"}'
|
||||
|
||||
# Search vector store
|
||||
curl -X POST http://localhost:8000/v1/openai/v1/vector_stores/{store_id}/search \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"query": "What is the main topic?", "max_num_results": 5}'
|
||||
```
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Chunk Size Optimization
|
||||
|
||||
- **Small chunks (400-600 tokens)**: Better precision, more results
|
||||
- **Large chunks (800-1200 tokens)**: Better context, fewer results
|
||||
- **Overlap (50%)**: Maintains context between chunks
|
||||
|
||||
### Storage Efficiency
|
||||
|
||||
- **FAISS**: Fastest, but memory-limited
|
||||
- **SQLite-vec**: Good balance of performance and storage
|
||||
- **Milvus**: Scalable, production-ready
|
||||
- **Remote providers**: Managed, but network-dependent
|
||||
|
||||
### Search Performance
|
||||
|
||||
- **Vector search**: Fastest for semantic queries
|
||||
- **Hybrid search**: Best accuracy (SQLite-vec only)
|
||||
- **Filtered search**: Fast with metadata constraints
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **File Processing Failures**
|
||||
- Check file format compatibility
|
||||
- Verify file size limits
|
||||
- Review error messages in file status
|
||||
|
||||
2. **Search Performance**
|
||||
- Optimize chunk sizes for your use case
|
||||
- Use filters to narrow search scope
|
||||
- Monitor vector store metrics
|
||||
|
||||
3. **Storage Issues**
|
||||
- Check available disk space
|
||||
- Verify database permissions
|
||||
- Monitor memory usage (for in-memory providers)
|
||||
|
||||
### Monitoring
|
||||
|
||||
```python
|
||||
# Check file processing status
|
||||
file_status = await client.vector_stores.files.retrieve(
|
||||
vector_store_id=vector_store.id, file_id=file_info.id
|
||||
)
|
||||
|
||||
if file_status.status == "failed":
|
||||
print(f"Error: {file_status.last_error.message}")
|
||||
|
||||
# Monitor vector store health
|
||||
health = await client.vector_stores.health(vector_store_id=vector_store.id)
|
||||
print(f"Status: {health.status}")
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **File Organization**: Use descriptive names and organize by purpose
|
||||
2. **Chunking Strategy**: Test different sizes for your specific use case
|
||||
3. **Metadata**: Add relevant attributes for better filtering (see the filtered-search sketch after this list)
|
||||
4. **Monitoring**: Track processing status and search performance
|
||||
5. **Cleanup**: Regularly remove unused files to manage storage
|
||||
|
||||
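As an illustration of item 3, here is a hedged sketch of attaching a file with attributes and then filtering on them at search time, using the OpenAI client against the OpenAI-compatible endpoint shown in the cURL examples above. The `attributes` and `filters` parameter shapes follow the OpenAI Vector Store Files specification; the file ID, server URL, and store name are placeholders, and support may vary by provider:

```python
from openai import OpenAI

# Placeholder URL for a locally running Llama Stack server (OpenAI-compatible path).
client = OpenAI(base_url="http://localhost:8000/v1/openai/v1", api_key="none")

store = client.vector_stores.create(name="reports")

# Attach a previously uploaded file and record attributes for later filtering.
client.vector_stores.files.create(
    vector_store_id=store.id,
    file_id="file-abc123",  # placeholder: ID returned by the Files API upload
    attributes={"category": "quarterly", "year": 2024},
)

# Semantic search restricted to files whose category attribute equals "quarterly".
results = client.vector_stores.search(
    vector_store_id=store.id,
    query="revenue growth drivers",
    filters={"type": "eq", "key": "category", "value": "quarterly"},
    max_num_results=5,
)
for hit in results.data:
    print(hit.filename, hit.score)
```
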
## Future Enhancements
|
||||
|
||||
Planned improvements for file operations support:
|
||||
|
||||
- **Batch Processing**: Process multiple files simultaneously
|
||||
- **Advanced Chunking**: More sophisticated chunking algorithms
|
||||
- **Custom Embeddings**: Support for custom embedding models
|
||||
- **Real-time Updates**: Live file processing and indexing
|
||||
- **Multi-format Support**: Enhanced file format support
|
||||
|
||||
## Support and Resources
|
||||
|
||||
- **Documentation**: [File Operations and Vector Store Integration](../../concepts/file_operations_vector_stores.mdx)
|
||||
- **API Reference**: [Files API](files_api.md)
|
||||
- **Provider Docs**: [Vector Store Providers](../vector_io/index.md)
|
||||
- **Examples**: [Getting Started](../getting_started/index.md)
|
||||
- **Community**: [GitHub Discussions](https://github.com/meta-llama/llama-stack/discussions)
|
||||
27
docs/docs/providers/files/remote_openai.mdx
Normal file
|
|
@ -0,0 +1,27 @@
|
|||
---
|
||||
description: "OpenAI Files API provider for managing files through OpenAI's native file storage service."
|
||||
sidebar_label: Remote - Openai
|
||||
title: remote::openai
|
||||
---
|
||||
|
||||
# remote::openai
|
||||
|
||||
## Description
|
||||
|
||||
OpenAI Files API provider for managing files through OpenAI's native file storage service.
|
||||
|
||||
## Configuration
|
||||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `api_key` | `str` | No | | OpenAI API key for authentication |
|
||||
| `metadata_store` | `SqlStoreReference` | No | | SQL store configuration for file metadata |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
api_key: ${env.OPENAI_API_KEY}
|
||||
metadata_store:
|
||||
table_name: openai_files_metadata
|
||||
backend: sql_default
|
||||
```
|
||||
|
|
@ -14,13 +14,13 @@ AWS S3-based file storage provider for scalable cloud file management with metad
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `bucket_name` | `<class 'str'>` | No | | S3 bucket name to store files |
|
||||
| `region` | `<class 'str'>` | No | us-east-1 | AWS region where the bucket is located |
|
||||
| `bucket_name` | `str` | No | | S3 bucket name to store files |
|
||||
| `region` | `str` | No | us-east-1 | AWS region where the bucket is located |
|
||||
| `aws_access_key_id` | `str \| None` | No | | AWS access key ID (optional if using IAM roles) |
|
||||
| `aws_secret_access_key` | `str \| None` | No | | AWS secret access key (optional if using IAM roles) |
|
||||
| `endpoint_url` | `str \| None` | No | | Custom S3 endpoint URL (for MinIO, LocalStack, etc.) |
|
||||
| `auto_create_bucket` | `<class 'bool'>` | No | False | Automatically create the S3 bucket if it doesn't exist |
|
||||
| `metadata_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | SQL store configuration for file metadata |
|
||||
| `auto_create_bucket` | `bool` | No | False | Automatically create the S3 bucket if it doesn't exist |
|
||||
| `metadata_store` | `SqlStoreReference` | No | | SQL store configuration for file metadata |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
@ -32,6 +32,6 @@ aws_secret_access_key: ${env.AWS_SECRET_ACCESS_KEY:=}
|
|||
endpoint_url: ${env.S3_ENDPOINT_URL:=}
|
||||
auto_create_bucket: ${env.S3_AUTO_CREATE_BUCKET:=false}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/s3_files_metadata.db
|
||||
table_name: s3_files_metadata
|
||||
backend: sql_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -22,12 +22,25 @@ Importantly, Llama Stack always strives to provide at least one fully inline pro
|
|||
## Provider Categories
|
||||
|
||||
- **[External Providers](external/index.mdx)** - Guide for building and using external providers
|
||||
- **[OpenAI Compatibility](./openai.mdx)** - OpenAI API compatibility layer
|
||||
- **[OpenAI Compatibility](../api-openai/index.mdx)** - OpenAI API compatibility layer
|
||||
- **[Inference](inference/index.mdx)** - LLM and embedding model providers
|
||||
- **[Agents](agents/index.mdx)** - Agentic system providers
|
||||
- **[DatasetIO](datasetio/index.mdx)** - Dataset and data loader providers
|
||||
- **[Safety](safety/index.mdx)** - Content moderation and safety providers
|
||||
- **[Telemetry](telemetry/index.mdx)** - Monitoring and observability providers
|
||||
- **[Vector IO](vector_io/index.mdx)** - Vector database providers
|
||||
- **[Tool Runtime](tool_runtime/index.mdx)** - Tool and protocol providers
|
||||
- **[Files](files/index.mdx)** - File system and storage providers
|
||||
|
||||
## API Documentation
|
||||
|
||||
For comprehensive API documentation and reference:
|
||||
|
||||
- **[API Reference](../api/index.mdx)** - Complete API documentation
|
||||
- **[Experimental APIs](../api-experimental/index.mdx)** - APIs in development
|
||||
- **[Deprecated APIs](../api-deprecated/index.mdx)** - Legacy APIs being phased out
|
||||
- **[OpenAI Compatibility](../api-openai/index.mdx)** - OpenAI API compatibility guide
|
||||
|
||||
## Additional Provider Information
|
||||
|
||||
- **[OpenAI Implementation Guide](./openai.mdx)** - Code examples and implementation details for OpenAI APIs
|
||||
- **[OpenAI-Compatible Responses Limitations](./openai_responses_limitations.mdx)** - Known limitations of the Responses API in Llama Stack
|
||||
|
|
|
|||
|
|
@ -1,11 +1,13 @@
|
|||
---
|
||||
description: "Inference
|
||||
description: |
|
||||
Inference
|
||||
|
||||
Llama Stack Inference API for generating completions, chat completions, and embeddings.
|
||||
Llama Stack Inference API for generating completions, chat completions, and embeddings.
|
||||
|
||||
This API provides the raw interface to the underlying models. Two kinds of models are supported:
|
||||
- LLM models: these models generate \"raw\" and \"chat\" (conversational) completions.
|
||||
- Embedding models: these models generate embeddings to be used for semantic search."
|
||||
This API provides the raw interface to the underlying models. Three kinds of models are supported:
|
||||
- LLM models: these models generate "raw" and "chat" (conversational) completions.
|
||||
- Embedding models: these models generate embeddings to be used for semantic search.
|
||||
- Rerank models: these models reorder the documents based on their relevance to a query.
|
||||
sidebar_label: Inference
|
||||
title: Inference
|
||||
---
|
||||
|
|
@ -18,8 +20,9 @@ Inference
|
|||
|
||||
Llama Stack Inference API for generating completions, chat completions, and embeddings.
|
||||
|
||||
This API provides the raw interface to the underlying models. Two kinds of models are supported:
|
||||
This API provides the raw interface to the underlying models. Three kinds of models are supported:
|
||||
- LLM models: these models generate "raw" and "chat" (conversational) completions.
|
||||
- Embedding models: these models generate embeddings to be used for semantic search.
|
||||
- Rerank models: these models reorder the documents based on their relevance to a query.
|
||||
|
||||
This section contains documentation for all available providers for the **inference** API.
|
||||
|
|
|
|||
|
|
@ -16,12 +16,12 @@ Meta's reference implementation of inference with support for various model form
|
|||
|-------|------|----------|---------|-------------|
|
||||
| `model` | `str \| None` | No | | |
|
||||
| `torch_seed` | `int \| None` | No | | |
|
||||
| `max_seq_len` | `<class 'int'>` | No | 4096 | |
|
||||
| `max_batch_size` | `<class 'int'>` | No | 1 | |
|
||||
| `max_seq_len` | `int` | No | 4096 | |
|
||||
| `max_batch_size` | `int` | No | 1 | |
|
||||
| `model_parallel_size` | `int \| None` | No | | |
|
||||
| `create_distributed_process_group` | `<class 'bool'>` | No | True | |
|
||||
| `create_distributed_process_group` | `bool` | No | True | |
|
||||
| `checkpoint_dir` | `str \| None` | No | | |
|
||||
| `quantization` | `Bf16QuantizationConfig \| Fp8QuantizationConfig \| Int4QuantizationConfig, annotation=NoneType, required=True, discriminator='type'` | No | | |
|
||||
| `quantization` | `Bf16QuantizationConfig \| Fp8QuantizationConfig \| Int4QuantizationConfig \| None` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -14,9 +14,9 @@ Anthropic inference provider for accessing Claude models and Anthropic's AI serv
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `str \| None` | No | | API key for Anthropic models |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -21,10 +21,10 @@ https://learn.microsoft.com/en-us/azure/ai-foundry/openai/overview
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | Azure API key for Azure |
|
||||
| `api_base` | `<class 'pydantic.networks.HttpUrl'>` | No | | Azure API base for Azure (e.g., https://your-resource-name.openai.azure.com) |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | | Azure API base for Azure (e.g., https://your-resource-name.openai.azure.com/openai/v1) |
|
||||
| `api_version` | `str \| None` | No | | Azure API version for Azure (e.g., 2024-12-01-preview) |
|
||||
| `api_type` | `str \| None` | No | azure | Azure API type for Azure (e.g., azure) |
|
||||
|
||||
|
|
@ -32,7 +32,7 @@ https://learn.microsoft.com/en-us/azure/ai-foundry/openai/overview
|
|||
|
||||
```yaml
|
||||
api_key: ${env.AZURE_API_KEY:=}
|
||||
api_base: ${env.AZURE_API_BASE:=}
|
||||
base_url: ${env.AZURE_API_BASE:=}
|
||||
api_version: ${env.AZURE_API_VERSION:=}
|
||||
api_type: ${env.AZURE_API_TYPE:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
description: "AWS Bedrock inference provider for accessing various AI models through AWS's managed service."
|
||||
description: "AWS Bedrock inference provider using OpenAI compatible endpoint."
|
||||
sidebar_label: Remote - Bedrock
|
||||
title: remote::bedrock
|
||||
---
|
||||
|
|
@ -8,27 +8,20 @@ title: remote::bedrock
|
|||
|
||||
## Description
|
||||
|
||||
AWS Bedrock inference provider for accessing various AI models through AWS's managed service.
|
||||
AWS Bedrock inference provider using OpenAI compatible endpoint.
|
||||
|
||||
## Configuration
|
||||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `aws_access_key_id` | `str \| None` | No | | The AWS access key to use. Default use environment variable: AWS_ACCESS_KEY_ID |
|
||||
| `aws_secret_access_key` | `str \| None` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
|
||||
| `aws_session_token` | `str \| None` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |
|
||||
| `region_name` | `str \| None` | No | | The default AWS Region to use, for example, us-west-1 or us-west-2.Default use environment variable: AWS_DEFAULT_REGION |
|
||||
| `profile_name` | `str \| None` | No | | The profile name that contains credentials to use.Default use environment variable: AWS_PROFILE |
|
||||
| `total_max_attempts` | `int \| None` | No | | An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. Default use environment variable: AWS_MAX_ATTEMPTS |
|
||||
| `retry_mode` | `str \| None` | No | | A string representing the type of retries Boto3 will perform.Default use environment variable: AWS_RETRY_MODE |
|
||||
| `connect_timeout` | `float \| None` | No | 60.0 | The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. |
|
||||
| `read_timeout` | `float \| None` | No | 60.0 | The time in seconds till a timeout exception is thrown when attempting to read from a connection.The default is 60 seconds. |
|
||||
| `session_ttl` | `int \| None` | No | 3600 | The time in seconds till a session expires. The default is 3600 seconds (1 hour). |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `region_name` | `str` | No | us-east-2 | AWS Region for the Bedrock Runtime endpoint |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
{}
|
||||
api_key: ${env.AWS_BEARER_TOKEN_BEDROCK:=}
|
||||
region_name: ${env.AWS_DEFAULT_REGION:=us-east-2}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Cerebras inference provider for running models on Cerebras Cloud platform.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `base_url` | `<class 'str'>` | No | https://api.cerebras.ai | Base URL for the Cerebras API |
|
||||
| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | Cerebras API Key |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://api.cerebras.ai/v1 | Base URL for the Cerebras API |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
base_url: https://api.cerebras.ai
|
||||
base_url: https://api.cerebras.ai/v1
|
||||
api_key: ${env.CEREBRAS_API_KEY:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Databricks inference provider for running models on Databricks' unified analytic
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `str \| None` | No | | The URL for the Databricks model serving endpoint |
|
||||
| `api_token` | `<class 'pydantic.types.SecretStr'>` | No | | The Databricks API token |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_token` | `SecretStr \| None` | No | | The Databricks API token |
|
||||
| `base_url` | `HttpUrl \| None` | No | | The URL for the Databricks model serving endpoint (should include /serving-endpoints path) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.DATABRICKS_HOST:=}
|
||||
base_url: ${env.DATABRICKS_HOST:=}
|
||||
api_token: ${env.DATABRICKS_TOKEN:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Fireworks AI inference provider for Llama models and other AI models on the Fire
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | https://api.fireworks.ai/inference/v1 | The URL for the Fireworks server |
|
||||
| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Fireworks.ai API Key |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://api.fireworks.ai/inference/v1 | The URL for the Fireworks server |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: https://api.fireworks.ai/inference/v1
|
||||
base_url: https://api.fireworks.ai/inference/v1
|
||||
api_key: ${env.FIREWORKS_API_KEY:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,9 +14,9 @@ Google Gemini inference provider for accessing Gemini models and Google's AI ser
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `str \| None` | No | | API key for Gemini models |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Groq inference provider for ultra-fast inference using Groq's LPU technology.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `str \| None` | No | | The Groq API key |
|
||||
| `url` | `<class 'str'>` | No | https://api.groq.com | The URL for the Groq AI server |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://api.groq.com/openai/v1 | The URL for the Groq AI server |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: https://api.groq.com
|
||||
base_url: https://api.groq.com/openai/v1
|
||||
api_key: ${env.GROQ_API_KEY:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,8 +14,8 @@ HuggingFace Inference Endpoints provider for dedicated model serving.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `endpoint_name` | `<class 'str'>` | No | | The name of the Hugging Face Inference Endpoint in the format of '{namespace}/{endpoint_name}' (e.g. 'my-cool-org/meta-llama-3-1-8b-instruct-rce'). Namespace is optional and will default to the user account if not provided. |
|
||||
| `api_token` | `pydantic.types.SecretStr \| None` | No | | Your Hugging Face user access token (will default to locally saved token if not provided) |
|
||||
| `endpoint_name` | `str` | No | | The name of the Hugging Face Inference Endpoint in the format of '{namespace}/{endpoint_name}' (e.g. 'my-cool-org/meta-llama-3-1-8b-instruct-rce'). Namespace is optional and will default to the user account if not provided. |
|
||||
| `api_token` | `SecretStr \| None` | No | | Your Hugging Face user access token (will default to locally saved token if not provided) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -14,8 +14,8 @@ HuggingFace Inference API serverless provider for on-demand model inference.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `huggingface_repo` | `<class 'str'>` | No | | The model ID of the model on the Hugging Face Hub (e.g. 'meta-llama/Meta-Llama-3.1-70B-Instruct') |
|
||||
| `api_token` | `pydantic.types.SecretStr \| None` | No | | Your Hugging Face user access token (will default to locally saved token if not provided) |
|
||||
| `huggingface_repo` | `str` | No | | The model ID of the model on the Hugging Face Hub (e.g. 'meta-llama/Meta-Llama-3.1-70B-Instruct') |
|
||||
| `api_token` | `SecretStr \| None` | No | | Your Hugging Face user access token (will default to locally saved token if not provided) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Llama OpenAI-compatible provider for using Llama models with OpenAI API format.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `str \| None` | No | | The Llama API key |
|
||||
| `openai_compat_api_base` | `<class 'str'>` | No | https://api.llama.com/compat/v1/ | The URL for the Llama API server |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://api.llama.com/compat/v1/ | The URL for the Llama API server |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
openai_compat_api_base: https://api.llama.com/compat/v1/
|
||||
base_url: https://api.llama.com/compat/v1/
|
||||
api_key: ${env.LLAMA_API_KEY}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,17 +14,16 @@ NVIDIA inference provider for accessing NVIDIA NIM models and AI services.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | https://integrate.api.nvidia.com | A base url for accessing the NVIDIA NIM |
|
||||
| `api_key` | `pydantic.types.SecretStr \| None` | No | | The NVIDIA API key, only needed of using the hosted service |
|
||||
| `timeout` | `<class 'int'>` | No | 60 | Timeout for the HTTP requests |
|
||||
| `append_api_version` | `<class 'bool'>` | No | True | When set to false, the API version will not be appended to the base_url. By default, it is true. |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://integrate.api.nvidia.com/v1 | A base url for accessing the NVIDIA NIM |
|
||||
| `timeout` | `int` | No | 60 | Timeout for the HTTP requests |
|
||||
| `rerank_model_to_url` | `dict[str, str]` | No | `{'nv-rerank-qa-mistral-4b:1': 'https://ai.api.nvidia.com/v1/retrieval/nvidia/reranking', 'nvidia/nv-rerankqa-mistral-4b-v3': 'https://ai.api.nvidia.com/v1/retrieval/nvidia/nv-rerankqa-mistral-4b-v3/reranking', 'nvidia/llama-3.2-nv-rerankqa-1b-v2': 'https://ai.api.nvidia.com/v1/retrieval/nvidia/llama-3_2-nv-rerankqa-1b-v2/reranking'}` | Mapping of rerank model identifiers to their API endpoints. |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.NVIDIA_BASE_URL:=https://integrate.api.nvidia.com}
|
||||
base_url: ${env.NVIDIA_BASE_URL:=https://integrate.api.nvidia.com/v1}
|
||||
api_key: ${env.NVIDIA_API_KEY:=}
|
||||
append_api_version: ${env.NVIDIA_APPEND_API_VERSION:=True}
|
||||
```
|
||||
|
|
|
|||
41
docs/docs/providers/inference/remote_oci.mdx
Normal file
|
|
@ -0,0 +1,41 @@
|
|||
---
|
||||
description: |
|
||||
Oracle Cloud Infrastructure (OCI) Generative AI inference provider for accessing OCI's Generative AI Platform-as-a-Service models.
|
||||
Provider documentation
|
||||
https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm
|
||||
sidebar_label: Remote - Oci
|
||||
title: remote::oci
|
||||
---
|
||||
|
||||
# remote::oci
|
||||
|
||||
## Description
|
||||
|
||||
|
||||
Oracle Cloud Infrastructure (OCI) Generative AI inference provider for accessing OCI's Generative AI Platform-as-a-Service models.
|
||||
Provider documentation
|
||||
https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm
|
||||
|
||||
|
||||
## Configuration
|
||||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `oci_auth_type` | `str` | No | instance_principal | OCI authentication type (must be one of: instance_principal, config_file) |
|
||||
| `oci_region` | `str` | No | us-ashburn-1 | OCI region (e.g., us-ashburn-1) |
|
||||
| `oci_compartment_id` | `str` | No | | OCI compartment ID for the Generative AI service |
|
||||
| `oci_config_file_path` | `str` | No | ~/.oci/config | OCI config file path (required if oci_auth_type is config_file) |
|
||||
| `oci_config_profile` | `str` | No | DEFAULT | OCI config profile (required if oci_auth_type is config_file) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
oci_auth_type: ${env.OCI_AUTH_TYPE:=instance_principal}
|
||||
oci_config_file_path: ${env.OCI_CONFIG_FILE_PATH:=~/.oci/config}
|
||||
oci_config_profile: ${env.OCI_CLI_PROFILE:=DEFAULT}
|
||||
oci_region: ${env.OCI_REGION:=us-ashburn-1}
|
||||
oci_compartment_id: ${env.OCI_COMPARTMENT_OCID:=}
|
||||
```
|
||||
|
|
@ -14,12 +14,12 @@ Ollama inference provider for running local models through the Ollama runtime.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | http://localhost:11434 | |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | http://localhost:11434/v1 | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.OLLAMA_URL:=http://localhost:11434}
|
||||
base_url: ${env.OLLAMA_URL:=http://localhost:11434/v1}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,10 +14,10 @@ OpenAI inference provider for accessing GPT models and other OpenAI services.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `str \| None` | No | | API key for OpenAI models |
|
||||
| `base_url` | `<class 'str'>` | No | https://api.openai.com/v1 | Base URL for OpenAI API |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://api.openai.com/v1 | Base URL for OpenAI API |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Passthrough inference provider for connecting to any external inference service
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | | The URL for the passthrough endpoint |
|
||||
| `api_key` | `pydantic.types.SecretStr \| None` | No | | API Key for the passthrouth endpoint |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | | The URL for the passthrough endpoint |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.PASSTHROUGH_URL}
|
||||
base_url: ${env.PASSTHROUGH_URL}
|
||||
api_key: ${env.PASSTHROUGH_API_KEY}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ RunPod inference provider for running models on RunPod's cloud GPU platform.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `str \| None` | No | | The URL for the Runpod model serving endpoint |
|
||||
| `api_token` | `str \| None` | No | | The API token |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_token` | `SecretStr \| None` | No | | The API token |
|
||||
| `base_url` | `HttpUrl \| None` | No | | The URL for the Runpod model serving endpoint |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.RUNPOD_URL:=}
|
||||
base_url: ${env.RUNPOD_URL:=}
|
||||
api_token: ${env.RUNPOD_API_TOKEN}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ SambaNova inference provider for running models on SambaNova's dataflow architec
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | https://api.sambanova.ai/v1 | The URL for the SambaNova AI server |
|
||||
| `api_key` | `pydantic.types.SecretStr \| None` | No | | The SambaNova cloud API Key |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://api.sambanova.ai/v1 | The URL for the SambaNova AI server |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: https://api.sambanova.ai/v1
|
||||
base_url: https://api.sambanova.ai/v1
|
||||
api_key: ${env.SAMBANOVA_API_KEY:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,12 +14,12 @@ Text Generation Inference (TGI) provider for HuggingFace model serving.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | | The URL for the TGI serving endpoint |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | | The URL for the TGI serving endpoint (should include /v1 path) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.TGI_URL:=}
|
||||
base_url: ${env.TGI_URL:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Together AI inference provider for open-source models and collaborative AI devel
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | https://api.together.xyz/v1 | The URL for the Together AI server |
|
||||
| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Together AI API Key |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://api.together.xyz/v1 | The URL for the Together AI server |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: https://api.together.xyz/v1
|
||||
base_url: https://api.together.xyz/v1
|
||||
api_key: ${env.TOGETHER_API_KEY:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -53,10 +53,10 @@ Available Models:
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `project` | `<class 'str'>` | No | | Google Cloud project ID for Vertex AI |
|
||||
| `location` | `<class 'str'>` | No | us-central1 | Google Cloud location for Vertex AI |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `project` | `str` | No | | Google Cloud project ID for Vertex AI |
|
||||
| `location` | `str` | No | us-central1 | Google Cloud location for Vertex AI |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
|
|||
|
|
@ -14,17 +14,17 @@ Remote vLLM inference provider for connecting to vLLM servers.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `str \| None` | No | | The URL for the vLLM model serving endpoint |
|
||||
| `max_tokens` | `<class 'int'>` | No | 4096 | Maximum number of tokens to generate. |
|
||||
| `api_token` | `str \| None` | No | fake | The API token |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_token` | `SecretStr \| None` | No | | The API token |
|
||||
| `base_url` | `HttpUrl \| None` | No | | The URL for the vLLM model serving endpoint |
|
||||
| `max_tokens` | `int` | No | 4096 | Maximum number of tokens to generate. |
|
||||
| `tls_verify` | `bool \| str` | No | True | Whether to verify TLS certificates. Can be a boolean or a path to a CA certificate file. |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.VLLM_URL:=}
|
||||
base_url: ${env.VLLM_URL:=}
|
||||
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
|
||||
api_token: ${env.VLLM_API_TOKEN:=fake}
|
||||
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
|
||||
|
|
|
|||
|
|
@ -14,17 +14,17 @@ IBM WatsonX inference provider for accessing AI models on IBM's WatsonX platform
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `url` | `<class 'str'>` | No | https://us-south.ml.cloud.ibm.com | A base url for accessing the watsonx.ai |
|
||||
| `api_key` | `pydantic.types.SecretStr \| None` | No | | The watsonx.ai API key |
|
||||
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
|
||||
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
|
||||
| `api_key` | `SecretStr \| None` | No | | Authentication credential for the provider |
|
||||
| `base_url` | `HttpUrl \| None` | No | https://us-south.ml.cloud.ibm.com | A base url for accessing the watsonx.ai |
|
||||
| `project_id` | `str \| None` | No | | The watsonx.ai project ID |
|
||||
| `timeout` | `<class 'int'>` | No | 60 | Timeout for the HTTP requests |
|
||||
| `timeout` | `int` | No | 60 | Timeout for the HTTP requests |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.WATSONX_BASE_URL:=https://us-south.ml.cloud.ibm.com}
|
||||
base_url: ${env.WATSONX_BASE_URL:=https://us-south.ml.cloud.ibm.com}
|
||||
api_key: ${env.WATSONX_API_KEY:=}
|
||||
project_id: ${env.WATSONX_PROJECT_ID:=}
|
||||
```
|
||||
|
|
|
|||
|
|
@ -1,8 +1,14 @@
|
|||
title: OpenAI Compatibility
|
||||
description: OpenAI API Compatibility
|
||||
sidebar_label: OpenAI Compatibility
|
||||
sidebar_position: 1
|
||||
---
|
||||
title: OpenAI Implementation Guide
|
||||
description: Code examples and implementation details for OpenAI API compatibility
|
||||
sidebar_label: OpenAI Implementation
|
||||
sidebar_position: 2
|
||||
---
|
||||
|
||||
# OpenAI Implementation Guide
|
||||
|
||||
This guide provides detailed code examples and implementation details for using OpenAI-compatible APIs with Llama Stack. For a comprehensive overview of OpenAI compatibility features, see our [OpenAI API Compatibility Guide](../api-openai/index.mdx).
|
||||
|
||||
## OpenAI API Compatibility
|
||||
|
||||
### Server path
|
||||
|
|
@ -47,7 +53,7 @@ models = client.models.list()
|
|||
|
||||
#### Responses
|
||||
|
||||
> **Note:** The Responses API implementation is still in active development. While it is quite usable, there are still unimplemented parts of the API. We'd love feedback on any use-cases you try that do not work to help prioritize the pieces left to implement. Please open issues in the [meta-llama/llama-stack](https://github.com/meta-llama/llama-stack) GitHub repository with details of anything that does not work.
|
||||
> **Note:** The Responses API implementation is still in active development. While it is quite usable, there are still unimplemented parts of the API. See [Known Limitations of the OpenAI-compatible Responses API in Llama Stack](./openai_responses_limitations.mdx) for more details.
|
||||
|
||||
##### Simple inference
|
||||
|
||||
|
|
@@ -194,3 +200,9 @@ Lines of code unfurl

Logic whispers in the dark
Art in hidden form
```

## Additional Resources

- **[OpenAI API Compatibility Guide](../api-openai/index.mdx)** - Comprehensive overview of OpenAI compatibility features
- **[OpenAI Responses API Limitations](./openai_responses_limitations.mdx)** - Detailed limitations and known issues
- **[Provider Documentation](../index.mdx)** - Complete provider ecosystem overview
299
docs/docs/providers/openai_responses_limitations.mdx
Normal file

@@ -0,0 +1,299 @@
---
title: Known Limitations of the OpenAI-compatible Responses API in Llama Stack
description: Limitations of Responses API
sidebar_label: Limitations of Responses API
sidebar_position: 1
---

## Unresolved Issues

This document outlines known limitations and inconsistencies between Llama Stack's Responses API and OpenAI's Responses API. The comparison reflects OpenAI's API as of October 6, 2025 (OpenAI client version `openai==1.107`).
See the OpenAI [changelog](https://platform.openai.com/docs/changelog) for any functionality added since that date. Each limitation links to its tracking issue, so readers can check status, post comments, or subscribe for updates on the items that interest them. We also welcome feedback on any use cases that do not work, to help prioritize the pieces left to implement.
Please open new issues in the [meta-llama/llama-stack](https://github.com/meta-llama/llama-stack) GitHub repository with details of anything that does not work and does not already have an open issue.
### Instructions

**Status:** Partial Implementation + Work in Progress

**Issue:** [#3566](https://github.com/llamastack/llama-stack/issues/3566)

In Llama Stack, the `instructions` parameter is implemented for creating a response, but it is not yet echoed back in the output response object.
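
A minimal sketch of the part that already works, assuming a Llama Stack server at `http://localhost:8321/v1` and a registered model id (both are placeholders; adjust for your deployment):

```python
from openai import OpenAI

# Llama Stack exposes an OpenAI-compatible endpoint; the API key is unused locally.
client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder: any model registered with your stack
    instructions="Answer in one short sentence.",  # applied during generation
    input="What is the capital of France?",
)
print(resp.output_text)
# resp.instructions is not yet populated by Llama Stack (see issue #3566).
```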

---

### Streaming

**Status:** Partial Implementation

**Issue:** [#2364](https://github.com/llamastack/llama-stack/issues/2364)

Streaming for the Responses API is partially implemented: it works for common cases, but some of the streaming event types needed for full compatibility are still missing.
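
A minimal streaming sketch (server URL and model id are placeholders as above); common event types such as `response.output_text.delta` arrive, but not every OpenAI event type is emitted yet:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

# stream=True yields a sequence of events instead of a single response object.
stream = client.responses.create(
    model="your-model-id",  # placeholder
    input="Write one sentence about streaming.",
    stream=True,
)
for event in stream:
    # Text deltas carry the generated tokens; other event types may be absent.
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
```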

---

### Prompt Templates

**Status:** Partial Implementation

**Issue:** [#3321](https://github.com/llamastack/llama-stack/issues/3321)

OpenAI's platform supports [templated prompts using a structured language](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts). These templates can be stored server-side for organizational sharing. This feature is under development for Llama Stack.
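
For reference, this is the shape of a stored-prompt call in OpenAI's hosted API; the prompt id, model, and variables below are hypothetical, and this does not work against Llama Stack yet:

```python
from openai import OpenAI

client = OpenAI()  # pointed at OpenAI's hosted API, where this feature exists

resp = client.responses.create(
    model="gpt-4o-mini",  # placeholder model
    prompt={
        "id": "pmpt_1234567890abcdef",  # hypothetical stored-prompt id
        "variables": {"city": "Paris"},  # filled into the template server-side
    },
)
print(resp.output_text)
```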

---

### Web-search tool compatibility

**Status:** Partial Implementation

Both OpenAI and Llama Stack support a web-search built-in tool. The [OpenAI documentation](https://platform.openai.com/docs/api-reference/responses/create) for the web search tool in a Responses tool list says:

> The type of the web search tool. One of `web_search` or `web_search_2025_08_26`.

Llama Stack now supports both `web_search` and `web_search_2025_08_26` types, matching OpenAI's API. For backward compatibility, Llama Stack also supports the `web_search_preview` and `web_search_preview_2025_03_11` types.

The OpenAI web search tool also has `filters` and `user_location` fields, which are not yet implemented in Llama Stack. If feasible, it would be good to support these too.
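
A sketch using one of the supported tool types (server URL and model id are placeholders as above); the unsupported `filters` and `user_location` fields are simply omitted:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="What changed in the latest Llama Stack release?",
    tools=[{"type": "web_search"}],  # "web_search_preview" is also accepted
)
print(resp.output_text)
```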

---

### Other built-in Tools

**Status:** Partial Implementation

OpenAI's Responses API includes an ecosystem of built-in tools (e.g., code interpreter) that lower the barrier to entry for agentic workflows. These tools are typically aligned with specific model training.

**Current Status in Llama Stack:**
- Some built-in tools exist (file search, web search)
- Missing tools include code interpreter, computer use, and image generation
- Some built-in tools may require additional APIs (e.g., the [containers API](https://platform.openai.com/docs/api-reference/containers) for code interpreter)

It's unclear whether there is demand for additional built-in tools in Llama Stack. No upstream issues have been filed for adding more built-in tools.

---

### Response Branching

**Status:** Not Working

Response branching, as discussed in the [Agents vs OpenAI Responses API documentation](https://llamastack.github.io/docs/building_applications/responses_vs_agents), is not currently functional.
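
Branching means creating two independent continuations of the same parent response via `previous_response_id`. A sketch of the intended usage, which currently fails on Llama Stack (placeholders as above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

parent = client.responses.create(
    model="your-model-id",  # placeholder
    input="Suggest a name for a sci-fi novel.",
)

# Two sibling turns branch from the same parent response.
branch_a = client.responses.create(
    model="your-model-id",
    input="Make it darker.",
    previous_response_id=parent.id,
)
branch_b = client.responses.create(
    model="your-model-id",
    input="Make it funnier.",
    previous_response_id=parent.id,
)
```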

---

### Include

**Status:** Not Implemented

The `include` parameter lets you provide a list of values indicating additional information the system should include in the model response. The [OpenAI API](https://platform.openai.com/docs/api-reference/responses/create) specifies the following allowed values for this parameter:

- `web_search_call.action.sources`
- `code_interpreter_call.outputs`
- `computer_call_output.output.image_url`
- `file_search_call.results`
- `message.input_image.image_url`
- `message.output_text.logprobs`
- `reasoning.encrypted_content`

Some of these are not relevant to Llama Stack in its current form. For example, code interpreter is not implemented (see "Other built-in Tools" above), so `code_interpreter_call.outputs` would not be a useful directive to Llama Stack.

However, others might be useful. For example, `message.output_text.logprobs` can be useful for assessing how confident a model is in each token of its output.
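
What the parameter looks like in a request, using one of the values that would be useful for Llama Stack (placeholders as above); today the extra data is not returned:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="Answer yes or no: is water wet?",
    include=["message.output_text.logprobs"],  # request per-token logprobs
)
# With full support, output messages would carry logprobs alongside the text.
```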

---

### Tool Choice

**Status:** Not Implemented

**Issue:** [#3548](https://github.com/llamastack/llama-stack/issues/3548)

In OpenAI's API, the `tool_choice` parameter allows you to set restrictions or requirements for which tools should be used when generating a response. This feature is not implemented in Llama Stack.
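
A sketch of what `tool_choice` is meant to control, with a hypothetical `get_weather` function tool (placeholders as above); Llama Stack does not yet honor the restriction:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="What's the weather in Tokyo?",
    tools=[{
        "type": "function",
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    # "required" forces some tool call; {"type": "function", "name": "get_weather"}
    # would force this specific tool.
    tool_choice="required",
)
```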

---

### Safety Identification and Tracking

**Status:** Not Implemented

OpenAI's platform lets callers tag each request with a safety identifier so that agentic end users can be tracked. When requests violate moderation or safety rules, account holders are alerted and automated actions can be taken. This capability is not currently available in Llama Stack.

---

### Connectors

**Status:** Not Implemented

Connectors are MCP servers maintained and managed by the Responses API provider. OpenAI has documented its connectors at [https://platform.openai.com/docs/guides/tools-connectors-mcp](https://platform.openai.com/docs/guides/tools-connectors-mcp).

**Open Questions:**
- Should Llama Stack include built-in support for some, all, or none of OpenAI's connectors?
- Should there be a mechanism for administrators to add custom connectors via `run.yaml` or an API?

For reference, the sketch below shows the shape of a connector call in OpenAI's API.
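
In OpenAI's API, a connector is an MCP tool entry with a `connector_id` instead of a `server_url`; the model choice and OAuth token below are placeholders, and none of this works against Llama Stack today:

```python
from openai import OpenAI

client = OpenAI()  # OpenAI's hosted API, where connectors exist

resp = client.responses.create(
    model="gpt-4o-mini",  # placeholder hosted model
    input="List the files in my Dropbox folder.",
    tools=[{
        "type": "mcp",
        "server_label": "dropbox",
        "connector_id": "connector_dropbox",  # provider-managed MCP server
        "authorization": "<oauth-access-token>",  # placeholder credential
        "require_approval": "never",
    }],
)
```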

---

### Reasoning

**Status:** Partially Implemented

The `reasoning` object in the output of Responses works for inference providers such as vLLM that include reasoning traces in their chat completion responses. It does not work for other providers, such as OpenAI's hosted service. See [#3551](https://github.com/llamastack/llama-stack/issues/3551) for more details.

---

### Service Tier

**Status:** Not Implemented

**Issue:** [#3550](https://github.com/llamastack/llama-stack/issues/3550)

Responses has a `service_tier` field that can be used to prioritize access to inference resources. Not all inference providers have such a concept, but Llama Stack should pass this value through for those providers that do. Currently it does not.

---

### Top Logprobs

**Status:** Not Implemented

**Issue:** [#3552](https://github.com/llamastack/llama-stack/issues/3552)

The `top_logprobs` parameter from OpenAI's Responses API extends the functionality obtained by including `message.output_text.logprobs` in the `include` parameter list (as discussed in the Include section above): it also returns logprobs for the most likely alternative tokens.
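
What the combined request would look like once implemented (placeholders as above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="Pick a color.",
    include=["message.output_text.logprobs"],
    top_logprobs=3,  # also return the 3 most likely alternatives per token
)
```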

---

### Max Tool Calls

**Status:** Not Implemented

**Issue:** [#3563](https://github.com/llamastack/llama-stack/issues/3563)

The Responses API can accept a `max_tool_calls` parameter that limits the number of tool calls allowed to be executed for a given response. This feature needs full implementation and documentation.

---

### Max Output Tokens

**Status:** Not Implemented

**Issue:** [#3562](https://github.com/llamastack/llama-stack/issues/3562)

The `max_output_tokens` field limits how many tokens the model is allowed to generate (for both reasoning and output combined). It is not implemented in Llama Stack.
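
What the cap would do once implemented; the status check at the end relates to the Incomplete Details section below (placeholders as above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="Explain transformers in detail.",
    max_output_tokens=64,  # cap on reasoning + output tokens combined
)
if resp.status == "incomplete":
    # With full support, the reason here would be "max_output_tokens".
    print(resp.incomplete_details)
```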

---

### Incomplete Details

**Status:** Not Implemented

**Issue:** [#3567](https://github.com/llamastack/llama-stack/issues/3567)

The object returned by a Responses call includes a field indicating why a response is incomplete, if it is. For example, if the model stops generating because it has reached the specified max output tokens (see above), this field should be set to `IncompleteDetails(reason='max_output_tokens')`. This is not implemented in Llama Stack.

---

### Metadata

**Status:** Not Implemented

**Issue:** [#3564](https://github.com/llamastack/llama-stack/issues/3564)

Metadata allows you to attach additional information to a response for your own reference and tracking. It is not implemented in Llama Stack.
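
What attaching metadata looks like; the keys and values are free-form strings of your choosing (placeholders as above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="Hello!",
    metadata={"session_id": "demo-123", "experiment": "baseline"},
)
# With full support, resp.metadata would echo these key/value pairs back.
```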

---

### Background

**Status:** Not Implemented

**Issue:** [#3568](https://github.com/llamastack/llama-stack/issues/3568)

[Background mode](https://platform.openai.com/docs/guides/background) in OpenAI Responses lets you start a response-generation job and check back on it later. This is useful if you might lose a connection during generation and want to reconnect later to retrieve the response (for example, when the client runs in a mobile app). It is not implemented in Llama Stack.
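
The intended flow, sketched against OpenAI's documented behavior (placeholders as above); none of this works on Llama Stack yet:

```python
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="Write a long essay about resilience.",
    background=True,  # returns immediately with a queued response
)

# Reconnect later (even from another process) and poll by response id.
while resp.status in ("queued", "in_progress"):
    time.sleep(2)
    resp = client.responses.retrieve(resp.id)
print(resp.output_text)
```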

---

### Global Guardrails

**Status:** Feature Request

When calling the OpenAI Responses API, model outputs go through safety models configured by OpenAI administrators. Perhaps Llama Stack should provide a mechanism to configure safety models (or non-model logic) for all Responses requests, either through `run.yaml` or an administrative API.

---

### User-Controlled Guardrails

**Status:** Feature Request

**Issue:** [#3325](https://github.com/llamastack/llama-stack/issues/3325)

OpenAI has not released a way for users to configure their own guardrails. However, Llama Stack users may want this capability to complement or replace global guardrails. This could be implemented as a non-breaking, additive difference from the OpenAI API.

---

### MCP Elicitations

**Status:** Unknown

Elicitations allow MCP servers to request additional information from users through the client during interactions (e.g., a tool requesting a username before proceeding).
See the [MCP specification](https://modelcontextprotocol.io/specification/draft/client/elicitation) for details.

**Open Questions:**
- Does this work in OpenAI's Responses API reference implementation?
- If not, is there a reasonable way to make it work within the API as is? Or would the API need to change?
- Does this work in Llama Stack?

---

### MCP Sampling

**Status:** Unknown

Sampling allows MCP tools to query the generative AI model. See the [MCP specification](https://modelcontextprotocol.io/specification/draft/client/sampling) for details.

**Open Questions:**
- Does this work in OpenAI's Responses API reference implementation?
- If not, is there a reasonable way to make it work within the API as is? Or would the API need to change?
- Does this work in Llama Stack?

---

### Prompt Caching

**Status:** Unknown

OpenAI provides a [prompt caching](https://platform.openai.com/docs/guides/prompt-caching) mechanism in Responses that is enabled for its most recent models.

**Open Questions:**
- Does this work in Llama Stack?
- If not, could it work for inference providers that support caching by passing the provided `prompt_cache_key` through to the provider?
- Could it work for inference providers without this capability by doing some form of caching at the Llama Stack layer?
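
The client-side knob in OpenAI's API is `prompt_cache_key`, which groups requests that share a long common prefix; whether Llama Stack passes it through is the open question above (placeholders as before, and the long prompt is purely illustrative):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

LONG_SYSTEM_PROMPT = "You are a support agent. " * 200  # shared prefix worth caching

resp = client.responses.create(
    model="your-model-id",  # placeholder
    instructions=LONG_SYSTEM_PROMPT,
    input="How do I reset my password?",
    prompt_cache_key="support-agent-v1",  # hint to reuse the cached prefix
)
```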

---

### Parallel Tool Calls

**Status:** Rumored Issue

There are reports that `parallel_tool_calls` may not work correctly. This needs verification, and a ticket should be opened if confirmed.

---

## Resolved Issues

The following limitations have been addressed in recent releases:

### MCP and Function Tools with No Arguments

**Status:** ✅ Resolved

MCP and function tools now work correctly even when they have no arguments.

---

### `require_approval` Parameter for MCP Tools

**Status:** ✅ Resolved

The `require_approval` parameter for MCP tools in the Responses API now works correctly.
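
A sketch of an MCP tool with approvals disabled; the MCP server URL is a public example, and the other values are placeholders as above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="none")

resp = client.responses.create(
    model="your-model-id",  # placeholder
    input="What transport protocols does the MCP spec define?",
    tools=[{
        "type": "mcp",
        "server_label": "deepwiki",
        "server_url": "https://mcp.deepwiki.com/mcp",  # example public MCP server
        "require_approval": "never",  # skip the human-approval round trip
    }],
)
print(resp.output_text)
```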

---

### MCP Tools with Array-Type Arguments

**Status:** ✅ Resolved

**Fixed in:** [#3003](https://github.com/llamastack/llama-stack/pull/3003) (Agent API), [#3602](https://github.com/llamastack/llama-stack/pull/3602) (Responses API)

MCP tools now correctly handle array-type arguments in both the Agent API and Responses API.
@@ -14,23 +14,23 @@ HuggingFace-based post-training provider for fine-tuning models using the Huggin

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `device` | `<class 'str'>` | No | cuda | |
| `distributed_backend` | `Literal['fsdp', 'deepspeed'` | No | | |
| `checkpoint_format` | `Literal['full_state', 'huggingface'` | No | huggingface | |
| `chat_template` | `<class 'str'>` | No | `<|user|>`<br/>`{input}`<br/>`<|assistant|>`<br/>`{output}` | |
| `model_specific_config` | `<class 'dict'>` | No | `{'trust_remote_code': True, 'attn_implementation': 'sdpa'}` | |
| `max_seq_length` | `<class 'int'>` | No | 2048 | |
| `gradient_checkpointing` | `<class 'bool'>` | No | False | |
| `save_total_limit` | `<class 'int'>` | No | 3 | |
| `logging_steps` | `<class 'int'>` | No | 10 | |
| `warmup_ratio` | `<class 'float'>` | No | 0.1 | |
| `weight_decay` | `<class 'float'>` | No | 0.01 | |
| `dataloader_num_workers` | `<class 'int'>` | No | 4 | |
| `dataloader_pin_memory` | `<class 'bool'>` | No | True | |
| `dpo_beta` | `<class 'float'>` | No | 0.1 | |
| `use_reference_model` | `<class 'bool'>` | No | True | |
| `dpo_loss_type` | `Literal['sigmoid', 'hinge', 'ipo', 'kto_pair'` | No | sigmoid | |
| `dpo_output_dir` | `<class 'str'>` | No | | |
| `device` | `str` | No | cuda | |
| `distributed_backend` | `Literal[fsdp, deepspeed] \| None` | No | | |
| `checkpoint_format` | `Literal[full_state, huggingface] \| None` | No | huggingface | |
| `chat_template` | `str` | No | `<|user|>`<br/>`{input}`<br/>`<|assistant|>`<br/>`{output}` | |
| `model_specific_config` | `dict` | No | `{'trust_remote_code': True, 'attn_implementation': 'sdpa'}` | |
| `max_seq_length` | `int` | No | 2048 | |
| `gradient_checkpointing` | `bool` | No | False | |
| `save_total_limit` | `int` | No | 3 | |
| `logging_steps` | `int` | No | 10 | |
| `warmup_ratio` | `float` | No | 0.1 | |
| `weight_decay` | `float` | No | 0.01 | |
| `dataloader_num_workers` | `int` | No | 4 | |
| `dataloader_pin_memory` | `bool` | No | True | |
| `dpo_beta` | `float` | No | 0.1 | |
| `use_reference_model` | `bool` | No | True | |
| `dpo_loss_type` | `Literal[sigmoid, hinge, ipo, kto_pair]` | No | sigmoid | |
| `dpo_output_dir` | `str` | No | | |

## Sample Configuration

@@ -15,7 +15,7 @@ TorchTune-based post-training provider for fine-tuning and optimizing models usi

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `torch_seed` | `int \| None` | No | | |
| `checkpoint_format` | `Literal['meta', 'huggingface'` | No | meta | |
| `checkpoint_format` | `Literal[meta, huggingface] \| None` | No | meta | |

## Sample Configuration

@@ -15,7 +15,7 @@ TorchTune-based post-training provider for fine-tuning and optimizing models usi

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `torch_seed` | `int \| None` | No | | |
| `checkpoint_format` | `Literal['meta', 'huggingface'` | No | meta | |
| `checkpoint_format` | `Literal[meta, huggingface] \| None` | No | meta | |

## Sample Configuration

@@ -18,9 +18,9 @@ NVIDIA's post-training provider for fine-tuning models on NVIDIA's platform.

| `dataset_namespace` | `str \| None` | No | default | The NVIDIA dataset namespace. |
| `project_id` | `str \| None` | No | test-example-model@v1 | The NVIDIA project ID. |
| `customizer_url` | `str \| None` | No | | Base URL for the NeMo Customizer API |
| `timeout` | `<class 'int'>` | No | 300 | Timeout for the NVIDIA Post Training API |
| `max_retries` | `<class 'int'>` | No | 3 | Maximum number of retries for the NVIDIA Post Training API |
| `output_model_dir` | `<class 'str'>` | No | test-example-model@v1 | Directory to save the output model |
| `timeout` | `int` | No | 300 | Timeout for the NVIDIA Post Training API |
| `max_retries` | `int` | No | 3 | Maximum number of retries for the NVIDIA Post Training API |
| `output_model_dir` | `str` | No | test-example-model@v1 | Directory to save the output model |

## Sample Configuration

@@ -1,7 +1,8 @@
---
description: "Safety
description: |
  Safety

OpenAI-compatible Moderations API."
  OpenAI-compatible Moderations API.
sidebar_label: Safety
title: Safety
---

@@ -14,7 +14,7 @@ Llama Guard safety provider for content moderation and safety filtering using Me

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `excluded_categories` | `list[str` | No | [] | |
| `excluded_categories` | `list[str]` | No | [] | |

## Sample Configuration

@@ -14,7 +14,7 @@ Prompt Guard safety provider for detecting and filtering unsafe prompts and cont

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `guard_type` | `<class 'str'>` | No | injection | |
| `guard_type` | `str` | No | injection | |

## Sample Configuration

@@ -14,8 +14,8 @@ AWS Bedrock safety provider for content moderation using AWS's safety services.

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `refresh_models` | `bool` | No | False | Whether to refresh models periodically from the provider |
| `aws_access_key_id` | `str \| None` | No | | The AWS access key to use. Default use environment variable: AWS_ACCESS_KEY_ID |
| `aws_secret_access_key` | `str \| None` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
| `aws_session_token` | `str \| None` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |

@@ -14,7 +14,7 @@ NVIDIA's safety provider for content moderation and safety filtering.

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `guardrails_service_url` | `<class 'str'>` | No | http://0.0.0.0:7331 | The url for accessing the Guardrails service |
| `guardrails_service_url` | `str` | No | http://0.0.0.0:7331 | The url for accessing the Guardrails service |
| `config_id` | `str \| None` | No | self-check | Guardrails configuration ID to use from the Guardrails configuration store |

## Sample Configuration

@@ -14,8 +14,8 @@ SambaNova's safety provider for content moderation and safety filtering.

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `url` | `<class 'str'>` | No | https://api.sambanova.ai/v1 | The URL for the SambaNova AI server |
| `api_key` | `pydantic.types.SecretStr \| None` | No | | The SambaNova cloud API Key |
| `url` | `str` | No | https://api.sambanova.ai/v1 | The URL for the SambaNova AI server |
| `api_key` | `SecretStr \| None` | No | | The SambaNova cloud API Key |

## Sample Configuration

@@ -1,10 +0,0 @@
---
sidebar_label: Telemetry
title: Telemetry
---

# Telemetry

## Overview

This section contains documentation for all available providers for the **telemetry** API.
Some files were not shown because too many files have changed in this diff.