mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-12-04 10:10:36 +00:00
Merge upstream/main into add-mongodb-vector_io
Resolved conflicts: - Integrated MongoDB provider with newly added Qdrant and Weaviate providers - Updated distribution configs to include all three providers - Merged build.yaml and run.yaml configs for ci-tests, starter, and starter-gpu distributions - Updated starter.py to include MongoDB, Qdrant, and Weaviate provider initialization - Added MongoDB provider files to src/ directory structure - Updated MongoDB provider to use new VectorStore API (was VectorDB) - Updated MongoDB config to use KVStoreReference instead of KVStoreConfig - Applied auto-formatting changes from pre-commit hooks
This commit is contained in:
commit
efe9c04849
1820 changed files with 402590 additions and 32499 deletions
|
|
@ -13,6 +13,42 @@ npm run serve
|
|||
```
|
||||
You can open up the docs in your browser at http://localhost:3000
|
||||
|
||||
## File Import System
|
||||
|
||||
This documentation uses `remark-code-import` to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded during build time.
|
||||
|
||||
### Importing Code Files
|
||||
|
||||
To import Python code (or any code files) with syntax highlighting, use this syntax in `.mdx` files:
|
||||
|
||||
```markdown
|
||||
```python file=./demo_script.py title="demo_script.py"
|
||||
```
|
||||
```
|
||||
|
||||
This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.
|
||||
|
||||
**Note:** Paths are relative to the current `.mdx` file location, not the repository root.
|
||||
|
||||
### Importing Markdown Files as Content
|
||||
|
||||
For importing and rendering markdown files (like CONTRIBUTING.md), use the raw-loader approach:
|
||||
|
||||
```jsx
|
||||
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
|
||||
import ReactMarkdown from 'react-markdown';
|
||||
|
||||
<ReactMarkdown>{Contributing}</ReactMarkdown>
|
||||
```
|
||||
|
||||
**Requirements:**
|
||||
- Install dependencies: `npm install --save-dev raw-loader react-markdown`
|
||||
|
||||
**Path Resolution:**
|
||||
- For `remark-code-import`: Paths are relative to the current `.mdx` file location
|
||||
- For `raw-loader`: Paths are relative to the current `.mdx` file location
|
||||
- Use `../` to navigate up directories as needed
|
||||
|
||||
## Content
|
||||
|
||||
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
|
||||
|
|
|
|||
|
|
@ -51,8 +51,8 @@ device: cpu
|
|||
You can access the HuggingFace trainer via the `starter` distribution:
|
||||
|
||||
```bash
|
||||
llama stack build --distro starter --image-type venv
|
||||
llama stack run ~/.llama/distributions/starter/starter-run.yaml
|
||||
llama stack list-deps starter | xargs -L1 uv pip install
|
||||
llama stack run starter
|
||||
```
|
||||
|
||||
### Usage Example
|
||||
|
|
|
|||
|
|
@ -175,8 +175,7 @@ llama-stack-client benchmarks register \
|
|||
**1. Start the Llama Stack API Server**
|
||||
|
||||
```bash
|
||||
# Build and run a distribution (example: together)
|
||||
llama stack build --distro together --image-type venv
|
||||
llama stack list-deps together | xargs -L1 uv pip install
|
||||
llama stack run together
|
||||
```
|
||||
|
||||
|
|
@ -209,7 +208,7 @@ The playground works with any Llama Stack distribution. Popular options include:
|
|||
<TabItem value="together" label="Together AI">
|
||||
|
||||
```bash
|
||||
llama stack build --distro together --image-type venv
|
||||
llama stack list-deps together | xargs -L1 uv pip install
|
||||
llama stack run together
|
||||
```
|
||||
|
||||
|
|
@ -222,7 +221,7 @@ llama stack run together
|
|||
<TabItem value="ollama" label="Ollama (Local)">
|
||||
|
||||
```bash
|
||||
llama stack build --distro ollama --image-type venv
|
||||
llama stack list-deps ollama | xargs -L1 uv pip install
|
||||
llama stack run ollama
|
||||
```
|
||||
|
||||
|
|
@ -235,7 +234,7 @@ llama stack run ollama
|
|||
<TabItem value="meta-reference" label="Meta Reference">
|
||||
|
||||
```bash
|
||||
llama stack build --distro meta-reference --image-type venv
|
||||
llama stack list-deps meta-reference | xargs -L1 uv pip install
|
||||
llama stack run meta-reference
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@ -10,358 +10,114 @@ import TabItem from '@theme/TabItem';
|
|||
|
||||
# Retrieval Augmented Generation (RAG)
|
||||
|
||||
RAG enables your applications to reference and recall information from previous interactions or external documents.
|
||||
|
||||
RAG enables your applications to reference and recall information from external documents. Llama Stack makes Agentic RAG available through OpenAI's Responses API.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Start the Server
|
||||
|
||||
In one terminal, start the Llama Stack server:
|
||||
|
||||
```bash
|
||||
llama stack list-deps starter | xargs -L1 uv pip install
|
||||
llama stack run starter
|
||||
```
|
||||
|
||||
### 2. Connect with OpenAI Client
|
||||
|
||||
In another terminal, use the standard OpenAI client with the Responses API:
|
||||
|
||||
```python
|
||||
import io, requests
|
||||
from openai import OpenAI
|
||||
|
||||
url = "https://www.paulgraham.com/greatwork.html"
|
||||
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")
|
||||
|
||||
# Create vector store - auto-detects default embedding model
|
||||
vs = client.vector_stores.create()
|
||||
|
||||
response = requests.get(url)
|
||||
pseudo_file = io.BytesIO(str(response.content).encode('utf-8'))
|
||||
file_id = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants").id
|
||||
client.vector_stores.files.create(vector_store_id=vs.id, file_id=file_id)
|
||||
|
||||
resp = client.responses.create(
|
||||
model="gpt-4o",
|
||||
input="How do you do great work? Use the existing knowledge_search tool.",
|
||||
tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
|
||||
include=["file_search_call.results"],
|
||||
)
|
||||
|
||||
print(resp.output[-1].content[-1].text)
|
||||
```
|
||||
Which should give output like:
|
||||
```
|
||||
Doing great work is about more than just hard work and ambition; it involves combining several elements:
|
||||
|
||||
1. **Pursue What Excites You**: Engage in projects that are both ambitious and exciting to you. It's important to work on something you have a natural aptitude for and a deep interest in.
|
||||
|
||||
2. **Explore and Discover**: Great work often feels like a blend of discovery and creation. Focus on seeing possibilities and let ideas take their natural shape, rather than just executing a plan.
|
||||
|
||||
3. **Be Bold Yet Flexible**: Take bold steps in your work without over-planning. An adaptable approach that evolves with new ideas can often lead to breakthroughs.
|
||||
|
||||
4. **Work on Your Own Projects**: Develop a habit of working on projects of your own choosing, as these often lead to great achievements. These should be projects you find exciting and that challenge you intellectually.
|
||||
|
||||
5. **Be Earnest and Authentic**: Approach your work with earnestness and authenticity. Trying to impress others with affectation can be counterproductive, as genuine effort and intellectual honesty lead to better work outcomes.
|
||||
|
||||
6. **Build a Supportive Environment**: Work alongside great colleagues who inspire you and enhance your work. Surrounding yourself with motivating individuals creates a fertile environment for great work.
|
||||
|
||||
7. **Maintain High Morale**: High morale significantly impacts your ability to do great work. Stay optimistic and protect your mental well-being to maintain progress and momentum.
|
||||
|
||||
8. **Balance**: While hard work is essential, overworking can lead to diminishing returns. Balance periods of intensive work with rest to sustain productivity over time.
|
||||
|
||||
This approach shows that great work is less about following a strict formula and more about aligning your interests, ambition, and environment to foster creativity and innovation.
|
||||
```
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
Llama Stack organizes the APIs that enable RAG into three layers:
|
||||
Llama Stack provides OpenAI-compatible RAG capabilities through:
|
||||
|
||||
1. **Lower-Level APIs**: Deal with raw storage and retrieval. These include Vector IO, KeyValue IO (coming soon) and Relational IO (also coming soon)
|
||||
2. **RAG Tool**: A first-class tool as part of the [Tools API](./tools) that allows you to ingest documents (from URLs, files, etc) with various chunking strategies and query them smartly
|
||||
3. **Agents API**: The top-level [Agents API](./agent) that allows you to create agents that can use the tools to answer questions, perform tasks, and more
|
||||
- **Vector Stores API**: OpenAI-compatible vector storage with automatic embedding model detection
|
||||
- **Files API**: Document upload and processing using OpenAI's file format
|
||||
- **Responses API**: Enhanced chat completions with agentic tool calling via file search
|
||||
|
||||

|
||||
## Configuring Default Embedding Models
|
||||
|
||||
The RAG system uses lower-level storage for different types of data:
|
||||
- **Vector IO**: For semantic search and retrieval
|
||||
- **Key-Value and Relational IO**: For structured data storage
|
||||
To enable automatic vector store creation without specifying embedding models, configure a default embedding model in your run.yaml like so:
|
||||
|
||||
:::info[Future Storage Types]
|
||||
We may add more storage types like Graph IO in the future.
|
||||
:::
|
||||
|
||||
## Setting up Vector Databases
|
||||
|
||||
For this guide, we will use [Ollama](https://ollama.com/) as the inference provider. Ollama is an LLM runtime that allows you to run Llama models locally.
|
||||
|
||||
Here's how to set up a vector database for RAG:
|
||||
|
||||
```python
|
||||
# Create HTTP client
|
||||
import os
|
||||
from llama_stack_client import LlamaStackClient
|
||||
|
||||
client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")
|
||||
|
||||
# Register a vector database
|
||||
vector_db_id = "my_documents"
|
||||
response = client.vector_dbs.register(
|
||||
vector_db_id=vector_db_id,
|
||||
embedding_model="all-MiniLM-L6-v2",
|
||||
embedding_dimension=384,
|
||||
provider_id="faiss",
|
||||
)
|
||||
```yaml
|
||||
vector_stores:
|
||||
default_provider_id: faiss
|
||||
default_embedding_model:
|
||||
provider_id: sentence-transformers
|
||||
model_id: nomic-ai/nomic-embed-text-v1.5
|
||||
```
|
||||
|
||||
## Document Ingestion
|
||||
With this configuration:
|
||||
- `client.vector_stores.create()` works without requiring embedding model or provider parameters
|
||||
- The system automatically uses the default vector store provider (`faiss`) when multiple providers are available
|
||||
- The system automatically uses the default embedding model (`sentence-transformers/nomic-ai/nomic-embed-text-v1.5`) for any newly created vector store
|
||||
- The `default_provider_id` specifies which vector storage backend to use
|
||||
- The `default_embedding_model` specifies both the inference provider and model for embeddings
|
||||
|
||||
You can ingest documents into the vector database using two methods: directly inserting pre-chunked documents or using the RAG Tool.
|
||||
## Vector Store Operations
|
||||
|
||||
### Direct Document Insertion
|
||||
### Creating Vector Stores
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="basic" label="Basic Insertion">
|
||||
You can create vector stores with automatic or explicit embedding model selection:
|
||||
|
||||
```python
|
||||
# You can insert a pre-chunked document directly into the vector db
|
||||
chunks = [
|
||||
{
|
||||
"content": "Your document text here",
|
||||
"mime_type": "text/plain",
|
||||
"metadata": {
|
||||
"document_id": "doc1",
|
||||
"author": "Jane Doe",
|
||||
},
|
||||
},
|
||||
]
|
||||
client.vector_io.insert(vector_db_id=vector_db_id, chunks=chunks)
|
||||
```
|
||||
# Automatic - uses default configured embedding model and vector store provider
|
||||
vs = client.vector_stores.create()
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="embeddings" label="With Precomputed Embeddings">
|
||||
|
||||
If you decide to precompute embeddings for your documents, you can insert them directly into the vector database by including the embedding vectors in the chunk data. This is useful if you have a separate embedding service or if you want to customize the ingestion process.
|
||||
|
||||
```python
|
||||
chunks_with_embeddings = [
|
||||
{
|
||||
"content": "First chunk of text",
|
||||
"mime_type": "text/plain",
|
||||
"embedding": [0.1, 0.2, 0.3, ...], # Your precomputed embedding vector
|
||||
"metadata": {"document_id": "doc1", "section": "introduction"},
|
||||
},
|
||||
{
|
||||
"content": "Second chunk of text",
|
||||
"mime_type": "text/plain",
|
||||
"embedding": [0.2, 0.3, 0.4, ...], # Your precomputed embedding vector
|
||||
"metadata": {"document_id": "doc1", "section": "methodology"},
|
||||
},
|
||||
]
|
||||
client.vector_io.insert(vector_db_id=vector_db_id, chunks=chunks_with_embeddings)
|
||||
```
|
||||
|
||||
:::warning[Embedding Dimensions]
|
||||
When providing precomputed embeddings, ensure the embedding dimension matches the `embedding_dimension` specified when registering the vector database.
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Document Retrieval
|
||||
|
||||
You can query the vector database to retrieve documents based on their embeddings.
|
||||
|
||||
```python
|
||||
# You can then query for these chunks
|
||||
chunks_response = client.vector_io.query(
|
||||
vector_db_id=vector_db_id,
|
||||
query="What do you know about..."
|
||||
# Explicit - specify embedding model and/or provider when you need specific ones
|
||||
vs = client.vector_stores.create(
|
||||
extra_body={
|
||||
"provider_id": "faiss", # Optional: specify vector store provider
|
||||
"embedding_model": "sentence-transformers/nomic-ai/nomic-embed-text-v1.5",
|
||||
"embedding_dimension": 768 # Optional: will be auto-detected if not provided
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Using the RAG Tool
|
||||
|
||||
:::danger[Deprecation Notice]
|
||||
The RAG Tool is being deprecated in favor of directly using the OpenAI-compatible Search API. We recommend migrating to the OpenAI APIs for better compatibility and future support.
|
||||
:::
|
||||
|
||||
A better way to ingest documents is to use the RAG Tool. This tool allows you to ingest documents from URLs, files, etc. and automatically chunks them into smaller pieces. More examples for how to format a RAGDocument can be found in the [appendix](#more-ragdocument-examples).
|
||||
|
||||
### OpenAI API Integration & Migration
|
||||
|
||||
The RAG tool has been updated to use OpenAI-compatible APIs. This provides several benefits:
|
||||
|
||||
- **Files API Integration**: Documents are now uploaded using OpenAI's file upload endpoints
|
||||
- **Vector Stores API**: Vector storage operations use OpenAI's vector store format with configurable chunking strategies
|
||||
- **Error Resilience**: When processing multiple documents, individual failures are logged but don't crash the operation. Failed documents are skipped while successful ones continue processing.
|
||||
|
||||
### Migration Path
|
||||
|
||||
We recommend migrating to the OpenAI-compatible Search API for:
|
||||
|
||||
1. **Better OpenAI Ecosystem Integration**: Direct compatibility with OpenAI tools and workflows including the Responses API
|
||||
2. **Future-Proof**: Continued support and feature development
|
||||
3. **Full OpenAI Compatibility**: Vector Stores, Files, and Search APIs are fully compatible with OpenAI's Responses API
|
||||
|
||||
The OpenAI APIs are used under the hood, so you can continue to use your existing RAG Tool code with minimal changes. However, we recommend updating your code to use the new OpenAI-compatible APIs for better long-term support. If any documents fail to process, they will be logged in the response but will not cause the entire operation to fail.
|
||||
|
||||
### RAG Tool Example
|
||||
|
||||
```python
|
||||
from llama_stack_client import RAGDocument
|
||||
|
||||
urls = ["memory_optimizations.rst", "chat.rst", "llama3.rst"]
|
||||
documents = [
|
||||
RAGDocument(
|
||||
document_id=f"num-{i}",
|
||||
content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
|
||||
mime_type="text/plain",
|
||||
metadata={},
|
||||
)
|
||||
for i, url in enumerate(urls)
|
||||
]
|
||||
|
||||
client.tool_runtime.rag_tool.insert(
|
||||
documents=documents,
|
||||
vector_db_id=vector_db_id,
|
||||
chunk_size_in_tokens=512,
|
||||
)
|
||||
|
||||
# Query documents
|
||||
results = client.tool_runtime.rag_tool.query(
|
||||
vector_db_ids=[vector_db_id],
|
||||
content="What do you know about...",
|
||||
)
|
||||
```
|
||||
|
||||
### Custom Context Configuration
|
||||
|
||||
You can configure how the RAG tool adds metadata to the context if you find it useful for your application:
|
||||
|
||||
```python
|
||||
# Query documents with custom template
|
||||
results = client.tool_runtime.rag_tool.query(
|
||||
vector_db_ids=[vector_db_id],
|
||||
content="What do you know about...",
|
||||
query_config={
|
||||
"chunk_template": "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n",
|
||||
},
|
||||
)
|
||||
```
|
||||
|
||||
## Building RAG-Enhanced Agents
|
||||
|
||||
One of the most powerful patterns is combining agents with RAG capabilities. Here's a complete example:
|
||||
|
||||
### Agent with Knowledge Search
|
||||
|
||||
```python
|
||||
from llama_stack_client import Agent
|
||||
|
||||
# Create agent with memory
|
||||
agent = Agent(
|
||||
client,
|
||||
model="meta-llama/Llama-3.3-70B-Instruct",
|
||||
instructions="You are a helpful assistant",
|
||||
tools=[
|
||||
{
|
||||
"name": "builtin::rag/knowledge_search",
|
||||
"args": {
|
||||
"vector_db_ids": [vector_db_id],
|
||||
# Defaults
|
||||
"query_config": {
|
||||
"chunk_size_in_tokens": 512,
|
||||
"chunk_overlap_in_tokens": 0,
|
||||
"chunk_template": "Result {index}\nContent: {chunk.content}\nMetadata: {metadata}\n",
|
||||
},
|
||||
},
|
||||
}
|
||||
],
|
||||
)
|
||||
session_id = agent.create_session("rag_session")
|
||||
|
||||
# Ask questions about documents in the vector db, and the agent will query the db to answer the question.
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "How to optimize memory in PyTorch?"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
:::tip[Agent Instructions]
|
||||
The `instructions` field in the `AgentConfig` can be used to guide the agent's behavior. It is important to experiment with different instructions to see what works best for your use case.
|
||||
:::
|
||||
|
||||
### Document-Aware Conversations
|
||||
|
||||
You can also pass documents along with the user's message and ask questions about them:
|
||||
|
||||
```python
|
||||
# Initial document ingestion
|
||||
response = agent.create_turn(
|
||||
messages=[
|
||||
{"role": "user", "content": "I am providing some documents for reference."}
|
||||
],
|
||||
documents=[
|
||||
{
|
||||
"content": "https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/memory_optimizations.rst",
|
||||
"mime_type": "text/plain",
|
||||
}
|
||||
],
|
||||
session_id=session_id,
|
||||
)
|
||||
|
||||
# Query with RAG
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "What are the key topics in the documents?"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
### Viewing Agent Responses
|
||||
|
||||
You can print the response with the following:
|
||||
|
||||
```python
|
||||
from llama_stack_client import AgentEventLogger
|
||||
|
||||
for log in AgentEventLogger().log(response):
|
||||
log.print()
|
||||
```
|
||||
|
||||
## Vector Database Management
|
||||
|
||||
### Unregistering Vector DBs
|
||||
|
||||
If you need to clean up and unregister vector databases, you can do so as follows:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="single" label="Single Database">
|
||||
|
||||
```python
|
||||
# Unregister a specified vector database
|
||||
vector_db_id = "my_vector_db_id"
|
||||
print(f"Unregistering vector database: {vector_db_id}")
|
||||
client.vector_dbs.unregister(vector_db_id=vector_db_id)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="all" label="All Databases">
|
||||
|
||||
```python
|
||||
# Unregister all vector databases
|
||||
for vector_db_id in client.vector_dbs.list():
|
||||
print(f"Unregistering vector database: {vector_db_id.identifier}")
|
||||
client.vector_dbs.unregister(vector_db_id=vector_db_id.identifier)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 🎯 **Document Chunking**
|
||||
- Use appropriate chunk sizes (512 tokens is often a good starting point)
|
||||
- Consider overlap between chunks for better context preservation
|
||||
- Experiment with different chunking strategies for your content type
|
||||
|
||||
### 🔍 **Embedding Strategy**
|
||||
- Choose embedding models that match your domain
|
||||
- Consider the trade-off between embedding dimension and performance
|
||||
- Test different embedding models for your specific use case
|
||||
|
||||
### 📊 **Query Optimization**
|
||||
- Use specific, well-formed queries for better retrieval
|
||||
- Experiment with different search strategies
|
||||
- Consider hybrid approaches (keyword + semantic search)
|
||||
|
||||
### 🛡️ **Error Handling**
|
||||
- Implement proper error handling for failed document processing
|
||||
- Monitor ingestion success rates
|
||||
- Have fallback strategies for retrieval failures
|
||||
|
||||
## Appendix
|
||||
|
||||
### More RAGDocument Examples
|
||||
|
||||
Here are various ways to create RAGDocument objects for different content types:
|
||||
|
||||
```python
|
||||
from llama_stack_client import RAGDocument
|
||||
import base64
|
||||
|
||||
# File URI
|
||||
RAGDocument(document_id="num-0", content={"uri": "file://path/to/file"})
|
||||
|
||||
# Plain text
|
||||
RAGDocument(document_id="num-1", content="plain text")
|
||||
|
||||
# Explicit text input
|
||||
RAGDocument(
|
||||
document_id="num-2",
|
||||
content={
|
||||
"type": "text",
|
||||
"text": "plain text input",
|
||||
}, # for inputs that should be treated as text explicitly
|
||||
)
|
||||
|
||||
# Image from URL
|
||||
RAGDocument(
|
||||
document_id="num-3",
|
||||
content={
|
||||
"type": "image",
|
||||
"image": {"url": {"uri": "https://mywebsite.com/image.jpg"}},
|
||||
},
|
||||
)
|
||||
|
||||
# Base64 encoded image
|
||||
B64_ENCODED_IMAGE = base64.b64encode(
|
||||
requests.get(
|
||||
"https://raw.githubusercontent.com/meta-llama/llama-stack/refs/heads/main/docs/_static/llama-stack.png"
|
||||
).content
|
||||
)
|
||||
RAGDocument(
|
||||
document_id="num-4",
|
||||
content={"type": "image", "image": {"data": B64_ENCODED_IMAGE}},
|
||||
)
|
||||
```
|
||||
For more strongly typed interaction use the typed dicts found [here](https://github.com/meta-llama/llama-stack-client-python/blob/38cd91c9e396f2be0bec1ee96a19771582ba6f17/src/llama_stack_client/types/shared_params/document.py).
|
||||
|
|
|
|||
|
|
@ -391,5 +391,4 @@ client.shields.register(
|
|||
- **[Agents](./agent)** - Integrating safety shields with intelligent agents
|
||||
- **[Agent Execution Loop](./agent_execution_loop)** - Understanding safety in the execution flow
|
||||
- **[Evaluations](./evals)** - Evaluating safety shield effectiveness
|
||||
- **[Telemetry](./telemetry)** - Monitoring safety violations and metrics
|
||||
- **[Llama Guard Documentation](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)** - Advanced safety model details
|
||||
|
|
|
|||
|
|
@ -10,58 +10,8 @@ import TabItem from '@theme/TabItem';
|
|||
|
||||
# Telemetry
|
||||
|
||||
The Llama Stack telemetry system provides comprehensive tracing, metrics, and logging capabilities. It supports multiple sink types including OpenTelemetry, SQLite, and Console output for complete observability of your AI applications.
|
||||
The Llama Stack uses OpenTelemetry to provide comprehensive tracing, metrics, and logging capabilities.
|
||||
|
||||
## Event Types
|
||||
|
||||
The telemetry system supports three main types of events:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="unstructured" label="Unstructured Logs">
|
||||
|
||||
Free-form log messages with severity levels for general application logging:
|
||||
|
||||
```python
|
||||
unstructured_log_event = UnstructuredLogEvent(
|
||||
message="This is a log message",
|
||||
severity=LogSeverity.INFO
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="metrics" label="Metric Events">
|
||||
|
||||
Numerical measurements with units for tracking performance and usage:
|
||||
|
||||
```python
|
||||
metric_event = MetricEvent(
|
||||
metric="my_metric",
|
||||
value=10,
|
||||
unit="count"
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="structured" label="Structured Logs">
|
||||
|
||||
System events like span start/end that provide structured operation tracking:
|
||||
|
||||
```python
|
||||
structured_log_event = SpanStartPayload(
|
||||
name="my_span",
|
||||
parent_span_id="parent_span_id"
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Spans and Traces
|
||||
|
||||
- **Spans**: Represent individual operations with timing information and hierarchical relationships
|
||||
- **Traces**: Collections of related spans that form a complete request flow across your application
|
||||
|
||||
This hierarchical structure allows you to understand the complete execution path of requests through your Llama Stack application.
|
||||
|
||||
## Automatic Metrics Generation
|
||||
|
||||
|
|
@ -129,21 +79,6 @@ Send events to an OpenTelemetry Collector for integration with observability pla
|
|||
- Compatible with all OpenTelemetry collectors
|
||||
- Supports both traces and metrics
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="sqlite" label="SQLite">
|
||||
|
||||
Store events in a local SQLite database for direct querying:
|
||||
|
||||
**Use Cases:**
|
||||
- Local development and debugging
|
||||
- Custom analytics and reporting
|
||||
- Offline analysis of application behavior
|
||||
|
||||
**Features:**
|
||||
- Direct SQL querying capabilities
|
||||
- Persistent local storage
|
||||
- No external dependencies
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="console" label="Console">
|
||||
|
||||
|
|
@ -174,9 +109,8 @@ telemetry:
|
|||
provider_type: inline::meta-reference
|
||||
config:
|
||||
service_name: "llama-stack-service"
|
||||
sinks: ['console', 'sqlite', 'otel_trace', 'otel_metric']
|
||||
sinks: ['console', 'otel_trace', 'otel_metric']
|
||||
otel_exporter_otlp_endpoint: "http://localhost:4318"
|
||||
sqlite_db_path: "/path/to/telemetry.db"
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
|
@ -185,7 +119,7 @@ Configure telemetry behavior using environment variables:
|
|||
|
||||
- **`OTEL_EXPORTER_OTLP_ENDPOINT`**: OpenTelemetry Collector endpoint (default: `http://localhost:4318`)
|
||||
- **`OTEL_SERVICE_NAME`**: Service name for telemetry (default: empty string)
|
||||
- **`TELEMETRY_SINKS`**: Comma-separated list of sinks (default: `console,sqlite`)
|
||||
- **`TELEMETRY_SINKS`**: Comma-separated list of sinks (default: `[]`)
|
||||
|
||||
### Quick Setup: Complete Telemetry Stack
|
||||
|
||||
|
|
@ -248,37 +182,10 @@ Forward metrics to other observability systems:
|
|||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## SQLite Querying
|
||||
|
||||
The `sqlite` sink allows you to query traces without an external system. This is particularly useful for development and custom analytics.
|
||||
|
||||
### Example Queries
|
||||
|
||||
```sql
|
||||
-- Query recent traces
|
||||
SELECT * FROM traces WHERE timestamp > datetime('now', '-1 hour');
|
||||
|
||||
-- Analyze span durations
|
||||
SELECT name, AVG(duration_ms) as avg_duration
|
||||
FROM spans
|
||||
GROUP BY name
|
||||
ORDER BY avg_duration DESC;
|
||||
|
||||
-- Find slow operations
|
||||
SELECT * FROM spans
|
||||
WHERE duration_ms > 1000
|
||||
ORDER BY duration_ms DESC;
|
||||
```
|
||||
|
||||
:::tip[Advanced Analytics]
|
||||
Refer to the [Getting Started notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb) for more examples on querying traces and spans programmatically.
|
||||
:::
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 🔍 **Monitoring Strategy**
|
||||
- Use OpenTelemetry for production environments
|
||||
- Combine multiple sinks for development (console + SQLite)
|
||||
- Set up alerts on key metrics like token usage and error rates
|
||||
|
||||
### 📊 **Metrics Analysis**
|
||||
|
|
@ -293,45 +200,8 @@ Refer to the [Getting Started notebook](https://github.com/meta-llama/llama-stac
|
|||
|
||||
### 🔧 **Configuration Management**
|
||||
- Use environment variables for flexible deployment
|
||||
- Configure appropriate retention policies for SQLite
|
||||
- Ensure proper network access to OpenTelemetry collectors
|
||||
|
||||
## Integration Examples
|
||||
|
||||
### Basic Telemetry Setup
|
||||
|
||||
```python
|
||||
from llama_stack_client import LlamaStackClient
|
||||
|
||||
# Client with telemetry headers
|
||||
client = LlamaStackClient(
|
||||
base_url="http://localhost:8000",
|
||||
extra_headers={
|
||||
"X-Telemetry-Service": "my-ai-app",
|
||||
"X-Telemetry-Version": "1.0.0"
|
||||
}
|
||||
)
|
||||
|
||||
# All API calls will be automatically traced
|
||||
response = client.chat.completions.create(
|
||||
model="meta-llama/Llama-3.2-3B-Instruct",
|
||||
messages=[{"role": "user", "content": "Hello!"}]
|
||||
)
|
||||
```
|
||||
|
||||
### Custom Telemetry Context
|
||||
|
||||
```python
|
||||
# Add custom span attributes for better tracking
|
||||
with tracer.start_as_current_span("custom_operation") as span:
|
||||
span.set_attribute("user_id", "user123")
|
||||
span.set_attribute("operation_type", "chat_completion")
|
||||
|
||||
response = client.chat.completions.create(
|
||||
model="meta-llama/Llama-3.2-3B-Instruct",
|
||||
messages=[{"role": "user", "content": "Hello!"}]
|
||||
)
|
||||
```
|
||||
|
||||
## Related Resources
|
||||
|
||||
|
|
|
|||
|
|
@ -62,6 +62,10 @@ The new `/v2` API must be introduced alongside the existing `/v1` API and run in
|
|||
|
||||
When a `/v2` API is introduced, a clear and generous deprecation policy for the `/v1` API must be published simultaneously. This policy must outline the timeline for the eventual removal of the `/v1` API, giving users ample time to migrate.
|
||||
|
||||
### Deprecated APIs
|
||||
|
||||
Deprecated APIs are those that are no longer actively maintained or supported. Depreated APIs are marked with the flag `deprecated = True` in the OpenAPI spec. These APIs will be removed in a future release.
|
||||
|
||||
### API Stability vs. Provider Stability
|
||||
|
||||
The leveling introduced in this document relates to the stability of the API and not specifically the providers within the API.
|
||||
|
|
|
|||
|
|
@ -16,7 +16,6 @@ A Llama Stack API is described as a collection of REST endpoints. We currently s
|
|||
- **Scoring**: evaluate outputs of the system
|
||||
- **Eval**: generate outputs (via Inference or Agents) and perform scoring
|
||||
- **VectorIO**: perform operations on vector stores, such as adding documents, searching, and deleting documents
|
||||
- **Telemetry**: collect telemetry data from the system
|
||||
- **Post Training**: fine-tune a model
|
||||
- **Tool Runtime**: interact with various tools and protocols
|
||||
- **Responses**: generate responses from an LLM using this OpenAI compatible API.
|
||||
|
|
|
|||
|
|
@ -1,233 +1,13 @@
|
|||
# Contributing to Llama Stack
|
||||
We want to make contributing to this project as easy and transparent as
|
||||
possible.
|
||||
---
|
||||
title: Contributing
|
||||
description: Contributing to Llama Stack
|
||||
sidebar_label: Contributing to Llama Stack
|
||||
sidebar_position: 3
|
||||
hide_title: true
|
||||
---
|
||||
|
||||
## Set up your development environment
|
||||
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
|
||||
import ReactMarkdown from 'react-markdown';
|
||||
|
||||
We use [uv](https://github.com/astral-sh/uv) to manage python dependencies and virtual environments.
|
||||
You can install `uv` by following this [guide](https://docs.astral.sh/uv/getting-started/installation/).
|
||||
|
||||
You can install the dependencies by running:
|
||||
|
||||
```bash
|
||||
cd llama-stack
|
||||
uv sync --group dev
|
||||
uv pip install -e .
|
||||
source .venv/bin/activate
|
||||
```
|
||||
|
||||
```{note}
|
||||
You can use a specific version of Python with `uv` by adding the `--python <version>` flag (e.g. `--python 3.12`).
|
||||
Otherwise, `uv` will automatically select a Python version according to the `requires-python` section of the `pyproject.toml`.
|
||||
For more info, see the [uv docs around Python versions](https://docs.astral.sh/uv/concepts/python-versions/).
|
||||
```
|
||||
|
||||
Note that you can create a dotenv file `.env` that includes necessary environment variables:
|
||||
```
|
||||
LLAMA_STACK_BASE_URL=http://localhost:8321
|
||||
LLAMA_STACK_CLIENT_LOG=debug
|
||||
LLAMA_STACK_PORT=8321
|
||||
LLAMA_STACK_CONFIG=<provider-name>
|
||||
TAVILY_SEARCH_API_KEY=
|
||||
BRAVE_SEARCH_API_KEY=
|
||||
```
|
||||
|
||||
And then use this dotenv file when running client SDK tests via the following:
|
||||
```bash
|
||||
uv run --env-file .env -- pytest -v tests/integration/inference/test_text_inference.py --text-model=meta-llama/Llama-3.1-8B-Instruct
|
||||
```
|
||||
|
||||
### Pre-commit Hooks
|
||||
|
||||
We use [pre-commit](https://pre-commit.com/) to run linting and formatting checks on your code. You can install the pre-commit hooks by running:
|
||||
|
||||
```bash
|
||||
uv run pre-commit install
|
||||
```
|
||||
|
||||
After that, pre-commit hooks will run automatically before each commit.
|
||||
|
||||
Alternatively, if you don't want to install the pre-commit hooks, you can run the checks manually by running:
|
||||
|
||||
```bash
|
||||
uv run pre-commit run --all-files
|
||||
```
|
||||
|
||||
```{caution}
|
||||
Before pushing your changes, make sure that the pre-commit hooks have passed successfully.
|
||||
```
|
||||
|
||||
## Discussions -> Issues -> Pull Requests
|
||||
|
||||
We actively welcome your pull requests. However, please read the following. This is heavily inspired by [Ghostty](https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTING.md).
|
||||
|
||||
If in doubt, please open a [discussion](https://github.com/meta-llama/llama-stack/discussions); we can always convert that to an issue later.
|
||||
|
||||
### Issues
|
||||
We use GitHub issues to track public bugs. Please ensure your description is
|
||||
clear and has sufficient instructions to be able to reproduce the issue.
|
||||
|
||||
Meta has a [bounty program](http://facebook.com/whitehat/info) for the safe
|
||||
disclosure of security bugs. In those cases, please go through the process
|
||||
outlined on that page and do not file a public issue.
|
||||
|
||||
### Contributor License Agreement ("CLA")
|
||||
In order to accept your pull request, we need you to submit a CLA. You only need
|
||||
to do this once to work on any of Meta's open source projects.
|
||||
|
||||
Complete your CLA here: [https://code.facebook.com/cla](https://code.facebook.com/cla)
|
||||
|
||||
**I'd like to contribute!**
|
||||
|
||||
If you are new to the project, start by looking at the issues tagged with "good first issue". If you're interested
|
||||
leave a comment on the issue and a triager will assign it to you.
|
||||
|
||||
Please avoid picking up too many issues at once. This helps you stay focused and ensures that others in the community also have opportunities to contribute.
|
||||
- Try to work on only 1–2 issues at a time, especially if you’re still getting familiar with the codebase.
|
||||
- Before taking an issue, check if it’s already assigned or being actively discussed.
|
||||
- If you’re blocked or can’t continue with an issue, feel free to unassign yourself or leave a comment so others can step in.
|
||||
|
||||
**I have a bug!**
|
||||
|
||||
1. Search the issue tracker and discussions for similar issues.
|
||||
2. If you don't have steps to reproduce, open a discussion.
|
||||
3. If you have steps to reproduce, open an issue.
|
||||
|
||||
**I have an idea for a feature!**
|
||||
|
||||
1. Open a discussion.
|
||||
|
||||
**I've implemented a feature!**
|
||||
|
||||
1. If there is an issue for the feature, open a pull request.
|
||||
2. If there is no issue, open a discussion and link to your branch.
|
||||
|
||||
**I have a question!**
|
||||
|
||||
1. Open a discussion or use [Discord](https://discord.gg/llama-stack).
|
||||
|
||||
|
||||
**Opening a Pull Request**
|
||||
|
||||
1. Fork the repo and create your branch from `main`.
|
||||
2. If you've changed APIs, update the documentation.
|
||||
3. Ensure the test suite passes.
|
||||
4. Make sure your code lints using `pre-commit`.
|
||||
5. If you haven't already, complete the Contributor License Agreement ("CLA").
|
||||
6. Ensure your pull request follows the [conventional commits format](https://www.conventionalcommits.org/en/v1.0.0/).
|
||||
7. Ensure your pull request follows the [coding style](#coding-style).
|
||||
|
||||
|
||||
Please keep pull requests (PRs) small and focused. If you have a large set of changes, consider splitting them into logically grouped, smaller PRs to facilitate review and testing.
|
||||
|
||||
```{tip}
|
||||
As a general guideline:
|
||||
- Experienced contributors should try to keep no more than 5 open PRs at a time.
|
||||
- New contributors are encouraged to have only one open PR at a time until they’re familiar with the codebase and process.
|
||||
```
|
||||
|
||||
## Repository guidelines
|
||||
|
||||
### Coding Style
|
||||
|
||||
* Comments should provide meaningful insights into the code. Avoid filler comments that simply
|
||||
describe the next step, as they create unnecessary clutter, same goes for docstrings.
|
||||
* Prefer comments to clarify surprising behavior and/or relationships between parts of the code
|
||||
rather than explain what the next line of code does.
|
||||
* Catching exceptions, prefer using a specific exception type rather than a broad catch-all like
|
||||
`Exception`.
|
||||
* Error messages should be prefixed with "Failed to ..."
|
||||
* 4 spaces for indentation rather than tab
|
||||
* When using `# noqa` to suppress a style or linter warning, include a comment explaining the
|
||||
justification for bypassing the check.
|
||||
* When using `# type: ignore` to suppress a mypy warning, include a comment explaining the
|
||||
justification for bypassing the check.
|
||||
* Don't use unicode characters in the codebase. ASCII-only is preferred for compatibility or
|
||||
readability reasons.
|
||||
* Providers configuration class should be Pydantic Field class. It should have a `description` field
|
||||
that describes the configuration. These descriptions will be used to generate the provider
|
||||
documentation.
|
||||
* When possible, use keyword arguments only when calling functions.
|
||||
* Llama Stack utilizes custom Exception classes for certain Resources that should be used where applicable.
|
||||
|
||||
### License
|
||||
By contributing to Llama, you agree that your contributions will be licensed
|
||||
under the LICENSE file in the root directory of this source tree.
|
||||
|
||||
## Common Tasks
|
||||
|
||||
Some tips about common tasks you work on while contributing to Llama Stack:
|
||||
|
||||
### Using `llama stack build`
|
||||
|
||||
Building a stack image will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.
|
||||
|
||||
Example:
|
||||
```bash
|
||||
cd work/
|
||||
git clone https://github.com/meta-llama/llama-stack.git
|
||||
git clone https://github.com/meta-llama/llama-stack-client-python.git
|
||||
cd llama-stack
|
||||
LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --distro <...>
|
||||
```
|
||||
|
||||
### Updating distribution configurations
|
||||
|
||||
If you have made changes to a provider's configuration in any form (introducing a new config key, or
|
||||
changing models, etc.), you should run `./scripts/distro_codegen.py` to re-generate various YAML
|
||||
files as well as the documentation. You should not change `docs/source/.../distributions/` files
|
||||
manually as they are auto-generated.
|
||||
|
||||
### Updating the provider documentation
|
||||
|
||||
If you have made changes to a provider's configuration, you should run `./scripts/provider_codegen.py`
|
||||
to re-generate the documentation. You should not change `docs/source/.../providers/` files manually
|
||||
as they are auto-generated.
|
||||
Note that the provider "description" field will be used to generate the provider documentation.
|
||||
|
||||
### Building the Documentation
|
||||
|
||||
If you are making changes to the documentation at [https://llamastack.github.io/](https://llamastack.github.io/), you can use the following command to build the documentation and preview your changes.
|
||||
|
||||
```bash
|
||||
# This rebuilds the documentation pages and the OpenAPI spec.
|
||||
npm install
|
||||
npm run gen-api-docs all
|
||||
npm run build
|
||||
|
||||
# This will start a local server (usually at http://127.0.0.1:3000).
|
||||
npm run serve
|
||||
```
|
||||
|
||||
### Update API Documentation
|
||||
|
||||
If you modify or add new API endpoints, update the API documentation accordingly. You can do this by running the following command:
|
||||
|
||||
```bash
|
||||
uv run ./docs/openapi_generator/run_openapi_generator.sh
|
||||
```
|
||||
|
||||
The generated API schema will be available in `docs/static/`. Make sure to review the changes before committing.
|
||||
|
||||
## Adding a New Provider
|
||||
|
||||
See:
|
||||
- [Adding a New API Provider Page](./new_api_provider.mdx) which describes how to add new API providers to the Stack.
|
||||
- [Vector Database Page](./new_vector_database.mdx) which describes how to add a new vector databases with Llama Stack.
|
||||
- [External Provider Page](/docs/providers/external/) which describes how to add external providers to the Stack.
|
||||
|
||||
|
||||
## Testing
|
||||
|
||||
|
||||
See the [Testing README](https://github.com/meta-llama/llama-stack/blob/main/tests/README.md) for detailed testing information.
|
||||
|
||||
## Advanced Topics
|
||||
|
||||
For developers who need deeper understanding of the testing system internals:
|
||||
|
||||
- [Record-Replay Testing](./testing/record-replay.mdx)
|
||||
|
||||
### Benchmarking
|
||||
|
||||
See the [Benchmarking README](https://github.com/meta-llama/llama-stack/blob/main/benchmarking/k8s-benchmark/README.md) for benchmarking information.
|
||||
<ReactMarkdown>{Contributing}</ReactMarkdown>
|
||||
|
|
|
|||
|
|
@ -67,7 +67,7 @@ def get_base_url(self) -> str:
|
|||
|
||||
## Testing the Provider
|
||||
|
||||
Before running tests, you must have required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, you should install dependencies via `llama stack build --distro together`.
|
||||
Before running tests, you must have required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, install its dependencies with `llama stack list-deps together | xargs -L1 uv pip install`.
|
||||
|
||||
### 1. Integration Testing
|
||||
|
||||
|
|
|
|||
|
|
@ -5,225 +5,80 @@ sidebar_label: Build your own Distribution
|
|||
sidebar_position: 3
|
||||
---
|
||||
|
||||
This guide will walk you through the steps to get started with building a Llama Stack distribution from scratch with your choice of API providers.
|
||||
This guide walks you through inspecting existing distributions, customising their configuration, and building runnable artefacts for your own deployment.
|
||||
|
||||
### Explore existing distributions
|
||||
|
||||
### Setting your log level
|
||||
All first-party distributions live under `llama_stack/distributions/`. Each directory contains:
|
||||
|
||||
In order to specify the proper logging level users can apply the following environment variable `LLAMA_STACK_LOGGING` with the following format:
|
||||
- `build.yaml` – the distribution specification (providers, additional dependencies, optional external provider directories).
|
||||
- `run.yaml` – sample run configuration (when provided).
|
||||
- Documentation fragments that power this site.
|
||||
|
||||
`LLAMA_STACK_LOGGING=server=debug;core=info`
|
||||
|
||||
Where each category in the following list:
|
||||
|
||||
- all
|
||||
- core
|
||||
- server
|
||||
- router
|
||||
- inference
|
||||
- agents
|
||||
- safety
|
||||
- eval
|
||||
- tools
|
||||
- client
|
||||
|
||||
Can be set to any of the following log levels:
|
||||
|
||||
- debug
|
||||
- info
|
||||
- warning
|
||||
- error
|
||||
- critical
|
||||
|
||||
The default global log level is `info`. `all` sets the log level for all components.
|
||||
|
||||
A user can also set `LLAMA_STACK_LOG_FILE` which will pipe the logs to the specified path as well as to the terminal. An example would be: `export LLAMA_STACK_LOG_FILE=server.log`
|
||||
|
||||
### Llama Stack Build
|
||||
|
||||
In order to build your own distribution, we recommend you clone the `llama-stack` repository.
|
||||
|
||||
|
||||
```
|
||||
git clone git@github.com:meta-llama/llama-stack.git
|
||||
cd llama-stack
|
||||
pip install -e .
|
||||
```
|
||||
Use the CLI to build your distribution.
|
||||
The main points to consider are:
|
||||
1. **Image Type** - Do you want a venv environment or a Container (eg. Docker)
|
||||
2. **Template** - Do you want to use a template to build your distribution? or start from scratch ?
|
||||
3. **Config** - Do you want to use a pre-existing config file to build your distribution?
|
||||
|
||||
```
|
||||
llama stack build -h
|
||||
usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--distro DISTRIBUTION] [--list-distros] [--image-type {container,venv}] [--image-name IMAGE_NAME] [--print-deps-only]
|
||||
[--run] [--providers PROVIDERS]
|
||||
|
||||
Build a Llama stack container
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
--config CONFIG Path to a config file to use for the build. You can find example configs in llama_stack.cores/**/build.yaml. If this argument is not provided, you will be prompted to
|
||||
enter information interactively (default: None)
|
||||
--template TEMPLATE (deprecated) Name of the example template config to use for build. You may use `llama stack build --list-distros` to check out the available distributions (default:
|
||||
None)
|
||||
--distro DISTRIBUTION, --distribution DISTRIBUTION
|
||||
Name of the distribution to use for build. You may use `llama stack build --list-distros` to check out the available distributions (default: None)
|
||||
--list-distros, --list-distributions
|
||||
Show the available distributions for building a Llama Stack distribution (default: False)
|
||||
--image-type {container,venv}
|
||||
Image Type to use for the build. If not specified, will use the image type from the template config. (default: None)
|
||||
--image-name IMAGE_NAME
|
||||
[for image-type=container|venv] Name of the virtual environment to use for the build. If not specified, currently active environment will be used if found. (default:
|
||||
None)
|
||||
--print-deps-only Print the dependencies for the stack only, without building the stack (default: False)
|
||||
--run Run the stack after building using the same image type, name, and other applicable arguments (default: False)
|
||||
--providers PROVIDERS
|
||||
Build a config for a list of providers and only those providers. This list is formatted like: api1=provider1,api2=provider2. Where there can be multiple providers per
|
||||
API. (default: None)
|
||||
```
|
||||
|
||||
After this step is complete, a file named `<name>-build.yaml` and template file `<name>-run.yaml` will be generated and saved at the output file path specified at the end of the command.
|
||||
Browse that folder to understand available providers and copy a distribution to use as a starting point. When creating a new stack, duplicate an existing directory, rename it, and adjust the `build.yaml` file to match your requirements.
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="template" label="Building from a template">
|
||||
To build from alternative API providers, we provide distribution templates for users to get started building a distribution backed by different providers.
|
||||
<TabItem value="container" label="Building a container">
|
||||
|
||||
The following command will allow you to see the available templates and their corresponding providers.
|
||||
```
|
||||
llama stack build --list-templates
|
||||
Use the Containerfile at `containers/Containerfile`, which installs `llama-stack`, resolves distribution dependencies via `llama stack list-deps`, and sets the entrypoint to `llama stack run`.
|
||||
|
||||
```bash
|
||||
docker build . \
|
||||
-f containers/Containerfile \
|
||||
--build-arg DISTRO_NAME=starter \
|
||||
--tag llama-stack:starter
|
||||
```
|
||||
|
||||
```
|
||||
------------------------------+-----------------------------------------------------------------------------+
|
||||
| Template Name | Description |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| watsonx | Use watsonx for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| vllm-gpu | Use a built-in vLLM engine for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| together | Use Together.AI for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| tgi | Use (an external) TGI server for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| starter | Quick start template for running Llama Stack with several popular providers |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| sambanova | Use SambaNova for running LLM inference and safety |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| remote-vllm | Use (an external) vLLM server for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| postgres-demo | Quick start template for running Llama Stack with several popular providers |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| passthrough | Use Passthrough hosted llama-stack endpoint for LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| open-benchmark | Distribution for running open benchmarks |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| ollama | Use (an external) Ollama server for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| nvidia | Use NVIDIA NIM for running LLM inference, evaluation and safety |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| meta-reference-gpu | Use Meta Reference for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| llama_api | Distribution for running e2e tests in CI |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| hf-serverless | Use (an external) Hugging Face Inference Endpoint for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| hf-endpoint | Use (an external) Hugging Face Inference Endpoint for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| groq | Use Groq for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| fireworks | Use Fireworks.AI for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| experimental-post-training | Experimental template for post training |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| dell | Dell's distribution of Llama Stack. TGI inference via Dell's custom |
|
||||
| | container |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| ci-tests | Distribution for running e2e tests in CI |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| cerebras | Use Cerebras for running LLM inference |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
| bedrock | Use AWS Bedrock for running LLM inference and safety |
|
||||
+------------------------------+-----------------------------------------------------------------------------+
|
||||
```
|
||||
Handy build arguments:
|
||||
|
||||
You may then pick a template to build your distribution with providers fitted to your liking.
|
||||
- `DISTRO_NAME` – distribution directory name (defaults to `starter`).
|
||||
- `RUN_CONFIG_PATH` – absolute path inside the build context for a run config that should be baked into the image (e.g. `/workspace/run.yaml`).
|
||||
- `INSTALL_MODE=editable` – install the repository copied into `/workspace` with `uv pip install -e`. Pair it with `--build-arg LLAMA_STACK_DIR=/workspace`.
|
||||
- `LLAMA_STACK_CLIENT_DIR` – optional editable install of the Python client.
|
||||
- `PYPI_VERSION` / `TEST_PYPI_VERSION` – pin specific releases when not using editable installs.
|
||||
- `KEEP_WORKSPACE=1` – retain `/workspace` in the final image if you need to access additional files (such as sample configs or provider bundles).
|
||||
|
||||
For example, to build the `starter` distribution, you can run:

```
$ llama stack build --distro starter
...
You can now edit ~/.llama/distributions/llamastack-starter/starter-run.yaml and run `llama stack run ~/.llama/distributions/llamastack-starter/starter-run.yaml`
```

Make sure any custom `build.yaml`, run configs, or provider directories you reference are included in the Docker build context so the Containerfile can read them.

```{tip}
The generated `run.yaml` file is a starting point for your configuration. For comprehensive guidance on customizing it for your specific needs, infrastructure, and deployment scenarios, see [Customizing Your run.yaml Configuration](customizing_run_yaml.md).
```
</TabItem>

<TabItem value="scratch" label="Building from Scratch">

If the provided templates do not fit your use case, you can start by running `llama stack build` with no arguments, which launches an interactive wizard that prompts you for the build configuration.

It is best to start from a template first and understand the structure of the config file and the various concepts (APIs, providers, resources, etc.) before building from scratch.

```
llama stack build

> Enter a name for your Llama Stack (e.g. my-local-stack): my-stack
> Enter the image type you want your Llama Stack to be built as (container or venv): venv

Llama Stack is composed of several APIs working together. Let's select
the provider types (implementations) you want to use for these APIs.

Tip: use <TAB> to see options for the providers.

> Enter provider for API inference: inline::meta-reference
> Enter provider for API safety: inline::llama-guard
> Enter provider for API agents: inline::meta-reference
> Enter provider for API memory: inline::faiss
> Enter provider for API datasetio: inline::meta-reference
> Enter provider for API scoring: inline::meta-reference
> Enter provider for API eval: inline::meta-reference
> Enter provider for API telemetry: inline::meta-reference

> (Optional) Enter a short description for your Llama Stack:

You can now edit ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml and run `llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml`
```
</TabItem>
<TabItem value="config" label="Building from a pre-existing build config file">
|
||||
- In addition to templates, you may customize the build to your liking through editing config files and build from config files with the following command.
|
||||
|
||||
- The config file will be of contents like the ones in `llama_stack/distributions/*build.yaml`.
|
||||
|
||||
```
|
||||
llama stack build --config llama_stack/distributions/starter/build.yaml
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="external" label="Building with External Providers">
|
||||
|
||||
Llama Stack supports external providers that live outside of the main codebase. This allows you to create and maintain your own providers independently or use community-provided providers.
|
||||
|
||||
To build a distribution with external providers, you need to:
|
||||
|
||||
1. Configure the `external_providers_dir` in your build configuration file:
|
||||
Example `build.yaml` excerpt for a custom Ollama provider:
|
||||
|
||||
```yaml
|
||||
# Example my-external-stack.yaml with external providers
|
||||
version: '2'
|
||||
distribution_spec:
|
||||
description: Custom distro for CI tests
|
||||
providers:
|
||||
inference:
|
||||
- remote::custom_ollama
|
||||
# Add more providers as needed
|
||||
image_type: container
|
||||
image_name: ci-test
|
||||
# Path to external provider implementations
|
||||
external_providers_dir: ~/.llama/providers.d
|
||||
- remote::custom_ollama
|
||||
external_providers_dir: /workspace/providers.d
|
||||
```
|
||||
|
||||
Inside `providers.d/custom_ollama/provider.py`, define `get_provider_spec()` so the CLI can discover dependencies:
|
||||
|
||||
```python
|
||||
from llama_stack.providers.datatypes import ProviderSpec
|
||||
|
||||
|
||||
def get_provider_spec() -> ProviderSpec:
|
||||
return ProviderSpec(
|
||||
provider_type="remote::custom_ollama",
|
||||
module="llama_stack_ollama_provider",
|
||||
config_class="llama_stack_ollama_provider.config.OllamaImplConfig",
|
||||
pip_packages=[
|
||||
"ollama",
|
||||
"aiohttp",
|
||||
"llama-stack-provider-ollama",
|
||||
],
|
||||
)
|
||||
```
|
||||
|
||||
Here's an example for a custom Ollama provider:
|
||||
```yaml
adapter:
  adapter_type: custom_ollama
  pip_packages:
  - ollama
  - aiohttp
  - llama-stack-provider-ollama # This is the provider package
  config_class: llama_stack_ollama_provider.config.OllamaImplConfig
  module: llama_stack_ollama_provider
api_dependencies: []
```
The `pip_packages` section lists the Python packages required by the provider, as well as the provider package itself. The package must be available on PyPI or can be provided from a local directory or a git repository (git must be installed on the build environment).

2. Build your distribution using the config file:

```
llama stack build --config my-external-stack.yaml
```

For more information on external providers, including directory structure, provider types, and implementation requirements, see the [External Providers documentation](../providers/external/).
</TabItem>
<TabItem value="container" label="Building Container">
|
||||
</Tabs>
|
||||
|
||||
:::tip Podman Alternative
|
||||
Podman is supported as an alternative to Docker. Set `CONTAINER_BINARY` to `podman` in your environment to use Podman.
|
||||
:::
|
||||
### Run your stack server
|
||||
|
||||
To build a container image, you may start off from a template and use the `--image-type container` flag to specify `container` as the build image type.
|
||||
|
||||
```
|
||||
llama stack build --distro starter --image-type container
|
||||
```
|
||||
|
||||
```
|
||||
$ llama stack build --distro starter --image-type container
|
||||
...
|
||||
Containerfile created successfully in /tmp/tmp.viA3a3Rdsg/ContainerfileFROM python:3.10-slim
|
||||
...
|
||||
```
|
||||
|
||||
You can now edit ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml and run `llama stack run ~/meta-llama/llama-stack/tmp/configs/ollama-run.yaml`
|
||||
```
|
||||
|
||||
Now set some environment variables for the inference model ID and Llama Stack Port and create a local directory to mount into the container's file system.
|
||||
After building the image, launch it directly with Docker or Podman—the entrypoint calls `llama stack run` using the baked distribution or the bundled run config:
|
||||
|
||||
```bash
|
||||
export INFERENCE_MODEL="llama3.2:3b"
|
||||
export LLAMA_STACK_PORT=8321
|
||||
mkdir -p ~/.llama
|
||||
```
|
||||
|
||||
After this step is successful, you should be able to find the built container image and test it with the below Docker command:
|
||||
|
||||
```
|
||||
docker run -d \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v ~/.llama:/root/.llama \
|
||||
-e INFERENCE_MODEL=$INFERENCE_MODEL \
|
||||
-e OLLAMA_URL=http://host.docker.internal:11434 \
|
||||
localhost/distribution-ollama:dev \
|
||||
llama-stack:starter \
|
||||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
|
|
Here are the docker flags and their uses:

* `--port $LLAMA_STACK_PORT`: Port number for the server to listen on

</TabItem>
</Tabs>


### Running your Stack server

Now, let's start the Llama Stack Distribution Server. You will need the YAML configuration file which was written out at the end by the `llama stack build` step.

If you prepared a custom run config, mount it into the container and reference it explicitly:

```bash
docker run \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v $(pwd)/run.yaml:/app/run.yaml \
  llama-stack:starter \
  /app/run.yaml
```
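If you use Podman instead of Docker, the same invocation works with the binary swapped; the `:z` volume label below is only needed on SELinux-enforcing hosts and is an assumption about your setup:

```bash
podman run \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v $(pwd)/run.yaml:/app/run.yaml:z \
  llama-stack:starter \
  /app/run.yaml
```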
```
llama stack run -h
usage: llama stack run [-h] [--port PORT] [--image-name IMAGE_NAME]
                       [--image-type {venv}] [--enable-ui]
                       [config | distro]

Start the server for a Llama Stack Distribution. You should have already built (or downloaded) and configured the distribution.

positional arguments:
  config | distro       Path to config file to use for the run or name of known distro (`llama stack list` for a list). (default: None)

options:
  -h, --help            show this help message and exit
  --port PORT           Port to run the server on. It can also be passed via the env var LLAMA_STACK_PORT. (default: 8321)
  --image-name IMAGE_NAME
                        [DEPRECATED] This flag is no longer supported. Please activate your virtual environment before running. (default: None)
  --image-type {venv}
                        [DEPRECATED] This flag is no longer supported. Please activate your virtual environment before running. (default: None)
  --enable-ui           Start the UI server (default: False)
```

**Note:** Container images built with `llama stack build --image-type container` cannot be run using `llama stack run`. Instead, they must be run directly using Docker or Podman commands as shown in the container building section above.

```
# Start using template name
llama stack run tgi

# Start using config file
llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
```
```
$ llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml

Serving API inspect
 GET /health
 GET /providers/list
 GET /routes/list
Serving API inference
 POST /inference/chat_completion
 POST /inference/completion
 POST /inference/embeddings
...
Serving API agents
 POST /agents/create
 POST /agents/session/create
 POST /agents/turn/create
 POST /agents/delete
 POST /agents/session/delete
 POST /agents/session/get
 POST /agents/step/get
 POST /agents/turn/get

Listening on ['::', '0.0.0.0']:8321
INFO:     Started server process [2935911]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
INFO:     2401:db00:35c:2d2b:face:0:c9:0:54678 - "GET /models/list HTTP/1.1" 200 OK
```
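Once the server reports it is listening, you can point the CLI client at it from another terminal to confirm it is reachable. This assumes the default port and that you have `llama-stack-client` installed:

```bash
llama-stack-client configure --endpoint http://localhost:8321 --api-key none
llama-stack-client models list
```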
### Listing Distributions

Using the list command, you can view all existing Llama Stack distributions, including stacks built from templates, from scratch, or using custom configuration files.

```
llama stack list -h
usage: llama stack list [-h]

list the build stacks

options:
  -h, --help  show this help message and exit
```

Example Usage

```
llama stack list
```

```
+--------------+------------------------------------------+--------------+------------+
| Stack Name   | Path                                     | Build Config | Run Config |
+--------------+------------------------------------------+--------------+------------+
| together     | ~/.llama/distributions/together          | Yes          | No         |
+--------------+------------------------------------------+--------------+------------+
| bedrock      | ~/.llama/distributions/bedrock           | Yes          | No         |
+--------------+------------------------------------------+--------------+------------+
| starter      | ~/.llama/distributions/starter           | Yes          | Yes        |
+--------------+------------------------------------------+--------------+------------+
| remote-vllm  | ~/.llama/distributions/remote-vllm       | Yes          | Yes        |
+--------------+------------------------------------------+--------------+------------+
```
### Removing a Distribution

Use the remove command to delete a distribution you've previously built.

```
llama stack rm -h
usage: llama stack rm [-h] [--all] [name]

Remove the build stack

positional arguments:
  name        Name of the stack to delete (default: None)

options:
  -h, --help  show this help message and exit
  --all, -a   Delete all stacks (use with caution) (default: False)
```

Example
```
llama stack rm llamastack-test
```

To keep your environment organized and avoid clutter, consider using `llama stack list` to review old or unused distributions and `llama stack rm <name>` to delete them when they're no longer needed.

### Troubleshooting

If you encounter any issues, ask questions in our Discord, search through our [GitHub Issues](https://github.com/meta-llama/llama-stack/issues), or file a new issue.
|
|
|||
|
|
@ -21,7 +21,6 @@ apis:
|
|||
- inference
|
||||
- vector_io
|
||||
- safety
|
||||
- telemetry
|
||||
providers:
|
||||
inference:
|
||||
- provider_id: ollama
|
||||
|
|
@ -44,18 +43,36 @@ providers:
|
|||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
persistence_store:
|
||||
type: sqlite
|
||||
namespace: null
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/agents_store.db
|
||||
telemetry:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config: {}
|
||||
metadata_store:
|
||||
namespace: null
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/registry.db
|
||||
persistence:
|
||||
agent_state:
|
||||
backend: kv_default
|
||||
namespace: agents
|
||||
responses:
|
||||
backend: sql_default
|
||||
table_name: responses
|
||||
storage:
|
||||
backends:
|
||||
kv_default:
|
||||
type: kv_sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/kvstore.db
|
||||
sql_default:
|
||||
type: sql_sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/sqlstore.db
|
||||
stores:
|
||||
metadata:
|
||||
backend: kv_default
|
||||
namespace: registry
|
||||
inference:
|
||||
backend: sql_default
|
||||
table_name: inference_store
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
conversations:
|
||||
backend: sql_default
|
||||
table_name: openai_conversations
|
||||
prompts:
|
||||
backend: kv_default
|
||||
namespace: prompts
|
||||
models:
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
|
|
@ -78,7 +95,6 @@ apis:
|
|||
- inference
|
||||
- vector_io
|
||||
- safety
|
||||
- telemetry
|
||||
```
|
||||
|
||||
## Providers
|
||||
|
|
In addition to resource-based access control, Llama Stack supports endpoint-level authorization using OAuth 2.0 style scopes. When authentication is enabled, specific API endpoints require users to have particular scopes in their authentication token.

**Scope-Gated APIs:**
The following APIs are currently gated by scopes:

- **Telemetry API** (scope: `telemetry.read`):
  - `POST /telemetry/traces` - Query traces
  - `GET /telemetry/traces/{trace_id}` - Get trace by ID
  - `GET /telemetry/traces/{trace_id}/spans/{span_id}` - Get span by ID
  - `POST /telemetry/spans/{span_id}/tree` - Get span tree
  - `POST /telemetry/spans` - Query spans
  - `POST /telemetry/metrics/{metric_name}` - Query metrics

**Authentication Configuration:**

For **JWT/OAuth2 providers**, the required scope should be included in the JWT's claims:
```json
{
  "sub": "user123",
  "scope": "telemetry.read",
  "aud": "llama-stack"
}
```
For **custom authentication providers**, the endpoint must return user attributes that include the required scopes:

```json
{
  "principal": "user123",
  "attributes": {
    "scopes": ["telemetry.read"]
  }
}
```
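As a quick sanity check, you can call one of the scope-gated endpoints with a bearer token whose claims carry the scope. The host, token, and request body below are placeholders, and the exact route prefix for the telemetry API may differ in your deployment:

```bash
curl -X POST http://localhost:8321/v1/telemetry/traces \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{}'
```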
This avoids the overhead of setting up a server.

```bash
# setup
uv pip install llama-stack
llama stack list-deps starter | xargs -L1 uv pip install
```
|
||||
```python
|
||||
|
|
|
|||
|
|
apiVersion: v1
data:
stack_run_config.yaml: |
|
||||
version: '2'
|
||||
image_name: kubernetes-demo
|
||||
apis:
|
||||
- agents
|
||||
- inference
|
||||
- files
|
||||
- safety
|
||||
- telemetry
|
||||
- tool_runtime
|
||||
- vector_io
|
||||
providers:
|
||||
inference:
|
||||
- provider_id: vllm-inference
|
||||
provider_type: remote::vllm
|
||||
config:
|
||||
url: ${env.VLLM_URL:=http://localhost:8000/v1}
|
||||
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
|
||||
api_token: ${env.VLLM_API_TOKEN:=fake}
|
||||
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
|
||||
- provider_id: vllm-safety
|
||||
provider_type: remote::vllm
|
||||
config:
|
||||
url: ${env.VLLM_SAFETY_URL:=http://localhost:8000/v1}
|
||||
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
|
||||
api_token: ${env.VLLM_API_TOKEN:=fake}
|
||||
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
|
||||
- provider_id: sentence-transformers
|
||||
provider_type: inline::sentence-transformers
|
||||
config: {}
|
||||
vector_io:
|
||||
- provider_id: ${env.ENABLE_CHROMADB:+chromadb}
|
||||
provider_type: remote::chromadb
|
||||
config:
|
||||
url: ${env.CHROMADB_URL:=}
|
||||
kvstore:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
files:
|
||||
- provider_id: meta-reference-files
|
||||
provider_type: inline::localfs
|
||||
config:
|
||||
storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/distributions/starter/files}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/starter}/files_metadata.db
|
||||
safety:
|
||||
- provider_id: llama-guard
|
||||
provider_type: inline::llama-guard
|
||||
config:
|
||||
excluded_categories: []
|
||||
agents:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
persistence_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
responses_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
telemetry:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
service_name: "${env.OTEL_SERVICE_NAME:=\u200B}"
|
||||
sinks: ${env.TELEMETRY_SINKS:=console}
|
||||
tool_runtime:
|
||||
- provider_id: brave-search
|
||||
provider_type: remote::brave-search
|
||||
config:
|
||||
api_key: ${env.BRAVE_SEARCH_API_KEY:+}
|
||||
max_results: 3
|
||||
- provider_id: tavily-search
|
||||
provider_type: remote::tavily-search
|
||||
config:
|
||||
api_key: ${env.TAVILY_SEARCH_API_KEY:+}
|
||||
max_results: 3
|
||||
- provider_id: rag-runtime
|
||||
provider_type: inline::rag-runtime
|
||||
config: {}
|
||||
- provider_id: model-context-protocol
|
||||
provider_type: remote::model-context-protocol
|
||||
config: {}
|
||||
storage:
|
||||
backends:
|
||||
kv_default:
|
||||
type: kv_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
table_name: ${env.POSTGRES_TABLE_NAME:=llamastack_kvstore}
|
||||
sql_default:
|
||||
type: sql_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
stores:
|
||||
metadata:
|
||||
backend: kv_default
|
||||
namespace: registry
|
||||
inference:
|
||||
backend: sql_default
|
||||
table_name: inference_store
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
conversations:
|
||||
backend: sql_default
|
||||
table_name: openai_conversations
|
||||
prompts:
|
||||
backend: kv_default
|
||||
namespace: prompts
|
||||
models:
|
||||
- metadata:
|
||||
embedding_dimension: 768
|
||||
model_id: nomic-embed-text-v1.5
|
||||
provider_id: sentence-transformers
|
||||
model_type: embedding
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
provider_id: vllm-inference
|
||||
model_type: llm
|
||||
- metadata: {}
|
||||
model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
provider_id: vllm-safety
|
||||
model_type: llm
|
||||
shields:
|
||||
- shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
vector_dbs: []
|
||||
datasets: []
|
||||
scoring_fns: []
|
||||
benchmarks: []
|
||||
tool_groups:
|
||||
- toolgroup_id: builtin::websearch
|
||||
provider_id: tavily-search
|
||||
- toolgroup_id: builtin::rag
|
||||
provider_id: rag-runtime
|
||||
server:
|
||||
port: 8321
|
||||
auth:
|
||||
provider_config:
|
||||
type: github_token
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: llama-stack-config
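To load this configuration into a cluster, apply the ConfigMap and reference it from your Llama Stack Deployment. The file name below is an assumption for illustration:

```bash
kubectl apply -f stack-configmap.yaml
kubectl get configmap llama-stack-config -o yaml   # verify it was created
```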
|
||||
|
|
|
|||
|
|
@ -5,7 +5,6 @@ apis:
|
|||
- inference
|
||||
- files
|
||||
- safety
|
||||
- telemetry
|
||||
- tool_runtime
|
||||
- vector_io
|
||||
providers:
|
||||
|
|
@ -32,21 +31,17 @@ providers:
|
|||
provider_type: remote::chromadb
|
||||
config:
|
||||
url: ${env.CHROMADB_URL:=}
|
||||
kvstore:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
persistence:
|
||||
namespace: vector_io::chroma_remote
|
||||
backend: kv_default
|
||||
files:
|
||||
- provider_id: meta-reference-files
|
||||
provider_type: inline::localfs
|
||||
config:
|
||||
storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/distributions/starter/files}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/starter}/files_metadata.db
|
||||
table_name: files_metadata
|
||||
backend: sql_default
|
||||
safety:
|
||||
- provider_id: llama-guard
|
||||
provider_type: inline::llama-guard
|
||||
|
|
@ -56,26 +51,15 @@ providers:
|
|||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
persistence_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
responses_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
telemetry:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
service_name: "${env.OTEL_SERVICE_NAME:=\u200B}"
|
||||
sinks: ${env.TELEMETRY_SINKS:=console}
|
||||
persistence:
|
||||
agent_state:
|
||||
namespace: agents
|
||||
backend: kv_default
|
||||
responses:
|
||||
table_name: responses
|
||||
backend: sql_default
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
tool_runtime:
|
||||
- provider_id: brave-search
|
||||
provider_type: remote::brave-search
|
||||
|
|
@ -93,48 +77,73 @@ providers:
|
|||
- provider_id: model-context-protocol
|
||||
provider_type: remote::model-context-protocol
|
||||
config: {}
|
||||
metadata_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
table_name: llamastack_kvstore
|
||||
inference_store:
|
||||
type: postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
models:
|
||||
- metadata:
|
||||
embedding_dimension: 384
|
||||
model_id: all-MiniLM-L6-v2
|
||||
provider_id: sentence-transformers
|
||||
model_type: embedding
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
provider_id: vllm-inference
|
||||
model_type: llm
|
||||
- metadata: {}
|
||||
model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
provider_id: vllm-safety
|
||||
model_type: llm
|
||||
shields:
|
||||
- shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
vector_dbs: []
|
||||
datasets: []
|
||||
scoring_fns: []
|
||||
benchmarks: []
|
||||
tool_groups:
|
||||
- toolgroup_id: builtin::websearch
|
||||
provider_id: tavily-search
|
||||
- toolgroup_id: builtin::rag
|
||||
provider_id: rag-runtime
|
||||
storage:
|
||||
backends:
|
||||
kv_default:
|
||||
type: kv_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
table_name: ${env.POSTGRES_TABLE_NAME:=llamastack_kvstore}
|
||||
sql_default:
|
||||
type: sql_postgres
|
||||
host: ${env.POSTGRES_HOST:=localhost}
|
||||
port: ${env.POSTGRES_PORT:=5432}
|
||||
db: ${env.POSTGRES_DB:=llamastack}
|
||||
user: ${env.POSTGRES_USER:=llamastack}
|
||||
password: ${env.POSTGRES_PASSWORD:=llamastack}
|
||||
stores:
|
||||
metadata:
|
||||
namespace: registry
|
||||
backend: kv_default
|
||||
inference:
|
||||
table_name: inference_store
|
||||
backend: sql_default
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
conversations:
|
||||
table_name: openai_conversations
|
||||
backend: sql_default
|
||||
prompts:
|
||||
namespace: prompts
|
||||
backend: kv_default
|
||||
registered_resources:
|
||||
models:
|
||||
- metadata:
|
||||
embedding_dimension: 768
|
||||
model_id: nomic-embed-text-v1.5
|
||||
provider_id: sentence-transformers
|
||||
model_type: embedding
|
||||
- metadata: {}
|
||||
model_id: ${env.INFERENCE_MODEL}
|
||||
provider_id: vllm-inference
|
||||
model_type: llm
|
||||
- metadata: {}
|
||||
model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
provider_id: vllm-safety
|
||||
model_type: llm
|
||||
shields:
|
||||
- shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
|
||||
vector_dbs: []
|
||||
datasets: []
|
||||
scoring_fns: []
|
||||
benchmarks: []
|
||||
tool_groups:
|
||||
- toolgroup_id: builtin::websearch
|
||||
provider_id: tavily-search
|
||||
- toolgroup_id: builtin::rag
|
||||
provider_id: rag-runtime
|
||||
server:
|
||||
port: 8321
|
||||
auth:
|
||||
provider_config:
|
||||
type: github_token
|
||||
telemetry:
|
||||
enabled: true
|
||||
vector_stores:
|
||||
default_provider_id: chromadb
|
||||
default_embedding_model:
|
||||
provider_id: sentence-transformers
|
||||
model_id: nomic-ai/nomic-embed-text-v1.5
|
||||
|
|
|
|||
|
|
Start a Llama Stack server on localhost. Here is an example of how you can do this:

```bash
uv venv starter --python 3.12
source starter/bin/activate  # On Windows: starter\Scripts\activate
pip install --no-cache llama-stack==0.2.2
llama stack list-deps starter | xargs -L1 uv pip install
export FIREWORKS_API_KEY=<SOME_KEY>
llama stack run starter --port 5050
```
|
|
|
|||
|
|
Remote-Hosted distributions are available endpoints serving Llama Stack API that you can directly connect to.

| Distribution | Endpoint | Inference | Agents | Memory | Safety |
|-------------|----------|-----------|---------|---------|---------|
| Together | [https://llama-stack.together.ai](https://llama-stack.together.ai) | remote::together | meta-reference | remote::weaviate | meta-reference |
| Fireworks | [https://llamastack-preview.fireworks.ai](https://llamastack-preview.fireworks.ai) | remote::fireworks | meta-reference | remote::weaviate | meta-reference |
|
||||
## Connecting to Remote-Hosted Distributions
|
||||
|
||||
|
|
|
|||
|
|
@ -21,7 +21,6 @@ The `llamastack/distribution-watsonx` distribution consists of the following pro
|
|||
| inference | `remote::watsonx`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss` |
|
||||
|
||||
|
|
|
|||
|
|
The `llamastack/distribution-tgi` distribution consists of the following provider configurations.

| **API** | **Inference** | **Agents** | **Memory** | **Safety** |
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |
| **Provider(s)** | remote::tgi | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference |

The only difference vs. the `tgi` distribution is that it runs the Dell-TGI server for inference.
|
|
|
|||
|
|
@ -22,7 +22,6 @@ The `llamastack/distribution-dell` distribution consists of the following provid
|
|||
| inference | `remote::tgi`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime` |
|
||||
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
### Via venv

Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available. Install the distribution dependencies before launching:

```bash
llama stack list-deps dell | xargs -L1 uv pip install

INFERENCE_MODEL=$INFERENCE_MODEL \
DEH_URL=$DEH_URL \
CHROMA_URL=$CHROMA_URL \
|
|
|
|||
|
|
@ -21,7 +21,6 @@ The `llamastack/distribution-meta-reference-gpu` distribution consists of the fo
|
|||
| inference | `inline::meta-reference` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
### Via venv

Make sure you have the Llama Stack CLI available.

```bash
llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install

INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
llama stack run distributions/meta-reference-gpu/run.yaml \
  --port 8321
|
|
|
|||
|
|
@ -16,7 +16,6 @@ The `llamastack/distribution-nvidia` distribution consists of the following prov
|
|||
| post_training | `remote::nvidia` |
|
||||
| safety | `remote::nvidia` |
|
||||
| scoring | `inline::basic` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `inline::rag-runtime` |
|
||||
| vector_io | `inline::faiss` |
|
||||
|
||||
|
|
### Via venv

If you've set up your local development environment, you can also install the distribution dependencies using your local virtual environment.

```bash
INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
llama stack list-deps nvidia | xargs -L1 uv pip install
NVIDIA_API_KEY=$NVIDIA_API_KEY \
INFERENCE_MODEL=$INFERENCE_MODEL \
llama stack run ./run.yaml \
|
|
|
|||
|
|
@ -21,7 +21,6 @@ The `llamastack/distribution-passthrough` distribution consists of the following
|
|||
| inference | `remote::passthrough`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `remote::wolfram-alpha`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
|
|||
|
|
@ -26,7 +26,6 @@ The starter distribution consists of the following provider configurations:
|
|||
| inference | `remote::openai`, `remote::fireworks`, `remote::together`, `remote::ollama`, `remote::anthropic`, `remote::gemini`, `remote::groq`, `remote::sambanova`, `remote::vllm`, `remote::tgi`, `remote::cerebras`, `remote::llama-openai-compat`, `remote::nvidia`, `remote::hf::serverless`, `remote::hf::endpoint`, `inline::sentence-transformers` |
|
||||
| safety | `inline::llama-guard` |
|
||||
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
|
||||
| telemetry | `inline::meta-reference` |
|
||||
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
|
||||
| vector_io | `inline::faiss`, `inline::sqlite-vec`, `inline::milvus`, `remote::chromadb`, `remote::pgvector` |
|
||||
|
||||
|
|
The following environment variables can be configured:

### Telemetry Configuration
- `OTEL_SERVICE_NAME`: OpenTelemetry service name
- `OTEL_EXPORTER_OTLP_ENDPOINT`: OpenTelemetry collector endpoint URL (see the example below)
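For example, to report traces to a local OpenTelemetry collector (the endpoint below only illustrates the default OTLP/HTTP port):

```bash
export OTEL_SERVICE_NAME=llama-stack
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
llama stack run starter
```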
## Enabling Providers
|
||||
|
||||
|
|
Ensure you have configured the starter distribution using the environment variables explained above.

```bash
# Install dependencies for the starter distribution
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install

# Run the server
uv run --with llama-stack llama stack run starter
```

## Example Usage
|
|
The starter distribution uses SQLite for local storage of various components:

- **Files metadata**: `~/.llama/distributions/starter/files_metadata.db`
- **Agents store**: `~/.llama/distributions/starter/agents_store.db`
- **Responses store**: `~/.llama/distributions/starter/responses_store.db`
- **Trace store**: `~/.llama/distributions/starter/trace_store.db`
- **Evaluation store**: `~/.llama/distributions/starter/meta_reference_eval.db`
- **Dataset I/O stores**: Various HuggingFace and local filesystem stores
||||
|
|
|
|||
|
|
If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally, see the [Kubernetes Deployment Guide](../deploying/kubernetes_deployment) for more details.


## Configure logging

Control log output via environment variables before starting the server.

- `LLAMA_STACK_LOGGING` sets per-component levels, e.g. `LLAMA_STACK_LOGGING=server=debug;core=info`.
- Supported categories: `all`, `core`, `server`, `router`, `inference`, `agents`, `safety`, `eval`, `tools`, `client`.
- Levels: `debug`, `info`, `warning`, `error`, `critical` (default is `info`). Use `all=<level>` to apply globally.
- `LLAMA_STACK_LOG_FILE=/path/to/log` mirrors logs to a file while still printing to stdout.

Export these variables prior to running `llama stack run`, launching a container, or starting the server through any other pathway, as in the example below.
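A minimal example; the log file path is just an illustration:

```bash
export LLAMA_STACK_LOGGING="server=debug;core=info"
export LLAMA_STACK_LOG_FILE=~/.llama/logs/server.log
llama stack run starter
```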
```{toctree}
:maxdepth: 1
:hidden:
|
|
|
|||
|
|
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import io, requests
from openai import OpenAI

url = "https://www.paulgraham.com/greatwork.html"
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

vs = client.vector_stores.create()
response = requests.get(url)
pseudo_file = io.BytesIO(str(response.content).encode('utf-8'))
uploaded_file = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants")
client.vector_stores.files.create(vector_store_id=vs.id, file_id=uploaded_file.id)

resp = client.responses.create(
    model="openai/gpt-4o",
    input="How do you do great work? Use the existing knowledge_search tool.",
    tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
    include=["file_search_call.results"],
)

print(resp)
|
|
|
|||
|
|
Llama Stack is a server that exposes multiple APIs; you connect with it using the client SDK.

<Tabs>
<TabItem value="venv" label="Using venv">
You can use Python to install dependencies and run the Llama Stack server, which is useful for testing and development.

Llama Stack uses a [YAML configuration file](../distributions/configuration) to specify the stack setup,
which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml).

Now let's install dependencies and run the Llama Stack config for Ollama.
We use `starter` as the template. By default all providers are disabled, so we enable Ollama by passing environment variables.

```bash
# Install dependencies for the starter distribution
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install

# Run the server
llama stack run starter
```
</TabItem>
<TabItem value="container" label="Using a Container">
|
|
@ -164,7 +168,7 @@ Available Models
|
|||
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
|
||||
┃ model_type ┃ identifier ┃ provider_resource_id ┃ metadata ┃ provider_id ┃
|
||||
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
|
||||
│ embedding │ ollama/all-minilm:l6-v2 │ all-minilm:l6-v2 │ {'embedding_dimension': 384.0} │ ollama │
|
||||
│ embedding │ ollama/nomic-embed-text:v1.5 │ nomic-embed-text:v1.5 │ {'embedding_dimension': 768.0} │ ollama │
|
||||
├─────────────────┼─────────────────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────────┼───────────────────────┤
|
||||
│ ... │ ... │ ... │ │ ... │
|
||||
├─────────────────┼─────────────────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────────┼───────────────────────┤
|
||||
|
|
@ -304,7 +308,7 @@ stream = agent.create_turn(
|
|||
for event in AgentEventLogger().log(stream):
|
||||
event.print()
|
||||
```
|
||||
### ii. Run the Script
|
||||
#### ii. Run the Script
|
||||
Let's run the script using `uv`
|
||||
```bash
|
||||
uv run python agent.py
|
||||
|
|
|
|||
|
|
#### Step 2: Run the Llama Stack server

We will use `uv` to install dependencies and run the Llama Stack server.

```bash
# Install dependencies for the starter distribution
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install

# Run the server
OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter
```
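Before running the demo, you can confirm Ollama is reachable at that URL; `/api/tags` is Ollama's standard endpoint for listing the models it serves:

```bash
curl http://localhost:11434/api/tags
```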
#### Step 3: Run the demo
|
||||
Now open up a new terminal and copy the following script into a file named `demo_script.py`.
|
||||
|
||||
```python title="demo_script.py"
|
||||
# Copyright (c) Meta Platforms, Inc. and affiliates.
|
||||
# All rights reserved.
|
||||
#
|
||||
# This source code is licensed under the terms described in the LICENSE file in
|
||||
# the root directory of this source tree.
|
||||
|
||||
from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient
|
||||
|
||||
vector_db_id = "my_demo_vector_db"
|
||||
client = LlamaStackClient(base_url="http://localhost:8321")
|
||||
|
||||
models = client.models.list()
|
||||
|
||||
# Select the first LLM and first embedding models
|
||||
model_id = next(m for m in models if m.model_type == "llm").identifier
|
||||
embedding_model_id = (
|
||||
em := next(m for m in models if m.model_type == "embedding")
|
||||
).identifier
|
||||
embedding_dimension = em.metadata["embedding_dimension"]
|
||||
|
||||
vector_db = client.vector_dbs.register(
|
||||
vector_db_id=vector_db_id,
|
||||
embedding_model=embedding_model_id,
|
||||
embedding_dimension=embedding_dimension,
|
||||
provider_id="faiss",
|
||||
)
|
||||
vector_db_id = vector_db.identifier
|
||||
source = "https://www.paulgraham.com/greatwork.html"
|
||||
print("rag_tool> Ingesting document:", source)
|
||||
document = RAGDocument(
|
||||
document_id="document_1",
|
||||
content=source,
|
||||
mime_type="text/html",
|
||||
metadata={},
|
||||
)
|
||||
client.tool_runtime.rag_tool.insert(
|
||||
documents=[document],
|
||||
vector_db_id=vector_db_id,
|
||||
chunk_size_in_tokens=100,
|
||||
)
|
||||
agent = Agent(
|
||||
client,
|
||||
model=model_id,
|
||||
instructions="You are a helpful assistant",
|
||||
tools=[
|
||||
{
|
||||
"name": "builtin::rag/knowledge_search",
|
||||
"args": {"vector_db_ids": [vector_db_id]},
|
||||
}
|
||||
],
|
||||
)
|
||||
|
||||
prompt = "How do you do great work?"
|
||||
print("prompt>", prompt)
|
||||
|
||||
use_stream = True
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": prompt}],
|
||||
session_id=agent.create_session("rag_session"),
|
||||
stream=use_stream,
|
||||
)
|
||||
|
||||
# Only call `AgentEventLogger().log(response)` for streaming responses.
|
||||
if use_stream:
|
||||
for log in AgentEventLogger().log(response):
|
||||
log.print()
|
||||
else:
|
||||
print(response)
|
||||
```
|
||||
We will use `uv` to run the script
|
||||
```
|
||||
uv run --with llama-stack-client,fire,requests demo_script.py
|
||||
```
|
||||
And you should see output like below.
|
||||
```python
|
||||
>print(resp.output[1].content[0].text)
|
||||
To do great work, consider the following principles:
|
||||
|
||||
1. **Follow Your Interests**: Engage in work that genuinely excites you. If you find an area intriguing, pursue it without being overly concerned about external pressures or norms. You should create things that you would want for yourself, as this often aligns with what others in your circle might want too.
|
||||
|
||||
2. **Work Hard on Ambitious Projects**: Ambition is vital, but it should be tempered by genuine interest. Instead of detailed planning for the future, focus on exciting projects that keep your options open. This approach, known as "staying upwind," allows for adaptability and can lead to unforeseen achievements.
|
||||
|
||||
3. **Choose Quality Colleagues**: Collaborating with talented colleagues can significantly affect your own work. Seek out individuals who offer surprising insights and whom you admire. The presence of good colleagues can elevate the quality of your work and inspire you.
|
||||
|
||||
4. **Maintain High Morale**: Your attitude towards work and life affects your performance. Cultivating optimism and viewing yourself as lucky rather than victimized can boost your productivity. It’s essential to care for your physical health as well since it directly impacts your mental faculties and morale.
|
||||
|
||||
5. **Be Consistent**: Great work often comes from cumulative effort. Daily progress, even in small amounts, can result in substantial achievements over time. Emphasize consistency and make the work engaging, as this reduces the perceived burden of hard labor.
|
||||
|
||||
6. **Embrace Curiosity**: Curiosity is a driving force that can guide you in selecting fields of interest, pushing you to explore uncharted territories. Allow it to shape your work and continually seek knowledge and insights.
|
||||
|
||||
By focusing on these aspects, you can create an environment conducive to great work and personal fulfillment.
|
||||
```
|
||||
rag_tool> Ingesting document: https://www.paulgraham.com/greatwork.html
|
||||
|
||||
prompt> How do you do great work?
|
||||
|
||||
inference> [knowledge_search(query="What is the key to doing great work")]
|
||||
|
||||
tool_execution> Tool:knowledge_search Args:{'query': 'What is the key to doing great work'}
|
||||
|
||||
tool_execution> Tool:knowledge_search Response:[TextContentItem(text='knowledge_search tool found 5 chunks:\nBEGIN of knowledge_search tool results.\n', type='text'), TextContentItem(text="Result 1:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 2:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 3:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 4:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text="Result 5:\nDocument_id:docum\nContent: work. Doing great work means doing something important\nso well that you expand people's ideas of what's possible. But\nthere's no threshold for importance. It's a matter of degree, and\noften hard to judge at the time anyway.\n", type='text'), TextContentItem(text='END of knowledge_search tool results.\n', type='text')]
|
||||
|
||||
inference> Based on the search results, it seems that doing great work means doing something important so well that you expand people's ideas of what's possible. However, there is no clear threshold for importance, and it can be difficult to judge at the time.
|
||||
|
||||
To further clarify, I would suggest that doing great work involves:
|
||||
|
||||
* Completing tasks with high quality and attention to detail
|
||||
* Expanding on existing knowledge or ideas
|
||||
* Making a positive impact on others through your work
|
||||
* Striving for excellence and continuous improvement
|
||||
|
||||
Ultimately, great work is about making a meaningful contribution and leaving a lasting impression.
|
||||
```
|
||||
Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳
|
||||
|
||||
:::tip HuggingFace access
|
||||
|
|
|
|||
|
|
@ -29,7 +29,7 @@ Llama Stack is now available! See the [release notes](https://github.com/llamast
|
|||
|
||||
Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments. More specifically, it provides:
|
||||
|
||||
- **Unified API layer** for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry.
|
||||
- **Unified API layer** for Inference, RAG, Agents, Tools, Safety, Evals.
|
||||
- **Plugin architecture** to support the rich ecosystem of implementations of the different APIs in different environments like local development, on-premises, cloud, and mobile.
|
||||
- **Prepackaged verified distributions** which offer a one-stop solution for developers to get started quickly and reliably in any environment
|
||||
- **Multiple developer interfaces** like CLI and SDKs for Python, Node, iOS, and Android
|
||||
|
|
|
|||
|
|
@ -14,16 +14,18 @@ Meta's reference implementation of an agent system that can use tools, access ve
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `persistence_store` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `responses_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | |
|
||||
| `persistence` | `<class 'inline.agents.meta_reference.config.AgentPersistenceConfig'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
persistence_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/agents_store.db
|
||||
responses_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/responses_store.db
|
||||
persistence:
|
||||
agent_state:
|
||||
namespace: agents
|
||||
backend: kv_default
|
||||
responses:
|
||||
table_name: responses
|
||||
backend: sql_default
|
||||
max_write_queue_size: 10000
|
||||
num_writers: 4
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,7 +14,7 @@ Reference implementation of batches API with KVStore persistence.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Configuration for the key-value store backend. |
|
||||
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Configuration for the key-value store backend. |
|
||||
| `max_concurrent_batches` | `<class 'int'>` | No | 1 | Maximum number of concurrent batches to process simultaneously. |
|
||||
| `max_concurrent_requests_per_batch` | `<class 'int'>` | No | 10 | Maximum number of concurrent requests to process per batch. |
|
||||
|
||||
|
|
@ -22,6 +22,6 @@ Reference implementation of batches API with KVStore persistence.
|
|||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/batches.db
|
||||
namespace: batches
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,12 +14,12 @@ Local filesystem-based dataset I/O provider for reading and writing datasets to
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/localfs_datasetio.db
|
||||
namespace: datasetio::localfs
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,12 +14,12 @@ HuggingFace datasets provider for accessing and managing datasets from the Huggi
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/huggingface_datasetio.db
|
||||
namespace: datasetio::huggingface
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -1,5 +1,7 @@
|
|||
---
|
||||
description: "Llama Stack Evaluation API for running evaluations on model and agent candidates."
|
||||
description: "Evaluations
|
||||
|
||||
Llama Stack Evaluation API for running evaluations on model and agent candidates."
|
||||
sidebar_label: Eval
|
||||
title: Eval
|
||||
---
|
||||
|
|
@ -8,6 +10,8 @@ title: Eval
|
|||
|
||||
## Overview
|
||||
|
||||
Llama Stack Evaluation API for running evaluations on model and agent candidates.
|
||||
Evaluations
|
||||
|
||||
Llama Stack Evaluation API for running evaluations on model and agent candidates.
|
||||
|
||||
This section contains documentation for all available providers for the **eval** API.
|
||||
|
|
|
|||
|
|
@ -14,12 +14,12 @@ Meta's reference implementation of evaluation tasks with support for multiple la
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db
|
||||
namespace: eval
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -240,6 +240,6 @@ additional_pip_packages:
|
|||
- sqlalchemy[asyncio]
|
||||
```
|
||||
|
||||
No other steps are required other than `llama stack build` and `llama stack run`. The build process will use `module` to install all of the provider dependencies, retrieve the spec, etc.
|
||||
No other steps are required beyond installing dependencies with `llama stack list-deps <distro> | xargs -L1 uv pip install` and then running `llama stack run`. The CLI will use `module` to install the provider dependencies, retrieve the spec, etc.
|
||||
|
||||
The provider will now be available in Llama Stack with the type `remote::ramalama`.
|
||||
|
|
|
|||
|
|
@ -15,7 +15,7 @@ Local filesystem-based file storage provider for managing files and documents lo
|
|||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `storage_dir` | `<class 'str'>` | No | | Directory to store uploaded files |
|
||||
| `metadata_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | SQL store configuration for file metadata |
|
||||
| `metadata_store` | `<class 'llama_stack.core.storage.datatypes.SqlStoreReference'>` | No | | SQL store configuration for file metadata |
|
||||
| `ttl_secs` | `<class 'int'>` | No | 31536000 | |
|
||||
|
||||
## Sample Configuration
|
||||
|
|
@ -23,6 +23,6 @@ Local filesystem-based file storage provider for managing files and documents lo
|
|||
```yaml
|
||||
storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/dummy/files}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/files_metadata.db
|
||||
table_name: files_metadata
|
||||
backend: sql_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -20,7 +20,7 @@ AWS S3-based file storage provider for scalable cloud file management with metad
|
|||
| `aws_secret_access_key` | `str \| None` | No | | AWS secret access key (optional if using IAM roles) |
|
||||
| `endpoint_url` | `str \| None` | No | | Custom S3 endpoint URL (for MinIO, LocalStack, etc.) |
|
||||
| `auto_create_bucket` | `<class 'bool'>` | No | False | Automatically create the S3 bucket if it doesn't exist |
|
||||
| `metadata_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | SQL store configuration for file metadata |
|
||||
| `metadata_store` | `<class 'llama_stack.core.storage.datatypes.SqlStoreReference'>` | No | | SQL store configuration for file metadata |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
@ -32,6 +32,6 @@ aws_secret_access_key: ${env.AWS_SECRET_ACCESS_KEY:=}
|
|||
endpoint_url: ${env.S3_ENDPOINT_URL:=}
|
||||
auto_create_bucket: ${env.S3_AUTO_CREATE_BUCKET:=false}
|
||||
metadata_store:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/s3_files_metadata.db
|
||||
table_name: s3_files_metadata
|
||||
backend: sql_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -22,12 +22,14 @@ Importantly, Llama Stack always strives to provide at least one fully inline pro
|
|||
## Provider Categories
|
||||
|
||||
- **[External Providers](external/index.mdx)** - Guide for building and using external providers
|
||||
- **[OpenAI Compatibility](./openai.mdx)** - OpenAI API compatibility layer
|
||||
- **[Inference](inference/index.mdx)** - LLM and embedding model providers
|
||||
- **[Agents](agents/index.mdx)** - Agentic system providers
|
||||
- **[DatasetIO](datasetio/index.mdx)** - Dataset and data loader providers
|
||||
- **[Safety](safety/index.mdx)** - Content moderation and safety providers
|
||||
- **[Telemetry](telemetry/index.mdx)** - Monitoring and observability providers
|
||||
- **[Vector IO](vector_io/index.mdx)** - Vector database providers
|
||||
- **[Tool Runtime](tool_runtime/index.mdx)** - Tool and protocol providers
|
||||
- **[Files](files/index.mdx)** - File system and storage providers
|
||||
|
||||
## Other information about Providers
|
||||
- **[OpenAI Compatibility](./openai.mdx)** - OpenAI API compatibility layer
|
||||
- **[OpenAI-Compatible Responses Limitations](./openai_responses_limitations.mdx)** - Known limitations of the Responses API in Llama Stack
|
||||
|
|
|
|||
|
|
@ -3,9 +3,10 @@ description: "Inference
|
|||
|
||||
Llama Stack Inference API for generating completions, chat completions, and embeddings.
|
||||
|
||||
This API provides the raw interface to the underlying models. Two kinds of models are supported:
|
||||
This API provides the raw interface to the underlying models. Three kinds of models are supported:
|
||||
- LLM models: these models generate \"raw\" and \"chat\" (conversational) completions.
|
||||
- Embedding models: these models generate embeddings to be used for semantic search."
|
||||
- Embedding models: these models generate embeddings to be used for semantic search.
|
||||
- Rerank models: these models reorder the documents based on their relevance to a query."
|
||||
sidebar_label: Inference
|
||||
title: Inference
|
||||
---
|
||||
|
|
@ -18,8 +19,9 @@ Inference
|
|||
|
||||
Llama Stack Inference API for generating completions, chat completions, and embeddings.
|
||||
|
||||
This API provides the raw interface to the underlying models. Two kinds of models are supported:
|
||||
This API provides the raw interface to the underlying models. Three kinds of models are supported:
|
||||
- LLM models: these models generate "raw" and "chat" (conversational) completions.
|
||||
- Embedding models: these models generate embeddings to be used for semantic search.
|
||||
- Rerank models: these models reorder the documents based on their relevance to a query.
|
||||
|
||||
This section contains documentation for all available providers for the **inference** API.
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
---
|
||||
title: OpenAI Compatibility
|
||||
description: OpenAI API Compatibility
|
||||
sidebar_label: OpenAI Compatibility
|
||||
|
|
@ -47,7 +48,7 @@ models = client.models.list()
|
|||
|
||||
#### Responses
|
||||
|
||||
> **Note:** The Responses API implementation is still in active development. While it is quite usable, there are still unimplemented parts of the API. We'd love feedback on any use-cases you try that do not work to help prioritize the pieces left to implement. Please open issues in the [meta-llama/llama-stack](https://github.com/meta-llama/llama-stack) GitHub repository with details of anything that does not work.
|
||||
> **Note:** The Responses API implementation is still in active development. While it is quite usable, there are still unimplemented parts of the API. See [Known Limitations of the OpenAI-compatible Responses API in Llama Stack](./openai_responses_limitations.mdx) for more details.
|
||||
|
||||
##### Simple inference
|
||||
|
||||
|
|
|
|||
301
docs/docs/providers/openai_responses_limitations.mdx
Normal file
|
|
@ -0,0 +1,301 @@
|
|||
---
|
||||
title: Known Limitations of the OpenAI-compatible Responses API in Llama Stack
|
||||
description: Limitations of Responses API
|
||||
sidebar_label: Limitations of Responses API
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
## Unresolved Issues
|
||||
|
||||
This document outlines known limitations and inconsistencies between Llama Stack's Responses API and OpenAI's Responses API. The comparison reflects OpenAI's API as of October 6, 2025 (OpenAI client version `openai==1.107`).
See the OpenAI [changelog](https://platform.openai.com/docs/changelog) for details of any functionality added since that date. Links to issues are included so readers can check status, post comments, and subscribe for updates on the limitations that interest them. We would also love feedback on any use-cases you try that do not work, to help prioritize the pieces left to implement.
Please open new issues in the [meta-llama/llama-stack](https://github.com/meta-llama/llama-stack) GitHub repository with details of anything that does not work and does not already have an open issue.
|
||||
|
||||
### Instructions
|
||||
**Status:** Partial Implementation + Work in Progress
|
||||
|
||||
**Issue:** [#3566](https://github.com/llamastack/llama-stack/issues/3566)
|
||||
|
||||
In Llama Stack, the instructions parameter is already implemented for creating a response, but it is not yet included in the output response object.
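As a quick sketch of the part that already works, the `instructions` parameter can be passed when creating a response; the base URL, API key, and model name below are placeholders for your own Llama Stack deployment:

```python
from openai import OpenAI

# Placeholder endpoint and model: point these at your Llama Stack server's
# OpenAI-compatible API and a model you have registered.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    instructions="You are a terse assistant. Answer in one sentence.",
    input="What is Llama Stack?",
)
print(response.output_text)
# Not yet populated by Llama Stack (see the issue above):
print(response.instructions)
```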
|
||||
|
||||
---
|
||||
|
||||
### Streaming
|
||||
|
||||
**Status:** Partial Implementation
|
||||
|
||||
**Issue:** [#2364](https://github.com/llamastack/llama-stack/issues/2364)
|
||||
|
||||
Streaming for the Responses API is partially implemented and works to some extent, but some of the streaming response objects needed for full compatibility are still missing.
|
||||
|
||||
---
|
||||
|
||||
### Prompt Templates
|
||||
|
||||
**Status:** Partial Implementation
|
||||
|
||||
**Issue:** [#3321](https://github.com/llamastack/llama-stack/issues/3321)
|
||||
|
||||
OpenAI's platform supports [templated prompts using a structured language](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts). These templates can be stored server-side for organizational sharing. This feature is under development for Llama Stack.
|
||||
|
||||
---
|
||||
|
||||
### Web-search tool compatibility
|
||||
|
||||
**Status:** Partial Implementation
|
||||
|
||||
Both OpenAI and Llama Stack support a built-in web-search tool. The [OpenAI documentation](https://platform.openai.com/docs/api-reference/responses/create) for the web search tool in a Responses tool list says:
|
||||
|
||||
> The type of the web search tool. One of `web_search` or `web_search_2025_08_26`.
|
||||
|
||||
In contrast, the [Llama Stack documentation](https://llamastack.github.io/docs/api/create-a-new-open-ai-response) says that the allowed values for `type` for web search are `MOD1`, `MOD2` and `MOD3`.
|
||||
Is that correct? If so, what are the meanings of each of them? It might make sense for the allowed values from OpenAI to map to some values for Llama Stack so that code written to the OpenAI specification also works with Llama Stack.
|
||||
|
||||
The OpenAI web search tool also has fields for `filters` and `user_location` which are not documented as options for Llama Stack. If feasible, it would be good to support these too.
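For reference, this is how a client written to the OpenAI specification requests the built-in tool; the endpoint and model below are placeholders, and whether Llama Stack accepts this exact `type` value is the open question above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # placeholder endpoint

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    input="What changed in the latest Llama Stack release?",
    tools=[{"type": "web_search"}],  # OpenAI-spec type value; Llama Stack's accepted values may differ
)
print(response.output_text)
```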
|
||||
|
||||
---
|
||||
|
||||
### Other built-in Tools
|
||||
|
||||
**Status:** Partial Implementation
|
||||
|
||||
OpenAI's Responses API includes an ecosystem of built-in tools (e.g., code interpreter) that lower the barrier to entry for agentic workflows. These tools are typically aligned with specific model training.
|
||||
|
||||
**Current Status in Llama Stack:**
|
||||
- Some built-in tools exist (file search, web search)
|
||||
- Missing tools include code interpreter, computer use, and image generation
|
||||
- Some built-in tools may require additional APIs (e.g., [containers API](https://platform.openai.com/docs/api-reference/containers) for code interpreter)
|
||||
|
||||
It's unclear whether there is demand for additional built-in tools in Llama Stack. No upstream issues have been filed for adding more built-in tools.
|
||||
|
||||
---
|
||||
|
||||
### Response Branching
|
||||
|
||||
**Status:** Not Working
|
||||
|
||||
Response branching, as discussed in the [Agents vs OpenAI Responses API documentation](https://llamastack.github.io/docs/building_applications/responses_vs_agents), is not currently functional.
|
||||
|
||||
---
|
||||
|
||||
### Include
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
The `include` parameter allows you to provide a list of values that indicate additional information for the system to include in the model response. The [OpenAI API](https://platform.openai.com/docs/api-reference/responses/create) specifies the following allowed values for this parameter.
|
||||
|
||||
- `web_search_call.action.sources`
|
||||
- `code_interpreter_call.outputs`
|
||||
- `computer_call_output.output.image_url`
|
||||
- `file_search_call.results`
|
||||
- `message.input_image.image_url`
|
||||
- `message.output_text.logprobs`
|
||||
- `reasoning.encrypted_content`
|
||||
|
||||
Some of these are not relevant to Llama Stack in its current form. For example, code interpreter is not implemented (see "Built-in tools" below), so `code_interpreter_call.outputs` would not be a useful directive to Llama Stack.
|
||||
|
||||
However, others might be useful. For example, `message.output_text.logprobs` can be useful for assessing how confident a model is in each token of its output.
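As a sketch of what full support would look like, a client would request the extra data like this (placeholder endpoint and model; the directive is not honored by Llama Stack yet):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # placeholder endpoint

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    input="Summarize the key idea of retrieval-augmented generation.",
    # Ask the server to attach per-token logprobs to the output text.
    include=["message.output_text.logprobs"],
)
print(response.output_text)
```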
|
||||
|
||||
---
|
||||
|
||||
### Tool Choice
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3548](https://github.com/llamastack/llama-stack/issues/3548)
|
||||
|
||||
In OpenAI's API, the `tool_choice` parameter allows you to set restrictions or requirements for which tools should be used when generating a response. This feature is not implemented in Llama Stack.
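A minimal sketch of what this would look like once supported, assuming the placeholder endpoint and model used above and an illustrative `get_weather` function:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # placeholder endpoint

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    input="What is the weather in Berlin?",
    tools=[
        {
            "type": "function",
            "name": "get_weather",  # illustrative function, not a real built-in
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    # Force the model to call the named function instead of answering directly.
    tool_choice={"type": "function", "name": "get_weather"},
)
```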
|
||||
|
||||
---
|
||||
|
||||
### Safety Identification and Tracking
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
OpenAI's platform allows account holders to track agentic end users via a safety identifier passed with each Responses API call. When requests violate moderation or safety rules, account holders are alerted and automated actions can be taken. This capability is not currently available in Llama Stack.
|
||||
|
||||
---
|
||||
|
||||
### Connectors
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
Connectors are MCP servers maintained and managed by the Responses API provider. OpenAI has documented their connectors at [https://platform.openai.com/docs/guides/tools-connectors-mcp](https://platform.openai.com/docs/guides/tools-connectors-mcp).
|
||||
|
||||
**Open Questions:**
|
||||
- Should Llama Stack include built-in support for some, all, or none of OpenAI's connectors?
|
||||
- Should there be a mechanism for administrators to add custom connectors via `run.yaml` or an API?
|
||||
|
||||
---
|
||||
|
||||
### Reasoning
|
||||
|
||||
**Status:** Partially Implemented
|
||||
|
||||
The `reasoning` object in the output of Responses works for inference providers such as vLLM that output reasoning traces in chat completions requests. It does not work for other providers such as OpenAI's hosted service. See [#3551](https://github.com/llamastack/llama-stack/issues/3551) for more details.
|
||||
|
||||
---
|
||||
|
||||
### Service Tier
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3550](https://github.com/llamastack/llama-stack/issues/3550)
|
||||
|
||||
Responses has a `service_tier` field that can be used to prioritize access to inference resources. Not all inference providers have such a concept, but Llama Stack could pass this value through for those providers that do. Currently it does not.
|
||||
|
||||
---
|
||||
|
||||
### Top Logprobs
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3552](https://github.com/llamastack/llama-stack/issues/3552)
|
||||
|
||||
The `top_logprobs` parameter from OpenAI's Responses API extends the functionality obtained by including `message.output_text.logprobs` in the `include` parameter list (as discussed in the Include section above).
|
||||
It enables users to also get logprobs for alternative tokens.
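Once implemented, the two would combine roughly like this (placeholder endpoint and model):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # placeholder endpoint

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    input="The capital of France is",
    include=["message.output_text.logprobs"],
    top_logprobs=5,  # also return the five most likely alternatives for each token
)
```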
|
||||
|
||||
---
|
||||
|
||||
### Max Tool Calls
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3563](https://github.com/llamastack/llama-stack/issues/3563)
|
||||
|
||||
The Responses API can accept a `max_tool_calls` parameter that limits the number of tool calls allowed to be executed for a given response. This feature needs full implementation and documentation.
|
||||
|
||||
---
|
||||
|
||||
### Max Output Tokens
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3562](https://github.com/llamastack/llama-stack/issues/3562)
|
||||
|
||||
The `max_output_tokens` field limits how many tokens the model is allowed to generate (for both reasoning and output combined). It is not implemented in Llama Stack.
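For illustration, this is how a client written to the OpenAI specification would cap generation (placeholder endpoint and model); Llama Stack does not yet honor the parameter:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # placeholder endpoint

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    input="Explain attention in transformers.",
    max_output_tokens=200,  # cap on reasoning + visible output tokens combined
)
```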
|
||||
|
||||
---
|
||||
|
||||
### Incomplete Details
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3567](https://github.com/llamastack/llama-stack/issues/3567)
|
||||
|
||||
The object returned from a Responses call includes a field indicating why a response is incomplete, when it is. For example, if the model stops generating because it has reached the specified max output tokens (see above), this field should be set to `IncompleteDetails(reason='max_output_tokens')`. This is not implemented in Llama Stack.
|
||||
|
||||
---
|
||||
|
||||
### Metadata
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3564](https://github.com/llamastack/llama-stack/issues/3564)
|
||||
|
||||
Metadata allows you to attach additional information to a response for your own reference and tracking. It is not implemented in Llama Stack.
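For illustration, the OpenAI-spec usage looks like this (placeholder endpoint and model); Llama Stack does not yet store or echo these values:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # placeholder endpoint

response = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    input="Hello!",
    # Arbitrary key/value pairs for your own bookkeeping (OpenAI caps this at 16 pairs).
    metadata={"session_id": "abc-123", "experiment": "prompt-v2"},
)
```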
|
||||
|
||||
---
|
||||
|
||||
### Background
|
||||
|
||||
**Status:** Not Implemented
|
||||
|
||||
**Issue:** [#3568](https://github.com/llamastack/llama-stack/issues/3568)
|
||||
|
||||
[Background mode](https://platform.openai.com/docs/guides/background) in OpenAI Responses lets you start a response generation job and then check back in on it later. This is useful if you might lose a connection during a generation and want to reconnect later and get the response back (for example if the client is running in a mobile app). It is not implemented in Llama Stack.
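A sketch of the OpenAI-spec flow this would enable (placeholder endpoint and model; not functional against Llama Stack today):

```python
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")  # placeholder endpoint

job = client.responses.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model
    input="Write a long report on vector databases.",
    background=True,  # return immediately; generation continues server-side
)
while job.status in ("queued", "in_progress"):
    time.sleep(2)
    job = client.responses.retrieve(job.id)  # poll until the job finishes
print(job.status, job.output_text)
```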
|
||||
|
||||
---
|
||||
|
||||
### Global Guardrails
|
||||
|
||||
**Status:** Feature Request
|
||||
|
||||
When calling the OpenAI Responses API, model outputs go through safety models configured by OpenAI administrators. Perhaps Llama Stack should provide a mechanism to configure safety models (or non-model logic) for all Responses requests, either through `run.yaml` or an administrative API.
|
||||
|
||||
---
|
||||
|
||||
### User-Controlled Guardrails
|
||||
|
||||
**Status:** Feature Request
|
||||
|
||||
**Issue:** [#3325](https://github.com/llamastack/llama-stack/issues/3325)
|
||||
|
||||
OpenAI has not released a way for users to configure their own guardrails. However, Llama Stack users may want this capability to complement or replace global guardrails. This could be implemented as a non-breaking, additive difference from the OpenAI API.
|
||||
|
||||
---
|
||||
|
||||
### MCP Elicitations
|
||||
|
||||
**Status:** Unknown
|
||||
|
||||
Elicitations allow MCP servers to request additional information from users through the client during interactions (e.g., a tool requesting a username before proceeding).
|
||||
See the [MCP specification](https://modelcontextprotocol.io/specification/draft/client/elicitation) for details.
|
||||
|
||||
**Open Questions:**
|
||||
- Does this work in OpenAI's Responses API reference implementation?
|
||||
- If not, is there a reasonable way to make that work within the API as is? Or would the API need to change?
|
||||
- Does this work in Llama Stack?
|
||||
|
||||
---
|
||||
|
||||
### MCP Sampling
|
||||
|
||||
**Status:** Unknown
|
||||
|
||||
Sampling allows MCP tools to query the generative AI model. See the [MCP specification](https://modelcontextprotocol.io/specification/draft/client/sampling) for details.
|
||||
|
||||
**Open Questions:**
|
||||
- Does this work in OpenAI's Responses API reference implementation?
|
||||
- If not, is there a reasonable way to make that work within the API as is? Or would the API need to change?
|
||||
- Does this work in Llama Stack?
|
||||
|
||||
### Prompt Caching
|
||||
|
||||
**Status:** Unknown
|
||||
|
||||
OpenAI provides a [prompt caching](https://platform.openai.com/docs/guides/prompt-caching) mechanism in Responses that is enabled for its most recent models.
|
||||
|
||||
**Open Questions:**
|
||||
- Does this work in Llama Stack?
|
||||
- If not, is there a reasonable way to make that work for those inference providers that have this capability by passing through the provided `prompt_cache_key` to the inference provider?
|
||||
- Is there a reasonable way to make that work for inference providers that don't build in this capability by doing some sort of caching at the Llama Stack layer?
|
||||
|
||||
---
|
||||
|
||||
### Parallel Tool Calls
|
||||
|
||||
**Status:** Rumored Issue
|
||||
|
||||
There are reports that `parallel_tool_calls` may not work correctly. This needs verification and a ticket should be opened if confirmed.
|
||||
|
||||
---
|
||||
|
||||
## Resolved Issues
|
||||
|
||||
The following limitations have been addressed in recent releases:
|
||||
|
||||
### MCP and Function Tools with No Arguments
|
||||
|
||||
**Status:** ✅ Resolved
|
||||
|
||||
MCP and function tools now work correctly even when they have no arguments.
|
||||
|
||||
---
|
||||
|
||||
### `require_approval` Parameter for MCP Tools
|
||||
|
||||
**Status:** ✅ Resolved
|
||||
|
||||
The `require_approval` parameter for MCP tools in the Responses API now works correctly.
|
||||
|
||||
---
|
||||
|
||||
### MCP Tools with Array-Type Arguments
|
||||
|
||||
**Status:** ✅ Resolved
|
||||
|
||||
**Fixed in:** [#3003](https://github.com/llamastack/llama-stack/pull/3003) (Agent API), [#3602](https://github.com/llamastack/llama-stack/pull/3602) (Responses API)
|
||||
|
||||
MCP tools now correctly handle array-type arguments in both the Agent API and Responses API.
|
||||
|
|
@ -1,10 +0,0 @@
|
|||
---
|
||||
sidebar_label: Telemetry
|
||||
title: Telemetry
|
||||
---
|
||||
|
||||
# Telemetry
|
||||
|
||||
## Overview
|
||||
|
||||
This section contains documentation for all available providers for the **telemetry** API.
|
||||
|
|
@ -1,29 +0,0 @@
|
|||
---
|
||||
description: "Meta's reference implementation of telemetry and observability using OpenTelemetry."
|
||||
sidebar_label: Meta-Reference
|
||||
title: inline::meta-reference
|
||||
---
|
||||
|
||||
# inline::meta-reference
|
||||
|
||||
## Description
|
||||
|
||||
Meta's reference implementation of telemetry and observability using OpenTelemetry.
|
||||
|
||||
## Configuration
|
||||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `otel_exporter_otlp_endpoint` | `str \| None` | No | | The OpenTelemetry collector endpoint URL (base URL for traces, metrics, and logs). If not set, the SDK will use OTEL_EXPORTER_OTLP_ENDPOINT environment variable. |
|
||||
| `service_name` | `<class 'str'>` | No | | The service name to use for telemetry |
|
||||
| `sinks` | `list[inline.telemetry.meta_reference.config.TelemetrySink` | No | [<TelemetrySink.SQLITE: 'sqlite'>] | List of telemetry sinks to enable (possible values: otel_trace, otel_metric, sqlite, console) |
|
||||
| `sqlite_db_path` | `<class 'str'>` | No | ~/.llama/runtime/trace_store.db | The path to the SQLite database to use for storing traces |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
service_name: "${env.OTEL_SERVICE_NAME:=\u200B}"
|
||||
sinks: ${env.TELEMETRY_SINKS:=sqlite}
|
||||
sqlite_db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/trace_store.db
|
||||
otel_exporter_otlp_endpoint: ${env.OTEL_EXPORTER_OTLP_ENDPOINT:=}
|
||||
```
|
||||
|
|
@ -79,13 +79,13 @@ See [Chroma's documentation](https://docs.trychroma.com/docs/overview/introducti
|
|||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `db_path` | `<class 'str'>` | No | | |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Config for KV store backend |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Config for KV store backend |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
db_path: ${env.CHROMADB_PATH}
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/chroma_inline_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::chroma
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -95,12 +95,12 @@ more details about Faiss in general.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/faiss_store.db
|
||||
persistence:
|
||||
namespace: vector_io::faiss
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -14,14 +14,14 @@ Meta's reference implementation of a vector database.
|
|||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/faiss_store.db
|
||||
persistence:
|
||||
namespace: vector_io::faiss
|
||||
backend: kv_default
|
||||
```
|
||||
## Deprecation Notice
|
||||
|
||||
|
|
|
|||
|
|
@ -17,14 +17,14 @@ Please refer to the remote provider documentation.
|
|||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `db_path` | `<class 'str'>` | No | | |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Config for KV store backend (SQLite only for now) |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Config for KV store backend (SQLite only for now) |
|
||||
| `consistency_level` | `<class 'str'>` | No | Strong | The consistency level of the Milvus server |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
db_path: ${env.MILVUS_DB_PATH:=~/.llama/dummy}/milvus.db
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/milvus_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::milvus
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -98,13 +98,13 @@ See the [Qdrant documentation](https://qdrant.tech/documentation/) for more deta
|
|||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `path` | `<class 'str'>` | No | | |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
path: ${env.QDRANT_PATH:=~/.llama/~/.llama/dummy}/qdrant.db
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/qdrant_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::qdrant
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -72,14 +72,14 @@ description: |
|
|||
Example with hybrid search:
|
||||
```python
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={"mode": "hybrid", "max_chunks": 3, "score_threshold": 0.7},
|
||||
)
|
||||
|
||||
# Using RRF ranker
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={
|
||||
"mode": "hybrid",
|
||||
|
|
@ -91,7 +91,7 @@ description: |
|
|||
|
||||
# Using weighted ranker
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={
|
||||
"mode": "hybrid",
|
||||
|
|
@ -105,7 +105,7 @@ description: |
|
|||
Example with explicit vector search:
|
||||
```python
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={"mode": "vector", "max_chunks": 3, "score_threshold": 0.7},
|
||||
)
|
||||
|
|
@ -114,7 +114,7 @@ description: |
|
|||
Example with keyword search:
|
||||
```python
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={"mode": "keyword", "max_chunks": 3, "score_threshold": 0.7},
|
||||
)
|
||||
|
|
@ -277,14 +277,14 @@ The SQLite-vec provider supports three search modes:
|
|||
Example with hybrid search:
|
||||
```python
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={"mode": "hybrid", "max_chunks": 3, "score_threshold": 0.7},
|
||||
)
|
||||
|
||||
# Using RRF ranker
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={
|
||||
"mode": "hybrid",
|
||||
|
|
@ -296,7 +296,7 @@ response = await vector_io.query_chunks(
|
|||
|
||||
# Using weighted ranker
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={
|
||||
"mode": "hybrid",
|
||||
|
|
@ -310,7 +310,7 @@ response = await vector_io.query_chunks(
|
|||
Example with explicit vector search:
|
||||
```python
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={"mode": "vector", "max_chunks": 3, "score_threshold": 0.7},
|
||||
)
|
||||
|
|
@ -319,7 +319,7 @@ response = await vector_io.query_chunks(
|
|||
Example with keyword search:
|
||||
```python
|
||||
response = await vector_io.query_chunks(
|
||||
vector_db_id="my_db",
|
||||
vector_store_id="my_db",
|
||||
query="your query here",
|
||||
params={"mode": "keyword", "max_chunks": 3, "score_threshold": 0.7},
|
||||
)
|
||||
|
|
@ -408,13 +408,13 @@ See [sqlite-vec's GitHub repo](https://github.com/asg017/sqlite-vec/tree/main) f
|
|||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `db_path` | `<class 'str'>` | No | | Path to the SQLite database file |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Config for KV store backend (SQLite only for now) |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Config for KV store backend (SQLite only for now) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/sqlite_vec.db
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/sqlite_vec_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::sqlite_vec
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -17,15 +17,15 @@ Please refer to the sqlite-vec provider documentation.
|
|||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `db_path` | `<class 'str'>` | No | | Path to the SQLite database file |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Config for KV store backend (SQLite only for now) |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Config for KV store backend (SQLite only for now) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/sqlite_vec.db
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/sqlite_vec_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::sqlite_vec
|
||||
backend: kv_default
|
||||
```
|
||||
## Deprecation Notice
|
||||
|
||||
|
|
|
|||
|
|
@ -78,13 +78,13 @@ See [Chroma's documentation](https://docs.trychroma.com/docs/overview/introducti
|
|||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `url` | `str \| None` | No | | |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Config for KV store backend |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Config for KV store backend |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
url: ${env.CHROMADB_URL}
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/chroma_remote_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::chroma_remote
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -408,7 +408,7 @@ For more details on TLS configuration, refer to the [TLS setup guide](https://mi
|
|||
| `uri` | `<class 'str'>` | No | | The URI of the Milvus server |
|
||||
| `token` | `str \| None` | No | | The token of the Milvus server |
|
||||
| `consistency_level` | `<class 'str'>` | No | Strong | The consistency level of the Milvus server |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Config for KV store backend |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Config for KV store backend |
|
||||
| `config` | `dict` | No | `{}` | This configuration allows additional fields to be passed through to the underlying Milvus client. See the [Milvus](https://milvus.io/docs/install-overview.md) documentation for more details about Milvus in general. |
|
||||
|
||||
:::note
|
||||
|
|
@ -420,7 +420,7 @@ This configuration class accepts additional fields beyond those listed above. Yo
|
|||
```yaml
|
||||
uri: ${env.MILVUS_ENDPOINT}
|
||||
token: ${env.MILVUS_TOKEN}
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/milvus_remote_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::milvus_remote
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -239,30 +239,3 @@ See [MongoDB Atlas Vector Search documentation](https://www.mongodb.com/docs/atl
|
|||
For general MongoDB documentation, visit [MongoDB Documentation](https://docs.mongodb.com/).
|
||||
|
||||
|
||||
## Configuration
|
||||
|
||||
| Field | Type | Required | Default | Description |
|
||||
|-------|------|----------|---------|-------------|
|
||||
| `connection_string` | `<class 'str'>` | No | | MongoDB Atlas connection string (e.g., mongodb+srv://user:pass@cluster.mongodb.net/) |
|
||||
| `database_name` | `<class 'str'>` | No | llama_stack | Database name to use for vector collections |
|
||||
| `index_name` | `<class 'str'>` | No | vector_index | Name of the vector search index |
|
||||
| `path_field` | `<class 'str'>` | No | embedding | Field name for storing embeddings |
|
||||
| `similarity_metric` | `<class 'str'>` | No | cosine | Similarity metric: cosine, euclidean, or dotProduct |
|
||||
| `max_pool_size` | `<class 'int'>` | No | 100 | Maximum connection pool size |
|
||||
| `timeout_ms` | `<class 'int'>` | No | 30000 | Connection timeout in milliseconds |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Config for KV store backend for metadata storage |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
connection_string: ${env.MONGODB_CONNECTION_STRING:=}
|
||||
database_name: ${env.MONGODB_DATABASE_NAME:=llama_stack}
|
||||
index_name: ${env.MONGODB_INDEX_NAME:=vector_index}
|
||||
path_field: ${env.MONGODB_PATH_FIELD:=embedding}
|
||||
similarity_metric: ${env.MONGODB_SIMILARITY_METRIC:=cosine}
|
||||
max_pool_size: ${env.MONGODB_MAX_POOL_SIZE:=100}
|
||||
timeout_ms: ${env.MONGODB_TIMEOUT_MS:=30000}
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/mongodb_registry.db
|
||||
```
|
||||
|
|
|
|||
|
|
@ -218,7 +218,7 @@ See [PGVector's documentation](https://github.com/pgvector/pgvector) for more de
|
|||
| `db` | `str \| None` | No | postgres | |
|
||||
| `user` | `str \| None` | No | postgres | |
|
||||
| `password` | `str \| None` | No | mysecretpassword | |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig, annotation=NoneType, required=False, default='sqlite', discriminator='type'` | No | | Config for KV store backend (SQLite only for now) |
|
||||
| `persistence` | `llama_stack.core.storage.datatypes.KVStoreReference \| None` | No | | Config for KV store backend (SQLite only for now) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
|
|
@ -228,7 +228,7 @@ port: ${env.PGVECTOR_PORT:=5432}
|
|||
db: ${env.PGVECTOR_DB}
|
||||
user: ${env.PGVECTOR_USER}
|
||||
password: ${env.PGVECTOR_PASSWORD}
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/pgvector_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::pgvector
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -26,13 +26,13 @@ Please refer to the inline provider documentation.
|
|||
| `prefix` | `str \| None` | No | | |
|
||||
| `timeout` | `int \| None` | No | | |
|
||||
| `host` | `str \| None` | No | | |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
|
||||
| `persistence` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
api_key: ${env.QDRANT_API_KEY:=}
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/qdrant_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::qdrant_remote
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -75,14 +75,14 @@ See [Weaviate's documentation](https://weaviate.io/developers/weaviate) for more
|
|||
|-------|------|----------|---------|-------------|
|
||||
| `weaviate_api_key` | `str \| None` | No | | The API key for the Weaviate instance |
|
||||
| `weaviate_cluster_url` | `str \| None` | No | localhost:8080 | The URL of the Weaviate cluster |
|
||||
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig, annotation=NoneType, required=False, default='sqlite', discriminator='type'` | No | | Config for KV store backend (SQLite only for now) |
|
||||
| `persistence` | `llama_stack.core.storage.datatypes.KVStoreReference \| None` | No | | Config for KV store backend (SQLite only for now) |
|
||||
|
||||
## Sample Configuration
|
||||
|
||||
```yaml
|
||||
weaviate_api_key: null
|
||||
weaviate_cluster_url: ${env.WEAVIATE_CLUSTER_URL:=localhost:8080}
|
||||
kvstore:
|
||||
type: sqlite
|
||||
db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/weaviate_registry.db
|
||||
persistence:
|
||||
namespace: vector_io::weaviate
|
||||
backend: kv_default
|
||||
```
|
||||
|
|
|
|||
|
|
@ -32,7 +32,6 @@ Commands:
|
|||
scoring_functions Manage scoring functions.
|
||||
shields Manage safety shield services.
|
||||
toolgroups Manage available tool groups.
|
||||
vector_dbs Manage vector databases.
|
||||
```
|
||||
|
||||
### `llama-stack-client configure`
|
||||
|
|
@ -79,8 +78,6 @@ llama-stack-client providers list
|
|||
+-----------+----------------+-----------------+
|
||||
| agents | meta-reference | meta-reference |
|
||||
+-----------+----------------+-----------------+
|
||||
| telemetry | meta-reference | meta-reference |
|
||||
+-----------+----------------+-----------------+
|
||||
| safety | meta-reference | meta-reference |
|
||||
+-----------+----------------+-----------------+
|
||||
```
|
||||
|
|
@ -211,53 +208,6 @@ Unregister a model from distribution endpoint
|
|||
llama-stack-client models unregister <model_id>
|
||||
```
|
||||
|
||||
## Vector DB Management
|
||||
Manage vector databases.
|
||||
|
||||
|
||||
### `llama-stack-client vector_dbs list`
|
||||
Show available vector dbs on distribution endpoint
|
||||
```bash
|
||||
llama-stack-client vector_dbs list
|
||||
```
|
||||
```
|
||||
┏━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
|
||||
┃ identifier ┃ provider_id ┃ provider_resource_id ┃ vector_db_type ┃ params ┃
|
||||
┡━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
|
||||
│ my_demo_vector_db │ faiss │ my_demo_vector_db │ │ embedding_dimension: 384 │
|
||||
│ │ │ │ │ embedding_model: all-MiniLM-L6-v2 │
|
||||
│ │ │ │ │ type: vector_db │
|
||||
│ │ │ │ │ │
|
||||
└──────────────────────────┴─────────────┴──────────────────────────┴────────────────┴───────────────────────────────────┘
|
||||
```
|
||||
|
||||
### `llama-stack-client vector_dbs register`
|
||||
Create a new vector db
|
||||
```bash
|
||||
llama-stack-client vector_dbs register <vector-db-id> [--provider-id <provider-id>] [--provider-vector-db-id <provider-vector-db-id>] [--embedding-model <embedding-model>] [--embedding-dimension <embedding-dimension>]
|
||||
```
|
||||
|
||||
|
||||
Required arguments:
|
||||
- `VECTOR_DB_ID`: Vector DB ID
|
||||
|
||||
Optional arguments:
|
||||
- `--provider-id`: Provider ID for the vector db
|
||||
- `--provider-vector-db-id`: Provider's vector db ID
|
||||
- `--embedding-model`: Embedding model to use. Default: `all-MiniLM-L6-v2`
|
||||
- `--embedding-dimension`: Dimension of embeddings. Default: 384
|
||||
|
||||
### `llama-stack-client vector_dbs unregister`
|
||||
Delete a vector db
|
||||
```bash
|
||||
llama-stack-client vector_dbs unregister <vector-db-id>
|
||||
```
|
||||
|
||||
|
||||
Required arguments:
|
||||
- `VECTOR_DB_ID`: Vector DB ID
|
||||
|
||||
|
||||
## Shield Management
|
||||
Manage safety shield services.
|
||||
### `llama-stack-client shields list`
|
||||
|
|
|
|||
|
|
@ -71,6 +71,11 @@ const config: Config = {
|
|||
docs: {
|
||||
sidebarPath: require.resolve("./sidebars.ts"),
|
||||
docItemComponent: "@theme/ApiItem", // Derived from docusaurus-theme-openapi
|
||||
remarkPlugins: [
|
||||
[require('remark-code-import'), {
|
||||
rootDir: require('path').join(__dirname, '..') // Repository root
|
||||
}]
|
||||
],
|
||||
},
|
||||
blog: false,
|
||||
theme: {
|
||||
|
|
|
|||
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
|
|
@ -2864,7 +2864,7 @@
|
|||
}
|
||||
],
|
||||
"source": [
|
||||
"!llama stack build --distro experimental-post-training --image-type venv --image-name __system__"
|
||||
"!llama stack list-deps experimental-post-training | xargs -L1 uv pip install"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
|
|
|||
|
|
@ -38,7 +38,7 @@
|
|||
"source": [
|
||||
"# NBVAL_SKIP\n",
|
||||
"!pip install -U llama-stack\n",
|
||||
"!UV_SYSTEM_PYTHON=1 llama stack build --distro fireworks --image-type venv"
|
||||
"llama stack list-deps fireworks | xargs -L1 uv pip install\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
|
|
|||
File diff suppressed because it is too large
|
|
@ -831,7 +831,7 @@
|
|||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
|
|
@ -860,8 +860,8 @@
|
|||
"vector_db_id = f\"test_vector_db_{uuid.uuid4()}\"\n",
|
||||
"client.vector_dbs.register(\n",
|
||||
" vector_db_id=vector_db_id,\n",
|
||||
" embedding_model=\"all-MiniLM-L6-v2\",\n",
|
||||
" embedding_dimension=384,\n",
|
||||
" embedding_model=\"nomic-embed-text-v1.5\",\n",
|
||||
" embedding_dimension=768,\n",
|
||||
" provider_id=selected_vector_provider.provider_id,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
|
|
|
|||
|
|
@ -136,7 +136,8 @@
|
|||
" \"\"\"Build and run LlamaStack server in one step using --run flag\"\"\"\n",
|
||||
" log_file = open(\"llama_stack_server.log\", \"w\")\n",
|
||||
" process = subprocess.Popen(\n",
|
||||
" \"uv run --with llama-stack llama stack build --distro starter --image-type venv --run\",\n",
|
||||
" \"uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install\",\n",
|
||||
" \"uv run --with llama-stack llama stack run starter\",\n",
|
||||
" shell=True,\n",
|
||||
" stdout=log_file,\n",
|
||||
" stderr=log_file,\n",
|
||||
|
|
@ -172,7 +173,7 @@
|
|||
"\n",
|
||||
"def kill_llama_stack_server():\n",
|
||||
" # Kill any existing llama stack server processes using pkill command\n",
|
||||
" os.system(\"pkill -f llama_stack.core.server.server\")"
|
||||
" os.system(\"pkill -f llama_stack.core.server.server\")\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
|
|
|||
|
|
@ -105,7 +105,8 @@
|
|||
" \"\"\"Build and run LlamaStack server in one step using --run flag\"\"\"\n",
|
||||
" log_file = open(\"llama_stack_server.log\", \"w\")\n",
|
||||
" process = subprocess.Popen(\n",
|
||||
" \"uv run --with llama-stack llama stack build --distro starter --image-type venv --run\",\n",
|
||||
" \"uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install\",\n",
|
||||
" \"uv run --with llama-stack llama stack run starter\",\n",
|
||||
" shell=True,\n",
|
||||
" stdout=log_file,\n",
|
||||
" stderr=log_file,\n",
|
||||
|
|
|
|||
|
|
@ -92,7 +92,7 @@
|
|||
"metadata": {},
|
||||
"source": [
|
||||
"```bash\n",
|
||||
"LLAMA_STACK_DIR=$(pwd) llama stack build --distro nvidia --image-type venv\n",
|
||||
"uv run --with llama-stack llama stack list-deps nvidia | xargs -L1 uv pip install\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
|
|
|
|||
|
|
@ -81,7 +81,7 @@
|
|||
"metadata": {},
|
||||
"source": [
|
||||
"```bash\n",
|
||||
"LLAMA_STACK_DIR=$(pwd) llama stack build --distro nvidia --image-type venv\n",
|
||||
"uv run --with llama-stack llama stack list-deps nvidia | xargs -L1 uv pip install\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
|
|
|
|||
|
|
@ -23,6 +23,7 @@ from llama_stack.strong_typing.inspection import (
|
|||
is_generic_list,
|
||||
is_type_optional,
|
||||
is_type_union,
|
||||
is_unwrapped_body_param,
|
||||
unwrap_generic_list,
|
||||
unwrap_optional_type,
|
||||
unwrap_union_types,
|
||||
|
|
@ -769,24 +770,30 @@ class Generator:
|
|||
first = next(iter(op.request_params))
|
||||
request_name, request_type = first
|
||||
|
||||
op_name = "".join(word.capitalize() for word in op.name.split("_"))
|
||||
request_name = f"{op_name}Request"
|
||||
fields = [
|
||||
(
|
||||
name,
|
||||
type_,
|
||||
)
|
||||
for name, type_ in op.request_params
|
||||
]
|
||||
request_type = make_dataclass(
|
||||
request_name,
|
||||
fields,
|
||||
namespace={
|
||||
"__doc__": create_docstring_for_request(
|
||||
request_name, fields, doc_params
|
||||
# Special case: if there's a single parameter with Body(embed=False) that's a BaseModel,
|
||||
# unwrap it to show the flat structure in the OpenAPI spec
|
||||
# Example: openai_chat_completion()
|
||||
if (len(op.request_params) == 1 and is_unwrapped_body_param(request_type)):
|
||||
pass
|
||||
else:
|
||||
op_name = "".join(word.capitalize() for word in op.name.split("_"))
|
||||
request_name = f"{op_name}Request"
|
||||
fields = [
|
||||
(
|
||||
name,
|
||||
type_,
|
||||
)
|
||||
},
|
||||
)
|
||||
for name, type_ in op.request_params
|
||||
]
|
||||
request_type = make_dataclass(
|
||||
request_name,
|
||||
fields,
|
||||
namespace={
|
||||
"__doc__": create_docstring_for_request(
|
||||
request_name, fields, doc_params
|
||||
)
|
||||
},
|
||||
)
|
||||
|
||||
requestBody = RequestBody(
|
||||
content={
|
||||
|
|
|
|||
|
|
@ -8,10 +8,11 @@ import json
|
|||
import typing
|
||||
import inspect
|
||||
from pathlib import Path
|
||||
from typing import TextIO
|
||||
from typing import Any, List, Optional, Union, get_type_hints, get_origin, get_args
|
||||
from typing import Any, List, Optional, TextIO, Union, get_type_hints, get_origin, get_args
|
||||
|
||||
from pydantic import BaseModel
|
||||
from llama_stack.strong_typing.schema import object_to_json, StrictJsonType
|
||||
from llama_stack.strong_typing.inspection import is_unwrapped_body_param
|
||||
from llama_stack.core.resolver import api_protocol_map
|
||||
|
||||
from .generator import Generator
|
||||
|
|
@ -205,6 +206,14 @@ def _validate_has_return_in_docstring(method) -> str | None:
|
|||
def _validate_has_params_in_docstring(method) -> str | None:
|
||||
source = inspect.getsource(method)
|
||||
sig = inspect.signature(method)
|
||||
|
||||
params_list = [p for p in sig.parameters.values() if p.name != "self"]
|
||||
if len(params_list) == 1:
|
||||
param = params_list[0]
|
||||
param_type = param.annotation
|
||||
if is_unwrapped_body_param(param_type):
|
||||
return
|
||||
|
||||
# Only check if the method has more than one parameter
|
||||
if len(sig.parameters) > 1 and ":param" not in source:
|
||||
return "does not have a ':param' in its docstring"
|
||||
|
|
|
|||
|
|
@ -30,3 +30,5 @@ fi
|
|||
stack_dir=$(dirname $(dirname $THIS_DIR))
|
||||
PYTHONPATH=$PYTHONPATH:$stack_dir \
|
||||
python -m docs.openapi_generator.generate $(dirname $THIS_DIR)/static
|
||||
|
||||
cp $stack_dir/docs/static/stainless-llama-stack-spec.yaml $stack_dir/client-sdks/stainless/openapi.yml
|
||||
|
|
|
|||
docs/package-lock.json (generated, 1044 lines changed): diff suppressed because it is too large.
|
|
@ -15,7 +15,8 @@
|
|||
"gen-api-docs": "docusaurus gen-api-docs",
|
||||
"clean-api-docs": "docusaurus clean-api-docs",
|
||||
"gen-api-docs:version": "docusaurus gen-api-docs:version",
|
||||
"clean-api-docs:version": "docusaurus clean-api-docs:version"
|
||||
"clean-api-docs:version": "docusaurus clean-api-docs:version",
|
||||
"sync-files": "node scripts/sync-files.js"
|
||||
},
|
||||
"dependencies": {
|
||||
"@docusaurus/core": "3.8.1",
|
||||
|
|
@ -27,7 +28,8 @@
|
|||
"docusaurus-theme-openapi-docs": "4.3.7",
|
||||
"prism-react-renderer": "^2.3.0",
|
||||
"react": "^19.0.0",
|
||||
"react-dom": "^19.0.0"
|
||||
"react-dom": "^19.0.0",
|
||||
"remark-code-import": "^1.2.0"
|
||||
},
|
||||
"browserslist": {
|
||||
"production": [
|
||||
|
|
@ -40,5 +42,9 @@
|
|||
"last 1 firefox version",
|
||||
"last 1 safari version"
|
||||
]
|
||||
},
|
||||
"devDependencies": {
|
||||
"raw-loader": "^4.0.2",
|
||||
"react-markdown": "^10.1.0"
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -1,366 +1,399 @@
|
|||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c1e7571c",
|
||||
"metadata": {
|
||||
"id": "c1e7571c"
|
||||
},
|
||||
"source": [
|
||||
"[](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)\n",
|
||||
"\n",
|
||||
"# Llama Stack - Building AI Applications\n",
|
||||
"\n",
|
||||
"<img src=\"https://llamastack.github.io/latest/_images/llama-stack.png\" alt=\"drawing\" width=\"500\"/>\n",
|
||||
"\n",
|
||||
"Get started with Llama Stack in minutes!\n",
|
||||
"\n",
|
||||
"[Llama Stack](https://github.com/meta-llama/llama-stack) is a stateful service with REST APIs to support the seamless transition of AI applications across different environments. You can build and test using a local server first and deploy to a hosted endpoint for production.\n",
|
||||
"\n",
|
||||
"In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)\n",
|
||||
"as the inference [provider](docs/source/providers/index.md#inference) for a Llama Model.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4CV1Q19BDMVw",
|
||||
"metadata": {
|
||||
"id": "4CV1Q19BDMVw"
|
||||
},
|
||||
"source": [
|
||||
"## Step 1: Install and setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "K4AvfUAJZOeS",
|
||||
"metadata": {
|
||||
"id": "K4AvfUAJZOeS"
|
||||
},
|
||||
"source": [
|
||||
"### 1.1. Install uv and test inference with Ollama\n",
|
||||
"\n",
|
||||
"We'll install [uv](https://docs.astral.sh/uv/) to setup the Python virtual environment, along with [colab-xterm](https://github.com/InfuseAI/colab-xterm) for running command-line tools, and [Ollama](https://ollama.com/download) as the inference provider."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7a2d7b85",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install uv llama_stack llama-stack-client\n",
|
||||
"\n",
|
||||
"## If running on Collab:\n",
|
||||
"# !pip install colab-xterm\n",
|
||||
"# %load_ext colabxterm\n",
|
||||
"\n",
|
||||
"!curl https://ollama.ai/install.sh | sh"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "39fa584b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 1.2. Test inference with Ollama"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3bf81522",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We’ll now launch a terminal and run inference on a Llama model with Ollama to verify that the model is working correctly."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a7e8e0f1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"## If running on Colab:\n",
|
||||
"# %xterm\n",
|
||||
"\n",
|
||||
"## To be ran in the terminal:\n",
|
||||
"# ollama serve &\n",
|
||||
"# ollama run llama3.2:3b --keepalive 60m"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f3c5f243",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If successful, you should see the model respond to a prompt.\n",
|
||||
"\n",
|
||||
"...\n",
|
||||
"```\n",
|
||||
">>> hi\n",
|
||||
"Hello! How can I assist you today?\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "oDUB7M_qe-Gs",
|
||||
"metadata": {
|
||||
"id": "oDUB7M_qe-Gs"
|
||||
},
|
||||
"source": [
|
||||
"## Step 2: Run the Llama Stack server\n",
|
||||
"\n",
|
||||
"In this showcase, we will start a Llama Stack server that is running locally."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "732eadc6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 2.1. Setup the Llama Stack Server"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "J2kGed0R5PSf",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"collapsed": true,
|
||||
"id": "J2kGed0R5PSf",
|
||||
"outputId": "2478ea60-8d35-48a1-b011-f233831740c5"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import subprocess\n",
|
||||
"\n",
|
||||
"if \"UV_SYSTEM_PYTHON\" in os.environ:\n",
|
||||
" del os.environ[\"UV_SYSTEM_PYTHON\"]\n",
|
||||
"\n",
|
||||
"# this command installs all the dependencies needed for the llama stack server with the ollama inference provider\n",
|
||||
"!uv run --with llama-stack llama stack build --distro starter\n",
|
||||
"\n",
|
||||
"def run_llama_stack_server_background():\n",
|
||||
" log_file = open(\"llama_stack_server.log\", \"w\")\n",
|
||||
" process = subprocess.Popen(\n",
|
||||
" f\"OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter\n",
|
||||
" shell=True,\n",
|
||||
" stdout=log_file,\n",
|
||||
" stderr=log_file,\n",
|
||||
" text=True\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" print(f\"Starting Llama Stack server with PID: {process.pid}\")\n",
|
||||
" return process\n",
|
||||
"\n",
|
||||
"def wait_for_server_to_start():\n",
|
||||
" import requests\n",
|
||||
" from requests.exceptions import ConnectionError\n",
|
||||
" import time\n",
|
||||
"\n",
|
||||
" url = \"http://0.0.0.0:8321/v1/health\"\n",
|
||||
" max_retries = 30\n",
|
||||
" retry_interval = 1\n",
|
||||
"\n",
|
||||
" print(\"Waiting for server to start\", end=\"\")\n",
|
||||
" for _ in range(max_retries):\n",
|
||||
" try:\n",
|
||||
" response = requests.get(url)\n",
|
||||
" if response.status_code == 200:\n",
|
||||
" print(\"\\nServer is ready!\")\n",
|
||||
" return True\n",
|
||||
" except ConnectionError:\n",
|
||||
" print(\".\", end=\"\", flush=True)\n",
|
||||
" time.sleep(retry_interval)\n",
|
||||
"\n",
|
||||
" print(\"\\nServer failed to start after\", max_retries * retry_interval, \"seconds\")\n",
|
||||
" return False\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# use this helper if needed to kill the server\n",
|
||||
"def kill_llama_stack_server():\n",
|
||||
" # Kill any existing llama stack server processes\n",
|
||||
" os.system(\"ps aux | grep -v grep | grep llama_stack.core.server.server | awk '{print $2}' | xargs kill -9\")\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c40e9efd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 2.2. Start the Llama Stack Server"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "f779283d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Starting Llama Stack server with PID: 787100\n",
|
||||
"Waiting for server to start\n",
|
||||
"Server is ready!\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"server_process = run_llama_stack_server_background()\n",
|
||||
"assert wait_for_server_to_start()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "28477c03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Step 3: Run the demo"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "7da71011",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"rag_tool> Ingesting document: https://www.paulgraham.com/greatwork.html\n",
|
||||
"prompt> How do you do great work?\n",
|
||||
"\u001b[33minference> \u001b[0m\u001b[33m[k\u001b[0m\u001b[33mnowledge\u001b[0m\u001b[33m_search\u001b[0m\u001b[33m(query\u001b[0m\u001b[33m=\"\u001b[0m\u001b[33mWhat\u001b[0m\u001b[33m is\u001b[0m\u001b[33m the\u001b[0m\u001b[33m key\u001b[0m\u001b[33m to\u001b[0m\u001b[33m doing\u001b[0m\u001b[33m great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m\")]\u001b[0m\u001b[97m\u001b[0m\n",
|
||||
"\u001b[32mtool_execution> Tool:knowledge_search Args:{'query': 'What is the key to doing great work'}\u001b[0m\n",
|
||||
"\u001b[32mtool_execution> Tool:knowledge_search Response:[TextContentItem(text='knowledge_search tool found 5 chunks:\\nBEGIN of knowledge_search tool results.\\n', type='text'), TextContentItem(text=\"Result 1:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 2:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 3:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 4:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text=\"Result 5:\\nDocument_id:docum\\nContent: work. Doing great work means doing something important\\nso well that you expand people's ideas of what's possible. But\\nthere's no threshold for importance. It's a matter of degree, and\\noften hard to judge at the time anyway.\\n\", type='text'), TextContentItem(text='END of knowledge_search tool results.\\n', type='text'), TextContentItem(text='The above results were retrieved to help answer the user\\'s query: \"What is the key to doing great work\". Use them as supporting information only in answering this query.\\n', type='text')]\u001b[0m\n",
|
||||
"\u001b[33minference> \u001b[0m\u001b[33mDoing\u001b[0m\u001b[33m great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m means\u001b[0m\u001b[33m doing\u001b[0m\u001b[33m something\u001b[0m\u001b[33m important\u001b[0m\u001b[33m so\u001b[0m\u001b[33m well\u001b[0m\u001b[33m that\u001b[0m\u001b[33m you\u001b[0m\u001b[33m expand\u001b[0m\u001b[33m people\u001b[0m\u001b[33m's\u001b[0m\u001b[33m ideas\u001b[0m\u001b[33m of\u001b[0m\u001b[33m what\u001b[0m\u001b[33m's\u001b[0m\u001b[33m possible\u001b[0m\u001b[33m.\u001b[0m\u001b[33m However\u001b[0m\u001b[33m,\u001b[0m\u001b[33m there\u001b[0m\u001b[33m's\u001b[0m\u001b[33m no\u001b[0m\u001b[33m threshold\u001b[0m\u001b[33m for\u001b[0m\u001b[33m importance\u001b[0m\u001b[33m,\u001b[0m\u001b[33m and\u001b[0m\u001b[33m it\u001b[0m\u001b[33m's\u001b[0m\u001b[33m often\u001b[0m\u001b[33m hard\u001b[0m\u001b[33m to\u001b[0m\u001b[33m judge\u001b[0m\u001b[33m at\u001b[0m\u001b[33m the\u001b[0m\u001b[33m time\u001b[0m\u001b[33m anyway\u001b[0m\u001b[33m.\u001b[0m\u001b[33m Great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m is\u001b[0m\u001b[33m a\u001b[0m\u001b[33m matter\u001b[0m\u001b[33m of\u001b[0m\u001b[33m degree\u001b[0m\u001b[33m,\u001b[0m\u001b[33m and\u001b[0m\u001b[33m it\u001b[0m\u001b[33m can\u001b[0m\u001b[33m be\u001b[0m\u001b[33m difficult\u001b[0m\u001b[33m to\u001b[0m\u001b[33m determine\u001b[0m\u001b[33m whether\u001b[0m\u001b[33m someone\u001b[0m\u001b[33m has\u001b[0m\u001b[33m done\u001b[0m\u001b[33m great\u001b[0m\u001b[33m work\u001b[0m\u001b[33m until\u001b[0m\u001b[33m after\u001b[0m\u001b[33m the\u001b[0m\u001b[33m fact\u001b[0m\u001b[33m.\u001b[0m\u001b[97m\u001b[0m\n",
|
||||
"\u001b[30m\u001b[0m"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient\n",
|
||||
"\n",
|
||||
"vector_db_id = \"my_demo_vector_db\"\n",
|
||||
"client = LlamaStackClient(base_url=\"http://0.0.0.0:8321\")\n",
|
||||
"\n",
|
||||
"models = client.models.list()\n",
|
||||
"\n",
|
||||
"# Select the first ollama and first ollama's embedding model\n",
|
||||
"model_id = next(m for m in models if m.model_type == \"llm\" and m.provider_id == \"ollama\").identifier\n",
|
||||
"embedding_model = next(m for m in models if m.model_type == \"embedding\" and m.provider_id == \"ollama\")\n",
|
||||
"embedding_model_id = embedding_model.identifier\n",
|
||||
"embedding_dimension = embedding_model.metadata[\"embedding_dimension\"]\n",
|
||||
"\n",
|
||||
"_ = client.vector_dbs.register(\n",
|
||||
" vector_db_id=vector_db_id,\n",
|
||||
" embedding_model=embedding_model_id,\n",
|
||||
" embedding_dimension=embedding_dimension,\n",
|
||||
" provider_id=\"faiss\",\n",
|
||||
")\n",
|
||||
"source = \"https://www.paulgraham.com/greatwork.html\"\n",
|
||||
"print(\"rag_tool> Ingesting document:\", source)\n",
|
||||
"document = RAGDocument(\n",
|
||||
" document_id=\"document_1\",\n",
|
||||
" content=source,\n",
|
||||
" mime_type=\"text/html\",\n",
|
||||
" metadata={},\n",
|
||||
")\n",
|
||||
"client.tool_runtime.rag_tool.insert(\n",
|
||||
" documents=[document],\n",
|
||||
" vector_db_id=vector_db_id,\n",
|
||||
" chunk_size_in_tokens=50,\n",
|
||||
")\n",
|
||||
"agent = Agent(\n",
|
||||
" client,\n",
|
||||
" model=model_id,\n",
|
||||
" instructions=\"You are a helpful assistant\",\n",
|
||||
" tools=[\n",
|
||||
" {\n",
|
||||
" \"name\": \"builtin::rag/knowledge_search\",\n",
|
||||
" \"args\": {\"vector_db_ids\": [vector_db_id]},\n",
|
||||
" }\n",
|
||||
" ],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"prompt = \"How do you do great work?\"\n",
|
||||
"print(\"prompt>\", prompt)\n",
|
||||
"\n",
|
||||
"response = agent.create_turn(\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
|
||||
" session_id=agent.create_session(\"rag_session\"),\n",
|
||||
" stream=True,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"for log in AgentEventLogger().log(response):\n",
|
||||
" log.print()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "341aaadf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e88e1185",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Next Steps"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bcb73600",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now you're ready to dive deeper into Llama Stack!\n",
|
||||
"- Explore the [Detailed Tutorial](./detailed_tutorial.md).\n",
|
||||
"- Try the [Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).\n",
|
||||
"- Browse more [Notebooks on GitHub](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks).\n",
|
||||
"- Learn about Llama Stack [Concepts](../concepts/index.md).\n",
|
||||
"- Discover how to [Build Llama Stacks](../distributions/index.md).\n",
|
||||
"- Refer to our [References](../references/index.md) for details on the Llama CLI and Python SDK.\n",
|
||||
"- Check out the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository for example applications and tutorials."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"accelerator": "GPU",
|
||||
"colab": {
|
||||
"gpuType": "T4",
|
||||
"provenance": []
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.6"
|
||||
}
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c1e7571c",
|
||||
"metadata": {
|
||||
"id": "c1e7571c"
|
||||
},
|
||||
"source": [
|
||||
"[](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)\n",
|
||||
"\n",
|
||||
"# Llama Stack - Building AI Applications\n",
|
||||
"\n",
|
||||
"<img src=\"https://llamastack.github.io/latest/_images/llama-stack.png\" alt=\"drawing\" width=\"500\"/>\n",
|
||||
"\n",
|
||||
"Get started with Llama Stack in minutes!\n",
|
||||
"\n",
|
||||
"[Llama Stack](https://github.com/meta-llama/llama-stack) is a stateful service with REST APIs to support the seamless transition of AI applications across different environments. You can build and test using a local server first and deploy to a hosted endpoint for production.\n",
|
||||
"\n",
|
||||
"In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)\n",
|
||||
"as the inference [provider](docs/source/providers/index.md#inference) for a Llama Model.\n"
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4CV1Q19BDMVw",
|
||||
"metadata": {
|
||||
"id": "4CV1Q19BDMVw"
|
||||
},
|
||||
"source": [
|
||||
"## Step 1: Install and setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "K4AvfUAJZOeS",
|
||||
"metadata": {
|
||||
"id": "K4AvfUAJZOeS"
|
||||
},
|
||||
"source": [
|
||||
"### 1.1. Install uv and test inference with Ollama\n",
|
||||
"\n",
|
||||
"We'll install [uv](https://docs.astral.sh/uv/) to setup the Python virtual environment, along with [colab-xterm](https://github.com/InfuseAI/colab-xterm) for running command-line tools, and [Ollama](https://ollama.com/download) as the inference provider."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7a2d7b85",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install uv llama_stack llama-stack-client\n",
|
||||
"\n",
|
||||
"## If running on Collab:\n",
|
||||
"# !pip install colab-xterm\n",
|
||||
"# %load_ext colabxterm\n",
|
||||
"\n",
|
||||
"!curl https://ollama.ai/install.sh | sh"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "39fa584b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 1.2. Test inference with Ollama"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3bf81522",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We’ll now launch a terminal and run inference on a Llama model with Ollama to verify that the model is working correctly."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "a7e8e0f1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"## If running on Colab:\n",
|
||||
"# %xterm\n",
|
||||
"\n",
|
||||
"## To be ran in the terminal:\n",
|
||||
"# ollama serve &\n",
|
||||
"# ollama run llama3.2:3b --keepalive 60m"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f3c5f243",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If successful, you should see the model respond to a prompt.\n",
|
||||
"\n",
|
||||
"...\n",
|
||||
"```\n",
|
||||
">>> hi\n",
|
||||
"Hello! How can I assist you today?\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "oDUB7M_qe-Gs",
|
||||
"metadata": {
|
||||
"id": "oDUB7M_qe-Gs"
|
||||
},
|
||||
"source": [
|
||||
"## Step 2: Run the Llama Stack server\n",
|
||||
"\n",
|
||||
"In this showcase, we will start a Llama Stack server that is running locally."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "732eadc6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 2.1. Setup the Llama Stack Server"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "J2kGed0R5PSf",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"id": "J2kGed0R5PSf",
|
||||
"outputId": "2478ea60-8d35-48a1-b011-f233831740c5"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
|
||||
"\u001b[2mAudited \u001b[1m52 packages\u001b[0m \u001b[2min 1.56s\u001b[0m\u001b[0m\n",
|
||||
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
|
||||
"\u001b[2mAudited \u001b[1m3 packages\u001b[0m \u001b[2min 122ms\u001b[0m\u001b[0m\n",
|
||||
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
|
||||
"\u001b[2mAudited \u001b[1m3 packages\u001b[0m \u001b[2min 197ms\u001b[0m\u001b[0m\n",
|
||||
"\u001b[2mUsing Python 3.12.12 environment at: /opt/homebrew/Caskroom/miniconda/base/envs/test\u001b[0m\n",
|
||||
"\u001b[2mAudited \u001b[1m1 package\u001b[0m \u001b[2min 11ms\u001b[0m\u001b[0m\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import subprocess\n",
|
||||
"\n",
|
||||
"if \"UV_SYSTEM_PYTHON\" in os.environ:\n",
|
||||
" del os.environ[\"UV_SYSTEM_PYTHON\"]\n",
|
||||
"\n",
|
||||
"# this command installs all the dependencies needed for the llama stack server with the ollama inference provider\n",
|
||||
"!uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install\n",
|
||||
"\n",
|
||||
"def run_llama_stack_server_background():\n",
|
||||
" log_file = open(\"llama_stack_server.log\", \"w\")\n",
|
||||
" process = subprocess.Popen(\n",
|
||||
" f\"OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter\",\n",
|
||||
" shell=True,\n",
|
||||
" stdout=log_file,\n",
|
||||
" stderr=log_file,\n",
|
||||
" text=True\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
" print(f\"Starting Llama Stack server with PID: {process.pid}\")\n",
|
||||
" return process\n",
|
||||
"\n",
|
||||
"def wait_for_server_to_start():\n",
|
||||
" import requests\n",
|
||||
" from requests.exceptions import ConnectionError\n",
|
||||
" import time\n",
|
||||
"\n",
|
||||
" url = \"http://0.0.0.0:8321/v1/health\"\n",
|
||||
" max_retries = 30\n",
|
||||
" retry_interval = 1\n",
|
||||
"\n",
|
||||
" print(\"Waiting for server to start\", end=\"\")\n",
|
||||
" for _ in range(max_retries):\n",
|
||||
" try:\n",
|
||||
" response = requests.get(url)\n",
|
||||
" if response.status_code == 200:\n",
|
||||
" print(\"\\nServer is ready!\")\n",
|
||||
" return True\n",
|
||||
" except ConnectionError:\n",
|
||||
" print(\".\", end=\"\", flush=True)\n",
|
||||
" time.sleep(retry_interval)\n",
|
||||
"\n",
|
||||
" print(\"\\nServer failed to start after\", max_retries * retry_interval, \"seconds\")\n",
|
||||
" return False\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# use this helper if needed to kill the server\n",
|
||||
"def kill_llama_stack_server():\n",
|
||||
" # Kill any existing llama stack server processes\n",
|
||||
" os.system(\"ps aux | grep -v grep | grep llama_stack.core.server.server | awk '{print $2}' | xargs kill -9\")\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c40e9efd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 2.2. Start the Llama Stack Server"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "f779283d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Starting Llama Stack server with PID: 20778\n",
|
||||
"Waiting for server to start........\n",
|
||||
"Server is ready!\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"server_process = run_llama_stack_server_background()\n",
|
||||
"assert wait_for_server_to_start()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "28477c03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Step 3: Run the demo"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "7da71011",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"INFO:httpx:HTTP Request: GET http://0.0.0.0:8321/v1/models \"HTTP/1.1 200 OK\"\n",
|
||||
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/files \"HTTP/1.1 200 OK\"\n",
|
||||
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/vector_stores \"HTTP/1.1 200 OK\"\n",
|
||||
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/conversations \"HTTP/1.1 200 OK\"\n",
|
||||
"INFO:httpx:HTTP Request: POST http://0.0.0.0:8321/v1/responses \"HTTP/1.1 200 OK\"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"prompt> How do you do great work?\n",
|
||||
"🤔 Doing great work involves a combination of skills, habits, and mindsets. Here are some key principles:\n",
|
||||
"\n",
|
||||
"1. **Set Clear Goals**: Start with a clear vision of what you want to achieve. Define specific, measurable, achievable, relevant, and time-bound (SMART) goals.\n",
|
||||
"\n",
|
||||
"2. **Plan and Prioritize**: Break your goals into smaller, manageable tasks. Prioritize these tasks based on their importance and urgency.\n",
|
||||
"\n",
|
||||
"3. **Focus on Quality**: Aim for high-quality outcomes rather than just finishing tasks. Pay attention to detail, and ensure your work meets or exceeds standards.\n",
|
||||
"\n",
|
||||
"4. **Stay Organized**: Keep your workspace, both physical and digital, organized to help you stay focused and efficient.\n",
|
||||
"\n",
|
||||
"5. **Manage Your Time**: Use time management techniques such as the Pomodoro Technique, time blocking, or the Eisenhower Box to maximize productivity.\n",
|
||||
"\n",
|
||||
"6. **Seek Feedback and Learn**: Regularly seek feedback from peers, mentors, or supervisors. Use constructive criticism to improve continuously.\n",
|
||||
"\n",
|
||||
"7. **Innovate and Improve**: Look for ways to improve processes or introduce new ideas. Be open to change and willing to adapt.\n",
|
||||
"\n",
|
||||
"8. **Stay Motivated and Persistent**: Keep your end goals in mind to stay motivated. Overcome setbacks with resilience and persistence.\n",
|
||||
"\n",
|
||||
"9. **Balance and Rest**: Ensure you maintain a healthy work-life balance. Take breaks and manage stress to sustain long-term productivity.\n",
|
||||
"\n",
|
||||
"10. **Reflect and Adjust**: Regularly assess your progress and adjust your strategies as needed. Reflect on what works well and what doesn't.\n",
|
||||
"\n",
|
||||
"By incorporating these elements, you can consistently produce high-quality work and achieve excellence in your endeavors.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from llama_stack_client import Agent, AgentEventLogger, RAGDocument, LlamaStackClient\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"vector_store_id = \"my_demo_vector_db\"\n",
|
||||
"client = LlamaStackClient(base_url=\"http://0.0.0.0:8321\")\n",
|
||||
"\n",
|
||||
"models = client.models.list()\n",
|
||||
"\n",
|
||||
"# Select the first ollama and first ollama's embedding model\n",
|
||||
"model_id = next(m for m in models if m.model_type == \"llm\" and m.provider_id == \"ollama\").identifier\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"source = \"https://www.paulgraham.com/greatwork.html\"\n",
|
||||
"response = requests.get(source)\n",
|
||||
"file = client.files.create(\n",
|
||||
" file=response.content,\n",
|
||||
" purpose='assistants'\n",
|
||||
")\n",
|
||||
"vector_store = client.vector_stores.create(\n",
|
||||
" name=vector_store_id,\n",
|
||||
" file_ids=[file.id],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"agent = Agent(\n",
|
||||
" client,\n",
|
||||
" model=model_id,\n",
|
||||
" instructions=\"You are a helpful assistant\",\n",
|
||||
" tools=[\n",
|
||||
" {\n",
|
||||
" \"type\": \"file_search\",\n",
|
||||
" \"vector_store_ids\": [vector_store_id],\n",
|
||||
" }\n",
|
||||
" ],\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"prompt = \"How do you do great work?\"\n",
|
||||
"print(\"prompt>\", prompt)\n",
|
||||
"\n",
|
||||
"response = agent.create_turn(\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": prompt}],\n",
|
||||
" session_id=agent.create_session(\"rag_session\"),\n",
|
||||
" stream=True,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"for log in AgentEventLogger().log(response):\n",
|
||||
" print(log, end=\"\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "341aaadf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e88e1185",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Next Steps"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bcb73600",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now you're ready to dive deeper into Llama Stack!\n",
|
||||
"- Explore the [Detailed Tutorial](./detailed_tutorial.md).\n",
|
||||
"- Try the [Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).\n",
|
||||
"- Browse more [Notebooks on GitHub](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks).\n",
|
||||
"- Learn about Llama Stack [Concepts](../concepts/index.md).\n",
|
||||
"- Discover how to [Build Llama Stacks](../distributions/index.md).\n",
|
||||
"- Refer to our [References](../references/index.md) for details on the Llama CLI and Python SDK.\n",
|
||||
"- Check out the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository for example applications and tutorials."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"accelerator": "GPU",
|
||||
"colab": {
|
||||
"gpuType": "T4",
|
||||
"provenance": []
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.12.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
|
|
|
|||
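Because the updated demo cell is easier to read outside of the notebook JSON, here is the same flow as plain Python. It only restates the calls shown in the diff above (file upload, vector store creation, and an agent configured with the `file_search` tool); nothing new is introduced.

```python
import requests
from llama_stack_client import Agent, AgentEventLogger, LlamaStackClient

client = LlamaStackClient(base_url="http://0.0.0.0:8321")

# Pick the first Ollama LLM registered with the server.
model_id = next(
    m for m in client.models.list() if m.model_type == "llm" and m.provider_id == "ollama"
).identifier

# Upload the source document and index it into a vector store.
source = "https://www.paulgraham.com/greatwork.html"
file = client.files.create(file=requests.get(source).content, purpose="assistants")
vector_store = client.vector_stores.create(name="my_demo_vector_db", file_ids=[file.id])

# Agent with the file_search tool pointed at the vector store.
agent = Agent(
    client,
    model=model_id,
    instructions="You are a helpful assistant",
    tools=[{"type": "file_search", "vector_store_ids": ["my_demo_vector_db"]}],
)

response = agent.create_turn(
    messages=[{"role": "user", "content": "How do you do great work?"}],
    session_id=agent.create_session("rag_session"),
    stream=True,
)
for log in AgentEventLogger().log(response):
    print(log, end="")
```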
docs/scripts/sync-files.js (new executable file, 145 lines)
|
|
@ -0,0 +1,145 @@
|
|||
#!/usr/bin/env node
|
||||
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
|
||||
// Repository root is always one level up from docs
|
||||
const repoRoot = path.join(__dirname, '..', '..');
|
||||
|
||||
// Get all requested files from the usage tracking file
|
||||
function getRequestedFiles() {
|
||||
const usageFile = path.join(__dirname, '..', 'static', 'imported-files', 'usage.json');
|
||||
if (!fs.existsSync(usageFile)) {
|
||||
return [];
|
||||
}
|
||||
|
||||
try {
|
||||
const usage = JSON.parse(fs.readFileSync(usageFile, 'utf8'));
|
||||
return usage.files || [];
|
||||
} catch (error) {
|
||||
console.warn('Could not read usage file:', error.message);
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
// Track file usage
|
||||
function trackFileUsage(filePath) {
|
||||
const usageFile = path.join(__dirname, '..', 'static', 'imported-files', 'usage.json');
|
||||
const usageDir = path.dirname(usageFile);
|
||||
|
||||
// Ensure directory exists
|
||||
if (!fs.existsSync(usageDir)) {
|
||||
fs.mkdirSync(usageDir, { recursive: true });
|
||||
}
|
||||
|
||||
let usage = { files: [] };
|
||||
if (fs.existsSync(usageFile)) {
|
||||
try {
|
||||
usage = JSON.parse(fs.readFileSync(usageFile, 'utf8'));
|
||||
} catch (error) {
|
||||
console.warn('Could not read existing usage file, creating new one');
|
||||
}
|
||||
}
|
||||
|
||||
if (!usage.files.includes(filePath)) {
|
||||
usage.files.push(filePath);
|
||||
fs.writeFileSync(usageFile, JSON.stringify(usage, null, 2));
|
||||
}
|
||||
}
|
||||
|
||||
// Filter content based on file type and options
|
||||
function filterContent(content, filePath) {
|
||||
let lines = content.split('\n');
|
||||
|
||||
// Skip copyright header for Python files
|
||||
if (filePath.endsWith('.py')) {
|
||||
// Read the license header file
|
||||
const licenseHeaderPath = path.join(repoRoot, 'docs', 'license_header.txt');
|
||||
if (fs.existsSync(licenseHeaderPath)) {
|
||||
try {
|
||||
const licenseText = fs.readFileSync(licenseHeaderPath, 'utf8');
|
||||
const licenseLines = licenseText.trim().split('\n');
|
||||
|
||||
// Check if file starts with the license header (accounting for # comments)
|
||||
if (lines.length >= licenseLines.length) {
|
||||
let matches = true;
|
||||
for (let i = 0; i < licenseLines.length; i++) {
|
||||
const codeLine = lines[i]?.replace(/^#\s*/, '').trim();
|
||||
const licenseLine = licenseLines[i]?.trim();
|
||||
if (codeLine !== licenseLine) {
|
||||
matches = false;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (matches) {
|
||||
// Skip the license header and any trailing empty lines
|
||||
let skipTo = licenseLines.length;
|
||||
while (skipTo < lines.length && lines[skipTo].trim() === '') {
|
||||
skipTo++;
|
||||
}
|
||||
lines = lines.slice(skipTo);
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
console.warn(`Could not read license header, skipping filtering for ${filePath}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Trim empty lines from start and end
|
||||
while (lines.length > 0 && lines[0].trim() === '') {
|
||||
lines.shift();
|
||||
}
|
||||
while (lines.length > 0 && lines[lines.length - 1].trim() === '') {
|
||||
lines.pop();
|
||||
}
|
||||
|
||||
return lines.join('\n');
|
||||
}
|
||||
|
||||
// Sync a file from repo root to static directory
|
||||
function syncFile(filePath) {
|
||||
const sourcePath = path.join(repoRoot, filePath);
|
||||
const destPath = path.join(__dirname, '..', 'static', 'imported-files', filePath);
|
||||
const destDir = path.dirname(destPath);
|
||||
|
||||
// Ensure destination directory exists
|
||||
if (!fs.existsSync(destDir)) {
|
||||
fs.mkdirSync(destDir, { recursive: true });
|
||||
}
|
||||
|
||||
try {
|
||||
if (fs.existsSync(sourcePath)) {
|
||||
const content = fs.readFileSync(sourcePath, 'utf8');
|
||||
const filteredContent = filterContent(content, filePath);
|
||||
fs.writeFileSync(destPath, filteredContent);
|
||||
console.log(`✅ Synced ${filePath}`);
|
||||
trackFileUsage(filePath);
|
||||
return true;
|
||||
} else {
|
||||
console.warn(`⚠️ Source file not found: ${sourcePath}`);
|
||||
return false;
|
||||
}
|
||||
} catch (error) {
|
||||
console.error(`❌ Error syncing ${filePath}:`, error.message);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
// Main execution
|
||||
console.log(`📁 Repository root: ${path.resolve(repoRoot)}`);
|
||||
|
||||
// Get files that are being requested by the documentation
|
||||
const requestedFiles = getRequestedFiles();
|
||||
console.log(`📄 Syncing ${requestedFiles.length} requested files...`);
|
||||
|
||||
if (requestedFiles.length === 0) {
|
||||
console.log('ℹ️ No files requested yet. Files will be synced when first referenced in documentation.');
|
||||
} else {
|
||||
requestedFiles.forEach(filePath => {
|
||||
syncFile(filePath);
|
||||
});
|
||||
}
|
||||
|
||||
console.log('✅ File sync complete!');
|
||||
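As a reading aid for `filterContent()` above, this is a rough Python equivalent of its Python-file rule, assuming the same `docs/license_header.txt` layout: the leading comment lines are compared, with their `#` prefixes stripped, against the license header, and the header plus any trailing blank lines is removed only on an exact match. The function below is a sketch for illustration, not part of the repository.

```python
import re
from pathlib import Path


def strip_license_header(source: str, license_path: Path) -> str:
    lines = source.split("\n")
    license_lines = license_path.read_text().strip().split("\n")

    if len(lines) >= len(license_lines):
        # Compare each leading line with its comment marker removed.
        matches = all(
            re.sub(r"^#\s*", "", lines[i]).strip() == license_lines[i].strip()
            for i in range(len(license_lines))
        )
        if matches:
            skip_to = len(license_lines)
            # Also drop blank lines that immediately follow the header.
            while skip_to < len(lines) and not lines[skip_to].strip():
                skip_to += 1
            lines = lines[skip_to:]

    return "\n".join(lines)
```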
docs/src/components/CodeFromFile.jsx (new file, 93 lines)
|
|
@ -0,0 +1,93 @@
|
|||
import React, { useState, useEffect } from 'react';
|
||||
import CodeBlock from '@theme/CodeBlock';
|
||||
|
||||
export default function CodeFromFile({
|
||||
src,
|
||||
language = 'python',
|
||||
title,
|
||||
startLine,
|
||||
endLine,
|
||||
highlightLines
|
||||
}) {
|
||||
const [content, setContent] = useState('');
|
||||
const [error, setError] = useState(null);
|
||||
|
||||
useEffect(() => {
|
||||
async function loadFile() {
|
||||
try {
|
||||
// File registration is now handled by the file-sync-plugin during build
|
||||
|
||||
// Load file from static/imported-files directory
|
||||
const response = await fetch(`/imported-files/${src}`);
|
||||
if (!response.ok) {
|
||||
throw new Error(`Failed to fetch: ${response.status}`);
|
||||
}
|
||||
let text = await response.text();
|
||||
|
||||
// Handle line range if specified (filtering is done at build time)
|
||||
if (startLine || endLine) {
|
||||
const lines = text.split('\n');
|
||||
const start = startLine ? Math.max(0, startLine - 1) : 0;
|
||||
const end = endLine ? Math.min(lines.length, endLine) : lines.length;
|
||||
text = lines.slice(start, end).join('\n');
|
||||
}
|
||||
|
||||
setContent(text);
|
||||
} catch (err) {
|
||||
console.error('Failed to load file:', err);
|
||||
setError(`Failed to load ${src}: ${err.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
loadFile();
|
||||
}, [src, startLine, endLine]);
|
||||
|
||||
if (error) {
|
||||
return <div style={{ color: 'red', padding: '1rem', border: '1px solid red', borderRadius: '4px' }}>
|
||||
Error: {error}
|
||||
</div>;
|
||||
}
|
||||
|
||||
if (!content) {
|
||||
return <div>Loading {src}...</div>;
|
||||
}
|
||||
|
||||
// Auto-detect language from file extension if not provided
|
||||
const detectedLanguage = language || getLanguageFromExtension(src);
|
||||
|
||||
return (
|
||||
<CodeBlock
|
||||
language={detectedLanguage}
|
||||
title={title || src}
|
||||
metastring={highlightLines ? `{${highlightLines}}` : undefined}
|
||||
>
|
||||
{content}
|
||||
</CodeBlock>
|
||||
);
|
||||
}
|
||||
|
||||
function getLanguageFromExtension(filename) {
|
||||
const ext = filename.split('.').pop();
|
||||
const languageMap = {
|
||||
'py': 'python',
|
||||
'js': 'javascript',
|
||||
'jsx': 'jsx',
|
||||
'ts': 'typescript',
|
||||
'tsx': 'tsx',
|
||||
'md': 'markdown',
|
||||
'sh': 'bash',
|
||||
'yaml': 'yaml',
|
||||
'yml': 'yaml',
|
||||
'json': 'json',
|
||||
'css': 'css',
|
||||
'html': 'html',
|
||||
'cpp': 'cpp',
|
||||
'c': 'c',
|
||||
'java': 'java',
|
||||
'go': 'go',
|
||||
'rs': 'rust',
|
||||
'php': 'php',
|
||||
'rb': 'ruby',
|
||||
};
|
||||
return languageMap[ext] || 'text';
|
||||
}
|
||||
|
|
@ -47,11 +47,11 @@ function QuickStart() {
|
|||
<pre><code>{`# Install uv and start Ollama
|
||||
ollama run llama3.2:3b --keepalive 60m
|
||||
|
||||
# Install server dependencies
|
||||
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install
|
||||
|
||||
# Run Llama Stack server
|
||||
OLLAMA_URL=http://localhost:11434 \\
|
||||
uv run --with llama-stack \\
|
||||
llama stack build --distro starter \\
|
||||
--image-type venv --run
|
||||
OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter
|
||||
|
||||
# Try the Python SDK
|
||||
from llama_stack_client import LlamaStackClient
|
||||
|
|
|
|||
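The quickstart snippet above is cut off by the diff context right after the `LlamaStackClient` import. A minimal continuation, using only calls that already appear elsewhere in this commit (client construction against the local server and `models.list()`), would look roughly like this; treat it as a sketch rather than the landing page's exact code.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# List the models registered with the locally running starter distribution.
for model in client.models.list():
    print(model.identifier, model.model_type)
```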
docs/static/deprecated-llama-stack-spec.html (vendored, 1748 lines changed): diff suppressed because it is too large.
docs/static/deprecated-llama-stack-spec.yaml (vendored, 1437 lines changed): diff suppressed because it is too large.
docs/static/experimental-llama-stack-spec.html (vendored, 912 lines changed)
|
|
@ -1711,343 +1711,6 @@
|
|||
},
|
||||
"deprecated": false
|
||||
}
|
||||
},
|
||||
"/v1alpha/telemetry/metrics/{metric_name}": {
|
||||
"post": {
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "A QueryMetricsResponse.",
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/QueryMetricsResponse"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"400": {
|
||||
"$ref": "#/components/responses/BadRequest400"
|
||||
},
|
||||
"429": {
|
||||
"$ref": "#/components/responses/TooManyRequests429"
|
||||
},
|
||||
"500": {
|
||||
"$ref": "#/components/responses/InternalServerError500"
|
||||
},
|
||||
"default": {
|
||||
"$ref": "#/components/responses/DefaultError"
|
||||
}
|
||||
},
|
||||
"tags": [
|
||||
"Telemetry"
|
||||
],
|
||||
"summary": "Query metrics.",
|
||||
"description": "Query metrics.",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "metric_name",
|
||||
"in": "path",
|
||||
"description": "The name of the metric to query.",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
],
|
||||
"requestBody": {
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/QueryMetricsRequest"
|
||||
}
|
||||
}
|
||||
},
|
||||
"required": true
|
||||
},
|
||||
"deprecated": false
|
||||
}
|
||||
},
|
||||
"/v1alpha/telemetry/spans": {
|
||||
"post": {
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "A QuerySpansResponse.",
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/QuerySpansResponse"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"400": {
|
||||
"$ref": "#/components/responses/BadRequest400"
|
||||
},
|
||||
"429": {
|
||||
"$ref": "#/components/responses/TooManyRequests429"
|
||||
},
|
||||
"500": {
|
||||
"$ref": "#/components/responses/InternalServerError500"
|
||||
},
|
||||
"default": {
|
||||
"$ref": "#/components/responses/DefaultError"
|
||||
}
|
||||
},
|
||||
"tags": [
|
||||
"Telemetry"
|
||||
],
|
||||
"summary": "Query spans.",
|
||||
"description": "Query spans.",
|
||||
"parameters": [],
|
||||
"requestBody": {
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/QuerySpansRequest"
|
||||
}
|
||||
}
|
||||
},
|
||||
"required": true
|
||||
},
|
||||
"deprecated": false
|
||||
}
|
||||
},
|
||||
"/v1alpha/telemetry/spans/export": {
|
||||
"post": {
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "OK"
|
||||
},
|
||||
"400": {
|
||||
"$ref": "#/components/responses/BadRequest400"
|
||||
},
|
||||
"429": {
|
||||
"$ref": "#/components/responses/TooManyRequests429"
|
||||
},
|
||||
"500": {
|
||||
"$ref": "#/components/responses/InternalServerError500"
|
||||
},
|
||||
"default": {
|
||||
"$ref": "#/components/responses/DefaultError"
|
||||
}
|
||||
},
|
||||
"tags": [
|
||||
"Telemetry"
|
||||
],
|
||||
"summary": "Save spans to a dataset.",
|
||||
"description": "Save spans to a dataset.",
|
||||
"parameters": [],
|
||||
"requestBody": {
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/SaveSpansToDatasetRequest"
|
||||
}
|
||||
}
|
||||
},
|
||||
"required": true
|
||||
},
|
||||
"deprecated": false
|
||||
}
|
||||
},
|
||||
"/v1alpha/telemetry/spans/{span_id}/tree": {
|
||||
"post": {
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "A QuerySpanTreeResponse.",
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/QuerySpanTreeResponse"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"400": {
|
||||
"$ref": "#/components/responses/BadRequest400"
|
||||
},
|
||||
"429": {
|
||||
"$ref": "#/components/responses/TooManyRequests429"
|
||||
},
|
||||
"500": {
|
||||
"$ref": "#/components/responses/InternalServerError500"
|
||||
},
|
||||
"default": {
|
||||
"$ref": "#/components/responses/DefaultError"
|
||||
}
|
||||
},
|
||||
"tags": [
|
||||
"Telemetry"
|
||||
],
|
||||
"summary": "Get a span tree by its ID.",
|
||||
"description": "Get a span tree by its ID.",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "span_id",
|
||||
"in": "path",
|
||||
"description": "The ID of the span to get the tree from.",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
],
|
||||
"requestBody": {
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/GetSpanTreeRequest"
|
||||
}
|
||||
}
|
||||
},
|
||||
"required": true
|
||||
},
|
||||
"deprecated": false
|
||||
}
|
||||
},
|
||||
"/v1alpha/telemetry/traces": {
|
||||
"post": {
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "A QueryTracesResponse.",
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/QueryTracesResponse"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"400": {
|
||||
"$ref": "#/components/responses/BadRequest400"
|
||||
},
|
||||
"429": {
|
||||
"$ref": "#/components/responses/TooManyRequests429"
|
||||
},
|
||||
"500": {
|
||||
"$ref": "#/components/responses/InternalServerError500"
|
||||
},
|
||||
"default": {
|
||||
"$ref": "#/components/responses/DefaultError"
|
||||
}
|
||||
},
|
||||
"tags": [
|
||||
"Telemetry"
|
||||
],
|
||||
"summary": "Query traces.",
|
||||
"description": "Query traces.",
|
||||
"parameters": [],
|
||||
"requestBody": {
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/QueryTracesRequest"
|
||||
}
|
||||
}
|
||||
},
|
||||
"required": true
|
||||
},
|
||||
"deprecated": false
|
||||
}
|
||||
},
|
||||
"/v1alpha/telemetry/traces/{trace_id}": {
|
||||
"get": {
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "A Trace.",
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/Trace"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"400": {
|
||||
"$ref": "#/components/responses/BadRequest400"
|
||||
},
|
||||
"429": {
|
||||
"$ref": "#/components/responses/TooManyRequests429"
|
||||
},
|
||||
"500": {
|
||||
"$ref": "#/components/responses/InternalServerError500"
|
||||
},
|
||||
"default": {
|
||||
"$ref": "#/components/responses/DefaultError"
|
||||
}
|
||||
},
|
||||
"tags": [
|
||||
"Telemetry"
|
||||
],
|
||||
"summary": "Get a trace by its ID.",
|
||||
"description": "Get a trace by its ID.",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "trace_id",
|
||||
"in": "path",
|
||||
"description": "The ID of the trace to get.",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
],
|
||||
"deprecated": false
|
||||
}
|
||||
},
|
||||
"/v1alpha/telemetry/traces/{trace_id}/spans/{span_id}": {
|
||||
"get": {
|
||||
"responses": {
|
||||
"200": {
|
||||
"description": "A Span.",
|
||||
"content": {
|
||||
"application/json": {
|
||||
"schema": {
|
||||
"$ref": "#/components/schemas/Span"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"400": {
|
||||
"$ref": "#/components/responses/BadRequest400"
|
||||
},
|
||||
"429": {
|
||||
"$ref": "#/components/responses/TooManyRequests429"
|
||||
},
|
||||
"500": {
|
||||
"$ref": "#/components/responses/InternalServerError500"
|
||||
},
|
||||
"default": {
|
||||
"$ref": "#/components/responses/DefaultError"
|
||||
}
|
||||
},
|
||||
"tags": [
|
||||
"Telemetry"
|
||||
],
|
||||
"summary": "Get a span by its ID.",
|
||||
"description": "Get a span by its ID.",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "trace_id",
|
||||
"in": "path",
|
||||
"description": "The ID of the trace to get the span from.",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "span_id",
|
||||
"in": "path",
|
||||
"description": "The ID of the span to get.",
|
||||
"required": true,
|
||||
"schema": {
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
],
|
||||
"deprecated": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"jsonSchemaDialect": "https://json-schema.org/draft/2020-12/schema",
|
||||
|
|
@ -2187,7 +1850,7 @@
|
|||
"enum": [
|
||||
"model",
|
||||
"shield",
|
||||
"vector_db",
|
||||
"vector_store",
|
||||
"dataset",
|
||||
"scoring_function",
|
||||
"benchmark",
|
||||
|
|
@ -2713,7 +2376,6 @@
|
|||
},
|
||||
"max_tokens": {
|
||||
"type": "integer",
|
||||
"default": 0,
|
||||
"description": "The maximum number of tokens that can be generated in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length."
|
||||
},
|
||||
"repetition_penalty": {
|
||||
|
|
@ -3203,7 +2865,7 @@
|
|||
"const": "memory_retrieval",
|
||||
"default": "memory_retrieval"
|
||||
},
|
||||
"vector_db_ids": {
|
||||
"vector_store_ids": {
|
||||
"type": "string",
|
||||
"description": "The IDs of the vector databases to retrieve context from."
|
||||
},
|
||||
|
|
@ -3217,7 +2879,7 @@
|
|||
"turn_id",
|
||||
"step_id",
|
||||
"step_type",
|
||||
"vector_db_ids",
|
||||
"vector_store_ids",
|
||||
"inserted_context"
|
||||
],
|
||||
"title": "MemoryRetrievalStep",
|
||||
|
|
@ -4320,7 +3982,7 @@
|
|||
"enum": [
|
||||
"model",
|
||||
"shield",
|
||||
"vector_db",
|
||||
"vector_store",
|
||||
"dataset",
|
||||
"scoring_function",
|
||||
"benchmark",
|
||||
|
|
@ -5765,561 +5427,6 @@
|
|||
"logger_config"
|
||||
],
|
||||
"title": "SupervisedFineTuneRequest"
|
||||
},
|
||||
"QueryMetricsRequest": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"start_time": {
|
||||
"type": "integer",
|
||||
"description": "The start time of the metric to query."
|
||||
},
|
||||
"end_time": {
|
||||
"type": "integer",
|
||||
"description": "The end time of the metric to query."
|
||||
},
|
||||
"granularity": {
|
||||
"type": "string",
|
||||
"description": "The granularity of the metric to query."
|
||||
},
|
||||
"query_type": {
|
||||
"type": "string",
|
||||
"enum": [
|
||||
"range",
|
||||
"instant"
|
||||
],
|
||||
"description": "The type of query to perform."
|
||||
},
|
||||
"label_matchers": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string",
|
||||
"description": "The name of the label to match"
|
||||
},
|
||||
"value": {
|
||||
"type": "string",
|
||||
"description": "The value to match against"
|
||||
},
|
||||
"operator": {
|
||||
"type": "string",
|
||||
"enum": [
|
||||
"=",
|
||||
"!=",
|
||||
"=~",
|
||||
"!~"
|
||||
],
|
||||
"description": "The comparison operator to use for matching",
|
||||
"default": "="
|
||||
}
|
||||
},
|
||||
"additionalProperties": false,
|
||||
"required": [
|
||||
"name",
|
||||
"value",
|
||||
"operator"
|
||||
],
|
||||
"title": "MetricLabelMatcher",
|
||||
"description": "A matcher for filtering metrics by label values."
|
||||
},
|
||||
"description": "The label matchers to apply to the metric."
|
||||
}
|
||||
},
|
||||
"additionalProperties": false,
|
||||
"required": [
|
||||
"start_time",
|
||||
"query_type"
|
||||
],
|
||||
"title": "QueryMetricsRequest"
|
||||
},
|
"MetricDataPoint": {
  "type": "object",
  "properties": {
    "timestamp": { "type": "integer", "description": "Unix timestamp when the metric value was recorded" },
    "value": { "type": "number", "description": "The numeric value of the metric at this timestamp" },
    "unit": { "type": "string" }
  },
  "additionalProperties": false,
  "required": ["timestamp", "value", "unit"],
  "title": "MetricDataPoint",
  "description": "A single data point in a metric time series."
},
"MetricLabel": {
  "type": "object",
  "properties": {
    "name": { "type": "string", "description": "The name of the label" },
    "value": { "type": "string", "description": "The value of the label" }
  },
  "additionalProperties": false,
  "required": ["name", "value"],
  "title": "MetricLabel",
  "description": "A label associated with a metric."
},
"MetricSeries": {
  "type": "object",
  "properties": {
    "metric": { "type": "string", "description": "The name of the metric" },
    "labels": { "type": "array", "items": { "$ref": "#/components/schemas/MetricLabel" }, "description": "List of labels associated with this metric series" },
    "values": { "type": "array", "items": { "$ref": "#/components/schemas/MetricDataPoint" }, "description": "List of data points in chronological order" }
  },
  "additionalProperties": false,
  "required": ["metric", "labels", "values"],
  "title": "MetricSeries",
  "description": "A time series of metric data points."
},
"QueryMetricsResponse": {
  "type": "object",
  "properties": {
    "data": { "type": "array", "items": { "$ref": "#/components/schemas/MetricSeries" }, "description": "List of metric series matching the query criteria" }
  },
  "additionalProperties": false,
  "required": ["data"],
  "title": "QueryMetricsResponse",
  "description": "Response containing metric time series data."
},
"QueryCondition": {
  "type": "object",
  "properties": {
    "key": { "type": "string", "description": "The attribute key to filter on" },
    "op": { "$ref": "#/components/schemas/QueryConditionOp", "description": "The comparison operator to apply" },
    "value": {
      "oneOf": [
        { "type": "null" },
        { "type": "boolean" },
        { "type": "number" },
        { "type": "string" },
        { "type": "array" },
        { "type": "object" }
      ],
      "description": "The value to compare against"
    }
  },
  "additionalProperties": false,
  "required": ["key", "op", "value"],
  "title": "QueryCondition",
  "description": "A condition for filtering query results."
},
"QueryConditionOp": {
  "type": "string",
  "enum": ["eq", "ne", "gt", "lt"],
  "title": "QueryConditionOp",
  "description": "Comparison operators for query conditions."
},
"QuerySpansRequest": {
  "type": "object",
  "properties": {
    "attribute_filters": { "type": "array", "items": { "$ref": "#/components/schemas/QueryCondition" }, "description": "The attribute filters to apply to the spans." },
    "attributes_to_return": { "type": "array", "items": { "type": "string" }, "description": "The attributes to return in the spans." },
    "max_depth": { "type": "integer", "description": "The maximum depth of the tree." }
  },
  "additionalProperties": false,
  "required": ["attribute_filters", "attributes_to_return"],
  "title": "QuerySpansRequest"
},
"Span": {
  "type": "object",
  "properties": {
    "span_id": { "type": "string", "description": "Unique identifier for the span" },
    "trace_id": { "type": "string", "description": "Unique identifier for the trace this span belongs to" },
    "parent_span_id": { "type": "string", "description": "(Optional) Unique identifier for the parent span, if this is a child span" },
    "name": { "type": "string", "description": "Human-readable name describing the operation this span represents" },
    "start_time": { "type": "string", "format": "date-time", "description": "Timestamp when the operation began" },
    "end_time": { "type": "string", "format": "date-time", "description": "(Optional) Timestamp when the operation finished, if completed" },
    "attributes": {
      "type": "object",
      "additionalProperties": {
        "oneOf": [
          { "type": "null" },
          { "type": "boolean" },
          { "type": "number" },
          { "type": "string" },
          { "type": "array" },
          { "type": "object" }
        ]
      },
      "description": "(Optional) Key-value pairs containing additional metadata about the span"
    }
  },
  "additionalProperties": false,
  "required": ["span_id", "trace_id", "name", "start_time"],
  "title": "Span",
  "description": "A span representing a single operation within a trace."
},
"QuerySpansResponse": {
  "type": "object",
  "properties": {
    "data": { "type": "array", "items": { "$ref": "#/components/schemas/Span" }, "description": "List of spans matching the query criteria" }
  },
  "additionalProperties": false,
  "required": ["data"],
  "title": "QuerySpansResponse",
  "description": "Response containing a list of spans."
},
"SaveSpansToDatasetRequest": {
  "type": "object",
  "properties": {
    "attribute_filters": { "type": "array", "items": { "$ref": "#/components/schemas/QueryCondition" }, "description": "The attribute filters to apply to the spans." },
    "attributes_to_save": { "type": "array", "items": { "type": "string" }, "description": "The attributes to save to the dataset." },
    "dataset_id": { "type": "string", "description": "The ID of the dataset to save the spans to." },
    "max_depth": { "type": "integer", "description": "The maximum depth of the tree." }
  },
  "additionalProperties": false,
  "required": ["attribute_filters", "attributes_to_save", "dataset_id"],
  "title": "SaveSpansToDatasetRequest"
},
"GetSpanTreeRequest": {
  "type": "object",
  "properties": {
    "attributes_to_return": { "type": "array", "items": { "type": "string" }, "description": "The attributes to return in the tree." },
    "max_depth": { "type": "integer", "description": "The maximum depth of the tree." }
  },
  "additionalProperties": false,
  "title": "GetSpanTreeRequest"
},
"SpanStatus": {
  "type": "string",
  "enum": ["ok", "error"],
  "title": "SpanStatus",
  "description": "The status of a span indicating whether it completed successfully or with an error."
},
"SpanWithStatus": {
  "type": "object",
  "properties": {
    "span_id": { "type": "string", "description": "Unique identifier for the span" },
    "trace_id": { "type": "string", "description": "Unique identifier for the trace this span belongs to" },
    "parent_span_id": { "type": "string", "description": "(Optional) Unique identifier for the parent span, if this is a child span" },
    "name": { "type": "string", "description": "Human-readable name describing the operation this span represents" },
    "start_time": { "type": "string", "format": "date-time", "description": "Timestamp when the operation began" },
    "end_time": { "type": "string", "format": "date-time", "description": "(Optional) Timestamp when the operation finished, if completed" },
    "attributes": {
      "type": "object",
      "additionalProperties": {
        "oneOf": [
          { "type": "null" },
          { "type": "boolean" },
          { "type": "number" },
          { "type": "string" },
          { "type": "array" },
          { "type": "object" }
        ]
      },
      "description": "(Optional) Key-value pairs containing additional metadata about the span"
    },
    "status": { "$ref": "#/components/schemas/SpanStatus", "description": "(Optional) The current status of the span" }
  },
  "additionalProperties": false,
  "required": ["span_id", "trace_id", "name", "start_time"],
  "title": "SpanWithStatus",
  "description": "A span that includes status information."
},
"QuerySpanTreeResponse": {
  "type": "object",
  "properties": {
    "data": { "type": "object", "additionalProperties": { "$ref": "#/components/schemas/SpanWithStatus" }, "description": "Dictionary mapping span IDs to spans with status information" }
  },
  "additionalProperties": false,
  "required": ["data"],
  "title": "QuerySpanTreeResponse",
  "description": "Response containing a tree structure of spans."
},
"QueryTracesRequest": {
  "type": "object",
  "properties": {
    "attribute_filters": { "type": "array", "items": { "$ref": "#/components/schemas/QueryCondition" }, "description": "The attribute filters to apply to the traces." },
    "limit": { "type": "integer", "description": "The limit of traces to return." },
    "offset": { "type": "integer", "description": "The offset of the traces to return." },
    "order_by": { "type": "array", "items": { "type": "string" }, "description": "The order by of the traces to return." }
  },
  "additionalProperties": false,
  "title": "QueryTracesRequest"
},
"Trace": {
  "type": "object",
  "properties": {
    "trace_id": { "type": "string", "description": "Unique identifier for the trace" },
    "root_span_id": { "type": "string", "description": "Unique identifier for the root span that started this trace" },
    "start_time": { "type": "string", "format": "date-time", "description": "Timestamp when the trace began" },
    "end_time": { "type": "string", "format": "date-time", "description": "(Optional) Timestamp when the trace finished, if completed" }
  },
  "additionalProperties": false,
  "required": ["trace_id", "root_span_id", "start_time"],
  "title": "Trace",
  "description": "A trace representing the complete execution path of a request across multiple operations."
},
"QueryTracesResponse": {
  "type": "object",
  "properties": {
    "data": { "type": "array", "items": { "$ref": "#/components/schemas/Trace" }, "description": "List of traces matching the query criteria" }
  },
  "additionalProperties": false,
  "required": ["data"],
  "title": "QueryTracesResponse",
  "description": "Response containing a list of traces."
}
},
"responses": {

@ -6410,16 +5517,12 @@
    },
    {
      "name": "Eval",
      "description": "",
      "x-displayName": "Llama Stack Evaluation API for running evaluations on model and agent candidates."
      "description": "Llama Stack Evaluation API for running evaluations on model and agent candidates.",
      "x-displayName": "Evaluations"
    },
    {
      "name": "PostTraining (Coming Soon)",
      "description": ""
    },
    {
      "name": "Telemetry",
      "description": ""
    }
  ],
  "x-tagGroups": [

@ -6431,8 +5534,7 @@
        "DatasetIO",
        "Datasets",
        "Eval",
        "PostTraining (Coming Soon)",
        "Telemetry"
        "PostTraining (Coming Soon)"
      ]
    }
  ]
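Taken together, the telemetry query schemas above describe a small filter language: a request carries `QueryCondition` entries (a key, an `op` from `QueryConditionOp`, and a value), and responses wrap `Trace`, `Span`, or `MetricSeries` objects. A minimal sketch of a `QuerySpansRequest`-shaped payload as a Python literal, with field names taken from the schemas above and purely illustrative values:

```python
# Illustrative QuerySpansRequest-shaped payload built from the schemas above.
# The attribute key/value and the returned attribute names are made-up examples.
query_spans_request = {
    "attribute_filters": [
        {"key": "model_id", "op": "eq", "value": "llama-3-8b"},  # a QueryCondition
    ],
    "attributes_to_return": ["input_tokens", "output_tokens"],
    "max_depth": 2,
}

# A QuerySpansResponse would come back as {"data": [Span, ...]}, where each Span
# carries span_id, trace_id, name, start_time and optional attributes.
```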
676 docs/static/experimental-llama-stack-spec.yaml vendored

@ -1224,238 +1224,6 @@ paths:
              $ref: '#/components/schemas/SupervisedFineTuneRequest'
        required: true
      deprecated: false
  /v1alpha/telemetry/metrics/{metric_name}:
    post:
      responses:
        '200':
          description: A QueryMetricsResponse.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/QueryMetricsResponse'
        '400':
          $ref: '#/components/responses/BadRequest400'
        '429':
          $ref: '#/components/responses/TooManyRequests429'
        '500':
          $ref: '#/components/responses/InternalServerError500'
        default:
          $ref: '#/components/responses/DefaultError'
      tags:
      - Telemetry
      summary: Query metrics.
      description: Query metrics.
      parameters:
      - name: metric_name
        in: path
        description: The name of the metric to query.
        required: true
        schema:
          type: string
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/QueryMetricsRequest'
        required: true
      deprecated: false
  /v1alpha/telemetry/spans:
    post:
      responses:
        '200':
          description: A QuerySpansResponse.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/QuerySpansResponse'
        '400':
          $ref: '#/components/responses/BadRequest400'
        '429':
          $ref: '#/components/responses/TooManyRequests429'
        '500':
          $ref: '#/components/responses/InternalServerError500'
        default:
          $ref: '#/components/responses/DefaultError'
      tags:
      - Telemetry
      summary: Query spans.
      description: Query spans.
      parameters: []
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/QuerySpansRequest'
        required: true
      deprecated: false
  /v1alpha/telemetry/spans/export:
    post:
      responses:
        '200':
          description: OK
        '400':
          $ref: '#/components/responses/BadRequest400'
        '429':
          $ref: '#/components/responses/TooManyRequests429'
        '500':
          $ref: '#/components/responses/InternalServerError500'
        default:
          $ref: '#/components/responses/DefaultError'
      tags:
      - Telemetry
      summary: Save spans to a dataset.
      description: Save spans to a dataset.
      parameters: []
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/SaveSpansToDatasetRequest'
        required: true
      deprecated: false
  /v1alpha/telemetry/spans/{span_id}/tree:
    post:
      responses:
        '200':
          description: A QuerySpanTreeResponse.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/QuerySpanTreeResponse'
        '400':
          $ref: '#/components/responses/BadRequest400'
        '429':
          $ref: '#/components/responses/TooManyRequests429'
        '500':
          $ref: '#/components/responses/InternalServerError500'
        default:
          $ref: '#/components/responses/DefaultError'
      tags:
      - Telemetry
      summary: Get a span tree by its ID.
      description: Get a span tree by its ID.
      parameters:
      - name: span_id
        in: path
        description: The ID of the span to get the tree from.
        required: true
        schema:
          type: string
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/GetSpanTreeRequest'
        required: true
      deprecated: false
  /v1alpha/telemetry/traces:
    post:
      responses:
        '200':
          description: A QueryTracesResponse.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/QueryTracesResponse'
        '400':
          $ref: '#/components/responses/BadRequest400'
        '429':
          $ref: '#/components/responses/TooManyRequests429'
        '500':
          $ref: '#/components/responses/InternalServerError500'
        default:
          $ref: '#/components/responses/DefaultError'
      tags:
      - Telemetry
      summary: Query traces.
      description: Query traces.
      parameters: []
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/QueryTracesRequest'
        required: true
      deprecated: false
  /v1alpha/telemetry/traces/{trace_id}:
    get:
      responses:
        '200':
          description: A Trace.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Trace'
        '400':
          $ref: '#/components/responses/BadRequest400'
        '429':
          $ref: '#/components/responses/TooManyRequests429'
        '500':
          $ref: '#/components/responses/InternalServerError500'
        default:
          $ref: '#/components/responses/DefaultError'
      tags:
      - Telemetry
      summary: Get a trace by its ID.
      description: Get a trace by its ID.
      parameters:
      - name: trace_id
        in: path
        description: The ID of the trace to get.
        required: true
        schema:
          type: string
      deprecated: false
  /v1alpha/telemetry/traces/{trace_id}/spans/{span_id}:
    get:
      responses:
        '200':
          description: A Span.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Span'
        '400':
          $ref: '#/components/responses/BadRequest400'
        '429':
          $ref: '#/components/responses/TooManyRequests429'
        '500':
          $ref: '#/components/responses/InternalServerError500'
        default:
          $ref: '#/components/responses/DefaultError'
      tags:
      - Telemetry
      summary: Get a span by its ID.
      description: Get a span by its ID.
      parameters:
      - name: trace_id
        in: path
        description: The ID of the trace to get the span from.
        required: true
        schema:
          type: string
      - name: span_id
        in: path
        description: The ID of the span to get.
        required: true
        schema:
          type: string
      deprecated: false
jsonSchemaDialect: https://json-schema.org/draft/2020-12/schema
components:

@ -1552,7 +1320,7 @@ components:
      enum:
      - model
      - shield
      - vector_db
      - vector_store
      - dataset
      - scoring_function
      - benchmark

@ -1927,7 +1695,6 @@
          description: The sampling strategy.
        max_tokens:
          type: integer
          default: 0
          description: >-
            The maximum number of tokens that can be generated in the completion.
            The token count of your prompt plus max_tokens cannot exceed the model's

@ -2318,7 +2085,7 @@
          description: Type of the step in an agent turn.
          const: memory_retrieval
          default: memory_retrieval
        vector_db_ids:
        vector_store_ids:
          type: string
          description: >-
            The IDs of the vector databases to retrieve context from.

@ -2331,7 +2098,7 @@
      - turn_id
      - step_id
      - step_type
      - vector_db_ids
      - vector_store_ids
      - inserted_context
      title: MemoryRetrievalStep
      description: >-

@ -3159,7 +2926,7 @@
      enum:
      - model
      - shield
      - vector_db
      - vector_store
      - dataset
      - scoring_function
      - benchmark

@ -4249,434 +4016,6 @@
      - hyperparam_search_config
      - logger_config
      title: SupervisedFineTuneRequest
    QueryMetricsRequest:
      type: object
      properties:
        start_time:
          type: integer
          description: The start time of the metric to query.
        end_time:
          type: integer
          description: The end time of the metric to query.
        granularity:
          type: string
          description: The granularity of the metric to query.
        query_type:
          type: string
          enum:
          - range
          - instant
          description: The type of query to perform.
        label_matchers:
          type: array
          items:
            type: object
            properties:
              name:
                type: string
                description: The name of the label to match
              value:
                type: string
                description: The value to match against
              operator:
                type: string
                enum:
                - '='
                - '!='
                - =~
                - '!~'
                description: The comparison operator to use for matching
                default: '='
            additionalProperties: false
            required:
            - name
            - value
            - operator
            title: MetricLabelMatcher
            description: A matcher for filtering metrics by label values.
          description: The label matchers to apply to the metric.
      additionalProperties: false
      required:
      - start_time
      - query_type
      title: QueryMetricsRequest
    MetricDataPoint:
      type: object
      properties:
        timestamp:
          type: integer
          description: Unix timestamp when the metric value was recorded
        value:
          type: number
          description: The numeric value of the metric at this timestamp
        unit:
          type: string
      additionalProperties: false
      required:
      - timestamp
      - value
      - unit
      title: MetricDataPoint
      description: A single data point in a metric time series.
    MetricLabel:
      type: object
      properties:
        name:
          type: string
          description: The name of the label
        value:
          type: string
          description: The value of the label
      additionalProperties: false
      required:
      - name
      - value
      title: MetricLabel
      description: A label associated with a metric.
    MetricSeries:
      type: object
      properties:
        metric:
          type: string
          description: The name of the metric
        labels:
          type: array
          items:
            $ref: '#/components/schemas/MetricLabel'
          description: List of labels associated with this metric series
        values:
          type: array
          items:
            $ref: '#/components/schemas/MetricDataPoint'
          description: List of data points in chronological order
      additionalProperties: false
      required:
      - metric
      - labels
      - values
      title: MetricSeries
      description: A time series of metric data points.
    QueryMetricsResponse:
      type: object
      properties:
        data:
          type: array
          items:
            $ref: '#/components/schemas/MetricSeries'
          description: List of metric series matching the query criteria
      additionalProperties: false
      required:
      - data
      title: QueryMetricsResponse
      description: Response containing metric time series data.
    QueryCondition:
      type: object
      properties:
        key:
          type: string
          description: The attribute key to filter on
        op:
          $ref: '#/components/schemas/QueryConditionOp'
          description: The comparison operator to apply
        value:
          oneOf:
          - type: 'null'
          - type: boolean
          - type: number
          - type: string
          - type: array
          - type: object
          description: The value to compare against
      additionalProperties: false
      required:
      - key
      - op
      - value
      title: QueryCondition
      description: A condition for filtering query results.
    QueryConditionOp:
      type: string
      enum:
      - eq
      - ne
      - gt
      - lt
      title: QueryConditionOp
      description: Comparison operators for query conditions.
    QuerySpansRequest:
      type: object
      properties:
        attribute_filters:
          type: array
          items:
            $ref: '#/components/schemas/QueryCondition'
          description: The attribute filters to apply to the spans.
        attributes_to_return:
          type: array
          items:
            type: string
          description: The attributes to return in the spans.
        max_depth:
          type: integer
          description: The maximum depth of the tree.
      additionalProperties: false
      required:
      - attribute_filters
      - attributes_to_return
      title: QuerySpansRequest
    Span:
      type: object
      properties:
        span_id:
          type: string
          description: Unique identifier for the span
        trace_id:
          type: string
          description: Unique identifier for the trace this span belongs to
        parent_span_id:
          type: string
          description: (Optional) Unique identifier for the parent span, if this is a child span
        name:
          type: string
          description: Human-readable name describing the operation this span represents
        start_time:
          type: string
          format: date-time
          description: Timestamp when the operation began
        end_time:
          type: string
          format: date-time
          description: (Optional) Timestamp when the operation finished, if completed
        attributes:
          type: object
          additionalProperties:
            oneOf:
            - type: 'null'
            - type: boolean
            - type: number
            - type: string
            - type: array
            - type: object
          description: (Optional) Key-value pairs containing additional metadata about the span
      additionalProperties: false
      required:
      - span_id
      - trace_id
      - name
      - start_time
      title: Span
      description: A span representing a single operation within a trace.
    QuerySpansResponse:
      type: object
      properties:
        data:
          type: array
          items:
            $ref: '#/components/schemas/Span'
          description: List of spans matching the query criteria
      additionalProperties: false
      required:
      - data
      title: QuerySpansResponse
      description: Response containing a list of spans.
    SaveSpansToDatasetRequest:
      type: object
      properties:
        attribute_filters:
          type: array
          items:
            $ref: '#/components/schemas/QueryCondition'
          description: The attribute filters to apply to the spans.
        attributes_to_save:
          type: array
          items:
            type: string
          description: The attributes to save to the dataset.
        dataset_id:
          type: string
          description: The ID of the dataset to save the spans to.
        max_depth:
          type: integer
          description: The maximum depth of the tree.
      additionalProperties: false
      required:
      - attribute_filters
      - attributes_to_save
      - dataset_id
      title: SaveSpansToDatasetRequest
    GetSpanTreeRequest:
      type: object
      properties:
        attributes_to_return:
          type: array
          items:
            type: string
          description: The attributes to return in the tree.
        max_depth:
          type: integer
          description: The maximum depth of the tree.
      additionalProperties: false
      title: GetSpanTreeRequest
    SpanStatus:
      type: string
      enum:
      - ok
      - error
      title: SpanStatus
      description: The status of a span indicating whether it completed successfully or with an error.
    SpanWithStatus:
      type: object
      properties:
        span_id:
          type: string
          description: Unique identifier for the span
        trace_id:
          type: string
          description: Unique identifier for the trace this span belongs to
        parent_span_id:
          type: string
          description: (Optional) Unique identifier for the parent span, if this is a child span
        name:
          type: string
          description: Human-readable name describing the operation this span represents
        start_time:
          type: string
          format: date-time
          description: Timestamp when the operation began
        end_time:
          type: string
          format: date-time
          description: (Optional) Timestamp when the operation finished, if completed
        attributes:
          type: object
          additionalProperties:
            oneOf:
            - type: 'null'
            - type: boolean
            - type: number
            - type: string
            - type: array
            - type: object
          description: (Optional) Key-value pairs containing additional metadata about the span
        status:
          $ref: '#/components/schemas/SpanStatus'
          description: (Optional) The current status of the span
      additionalProperties: false
      required:
      - span_id
      - trace_id
      - name
      - start_time
      title: SpanWithStatus
      description: A span that includes status information.
    QuerySpanTreeResponse:
      type: object
      properties:
        data:
          type: object
          additionalProperties:
            $ref: '#/components/schemas/SpanWithStatus'
          description: Dictionary mapping span IDs to spans with status information
      additionalProperties: false
      required:
      - data
      title: QuerySpanTreeResponse
      description: Response containing a tree structure of spans.
    QueryTracesRequest:
      type: object
      properties:
        attribute_filters:
          type: array
          items:
            $ref: '#/components/schemas/QueryCondition'
          description: The attribute filters to apply to the traces.
        limit:
          type: integer
          description: The limit of traces to return.
        offset:
          type: integer
          description: The offset of the traces to return.
        order_by:
          type: array
          items:
            type: string
          description: The order by of the traces to return.
      additionalProperties: false
      title: QueryTracesRequest
    Trace:
      type: object
      properties:
        trace_id:
          type: string
          description: Unique identifier for the trace
        root_span_id:
          type: string
          description: Unique identifier for the root span that started this trace
        start_time:
          type: string
          format: date-time
          description: Timestamp when the trace began
        end_time:
          type: string
          format: date-time
          description: (Optional) Timestamp when the trace finished, if completed
      additionalProperties: false
      required:
      - trace_id
      - root_span_id
      - start_time
      title: Trace
      description: A trace representing the complete execution path of a request across multiple operations.
    QueryTracesResponse:
      type: object
      properties:
        data:
          type: array
          items:
            $ref: '#/components/schemas/Trace'
          description: List of traces matching the query criteria
      additionalProperties: false
      required:
      - data
      title: QueryTracesResponse
      description: Response containing a list of traces.
  responses:
    BadRequest400:
      description: The request was invalid or malformed

@ -4779,13 +4118,11 @@ tags:
- name: Datasets
  description: ''
- name: Eval
  description: ''
  x-displayName: >-
  description: >-
    Llama Stack Evaluation API for running evaluations on model and agent candidates.
  x-displayName: Evaluations
- name: PostTraining (Coming Soon)
  description: ''
- name: Telemetry
  description: ''
x-tagGroups:
- name: Operations
  tags:

@ -4795,4 +4132,3 @@ x-tagGroups:
  - Datasets
  - Eval
  - PostTraining (Coming Soon)
  - Telemetry
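For reference, the removed `/v1alpha/telemetry/traces` route accepted a `QueryTracesRequest` body and returned a `QueryTracesResponse`. A hedged sketch of how a client could have exercised it over plain HTTP, assuming a local server on the default port 8321 and the `requests` package; the filter values are made up:

```python
import requests

BASE_URL = "http://localhost:8321"  # assumed local Llama Stack server; adjust as needed

# POST the (now removed) /v1alpha/telemetry/traces route with a QueryTracesRequest
# body. Shown only to illustrate the schemas above; the attribute filter is invented.
payload = {
    "attribute_filters": [
        {"key": "session_id", "op": "eq", "value": "abc-123"},
    ],
    "limit": 5,
}
resp = requests.post(f"{BASE_URL}/v1alpha/telemetry/traces", json=payload)
resp.raise_for_status()

# QueryTracesResponse.data is a list of Trace objects.
for trace in resp.json()["data"]:
    print(trace["trace_id"], trace["start_time"])
```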
2404 docs/static/llama-stack-spec.html vendored
File diff suppressed because it is too large

2085 docs/static/llama-stack-spec.yaml vendored
File diff suppressed because it is too large

3300 docs/static/stainless-llama-stack-spec.html vendored
File diff suppressed because it is too large

2749 docs/static/stainless-llama-stack-spec.yaml vendored
File diff suppressed because it is too large

@ -161,6 +161,7 @@
{
  "cell_type": "code",
  "execution_count": null,
  "id": "4ad70258",
  "metadata": {},
  "outputs": [],
  "source": [

@ -180,8 +181,8 @@
  "# Create a vector database with optimized settings for general use\n",
  "client.vector_dbs.register(\n",
  "    vector_db_id=VECTOR_DB_ID,\n",
  "    embedding_model=\"all-MiniLM-L6-v2\",\n",
  "    embedding_dimension=384,  # This is the dimension for all-MiniLM-L6-v2\n",
  "    embedding_model=\"nomic-embed-text-v1.5\",\n",
  "    embedding_dimension=768,  # This is the dimension for nomic-embed-text-v1.5\n",
  "    provider_id=provider_id,\n",
  ")"
]
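The notebook cell above swaps the embedding model; the invariant to keep is that `embedding_dimension` matches the registered model (384 for all-MiniLM-L6-v2, 768 for nomic-embed-text-v1.5, per the cell itself). A small sketch of the same registration as plain Python, assuming `client` and `provider_id` are already set up as earlier in the notebook and using a hypothetical vector DB ID:

```python
# Each embedding model has a fixed output dimension; the vector DB must be
# registered with the matching value, otherwise inserts and queries will not line up.
EMBEDDING_DIMS = {
    "all-MiniLM-L6-v2": 384,
    "nomic-embed-text-v1.5": 768,
}

EMBEDDING_MODEL = "nomic-embed-text-v1.5"
VECTOR_DB_ID = "my_documents"  # hypothetical ID, for illustration only

client.vector_dbs.register(
    vector_db_id=VECTOR_DB_ID,
    embedding_model=EMBEDDING_MODEL,
    embedding_dimension=EMBEDDING_DIMS[EMBEDDING_MODEL],
    provider_id=provider_id,
)
```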
@ -78,17 +78,14 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next

## Build, Configure, and Run Llama Stack

1. **Build the Llama Stack**:
   Build the Llama Stack using the `starter` template:
1. **Install dependencies**:
   ```bash
   uv run --with llama-stack llama stack build --distro starter --image-type venv
   llama stack list-deps starter | xargs -L1 uv pip install
   ```
   **Expected Output:**

2. **Start the distribution**:
   ```bash
   ...
   Build Successful!
   You can find the newly-built template here: ~/.llama/distributions/starter/starter-run.yaml
   You can run the new Llama Stack Distro via: uv run --with llama-stack llama stack run starter
   llama stack run starter
   ```

3. **Set the ENV variables by exporting them to the terminal**:

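Once `llama stack run starter` is serving, a quick smoke test is to list the available models from a client. A sketch assuming the server is listening on the default local port 8321 and that the `llama-stack-client` package is installed:

```python
from llama_stack_client import LlamaStackClient

# Assumes the server started by `llama stack run starter` is on the default
# local port; adjust base_url if you configured a different one.
client = LlamaStackClient(base_url="http://localhost:8321")

for model in client.models.list():
    print(model.identifier)
```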