Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-04 10:10:36 +00:00)

Merge branch 'llamastack:main' into langchain_llamastack

Commit b785ab1579: 2498 changed files with 1150580 additions and 99046 deletions
@@ -1,20 +0,0 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = source
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@@ -1,14 +1,53 @@
# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our [Github page](https://llamastack.github.io/latest/getting_started/index.html).
Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our [Github page](https://llamastack.github.io/getting_started/quickstart).

## Render locally

From the llama-stack root directory, run the following command to render the docs locally:
From the llama-stack `docs/` directory, run the following commands to render the docs locally:
```bash
uv run --group docs sphinx-autobuild docs/source docs/build/html --write-all
npm install
npm run gen-api-docs all
npm run build
npm run serve
```
You can open up the docs in your browser at http://localhost:8000
You can open up the docs in your browser at http://localhost:3000

## File Import System

This documentation uses `remark-code-import` to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded at build time.

### Importing Code Files

To import Python code (or any code files) with syntax highlighting, use this syntax in `.mdx` files:

```markdown
```python file=./demo_script.py title="demo_script.py"
```
```

This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.

**Note:** Paths are relative to the current `.mdx` file location, not the repository root.

### Importing Markdown Files as Content

For importing and rendering markdown files (like CONTRIBUTING.md), use the raw-loader approach:

```jsx
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

<ReactMarkdown>{Contributing}</ReactMarkdown>
```

**Requirements:**
- Install dependencies: `npm install --save-dev raw-loader react-markdown`

**Path Resolution:**
- For `remark-code-import`: Paths are relative to the current `.mdx` file location
- For `raw-loader`: Paths are relative to the current `.mdx` file location
- Use `../` to navigate up directories as needed (see the example below)
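
For instance, to pull in a file that lives above the docs tree, combine `../` segments with the same directive. The path below is illustrative only; point it at a file that actually exists in the repository:

```markdown
```python file=../../scripts/demo_script.py title="demo_script.py"
```
```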

## Content
136  docs/_static/css/my_theme.css  (vendored)
@@ -1,136 +0,0 @@
@import url("theme.css");

/* Horizontal Navigation Bar */
.horizontal-nav {
    background-color: #ffffff;
    border-bottom: 1px solid #e5e5e5;
    padding: 0;
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    z-index: 1050;
    height: 50px;
    box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}

[data-theme="dark"] .horizontal-nav {
    background-color: #1a1a1a;
    border-bottom: 1px solid #333;
}

.horizontal-nav .nav-container {
    max-width: 1200px;
    margin: 0 auto;
    display: flex;
    align-items: center;
    justify-content: space-between;
    padding: 0 20px;
    height: 100%;
}

.horizontal-nav .nav-brand {
    font-size: 18px;
    font-weight: 600;
    color: #333;
    text-decoration: none;
}

[data-theme="dark"] .horizontal-nav .nav-brand {
    color: #fff;
}

.horizontal-nav .nav-links {
    display: flex;
    align-items: center;
    gap: 30px;
    list-style: none;
    margin: 0;
    padding: 0;
}

.horizontal-nav .nav-links a {
    color: #666;
    text-decoration: none;
    font-size: 14px;
    font-weight: 500;
    padding: 8px 12px;
    border-radius: 6px;
    transition: all 0.2s ease;
}

.horizontal-nav .nav-links a:hover,
.horizontal-nav .nav-links a.active {
    color: #333;
    background-color: #f5f5f5;
}

.horizontal-nav .nav-links a.active {
    font-weight: 600;
}

[data-theme="dark"] .horizontal-nav .nav-links a {
    color: #ccc;
}

[data-theme="dark"] .horizontal-nav .nav-links a:hover,
[data-theme="dark"] .horizontal-nav .nav-links a.active {
    color: #fff;
    background-color: #333;
}

.horizontal-nav .nav-links .github-link {
    display: flex;
    align-items: center;
    gap: 6px;
}

.horizontal-nav .nav-links .github-icon {
    width: 16px;
    height: 16px;
    fill: currentColor;
}

/* Adjust main content to account for fixed nav */
.wy-nav-side {
    top: 50px;
    height: calc(100vh - 50px);
}

.wy-nav-content-wrap {
    margin-top: 50px;
}

.wy-nav-content {
    max-width: 90%;
}

.wy-nav-side {
    /* background: linear-gradient(45deg, #2980B9, #16A085); */
    background: linear-gradient(90deg, #332735, #1b263c);
}

.wy-side-nav-search {
    background-color: transparent !important;
}

.hide-title h1 {
    display: none;
}

h2, h3, h4 {
    font-weight: normal;
}
html[data-theme="dark"] .rst-content div[class^="highlight"] {
    background-color: #0b0b0b;
}
pre {
    white-space: pre-wrap !important;
    word-break: break-all;
}

[data-theme="dark"] .mermaid {
    background-color: #f4f4f6 !important;
    border-radius: 6px;
    padding: 0.5em;
}
32  docs/_static/js/detect_theme.js  (vendored)
@@ -1,32 +0,0 @@
document.addEventListener("DOMContentLoaded", function () {
  const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
  const htmlElement = document.documentElement;

  // Check if theme is saved in localStorage
  const savedTheme = localStorage.getItem("sphinx-rtd-theme");

  if (savedTheme) {
    // Use the saved theme preference
    htmlElement.setAttribute("data-theme", savedTheme);
    document.body.classList.toggle("dark", savedTheme === "dark");
  } else {
    // Fall back to system preference
    const theme = prefersDark ? "dark" : "light";
    htmlElement.setAttribute("data-theme", theme);
    document.body.classList.toggle("dark", theme === "dark");
    // Save initial preference
    localStorage.setItem("sphinx-rtd-theme", theme);
  }

  // Listen for theme changes from the existing toggle
  const observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (mutation) {
      if (mutation.attributeName === "data-theme") {
        const currentTheme = htmlElement.getAttribute("data-theme");
        localStorage.setItem("sphinx-rtd-theme", currentTheme);
      }
    });
  });

  observer.observe(htmlElement, { attributes: true });
});
44  docs/_static/js/horizontal_nav.js  (vendored)
@@ -1,44 +0,0 @@
// Horizontal Navigation Bar for Llama Stack Documentation
document.addEventListener('DOMContentLoaded', function() {
    // Create the horizontal navigation HTML
    const navHTML = `
        <nav class="horizontal-nav">
            <div class="nav-container">
                <a href="/" class="nav-brand">Llama Stack</a>
                <ul class="nav-links">
                    <li><a href="/">Docs</a></li>
                    <li><a href="/references/api_reference/">API Reference</a></li>
                    <li><a href="https://github.com/meta-llama/llama-stack" target="_blank" class="github-link">
                        <svg class="github-icon" viewBox="0 0 16 16" aria-hidden="true">
                            <path d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"/>
                        </svg>
                        GitHub
                    </a></li>
                </ul>
            </div>
        </nav>
    `;

    // Insert the navigation at the beginning of the body
    document.body.insertAdjacentHTML('afterbegin', navHTML);

    // Update navigation links based on current page
    updateActiveNav();
});

function updateActiveNav() {
    const currentPath = window.location.pathname;
    const navLinks = document.querySelectorAll('.horizontal-nav .nav-links a');

    navLinks.forEach(link => {
        // Remove any existing active classes
        link.classList.remove('active');

        // Add active class based on current path
        if (currentPath === '/' && link.getAttribute('href') === '/') {
            link.classList.add('active');
        } else if (currentPath.includes('/references/api_reference/') && link.getAttribute('href').includes('api_reference')) {
            link.classList.add('active');
        }
    });
}
14  docs/_static/js/keyboard_shortcuts.js  (vendored)
@@ -1,14 +0,0 @@
document.addEventListener('keydown', function(event) {
  // command+K or ctrl+K
  if ((event.metaKey || event.ctrlKey) && event.key === 'k') {
    event.preventDefault();
    document.querySelector('.search-input, .search-field, input[name="q"]').focus();
  }

  // forward slash
  if (event.key === '/' &&
      !event.target.matches('input, textarea, select')) {
    event.preventDefault();
    document.querySelector('.search-input, .search-field, input[name="q"]').focus();
  }
});
BIN  docs/_static/llama-stack-logo.png  (vendored)
Binary file not shown. (Before: 70 KiB)

BIN  docs/_static/llama-stack.png  (vendored)
Binary file not shown. (Before: 196 KiB)
@@ -1,24 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import os
import time


def pytest_collection_modifyitems(items):
    for item in items:
        item.name = item.name.replace(' ', '_')


def pytest_runtest_teardown(item):
    interval_seconds = os.getenv("LLAMA_STACK_TEST_INTERVAL_SECONDS")
    if interval_seconds:
        time.sleep(float(interval_seconds))


def pytest_configure(config):
    config.option.tbstyle = "short"
    config.option.disable_warnings = True
@@ -1,7 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

sphinx-autobuild --write-all source build/html --watch source/
163  docs/docs/advanced_apis/evaluation.mdx  (new file)
@@ -0,0 +1,163 @@
# Evaluation

## Evaluation Concepts

The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.

We introduce a set of APIs in Llama Stack to support running evaluations of LLM applications:
- `/datasetio` + `/datasets` API
- `/scoring` + `/scoring_functions` API
- `/eval` + `/benchmarks` API

This guide goes over the set of APIs and the developer experience flow of using Llama Stack to run evaluations for different use cases. Check out our Colab notebook with working evaluation examples [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing).

The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our [Core Concepts](../concepts/index.mdx) guide for a better high-level understanding.

- **DatasetIO**: defines the interface with datasets and data loaders.
  - Associated with the `Dataset` resource.
- **Scoring**: evaluates the outputs of the system.
  - Associated with the `ScoringFunction` resource. We provide a suite of out-of-the-box scoring functions and also the ability for you to add custom evaluators. These scoring functions are the core part of defining an evaluation task to output evaluation metrics.
- **Eval**: generates outputs (via Inference or Agents) and performs scoring.
  - Associated with the `Benchmark` resource.
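
As a quick orientation, you can inspect the resources these APIs manage from a client. The sketch below assumes a Llama Stack server running on `localhost:8321`; the exact list methods may vary slightly between client versions:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Each evaluation-related API is backed by a registerable resource.
print(client.datasets.list())           # Dataset resources used by /datasetio
print(client.scoring_functions.list())  # ScoringFunction resources used by /scoring
print(client.benchmarks.list())         # Benchmark resources used by /eval
```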

## Evaluation Providers

Llama Stack provides multiple evaluation providers:

- **Meta Reference** (`inline::meta-reference`) - Meta's reference implementation with multi-language support
- **NVIDIA** (`remote::nvidia`) - NVIDIA's evaluation platform integration

### Meta Reference

Meta's reference implementation of evaluation tasks with support for multiple languages and evaluation metrics.

#### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `kvstore` | `RedisKVStoreConfig \| SqliteKVStoreConfig \| PostgresKVStoreConfig \| MongoDBKVStoreConfig` | No | sqlite | Key-value store configuration |

#### Sample Configuration

```yaml
kvstore:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db
```

#### Features

- Multi-language evaluation support
- Comprehensive evaluation metrics
- Integration with various key-value stores (SQLite, Redis, PostgreSQL, MongoDB)
- Built-in support for popular benchmarks

### NVIDIA

NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform.

#### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `evaluator_url` | `str` | No | http://0.0.0.0:7331 | The URL for accessing the evaluator service |

#### Sample Configuration

```yaml
evaluator_url: ${env.NVIDIA_EVALUATOR_URL:=http://localhost:7331}
```

#### Features

- Integration with NVIDIA's evaluation platform
- Remote evaluation capabilities
- Scalable evaluation processing

## Open-benchmark Eval

### List of open-benchmarks Llama Stack supports

Llama Stack pre-registers several popular open-benchmarks so you can easily evaluate model performance via the CLI.

The list of open-benchmarks we currently support:
- [MMLU-COT](https://arxiv.org/abs/2009.03300) (Measuring Massive Multitask Language Understanding): Benchmark designed to comprehensively evaluate the breadth and depth of a model's academic and professional understanding.
- [GPQA-COT](https://arxiv.org/abs/2311.12022) (A Graduate-Level Google-Proof Q&A Benchmark): A challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.
- [SimpleQA](https://openai.com/index/introducing-simpleqa/): Benchmark designed to assess a model's ability to answer short, fact-seeking questions.
- [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.

You can follow this [contributing guide](../references/evals_reference/index.mdx#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack.

### Run evaluation on open-benchmarks via CLI

We have built-in functionality to run the supported open-benchmarks using the llama-stack-client CLI.

#### Spin up Llama Stack server

Spin up the Llama Stack server with the 'open-benchmark' template:
```
llama stack run llama_stack/distributions/open-benchmark/run.yaml
```

#### Run eval CLI

There are 3 necessary inputs to run a benchmark eval:
- `list of benchmark_ids`: The list of benchmark ids to run evaluation on
- `model-id`: The model id to evaluate on
- `output_dir`: Path to store the evaluation results

```
llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
  --model_id <model id to evaluate on> \
  --output_dir <directory to store the evaluation results>
```

You can run
```
llama-stack-client eval run-benchmark help
```
to see the description of all the flags that `eval run-benchmark` supports.

In the output log, you can find the file path that contains your evaluation results. Open that file and you can see your aggregate evaluation results there.

## Usage Example

Here's a basic example of using the evaluation API:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Register a dataset for evaluation
client.datasets.register(
    purpose="evaluation",
    source={
        "type": "uri",
        "uri": "huggingface://datasets/llamastack/evaluation_dataset"
    },
    dataset_id="my_eval_dataset"
)

# Run evaluation
eval_result = client.eval.run_evaluation(
    dataset_id="my_eval_dataset",
    scoring_functions=["accuracy", "bleu"],
    model_id="my_model"
)

print(f"Evaluation completed: {eval_result}")
```

## Best Practices

- **Choose appropriate providers**: Use Meta Reference for comprehensive evaluation, NVIDIA for platform-specific needs
- **Configure storage properly**: Ensure your key-value store configuration matches your performance requirements
- **Monitor evaluation progress**: Large evaluations can take time - implement proper monitoring
- **Use appropriate scoring functions**: Select scoring metrics that align with your evaluation goals

## What's Next?

- Check out our Colab notebook with working examples of running benchmark evaluations [here](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb#scrollTo=mxLCsP4MvFqP).
- Check out our [Building Applications - Evaluation](../building_applications/evals.mdx) guide for more details on how to use the Evaluation APIs to evaluate your applications.
- Check out our [Evaluation Reference](../references/evals_reference/index.mdx) for more details on the APIs.
- Explore the [Scoring](./scoring.mdx) documentation for available scoring functions.
305  docs/docs/advanced_apis/post_training.mdx  (new file)
@@ -0,0 +1,305 @@
# Post-Training

Post-training in Llama Stack allows you to fine-tune models using various providers and frameworks. This section covers all available post-training providers and how to use them effectively.

## Overview

Llama Stack provides multiple post-training providers:

- **HuggingFace SFTTrainer** (`inline::huggingface`) - Fine-tuning using the HuggingFace ecosystem
- **TorchTune** (`inline::torchtune`) - Fine-tuning using Meta's TorchTune framework
- **NVIDIA** (`remote::nvidia`) - Fine-tuning using NVIDIA's platform

## HuggingFace SFTTrainer

[HuggingFace SFTTrainer](https://huggingface.co/docs/trl/en/sft_trainer) is an inline post-training provider for Llama Stack. It allows you to run supervised fine-tuning on a variety of models using many datasets.

### Features

- Simple access through the post_training API
- Fully integrated with Llama Stack
- GPU support, CPU support, and MPS support (macOS Metal Performance Shaders)

### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `device` | `str` | No | cuda | |
| `distributed_backend` | `Literal['fsdp', 'deepspeed']` | No | | |
| `checkpoint_format` | `Literal['full_state', 'huggingface']` | No | huggingface | |
| `chat_template` | `str` | No | | |
| `model_specific_config` | `dict` | No | `{'trust_remote_code': True, 'attn_implementation': 'sdpa'}` | |
| `max_seq_length` | `int` | No | 2048 | |
| `gradient_checkpointing` | `bool` | No | False | |
| `save_total_limit` | `int` | No | 3 | |
| `logging_steps` | `int` | No | 10 | |
| `warmup_ratio` | `float` | No | 0.1 | |
| `weight_decay` | `float` | No | 0.01 | |
| `dataloader_num_workers` | `int` | No | 4 | |
| `dataloader_pin_memory` | `bool` | No | True | |

### Sample Configuration

```yaml
checkpoint_format: huggingface
distributed_backend: null
device: cpu
```

### Setup

You can access the HuggingFace trainer via the `starter` distribution:

```bash
llama stack list-deps starter | xargs -L1 uv pip install
llama stack run starter
```

### Usage Example

```python
import time
import uuid

from llama_stack_client.types import (
    post_training_supervised_fine_tune_params,
    algorithm_config_param,
)


def create_http_client():
    from llama_stack_client import LlamaStackClient

    return LlamaStackClient(base_url="http://localhost:8321")


client = create_http_client()

# Example Dataset
client.datasets.register(
    purpose="post-training/messages",
    source={
        "type": "uri",
        "uri": "huggingface://datasets/llamastack/simpleqa?split=train",
    },
    dataset_id="simpleqa",
)

training_config = post_training_supervised_fine_tune_params.TrainingConfig(
    data_config=post_training_supervised_fine_tune_params.TrainingConfigDataConfig(
        batch_size=32,
        data_format="instruct",
        dataset_id="simpleqa",
        shuffle=True,
    ),
    gradient_accumulation_steps=1,
    max_steps_per_epoch=0,
    max_validation_steps=1,
    n_epochs=4,
)

algorithm_config = algorithm_config_param.LoraFinetuningConfig(
    alpha=1,
    apply_lora_to_mlp=True,
    apply_lora_to_output=False,
    lora_attn_modules=["q_proj"],
    rank=1,
    type="LoRA",
)

job_uuid = f"test-job{uuid.uuid4()}"

# Example Model
training_model = "ibm-granite/granite-3.3-8b-instruct"

start_time = time.time()
response = client.post_training.supervised_fine_tune(
    job_uuid=job_uuid,
    logger_config={},
    model=training_model,
    hyperparam_search_config={},
    training_config=training_config,
    algorithm_config=algorithm_config,
    checkpoint_dir="output",
)
print("Job: ", job_uuid)

# Wait for the job to complete!
while True:
    status = client.post_training.job.status(job_uuid=job_uuid)
    if not status:
        print("Job not found")
        break

    print(status)
    if status.status == "completed":
        break

    print("Waiting for job to complete...")
    time.sleep(5)

end_time = time.time()
print("Job completed in", end_time - start_time, "seconds!")

print("Artifacts:")
print(client.post_training.job.artifacts(job_uuid=job_uuid))
```

## TorchTune

[TorchTune](https://github.com/pytorch/torchtune) is an inline post-training provider for Llama Stack. It provides a simple and efficient way to fine-tune language models using PyTorch.

### Features

- Simple access through the post_training API
- Fully integrated with Llama Stack
- GPU support and single device capabilities
- Support for LoRA

### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `torch_seed` | `int \| None` | No | | |
| `checkpoint_format` | `Literal['meta', 'huggingface']` | No | meta | |

### Sample Configuration

```yaml
checkpoint_format: meta
```

### Setup

You can access the TorchTune trainer by writing your own YAML pointing to the provider:

```yaml
post_training:
  - provider_id: torchtune
    provider_type: inline::torchtune
    config: {}
```

You can then build and run your own stack with this provider.

### Usage Example

```python
import time
import uuid

from llama_stack_client.types import (
    post_training_supervised_fine_tune_params,
    algorithm_config_param,
)


def create_http_client():
    from llama_stack_client import LlamaStackClient

    return LlamaStackClient(base_url="http://localhost:8321")


client = create_http_client()

# Example Dataset
client.datasets.register(
    purpose="post-training/messages",
    source={
        "type": "uri",
        "uri": "huggingface://datasets/llamastack/simpleqa?split=train",
    },
    dataset_id="simpleqa",
)

training_config = post_training_supervised_fine_tune_params.TrainingConfig(
    data_config=post_training_supervised_fine_tune_params.TrainingConfigDataConfig(
        batch_size=32,
        data_format="instruct",
        dataset_id="simpleqa",
        shuffle=True,
    ),
    gradient_accumulation_steps=1,
    max_steps_per_epoch=0,
    max_validation_steps=1,
    n_epochs=4,
)

algorithm_config = algorithm_config_param.LoraFinetuningConfig(
    alpha=1,
    apply_lora_to_mlp=True,
    apply_lora_to_output=False,
    lora_attn_modules=["q_proj"],
    rank=1,
    type="LoRA",
)

job_uuid = f"test-job{uuid.uuid4()}"

# Example Model
training_model = "meta-llama/Llama-2-7b-hf"

start_time = time.time()
response = client.post_training.supervised_fine_tune(
    job_uuid=job_uuid,
    logger_config={},
    model=training_model,
    hyperparam_search_config={},
    training_config=training_config,
    algorithm_config=algorithm_config,
    checkpoint_dir="output",
)
print("Job: ", job_uuid)

# Wait for the job to complete!
while True:
    status = client.post_training.job.status(job_uuid=job_uuid)
    if not status:
        print("Job not found")
        break

    print(status)
    if status.status == "completed":
        break

    print("Waiting for job to complete...")
    time.sleep(5)

end_time = time.time()
print("Job completed in", end_time - start_time, "seconds!")

print("Artifacts:")
print(client.post_training.job.artifacts(job_uuid=job_uuid))
```

## NVIDIA

NVIDIA's post-training provider for fine-tuning models on NVIDIA's platform.

### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `api_key` | `str \| None` | No | | The NVIDIA API key. |
| `dataset_namespace` | `str \| None` | No | default | The NVIDIA dataset namespace. |
| `project_id` | `str \| None` | No | test-example-model@v1 | The NVIDIA project ID. |
| `customizer_url` | `str \| None` | No | | Base URL for the NeMo Customizer API |
| `timeout` | `int` | No | 300 | Timeout for the NVIDIA Post Training API |
| `max_retries` | `int` | No | 3 | Maximum number of retries for the NVIDIA Post Training API |
| `output_model_dir` | `str` | No | test-example-model@v1 | Directory to save the output model |

### Sample Configuration

```yaml
api_key: ${env.NVIDIA_API_KEY:=}
dataset_namespace: ${env.NVIDIA_DATASET_NAMESPACE:=default}
project_id: ${env.NVIDIA_PROJECT_ID:=test-project}
customizer_url: ${env.NVIDIA_CUSTOMIZER_URL:=http://nemo.test}
```

## Best Practices

- **Choose the right provider**: Use HuggingFace for broader compatibility, TorchTune for Meta models, or NVIDIA for their ecosystem
- **Configure hardware appropriately**: Ensure your configuration matches your available hardware (CPU, GPU, MPS)
- **Monitor jobs**: Always monitor job status and handle completion appropriately
- **Use appropriate datasets**: Ensure your dataset format matches the expected input format for your chosen provider

## Next Steps

- Check out the [Building Applications - Fine-tuning](../building_applications/index.mdx) guide for application-level examples
- See the [Providers](../providers/post_training/index.mdx) section for detailed provider documentation
- Review the [API Reference](../advanced_apis/post_training.mdx) for complete API documentation
193  docs/docs/advanced_apis/scoring.mdx  (new file)
@@ -0,0 +1,193 @@
# Scoring

The Scoring API in Llama Stack allows you to evaluate outputs of your GenAI system using various scoring functions and metrics. This section covers all available scoring providers and their configuration.

## Overview

Llama Stack provides multiple scoring providers:

- **Basic** (`inline::basic`) - Simple evaluation metrics and scoring functions
- **Braintrust** (`inline::braintrust`) - Advanced evaluation using the Braintrust platform
- **LLM-as-Judge** (`inline::llm-as-judge`) - Uses language models to evaluate responses

The Scoring API is associated with `ScoringFunction` resources and provides a suite of out-of-the-box scoring functions. You can also add custom evaluators to meet specific evaluation needs.

## Basic Scoring

Basic scoring provider for simple evaluation metrics and scoring functions. This provider offers fundamental scoring capabilities without external dependencies.

### Configuration

No configuration required - this provider works out of the box.

```yaml
{}
```

### Features

- Simple evaluation metrics (accuracy, precision, recall, F1-score)
- String matching and similarity metrics
- Basic statistical scoring functions
- No external dependencies required
- Fast execution for standard metrics

### Use Cases

- Quick evaluation of basic accuracy metrics
- String similarity comparisons
- Statistical analysis of model outputs
- Development and testing scenarios

## Braintrust

Braintrust scoring provider for evaluation and scoring using the [Braintrust platform](https://braintrustdata.com/). Braintrust provides advanced evaluation capabilities and experiment tracking.

### Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `openai_api_key` | `str \| None` | No | | The OpenAI API Key for LLM-powered evaluations |

### Sample Configuration

```yaml
openai_api_key: ${env.OPENAI_API_KEY:=}
```

### Features

- Advanced evaluation metrics
- Experiment tracking and comparison
- LLM-powered evaluation functions
- Integration with Braintrust's evaluation suite
- Detailed scoring analytics and insights

### Use Cases

- Production evaluation pipelines
- A/B testing of model versions
- Advanced scoring with custom metrics
- Detailed evaluation reporting and analysis

## LLM-as-Judge

LLM-as-judge scoring provider that uses language models to evaluate and score responses. This approach leverages the reasoning capabilities of large language models to assess quality, relevance, and other subjective metrics.

### Configuration

No configuration required - this provider works out of the box.

```yaml
{}
```

### Features

- Subjective quality evaluation using LLMs
- Flexible evaluation criteria definition
- Natural language evaluation explanations
- Support for complex evaluation scenarios
- Contextual understanding of responses

### Use Cases

- Evaluating response quality and relevance
- Assessing creativity and coherence
- Subjective metric evaluation
- Human-like judgment for complex tasks

## Usage Examples

### Basic Scoring Example

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Register a basic accuracy scoring function
client.scoring_functions.register(
    scoring_function_id="basic_accuracy",
    provider_id="basic",
    provider_scoring_function_id="accuracy"
)

# Use the scoring function
result = client.scoring.score(
    input_rows=[
        {"expected": "Paris", "actual": "Paris"},
        {"expected": "London", "actual": "Paris"}
    ],
    scoring_function_id="basic_accuracy"
)
print(f"Accuracy: {result.results[0].score}")
```

### LLM-as-Judge Example

```python
# Register an LLM-as-judge scoring function
client.scoring_functions.register(
    scoring_function_id="quality_judge",
    provider_id="llm_judge",
    provider_scoring_function_id="response_quality",
    params={
        "criteria": "Evaluate response quality, relevance, and helpfulness",
        "scale": "1-10"
    }
)

# Score responses using LLM judgment
result = client.scoring.score(
    input_rows=[{
        "query": "What is machine learning?",
        "response": "Machine learning is a subset of AI that enables computers to learn patterns from data..."
    }],
    scoring_function_id="quality_judge"
)
```

### Braintrust Integration Example

```python
# Register a Braintrust scoring function
client.scoring_functions.register(
    scoring_function_id="braintrust_eval",
    provider_id="braintrust",
    provider_scoring_function_id="semantic_similarity"
)

# Run evaluation with Braintrust
result = client.scoring.score(
    input_rows=[{
        "reference": "The capital of France is Paris",
        "candidate": "Paris is the capital city of France"
    }],
    scoring_function_id="braintrust_eval"
)
```

## Best Practices

- **Choose appropriate providers**: Use Basic for simple metrics, Braintrust for advanced analytics, LLM-as-Judge for subjective evaluation
- **Define clear criteria**: When using LLM-as-Judge, provide specific evaluation criteria and scales
- **Validate scoring functions**: Test your scoring functions with known examples before production use
- **Monitor performance**: Track scoring performance and adjust thresholds based on results
- **Combine multiple metrics**: Use different scoring providers together for comprehensive evaluation

## Integration with Evaluation

The Scoring API works closely with the [Evaluation](./evaluation.mdx) API to provide comprehensive evaluation workflows (a minimal sketch follows the list):

1. **Datasets** are loaded via the DatasetIO API
2. **Evaluation** generates model outputs using the Eval API
3. **Scoring** evaluates the quality of outputs using various scoring functions
4. **Results** are aggregated and reported for analysis
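
The sketch below strings these stages together end to end, reusing the client calls shown elsewhere in these guides. Treat it as an illustration rather than a drop-in script: the dataset URI, ids, and the `generate_outputs` helper are placeholders, and exact method signatures may differ across client versions.

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# 1. Load a dataset through the DatasetIO API (placeholder URI).
client.datasets.register(
    purpose="evaluation",
    source={"type": "uri", "uri": "huggingface://datasets/llamastack/evaluation_dataset"},
    dataset_id="workflow_demo_dataset",
)

# 2. Generate outputs with your model or agent, collecting rows of
#    {"input_query": ..., "generated_answer": ..., "expected_answer": ...}.
eval_rows = generate_outputs("workflow_demo_dataset")  # hypothetical helper you implement

# 3. Score the generated outputs with one or more scoring functions.
scores = client.scoring.score(
    input_rows=eval_rows,
    scoring_functions={"basic::subset_of": None},
)

# 4. Aggregate and report.
print(scores.results)
```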

## Next Steps

- Check out the [Evaluation](./evaluation.mdx) guide for running complete evaluations
- See the [Building Applications - Evaluation](../building_applications/evals.mdx) guide for application examples
- Review the [Evaluation Reference](../references/evals_reference/) for comprehensive scoring function usage
- Explore the [Evaluation Concepts](../concepts/evaluation_concepts) for detailed conceptual information
49  docs/docs/api-overview.md  (new file)
@@ -0,0 +1,49 @@
# API Reference Overview

The Llama Stack provides a comprehensive set of APIs organized by stability level to help you choose the right endpoints for your use case.

## 🟢 Stable APIs

**Production-ready APIs with backward compatibility guarantees.**

These APIs are fully tested, documented, and stable. They follow semantic versioning principles and maintain backward compatibility within major versions. Recommended for production applications.

[**Browse Stable APIs →**](./api/llama-stack-specification)

**Key Features:**
- ✅ Backward compatibility guaranteed
- ✅ Comprehensive testing and validation
- ✅ Production-ready reliability
- ✅ Long-term support

---

## 🟡 Experimental APIs

**Preview APIs that may change before becoming stable.**

These APIs include v1alpha and v1beta endpoints that are feature-complete but may undergo changes based on feedback. Great for exploring new capabilities and providing feedback.

[**Browse Experimental APIs →**](./api-experimental/llama-stack-specification-experimental-apis)

**Key Features:**
- 🧪 Latest features and capabilities
- 🧪 May change based on user feedback
- 🧪 Active development and iteration
- 🧪 Opportunity to influence final design

---

## 🔴 Deprecated APIs

**Legacy APIs for migration reference.**

These APIs are deprecated and will be removed in future versions. They are provided for migration purposes and to help transition to newer, stable alternatives.

[**Browse Deprecated APIs →**](./api-deprecated/llama-stack-specification-deprecated-apis)

**Key Features:**
- ⚠️ Will be removed in future versions
- ⚠️ Migration guidance provided
- ⚠️ Use for compatibility during transition
- ⚠️ Not recommended for new projects
@@ -1,9 +1,18 @@
---
title: Agents
description: Build powerful AI applications with the Llama Stack agent framework
sidebar_label: Agents
sidebar_position: 3
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Agents

An Agent in Llama Stack is a powerful abstraction that allows you to build complex AI applications.

The Llama Stack agent framework is built on a modular architecture that allows for flexible and powerful AI
applications. This document explains the key components and how they work together.
The Llama Stack agent framework is built on a modular architecture that allows for flexible and powerful AI applications. This document explains the key components and how they work together.

## Core Concepts

@@ -19,7 +28,6 @@ Agents are configured using the `AgentConfig` class, which includes:
```python
from llama_stack_client import Agent


# Create the agent
agent = Agent(
    llama_stack_client,

@@ -46,6 +54,9 @@ Each interaction with an agent is called a "turn" and consists of:
- **Steps**: The agent's internal processing (inference, tool execution, etc.)
- **Output Message**: The agent's response

<Tabs>
<TabItem value="streaming" label="Streaming Response">

```python
from llama_stack_client import AgentEventLogger

@@ -57,9 +68,9 @@ turn_response = agent.create_turn(
for log in AgentEventLogger().log(turn_response):
    log.print()
```
### Non-Streaming


</TabItem>
<TabItem value="non-streaming" label="Non-Streaming Response">

```python
from rich.pretty import pprint

@@ -78,6 +89,9 @@ print("Steps:")
pprint(response.steps)
```

</TabItem>
</Tabs>

### 4. Steps

Each turn consists of multiple steps that represent the agent's thought process:

@@ -88,5 +102,11 @@ Each turn consists of multiple steps that represent the agent's thought process:

## Agent Execution Loop

Refer to the [Agent Execution Loop](./agent_execution_loop) for more details on what happens within an agent turn.

Refer to the [Agent Execution Loop](agent_execution_loop) for more details on what happens within an agent turn.
## Related Resources

- **[Agent Execution Loop](./agent_execution_loop)** - Understanding the internal processing flow
- **[RAG (Retrieval Augmented Generation)](./rag)** - Building knowledge-enhanced agents
- **[Tools Integration](./tools)** - Extending agent capabilities with external tools
- **[Safety Guardrails](./safety)** - Implementing responsible AI practices
@@ -1,10 +1,18 @@
## Agent Execution Loop
---
title: Agent Execution Loop
description: Understanding the internal processing flow of Llama Stack agents
sidebar_label: Agent Execution Loop
sidebar_position: 4
---

Agents are the heart of Llama Stack applications. They combine inference, memory, safety, and tool usage into coherent
workflows. At its core, an agent follows a sophisticated execution loop that enables multi-step reasoning, tool usage,
and safety checks.
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

### Steps in the Agent Workflow
# Agent Execution Loop

Agents are the heart of Llama Stack applications. They combine inference, memory, safety, and tool usage into coherent workflows. At its core, an agent follows a sophisticated execution loop that enables multi-step reasoning, tool usage, and safety checks.

## Steps in the Agent Workflow

Each agent turn follows these key steps:

@@ -17,7 +25,7 @@ Each agent turn follows these key steps:

3. **Inference Loop**: The agent enters its main execution loop:
   - The LLM receives a user prompt (with previous tool outputs)
   - The LLM generates a response, potentially with [tool calls](tools)
   - The LLM generates a response, potentially with [tool calls](./tools)
   - If tool calls are present:
     - Tool inputs are safety-checked
     - Tools are executed (e.g., web search, code execution)

@@ -29,7 +37,9 @@ Each agent turn follows these key steps:

4. **Final Safety Check**: The agent's final response is screened through safety shields

```{mermaid}
## Execution Flow Diagram

```mermaid
sequenceDiagram
    participant U as User
    participant E as Executor

@@ -70,12 +80,15 @@ sequenceDiagram

Each step in this process can be monitored and controlled through configurations.

### Agent Execution Loop Example
## Agent Execution Example

Here's an example that demonstrates monitoring the agent's execution:

<Tabs>
<TabItem value="streaming" label="Streaming Execution">

```python
from llama_stack_client import LlamaStackClient, Agent, AgentEventLogger
from rich.pretty import pprint

# Replace host and port
client = LlamaStackClient(base_url=f"http://{HOST}:{PORT}")

@@ -120,6 +133,13 @@ response = agent.create_turn(
# Monitor each step of execution
for log in AgentEventLogger().log(response):
    log.print()
```

</TabItem>
<TabItem value="non-streaming" label="Non-Streaming Execution">

```python
from rich.pretty import pprint

# Using non-streaming API, the response contains input, steps, and output.
response = agent.create_turn(

@@ -131,9 +151,35 @@ response = agent.create_turn(
        }
    ],
    session_id=session_id,
    stream=False,
)

pprint(f"Input: {response.input_messages}")
pprint(f"Output: {response.output_message.content}")
pprint(f"Steps: {response.steps}")
```

</TabItem>
</Tabs>

## Key Configuration Options

The main knobs are summarized below; a configuration sketch follows these lists.

### Loop Control
- **max_infer_iters**: Maximum number of inference iterations (default: 5)
- **max_tokens**: Token limit for responses
- **temperature**: Controls response randomness

### Safety Configuration
- **input_shields**: Safety checks for user input
- **output_shields**: Safety checks for agent responses

### Tool Integration
- **tools**: List of available tools for the agent
- **tool_choice**: Control over when tools are used
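
As referenced above, here is a minimal sketch of how these options map onto agent construction with the Python client. The keyword names mirror the examples in this guide, but treat the exact parameter names and value shapes (especially `sampling_params`, `tool_config`, and the shield ids) as assumptions that may differ between client versions:

```python
from llama_stack_client import Agent, LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

agent = Agent(
    client,
    model="meta-llama/Llama-3.3-70B-Instruct",
    instructions="You are a helpful assistant.",
    # Loop control
    max_infer_iters=5,                    # cap on inference/tool-call iterations per turn
    sampling_params={                     # assumed shape for token and temperature limits
        "max_tokens": 512,
        "strategy": {"type": "top_p", "temperature": 0.7, "top_p": 0.9},
    },
    # Safety configuration (shield ids must already be registered on your server)
    input_shields=["llama_guard"],        # shields run on user input
    output_shields=["llama_guard"],       # shields run on the final response
    # Tool integration
    tools=["builtin::websearch"],
    tool_config={"tool_choice": "auto"},  # assumed knob controlling when tools are used
)
```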

## Related Resources

- **[Agents](./agent)** - Understanding agent fundamentals
- **[Tools Integration](./tools)** - Adding capabilities to agents
- **[Safety Guardrails](./safety)** - Implementing safety measures
- **[RAG (Retrieval Augmented Generation)](./rag)** - Building knowledge-enhanced workflows
256  docs/docs/building_applications/evals.mdx  (new file)
@@ -0,0 +1,256 @@
---
title: Evaluations
description: Evaluate LLM applications with Llama Stack's comprehensive evaluation framework
sidebar_label: Evaluations
sidebar_position: 7
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This guide walks you through the process of evaluating an LLM application built using Llama Stack. For detailed API reference, check out the [Evaluation Reference](../references/evals_reference/) guide that covers the complete set of APIs and developer experience flow.

:::tip[Interactive Examples]
Check out our [Colab notebook](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing) for working examples with evaluations, or try the [Getting Started notebook](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).
:::

## Application Evaluation Example

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)

Llama Stack offers a library of scoring functions and the `/scoring` API, allowing you to run evaluations on your pre-annotated AI application datasets.

In this example, we will show you how to:
1. **Build an Agent** with Llama Stack
2. **Query the agent's sessions, turns, and steps** to analyze execution
3. **Evaluate the results** using scoring functions

## Step-by-Step Evaluation Process

### 1. Building a Search Agent

First, let's create an agent that can search the web to answer questions:

```python
from llama_stack_client import LlamaStackClient, Agent, AgentEventLogger

client = LlamaStackClient(base_url=f"http://{HOST}:{PORT}")

agent = Agent(
    client,
    model="meta-llama/Llama-3.3-70B-Instruct",
    instructions="You are a helpful assistant. Use search tool to answer the questions.",
    tools=["builtin::websearch"],
)

# Test prompts for evaluation
user_prompts = [
    "Which teams played in the NBA Western Conference Finals of 2024. Search the web for the answer.",
    "In which episode and season of South Park does Bill Cosby (BSM-471) first appear? Give me the number and title. Search the web for the answer.",
    "What is the British-American kickboxer Andrew Tate's kickboxing name? Search the web for the answer.",
]

session_id = agent.create_session("test-session")

# Execute all prompts in the session
for prompt in user_prompts:
    response = agent.create_turn(
        messages=[
            {
                "role": "user",
                "content": prompt,
            }
        ],
        session_id=session_id,
    )

    for log in AgentEventLogger().log(response):
        log.print()
```

### 2. Query Agent Execution Steps

Now, let's analyze the agent's execution steps to understand its performance:

<Tabs>
<TabItem value="session-analysis" label="Session Analysis">

```python
from rich.pretty import pprint

# Query the agent's session to get detailed execution data
session_response = client.agents.session.retrieve(
    session_id=session_id,
    agent_id=agent.agent_id,
)

pprint(session_response)
```

</TabItem>
<TabItem value="tool-validation" label="Tool Usage Validation">

```python
# Sanity check: Verify that all user prompts are followed by tool calls
num_tool_call = 0
for turn in session_response.turns:
    for step in turn.steps:
        if (
            step.step_type == "tool_execution"
            and step.tool_calls[0].tool_name == "brave_search"
        ):
            num_tool_call += 1

print(
    f"{num_tool_call}/{len(session_response.turns)} user prompts are followed by a tool call to `brave_search`"
)
```

</TabItem>
</Tabs>

### 3. Evaluate Agent Responses

Now we'll evaluate the agent's responses using Llama Stack's scoring API:

<Tabs>
<TabItem value="data-preparation" label="Data Preparation">

```python
# Process agent execution history into evaluation rows
eval_rows = []

# Define expected answers for our test prompts
expected_answers = [
    "Dallas Mavericks and the Minnesota Timberwolves",
    "Season 4, Episode 12",
    "King Cobra",
]

# Create evaluation dataset from agent responses
for i, turn in enumerate(session_response.turns):
    eval_rows.append(
        {
            "input_query": turn.input_messages[0].content,
            "generated_answer": turn.output_message.content,
            "expected_answer": expected_answers[i],
        }
    )

pprint(eval_rows)
```

</TabItem>
<TabItem value="scoring" label="Scoring & Evaluation">

```python
# Configure scoring parameters
scoring_params = {
    "basic::subset_of": None,  # Check if generated answer contains expected answer
}

# Run evaluation using Llama Stack's scoring API
scoring_response = client.scoring.score(
    input_rows=eval_rows,
    scoring_functions=scoring_params
)

pprint(scoring_response)

# Analyze results
for i, result in enumerate(scoring_response.results):
    print(f"Query {i+1}: {result.score}")
    print(f"  Generated: {eval_rows[i]['generated_answer'][:100]}...")
    print(f"  Expected: {expected_answers[i]}")
    print(f"  Score: {result.score}")
    print()
```

</TabItem>
</Tabs>

## Available Scoring Functions

Llama Stack provides several built-in scoring functions:

### Basic Scoring Functions
- **`basic::subset_of`**: Checks if the expected answer is contained in the generated response
- **`basic::exact_match`**: Performs exact string matching between expected and generated answers
- **`basic::regex_match`**: Uses regular expressions to match patterns in responses

### Advanced Scoring Functions
- **`llm_as_judge::accuracy`**: Uses an LLM to judge response accuracy
- **`llm_as_judge::helpfulness`**: Evaluates how helpful the response is
- **`llm_as_judge::safety`**: Assesses response safety and appropriateness

### Custom Scoring Functions
You can also create custom scoring functions for domain-specific evaluation needs.
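
For example, you might register a domain-specific judge-based function and then pass its id to `client.scoring.score` like any built-in function. This is a minimal sketch that mirrors the registration calls shown in the Scoring guide; the ids, judge criteria, and parameter names are illustrative and may need adjusting to your client version:

```python
# Register a custom judge-based scoring function (illustrative ids and params)
client.scoring_functions.register(
    scoring_function_id="custom::medical_accuracy",
    provider_id="llm_judge",
    provider_scoring_function_id="response_quality",
    params={
        "criteria": "Is the answer clinically accurate and appropriately cautious?",
        "scale": "1-5",
    },
)

# Use it alongside built-in functions
results = client.scoring.score(
    input_rows=eval_rows,
    scoring_functions={"custom::medical_accuracy": None, "basic::subset_of": None},
)
```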
|
||||
|
||||
## Evaluation Workflow Best Practices
|
||||
|
||||
### 🎯 **Dataset Preparation**
|
||||
- Use diverse test cases that cover edge cases and common scenarios
|
||||
- Include clear expected answers or success criteria
|
||||
- Balance your dataset across different difficulty levels
|
||||
|
||||
### 📊 **Metrics Selection**
|
||||
- Choose appropriate scoring functions for your use case
|
||||
- Combine multiple metrics for comprehensive evaluation
|
||||
- Consider both automated and human evaluation metrics
|
||||
|
||||
### 🔄 **Iterative Improvement**
|
||||
- Run evaluations regularly during development
|
||||
- Use evaluation results to identify areas for improvement
|
||||
- Track performance changes over time
|
||||
|
||||
### 📈 **Analysis & Reporting**
|
||||
- Analyze failures to understand model limitations
|
||||
- Generate comprehensive evaluation reports
|
||||
- Share results with stakeholders for informed decision-making
|
||||
|
||||
## Advanced Evaluation Scenarios
|
||||
|
||||
### Batch Evaluation
|
||||
For evaluating large datasets efficiently:
|
||||
|
||||
```python
|
||||
# Prepare large evaluation dataset
|
||||
large_eval_dataset = [
|
||||
{"input_query": query, "expected_answer": answer}
|
||||
for query, answer in zip(queries, expected_answers)
|
||||
]
|
||||
|
||||
# Run batch evaluation
|
||||
batch_results = client.scoring.score(
|
||||
input_rows=large_eval_dataset,
|
||||
scoring_functions={
|
||||
"basic::subset_of": None,
|
||||
"llm_as_judge::accuracy": {"judge_model": "meta-llama/Llama-3.3-70B-Instruct"},
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### Multi-Metric Evaluation
|
||||
Combining different scoring approaches:
|
||||
|
||||
```python
|
||||
comprehensive_scoring = {
    "basic::exact_match": None,
    "basic::subset_of": None,
    "llm_as_judge::accuracy": {"judge_model": "meta-llama/Llama-3.3-70B-Instruct"},
    "llm_as_judge::safety": {"judge_model": "meta-llama/Llama-3.3-70B-Instruct"},
}
|
||||
|
||||
results = client.scoring.score(
|
||||
input_rows=eval_rows,
|
||||
scoring_functions=comprehensive_scoring
|
||||
)
|
||||
```
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Agents](./agent)** - Building agents for evaluation
|
||||
- **[Tools Integration](./tools)** - Using tools in evaluated agents
|
||||
- **[Evaluation Reference](../references/evals_reference/)** - Complete API reference for evaluations
|
||||
- **[Getting Started Notebook](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)** - Interactive examples
|
||||
- **[Evaluation Examples](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing)** - Additional evaluation scenarios
|
||||
83
docs/docs/building_applications/index.mdx
Normal file
|
|
@ -0,0 +1,83 @@
|
|||
---
|
||||
title: Building Applications
|
||||
description: Comprehensive guides for building AI applications with Llama Stack
|
||||
sidebar_label: Overview
|
||||
sidebar_position: 5
|
||||
---
|
||||
|
||||
# AI Application Examples
|
||||
|
||||
Llama Stack provides all the building blocks needed to create sophisticated AI applications.
|
||||
|
||||
## Getting Started
|
||||
|
||||
The best way to get started is to work through this comprehensive notebook, which walks through the various APIs (from basic inference to RAG agents) and how to use them.
|
||||
|
||||
**📓 [Building AI Applications Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)**
|
||||
|
||||
## Core Topics
|
||||
|
||||
Here are the key topics that will help you build effective AI applications:
|
||||
|
||||
### 🤖 **Agent Development**
|
||||
- **[Agent Framework](./agent.mdx)** - Understand the components and design patterns of the Llama Stack agent framework
|
||||
- **[Agent Execution Loop](./agent_execution_loop.mdx)** - How agents process information, make decisions, and execute actions
|
||||
- **[Agents vs Responses API](./responses_vs_agents.mdx)** - Learn when to use each API for different use cases
|
||||
|
||||
### 📚 **Knowledge Integration**
|
||||
- **[RAG (Retrieval-Augmented Generation)](./rag.mdx)** - Enhance your agents with external knowledge through retrieval mechanisms
|
||||
|
||||
### 🛠️ **Capabilities & Extensions**
|
||||
- **[Tools](./tools.mdx)** - Extend your agents' capabilities by integrating with external tools and APIs
|
||||
|
||||
### 📊 **Quality & Monitoring**
|
||||
- **[Evaluations](./evals.mdx)** - Evaluate your agents' effectiveness and identify areas for improvement
|
||||
- **[Telemetry](./telemetry.mdx)** - Monitor and analyze your agents' performance and behavior
|
||||
- **[Safety](./safety.mdx)** - Implement guardrails and safety measures to ensure responsible AI behavior
|
||||
|
||||
### 🎮 **Interactive Development**
|
||||
- **[Playground](./playground.mdx)** - Interactive environment for testing and developing applications
|
||||
|
||||
## Application Patterns
|
||||
|
||||
### 🤖 **Conversational Agents**
|
||||
Build intelligent chatbots and assistants that can:
|
||||
- Maintain context across conversations
|
||||
- Access external knowledge bases
|
||||
- Execute actions through tool integrations
|
||||
- Apply safety filters and guardrails
|
||||
|
||||
### 📖 **RAG Applications**
|
||||
Create knowledge-augmented applications that:
|
||||
- Retrieve relevant information from documents
|
||||
- Generate contextually accurate responses
|
||||
- Handle large knowledge bases efficiently
|
||||
- Provide source attribution
|
||||
|
||||
### 🔧 **Tool-Enhanced Systems**
|
||||
Develop applications that can:
|
||||
- Search the web for real-time information
|
||||
- Interact with databases and APIs
|
||||
- Perform calculations and analysis
|
||||
- Execute complex multi-step workflows
|
||||
|
||||
### 🛡️ **Enterprise Applications**
|
||||
Build production-ready systems with:
|
||||
- Comprehensive safety measures
|
||||
- Performance monitoring and analytics
|
||||
- Scalable deployment configurations
|
||||
- Evaluation and quality assurance
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **📖 Start with the Notebook** - Work through the complete tutorial
|
||||
2. **🎯 Choose Your Pattern** - Pick the application type that matches your needs
|
||||
3. **🏗️ Build Your Foundation** - Set up your [providers](/docs/providers/) and [distributions](/docs/distributions/)
|
||||
4. **🚀 Deploy & Monitor** - Use our [deployment guides](/docs/deploying/) for production
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Getting Started](/docs/getting_started/quickstart)** - Basic setup and concepts
|
||||
- **[Providers](/docs/providers/)** - Available AI service providers
|
||||
- **[Distributions](/docs/distributions/)** - Pre-configured deployment packages
|
||||
- **[API Reference](/docs/api/llama-stack-specification)** - Complete API documentation
|
||||
298
docs/docs/building_applications/playground.mdx
Normal file
|
|
@ -0,0 +1,298 @@
|
|||
---
|
||||
title: Llama Stack Playground
|
||||
description: Interactive interface to explore and experiment with Llama Stack capabilities
|
||||
sidebar_label: Playground
|
||||
sidebar_position: 10
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Llama Stack Playground
|
||||
|
||||
:::note[Experimental Feature]
|
||||
The Llama Stack Playground is currently experimental and subject to change. We welcome feedback and contributions to help improve it.
|
||||
:::
|
||||
|
||||
The Llama Stack Playground is a simple interface that aims to:
|
||||
- **Showcase capabilities and concepts** of Llama Stack in an interactive environment
|
||||
- **Demo end-to-end application code** to help users get started building their own applications
|
||||
- **Provide a UI** to help users inspect and understand Llama Stack API providers and resources
|
||||
|
||||
## Key Features
|
||||
|
||||
### Interactive Playground Pages
|
||||
|
||||
The playground provides interactive pages for users to explore Llama Stack API capabilities:
|
||||
|
||||
#### Chatbot Interface
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/8d2ef802-5812-4a28-96e1-316038c84cbf" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="chat" label="Chat">
|
||||
|
||||
**Simple Chat Interface**
|
||||
- Chat directly with Llama models through an intuitive interface
|
||||
- Uses the `/chat/completions` streaming API under the hood
|
||||
- Real-time message streaming for responsive interactions
|
||||
- Perfect for testing model capabilities and prompt engineering
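Under the hood, the Chat page is doing ordinary streaming chat completions. A minimal sketch of the equivalent client call is shown below; it assumes a Llama Stack server on `localhost:8321` and uses an illustrative model id that you should replace with one served by your own deployment:

```python
from openai import OpenAI

# Point the standard OpenAI client at the Llama Stack server started in the setup step.
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # illustrative; use a model your stack serves
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```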
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rag" label="RAG Chat">
|
||||
|
||||
**Document-Aware Conversations**
|
||||
- Upload documents to create memory banks
|
||||
- Chat with a RAG-enabled agent that can query your documents
|
||||
- Uses Llama Stack's `/agents` API to create and manage RAG sessions
|
||||
- Ideal for exploring knowledge-enhanced AI applications
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Evaluation Interface
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/6cc1659f-eba4-49ca-a0a5-7c243557b4f5" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="scoring" label="Scoring Evaluations">
|
||||
|
||||
**Custom Dataset Evaluation**
|
||||
- Upload your own evaluation datasets
|
||||
- Run evaluations using available scoring functions
|
||||
- Uses Llama Stack's `/scoring` API for flexible evaluation workflows
|
||||
- Great for testing application performance on custom metrics
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="benchmarks" label="Benchmark Evaluations">
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%', marginBottom: '1rem'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/345845c7-2a2b-4095-960a-9ae40f6a93cf" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
|
||||
**Pre-registered Evaluation Tasks**
|
||||
- Evaluate models or agents on pre-defined tasks
|
||||
- Uses Llama Stack's `/eval` API for comprehensive evaluation
|
||||
- Combines datasets and scoring functions for standardized testing
|
||||
|
||||
**Setup Requirements:**
|
||||
Register evaluation datasets and benchmarks first:
|
||||
|
||||
```bash
|
||||
# Register evaluation dataset
|
||||
llama-stack-client datasets register \
|
||||
--dataset-id "mmlu" \
|
||||
--provider-id "huggingface" \
|
||||
--url "https://huggingface.co/datasets/llamastack/evals" \
|
||||
--metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
|
||||
--schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string"}, "chat_completion_input": {"type": "string"}}'
|
||||
|
||||
# Register benchmark task
|
||||
llama-stack-client benchmarks register \
|
||||
--eval-task-id meta-reference-mmlu \
|
||||
--provider-id meta-reference \
|
||||
--dataset-id mmlu \
|
||||
--scoring-functions basic::regex_parser_multiple_choice_answer
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Inspection Interface
|
||||
|
||||
<video
|
||||
controls
|
||||
autoPlay
|
||||
playsInline
|
||||
muted
|
||||
loop
|
||||
style={{width: '100%'}}
|
||||
>
|
||||
<source src="https://github.com/user-attachments/assets/01d52b2d-92af-4e3a-b623-a9b8ba22ba99" type="video/mp4" />
|
||||
Your browser does not support the video tag.
|
||||
</video>
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="providers" label="API Providers">
|
||||
|
||||
**Provider Management**
|
||||
- Inspect available Llama Stack API providers
|
||||
- View provider configurations and capabilities
|
||||
- Uses the `/providers` API for real-time provider information
|
||||
- Essential for understanding your deployment's capabilities
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="resources" label="API Resources">
|
||||
|
||||
**Resource Exploration**
|
||||
- Inspect Llama Stack API resources including:
|
||||
- **Models**: Available language models
|
||||
- **Datasets**: Registered evaluation datasets
|
||||
- **Memory Banks**: Vector databases and knowledge stores
|
||||
- **Benchmarks**: Evaluation tasks and scoring functions
|
||||
- **Shields**: Safety and content moderation tools
|
||||
- Uses `/<resources>/list` APIs for comprehensive resource visibility
|
||||
- For detailed information about resources, see [Core Concepts](/docs/concepts)
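The same information is available programmatically. The sketch below uses the Python client against the server from the setup step; attribute names follow the current client but may differ slightly across versions:

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Mirror the playground's resource pages by listing a few resource types.
print("Providers:", [p.provider_id for p in client.providers.list()])
print("Models:   ", [m.identifier for m in client.models.list()])
print("Shields:  ", [s.identifier for s in client.shields.list()])
```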
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Getting Started
|
||||
|
||||
### Quick Start Guide
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="setup" label="Setup">
|
||||
|
||||
**1. Start the Llama Stack API Server**
|
||||
|
||||
```bash
|
||||
llama stack list-deps together | xargs -L1 uv pip install
|
||||
llama stack run together
|
||||
```
|
||||
|
||||
**2. Start the Streamlit UI**
|
||||
|
||||
```bash
|
||||
# Launch the playground interface
|
||||
uv run --with ".[ui]" streamlit run llama_stack/core/ui/app.py
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="usage" label="Usage Tips">
|
||||
|
||||
**Making the Most of the Playground:**
|
||||
|
||||
- **Start with Chat**: Test basic model interactions and prompt engineering
|
||||
- **Explore RAG**: Upload sample documents to see knowledge-enhanced responses
|
||||
- **Try Evaluations**: Use the scoring interface to understand evaluation metrics
|
||||
- **Inspect Resources**: Check what providers and resources are available
|
||||
- **Experiment with Settings**: Adjust parameters to see how they affect results
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Available Distributions
|
||||
|
||||
The playground works with any Llama Stack distribution. Popular options include:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="together" label="Together AI">
|
||||
|
||||
```bash
|
||||
llama stack list-deps together | xargs -L1 uv pip install
|
||||
llama stack run together
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Cloud-hosted models
|
||||
- Fast inference
|
||||
- Multiple model options
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="ollama" label="Ollama (Local)">
|
||||
|
||||
```bash
|
||||
llama stack list-deps ollama | xargs -L1 uv pip install
|
||||
llama stack run ollama
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Local model execution
|
||||
- Privacy-focused
|
||||
- No internet required
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="meta-reference" label="Meta Reference">
|
||||
|
||||
```bash
|
||||
llama stack list-deps meta-reference | xargs -L1 uv pip install
|
||||
llama stack run meta-reference
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- Reference implementation
|
||||
- All API features available
|
||||
- Best for development
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Use Cases & Examples
|
||||
|
||||
### Educational Use Cases
|
||||
- **Learning Llama Stack**: Hands-on exploration of API capabilities
|
||||
- **Prompt Engineering**: Interactive testing of different prompting strategies
|
||||
- **RAG Experimentation**: Understanding how document retrieval affects responses
|
||||
- **Evaluation Understanding**: See how different metrics evaluate model performance
|
||||
|
||||
### Development Use Cases
|
||||
- **Prototype Testing**: Quick validation of application concepts
|
||||
- **API Exploration**: Understanding available endpoints and parameters
|
||||
- **Integration Planning**: Seeing how different components work together
|
||||
- **Demo Creation**: Showcasing Llama Stack capabilities to stakeholders
|
||||
|
||||
### Research Use Cases
|
||||
- **Model Comparison**: Side-by-side testing of different models
|
||||
- **Evaluation Design**: Understanding how scoring functions work
|
||||
- **Safety Testing**: Exploring shield effectiveness with different inputs
|
||||
- **Performance Analysis**: Measuring model behavior across different scenarios
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 🚀 **Getting Started**
|
||||
- Begin with simple chat interactions to understand basic functionality
|
||||
- Gradually explore more advanced features like RAG and evaluations
|
||||
- Use the inspection tools to understand your deployment's capabilities
|
||||
|
||||
### 🔧 **Development Workflow**
|
||||
- Use the playground to prototype before writing application code
|
||||
- Test different parameter settings interactively
|
||||
- Validate evaluation approaches before implementing them programmatically
|
||||
|
||||
### 📊 **Evaluation & Testing**
|
||||
- Start with simple scoring functions before trying complex evaluations
|
||||
- Use the playground to understand evaluation results before automation
|
||||
- Test safety features with various input types
|
||||
|
||||
### 🎯 **Production Preparation**
|
||||
- Use playground insights to inform your production API usage
|
||||
- Test edge cases and error conditions interactively
|
||||
- Validate resource configurations before deployment
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Getting Started Guide](../getting_started/quickstart)** - Complete setup and introduction
|
||||
- **[Core Concepts](/docs/concepts)** - Understanding Llama Stack fundamentals
|
||||
- **[Agents](./agent)** - Building intelligent agents
|
||||
- **[RAG (Retrieval Augmented Generation)](./rag)** - Knowledge-enhanced applications
|
||||
- **[Evaluations](./evals)** - Comprehensive evaluation framework
|
||||
- **[API Reference](/docs/api/llama-stack-specification)** - Complete API documentation
|
||||
123
docs/docs/building_applications/rag.mdx
Normal file
|
|
@ -0,0 +1,123 @@
|
|||
---
|
||||
title: Retrieval Augmented Generation (RAG)
|
||||
description: Build knowledge-enhanced AI applications with external document retrieval
|
||||
sidebar_label: RAG (Retrieval Augmented Generation)
|
||||
sidebar_position: 2
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Retrieval Augmented Generation (RAG)
|
||||
|
||||
|
||||
RAG enables your applications to reference and recall information from external documents. Llama Stack makes agentic RAG available through its OpenAI-compatible Responses API.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Start the Server
|
||||
|
||||
In one terminal, start the Llama Stack server:
|
||||
|
||||
```bash
|
||||
llama stack list-deps starter | xargs -L1 uv pip install
|
||||
llama stack run starter
|
||||
```
|
||||
|
||||
### 2. Connect with OpenAI Client
|
||||
|
||||
In another terminal, use the standard OpenAI client with the Responses API:
|
||||
|
||||
```python
|
||||
import io, requests
|
||||
from openai import OpenAI
|
||||
|
||||
url = "https://www.paulgraham.com/greatwork.html"
|
||||
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")
|
||||
|
||||
# Create vector store - auto-detects default embedding model
|
||||
vs = client.vector_stores.create()
|
||||
|
||||
response = requests.get(url)
|
||||
pseudo_file = io.BytesIO(response.content)  # response.content is already bytes
|
||||
file_id = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants").id
|
||||
client.vector_stores.files.create(vector_store_id=vs.id, file_id=file_id)
|
||||
|
||||
resp = client.responses.create(
|
||||
model="gpt-4o",
|
||||
input="How do you do great work? Use the existing knowledge_search tool.",
|
||||
tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
|
||||
include=["file_search_call.results"],
|
||||
)
|
||||
|
||||
print(resp.output[-1].content[-1].text)
|
||||
```
|
||||
This should produce output like:
|
||||
```
|
||||
Doing great work is about more than just hard work and ambition; it involves combining several elements:
|
||||
|
||||
1. **Pursue What Excites You**: Engage in projects that are both ambitious and exciting to you. It's important to work on something you have a natural aptitude for and a deep interest in.
|
||||
|
||||
2. **Explore and Discover**: Great work often feels like a blend of discovery and creation. Focus on seeing possibilities and let ideas take their natural shape, rather than just executing a plan.
|
||||
|
||||
3. **Be Bold Yet Flexible**: Take bold steps in your work without over-planning. An adaptable approach that evolves with new ideas can often lead to breakthroughs.
|
||||
|
||||
4. **Work on Your Own Projects**: Develop a habit of working on projects of your own choosing, as these often lead to great achievements. These should be projects you find exciting and that challenge you intellectually.
|
||||
|
||||
5. **Be Earnest and Authentic**: Approach your work with earnestness and authenticity. Trying to impress others with affectation can be counterproductive, as genuine effort and intellectual honesty lead to better work outcomes.
|
||||
|
||||
6. **Build a Supportive Environment**: Work alongside great colleagues who inspire you and enhance your work. Surrounding yourself with motivating individuals creates a fertile environment for great work.
|
||||
|
||||
7. **Maintain High Morale**: High morale significantly impacts your ability to do great work. Stay optimistic and protect your mental well-being to maintain progress and momentum.
|
||||
|
||||
8. **Balance**: While hard work is essential, overworking can lead to diminishing returns. Balance periods of intensive work with rest to sustain productivity over time.
|
||||
|
||||
This approach shows that great work is less about following a strict formula and more about aligning your interests, ambition, and environment to foster creativity and innovation.
|
||||
```
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
Llama Stack provides OpenAI-compatible RAG capabilities through:
|
||||
|
||||
- **Vector Stores API**: OpenAI-compatible vector storage with automatic embedding model detection
|
||||
- **Files API**: Document upload and processing using OpenAI's file format
|
||||
- **Responses API**: Enhanced chat completions with agentic tool calling via file search
|
||||
|
||||
## Configuring Default Embedding Models
|
||||
|
||||
To enable automatic vector store creation without specifying embedding models, configure a default embedding model in your run.yaml like so:
|
||||
|
||||
```yaml
|
||||
vector_stores:
|
||||
default_provider_id: faiss
|
||||
default_embedding_model:
|
||||
provider_id: sentence-transformers
|
||||
model_id: nomic-ai/nomic-embed-text-v1.5
|
||||
```
|
||||
|
||||
With this configuration:
|
||||
- `client.vector_stores.create()` works without requiring embedding model or provider parameters
|
||||
- The system automatically uses the default vector store provider (`faiss`) when multiple providers are available
|
||||
- The system automatically uses the default embedding model (`sentence-transformers/nomic-ai/nomic-embed-text-v1.5`) for any newly created vector store
|
||||
- The `default_provider_id` specifies which vector storage backend to use
|
||||
- The `default_embedding_model` specifies both the inference provider and model for embeddings
|
||||
|
||||
## Vector Store Operations
|
||||
|
||||
### Creating Vector Stores
|
||||
|
||||
You can create vector stores with automatic or explicit embedding model selection:
|
||||
|
||||
```python
|
||||
# Automatic - uses default configured embedding model and vector store provider
|
||||
vs = client.vector_stores.create()
|
||||
|
||||
# Explicit - specify embedding model and/or provider when you need specific ones
|
||||
vs = client.vector_stores.create(
|
||||
extra_body={
|
||||
"provider_id": "faiss", # Optional: specify vector store provider
|
||||
"embedding_model": "sentence-transformers/nomic-ai/nomic-embed-text-v1.5",
|
||||
"embedding_dimension": 768 # Optional: will be auto-detected if not provided
|
||||
}
|
||||
)
|
||||
```
|
||||
|
|
@ -1,10 +1,20 @@
|
|||
---
|
||||
title: Agents vs OpenAI Responses API
|
||||
description: Compare the Agents API and OpenAI Responses API for building AI applications with tool calling capabilities
|
||||
sidebar_label: Agents vs Responses API
|
||||
sidebar_position: 5
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Agents vs OpenAI Responses API
|
||||
|
||||
Llama Stack (LLS) provides two different APIs for building AI applications with tool calling capabilities: the **Agents API** and the **OpenAI Responses API**. While both enable AI systems to use tools, and maintain full conversation history, they serve different use cases and have distinct characteristics.
|
||||
|
||||
```{note}
|
||||
**Note:** For simple and basic inferencing, you may want to use the [Chat Completions API](../providers/openai.md#chat-completions) directly, before progressing to Agents or Responses API.
|
||||
```
|
||||
:::note
|
||||
**Note:** For simple and basic inferencing, you may want to use the [Chat Completions API](../providers/openai#chat-completions) directly, before progressing to Agents or Responses API.
|
||||
:::
|
||||
|
||||
## Overview
|
||||
|
||||
|
|
@ -21,6 +31,8 @@ Additionally, Agents let you specify input/output shields whereas Responses do n
|
|||
|
||||
Today the Agents and Responses APIs can be used independently depending on the use case. But, it is also productive to treat the APIs as complementary. It is not currently supported, but it is planned for the LLS Agents API to alternatively use the Responses API as its backend instead of the default Chat Completions API, i.e., enabling a combination of the safety features of Agents with the dynamic configuration and branching capabilities of Responses.
|
||||
|
||||
## Feature Comparison
|
||||
|
||||
| Feature | LLS Agents API | OpenAI Responses API |
|
||||
|---------|------------|---------------------|
|
||||
| **Conversation Management** | Linear persistent sessions | Can branch from any previous response ID |
|
||||
|
|
@ -34,7 +46,10 @@ Let's compare how both APIs handle a research task where we need to:
|
|||
2. Access different information sources dynamically
|
||||
3. Continue the conversation based on search results
|
||||
|
||||
### Agents API: Session-based configuration with safety shields
|
||||
<Tabs>
|
||||
<TabItem value="agents" label="Agents API">
|
||||
|
||||
### Session-based Configuration with Safety Shields
|
||||
|
||||
```python
|
||||
# Create agent with static session configuration
|
||||
|
|
@ -85,7 +100,10 @@ print(f"First result: {response1.output_message.content}")
|
|||
print(f"Optimization: {response2.output_message.content}")
|
||||
```
|
||||
|
||||
### Responses API: Dynamic per-call configuration with branching
|
||||
</TabItem>
|
||||
<TabItem value="responses" label="Responses API">
|
||||
|
||||
### Dynamic Per-call Configuration with Branching
|
||||
|
||||
```python
|
||||
# First response: Use web search for latest algorithms
|
||||
|
|
@ -130,50 +148,74 @@ print(f"File search results: {response2.output_message.content}")
|
|||
print(f"Alternative web search: {response3.output_message.content}")
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
Both APIs demonstrate distinct strengths that make them valuable on their own for different scenarios. The Agents API excels in providing structured, safety-conscious workflows with persistent session management, while the Responses API offers flexibility through dynamic configuration and OpenAI compatible tool patterns.
|
||||
|
||||
## Use Case Examples
|
||||
|
||||
### 1. **Research and Analysis with Safety Controls**
|
||||
### 1. Research and Analysis with Safety Controls
|
||||
**Best Choice: Agents API**
|
||||
|
||||
**Scenario:** You're building a research assistant for a financial institution that needs to analyze market data, execute code to process financial models, and search through internal compliance documents. The system must ensure all interactions are logged for regulatory compliance and protected by safety shields to prevent malicious code execution or data leaks.
|
||||
|
||||
**Why Agents API?** The Agents API provides persistent session management for iterative research workflows, built-in safety shields to protect against malicious code in financial models, and structured execution logs (session/turn/step) required for regulatory compliance. The static tool configuration ensures consistent access to your knowledge base and code interpreter throughout the entire research session.
|
||||
|
||||
### 2. **Dynamic Information Gathering with Branching Exploration**
|
||||
### 2. Dynamic Information Gathering with Branching Exploration
|
||||
**Best Choice: Responses API**
|
||||
|
||||
**Scenario:** You're building a competitive intelligence tool that helps businesses research market trends. Users need to dynamically switch between web search for current market data and file search through uploaded industry reports. They also want to branch conversations to explore different market segments simultaneously and experiment with different models for various analysis types.
|
||||
|
||||
**Why Responses API?** The Responses API's branching capability lets users explore multiple market segments from any research point. Dynamic per-call configuration allows switching between web search and file search as needed, while experimenting with different models (faster models for quick searches, more powerful models for deep analysis). The OpenAI-compatible tool patterns make integration straightforward.
|
||||
|
||||
### 3. **OpenAI Migration with Advanced Tool Capabilities**
|
||||
### 3. OpenAI Migration with Advanced Tool Capabilities
|
||||
**Best Choice: Responses API**
|
||||
|
||||
**Scenario:** You have an existing application built with OpenAI's Assistants API that uses file search and web search capabilities. You want to migrate to Llama Stack for better performance and cost control while maintaining the same tool calling patterns and adding new capabilities like dynamic vector store selection.
|
||||
|
||||
**Why Responses API?** The Responses API provides full OpenAI tool compatibility (`web_search`, `file_search`) with identical syntax, making migration seamless. The dynamic per-call configuration enables advanced features like switching vector stores per query or changing models based on query complexity - capabilities that extend beyond basic OpenAI functionality while maintaining compatibility.
|
||||
|
||||
### 4. **Educational Programming Tutor**
|
||||
### 4. Educational Programming Tutor
|
||||
**Best Choice: Agents API**
|
||||
|
||||
**Scenario:** You're building a programming tutor that maintains student context across multiple sessions, safely executes code exercises, and tracks learning progress with audit trails for educators.
|
||||
|
||||
**Why Agents API?** Persistent sessions remember student progress across multiple interactions, safety shields prevent malicious code execution while allowing legitimate programming exercises, and structured execution logs help educators track learning patterns.
|
||||
|
||||
### 5. **Advanced Software Debugging Assistant**
|
||||
### 5. Advanced Software Debugging Assistant
|
||||
**Best Choice: Agents API with Responses Backend**
|
||||
|
||||
**Scenario:** You're building a debugging assistant that helps developers troubleshoot complex issues. It needs to maintain context throughout a debugging session, safely execute diagnostic code, switch between different analysis tools dynamically, and branch conversations to explore multiple potential causes simultaneously.
|
||||
|
||||
**Why Agents + Responses?** The Agent provides safety shields for code execution and session management for the overall debugging workflow. The underlying Responses API enables dynamic model selection and flexible tool configuration per query, while branching lets you explore different theories (memory leak vs. concurrency issue) from the same debugging point and compare results.
|
||||
|
||||
> **Note:** The ability to use Responses API as the backend for Agents is not yet implemented but is planned for a future release. Currently, Agents use Chat Completions API as their backend by default.
|
||||
:::info[Future Enhancement]
|
||||
The ability to use Responses API as the backend for Agents is not yet implemented but is planned for a future release. Currently, Agents use Chat Completions API as their backend by default.
|
||||
:::
|
||||
|
||||
## For More Information
|
||||
## Decision Framework
|
||||
|
||||
- **LLS Agents API**: For detailed information on creating and managing agents, see the [Agents documentation](agent.md)
|
||||
- **OpenAI Responses API**: For information on using the OpenAI-compatible responses API, see the [OpenAI API documentation](https://platform.openai.com/docs/api-reference/responses)
|
||||
- **Chat Completions API**: For the default backend API used by Agents, see the [Chat Completions providers documentation](../providers/openai.md#chat-completions)
|
||||
- **Agent Execution Loop**: For understanding how agents process turns and steps in their execution, see the [Agent Execution Loop documentation](agent_execution_loop.md)
|
||||
Use this framework to choose the right API for your use case:
|
||||
|
||||
### Choose Agents API when:
|
||||
- ✅ You need **safety shields** for input/output validation
|
||||
- ✅ Your application requires **linear conversation flow** with persistent context
|
||||
- ✅ You need **audit trails** and structured execution logs
|
||||
- ✅ Your tool configuration is **static** throughout the session
|
||||
- ✅ You're building **educational, financial, or enterprise** applications with compliance requirements
|
||||
|
||||
### Choose Responses API when:
|
||||
- ✅ You need **conversation branching** to explore multiple paths
|
||||
- ✅ You want **dynamic per-call configuration** (models, tools, vector stores)
|
||||
- ✅ You're **migrating from OpenAI** and want familiar tool patterns
|
||||
- ✅ You need **OpenAI compatibility** for existing workflows
|
||||
- ✅ Your application benefits from **flexible, experimental** interactions
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Agents](./agent)** - Understanding the Agents API fundamentals
|
||||
- **[Agent Execution Loop](./agent_execution_loop)** - How agents process turns and steps
|
||||
- **[Tools Integration](./tools)** - Adding capabilities to both APIs
|
||||
- **[OpenAI Compatibility](../providers/openai)** - Using OpenAI-compatible endpoints
|
||||
- **[Safety Guardrails](./safety)** - Implementing safety measures in agents
|
||||
394
docs/docs/building_applications/safety.mdx
Normal file
|
|
@ -0,0 +1,394 @@
|
|||
---
|
||||
title: Safety Guardrails
|
||||
description: Implement safety measures and content moderation in Llama Stack applications
|
||||
sidebar_label: Safety
|
||||
sidebar_position: 9
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Safety Guardrails
|
||||
|
||||
Safety is a critical component of any AI application. Llama Stack provides a comprehensive Shield system that can be applied at multiple touchpoints to ensure responsible AI behavior and content moderation.
|
||||
|
||||
## Shield System Overview
|
||||
|
||||
The Shield system in Llama Stack provides:
|
||||
- **Content filtering** for both input and output messages
|
||||
- **Multi-touchpoint protection** across your application flow
|
||||
- **Configurable safety policies** tailored to your use case
|
||||
- **Integration with agents** for automated safety enforcement
|
||||
|
||||
## Basic Shield Usage
|
||||
|
||||
### Registering a Safety Shield
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="registration" label="Shield Registration">
|
||||
|
||||
```python
|
||||
# Register a safety shield
|
||||
shield_id = "content_safety"
|
||||
client.shields.register(
|
||||
shield_id=shield_id,
|
||||
provider_shield_id="llama-guard-basic"
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="manual-check" label="Manual Safety Check">
|
||||
|
||||
```python
|
||||
# Run content through shield manually
|
||||
response = client.safety.run_shield(
|
||||
shield_id=shield_id,
|
||||
messages=[{"role": "user", "content": "User message here"}]
|
||||
)
|
||||
|
||||
if response.violation:
|
||||
print(f"Safety violation detected: {response.violation.user_message}")
|
||||
# Handle violation appropriately
|
||||
else:
|
||||
print("Content passed safety checks")
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Agent Integration
|
||||
|
||||
Shields can be automatically applied to agent interactions for seamless safety enforcement:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="input-shields" label="Input Shields">
|
||||
|
||||
```python
|
||||
from llama_stack_client import Agent
|
||||
|
||||
# Create agent with input safety shields
|
||||
agent = Agent(
|
||||
client,
|
||||
model="meta-llama/Llama-3.2-3B-Instruct",
|
||||
instructions="You are a helpful assistant",
|
||||
input_shields=["content_safety"], # Shield user inputs
|
||||
tools=["builtin::websearch"],
|
||||
)
|
||||
|
||||
session_id = agent.create_session("safe_session")
|
||||
|
||||
# All user inputs will be automatically screened
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "Tell me about AI safety"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="output-shields" label="Output Shields">
|
||||
|
||||
```python
|
||||
# Create agent with output safety shields
|
||||
agent = Agent(
|
||||
client,
|
||||
model="meta-llama/Llama-3.2-3B-Instruct",
|
||||
instructions="You are a helpful assistant",
|
||||
output_shields=["content_safety"], # Shield agent outputs
|
||||
tools=["builtin::websearch"],
|
||||
)
|
||||
|
||||
session_id = agent.create_session("safe_session")
|
||||
|
||||
# All agent responses will be automatically screened
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "Help me with my research"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="both-shields" label="Input & Output Shields">
|
||||
|
||||
```python
|
||||
# Create agent with comprehensive safety coverage
|
||||
agent = Agent(
|
||||
client,
|
||||
model="meta-llama/Llama-3.2-3B-Instruct",
|
||||
instructions="You are a helpful assistant",
|
||||
input_shields=["content_safety"], # Screen user inputs
|
||||
output_shields=["content_safety"], # Screen agent outputs
|
||||
tools=["builtin::websearch"],
|
||||
)
|
||||
|
||||
session_id = agent.create_session("fully_protected_session")
|
||||
|
||||
# Both input and output are automatically protected
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "Research question here"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Available Shield Types
|
||||
|
||||
### Llama Guard Shields
|
||||
|
||||
Llama Guard provides state-of-the-art content safety classification:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="basic" label="Basic Llama Guard">
|
||||
|
||||
```python
|
||||
# Basic Llama Guard for general content safety
|
||||
client.shields.register(
|
||||
shield_id="llama_guard_basic",
|
||||
provider_shield_id="llama-guard-basic"
|
||||
)
|
||||
```
|
||||
|
||||
**Use Cases:**
|
||||
- General content moderation
|
||||
- Harmful content detection
|
||||
- Basic safety compliance
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="advanced" label="Advanced Llama Guard">
|
||||
|
||||
```python
|
||||
# Advanced Llama Guard with custom categories
|
||||
client.shields.register(
|
||||
shield_id="llama_guard_advanced",
|
||||
provider_shield_id="llama-guard-advanced",
|
||||
config={
|
||||
"categories": [
|
||||
"violence", "hate_speech", "sexual_content",
|
||||
"self_harm", "illegal_activity"
|
||||
],
|
||||
"threshold": 0.8
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
**Use Cases:**
|
||||
- Fine-tuned safety policies
|
||||
- Domain-specific content filtering
|
||||
- Enterprise compliance requirements
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Custom Safety Shields
|
||||
|
||||
Create domain-specific safety shields for specialized use cases:
|
||||
|
||||
```python
|
||||
# Register custom safety shield
|
||||
client.shields.register(
|
||||
shield_id="financial_compliance",
|
||||
provider_shield_id="custom-financial-shield",
|
||||
config={
|
||||
"detect_pii": True,
|
||||
"financial_advice_warning": True,
|
||||
"regulatory_compliance": "FINRA"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Safety Response Handling
|
||||
|
||||
When safety violations are detected, handle them appropriately:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="basic-handling" label="Basic Handling">
|
||||
|
||||
```python
|
||||
response = client.safety.run_shield(
|
||||
shield_id="content_safety",
|
||||
messages=[{"role": "user", "content": "Potentially harmful content"}]
|
||||
)
|
||||
|
||||
if response.violation:
|
||||
violation = response.violation
|
||||
print(f"Violation Type: {violation.violation_type}")
|
||||
print(f"User Message: {violation.user_message}")
|
||||
print(f"Metadata: {violation.metadata}")
|
||||
|
||||
# Log the violation for audit purposes
|
||||
logger.warning(f"Safety violation detected: {violation.violation_type}")
|
||||
|
||||
# Provide appropriate user feedback
|
||||
return "I can't help with that request. Please try asking something else."
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="advanced-handling" label="Advanced Handling">
|
||||
|
||||
```python
|
||||
from datetime import datetime
import logging

logger = logging.getLogger(__name__)


def handle_safety_response(safety_response, user_message):
|
||||
"""Advanced safety response handling with logging and user feedback"""
|
||||
|
||||
if not safety_response.violation:
|
||||
return {"safe": True, "message": "Content passed safety checks"}
|
||||
|
||||
violation = safety_response.violation
|
||||
|
||||
# Log violation details
|
||||
audit_log = {
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"violation_type": violation.violation_type,
|
||||
"original_message": user_message,
|
||||
"shield_response": violation.user_message,
|
||||
"metadata": violation.metadata
|
||||
}
|
||||
logger.warning(f"Safety violation: {audit_log}")
|
||||
|
||||
# Determine appropriate response based on violation type
|
||||
if violation.violation_type == "hate_speech":
|
||||
user_feedback = "I can't engage with content that contains hate speech. Let's keep our conversation respectful."
|
||||
elif violation.violation_type == "violence":
|
||||
user_feedback = "I can't provide information that could promote violence. How else can I help you today?"
|
||||
else:
|
||||
user_feedback = "I can't help with that request. Please try asking something else."
|
||||
|
||||
return {
|
||||
"safe": False,
|
||||
"user_feedback": user_feedback,
|
||||
"violation_details": audit_log
|
||||
}
|
||||
|
||||
# Usage
|
||||
safety_result = handle_safety_response(response, user_input)
|
||||
if not safety_result["safe"]:
|
||||
return safety_result["user_feedback"]
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Safety Configuration Best Practices
|
||||
|
||||
### 🛡️ **Multi-Layer Protection**
|
||||
- Use both input and output shields for comprehensive coverage
|
||||
- Combine multiple shield types for different threat categories
|
||||
- Implement fallback mechanisms when shields fail
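For the fallback point above, one simple sketch (reusing the `run_shield` call from earlier in this guide) is to fail closed when the safety provider itself errors out:

```python
def check_input_safely(client, shield_id, messages):
    """Return True only when the shield ran and reported no violation.

    Fails closed: if the safety provider is unreachable or raises, the content
    is treated as unsafe instead of being passed through silently.
    """
    try:
        result = client.safety.run_shield(shield_id=shield_id, messages=messages)
    except Exception as exc:  # network errors, provider outages, etc.
        logger.error(f"Shield {shield_id} unavailable, rejecting input: {exc}")
        return False
    return result.violation is None


if not check_input_safely(client, "content_safety", [{"role": "user", "content": user_input}]):
    print("I can't help with that request right now. Please try again later.")
```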
|
||||
|
||||
### 📊 **Monitoring & Auditing**
|
||||
- Log all safety violations for compliance and analysis
|
||||
- Monitor false positive rates to tune shield sensitivity
|
||||
- Track safety metrics across different use cases
|
||||
|
||||
### ⚙️ **Configuration Management**
|
||||
- Use environment-specific safety configurations
|
||||
- Implement A/B testing for shield effectiveness
|
||||
- Regularly update shield models and policies
|
||||
|
||||
### 🔧 **Integration Patterns**
|
||||
- Integrate shields early in the development process
|
||||
- Test safety measures with adversarial inputs
|
||||
- Provide clear user feedback for violations
|
||||
|
||||
## Advanced Safety Scenarios
|
||||
|
||||
### Context-Aware Safety
|
||||
|
||||
```python
|
||||
# Safety shields that consider conversation context
|
||||
agent = Agent(
|
||||
client,
|
||||
model="meta-llama/Llama-3.2-3B-Instruct",
|
||||
instructions="You are a healthcare assistant",
|
||||
input_shields=["medical_safety"],
|
||||
output_shields=["medical_safety"],
|
||||
# Context helps shields make better decisions
|
||||
safety_context={
|
||||
"domain": "healthcare",
|
||||
"user_type": "patient",
|
||||
"compliance_level": "HIPAA"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### Dynamic Shield Selection
|
||||
|
||||
```python
|
||||
def select_shield_for_user(user_profile):
|
||||
"""Select appropriate safety shield based on user context"""
|
||||
if user_profile.age < 18:
|
||||
return "child_safety_shield"
|
||||
elif user_profile.context == "enterprise":
|
||||
return "enterprise_compliance_shield"
|
||||
else:
|
||||
return "general_safety_shield"
|
||||
|
||||
# Use dynamic shield selection
|
||||
shield_id = select_shield_for_user(current_user)
|
||||
response = client.safety.run_shield(
|
||||
shield_id=shield_id,
|
||||
messages=messages
|
||||
)
|
||||
```
|
||||
|
||||
## Compliance and Regulations
|
||||
|
||||
### Industry-Specific Safety
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="healthcare" label="Healthcare (HIPAA)">
|
||||
|
||||
```python
|
||||
# Healthcare-specific safety configuration
|
||||
client.shields.register(
|
||||
shield_id="hipaa_compliance",
|
||||
provider_shield_id="healthcare-safety-shield",
|
||||
config={
|
||||
"detect_phi": True, # Protected Health Information
|
||||
"medical_advice_warning": True,
|
||||
"regulatory_framework": "HIPAA"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="financial" label="Financial (FINRA)">
|
||||
|
||||
```python
|
||||
# Financial services safety configuration
|
||||
client.shields.register(
|
||||
shield_id="finra_compliance",
|
||||
provider_shield_id="financial-safety-shield",
|
||||
config={
|
||||
"detect_financial_advice": True,
|
||||
"investment_disclaimers": True,
|
||||
"regulatory_framework": "FINRA"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="education" label="Education (COPPA)">
|
||||
|
||||
```python
|
||||
# Educational platform safety for minors
|
||||
client.shields.register(
|
||||
shield_id="coppa_compliance",
|
||||
provider_shield_id="educational-safety-shield",
|
||||
config={
|
||||
"child_protection": True,
|
||||
"educational_content_only": True,
|
||||
"regulatory_framework": "COPPA"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Agents](./agent)** - Integrating safety shields with intelligent agents
|
||||
- **[Agent Execution Loop](./agent_execution_loop)** - Understanding safety in the execution flow
|
||||
- **[Evaluations](./evals)** - Evaluating safety shield effectiveness
|
||||
- **[Llama Guard Documentation](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard3)** - Advanced safety model details
|
||||
212
docs/docs/building_applications/telemetry.mdx
Normal file
|
|
@ -0,0 +1,212 @@
|
|||
---
|
||||
title: Telemetry
|
||||
description: Monitor and observe Llama Stack applications with comprehensive telemetry capabilities
|
||||
sidebar_label: Telemetry
|
||||
sidebar_position: 8
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Telemetry
|
||||
|
||||
The Llama Stack uses OpenTelemetry to provide comprehensive tracing, metrics, and logging capabilities.
|
||||
|
||||
|
||||
## Automatic Metrics Generation
|
||||
|
||||
Llama Stack automatically generates metrics during inference operations. These metrics are aggregated at the **inference request level** and provide insights into token usage and model performance.
|
||||
|
||||
### Available Metrics
|
||||
|
||||
The following metrics are automatically generated for each inference request:
|
||||
|
||||
| Metric Name | Type | Unit | Description | Labels |
|
||||
|-------------|------|------|-------------|--------|
|
||||
| `llama_stack_prompt_tokens_total` | Counter | `tokens` | Number of tokens in the input prompt | `model_id`, `provider_id` |
|
||||
| `llama_stack_completion_tokens_total` | Counter | `tokens` | Number of tokens in the generated response | `model_id`, `provider_id` |
|
||||
| `llama_stack_tokens_total` | Counter | `tokens` | Total tokens used (prompt + completion) | `model_id`, `provider_id` |
|
||||
|
||||
### Metric Generation Flow
|
||||
|
||||
1. **Token Counting**: During inference operations (chat completion, completion, etc.), the system counts tokens in both input prompts and generated responses
|
||||
2. **Metric Construction**: For each request, `MetricEvent` objects are created with the token counts
|
||||
3. **Telemetry Logging**: Metrics are sent to the configured telemetry sinks
|
||||
4. **OpenTelemetry Export**: When OpenTelemetry is enabled, metrics are exposed as standard OpenTelemetry counters
|
||||
|
||||
### Metric Aggregation Level
|
||||
|
||||
All metrics are generated and aggregated at the **inference request level**. This means:
|
||||
|
||||
- Each individual inference request generates its own set of metrics
|
||||
- Metrics are not pre-aggregated across multiple requests
|
||||
- Aggregation (sums, averages, etc.) can be performed by your observability tools (Prometheus, Grafana, etc.)
|
||||
- Each metric includes labels for `model_id` and `provider_id` to enable filtering and grouping
|
||||
|
||||
### Example Metric Event
|
||||
|
||||
```python
|
||||
MetricEvent(
|
||||
trace_id="1234567890abcdef",
|
||||
span_id="abcdef1234567890",
|
||||
metric="total_tokens",
|
||||
value=150,
|
||||
timestamp=1703123456.789,
|
||||
unit="tokens",
|
||||
attributes={
|
||||
"model_id": "meta-llama/Llama-3.2-3B-Instruct",
|
||||
"provider_id": "tgi"
|
||||
},
|
||||
)
|
||||
```
|
||||
|
||||
## Telemetry Sinks
|
||||
|
||||
Choose from multiple sink types based on your observability needs:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="opentelemetry" label="OpenTelemetry">
|
||||
|
||||
Send events to an OpenTelemetry Collector for integration with observability platforms:
|
||||
|
||||
**Use Cases:**
|
||||
- Visualizing traces in tools like Jaeger
|
||||
- Collecting metrics for Prometheus
|
||||
- Integration with enterprise observability stacks
|
||||
|
||||
**Features:**
|
||||
- Standard OpenTelemetry format
|
||||
- Compatible with all OpenTelemetry collectors
|
||||
- Supports both traces and metrics
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="console" label="Console">
|
||||
|
||||
Print events to the console for immediate debugging:
|
||||
|
||||
**Use Cases:**
|
||||
- Development and testing
|
||||
- Quick debugging sessions
|
||||
- Simple logging without external tools
|
||||
|
||||
**Features:**
|
||||
- Immediate output visibility
|
||||
- No setup required
|
||||
- Human-readable format
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Configuration
|
||||
|
||||
### Meta-Reference Provider
|
||||
|
||||
Currently, only the meta-reference provider is implemented. It can be configured to send events to multiple sink types:
|
||||
|
||||
```yaml
|
||||
telemetry:
|
||||
- provider_id: meta-reference
|
||||
provider_type: inline::meta-reference
|
||||
config:
|
||||
service_name: "llama-stack-service"
|
||||
sinks: ['console', 'otel_trace', 'otel_metric']
|
||||
otel_exporter_otlp_endpoint: "http://localhost:4318"
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Configure telemetry behavior using environment variables:
|
||||
|
||||
- **`OTEL_EXPORTER_OTLP_ENDPOINT`**: OpenTelemetry Collector endpoint (default: `http://localhost:4318`)
|
||||
- **`OTEL_SERVICE_NAME`**: Service name for telemetry (default: empty string)
|
||||
- **`TELEMETRY_SINKS`**: Comma-separated list of sinks (default: `[]`)
|
||||
|
||||
### Quick Setup: Complete Telemetry Stack
|
||||
|
||||
Use the automated setup script to launch the complete telemetry stack (Jaeger, OpenTelemetry Collector, Prometheus, and Grafana):
|
||||
|
||||
```bash
|
||||
./scripts/telemetry/setup_telemetry.sh
|
||||
```
|
||||
|
||||
This sets up:
|
||||
- **Jaeger UI**: http://localhost:16686 (traces visualization)
|
||||
- **Prometheus**: http://localhost:9090 (metrics)
|
||||
- **Grafana**: http://localhost:3000 (dashboards with auto-configured data sources)
|
||||
- **OTEL Collector**: http://localhost:4318 (OTLP endpoint)
|
||||
|
||||
Once running, you can visualize traces by navigating to [Grafana](http://localhost:3000/) and logging in with username `admin` and password `admin`.
|
||||
|
||||
## Querying Metrics
|
||||
|
||||
When using the OpenTelemetry sink, metrics are exposed in standard format and can be queried through various tools:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="prometheus" label="Prometheus Queries">
|
||||
|
||||
Example Prometheus queries for analyzing token usage:
|
||||
|
||||
```promql
|
||||
# Total tokens used across all models
|
||||
sum(llama_stack_tokens_total)
|
||||
|
||||
# Tokens per model
|
||||
sum by (model_id) (llama_stack_tokens_total)
|
||||
|
||||
# Average tokens per request over 5 minutes
|
||||
rate(llama_stack_tokens_total[5m])
|
||||
|
||||
# Token usage by provider
|
||||
sum by (provider_id) (llama_stack_tokens_total)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="grafana" label="Grafana Dashboards">
|
||||
|
||||
Create dashboards using Prometheus as a data source:
|
||||
|
||||
- **Token Usage Over Time**: Line charts showing token consumption trends
|
||||
- **Model Performance**: Comparison of different models by token efficiency
|
||||
- **Provider Analysis**: Breakdown of usage across different providers
|
||||
- **Request Patterns**: Understanding peak usage times and patterns
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="otlp" label="OpenTelemetry Collector">
|
||||
|
||||
Forward metrics to other observability systems:
|
||||
|
||||
- Export to multiple backends simultaneously
|
||||
- Apply transformations and filtering
|
||||
- Integrate with existing monitoring infrastructure
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 🔍 **Monitoring Strategy**
|
||||
- Use OpenTelemetry for production environments
|
||||
- Set up alerts on key metrics like token usage and error rates
|
||||
|
||||
### 📊 **Metrics Analysis**
|
||||
- Track token usage trends to optimize costs
|
||||
- Monitor response times across different models
|
||||
- Analyze usage patterns to improve resource allocation
|
||||
|
||||
### 🚨 **Alerting & Debugging**
|
||||
- Set up alerts for unusual token consumption spikes
|
||||
- Use trace data to debug performance issues
|
||||
- Monitor error rates and failure patterns
|
||||
|
||||
### 🔧 **Configuration Management**
|
||||
- Use environment variables for flexible deployment
|
||||
- Ensure proper network access to OpenTelemetry collectors
|
||||
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Agents](./agent)** - Monitoring agent execution with telemetry
|
||||
- **[Evaluations](./evals)** - Using telemetry data for performance evaluation
|
||||
- **[Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)** - Telemetry examples and queries
|
||||
- **[OpenTelemetry Documentation](https://opentelemetry.io/)** - Comprehensive observability framework
|
||||
- **[Jaeger Documentation](https://www.jaegertracing.io/)** - Distributed tracing visualization
|
||||
|
|
@ -1,6 +1,17 @@
|
|||
---
|
||||
title: Tools
|
||||
description: Extend agent capabilities with external tools and function calling
|
||||
sidebar_label: Tools
|
||||
sidebar_position: 6
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Tools
|
||||
|
||||
Tools are functions that can be invoked by an agent to perform tasks. They are organized into tool groups and registered with specific providers. Each tool group represents a collection of related tools from a single provider. Grouping also lets shared state be externalized: the tools in a group typically operate on the same underlying state.
|
||||
|
||||
An example of this would be a "db_access" tool group that contains tools for interacting with a database. "list_tables", "query_table", "insert_row" could be examples of tools in this group.
|
||||
|
||||
Tools are treated like any other resource in Llama Stack, such as models: you can register them, configure providers for them, and so on.
|
||||
|
|
@ -9,18 +20,15 @@ When instantiating an agent, you can provide it a list of tool groups that it ha
|
|||
|
||||
Refer to the [Building AI Applications](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb) notebook for more examples on how to use tools.
|
||||
|
||||
## Server-side vs. client-side tool execution
|
||||
## Server-side vs. Client-side Tool Execution
|
||||
|
||||
Llama Stack allows you to use both server-side and client-side tools. With server-side tools, `agent.create_turn` can perform execution of the tool calls emitted by the model
|
||||
transparently giving the user the final answer desired. If client-side tools are provided, the tool call is sent back to the user for execution
|
||||
and optional continuation using the `agent.resume_turn` method.
|
||||
Llama Stack allows you to use both server-side and client-side tools. With server-side tools, `agent.create_turn` can perform execution of the tool calls emitted by the model transparently giving the user the final answer desired. If client-side tools are provided, the tool call is sent back to the user for execution and optional continuation using the `agent.resume_turn` method.
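
The following is a rough sketch of the client-side flow. The `Agent` wrapper shown elsewhere in this guide normally runs this loop for you, so treat the explicit `resume_turn` step as conceptual; the server URL and model id are placeholders.

```python
# Sketch only: the client-side tool loop. The Agent wrapper usually handles this
# automatically; the base_url and model are assumptions for illustration.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

client = LlamaStackClient(base_url="http://localhost:8321")


def lookup_user(user_id: str) -> str:
    """Look up a user in a local (client-side) store.

    :param user_id: The id of the user to look up.
    """
    return f"user {user_id}: active"


agent = Agent(
    client,
    model="meta-llama/Llama-3.2-3B-Instruct",
    instructions="You are a helpful assistant.",
    tools=[lookup_user],  # client-side tool: executed by the caller, not the server
)
session_id = agent.create_session("client-tools-session")

# create_turn emits the tool call; with the Agent wrapper the tool runs locally and
# the turn is resumed for you (conceptually via agent.resume_turn).
response = agent.create_turn(
    messages=[{"role": "user", "content": "Is user 42 active?"}],
    session_id=session_id,
)
```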
|
||||
|
||||
|
||||
### Server-side tools
|
||||
## Server-side Tools
|
||||
|
||||
Llama Stack provides built-in providers for some common tools. These include web search, math, and RAG capabilities.
|
||||
|
||||
#### Web Search
|
||||
### Web Search
|
||||
|
||||
You have three providers to execute the web search tool calls generated by a model: Brave Search, Bing Search, and Tavily Search.
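
For example, registering a Tavily-backed web search tool group might look like this. The `builtin::websearch` and `tavily-search` identifiers mirror the default configuration mentioned below and may differ if you use Brave or Bing instead:

```python
# Identifiers follow the default distributions; adjust for your chosen provider.
client.toolgroups.register(
    toolgroup_id="builtin::websearch",
    provider_id="tavily-search",
)
```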
|
||||
|
||||
|
|
@ -39,25 +47,26 @@ The tool requires an API key which can be provided either in the configuration o
|
|||
{"<provider_name>_api_key": <your api key>}
|
||||
```
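
For instance, supplying the key from the client side could look like the following sketch. The exact key name follows the `<provider_name>_api_key` pattern above; `tavily_search_api_key` is our assumption for the Tavily provider.

```python
from llama_stack_client import LlamaStackClient

# Key name is assumed to follow the "<provider_name>_api_key" pattern above.
client = LlamaStackClient(
    base_url="http://localhost:8321",
    provider_data={"tavily_search_api_key": "your-tavily-api-key"},
)
```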
|
||||
|
||||
|
||||
#### Math
|
||||
### Math
|
||||
|
||||
The WolframAlpha tool provides access to computational knowledge through the WolframAlpha API.
|
||||
|
||||
```python
|
||||
client.toolgroups.register(
|
||||
toolgroup_id="builtin::wolfram_alpha", provider_id="wolfram-alpha"
|
||||
toolgroup_id="builtin::wolfram_alpha",
|
||||
provider_id="wolfram-alpha"
|
||||
)
|
||||
```
|
||||
|
||||
Example usage:
|
||||
```python
|
||||
result = client.tool_runtime.invoke_tool(
|
||||
tool_name="wolfram_alpha", args={"query": "solve x^2 + 2x + 1 = 0"}
|
||||
tool_name="wolfram_alpha",
|
||||
args={"query": "solve x^2 + 2x + 1 = 0"}
|
||||
)
|
||||
```
|
||||
|
||||
#### RAG
|
||||
### RAG
|
||||
|
||||
The RAG tool enables retrieval of context from various types of memory banks (vector, key-value, keyword, and graph).
|
||||
|
||||
|
|
@ -75,16 +84,13 @@ Features:
|
|||
- Configurable query generation
|
||||
- Context retrieval with token limits
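
A hedged sketch of pointing an agent at the RAG tool is shown below. The `builtin::rag/knowledge_search` name and the `vector_db_ids` argument follow common Llama Stack examples and may differ across versions; `"my-documents"` is a placeholder vector DB id.

```python
from llama_stack_client.lib.agents.agent import Agent

# Sketch: toolgroup name and argument keys are assumptions based on common examples.
rag_agent = Agent(
    client,
    model="meta-llama/Llama-3.2-3B-Instruct",
    instructions="Answer questions using the indexed documents.",
    tools=[
        {
            "name": "builtin::rag/knowledge_search",
            "args": {"vector_db_ids": ["my-documents"]},
        }
    ],
)
```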
|
||||
|
||||
|
||||
```{note}
|
||||
:::note[Default Configuration]
|
||||
By default, the Llama Stack `run.yaml` defines tool groups for web search, WolframAlpha, and RAG, provided by the tavily-search, wolfram-alpha, and rag providers respectively.
|
||||
```
|
||||
:::
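
A hedged excerpt of what those default entries typically look like in a distribution's `run.yaml` is shown below; the provider ids follow the note above, and the `rag-runtime` id is an assumption that may differ in your distribution.

```yaml
# Illustrative excerpt; match the provider ids to your own distribution.
tool_groups:
  - toolgroup_id: builtin::websearch
    provider_id: tavily-search
  - toolgroup_id: builtin::wolfram_alpha
    provider_id: wolfram-alpha
  - toolgroup_id: builtin::rag
    provider_id: rag-runtime
```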
|
||||
|
||||
## Model Context Protocol (MCP)
|
||||
|
||||
[MCP](https://github.com/modelcontextprotocol) is an upcoming, popular standard for tool discovery and execution. It is a protocol that allows tools to be dynamically discovered
|
||||
from an MCP endpoint and can be used to extend the agent's capabilities.
|
||||
|
||||
[MCP](https://github.com/modelcontextprotocol) is an upcoming, popular standard for tool discovery and execution. It is a protocol that allows tools to be dynamically discovered from an MCP endpoint and can be used to extend the agent's capabilities.
|
||||
|
||||
### Using Remote MCP Servers
|
||||
|
||||
|
|
@ -98,8 +104,7 @@ client.toolgroups.register(
|
|||
)
|
||||
```
|
||||
|
||||
Note that most of the more useful MCP servers need you to authenticate with them. Many of them use OAuth2.0 for authentication. You can provide authorization headers to send to the MCP server
|
||||
using the "Provider Data" abstraction provided by Llama Stack. When making an agent call,
|
||||
Note that most of the more useful MCP servers need you to authenticate with them. Many of them use OAuth2.0 for authentication. You can provide authorization headers to send to the MCP server using the "Provider Data" abstraction provided by Llama Stack. When making an agent call,
|
||||
|
||||
```python
|
||||
agent = Agent(
|
||||
|
|
@ -120,20 +125,26 @@ agent = Agent(
|
|||
agent.create_turn(...)
|
||||
```
|
||||
|
||||
### Running your own MCP server
|
||||
### Running Your Own MCP Server
|
||||
|
||||
Here's an example of how to run a simple MCP server that exposes a File System as a set of tools to the Llama Stack agent.
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="setup" label="Server Setup">
|
||||
|
||||
```shell
|
||||
# start your MCP server
|
||||
# Start your MCP server
|
||||
mkdir /tmp/content
|
||||
touch /tmp/content/foo
|
||||
touch /tmp/content/bar
|
||||
npx -y supergateway --port 8000 --stdio 'npx -y @modelcontextprotocol/server-filesystem /tmp/content'
|
||||
```
|
||||
|
||||
Then register the MCP server as a tool group,
|
||||
</TabItem>
|
||||
<TabItem value="register" label="Registration">
|
||||
|
||||
```python
|
||||
# Register the MCP server as a tool group
|
||||
client.toolgroups.register(
|
||||
toolgroup_id="mcp::filesystem",
|
||||
provider_id="model-context-protocol",
|
||||
|
|
@ -141,12 +152,12 @@ client.toolgroups.register(
|
|||
)
|
||||
```
|
||||
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Adding Custom (Client-side) Tools
|
||||
|
||||
When you want to use tools other than the built-in tools, you just need to implement a python function with a docstring. The content of the docstring will be used to describe the tool and the parameters and passed
|
||||
along to the generative model.
|
||||
When you want to use tools other than the built-in tools, you just need to implement a Python function with a docstring. The content of the docstring is used to describe the tool and its parameters, and is passed along to the generative model.
|
||||
|
||||
```python
|
||||
# Example tool definition
|
||||
|
|
@ -158,16 +169,19 @@ def my_tool(input: int) -> int:
|
|||
"""
|
||||
return input * 2
|
||||
```
|
||||
> **NOTE:** We employ python docstrings to describe the tool and the parameters. It is important to document the tool and the parameters so that the model can use the tool correctly. It is recommended to experiment with different docstrings to see how they affect the model's behavior.
|
||||
|
||||
:::tip[Documentation Best Practices]
|
||||
We employ python docstrings to describe the tool and the parameters. It is important to document the tool and the parameters so that the model can use the tool correctly. It is recommended to experiment with different docstrings to see how they affect the model's behavior.
|
||||
:::
|
||||
|
||||
Once defined, simply pass the tool to the agent config. `Agent` will take care of the rest (calling the model with the tool definition, executing the tool, and returning the result to the model for the next iteration).
|
||||
|
||||
```python
|
||||
# Example agent config with client provided tools
|
||||
agent = Agent(client, ..., tools=[my_tool])
|
||||
```
|
||||
|
||||
Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_client_tools.py) for an example of how to use client provided tools.
|
||||
|
||||
Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/) for an example of how to use client provided tools.
|
||||
|
||||
## Tool Invocation
|
||||
|
||||
|
|
@ -175,7 +189,8 @@ Tools can be invoked using the `invoke_tool` method:
|
|||
|
||||
```python
|
||||
result = client.tool_runtime.invoke_tool(
|
||||
tool_name="web_search", kwargs={"query": "What is the capital of France?"}
|
||||
tool_name="web_search",
|
||||
kwargs={"query": "What is the capital of France?"}
|
||||
)
|
||||
```
|
||||
|
||||
|
|
@ -196,16 +211,22 @@ all_tools = client.tools.list_tools()
|
|||
group_tools = client.tools.list_tools(toolgroup_id="search_tools")
|
||||
```
|
||||
|
||||
## Simple Example 2: Using an Agent with the Web Search Tool
|
||||
## Complete Examples
|
||||
|
||||
### Web Search Agent
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="setup" label="Setup & Configuration">
|
||||
|
||||
1. Start by registering a Tavily API key at [Tavily](https://tavily.com/).
|
||||
2. [Optional] Provide the API key directly to the Llama Stack server
|
||||
2. [Optional] Set the API key in your environment before starting the Llama Stack server
|
||||
```bash
|
||||
export TAVILY_SEARCH_API_KEY="your key"
|
||||
```
|
||||
```bash
|
||||
--env TAVILY_SEARCH_API_KEY=${TAVILY_SEARCH_API_KEY}
|
||||
```
|
||||
3. Run the following script.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="implementation" label="Implementation">
|
||||
|
||||
```python
|
||||
from llama_stack_client.lib.agents.agent import Agent
|
||||
from llama_stack_client.types.agent_create_params import AgentConfig
|
||||
|
|
@ -240,11 +261,18 @@ for log in EventLogger().log(response):
|
|||
log.print()
|
||||
```
|
||||
|
||||
## Simple Example3: Using an Agent with the WolframAlpha Tool
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### WolframAlpha Math Agent
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="setup" label="Setup & Configuration">
|
||||
|
||||
1. Start by registering for a WolframAlpha API key at [WolframAlpha Developer Portal](https://developer.wolframalpha.com/access).
|
||||
2. Provide the API key either when starting the Llama Stack server:
|
||||
2. Provide the API key either by setting it in your environment before starting the Llama Stack server:
|
||||
```bash
|
||||
--env WOLFRAM_ALPHA_API_KEY=${WOLFRAM_ALPHA_API_KEY}
|
||||
export WOLFRAM_ALPHA_API_KEY="your key"
|
||||
```
|
||||
or from the client side:
|
||||
```python
|
||||
|
|
@ -253,12 +281,57 @@ for log in EventLogger().log(response):
|
|||
provider_data={"wolfram_alpha_api_key": wolfram_api_key},
|
||||
)
|
||||
```
|
||||
3. Configure the tools in the Agent by setting `tools=["builtin::wolfram_alpha"]`.
|
||||
4. Example user query:
|
||||
```python
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "Solve x^2 + 2x + 1 = 0 using WolframAlpha"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="implementation" label="Implementation">
|
||||
|
||||
```python
|
||||
# Configure the tools in the Agent by setting tools=["builtin::wolfram_alpha"]
|
||||
agent = Agent(
|
||||
client,
|
||||
model="meta-llama/Llama-3.2-3B-Instruct",
|
||||
instructions="You are a mathematical assistant that can solve complex equations.",
|
||||
tools=["builtin::wolfram_alpha"],
|
||||
)
|
||||
|
||||
session_id = agent.create_session("math-session")
|
||||
|
||||
# Example user query
|
||||
response = agent.create_turn(
|
||||
messages=[{"role": "user", "content": "Solve x^2 + 2x + 1 = 0 using WolframAlpha"}],
|
||||
session_id=session_id,
|
||||
)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 🛠️ **Tool Selection**
|
||||
- Use **server-side tools** for production applications requiring reliability and security
|
||||
- Use **client-side tools** for development, prototyping, or specialized integrations
|
||||
- Combine multiple tool types for comprehensive functionality
|
||||
|
||||
### 📝 **Documentation**
|
||||
- Write clear, detailed docstrings for custom tools
|
||||
- Include parameter descriptions and expected return types
|
||||
- Test tool descriptions with the model to ensure proper usage
|
||||
|
||||
### 🔐 **Security**
|
||||
- Store API keys securely using environment variables or secure configuration
|
||||
- Use the `X-LlamaStack-Provider-Data` header for dynamic authentication
|
||||
- Validate tool inputs and outputs for security
|
||||
|
||||
### 🔄 **Error Handling**
|
||||
- Implement proper error handling in custom tools
|
||||
- Use structured error responses with meaningful messages
|
||||
- Monitor tool performance and reliability
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Agents](./agent)** - Building intelligent agents with tools
|
||||
- **[RAG (Retrieval Augmented Generation)](./rag)** - Using knowledge retrieval tools
|
||||
- **[Agent Execution Loop](./agent_execution_loop)** - Understanding tool execution flow
|
||||
- **[Building AI Applications Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb)** - Comprehensive examples
|
||||
- **[Llama Stack Apps Examples](https://github.com/meta-llama/llama-stack-apps)** - Real-world tool implementations
|
||||
|
|
@ -1,3 +1,10 @@
|
|||
---
|
||||
title: API Stability Leveling
|
||||
description: Understanding API stability levels and versioning in Llama Stack
|
||||
sidebar_label: API Stability
|
||||
sidebar_position: 4
|
||||
---
|
||||
|
||||
# Llama Stack API Stability Leveling
|
||||
|
||||
In order to provide a stable experience in Llama Stack, the various APIs need different stability levels indicating the level of support, backward compatibility, and overall production readiness.
|
||||
|
|
@ -55,6 +62,10 @@ The new `/v2` API must be introduced alongside the existing `/v1` API and run in
|
|||
|
||||
When a `/v2` API is introduced, a clear and generous deprecation policy for the `/v1` API must be published simultaneously. This policy must outline the timeline for the eventual removal of the `/v1` API, giving users ample time to migrate.
|
||||
|
||||
### Deprecated APIs
|
||||
|
||||
Deprecated APIs are those that are no longer actively maintained or supported. Deprecated APIs are marked with the flag `deprecated = True` in the OpenAPI spec. These APIs will be removed in a future release.
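
In the generated OpenAPI document this shows up as the standard `deprecated` marker on the operation, for example (the path below is purely illustrative):

```yaml
# Illustrative OpenAPI fragment; the path is a placeholder.
paths:
  /v1/some-deprecated-endpoint:
    get:
      deprecated: true
      summary: Old endpoint retained for backward compatibility
```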
|
||||
|
||||
### API Stability vs. Provider Stability
|
||||
|
||||
The leveling introduced in this document relates to the stability of the API and not specifically the providers within the API.
|
||||
|
|
@ -91,4 +102,4 @@ The testing of each stable API is already outlined in [issue #3237](https://gith
|
|||
|
||||
### New APIs going forward
|
||||
|
||||
Any subsequently introduced APIs should be introduced as `/v1alpha`
|
||||
Any subsequently introduced APIs should be introduced as `/v1alpha`
|
||||
|
|
@ -1,4 +1,11 @@
|
|||
## API Providers
|
||||
---
|
||||
title: API Providers
|
||||
description: Understanding remote vs inline provider implementations
|
||||
sidebar_label: API Providers
|
||||
sidebar_position: 2
|
||||
---
|
||||
|
||||
# API Providers
|
||||
|
||||
The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples for these include:
|
||||
- LLM inference providers (e.g., Fireworks, Together, AWS Bedrock, Groq, Cerebras, SambaNova, vLLM, etc.),
|
||||
|
|
@ -1,3 +1,9 @@
|
|||
---
|
||||
title: External APIs
|
||||
description: Understanding external APIs in Llama Stack
|
||||
sidebar_label: External APIs
|
||||
sidebar_position: 3
|
||||
---
|
||||
# External APIs
|
||||
|
||||
Llama Stack supports external APIs that live outside of the main codebase. This allows you to:
|
||||
|
|
@ -146,7 +152,6 @@ __all__ = ["WeatherAPI", "available_providers"]
|
|||
from typing import Protocol
|
||||
|
||||
from llama_stack.providers.datatypes import (
|
||||
AdapterSpec,
|
||||
Api,
|
||||
ProviderSpec,
|
||||
RemoteProviderSpec,
|
||||
|
|
@ -160,12 +165,10 @@ def available_providers() -> list[ProviderSpec]:
|
|||
api=Api.weather,
|
||||
provider_type="remote::kaze",
|
||||
config_class="llama_stack_provider_kaze.KazeProviderConfig",
|
||||
adapter=AdapterSpec(
|
||||
adapter_type="kaze",
|
||||
module="llama_stack_provider_kaze",
|
||||
pip_packages=["llama_stack_provider_kaze"],
|
||||
config_class="llama_stack_provider_kaze.KazeProviderConfig",
|
||||
),
|
||||
adapter_type="kaze",
|
||||
module="llama_stack_provider_kaze",
|
||||
pip_packages=["llama_stack_provider_kaze"],
|
||||
config_class="llama_stack_provider_kaze.KazeProviderConfig",
|
||||
),
|
||||
]
|
||||
|
||||
|
|
@ -319,11 +322,10 @@ class WeatherKazeAdapter(WeatherProvider):
|
|||
|
||||
```yaml
|
||||
# ~/.llama/providers.d/remote/weather/kaze.yaml
|
||||
adapter:
|
||||
adapter_type: kaze
|
||||
pip_packages: ["llama_stack_provider_kaze"]
|
||||
config_class: llama_stack_provider_kaze.config.KazeProviderConfig
|
||||
module: llama_stack_provider_kaze
|
||||
adapter_type: kaze
|
||||
pip_packages: ["llama_stack_provider_kaze"]
|
||||
config_class: llama_stack_provider_kaze.config.KazeProviderConfig
|
||||
module: llama_stack_provider_kaze
|
||||
optional_api_dependencies: []
|
||||
```
|
||||
|
||||
|
|
@ -355,7 +357,7 @@ server:
|
|||
8. Run the server:
|
||||
|
||||
```bash
|
||||
python -m llama_stack.core.server.server --yaml-config ~/.llama/run-byoa.yaml
|
||||
llama stack run ~/.llama/run-byoa.yaml
|
||||
```
|
||||
|
||||
9. Test the API:
|
||||
|
|
@ -1,4 +1,11 @@
|
|||
## APIs
|
||||
---
|
||||
title: APIs
|
||||
description: Available REST APIs and planned capabilities in Llama Stack
|
||||
sidebar_label: APIs
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
# APIs
|
||||
|
||||
A Llama Stack API is described as a collection of REST endpoints. We currently support the following APIs:
|
||||
|
||||
|
|
@ -9,7 +16,6 @@ A Llama Stack API is described as a collection of REST endpoints. We currently s
|
|||
- **Scoring**: evaluate outputs of the system
|
||||
- **Eval**: generate outputs (via Inference or Agents) and perform scoring
|
||||
- **VectorIO**: perform operations on vector stores, such as adding documents, searching, and deleting documents
|
||||
- **Telemetry**: collect telemetry data from the system
|
||||
- **Post Training**: fine-tune a model
|
||||
- **Tool Runtime**: interact with various tools and protocols
|
||||
- **Responses**: generate responses from an LLM using this OpenAI compatible API.
|
||||
|
|
@ -1,15 +1,19 @@
|
|||
## Llama Stack architecture
|
||||
---
|
||||
title: Llama Stack Architecture
|
||||
description: Understanding Llama Stack's service-oriented design and benefits
|
||||
sidebar_label: Architecture
|
||||
sidebar_position: 2
|
||||
---
|
||||
|
||||
# Llama Stack architecture
|
||||
|
||||
Llama Stack allows you to build different layers of distributions for your AI workloads using various SDKs and API providers.
|
||||
|
||||
```{image} ../../_static/llama-stack.png
|
||||
:alt: Llama Stack
|
||||
:width: 400px
|
||||
```
|
||||
<img src="/img/llama-stack.png" alt="Llama Stack" width="400" />
|
||||
|
||||
### Benefits of Llama stack
|
||||
## Benefits of Llama stack
|
||||
|
||||
#### Current challenges in custom AI applications
|
||||
### Current challenges in custom AI applications
|
||||
|
||||
Building production AI applications today requires solving multiple challenges:
|
||||
|
||||
|
|
@ -32,7 +36,7 @@ Building production AI applications today requires solving multiple challenges:
|
|||
- Different providers have different APIs and abstractions.
|
||||
- Changing providers requires significant code changes.
|
||||
|
||||
#### Our Solution: A Universal Stack
|
||||
### Our Solution: A Universal Stack
|
||||
|
||||
Llama Stack addresses these challenges through a service-oriented, API-first approach:
|
||||
|
||||
|
|
@ -59,7 +63,7 @@ Llama Stack addresses these challenges through a service-oriented, API-first app
|
|||
- Ecosystem offers tailored infrastructure, software, and services for deploying a variety of models.
|
||||
|
||||
|
||||
### Our Philosophy
|
||||
## Our Philosophy
|
||||
|
||||
- **Service-Oriented**: REST APIs enforce clean interfaces and enable seamless transitions across different environments.
|
||||
- **Composability**: Every component is independent but works together seamlessly
|
||||
|
|
@ -67,4 +71,4 @@ Llama Stack addresses these challenges through a service-oriented, API-first app
|
|||
- **Turnkey Solutions**: Easy to deploy built in solutions for popular deployment scenarios
|
||||
|
||||
|
||||
With Llama Stack, you can focus on building your application while we handle the infrastructure complexity, essential capabilities, and provider integrations.
|
||||
With Llama Stack, you can focus on building your application while we handle the infrastructure complexity, essential capabilities, and provider integrations.
|
||||
|
|
@ -1,4 +1,11 @@
|
|||
## Distributions
|
||||
---
|
||||
title: Distributions
|
||||
description: Pre-packaged provider configurations for different deployment scenarios
|
||||
sidebar_label: Distributions
|
||||
sidebar_position: 3
|
||||
---
|
||||
|
||||
# Distributions
|
||||
|
||||
While there is a lot of flexibility to mix-and-match providers, often users will work with a specific set of providers (hardware support, contractual obligations, etc.). We therefore need to provide a _convenient shorthand_ for such collections. We call this shorthand a **Llama Stack Distribution** or a **Distro**. One can think of a Distro as a specific, pre-packaged version of the Llama Stack. Here are some examples:
|
||||
|
||||
|
|
@ -6,4 +13,4 @@ While there is a lot of flexibility to mix-and-match providers, often users will
|
|||
|
||||
**Locally Hosted Distro**: You may want to run Llama Stack on your own hardware. Typically though, you still need to use Inference via an external service. You can use providers like HuggingFace TGI, Fireworks, Together, etc. for this purpose. Or you may have access to GPUs and can run a [vLLM](https://github.com/vllm-project/vllm) or [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) instance. If you "just" have a regular desktop machine, you can use [Ollama](https://ollama.com/) for inference. To provide convenient quick access to these options, we provide a number of such pre-configured locally-hosted Distros.
|
||||
|
||||
**On-device Distro**: To run Llama Stack directly on an edge device (mobile phone or a tablet), we provide Distros for [iOS](../distributions/ondevice_distro/ios_sdk.md) and [Android](../distributions/ondevice_distro/android_sdk.md)
|
||||
**On-device Distro**: To run Llama Stack directly on an edge device (mobile phone or a tablet), we provide Distros for [iOS](/docs/distributions/ondevice_distro/ios_sdk) and [Android](/docs/distributions/ondevice_distro/android_sdk)
|
||||
|
|
@ -1,16 +1,22 @@
|
|||
## Evaluation Concepts
|
||||
---
|
||||
title: Evaluation Concepts
|
||||
description: Running evaluations on Llama Stack
|
||||
sidebar_label: Evaluation Concepts
|
||||
sidebar_position: 5
|
||||
---
|
||||
|
||||
# Evaluation Concepts
|
||||
|
||||
The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.
|
||||
|
||||
We introduce a set of APIs in Llama Stack for supporting running evaluations of LLM applications.
|
||||
We introduce a set of APIs in Llama Stack to support running evaluations of LLM applications:
|
||||
- `/datasetio` + `/datasets` API
|
||||
- `/scoring` + `/scoring_functions` API
|
||||
- `/eval` + `/benchmarks` API
|
||||
|
||||
This guide goes over the sets of APIs and developer experience flow of using Llama Stack to run evaluations for different use cases. Checkout our Colab notebook on working examples with evaluations [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing).
|
||||
|
||||
|
||||
The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our [Core Concepts](../concepts/index.md) guide for better high-level understanding.
|
||||
The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our [Core Concepts](./index.mdx) guide for better high-level understanding.
|
||||
|
||||
- **DatasetIO**: defines interface with datasets and data loaders.
|
||||
- Associated with `Dataset` resource.
|
||||
|
|
@ -19,10 +25,9 @@ The Evaluation APIs are associated with a set of Resources. Please visit the Res
|
|||
- **Eval**: generate outputs (via Inference or Agents) and perform scoring.
|
||||
- Associated with `Benchmark` resource.
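
As a quick illustration of how these resources fit together, registering a benchmark against an existing dataset might look like the following sketch; the benchmark id, dataset id, and scoring function are placeholders and the exact signature may vary by release.

```python
# Sketch: ids and the scoring function are placeholders.
client.benchmarks.register(
    benchmark_id="my-qa-benchmark",
    dataset_id="my-qa-dataset",
    scoring_functions=["basic::subset_of"],
)
```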
|
||||
|
||||
## Open-benchmark Eval
|
||||
|
||||
### Open-benchmark Eval
|
||||
|
||||
#### List of open-benchmarks Llama Stack support
|
||||
### List of open-benchmarks Llama Stack support
|
||||
|
||||
Llama Stack pre-registers several popular open benchmarks to make it easy to evaluate model performance via the CLI.
|
||||
|
||||
|
|
@ -32,19 +37,17 @@ The list of open-benchmarks we currently support:
|
|||
- [SimpleQA](https://openai.com/index/introducing-simpleqa/): Benchmark designed to assess models' ability to answer short, fact-seeking questions.
|
||||
- [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.
|
||||
|
||||
You can follow this [contributing guide](../references/evals_reference/#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack
|
||||
|
||||
You can follow this [contributing guide](../references/evals_reference/index.md#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack
|
||||
|
||||
#### Run evaluation on open-benchmarks via CLI
|
||||
### Run evaluation on open-benchmarks via CLI
|
||||
|
||||
We provide built-in functionality to run the supported open benchmarks using the llama-stack-client CLI.
|
||||
|
||||
#### Spin up Llama Stack server
|
||||
|
||||
Spin up the Llama Stack server with the 'open-benchmark' template:
|
||||
```
|
||||
```bash
|
||||
llama stack run llama_stack/distributions/open-benchmark/run.yaml
|
||||
|
||||
```
|
||||
|
||||
#### Run eval CLI
|
||||
|
|
@ -52,26 +55,24 @@ There are 3 necessary inputs to run a benchmark eval
|
|||
- `list of benchmark_ids`: The list of benchmark ids to run evaluation on
|
||||
- `model-id`: The model id to evaluate on
|
||||
- `output_dir`: Path to store the evaluate results
|
||||
```
|
||||
|
||||
```bash
|
||||
llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
|
||||
--model_id <model id to evaluate on> \
|
||||
--output_dir <directory to store the evaluate results> \
|
||||
--output_dir <directory to store the evaluate results>
|
||||
```
|
||||
|
||||
You can run
|
||||
```
|
||||
```bash
|
||||
llama-stack-client eval run-benchmark help
|
||||
```
|
||||
to see descriptions of all the flags that `eval run-benchmark` supports.
|
||||
|
||||
|
||||
The output log contains the path to the file with your evaluation results. Open that file to view your aggregate evaluation results.
|
||||
|
||||
|
||||
|
||||
#### What's Next?
|
||||
## What's Next?
|
||||
|
||||
- Check out our Colab notebook on working examples with running benchmark evaluations [here](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb#scrollTo=mxLCsP4MvFqP).
|
||||
- Check out our [Building Applications - Evaluation](../building_applications/evals.md) guide for more details on how to use the Evaluation APIs to evaluate your applications.
|
||||
- Check out our [Evaluation Reference](../references/evals_reference/index.md) for more details on the APIs.
|
||||
- Check out our [Building Applications - Evaluation](../building_applications/evals.mdx) guide for more details on how to use the Evaluation APIs to evaluate your applications.
|
||||
- Check out our [Evaluation Reference](../references/evals_reference/) for more details on the APIs.
|
||||
31
docs/docs/concepts/index.mdx
Normal file
|
|
@ -0,0 +1,31 @@
|
|||
---
|
||||
title: Core Concepts
|
||||
description: Understanding Llama Stack's service-oriented philosophy and key concepts
|
||||
sidebar_label: Overview
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
Given Llama Stack's service-oriented philosophy, a few concepts and workflows arise which may not feel completely natural in the LLM landscape, especially if you are coming from a background in other frameworks.
|
||||
|
||||
## Documentation Structure
|
||||
|
||||
This section covers the fundamental concepts of Llama Stack:
|
||||
|
||||
- **[Architecture](architecture.mdx)** - Learn about Llama Stack's architectural design and principles
|
||||
- **[APIs](/docs/concepts/apis/)** - Understanding the core APIs and their stability levels
|
||||
- [API Overview](apis/index.mdx) - Core APIs available in Llama Stack
|
||||
- [API Providers](apis/api_providers.mdx) - How providers implement APIs
|
||||
- [External APIs](apis/external.mdx) - External APIs available in Llama Stack
|
||||
- [API Stability Leveling](apis/api_leveling.mdx) - API stability and versioning
|
||||
- **[Distributions](distributions.mdx)** - Pre-configured deployment packages
|
||||
- **[Resources](resources.mdx)** - Understanding Llama Stack resources and their lifecycle
|
||||
|
||||
## Getting Started
|
||||
|
||||
If you're new to Llama Stack, we recommend starting with:
|
||||
|
||||
1. **[Architecture](architecture.mdx)** - Understand the overall system design
|
||||
2. **[APIs](apis/index.mdx)** - Learn about the available APIs and their purpose
|
||||
3. **[Distributions](distributions.mdx)** - Choose a pre-configured setup for your use case
|
||||
|
||||
Each concept builds upon the previous ones to give you a comprehensive understanding of how Llama Stack works and how to use it effectively.
|
||||
|
|
@ -1,4 +1,11 @@
|
|||
## Resources
|
||||
---
|
||||
title: Resources
|
||||
description: Resource federation and registration in Llama Stack
|
||||
sidebar_label: Resources
|
||||
sidebar_position: 4
|
||||
---
|
||||
|
||||
# Resources
|
||||
|
||||
Some of these APIs are associated with a set of **Resources**. Here is the mapping of APIs to resources:
|
||||
|
||||
|
|
@ -12,8 +19,8 @@ Some of these APIs are associated with a set of **Resources**. Here is the mappi
|
|||
|
||||
Furthermore, we allow these resources to be **federated** across multiple providers. For example, you may have some Llama models served by Fireworks while others are served by AWS Bedrock. Regardless, they will all work seamlessly with the same uniform Inference API provided by Llama Stack.
|
||||
|
||||
```{admonition} Registering Resources
|
||||
:class: tip
|
||||
:::tip Registering Resources
|
||||
|
||||
Given this architecture, it is necessary for the Stack to know which provider to use for a given resource. This means you need to explicitly _register_ resources (including models) before you can use them with the associated APIs.
|
||||
```
|
||||
|
||||
:::
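
For example, registering a model served by a particular provider might look like the following sketch; the model and provider ids are placeholders and the exact signature may differ by release.

```python
# Sketch: ids are placeholders -- pick the provider that actually serves the model.
client.models.register(
    model_id="meta-llama/Llama-3.3-70B-Instruct",
    provider_id="fireworks",
)
```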
|
||||
13
docs/docs/contributing/index.mdx
Normal file
|
|
@ -0,0 +1,13 @@
|
|||
---
|
||||
title: Contributing
|
||||
description: Contributing to Llama Stack
|
||||
sidebar_label: Contributing to Llama Stack
|
||||
sidebar_position: 3
|
||||
hide_title: true
|
||||
---
|
||||
|
||||
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
|
||||
import ReactMarkdown from 'react-markdown';
|
||||
|
||||
|
||||
<ReactMarkdown>{Contributing}</ReactMarkdown>
|
||||
|
|
@ -1,12 +1,20 @@
|
|||
# Adding a New API Provider
|
||||
---
|
||||
title: Adding a New API Provider
|
||||
description: Guide for adding new API providers to Llama Stack
|
||||
sidebar_label: New API Provider
|
||||
sidebar_position: 2
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
This guide will walk you through the process of adding a new API provider to Llama Stack.
|
||||
|
||||
|
||||
- Begin by reviewing the [core concepts](../concepts/index.md) of Llama Stack and choose the API your provider belongs to (Inference, Safety, VectorIO, etc.)
|
||||
- Determine the provider type ({repopath}`Remote::llama_stack/providers/remote` or {repopath}`Inline::llama_stack/providers/inline`). Remote providers make requests to external services, while inline providers execute implementation locally.
|
||||
- Add your provider to the appropriate {repopath}`Registry::llama_stack/providers/registry/`. Specify pip dependencies necessary.
|
||||
- Update any distribution {repopath}`Templates::llama_stack/distributions/` `build.yaml` and `run.yaml` files if they should include your provider by default. Run {repopath}`./scripts/distro_codegen.py` if necessary. Note that `distro_codegen.py` will fail if the new provider causes any distribution template to attempt to import provider-specific dependencies. This usually means the distribution's `get_distribution_template()` code path should only import any necessary Config or model alias definitions from each provider and not the provider's actual implementation.
|
||||
- Begin by reviewing the [core concepts](../concepts/) of Llama Stack and choose the API your provider belongs to (Inference, Safety, VectorIO, etc.)
|
||||
- Determine the provider type ([Remote](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/remote) or [Inline](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/inline)). Remote providers make requests to external services, while inline providers execute implementation locally.
|
||||
- Add your provider to the appropriate [Registry](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/registry/). Specify pip dependencies necessary.
|
||||
- Update any distribution [Templates](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/distributions/) `build.yaml` and `run.yaml` files if they should include your provider by default. Run [./scripts/distro_codegen.py](https://github.com/meta-llama/llama-stack/blob/main/scripts/distro_codegen.py) if necessary. Note that `distro_codegen.py` will fail if the new provider causes any distribution template to attempt to import provider-specific dependencies. This usually means the distribution's `get_distribution_template()` code path should only import any necessary Config or model alias definitions from each provider and not the provider's actual implementation.
|
||||
|
||||
|
||||
Here are some example PRs to help you get started:
|
||||
|
|
@ -59,23 +67,23 @@ def get_base_url(self) -> str:
|
|||
|
||||
## Testing the Provider
|
||||
|
||||
Before running tests, you must have required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, you should install dependencies via `llama stack build --distro together`.
|
||||
Before running tests, you must have required dependencies installed. This depends on the providers or distributions you are testing. For example, if you are testing the `together` distribution, install its dependencies with `llama stack list-deps together | xargs -L1 uv pip install`.
|
||||
|
||||
### 1. Integration Testing
|
||||
|
||||
Integration tests are located in {repopath}`tests/integration`. These tests use the python client-SDK APIs (from the `llama_stack_client` package) to test functionality. Since these tests use client APIs, they can be run either by pointing to an instance of the Llama Stack server or "inline" by using `LlamaStackAsLibraryClient`.
|
||||
Integration tests are located in [tests/integration](https://github.com/meta-llama/llama-stack/tree/main/tests/integration). These tests use the python client-SDK APIs (from the `llama_stack_client` package) to test functionality. Since these tests use client APIs, they can be run either by pointing to an instance of the Llama Stack server or "inline" by using `LlamaStackAsLibraryClient`.
|
||||
|
||||
Consult {repopath}`tests/integration/README.md` for more details on how to run the tests.
|
||||
Consult [tests/integration/README.md](https://github.com/meta-llama/llama-stack/blob/main/tests/integration/README.md) for more details on how to run the tests.
|
||||
|
||||
Note that each provider's `sample_run_config()` method (in the configuration class for that provider)
|
||||
typically references some environment variables for specifying API keys and the like. You can set these in the environment or pass these via the `--env` flag to the test command.
|
||||
typically references some environment variables for specifying API keys and the like. You can set these in the environment before running the test command.
|
||||
|
||||
|
||||
### 2. Unit Testing
|
||||
|
||||
Unit tests are located in {repopath}`tests/unit`. Provider-specific unit tests are located in {repopath}`tests/unit/providers`. These tests are all run automatically as part of the CI process.
|
||||
Unit tests are located in [tests/unit](https://github.com/meta-llama/llama-stack/tree/main/tests/unit). Provider-specific unit tests are located in [tests/unit/providers](https://github.com/meta-llama/llama-stack/tree/main/tests/unit/providers). These tests are all run automatically as part of the CI process.
|
||||
|
||||
Consult {repopath}`tests/unit/README.md` for more details on how to run the tests manually.
|
||||
Consult [tests/unit/README.md](https://github.com/meta-llama/llama-stack/blob/main/tests/unit/README.md) for more details on how to run the tests manually.
|
||||
|
||||
### 3. Additional end-to-end testing
|
||||
|
||||
|
|
@ -1,4 +1,12 @@
|
|||
# Adding a New Vector Database
|
||||
---
|
||||
title: Adding a New Vector Database
|
||||
description: Guide for adding new vector database providers to Llama Stack
|
||||
sidebar_label: New Vector Database
|
||||
sidebar_position: 3
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
This guide will walk you through the process of adding a new vector database to Llama Stack.
|
||||
|
||||
|
|
@ -31,7 +39,7 @@ filtering, sorting, and aggregating vectors.
|
|||
- `YourVectorIOAdapter.query_chunks()`
|
||||
- `YourVectorIOAdapter.delete_chunks()`
|
||||
3. **Add to Registry**: Register your provider in the appropriate registry file.
|
||||
- Update {repopath}`llama_stack/providers/registry/vector_io.py` to include your new provider.
|
||||
- Update [llama_stack/providers/registry/vector_io.py](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/registry/vector_io.py) to include your new provider.
|
||||
```python
|
||||
from llama_stack.providers.registry.specs import InlineProviderSpec
|
||||
from llama_stack.providers.registry.api import Api
|
||||
|
|
@ -57,7 +65,7 @@ InlineProviderSpec(
|
|||
5. Add your provider to the `vector_io_providers` fixture dictionary.
|
||||
- Please follow the naming convention of `your_vectorprovider_index` and `your_vectorprovider_adapter` as the tests require this to execute properly.
|
||||
- Integration Tests
|
||||
- Integration tests are located in {repopath}`tests/integration`. These tests use the python client-SDK APIs (from the `llama_stack_client` package) to test functionality.
|
||||
- Integration tests are located in [tests/integration](https://github.com/meta-llama/llama-stack/tree/main/tests/integration). These tests use the python client-SDK APIs (from the `llama_stack_client` package) to test functionality.
|
||||
- The two set of integration tests are:
|
||||
- `tests/integration/vector_io/test_vector_io.py`: This file tests registration, insertion, and retrieval.
|
||||
- `tests/integration/vector_io/test_openai_vector_stores.py`: These tests are for OpenAI-compatible vector stores and test the OpenAI API compatibility.
|
||||
|
|
@ -71,5 +79,5 @@ InlineProviderSpec(
|
|||
- If you are adding tests for the `remote` provider you will have to update the `test` group, which is used in the GitHub CI for integration tests.
|
||||
- `uv add new_pip_package --group test`
|
||||
5. **Update Documentation**: Please update the documentation for end users
|
||||
- Generate the provider documentation by running {repopath}`./scripts/provider_codegen.py`.
|
||||
- Update the autogenerated content in the registry/vector_io.py file with information about your provider. Please see other providers for examples.
|
||||
- Generate the provider documentation by running [./scripts/provider_codegen.py](https://github.com/meta-llama/llama-stack/blob/main/scripts/provider_codegen.py).
|
||||
- Update the autogenerated content in the registry/vector_io.py file with information about your provider. Please see other providers for examples.
|
||||
|
|
@ -1,3 +1,13 @@
|
|||
---
|
||||
title: Record-Replay Testing System
|
||||
description: Understanding how Llama Stack captures and replays API interactions for testing
|
||||
sidebar_label: Record-Replay System
|
||||
sidebar_position: 4
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Record-Replay System
|
||||
|
||||
Understanding how Llama Stack captures and replays API interactions for testing.
|
||||
|
|
@ -58,7 +68,9 @@ recordings/
|
|||
Direct API calls with no recording or replay:
|
||||
|
||||
```python
|
||||
with inference_recording(mode=InferenceMode.LIVE):
|
||||
from llama_stack.testing.api_recorder import api_recording, APIRecordingMode
|
||||
|
||||
with api_recording(mode=APIRecordingMode.LIVE):
|
||||
response = await client.chat.completions.create(...)
|
||||
```
|
||||
|
||||
|
|
@ -69,7 +81,7 @@ Use for initial development and debugging against real APIs.
|
|||
Captures API interactions while passing through real responses:
|
||||
|
||||
```python
|
||||
with inference_recording(mode=InferenceMode.RECORD, storage_dir="./recordings"):
|
||||
with api_recording(mode=APIRecordingMode.RECORD, storage_dir="./recordings"):
|
||||
response = await client.chat.completions.create(...)
|
||||
# Real API call made, response captured AND returned
|
||||
```
|
||||
|
|
@ -86,7 +98,7 @@ The recording process:
|
|||
Returns stored responses instead of making API calls:
|
||||
|
||||
```python
|
||||
with inference_recording(mode=InferenceMode.REPLAY, storage_dir="./recordings"):
|
||||
with api_recording(mode=APIRecordingMode.REPLAY, storage_dir="./recordings"):
|
||||
response = await client.chat.completions.create(...)
|
||||
# No API call made, cached response returned instantly
|
||||
```
|
||||
|
|
@ -228,4 +240,4 @@ Loose hashing (normalizing whitespace, rounding floats) seems convenient but hid
|
|||
- **SQLite** - Fast indexed lookups without loading response bodies
|
||||
- **Hybrid** - Best of both worlds for different use cases
|
||||
|
||||
This system provides reliable, fast testing against real AI APIs while maintaining the ability to debug issues when they arise.
|
||||
This system provides reliable, fast testing against real AI APIs while maintaining the ability to debug issues when they arise.
|
||||
30
docs/docs/deploying/aws_eks_deployment.mdx
Normal file
|
|
@ -0,0 +1,30 @@
|
|||
---
|
||||
title: AWS EKS Deployment Guide
|
||||
description: Deploy Llama Stack on AWS EKS
|
||||
sidebar_label: AWS EKS Deployment
|
||||
sidebar_position: 3
|
||||
---
|
||||
|
||||
## AWS EKS Deployment
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Set up an [EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
|
||||
- Create a [GitHub OAuth app](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/creating-an-oauth-app)
|
||||
- Set authorization callback URL to `http://<your-llama-stack-ui-url>/api/auth/callback/`
|
||||
|
||||
### Automated Deployment
|
||||
|
||||
```bash
|
||||
export HF_TOKEN=<your-huggingface-token>
|
||||
export GITHUB_CLIENT_ID=<your-github-client-id>
|
||||
export GITHUB_CLIENT_SECRET=<your-github-client-secret>
|
||||
export LLAMA_STACK_UI_URL=<your-llama-stack-ui-url>
|
||||
|
||||
cd docs/source/distributions/eks
|
||||
./apply.sh
|
||||
```
|
||||
|
||||
This script will:
|
||||
- Set up default storage class for AWS EKS
|
||||
- Deploy Llama Stack server in Kubernetes pods and services
|
||||
14
docs/docs/deploying/index.mdx
Normal file
|
|
@ -0,0 +1,14 @@
|
|||
---
|
||||
title: Deploying Llama Stack
|
||||
description: Production deployment guides for Llama Stack in various environments
|
||||
sidebar_label: Overview
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Deploying Llama Stack
|
||||
|
||||
[**→ Kubernetes Deployment Guide**](./kubernetes_deployment.mdx)
|
||||
[**→ AWS EKS Deployment Guide**](./aws_eks_deployment.mdx)
|
||||
224
docs/docs/deploying/kubernetes_deployment.mdx
Normal file
|
|
@ -0,0 +1,224 @@
|
|||
---
|
||||
title: Kubernetes Deployment Guide
|
||||
description: Deploy Llama Stack on Kubernetes clusters with vLLM inference service
|
||||
sidebar_label: Kubernetes
|
||||
sidebar_position: 2
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Kubernetes Deployment Guide
|
||||
|
||||
Deploy Llama Stack and vLLM servers in a Kubernetes cluster instead of running them locally. This guide covers both local development with Kind and production deployment on AWS EKS.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
### Local Kubernetes Setup
|
||||
|
||||
Create a local Kubernetes cluster via Kind:
|
||||
|
||||
```bash
|
||||
kind create cluster --image kindest/node:v1.32.0 --name llama-stack-test
|
||||
```
|
||||
|
||||
Set your Hugging Face token:
|
||||
|
||||
```bash
|
||||
export HF_TOKEN=$(echo -n "your-hf-token" | base64)
|
||||
```
|
||||
|
||||
## Quick Deployment
|
||||
|
||||
### Step 1: Create Storage and Secrets
|
||||
|
||||
```yaml
|
||||
cat <<EOF | kubectl apply -f -
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: vllm-models
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
volumeMode: Filesystem
|
||||
resources:
|
||||
requests:
|
||||
storage: 50Gi
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: hf-token-secret
|
||||
type: Opaque
|
||||
data:
|
||||
token: $HF_TOKEN
|
||||
EOF
|
||||
```
|
||||
|
||||
### Step 2: Deploy vLLM Server
|
||||
|
||||
```yaml
|
||||
cat <<EOF | kubectl apply -f -
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: vllm-server
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: vllm
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: vllm
|
||||
spec:
|
||||
containers:
|
||||
- name: vllm
|
||||
image: vllm/vllm-openai:latest
|
||||
command: ["/bin/sh", "-c"]
|
||||
args: ["vllm serve meta-llama/Llama-3.2-1B-Instruct"]
|
||||
env:
|
||||
- name: HUGGING_FACE_HUB_TOKEN
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: hf-token-secret
|
||||
key: token
|
||||
ports:
|
||||
- containerPort: 8000
|
||||
volumeMounts:
|
||||
- name: llama-storage
|
||||
mountPath: /root/.cache/huggingface
|
||||
volumes:
|
||||
- name: llama-storage
|
||||
persistentVolumeClaim:
|
||||
claimName: vllm-models
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: vllm-server
|
||||
spec:
|
||||
selector:
|
||||
app.kubernetes.io/name: vllm
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 8000
|
||||
targetPort: 8000
|
||||
type: ClusterIP
|
||||
EOF
|
||||
```
|
||||
|
||||
### Step 3: Configure Llama Stack
|
||||
|
||||
Update your run configuration:
|
||||
|
||||
```yaml
|
||||
providers:
|
||||
inference:
|
||||
- provider_id: vllm
|
||||
provider_type: remote::vllm
|
||||
config:
|
||||
url: http://vllm-server.default.svc.cluster.local:8000/v1
|
||||
max_tokens: 4096
|
||||
api_token: fake
|
||||
```
|
||||
|
||||
Build container image:
|
||||
|
||||
```bash
|
||||
tmp_dir=$(mktemp -d) && cat >$tmp_dir/Containerfile.llama-stack-run-k8s <<EOF
|
||||
FROM distribution-myenv:dev
|
||||
RUN apt-get update && apt-get install -y git
|
||||
RUN git clone https://github.com/meta-llama/llama-stack.git /app/llama-stack-source
|
||||
ADD ./vllm-llama-stack-run-k8s.yaml /app/config.yaml
|
||||
EOF
|
||||
podman build -f $tmp_dir/Containerfile.llama-stack-run-k8s -t llama-stack-run-k8s $tmp_dir
|
||||
```
|
||||
|
||||
### Step 4: Deploy Llama Stack Server
|
||||
|
||||
```yaml
|
||||
cat <<EOF | kubectl apply -f -
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: llama-pvc
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: llama-stack-server
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: llama-stack
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: llama-stack
|
||||
spec:
|
||||
containers:
|
||||
- name: llama-stack
|
||||
image: localhost/llama-stack-run-k8s:latest
|
||||
imagePullPolicy: IfNotPresent
|
||||
command: ["llama", "stack", "run", "/app/config.yaml"]
|
||||
ports:
|
||||
- containerPort: 5000
|
||||
volumeMounts:
|
||||
- name: llama-storage
|
||||
mountPath: /root/.llama
|
||||
volumes:
|
||||
- name: llama-storage
|
||||
persistentVolumeClaim:
|
||||
claimName: llama-pvc
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: llama-stack-service
|
||||
spec:
|
||||
selector:
|
||||
app.kubernetes.io/name: llama-stack
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 5000
|
||||
targetPort: 5000
|
||||
type: ClusterIP
|
||||
EOF
|
||||
```
|
||||
|
||||
### Step 5: Test Deployment
|
||||
|
||||
```bash
|
||||
# Port forward and test
|
||||
kubectl port-forward service/llama-stack-service 5000:5000
|
||||
llama-stack-client --endpoint http://localhost:5000 inference chat-completion --message "hello, what model are you?"
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**Check pod status:**
|
||||
```bash
|
||||
kubectl get pods -l app.kubernetes.io/name=vllm
|
||||
kubectl logs -l app.kubernetes.io/name=vllm
|
||||
```
|
||||
|
||||
**Test service connectivity:**
|
||||
```bash
|
||||
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- curl http://vllm-server:8000/v1/models
|
||||
```
|
||||
|
||||
## Related Resources
|
||||
|
||||
- **[Deployment Overview](/docs/deploying/)** - Overview of deployment options
|
||||
- **[Distributions](/docs/distributions)** - Understanding Llama Stack distributions
|
||||
- **[Configuration](/docs/distributions/configuration)** - Detailed configuration options
|
||||
148
docs/docs/distributions/building_distro.mdx
Normal file
|
|
@ -0,0 +1,148 @@
|
|||
---
|
||||
title: Building Custom Distributions
|
||||
description: Building a Llama Stack distribution from scratch
|
||||
sidebar_label: Build your own Distribution
|
||||
sidebar_position: 3
|
||||
---
|
||||
|
||||
This guide walks you through inspecting existing distributions, customising their configuration, and building runnable artefacts for your own deployment.
|
||||
|
||||
### Explore existing distributions
|
||||
|
||||
All first-party distributions live under `llama_stack/distributions/`. Each directory contains:
|
||||
|
||||
- `build.yaml` – the distribution specification (providers, additional dependencies, optional external provider directories).
|
||||
- `run.yaml` – sample run configuration (when provided).
|
||||
- Documentation fragments that power this site.
|
||||
|
||||
Browse that folder to understand available providers and copy a distribution to use as a starting point. When creating a new stack, duplicate an existing directory, rename it, and adjust the `build.yaml` file to match your requirements.
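
A hedged sketch of what a trimmed-down `build.yaml` might contain is shown below; the description and provider types are placeholders, so copy the full schema from an existing distribution rather than from this excerpt.

```yaml
# Illustrative only -- start from a real distribution's build.yaml and edit it.
distribution_spec:
  description: My custom stack
  providers:
    inference:
      - remote::ollama
    vector_io:
      - inline::faiss
```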
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="container" label="Building a container">
|
||||
|
||||
Use the Containerfile at `containers/Containerfile`, which installs `llama-stack`, resolves distribution dependencies via `llama stack list-deps`, and sets the entrypoint to `llama stack run`.
|
||||
|
||||
```bash
|
||||
docker build . \
|
||||
-f containers/Containerfile \
|
||||
--build-arg DISTRO_NAME=starter \
|
||||
--tag llama-stack:starter
|
||||
```
|
||||
|
||||
Handy build arguments:
|
||||
|
||||
- `DISTRO_NAME` – distribution directory name (defaults to `starter`).
|
||||
- `RUN_CONFIG_PATH` – absolute path inside the build context for a run config that should be baked into the image (e.g. `/workspace/run.yaml`).
|
||||
- `INSTALL_MODE=editable` – install the repository copied into `/workspace` with `uv pip install -e`. Pair it with `--build-arg LLAMA_STACK_DIR=/workspace`.
|
||||
- `LLAMA_STACK_CLIENT_DIR` – optional editable install of the Python client.
|
||||
- `PYPI_VERSION` / `TEST_PYPI_VERSION` – pin specific releases when not using editable installs.
|
||||
- `KEEP_WORKSPACE=1` – retain `/workspace` in the final image if you need to access additional files (such as sample configs or provider bundles).
|
||||
|
||||
Make sure any custom `build.yaml`, run configs, or provider directories you reference are included in the Docker build context so the Containerfile can read them.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="external" label="Building with external providers">
|
||||
|
||||
External providers live outside the main repository but can be bundled by pointing `external_providers_dir` to a directory that contains your provider packages.
|
||||
|
||||
1. Copy providers into the build context, for example `cp -R path/to/providers providers.d`.
|
||||
2. Update `build.yaml` with the directory and provider entries.
|
||||
3. Adjust run configs to use the in-container path (usually `/.llama/providers.d`). Pass `--build-arg RUN_CONFIG_PATH=/workspace/run.yaml` if you want to bake the config.
|
||||
|
||||
Example `build.yaml` excerpt for a custom Ollama provider:
|
||||
|
||||
```yaml
|
||||
distribution_spec:
|
||||
providers:
|
||||
inference:
|
||||
- remote::custom_ollama
|
||||
external_providers_dir: /workspace/providers.d
|
||||
```
|
||||
|
||||
Inside `providers.d/custom_ollama/provider.py`, define `get_provider_spec()` so the CLI can discover dependencies:
|
||||
|
||||
```python
|
||||
from llama_stack.providers.datatypes import ProviderSpec
|
||||
|
||||
|
||||
def get_provider_spec() -> ProviderSpec:
|
||||
return ProviderSpec(
|
||||
provider_type="remote::custom_ollama",
|
||||
module="llama_stack_ollama_provider",
|
||||
config_class="llama_stack_ollama_provider.config.OllamaImplConfig",
|
||||
pip_packages=[
|
||||
"ollama",
|
||||
"aiohttp",
|
||||
"llama-stack-provider-ollama",
|
||||
],
|
||||
)
|
||||
```
|
||||
|
||||
Here's an example for a custom Ollama provider:
|
||||
|
||||
```yaml
|
||||
adapter:
|
||||
adapter_type: custom_ollama
|
||||
pip_packages:
|
||||
- ollama
|
||||
- aiohttp
|
||||
- llama-stack-provider-ollama # This is the provider package
|
||||
config_class: llama_stack_ollama_provider.config.OllamaImplConfig
|
||||
module: llama_stack_ollama_provider
|
||||
api_dependencies: []
|
||||
optional_api_dependencies: []
|
||||
```
|
||||
|
||||
The `pip_packages` section lists the Python packages required by the provider, as well as the
|
||||
provider package itself. The package must be available on PyPI or can be provided from a local
|
||||
directory or a git repository (git must be installed on the build environment).
|
||||
|
||||
For deeper guidance, see the [External Providers documentation](../providers/external/).
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Run your stack server
|
||||
|
||||
After building the image, launch it directly with Docker or Podman—the entrypoint calls `llama stack run` using the baked distribution or the bundled run config:
|
||||
|
||||
```bash
|
||||
docker run -d \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v ~/.llama:/root/.llama \
|
||||
-e INFERENCE_MODEL=$INFERENCE_MODEL \
|
||||
-e OLLAMA_URL=http://host.docker.internal:11434 \
|
||||
llama-stack:starter \
|
||||
--port $LLAMA_STACK_PORT
|
||||
```
|
||||
|
||||
Here are the docker flags and their uses:
|
||||
|
||||
* `-d`: Runs the container in the detached mode as a background process
|
||||
|
||||
* `-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT`: Maps the container port to the host port for accessing the server
|
||||
|
||||
* `-v ~/.llama:/root/.llama`: Mounts the local .llama directory to persist configurations and data
|
||||
|
||||
* `llama-stack:starter`: The name and tag of the container image to run
|
||||
|
||||
* `-e INFERENCE_MODEL=$INFERENCE_MODEL`: Sets the INFERENCE_MODEL environment variable in the container
|
||||
|
||||
* `-e OLLAMA_URL=http://host.docker.internal:11434`: Sets the OLLAMA_URL environment variable in the container
|
||||
|
||||
* `--port $LLAMA_STACK_PORT`: Port number for the server to listen on
|
||||
|
||||
|
||||
|
||||
If you prepared a custom run config, mount it into the container and reference it explicitly:
|
||||
|
||||
```bash
|
||||
docker run \
|
||||
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
|
||||
-v $(pwd)/run.yaml:/app/run.yaml \
|
||||
llama-stack:starter \
|
||||
/app/run.yaml
|
||||
```
|
||||
|
|
@ -1,3 +1,9 @@
|
|||
---
title: Configuring a "Stack"
description: Configuring a "Stack"
sidebar_label: Configuring a "Stack"
sidebar_position: 6
---
# Configuring a "Stack"

The Llama Stack runtime configuration is specified as a YAML file. Here is a simplified version of an example configuration file for the Ollama distribution:

@ -15,7 +21,6 @@ apis:
- inference
- vector_io
- safety
- telemetry
providers:
  inference:
  - provider_id: ollama
@ -38,18 +43,28 @@ providers:
  - provider_id: meta-reference
    provider_type: inline::meta-reference
    config:
      persistence_store:
        type: sqlite
        namespace: null
        db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/agents_store.db
  telemetry:
  - provider_id: meta-reference
    provider_type: inline::meta-reference
    config: {}
metadata_store:
  namespace: null
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/registry.db
      persistence:
        agent_state:
          backend: kv_default
          namespace: agents
        responses:
          backend: sql_default
          table_name: responses
storage:
  backends:
    kv_default:
      type: kv_sqlite
      db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/kvstore.db
    sql_default:
      type: sql_sqlite
      db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/ollama}/sqlstore.db
  references:
    metadata:
      backend: kv_default
      namespace: registry
    inference:
      backend: sql_default
      table_name: inference_store
models:
- metadata: {}
  model_id: ${env.INFERENCE_MODEL}
@ -72,7 +87,6 @@ apis:
- inference
- vector_io
- safety
- telemetry
```

## Providers

@ -95,7 +109,7 @@ A few things to note:
- The id is a string you can choose freely.
- You can instantiate any number of provider instances of the same type.
- The configuration dictionary is provider-specific.
- Notice that configuration can reference environment variables (with default values), which are expanded at runtime. When you run a stack server (via docker or via `llama stack run`), you can specify `--env OLLAMA_URL=http://my-server:11434` to override the default value.
- Notice that configuration can reference environment variables (with default values), which are expanded at runtime. When you run a stack server, you can set environment variables in your shell before running `llama stack run` to override the default values.

### Environment Variable Substitution

@ -167,13 +181,10 @@ optional_token: ${env.OPTIONAL_TOKEN:+}

#### Runtime Override

You can override environment variables at runtime when starting the server:
You can override environment variables at runtime by setting them in your shell before starting the server:

```bash
# Override specific environment variables
llama stack run --config run.yaml --env API_KEY=sk-123 --env BASE_URL=https://custom-api.com

# Or set them in your shell
# Set environment variables in your shell
export API_KEY=sk-123
export BASE_URL=https://custom-api.com
llama stack run --config run.yaml
@ -200,7 +211,7 @@ models:
  provider_model_id: null
  model_type: llm
```
A Model is an instance of a "Resource" (see [Concepts](../concepts/index)) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we always encourage the clients to register models before using them, some Stack servers may come up with a list of "already known and available" models.
A Model is an instance of a "Resource" (see [Concepts](../concepts/)) and is associated with a specific inference provider (in this case, the provider with identifier `ollama`). This is an instance of a "pre-registered" model. While we always encourage the clients to register models before using them, some Stack servers may come up with a list of "already known and available" models.

What's with the `provider_model_id` field? This is an identifier for the model inside the provider's model catalog. Contrast it with `model_id`, which is the identifier for the same model for Llama Stack's purposes. For example, you may want to name "llama3.2:vision-11b" as "image_captioning_model" when you use it in your Stack interactions. When omitted, the server will set `provider_model_id` to be the same as `model_id`.
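If you prefer to register models explicitly from your application, the client SDK exposes a registration call for this. A minimal sketch (the model names and the `ollama` provider id are placeholders, and the exact keyword arguments may differ slightly between client versions):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Register the provider's "llama3.2:vision-11b" under a friendlier Stack-level id
client.models.register(
    model_id="image_captioning_model",
    provider_id="ollama",
    provider_model_id="llama3.2:vision-11b",
)
```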
@ -472,12 +483,12 @@ A rule may also specify a condition, either a 'when' or an 'unless',
with additional constraints as to where the rule applies. The
constraints supported at present are:

- 'user with <attr-value> in <attr-name>'
- 'user with <attr-value> not in <attr-name>'
- 'user with `<attr-value>` in `<attr-name>`'
- 'user with `<attr-value>` not in `<attr-name>`'
- 'user is owner'
- 'user is not owner'
- 'user in owners <attr-name>'
- 'user not in owners <attr-name>'
- 'user in owners `<attr-name>`'
- 'user not in owners `<attr-name>`'

The attributes defined for a user will depend on how the auth
configuration is defined.

@ -503,16 +514,16 @@ server:
    provider_config:
      type: "github_token"
      github_api_base_url: "https://api.github.com"
  access_policy:
  - permit:
      principal: user-1
      actions: [create, read, delete]
    description: user-1 has full access to all resources
  - permit:
      principal: user-2
      actions: [read]
      resource: model::model-1
    description: user-2 has read access to model-1 only
  access_policy:
  - permit:
      principal: user-1
      actions: [create, read, delete]
    description: user-1 has full access to all resources
  - permit:
      principal: user-2
      actions: [read]
      resource: model::model-1
    description: user-2 has read access to model-1 only
```

Similarly, the following restricts access to particular kubernetes

@ -572,24 +583,13 @@ created by users sharing a team with them:

In addition to resource-based access control, Llama Stack supports endpoint-level authorization using OAuth 2.0 style scopes. When authentication is enabled, specific API endpoints require users to have particular scopes in their authentication token.

**Scope-Gated APIs:**
The following APIs are currently gated by scopes:

- **Telemetry API** (scope: `telemetry.read`):
  - `POST /telemetry/traces` - Query traces
  - `GET /telemetry/traces/{trace_id}` - Get trace by ID
  - `GET /telemetry/traces/{trace_id}/spans/{span_id}` - Get span by ID
  - `POST /telemetry/spans/{span_id}/tree` - Get span tree
  - `POST /telemetry/spans` - Query spans
  - `POST /telemetry/metrics/{metric_name}` - Query metrics

**Authentication Configuration:**

For **JWT/OAuth2 providers**, scopes should be included in the JWT's claims:
```json
{
  "sub": "user123",
  "scope": "telemetry.read",
  "scope": "<scope>",
  "aud": "llama-stack"
}
```
@ -599,7 +599,7 @@ For **custom authentication providers**, the endpoint must return user attribute
{
  "principal": "user123",
  "attributes": {
    "scopes": ["telemetry.read"]
    "scopes": ["<scope>"]
  }
}
```
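On the client side, the token travels as an ordinary bearer credential. A minimal sketch of calling a scope-gated endpoint (assuming the client forwards `api_key` as a `Bearer` token and that your token's claims include the required scope):

```python
from llama_stack_client import LlamaStackClient

# The token value is a placeholder; obtain a real JWT/OAuth token from your provider
client = LlamaStackClient(
    base_url="http://localhost:8321",
    api_key="<token-with-required-scope>",
)
```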
@ -1,3 +1,9 @@
---
title: Customizing run.yaml
description: Customizing run.yaml files for Llama Stack templates
sidebar_label: Customizing run.yaml
sidebar_position: 4
---
# Customizing run.yaml Files

The `run.yaml` files generated by Llama Stack templates are **starting points** designed to be customized for your specific needs. They are not meant to be used as-is in production environments.

@ -37,4 +43,4 @@ your-project/
└── README.md
```

The goal is to take the generated template and adapt it to your specific infrastructure and operational needs.
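For scripted environments, one way to adapt a generated template is to patch it programmatically rather than by hand. A small sketch using PyYAML (the provider id and URL are placeholders for whatever your template actually contains):

```python
import yaml

with open("run.yaml") as f:
    config = yaml.safe_load(f)

# Example tweak: point the ollama inference provider at an internal host
for provider in config["providers"]["inference"]:
    if provider["provider_id"] == "ollama":
        provider["config"]["url"] = "http://ollama.internal:11434"

with open("run-custom.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```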
@ -1,3 +1,9 @@
---
title: Using Llama Stack as a Library
description: How to use Llama Stack as a Python library instead of running a server
sidebar_label: Importing as Library
sidebar_position: 5
---
# Using Llama Stack as a Library

## Setup Llama Stack without a Server

@ -6,7 +12,7 @@ This avoids the overhead of setting up a server.
```bash
# setup
uv pip install llama-stack
llama stack build --distro starter --image-type venv
llama stack list-deps starter | xargs -L1 uv pip install
```

```python
@ -27,7 +33,7 @@ Then, you can access the APIs like `models` and `inference` on the client and ca
response = client.models.list()
```

If you've created a [custom distribution](building_distro.md), you can also use the run.yaml configuration file directly:
If you've created a [custom distribution](./building_distro), you can also use the run.yaml configuration file directly:

```python
client = LlamaStackAsLibraryClient(config_path)
21
docs/docs/distributions/index.mdx
Normal file
@ -0,0 +1,21 @@
---
title: Distributions Overview
description: Pre-packaged sets of Llama Stack components for different deployment scenarios
sidebar_label: Overview
sidebar_position: 1
---

# Distributions Overview

A distribution is a pre-packaged set of Llama Stack components that can be deployed together.

This section provides an overview of the distributions available in Llama Stack.

## Distribution Guides

- **[Available Distributions](./list_of_distributions.mdx)** - Complete list and comparison of all distributions
- **[Building Custom Distributions](./building_distro.mdx)** - Create your own distribution from scratch
- **[Customizing Configuration](./customizing_run_yaml.mdx)** - Customize run.yaml for your needs
- **[Starting Llama Stack Server](./starting_llama_stack_server.mdx)** - How to run distributions
- **[Importing as Library](./importing_as_library.mdx)** - Use distributions in your code
- **[Configuration Reference](./configuration.mdx)** - Configuration file format details
155
docs/docs/distributions/k8s/stack-configmap.yaml
Normal file
@ -0,0 +1,155 @@
apiVersion: v1
data:
  stack_run_config.yaml: |
    version: '2'
    image_name: kubernetes-demo
    apis:
    - agents
    - inference
    - files
    - safety
    - telemetry
    - tool_runtime
    - vector_io
    providers:
      inference:
      - provider_id: vllm-inference
        provider_type: remote::vllm
        config:
          url: ${env.VLLM_URL:=http://localhost:8000/v1}
          max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
          api_token: ${env.VLLM_API_TOKEN:=fake}
          tls_verify: ${env.VLLM_TLS_VERIFY:=true}
      - provider_id: vllm-safety
        provider_type: remote::vllm
        config:
          url: ${env.VLLM_SAFETY_URL:=http://localhost:8000/v1}
          max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
          api_token: ${env.VLLM_API_TOKEN:=fake}
          tls_verify: ${env.VLLM_TLS_VERIFY:=true}
      - provider_id: sentence-transformers
        provider_type: inline::sentence-transformers
        config: {}
      vector_io:
      - provider_id: ${env.ENABLE_CHROMADB:+chromadb}
        provider_type: remote::chromadb
        config:
          url: ${env.CHROMADB_URL:=}
          kvstore:
            type: postgres
            host: ${env.POSTGRES_HOST:=localhost}
            port: ${env.POSTGRES_PORT:=5432}
            db: ${env.POSTGRES_DB:=llamastack}
            user: ${env.POSTGRES_USER:=llamastack}
            password: ${env.POSTGRES_PASSWORD:=llamastack}
      files:
      - provider_id: meta-reference-files
        provider_type: inline::localfs
        config:
          storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/distributions/starter/files}
          metadata_store:
            type: sqlite
            db_path: ${env.SQLITE_STORE_DIR:=~/.llama/distributions/starter}/files_metadata.db
      safety:
      - provider_id: llama-guard
        provider_type: inline::llama-guard
        config:
          excluded_categories: []
      agents:
      - provider_id: meta-reference
        provider_type: inline::meta-reference
        config:
          persistence_store:
            type: postgres
            host: ${env.POSTGRES_HOST:=localhost}
            port: ${env.POSTGRES_PORT:=5432}
            db: ${env.POSTGRES_DB:=llamastack}
            user: ${env.POSTGRES_USER:=llamastack}
            password: ${env.POSTGRES_PASSWORD:=llamastack}
          responses_store:
            type: postgres
            host: ${env.POSTGRES_HOST:=localhost}
            port: ${env.POSTGRES_PORT:=5432}
            db: ${env.POSTGRES_DB:=llamastack}
            user: ${env.POSTGRES_USER:=llamastack}
            password: ${env.POSTGRES_PASSWORD:=llamastack}
      telemetry:
      - provider_id: meta-reference
        provider_type: inline::meta-reference
        config:
          service_name: "${env.OTEL_SERVICE_NAME:=\u200B}"
          sinks: ${env.TELEMETRY_SINKS:=console}
      tool_runtime:
      - provider_id: brave-search
        provider_type: remote::brave-search
        config:
          api_key: ${env.BRAVE_SEARCH_API_KEY:+}
          max_results: 3
      - provider_id: tavily-search
        provider_type: remote::tavily-search
        config:
          api_key: ${env.TAVILY_SEARCH_API_KEY:+}
          max_results: 3
      - provider_id: rag-runtime
        provider_type: inline::rag-runtime
        config: {}
      - provider_id: model-context-protocol
        provider_type: remote::model-context-protocol
        config: {}
    storage:
      backends:
        kv_default:
          type: kv_postgres
          host: ${env.POSTGRES_HOST:=localhost}
          port: ${env.POSTGRES_PORT:=5432}
          db: ${env.POSTGRES_DB:=llamastack}
          user: ${env.POSTGRES_USER:=llamastack}
          password: ${env.POSTGRES_PASSWORD:=llamastack}
          table_name: ${env.POSTGRES_TABLE_NAME:=llamastack_kvstore}
        sql_default:
          type: sql_postgres
          host: ${env.POSTGRES_HOST:=localhost}
          port: ${env.POSTGRES_PORT:=5432}
          db: ${env.POSTGRES_DB:=llamastack}
          user: ${env.POSTGRES_USER:=llamastack}
          password: ${env.POSTGRES_PASSWORD:=llamastack}
      references:
        metadata:
          backend: kv_default
          namespace: registry
        inference:
          backend: sql_default
          table_name: inference_store
    models:
    - metadata:
        embedding_dimension: 768
      model_id: nomic-embed-text-v1.5
      provider_id: sentence-transformers
      model_type: embedding
    - metadata: {}
      model_id: ${env.INFERENCE_MODEL}
      provider_id: vllm-inference
      model_type: llm
    - metadata: {}
      model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
      provider_id: vllm-safety
      model_type: llm
    shields:
    - shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
    vector_dbs: []
    datasets: []
    scoring_fns: []
    benchmarks: []
    tool_groups:
    - toolgroup_id: builtin::websearch
      provider_id: tavily-search
    - toolgroup_id: builtin::rag
      provider_id: rag-runtime
    server:
      port: 8321
      auth:
        provider_config:
          type: github_token
kind: ConfigMap
metadata:
  name: llama-stack-config
@ -52,7 +52,7 @@ spec:
            value: "${SAFETY_MODEL}"
          - name: TAVILY_SEARCH_API_KEY
            value: "${TAVILY_SEARCH_API_KEY}"
        command: ["python", "-m", "llama_stack.core.server.server", "/etc/config/stack_run_config.yaml", "--port", "8321"]
        command: ["llama", "stack", "run", "/etc/config/stack_run_config.yaml", "--port", "8321"]
        ports:
          - containerPort: 8321
        volumeMounts:
146
docs/docs/distributions/k8s/stack_run_config.yaml
Normal file
@ -0,0 +1,146 @@
version: '2'
image_name: kubernetes-demo
apis:
- agents
- inference
- files
- safety
- tool_runtime
- vector_io
providers:
  inference:
  - provider_id: vllm-inference
    provider_type: remote::vllm
    config:
      url: ${env.VLLM_URL:=http://localhost:8000/v1}
      max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
      api_token: ${env.VLLM_API_TOKEN:=fake}
      tls_verify: ${env.VLLM_TLS_VERIFY:=true}
  - provider_id: vllm-safety
    provider_type: remote::vllm
    config:
      url: ${env.VLLM_SAFETY_URL:=http://localhost:8000/v1}
      max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
      api_token: ${env.VLLM_API_TOKEN:=fake}
      tls_verify: ${env.VLLM_TLS_VERIFY:=true}
  - provider_id: sentence-transformers
    provider_type: inline::sentence-transformers
    config: {}
  vector_io:
  - provider_id: ${env.ENABLE_CHROMADB:+chromadb}
    provider_type: remote::chromadb
    config:
      url: ${env.CHROMADB_URL:=}
      persistence:
        namespace: vector_io::chroma_remote
        backend: kv_default
  files:
  - provider_id: meta-reference-files
    provider_type: inline::localfs
    config:
      storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/distributions/starter/files}
      metadata_store:
        table_name: files_metadata
        backend: sql_default
  safety:
  - provider_id: llama-guard
    provider_type: inline::llama-guard
    config:
      excluded_categories: []
  agents:
  - provider_id: meta-reference
    provider_type: inline::meta-reference
    config:
      persistence:
        agent_state:
          namespace: agents
          backend: kv_default
        responses:
          table_name: responses
          backend: sql_default
          max_write_queue_size: 10000
          num_writers: 4
  tool_runtime:
  - provider_id: brave-search
    provider_type: remote::brave-search
    config:
      api_key: ${env.BRAVE_SEARCH_API_KEY:+}
      max_results: 3
  - provider_id: tavily-search
    provider_type: remote::tavily-search
    config:
      api_key: ${env.TAVILY_SEARCH_API_KEY:+}
      max_results: 3
  - provider_id: rag-runtime
    provider_type: inline::rag-runtime
    config: {}
  - provider_id: model-context-protocol
    provider_type: remote::model-context-protocol
    config: {}
storage:
  backends:
    kv_default:
      type: kv_postgres
      host: ${env.POSTGRES_HOST:=localhost}
      port: ${env.POSTGRES_PORT:=5432}
      db: ${env.POSTGRES_DB:=llamastack}
      user: ${env.POSTGRES_USER:=llamastack}
      password: ${env.POSTGRES_PASSWORD:=llamastack}
      table_name: ${env.POSTGRES_TABLE_NAME:=llamastack_kvstore}
    sql_default:
      type: sql_postgres
      host: ${env.POSTGRES_HOST:=localhost}
      port: ${env.POSTGRES_PORT:=5432}
      db: ${env.POSTGRES_DB:=llamastack}
      user: ${env.POSTGRES_USER:=llamastack}
      password: ${env.POSTGRES_PASSWORD:=llamastack}
  stores:
    metadata:
      namespace: registry
      backend: kv_default
    inference:
      table_name: inference_store
      backend: sql_default
      max_write_queue_size: 10000
      num_writers: 4
    conversations:
      table_name: openai_conversations
      backend: sql_default
registered_resources:
  models:
  - metadata:
      embedding_dimension: 768
    model_id: nomic-embed-text-v1.5
    provider_id: sentence-transformers
    model_type: embedding
  - metadata: {}
    model_id: ${env.INFERENCE_MODEL}
    provider_id: vllm-inference
    model_type: llm
  - metadata: {}
    model_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
    provider_id: vllm-safety
    model_type: llm
  shields:
  - shield_id: ${env.SAFETY_MODEL:=meta-llama/Llama-Guard-3-1B}
  vector_dbs: []
  datasets: []
  scoring_fns: []
  benchmarks: []
  tool_groups:
  - toolgroup_id: builtin::websearch
    provider_id: tavily-search
  - toolgroup_id: builtin::rag
    provider_id: rag-runtime
server:
  port: 8321
  auth:
    provider_config:
      type: github_token
telemetry:
  enabled: true
vector_stores:
  default_provider_id: chromadb
  default_embedding_model:
    provider_id: sentence-transformers
    model_id: nomic-ai/nomic-embed-text-v1.5
@ -1,3 +1,10 @@
---
title: Available Distributions
description: List of available distributions for Llama Stack
sidebar_label: Available Distributions
sidebar_position: 2
---

# Available Distributions

Llama Stack provides several pre-configured distributions to help you get started quickly. Choose the distribution that best fits your hardware and use case.

@ -55,7 +62,7 @@ docker pull llama-stack/distribution-meta-reference-gpu

**Partners:** [Fireworks.ai](https://fireworks.ai) and [Together.xyz](https://together.xyz)

**Guides:** [Remote-Hosted Endpoints](remote_hosted_distro/index)
**Guides:** [Remote-Hosted Endpoints](./remote_hosted_distro/)

### 📱 Mobile Development

@ -74,7 +81,7 @@ docker pull llama-stack/distribution-meta-reference-gpu
- You need custom configurations
- You want to optimize for your specific use case

**Guides:** [Building Custom Distributions](building_distro.md)
**Guides:** [Building Custom Distributions](./building_distro)

## Detailed Documentation

@ -124,4 +131,4 @@ graph TD
3. **Configure your providers** with API keys or local models
4. **Start building** with Llama Stack!

For help choosing or troubleshooting, check our [Getting Started Guide](../getting_started/index.md) or [Community Support](https://github.com/llama-stack/llama-stack/discussions).
For help choosing or troubleshooting, check our [Getting Started Guide](/docs/getting_started/quickstart) or [Community Support](https://github.com/llamastack/llama-stack/discussions).
@ -59,14 +59,14 @@ Start a Llama Stack server on localhost. Here is an example of how you can do th
uv venv starter --python 3.12
source starter/bin/activate # On Windows: starter\Scripts\activate
pip install --no-cache llama-stack==0.2.2
llama stack build --distro starter --image-type venv
llama stack list-deps starter | xargs -L1 uv pip install
export FIREWORKS_API_KEY=<SOME_KEY>
llama stack run starter --port 5050
```

Ensure the Llama Stack server version is the same as the Kotlin SDK Library for maximum compatibility.

Other inference providers: [Table](../../index.md#supported-llama-stack-implementations)
Other inference providers: [Table](/docs/)

How to set remote localhost in Demo App: [Settings](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/examples/android_app#settings)
|
||||
|
|
@ -2,10 +2,10 @@

Remote-Hosted distributions are available endpoints serving Llama Stack API that you can directly connect to.

| Distribution | Endpoint | Inference | Agents | Memory | Safety | Telemetry |
| Distribution | Endpoint | Inference | Agents | Memory | Safety |
|-------------|----------|-----------|---------|---------|---------|------------|
| Together | [https://llama-stack.together.ai](https://llama-stack.together.ai) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
| Fireworks | [https://llamastack-preview.fireworks.ai](https://llamastack-preview.fireworks.ai) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
| Together | [https://llama-stack.together.ai](https://llama-stack.together.ai) | remote::together | meta-reference | remote::weaviate | meta-reference |
| Fireworks | [https://llamastack-preview.fireworks.ai](https://llamastack-preview.fireworks.ai) | remote::fireworks | meta-reference | remote::weaviate | meta-reference |

## Connecting to Remote-Hosted Distributions
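For example, connecting to one of these hosted endpoints with the Python client only requires pointing `base_url` at the endpoint (a minimal sketch; depending on the deployment you may also need to supply an API key):

```python
from llama_stack_client import LlamaStackClient

# Endpoint taken from the table above
client = LlamaStackClient(base_url="https://llama-stack.together.ai")
print(client.models.list())
```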
@ -21,7 +21,6 @@ The `llamastack/distribution-watsonx` distribution consists of the following pro
| inference | `remote::watsonx`, `inline::sentence-transformers` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss` |

@ -69,10 +68,10 @@ docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  -e WATSONX_API_KEY=$WATSONX_API_KEY \
  -e WATSONX_PROJECT_ID=$WATSONX_PROJECT_ID \
  -e WATSONX_BASE_URL=$WATSONX_BASE_URL \
  llamastack/distribution-watsonx \
  --config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env WATSONX_API_KEY=$WATSONX_API_KEY \
  --env WATSONX_PROJECT_ID=$WATSONX_PROJECT_ID \
  --env WATSONX_BASE_URL=$WATSONX_BASE_URL
  --port $LLAMA_STACK_PORT
```
@ -13,9 +13,9 @@ self
The `llamastack/distribution-tgi` distribution consists of the following provider configurations.


| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |---------------- |
| **Provider(s)** | remote::tgi | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
| **API** | **Inference** | **Agents** | **Memory** | **Safety** |
|----------------- |--------------- |---------------- |-------------------------------------------------- |---------------- |
| **Provider(s)** | remote::tgi | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference |


The only difference vs. the `tgi` distribution is that it runs the Dell-TGI server for inference.
@ -22,7 +22,6 @@ The `llamastack/distribution-dell` distribution consists of the following provid
| inference | `remote::tgi`, `inline::sentence-transformers` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

@ -102,7 +101,7 @@ You can start a chroma-db easily using docker.
# This is where the indices are persisted
mkdir -p $HOME/chromadb

podman run --rm -it \
docker run --rm -it \
  --network host \
  --name chromadb \
  -v $HOME/chromadb:/chroma/chroma \
@ -127,13 +126,13 @@ docker run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v $HOME/.llama:/root/.llama \
  # NOTE: mount the llama-stack / llama-model directories if testing local changes else not needed
  -v /home/hjshah/git/llama-stack:/app/llama-stack-source -v /home/hjshah/git/llama-models:/app/llama-models-source \
  -v $HOME/git/llama-stack:/app/llama-stack-source -v $HOME/git/llama-models:/app/llama-models-source \
  # localhost/distribution-dell:dev if building / testing locally
  llamastack/distribution-dell\
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env DEH_URL=$DEH_URL \
  --env CHROMA_URL=$CHROMA_URL
  -e INFERENCE_MODEL=$INFERENCE_MODEL \
  -e DEH_URL=$DEH_URL \
  -e CHROMA_URL=$CHROMA_URL \
  llamastack/distribution-dell \
  --port $LLAMA_STACK_PORT

```

@ -154,37 +153,37 @@ docker run \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v $HOME/.llama:/root/.llama \
  -v ./llama_stack/distributions/tgi/run-with-safety.yaml:/root/my-run.yaml \
  -e INFERENCE_MODEL=$INFERENCE_MODEL \
  -e DEH_URL=$DEH_URL \
  -e SAFETY_MODEL=$SAFETY_MODEL \
  -e DEH_SAFETY_URL=$DEH_SAFETY_URL \
  -e CHROMA_URL=$CHROMA_URL \
  llamastack/distribution-dell \
  --config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env DEH_URL=$DEH_URL \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env DEH_SAFETY_URL=$DEH_SAFETY_URL \
  --env CHROMA_URL=$CHROMA_URL
  --port $LLAMA_STACK_PORT
```

### Via venv

Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
Install the distribution dependencies before launching:

```bash
llama stack build --distro dell --image-type venv
llama stack run dell
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env DEH_URL=$DEH_URL \
  --env CHROMA_URL=$CHROMA_URL
llama stack list-deps dell | xargs -L1 uv pip install
INFERENCE_MODEL=$INFERENCE_MODEL \
DEH_URL=$DEH_URL \
CHROMA_URL=$CHROMA_URL \
llama stack run dell \
  --port $LLAMA_STACK_PORT
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
INFERENCE_MODEL=$INFERENCE_MODEL \
DEH_URL=$DEH_URL \
SAFETY_MODEL=$SAFETY_MODEL \
DEH_SAFETY_URL=$DEH_SAFETY_URL \
CHROMA_URL=$CHROMA_URL \
llama stack run ./run-with-safety.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env DEH_URL=$DEH_URL \
  --env SAFETY_MODEL=$SAFETY_MODEL \
  --env DEH_SAFETY_URL=$DEH_SAFETY_URL \
  --env CHROMA_URL=$CHROMA_URL
  --port $LLAMA_STACK_PORT
```
100
docs/docs/distributions/self_hosted_distro/meta-reference-gpu.md
Normal file
@ -0,0 +1,100 @@
---
orphan: true
---
<!-- This file was auto-generated by distro_codegen.py, please edit source -->
# Meta Reference GPU Distribution

```{toctree}
:maxdepth: 2
:hidden:

self
```

The `llamastack/distribution-meta-reference-gpu` distribution consists of the following provider configurations:

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `inline::meta-reference` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |


Note that you need access to nvidia GPUs to run this distribution. This distribution is not compatible with CPU-only machines or machines with AMD GPUs.

### Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `8321`)
- `INFERENCE_MODEL`: Inference model loaded into the Meta Reference server (default: `meta-llama/Llama-3.2-3B-Instruct`)
- `INFERENCE_CHECKPOINT_DIR`: Directory containing the Meta Reference model checkpoint (default: `null`)
- `SAFETY_MODEL`: Name of the safety (Llama-Guard) model to use (default: `meta-llama/Llama-Guard-3-1B`)
- `SAFETY_CHECKPOINT_DIR`: Directory containing the Llama-Guard model checkpoint (default: `null`)


## Prerequisite: Downloading Models

Please check that you have llama model checkpoints downloaded in `~/.llama` before proceeding. See the [installation guide](../../references/llama_cli_reference/download_models.md) to download the models using the Hugging Face CLI.

## Running the Distribution

You can run the distribution via venv or via Docker, which has a pre-built image.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=8321
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
docker run \
  -it \
  --pull always \
  --gpus all \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  -e SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
  llamastack/distribution-meta-reference-gpu \
  --port $LLAMA_STACK_PORT
```

### Via venv

Make sure you have the Llama Stack CLI available.

```bash
llama stack list-deps meta-reference-gpu | xargs -L1 uv pip install
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
llama stack run distributions/meta-reference-gpu/run.yaml \
  --port 8321
```

If you are using Llama Stack Safety / Shield APIs, use:

```bash
INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
SAFETY_MODEL=meta-llama/Llama-Guard-3-1B \
llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
  --port 8321
```
@ -16,7 +16,6 @@ The `llamastack/distribution-nvidia` distribution consists of the following prov
| post_training | `remote::nvidia` |
| safety | `remote::nvidia` |
| scoring | `inline::basic` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `inline::rag-runtime` |
| vector_io | `inline::faiss` |

@ -37,25 +36,6 @@ The following environment variables can be configured:
- `INFERENCE_MODEL`: Inference model (default: `Llama3.1-8B-Instruct`)
- `SAFETY_MODEL`: Name of the model to use for safety (default: `meta/llama-3.1-8b-instruct`)

### Models

The following models are available by default:

- `meta/llama3-8b-instruct`
- `meta/llama3-70b-instruct`
- `meta/llama-3.1-8b-instruct`
- `meta/llama-3.1-70b-instruct`
- `meta/llama-3.1-405b-instruct`
- `meta/llama-3.2-1b-instruct`
- `meta/llama-3.2-3b-instruct`
- `meta/llama-3.2-11b-vision-instruct`
- `meta/llama-3.2-90b-vision-instruct`
- `meta/llama-3.3-70b-instruct`
- `nvidia/vila`
- `nvidia/llama-3.2-nv-embedqa-1b-v2`
- `nvidia/nv-embedqa-e5-v5`
- `nvidia/nv-embedqa-mistral-7b-v2`
- `snowflake/arctic-embed-l`


## Prerequisites

@ -79,22 +59,22 @@ The deployed platform includes the NIM Proxy microservice, which is the service
### Datasetio API: NeMo Data Store
The NeMo Data Store microservice serves as the default file storage solution for the NeMo microservices platform. It exposes APIs compatible with the Hugging Face Hub client (`HfApi`), so you can use the client to interact with Data Store. The `NVIDIA_DATASETS_URL` environment variable should point to your NeMo Data Store endpoint.

See the {repopath}`NVIDIA Datasetio docs::llama_stack/providers/remote/datasetio/nvidia/README.md` for supported features and example usage.
See the [NVIDIA Datasetio docs](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/datasetio/nvidia/README.md) for supported features and example usage.

### Eval API: NeMo Evaluator
The NeMo Evaluator microservice supports evaluation of LLMs. Launching an Evaluation job with NeMo Evaluator requires an Evaluation Config (an object that contains metadata needed by the job). A Llama Stack Benchmark maps to an Evaluation Config, so registering a Benchmark creates an Evaluation Config in NeMo Evaluator. The `NVIDIA_EVALUATOR_URL` environment variable should point to your NeMo Microservices endpoint.

See the {repopath}`NVIDIA Eval docs::llama_stack/providers/remote/eval/nvidia/README.md` for supported features and example usage.
See the [NVIDIA Eval docs](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/eval/nvidia/README.md) for supported features and example usage.

### Post-Training API: NeMo Customizer
The NeMo Customizer microservice supports fine-tuning models. You can reference {repopath}`this list of supported models::llama_stack/providers/remote/post_training/nvidia/models.py` that can be fine-tuned using Llama Stack. The `NVIDIA_CUSTOMIZER_URL` environment variable should point to your NeMo Microservices endpoint.
The NeMo Customizer microservice supports fine-tuning models. You can reference [this list of supported models](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/post_training/nvidia/models.py) that can be fine-tuned using Llama Stack. The `NVIDIA_CUSTOMIZER_URL` environment variable should point to your NeMo Microservices endpoint.

See the {repopath}`NVIDIA Post-Training docs::llama_stack/providers/remote/post_training/nvidia/README.md` for supported features and example usage.
See the [NVIDIA Post-Training docs](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/post_training/nvidia/README.md) for supported features and example usage.

### Safety API: NeMo Guardrails
The NeMo Guardrails microservice sits between your application and the LLM, and adds checks and content moderation to a model. The `GUARDRAILS_SERVICE_URL` environment variable should point to your NeMo Microservices endpoint.

See the {repopath}`NVIDIA Safety docs::llama_stack/providers/remote/safety/nvidia/README.md` for supported features and example usage.
See the [NVIDIA Safety docs](https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/remote/safety/nvidia/README.md) for supported features and example usage.

## Deploying models
In order to use a registered model with the Llama Stack APIs, ensure the corresponding NIM is deployed to your environment. For example, you can use the NIM Proxy microservice to deploy `meta/llama-3.2-1b-instruct`.

@ -148,24 +128,24 @@ docker run \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  -e NVIDIA_API_KEY=$NVIDIA_API_KEY \
  llamastack/distribution-nvidia \
  --config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env NVIDIA_API_KEY=$NVIDIA_API_KEY
  --port $LLAMA_STACK_PORT
```

### Via venv

If you've set up your local development environment, you can also build the image using your local virtual environment.
If you've set up your local development environment, you can also install the distribution dependencies using your local virtual environment.

```bash
INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
llama stack build --distro nvidia --image-type venv
llama stack list-deps nvidia | xargs -L1 uv pip install
NVIDIA_API_KEY=$NVIDIA_API_KEY \
INFERENCE_MODEL=$INFERENCE_MODEL \
llama stack run ./run.yaml \
  --port 8321 \
  --env NVIDIA_API_KEY=$NVIDIA_API_KEY \
  --env INFERENCE_MODEL=$INFERENCE_MODEL
  --port 8321
```

## Example Notebooks
For examples of how to use the NVIDIA Distribution to run inference, fine-tune, evaluate, and run safety checks on your LLMs, you can reference the example notebooks in {repopath}`docs/notebooks/nvidia`.
For examples of how to use the NVIDIA Distribution to run inference, fine-tune, evaluate, and run safety checks on your LLMs, you can reference the example notebooks in [docs/notebooks/nvidia](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks/nvidia).
@ -21,7 +21,6 @@ The `llamastack/distribution-passthrough` distribution consists of the following
| inference | `remote::passthrough`, `inline::sentence-transformers` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `remote::wolfram-alpha`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |
@ -26,7 +26,6 @@ The starter distribution consists of the following provider configurations:
| inference | `remote::openai`, `remote::fireworks`, `remote::together`, `remote::ollama`, `remote::anthropic`, `remote::gemini`, `remote::groq`, `remote::sambanova`, `remote::vllm`, `remote::tgi`, `remote::cerebras`, `remote::llama-openai-compat`, `remote::nvidia`, `remote::hf::serverless`, `remote::hf::endpoint`, `inline::sentence-transformers` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::brave-search`, `remote::tavily-search`, `inline::rag-runtime`, `remote::model-context-protocol` |
| vector_io | `inline::faiss`, `inline::sqlite-vec`, `inline::milvus`, `remote::chromadb`, `remote::pgvector` |

@ -36,25 +35,25 @@ The starter distribution includes a comprehensive set of inference providers:

### Hosted Providers
- **[OpenAI](https://openai.com/api/)**: GPT-4, GPT-3.5, O1, O3, O4 models and text embeddings -
  provider ID: `openai` - reference documentation: [openai](../../providers/inference/remote_openai.md)
  provider ID: `openai` - reference documentation: [openai](../../providers/inference/remote_openai)
- **[Fireworks](https://fireworks.ai/)**: Llama 3.1, 3.2, 3.3, 4 Scout, 4 Maverick models and
  embeddings - provider ID: `fireworks` - reference documentation: [fireworks](../../providers/inference/remote_fireworks.md)
  embeddings - provider ID: `fireworks` - reference documentation: [fireworks](../../providers/inference/remote_fireworks)
- **[Together](https://together.ai/)**: Llama 3.1, 3.2, 3.3, 4 Scout, 4 Maverick models and
  embeddings - provider ID: `together` - reference documentation: [together](../../providers/inference/remote_together.md)
- **[Anthropic](https://www.anthropic.com/)**: Claude 3.5 Sonnet, Claude 3.7 Sonnet, Claude 3.5 Haiku, and Voyage embeddings - provider ID: `anthropic` - reference documentation: [anthropic](../../providers/inference/remote_anthropic.md)
- **[Gemini](https://gemini.google.com/)**: Gemini 1.5, 2.0, 2.5 models and text embeddings - provider ID: `gemini` - reference documentation: [gemini](../../providers/inference/remote_gemini.md)
- **[Groq](https://groq.com/)**: Fast Llama models (3.1, 3.2, 3.3, 4 Scout, 4 Maverick) - provider ID: `groq` - reference documentation: [groq](../../providers/inference/remote_groq.md)
- **[SambaNova](https://www.sambanova.ai/)**: Llama 3.1, 3.2, 3.3, 4 Scout, 4 Maverick models - provider ID: `sambanova` - reference documentation: [sambanova](../../providers/inference/remote_sambanova.md)
- **[Cerebras](https://www.cerebras.ai/)**: Cerebras AI models - provider ID: `cerebras` - reference documentation: [cerebras](../../providers/inference/remote_cerebras.md)
- **[NVIDIA](https://www.nvidia.com/)**: NVIDIA NIM - provider ID: `nvidia` - reference documentation: [nvidia](../../providers/inference/remote_nvidia.md)
- **[HuggingFace](https://huggingface.co/)**: Serverless and endpoint models - provider ID: `hf::serverless` and `hf::endpoint` - reference documentation: [huggingface-serverless](../../providers/inference/remote_hf_serverless.md) and [huggingface-endpoint](../../providers/inference/remote_hf_endpoint.md)
- **[Bedrock](https://aws.amazon.com/bedrock/)**: AWS Bedrock models - provider ID: `bedrock` - reference documentation: [bedrock](../../providers/inference/remote_bedrock.md)
  embeddings - provider ID: `together` - reference documentation: [together](../../providers/inference/remote_together)
- **[Anthropic](https://www.anthropic.com/)**: Claude 3.5 Sonnet, Claude 3.7 Sonnet, Claude 3.5 Haiku, and Voyage embeddings - provider ID: `anthropic` - reference documentation: [anthropic](../../providers/inference/remote_anthropic)
- **[Gemini](https://gemini.google.com/)**: Gemini 1.5, 2.0, 2.5 models and text embeddings - provider ID: `gemini` - reference documentation: [gemini](../../providers/inference/remote_gemini)
- **[Groq](https://groq.com/)**: Fast Llama models (3.1, 3.2, 3.3, 4 Scout, 4 Maverick) - provider ID: `groq` - reference documentation: [groq](../../providers/inference/remote_groq)
- **[SambaNova](https://www.sambanova.ai/)**: Llama 3.1, 3.2, 3.3, 4 Scout, 4 Maverick models - provider ID: `sambanova` - reference documentation: [sambanova](../../providers/inference/remote_sambanova)
- **[Cerebras](https://www.cerebras.ai/)**: Cerebras AI models - provider ID: `cerebras` - reference documentation: [cerebras](../../providers/inference/remote_cerebras)
- **[NVIDIA](https://www.nvidia.com/)**: NVIDIA NIM - provider ID: `nvidia` - reference documentation: [nvidia](../../providers/inference/remote_nvidia)
- **[HuggingFace](https://huggingface.co/)**: Serverless and endpoint models - provider ID: `hf::serverless` and `hf::endpoint` - reference documentation: [huggingface-serverless](../../providers/inference/remote_hf_serverless) and [huggingface-endpoint](../../providers/inference/remote_hf_endpoint)
- **[Bedrock](https://aws.amazon.com/bedrock/)**: AWS Bedrock models - provider ID: `bedrock` - reference documentation: [bedrock](../../providers/inference/remote_bedrock)

### Local/Remote Providers
- **[Ollama](https://ollama.ai/)**: Local Ollama models - provider ID: `ollama` - reference documentation: [ollama](../../providers/inference/remote_ollama.md)
- **[vLLM](https://docs.vllm.ai/en/latest/)**: Local or remote vLLM server - provider ID: `vllm` - reference documentation: [vllm](../../providers/inference/remote_vllm.md)
- **[TGI](https://github.com/huggingface/text-generation-inference)**: Text Generation Inference server - Dell Enterprise Hub's custom TGI container too (use `DEH_URL`) - provider ID: `tgi` - reference documentation: [tgi](../../providers/inference/remote_tgi.md)
- **[Sentence Transformers](https://www.sbert.net/)**: Local embedding models - provider ID: `sentence-transformers` - reference documentation: [sentence-transformers](../../providers/inference/inline_sentence-transformers.md)
- **[Ollama](https://ollama.ai/)**: Local Ollama models - provider ID: `ollama` - reference documentation: [ollama](../../providers/inference/remote_ollama)
- **[vLLM](https://docs.vllm.ai/en/latest/)**: Local or remote vLLM server - provider ID: `vllm` - reference documentation: [vllm](../../providers/inference/remote_vllm)
- **[TGI](https://github.com/huggingface/text-generation-inference)**: Text Generation Inference server - Dell Enterprise Hub's custom TGI container too (use `DEH_URL`) - provider ID: `tgi` - reference documentation: [tgi](../../providers/inference/remote_tgi)
- **[Sentence Transformers](https://www.sbert.net/)**: Local embedding models - provider ID: `sentence-transformers` - reference documentation: [sentence-transformers](../../providers/inference/inline_sentence-transformers)

All providers are disabled by default, so you need to enable them by setting the appropriate environment variables.

@ -119,7 +118,7 @@ The following environment variables can be configured:

### Telemetry Configuration
- `OTEL_SERVICE_NAME`: OpenTelemetry service name
- `TELEMETRY_SINKS`: Telemetry sinks (default: `console,sqlite`)
- `OTEL_EXPORTER_OTLP_ENDPOINT`: OpenTelemetry collector endpoint URL

## Enabling Providers

@ -169,7 +168,11 @@ docker run \
Ensure you have configured the starter distribution using the environment variables explained above.

```bash
uv run --with llama-stack llama stack build --distro starter --image-type venv --run
# Install dependencies for the starter distribution
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install

# Run the server
uv run --with llama-stack llama stack run starter
```

## Example Usage

@ -216,7 +219,7 @@ The starter distribution uses SQLite for local storage of various components:
- **Files metadata**: `~/.llama/distributions/starter/files_metadata.db`
- **Agents store**: `~/.llama/distributions/starter/agents_store.db`
- **Responses store**: `~/.llama/distributions/starter/responses_store.db`
- **Trace store**: `~/.llama/distributions/starter/trace_store.db`
- **Evaluation store**: `~/.llama/distributions/starter/meta_reference_eval.db`
- **Dataset I/O stores**: Various HuggingFace and local filesystem stores
@ -1,3 +1,10 @@
---
title: Starting a Llama Stack Server
description: Different ways to run Llama Stack servers - as library, container, or Kubernetes deployment
sidebar_label: Starting Llama Stack Server
sidebar_position: 7
---

# Starting a Llama Stack Server

You can run a Llama Stack server in one of the following ways:

@ -9,13 +16,24 @@ This is the simplest way to get started. Using Llama Stack as a library means yo

## Container:

Another simple way to start interacting with Llama Stack is to just spin up a container (via Docker or Podman) which is pre-built with all the providers you need. We provide a number of pre-built images so you can start a Llama Stack server instantly. You can also build your own custom container. Which distribution to choose depends on the hardware you have. See [Selection of a Distribution](selection) for more details.
Another simple way to start interacting with Llama Stack is to just spin up a container (via Docker or Podman) which is pre-built with all the providers you need. We provide a number of pre-built images so you can start a Llama Stack server instantly. You can also build your own custom container. Which distribution to choose depends on the hardware you have. See [Selection of a Distribution](./list_of_distributions) for more details.

## Kubernetes:

If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally, see the [Kubernetes Deployment Guide](kubernetes_deployment) for more details.
If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally, see the [Kubernetes Deployment Guide](../deploying/kubernetes_deployment) for more details.


## Configure logging

Control log output via environment variables before starting the server.

- `LLAMA_STACK_LOGGING` sets per-component levels, e.g. `LLAMA_STACK_LOGGING=server=debug;core=info`.
- Supported categories: `all`, `core`, `server`, `router`, `inference`, `agents`, `safety`, `eval`, `tools`, `client`.
- Levels: `debug`, `info`, `warning`, `error`, `critical` (default is `info`). Use `all=<level>` to apply globally.
- `LLAMA_STACK_LOG_FILE=/path/to/log` mirrors logs to a file while still printing to stdout.

Export these variables prior to running `llama stack run`, launching a container, or starting the server through any other pathway.
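When you embed the server in a Python process (for example, with the library client), you can set the same variables programmatically before the stack is initialized. A small sketch:

```python
# Set logging configuration before importing/initializing the stack
import os

os.environ["LLAMA_STACK_LOGGING"] = "server=debug;core=info"
os.environ["LLAMA_STACK_LOG_FILE"] = "/tmp/llama_stack.log"
```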
```{toctree}
:maxdepth: 1
:hidden:
27
docs/docs/getting_started/demo_script.py
Normal file
@ -0,0 +1,27 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.


import io, requests
from openai import OpenAI

url="https://www.paulgraham.com/greatwork.html"
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

vs = client.vector_stores.create()
response = requests.get(url)
pseudo_file = io.BytesIO(str(response.content).encode('utf-8'))
uploaded_file = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants")
client.vector_stores.files.create(vector_store_id=vs.id, file_id=uploaded_file.id)

resp = client.responses.create(
    model="openai/gpt-4o",
    input="How do you do great work? Use the existing knowledge_search tool.",
    tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
    include=["file_search_call.results"],
)

print(resp)
@ -1,3 +1,13 @@
|
|||
---
|
||||
title: Detailed Tutorial
|
||||
description: Complete guide to using Llama Stack server and client SDK to build AI agents
|
||||
sidebar_label: Detailed Tutorial
|
||||
sidebar_position: 3
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
## Detailed Tutorial
|
||||
|
||||
In this guide, we'll walk through how you can use the Llama Stack (server and client SDK) to test a simple agent.
|
||||
|
|
@ -8,7 +18,7 @@ In Llama Stack, we provide a server exposing multiple APIs. These APIs are backe
|
|||
Llama Stack is a stateful service with REST APIs to support seamless transition of AI applications across different environments. The server can be run in a variety of ways, including as a standalone binary, Docker container, or hosted service. You can build and test using a local server first and deploy to a hosted endpoint for production.
|
||||
|
||||
In this guide, we'll walk through how to build a RAG agent locally using Llama Stack with [Ollama](https://ollama.com/)
|
||||
as the inference [provider](../providers/index.md#inference) for a Llama Model.
|
||||
as the inference [provider](/docs/providers/inference/) for a Llama Model.
|
||||
|
||||
### Step 1: Installation and Setup
|
||||
|
||||
|
|
@ -21,23 +31,21 @@ ollama run llama3.2:3b --keepalive 60m

Install [uv](https://docs.astral.sh/uv/) to setup your virtual environment

::::{tab-set}

:::{tab-item} macOS and Linux
<Tabs>
<TabItem value="unix" label="macOS and Linux">
Use `curl` to download the script and execute it with `sh`:
```console
curl -LsSf https://astral.sh/uv/install.sh | sh
```
:::

:::{tab-item} Windows
</TabItem>
<TabItem value="windows" label="Windows">
Use `irm` to download the script and execute it with `iex`:

```console
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
:::
::::
</TabItem>
</Tabs>

Setup your virtual environment.
@ -48,36 +56,28 @@ source .venv/bin/activate
### Step 2: Run Llama Stack
Llama Stack is a server that exposes multiple APIs; you connect to it using the Llama Stack client SDK.

::::{tab-set}
<Tabs>
<TabItem value="venv" label="Using venv">
You can use Python to install dependencies and run the Llama Stack server, which is useful for testing and development.

:::{tab-item} Using `venv`
You can use Python to build and run the Llama Stack server, which is useful for testing and development.

Llama Stack uses a [YAML configuration file](../distributions/configuration.md) to specify the stack setup,
which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml.md).
Now let's build and run the Llama Stack config for Ollama.
Llama Stack uses a [YAML configuration file](../distributions/configuration) to specify the stack setup,
which defines the providers and their settings. The generated configuration serves as a starting point that you can [customize for your specific needs](../distributions/customizing_run_yaml).
Now let's install dependencies and run the Llama Stack config for Ollama.
We use `starter` as the template. By default all providers are disabled, so you need to enable Ollama by passing environment variables.

```bash
llama stack build --distro starter --image-type venv --run
```
:::
:::{tab-item} Using `venv`
You can use Python to build and run the Llama Stack server, which is useful for testing and development.
# Install dependencies for the starter distribution
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install

Llama Stack uses a [YAML configuration file](../distributions/configuration.md) to specify the stack setup,
which defines the providers and their settings.
Now let's build and run the Llama Stack config for Ollama.

```bash
llama stack build --distro starter --image-type venv --run
# Run the server
llama stack run starter
```
:::
:::{tab-item} Using a Container
</TabItem>
<TabItem value="container" label="Using a Container">
You can use a container image to run the Llama Stack server. We provide several container images for the server
component that works with different inference providers out of the box. For this guide, we will use
`llamastack/distribution-starter` as the container image. If you'd like to build your own image or customize the
configurations, please check out [this guide](../distributions/building_distro.md).
configurations, please check out [this guide](../distributions/building_distro).
First let's set up some environment variables and create a local directory to mount into the container's file system.
```bash
export LLAMA_STACK_PORT=8321
@ -90,9 +90,9 @@ docker run -it \
  --pull always \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -e OLLAMA_URL=http://host.docker.internal:11434 \
  llamastack/distribution-starter \
  --port $LLAMA_STACK_PORT \
  --env OLLAMA_URL=http://host.docker.internal:11434
  --port $LLAMA_STACK_PORT
```
Note that to start the container with Podman, you can do the same but replace `docker` at the start of the command with
`podman`. If you are using `podman` older than `4.7.0`, please also replace `host.docker.internal` in the `OLLAMA_URL`
@ -100,9 +100,8 @@ with `host.containers.internal`.

The configuration YAML for the Ollama distribution is available at `distributions/ollama/run.yaml`.

```{tip}

Docker containers run in their own isolated network namespaces on Linux. To allow the container to communicate with services running on the host via `localhost`, you need `--network=host`. This makes the container use the host’s network directly so it can connect to Ollama running on `localhost:11434`.
:::tip
Docker containers run in their own isolated network namespaces on Linux. To allow the container to communicate with services running on the host via `localhost`, you need `--network=host`. This makes the container use the host's network directly so it can connect to Ollama running on `localhost:11434`.

Linux users having issues running the above command should instead try the following:
```bash
@ -111,12 +110,11 @@ docker run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  --network=host \
  -e OLLAMA_URL=http://localhost:11434 \
  llamastack/distribution-starter \
  --port $LLAMA_STACK_PORT \
  --env OLLAMA_URL=http://localhost:11434
  --port $LLAMA_STACK_PORT
```
:::
::::
You will see output like below:
```
INFO: Application startup complete.
@ -127,33 +125,31 @@ Now you can use the Llama Stack client to run inference and build agents!

You can reuse the server setup or use the [Llama Stack Client](https://github.com/meta-llama/llama-stack-client-python/).
Note that the client package is already included in the `llama-stack` package.
</TabItem>
</Tabs>

### Step 3: Run Client CLI

Open a new terminal and navigate to the same directory you started the server from. Then set up a new or activate your
existing server virtual environment.

::::{tab-set}

:::{tab-item} Reuse Server `venv`
<Tabs>
<TabItem value="reuse" label="Reuse Server venv">
```bash
# The client is included in the llama-stack package so we just activate the server venv
source .venv/bin/activate
```
:::

:::{tab-item} Install with `venv`
</TabItem>
<TabItem value="install" label="Install with venv">
```bash
uv venv client --python 3.12
source client/bin/activate
pip install llama-stack-client
```
:::
</TabItem>
</Tabs>

::::

Now let's use the `llama-stack-client` [CLI](../references/llama_stack_client_cli_reference.md) to check the
Now let's use the `llama-stack-client` [CLI](../references/llama_stack_client_cli_reference) to check the
connectivity to the server.

```bash
@ -172,7 +168,7 @@ Available Models
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ model_type ┃ identifier ┃ provider_resource_id ┃ metadata ┃ provider_id ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
│ embedding │ ollama/all-minilm:l6-v2 │ all-minilm:l6-v2 │ {'embedding_dimension': 384.0} │ ollama │
│ embedding │ ollama/nomic-embed-text:v1.5 │ nomic-embed-text:v1.5 │ {'embedding_dimension': 768.0} │ ollama │
├─────────────────┼─────────────────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────────┼───────────────────────┤
│ ... │ ... │ ... │ │ ... │
├─────────────────┼─────────────────────────────────────┼─────────────────────────────────────┼───────────────────────────────────────────┼───────────────────────┤
@ -224,12 +220,11 @@ OpenAIChatCompletion(

### Step 4: Run the Demos

Note that these demos show the [Python Client SDK](../references/python_sdk_reference/index.md).
Other SDKs are also available, please refer to the [Client SDK](../index.md#client-sdks) list for the complete options.
Note that these demos show the [Python Client SDK](../references/python_sdk_reference/).
Other SDKs are also available, please refer to the [Client SDK](/docs/) list for the complete options.

::::{tab-set}

:::{tab-item} Basic Inference
<Tabs>
<TabItem value="inference" label="Basic Inference">
Now you can run inference using the Llama Stack client SDK.

#### i. Create the Script
@ -269,9 +264,8 @@ Which will output:
Model: ollama/llama3.2:3b
OpenAIChatCompletion(id='chatcmpl-30cd0f28-a2ad-4b6d-934b-13707fc60ebf', choices=[OpenAIChatCompletionChoice(finish_reason='stop', index=0, message=OpenAIChatCompletionChoiceMessageOpenAIAssistantMessageParam(role='assistant', content="Lines of code unfold\nAlgorithms dance with ease\nLogic's gentle kiss", name=None, tool_calls=None, refusal=None, annotations=None, audio=None, function_call=None), logprobs=None)], created=1751732480, model='llama3.2:3b', object='chat.completion', service_tier=None, system_fingerprint='fp_ollama', usage={'completion_tokens': 16, 'prompt_tokens': 37, 'total_tokens': 53, 'completion_tokens_details': None, 'prompt_tokens_details': None})
```
:::

:::{tab-item} Build a Simple Agent
</TabItem>
<TabItem value="agent" label="Build a Simple Agent">
Next we can move beyond simple inference and build an agent that can perform tasks using the Llama Stack server.
#### i. Create the Script
Create a file `agent.py` and add the following code:
@ -314,7 +308,7 @@ stream = agent.create_turn(
for event in AgentEventLogger().log(stream):
    event.print()
```
### ii. Run the Script
#### ii. Run the Script
Let's run the script using `uv`
```bash
uv run python agent.py
@ -439,9 +433,8 @@ uv run python agent.py

So, that's me in a nutshell!
```
:::

:::{tab-item} Build a RAG Agent
</TabItem>
<TabItem value="rag" label="Build a RAG Agent">

For our last demo, we can build a RAG agent that can answer questions about the Torchtune project using the documents
in a vector database.
@ -544,10 +537,9 @@ uv run python rag_agent.py
...
Overall, DORA is a powerful reinforcement learning algorithm that can learn complex tasks from human demonstrations. However, it requires careful consideration of the challenges and limitations to achieve optimal results.
```
:::

::::
</TabItem>
</Tabs>

**You're Ready to Build Your Own Apps!**

Congrats! 🥳 Now you're ready to [build your own Llama Stack applications](../building_applications/index)! 🚀
Congrats! 🥳 Now you're ready to [build your own Llama Stack applications](../building_applications/)! 🚀
@ -1,3 +1,9 @@
---
description: We have a number of client-side SDKs available for different languages.
sidebar_label: Libraries
sidebar_position: 2
title: Libraries (SDKs)
---
## Libraries (SDKs)

We have a number of client-side SDKs available for different languages.
@ -7,4 +13,4 @@ We have a number of client-side SDKs available for different languages.
| Python | [llama-stack-client-python](https://github.com/meta-llama/llama-stack-client-python) | [](https://pypi.org/project/llama_stack_client/)
| Swift | [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift/tree/latest-release) | [](https://swiftpackageindex.com/meta-llama/llama-stack-client-swift)
| Node | [llama-stack-client-node](https://github.com/meta-llama/llama-stack-client-node) | [](https://npmjs.org/package/llama-stack-client)
| Kotlin | [llama-stack-client-kotlin](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release) | [](https://central.sonatype.com/artifact/com.llama.llamastack/llama-stack-client-kotlin)
| Kotlin | [llama-stack-client-kotlin](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release) | [](https://central.sonatype.com/artifact/com.llama.llamastack/llama-stack-client-kotlin)
docs/docs/getting_started/quickstart.mdx (new file, 82 lines)

@ -0,0 +1,82 @@
---
description: environments.
sidebar_label: Quickstart
sidebar_position: 1
title: Quickstart
---

Get started with Llama Stack in minutes!

Llama Stack is a stateful service with REST APIs to support the seamless transition of AI applications across different
environments. You can build and test using a local server first and deploy to a hosted endpoint for production.

In this guide, we'll walk through how to build a RAG application locally using Llama Stack with [Ollama](https://ollama.com/)
as the inference [provider](/docs/providers/inference) for a Llama Model.

**💡 Notebook Version:** You can also follow this quickstart guide in a Jupyter notebook format: [quick_start.ipynb](https://github.com/meta-llama/llama-stack/blob/main/docs/quick_start.ipynb)

#### Step 1: Install and setup
1. Install [uv](https://docs.astral.sh/uv/)
2. Run inference on a Llama model with [Ollama](https://ollama.com/download)
```bash
ollama run llama3.2:3b --keepalive 60m
```

#### Step 2: Run the Llama Stack server

```python file=./demo_script.py title="demo_script.py"
```

We will use `uv` to install dependencies and run the Llama Stack server.
```bash
# Install dependencies for the starter distribution
uv run --with llama-stack llama stack list-deps starter | xargs -L1 uv pip install

# Run the server
OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run starter
```
#### Step 3: Run the demo
Now open up a new terminal and copy the following script into a file named `demo_script.py`.

We will use `uv` to run the script
```
uv run --with llama-stack-client,fire,requests demo_script.py
```
And you should see output like below.
```python
>print(resp.output[1].content[0].text)
To do great work, consider the following principles:

1. **Follow Your Interests**: Engage in work that genuinely excites you. If you find an area intriguing, pursue it without being overly concerned about external pressures or norms. You should create things that you would want for yourself, as this often aligns with what others in your circle might want too.

2. **Work Hard on Ambitious Projects**: Ambition is vital, but it should be tempered by genuine interest. Instead of detailed planning for the future, focus on exciting projects that keep your options open. This approach, known as "staying upwind," allows for adaptability and can lead to unforeseen achievements.

3. **Choose Quality Colleagues**: Collaborating with talented colleagues can significantly affect your own work. Seek out individuals who offer surprising insights and whom you admire. The presence of good colleagues can elevate the quality of your work and inspire you.

4. **Maintain High Morale**: Your attitude towards work and life affects your performance. Cultivating optimism and viewing yourself as lucky rather than victimized can boost your productivity. It’s essential to care for your physical health as well since it directly impacts your mental faculties and morale.

5. **Be Consistent**: Great work often comes from cumulative effort. Daily progress, even in small amounts, can result in substantial achievements over time. Emphasize consistency and make the work engaging, as this reduces the perceived burden of hard labor.

6. **Embrace Curiosity**: Curiosity is a driving force that can guide you in selecting fields of interest, pushing you to explore uncharted territories. Allow it to shape your work and continually seek knowledge and insights.

By focusing on these aspects, you can create an environment conducive to great work and personal fulfillment.
```

Congratulations! You've successfully built your first RAG application using Llama Stack! 🎉🥳

:::tip HuggingFace access

If you are getting a **401 Client Error** from HuggingFace for the **all-MiniLM-L6-v2** model, try setting **HF_TOKEN** to a valid HuggingFace token in your environment

:::

### Next Steps

Now you're ready to dive deeper into Llama Stack!
- Explore the [Detailed Tutorial](./detailed_tutorial).
- Try the [Getting Started Notebook](https://github.com/meta-llama/llama-stack/blob/main/docs/getting_started.ipynb).
- Browse more [Notebooks on GitHub](https://github.com/meta-llama/llama-stack/tree/main/docs/notebooks).
- Learn about Llama Stack [Concepts](/docs/concepts).
- Discover how to [Build Llama Stacks](/docs/distributions).
- Refer to our [References](/docs/references) for details on the Llama CLI and Python SDK.
- Check out the [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/tree/main/examples) repository for example applications and tutorials.
docs/docs/index.mdx (new file, 101 lines)

@ -0,0 +1,101 @@
---
sidebar_position: 1
title: Welcome to Llama Stack
description: Llama Stack is the open-source framework for building generative AI applications
sidebar_label: Intro
tags:
  - getting-started
  - overview
---

# Welcome to Llama Stack

Llama Stack is the open-source framework for building generative AI applications.

:::tip Llama 4 is here!

Check out [Getting Started with Llama 4](https://colab.research.google.com/github/llamastack/llama-stack/blob/main/docs/getting_started_llama4.ipynb)

:::

:::tip News

Llama Stack is now available! See the [release notes](https://github.com/llamastack/llama-stack/releases) for more details.

:::


## What is Llama Stack?

Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments. More specifically, it provides:

- **Unified API layer** for Inference, RAG, Agents, Tools, Safety, Evals.
- **Plugin architecture** to support the rich ecosystem of implementations of the different APIs in different environments like local development, on-premises, cloud, and mobile.
- **Prepackaged verified distributions** which offer a one-stop solution for developers to get started quickly and reliably in any environment
- **Multiple developer interfaces** like CLI and SDKs for Python, Node, iOS, and Android
- **Standalone applications** as examples for how to build production-grade AI applications with Llama Stack

<img src="/img/llama-stack.png" alt="Llama Stack" width="400px" />

Our goal is to provide pre-packaged implementations (aka "distributions") which can be run in a variety of deployment environments. LlamaStack can assist you in your entire app development lifecycle - start iterating on local, mobile or desktop and seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience is available.

## How does Llama Stack work?

Llama Stack consists of a server (with multiple pluggable API providers) and Client SDKs meant to be used in your applications. The server can be run in a variety of environments, including local (inline) development, on-premises, and cloud. The client SDKs are available for Python, Swift, Node, and Kotlin.
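As a minimal sketch of that client/server split (assuming a server is already running locally on the default port 8321, as in the quickstart), the Python SDK can connect and list the models the server exposes:

```python
from llama_stack_client import LlamaStackClient

# Point the client at a locally running Llama Stack server (default port 8321).
client = LlamaStackClient(base_url="http://localhost:8321")

# The same call works unchanged against an on-prem or cloud deployment;
# only the base_url needs to change.
for model in client.models.list():
    print(model.identifier)
```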
## Quick Links

- Ready to build? Check out the [Getting Started Guide](/docs/getting_started/quickstart) to get started.
- Want to contribute? See the [Contributing Guide](https://github.com/llamastack/llama-stack/blob/main/CONTRIBUTING.md).
- Explore [Example Applications](https://github.com/llamastack/llama-stack-apps) built with Llama Stack.

## Rich Ecosystem Support

Llama Stack provides adapters for popular providers across all API categories:

- **Inference**: Meta Reference, Ollama, Fireworks, Together, NVIDIA, vLLM, AWS Bedrock, OpenAI, Anthropic, and more
- **Vector Databases**: FAISS, Chroma, Milvus, Postgres, Weaviate, Qdrant, and others
- **Safety**: Llama Guard, Prompt Guard, Code Scanner, AWS Bedrock
- **Training & Evaluation**: HuggingFace, TorchTune, NVIDIA NEMO

:::info Provider Details
For complete provider compatibility and setup instructions, see our [Providers Documentation](https://llamastack.github.io/docs/providers/).
:::

## Get Started Today

<div style={{display: 'flex', gap: '1rem', flexWrap: 'wrap', margin: '2rem 0'}}>
  <a href="/docs/getting_started/quickstart"
     style={{
       background: 'var(--ifm-color-primary)',
       color: 'white',
       padding: '0.75rem 1.5rem',
       borderRadius: '0.5rem',
       textDecoration: 'none',
       fontWeight: 'bold'
     }}>
    🚀 Quick Start Guide
  </a>
  <a href="https://github.com/llamastack/llama-stack-apps"
     style={{
       border: '2px solid var(--ifm-color-primary)',
       color: 'var(--ifm-color-primary)',
       padding: '0.75rem 1.5rem',
       borderRadius: '0.5rem',
       textDecoration: 'none',
       fontWeight: 'bold'
     }}>
    📚 Example Apps
  </a>
  <a href="https://github.com/llamastack/llama-stack"
     style={{
       border: '2px solid #666',
       color: '#666',
       padding: '0.75rem 1.5rem',
       borderRadius: '0.5rem',
       textDecoration: 'none',
       fontWeight: 'bold'
     }}>
    ⭐ Star on GitHub
  </a>
</div>
docs/docs/providers/agents/index.mdx (new file, 17 lines)

@ -0,0 +1,17 @@
---
description: "Agents

  APIs for creating and interacting with agentic systems."
sidebar_label: Agents
title: Agents
---

# Agents

## Overview

Agents

APIs for creating and interacting with agentic systems.

This section contains documentation for all available providers for the **agents** API.
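For orientation before the provider pages, here is a minimal client-side sketch of this API based on the detailed tutorial's agent demo; the model id and session name are illustrative, and any LLM registered with your server works:

```python
from llama_stack_client import Agent, AgentEventLogger, LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Model id is an assumption (starter distribution with Ollama); use any registered LLM.
agent = Agent(client, model="ollama/llama3.2:3b", instructions="You are a helpful assistant.")
session_id = agent.create_session("demo-session")

stream = agent.create_turn(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    session_id=session_id,
    stream=True,
)
for event in AgentEventLogger().log(stream):
    event.print()
```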
docs/docs/providers/agents/inline_meta-reference.mdx (new file, 31 lines)

@ -0,0 +1,31 @@
---
description: "Meta's reference implementation of an agent system that can use tools, access vector databases, and perform complex reasoning tasks."
sidebar_label: Meta-Reference
title: inline::meta-reference
---

# inline::meta-reference

## Description

Meta's reference implementation of an agent system that can use tools, access vector databases, and perform complex reasoning tasks.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `persistence` | `<class 'inline.agents.meta_reference.config.AgentPersistenceConfig'>` | No | | |

## Sample Configuration

```yaml
persistence:
  agent_state:
    namespace: agents
    backend: kv_default
  responses:
    table_name: responses
    backend: sql_default
    max_write_queue_size: 10000
    num_writers: 4
```
@ -1,3 +1,18 @@
---
description: "The Batches API enables efficient processing of multiple requests in a single operation,
  particularly useful for processing large datasets, batch evaluation workflows, and
  cost-effective inference at scale.

  The API is designed to allow use of openai client libraries for seamless integration.

  This API provides the following extensions:
  - idempotent batch creation

  Note: This API is currently under active development and may undergo changes."
sidebar_label: Batches
title: Batches
---

# Batches

## Overview
@ -14,11 +29,3 @@ The Batches API enables efficient processing of multiple requests in a single op
Note: This API is currently under active development and may undergo changes.

This section contains documentation for all available providers for the **batches** API.

## Providers

```{toctree}
:maxdepth: 1

inline_reference
```
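Because the API is designed to work with the OpenAI client libraries, batch submission can look roughly like the sketch below; the JSONL file name and target endpoint are illustrative, and exact behavior depends on the configured batches provider while the API is under active development:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

# A JSONL file of request bodies, one request per line, uploaded for batch processing.
batch_input = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_input.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```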
@ -1,3 +1,9 @@
---
description: "Reference implementation of batches API with KVStore persistence."
sidebar_label: Reference
title: inline::reference
---

# inline::reference

## Description
@ -8,7 +14,7 @@ Reference implementation of batches API with KVStore persistence.

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | Configuration for the key-value store backend. |
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | Configuration for the key-value store backend. |
| `max_concurrent_batches` | `<class 'int'>` | No | 1 | Maximum number of concurrent batches to process simultaneously. |
| `max_concurrent_requests_per_batch` | `<class 'int'>` | No | 10 | Maximum number of concurrent requests to process per batch. |

@ -16,8 +22,6 @@ Reference implementation of batches API with KVStore persistence.

```yaml
kvstore:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/batches.db

  namespace: batches
  backend: kv_default
```
@ -1,15 +1,10 @@
---
sidebar_label: Datasetio
title: Datasetio
---

# Datasetio

## Overview

This section contains documentation for all available providers for the **datasetio** API.

## Providers

```{toctree}
:maxdepth: 1

inline_localfs
remote_huggingface
remote_nvidia
```
docs/docs/providers/datasetio/inline_localfs.mdx (new file, 25 lines)

@ -0,0 +1,25 @@
---
description: "Local filesystem-based dataset I/O provider for reading and writing datasets to local storage."
sidebar_label: Localfs
title: inline::localfs
---

# inline::localfs

## Description

Local filesystem-based dataset I/O provider for reading and writing datasets to local storage.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |

## Sample Configuration

```yaml
kvstore:
  namespace: datasetio::localfs
  backend: kv_default
```
docs/docs/providers/datasetio/remote_huggingface.mdx (new file, 25 lines)

@ -0,0 +1,25 @@
---
description: "HuggingFace datasets provider for accessing and managing datasets from the HuggingFace Hub."
sidebar_label: Remote - Huggingface
title: remote::huggingface
---

# remote::huggingface

## Description

HuggingFace datasets provider for accessing and managing datasets from the HuggingFace Hub.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |

## Sample Configuration

```yaml
kvstore:
  namespace: datasetio::huggingface
  backend: kv_default
```
@ -1,3 +1,9 @@
---
description: "NVIDIA's dataset I/O provider for accessing datasets from NVIDIA's data platform."
sidebar_label: Remote - Nvidia
title: remote::nvidia
---

# remote::nvidia

## Description
@ -20,6 +26,4 @@ api_key: ${env.NVIDIA_API_KEY:=}
dataset_namespace: ${env.NVIDIA_DATASET_NAMESPACE:=default}
project_id: ${env.NVIDIA_PROJECT_ID:=test-project}
datasets_url: ${env.NVIDIA_DATASETS_URL:=http://nemo.test}

```
docs/docs/providers/eval/index.mdx (new file, 17 lines)

@ -0,0 +1,17 @@
---
description: "Evaluations

  Llama Stack Evaluation API for running evaluations on model and agent candidates."
sidebar_label: Eval
title: Eval
---

# Eval

## Overview

Evaluations

Llama Stack Evaluation API for running evaluations on model and agent candidates.

This section contains documentation for all available providers for the **eval** API.
@ -1,5 +1,7 @@
---
orphan: true
description: "Meta's reference implementation of evaluation tasks with support for multiple languages and evaluation metrics."
sidebar_label: Meta-Reference
title: inline::meta-reference
---

# inline::meta-reference
@ -12,14 +14,12 @@ Meta's reference implementation of evaluation tasks with support for multiple la

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | |
| `kvstore` | `<class 'llama_stack.core.storage.datatypes.KVStoreReference'>` | No | | |

## Sample Configuration

```yaml
kvstore:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db

  namespace: eval
  backend: kv_default
```
@ -1,5 +1,7 @@
---
orphan: true
description: "NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform."
sidebar_label: Remote - Nvidia
title: remote::nvidia
---

# remote::nvidia
@ -18,6 +20,4 @@ NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform.

```yaml
evaluator_url: ${env.NVIDIA_EVALUATOR_URL:=http://localhost:7331}

```
@ -11,38 +11,6 @@ an example entry in your build.yaml should look like:
module: ramalama_stack
```

Additionally you can configure the `external_providers_dir` in your Llama Stack configuration. This method is in the process of being deprecated in favor of the `module` method. If using this method, the external provider directory should contain your external provider specifications:

```yaml
external_providers_dir: ~/.llama/providers.d/
```

## Directory Structure

The external providers directory should follow this structure:

```
providers.d/
  remote/
    inference/
      custom_ollama.yaml
      vllm.yaml
    vector_io/
      qdrant.yaml
    safety/
      llama-guard.yaml
  inline/
    inference/
      custom_ollama.yaml
      vllm.yaml
    vector_io/
      qdrant.yaml
    safety/
      llama-guard.yaml
```

Each YAML file in these directories defines a provider specification for that particular API.

## Provider Types

Llama Stack supports two types of external providers:
@ -50,30 +18,37 @@ Llama Stack supports two types of external providers:
1. **Remote Providers**: Providers that communicate with external services (e.g., cloud APIs)
2. **Inline Providers**: Providers that run locally within the Llama Stack process


### Provider Specification (Common between inline and remote providers)

- `provider_type`: The type of the provider to be installed (remote or inline). eg. `remote::ollama`
- `api`: The API for this provider, eg. `inference`
- `config_class`: The full path to the configuration class
- `module`: The Python module containing the provider implementation
- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
- `api_dependencies`: List of Llama Stack APIs that this provider depends on
- `provider_data_validator`: Optional validator for provider data.
- `pip_packages`: List of Python packages required by the provider

### Remote Provider Specification

Remote providers are used when you need to communicate with external services. Here's an example for a custom Ollama provider:

```yaml
adapter:
  adapter_type: custom_ollama
  pip_packages:
  - ollama
  - aiohttp
  config_class: llama_stack_ollama_provider.config.OllamaImplConfig
  module: llama_stack_ollama_provider
adapter_type: custom_ollama
provider_type: "remote::ollama"
pip_packages:
- ollama
- aiohttp
config_class: llama_stack_ollama_provider.config.OllamaImplConfig
module: llama_stack_ollama_provider
api_dependencies: []
optional_api_dependencies: []
```

#### Adapter Configuration
#### Remote Provider Configuration

The `adapter` section defines how to load and configure the provider:

- `adapter_type`: A unique identifier for this adapter
- `pip_packages`: List of Python packages required by the provider
- `config_class`: The full path to the configuration class
- `module`: The Python module containing the provider implementation
- `adapter_type`: A unique identifier for this adapter, eg. `ollama`

### Inline Provider Specification

@ -81,6 +56,7 @@ Inline providers run locally within the Llama Stack process. Here's an example f

```yaml
module: llama_stack_vector_provider
provider_type: inline::llama_stack_vector_provider
config_class: llama_stack_vector_provider.config.VectorStoreConfig
pip_packages:
- faiss-cpu
@ -95,12 +71,6 @@ container_image: custom-vector-store:latest # optional

#### Inline Provider Fields

- `module`: The Python module containing the provider implementation
- `config_class`: The full path to the configuration class
- `pip_packages`: List of Python packages required by the provider
- `api_dependencies`: List of Llama Stack APIs that this provider depends on
- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
- `provider_data_validator`: Optional validator for provider data
- `container_image`: Optional container image to use instead of pip packages

## Required Fields
@ -113,20 +83,17 @@ All providers must contain a `get_provider_spec` function in their `provider` mo
from llama_stack.providers.datatypes import (
    ProviderSpec,
    Api,
    AdapterSpec,
    remote_provider_spec,
    RemoteProviderSpec,
)


def get_provider_spec() -> ProviderSpec:
    return remote_provider_spec(
    return RemoteProviderSpec(
        api=Api.inference,
        adapter=AdapterSpec(
            adapter_type="ramalama",
            pip_packages=["ramalama>=0.8.5", "pymilvus"],
            config_class="ramalama_stack.config.RamalamaImplConfig",
            module="ramalama_stack",
        ),
        adapter_type="ramalama",
        pip_packages=["ramalama>=0.8.5", "pymilvus"],
        config_class="ramalama_stack.config.RamalamaImplConfig",
        module="ramalama_stack",
    )
```

@ -197,18 +164,16 @@ information. Execute the test for the Provider type you are developing.
If your external provider isn't being loaded:

1. Check that `module` points to a published pip package with a top level `provider` module including `get_provider_spec`.
1. Check that the `external_providers_dir` path is correct and accessible.
2. Verify that the YAML files are properly formatted.
3. Ensure all required Python packages are installed.
4. Check the Llama Stack server logs for any error messages - turn on debug logging to get more
   information using `LLAMA_STACK_LOGGING=all=debug`.
5. Verify that the provider package is installed in your Python environment if using `external_providers_dir`.

## Examples

### Example using `external_providers_dir`: Custom Ollama Provider
### How to create an external provider module

Here's a complete example of creating and using a custom Ollama provider:
If you are creating a new external provider called `llama-stack-provider-ollama` here is how you would set up the package properly:

1. First, create the provider package:

@ -230,33 +195,28 @@ requires-python = ">=3.12"
dependencies = ["llama-stack", "pydantic", "ollama", "aiohttp"]
```

3. Create the provider specification:

```yaml
# ~/.llama/providers.d/remote/inference/custom_ollama.yaml
adapter:
  adapter_type: custom_ollama
  pip_packages: ["ollama", "aiohttp"]
  config_class: llama_stack_provider_ollama.config.OllamaImplConfig
  module: llama_stack_provider_ollama
api_dependencies: []
optional_api_dependencies: []
```

4. Install the provider:
3. Install the provider:

```bash
uv pip install -e .
```

5. Configure Llama Stack to use external providers:
4. Edit `provider.py`

```yaml
external_providers_dir: ~/.llama/providers.d/
provider.py must be updated to contain `get_provider_spec`. This is used by llama stack to install the provider.

```python
def get_provider_spec() -> ProviderSpec:
    return RemoteProviderSpec(
        api=Api.inference,
        adapter_type="llama-stack-provider-ollama",
        pip_packages=["ollama", "aiohttp"],
        config_class="llama_stack_provider_ollama.config.OllamaImplConfig",
        module="llama_stack_provider_ollama",
    )
```

The provider will now be available in Llama Stack with the type `remote::custom_ollama`.

5. Implement the provider as outlined above with `get_provider_impl` or `get_adapter_impl`, etc.
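For step 5, the entry point is conventionally an async factory function exported by the provider module. The sketch below is illustrative only: the adapter class name and the exact signature are assumptions, so check the provider templates in the main repository for what your API expects.

```python
# llama_stack_provider_ollama/__init__.py (illustrative sketch, not the canonical implementation)
from .config import OllamaImplConfig


async def get_adapter_impl(config: OllamaImplConfig, _deps):
    # OllamaInferenceAdapter is a hypothetical class implementing the inference protocol.
    from .ollama import OllamaInferenceAdapter

    impl = OllamaInferenceAdapter(config)
    await impl.initialize()
    return impl
```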
### Example using `module`: ramalama-stack

@ -275,12 +235,11 @@ distribution_spec:
module: ramalama_stack==0.3.0a0
image_type: venv
image_name: null
external_providers_dir: null
additional_pip_packages:
- aiosqlite
- sqlalchemy[asyncio]
```

No other steps are required other than `llama stack build` and `llama stack run`. The build process will use `module` to install all of the provider dependencies, retrieve the spec, etc.
No other steps are required beyond installing dependencies with `llama stack list-deps <distro> | xargs -L1 uv pip install` and then running `llama stack run`. The CLI will use `module` to install the provider dependencies, retrieve the spec, etc.

The provider will now be available in Llama Stack with the type `remote::ramalama`.
The provider will now be available in Llama Stack with the type `remote::ramalama`.
@ -5,9 +5,7 @@ Llama Stack supports external providers that live outside of the main codebase.
- Share providers with others without contributing to the main codebase
- Keep provider-specific code separate from the core Llama Stack code

```{toctree}
:maxdepth: 1
## External Provider Documentation

external-providers-list
external-providers-guide
```
- [Known External Providers](./external-providers-list.mdx)
- [Creating External Providers](./external-providers-guide.mdx)
docs/docs/providers/files/index.mdx (new file, 17 lines)

@ -0,0 +1,17 @@
---
description: "Files

  This API is used to upload documents that can be used with other Llama Stack APIs."
sidebar_label: Files
title: Files
---

# Files

## Overview

Files

This API is used to upload documents that can be used with other Llama Stack APIs.

This section contains documentation for all available providers for the **files** API.
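For orientation, uploading a file through the OpenAI-compatible endpoint looks roughly like the quickstart's demo_script.py; the file name and purpose value below are just examples:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

# Upload a local document so other APIs (e.g. vector stores) can reference it by id.
with open("notes.txt", "rb") as f:
    uploaded = client.files.create(file=f, purpose="assistants")

print(uploaded.id)
```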
@ -1,3 +1,9 @@
---
description: "Local filesystem-based file storage provider for managing files and documents locally."
sidebar_label: Localfs
title: inline::localfs
---

# inline::localfs

## Description
@ -9,7 +15,7 @@ Local filesystem-based file storage provider for managing files and documents lo
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `storage_dir` | `<class 'str'>` | No | | Directory to store uploaded files |
| `metadata_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | SQL store configuration for file metadata |
| `metadata_store` | `<class 'llama_stack.core.storage.datatypes.SqlStoreReference'>` | No | | SQL store configuration for file metadata |
| `ttl_secs` | `<class 'int'>` | No | 31536000 | |

## Sample Configuration
@ -17,8 +23,6 @@ Local filesystem-based file storage provider for managing files and documents lo
```yaml
storage_dir: ${env.FILES_STORAGE_DIR:=~/.llama/dummy/files}
metadata_store:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/files_metadata.db

  table_name: files_metadata
  backend: sql_default
```
@ -1,3 +1,9 @@
---
description: "AWS S3-based file storage provider for scalable cloud file management with metadata persistence."
sidebar_label: Remote - S3
title: remote::s3
---

# remote::s3

## Description
@ -14,7 +20,7 @@ AWS S3-based file storage provider for scalable cloud file management with metad
| `aws_secret_access_key` | `str \| None` | No | | AWS secret access key (optional if using IAM roles) |
| `endpoint_url` | `str \| None` | No | | Custom S3 endpoint URL (for MinIO, LocalStack, etc.) |
| `auto_create_bucket` | `<class 'bool'>` | No | False | Automatically create the S3 bucket if it doesn't exist |
| `metadata_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | SQL store configuration for file metadata |
| `metadata_store` | `<class 'llama_stack.core.storage.datatypes.SqlStoreReference'>` | No | | SQL store configuration for file metadata |

## Sample Configuration

@ -26,8 +32,6 @@ aws_secret_access_key: ${env.AWS_SECRET_ACCESS_KEY:=}
endpoint_url: ${env.S3_ENDPOINT_URL:=}
auto_create_bucket: ${env.S3_AUTO_CREATE_BUCKET:=false}
metadata_store:
  type: sqlite
  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/s3_files_metadata.db

  table_name: s3_files_metadata
  backend: sql_default
```
@ -1,3 +1,10 @@
---
title: API Providers
description: Ecosystem of providers for swapping implementations across the same API
sidebar_label: Overview
sidebar_position: 1
---

# API Providers

The goal of Llama Stack is to build an ecosystem where users can easily swap out different implementations for the same API. Examples for these include:
@ -12,17 +19,17 @@ Providers come in two flavors:

Importantly, Llama Stack always strives to provide at least one fully inline provider for each API so you can iterate on a fully featured environment locally.

```{toctree}
:maxdepth: 1
## Provider Categories

external/index
openai
inference/index
agents/index
datasetio/index
safety/index
telemetry/index
vector_io/index
tool_runtime/index
files/index
```
- **[External Providers](external/index.mdx)** - Guide for building and using external providers
- **[Inference](inference/index.mdx)** - LLM and embedding model providers
- **[Agents](agents/index.mdx)** - Agentic system providers
- **[DatasetIO](datasetio/index.mdx)** - Dataset and data loader providers
- **[Safety](safety/index.mdx)** - Content moderation and safety providers
- **[Vector IO](vector_io/index.mdx)** - Vector database providers
- **[Tool Runtime](tool_runtime/index.mdx)** - Tool and protocol providers
- **[Files](files/index.mdx)** - File system and storage providers

## Other information about Providers
- **[OpenAI Compatibility](./openai.mdx)** - OpenAI API compatibility layer
- **[OpenAI-Compatible Responses Limitations](./openai_responses_limitations.mdx)** - Known limitations of the Responses API in Llama Stack
docs/docs/providers/inference/index.mdx (new file, 27 lines)

@ -0,0 +1,27 @@
---
description: "Inference

  Llama Stack Inference API for generating completions, chat completions, and embeddings.

  This API provides the raw interface to the underlying models. Three kinds of models are supported:
  - LLM models: these models generate \"raw\" and \"chat\" (conversational) completions.
  - Embedding models: these models generate embeddings to be used for semantic search.
  - Rerank models: these models reorder the documents based on their relevance to a query."
sidebar_label: Inference
title: Inference
---

# Inference

## Overview

Inference

Llama Stack Inference API for generating completions, chat completions, and embeddings.

This API provides the raw interface to the underlying models. Three kinds of models are supported:
- LLM models: these models generate "raw" and "chat" (conversational) completions.
- Embedding models: these models generate embeddings to be used for semantic search.
- Rerank models: these models reorder the documents based on their relevance to a query.

This section contains documentation for all available providers for the **inference** API.
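For orientation before the individual provider pages, the sketch below shows the OpenAI-compatible surface for chat completions and embeddings; the model ids assume the starter distribution with Ollama from the getting-started guides:

```python
from openai import OpenAI

# Llama Stack exposes an OpenAI-compatible API under /v1/ on the running server.
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

# LLM model: chat completion
chat = client.chat.completions.create(
    model="ollama/llama3.2:3b",
    messages=[{"role": "user", "content": "Write a haiku about coding."}],
)
print(chat.choices[0].message.content)

# Embedding model: semantic-search vectors
emb = client.embeddings.create(
    model="ollama/all-minilm:l6-v2",
    input=["Llama Stack standardizes AI building blocks."],
)
print(len(emb.data[0].embedding))
```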
@ -1,3 +1,9 @@
---
description: "Meta's reference implementation of inference with support for various model formats and optimization techniques."
sidebar_label: Meta-Reference
title: inline::meta-reference
---

# inline::meta-reference

## Description
@ -27,6 +33,4 @@ quantization:
model_parallel_size: ${env.MODEL_PARALLEL_SIZE:=0}
max_batch_size: ${env.MAX_BATCH_SIZE:=1}
max_seq_len: ${env.MAX_SEQ_LEN:=4096}

```
@ -0,0 +1,17 @@
---
description: "Sentence Transformers inference provider for text embeddings and similarity search."
sidebar_label: Sentence-Transformers
title: inline::sentence-transformers
---

# inline::sentence-transformers

## Description

Sentence Transformers inference provider for text embeddings and similarity search.

## Sample Configuration

```yaml
{}
```
docs/docs/providers/inference/remote_anthropic.mdx (new file, 25 lines)

@ -0,0 +1,25 @@
---
description: "Anthropic inference provider for accessing Claude models and Anthropic's AI services."
sidebar_label: Remote - Anthropic
title: remote::anthropic
---

# remote::anthropic

## Description

Anthropic inference provider for accessing Claude models and Anthropic's AI services.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
| `api_key` | `pydantic.types.SecretStr \| None` | No | | Authentication credential for the provider |

## Sample Configuration

```yaml
api_key: ${env.ANTHROPIC_API_KEY:=}
```
@ -1,3 +1,12 @@
---
description: |
  Azure OpenAI inference provider for accessing GPT models and other Azure services.
  Provider documentation
  https://learn.microsoft.com/en-us/azure/ai-foundry/openai/overview
sidebar_label: Remote - Azure
title: remote::azure
---

# remote::azure

## Description
@ -12,7 +21,9 @@ https://learn.microsoft.com/en-us/azure/ai-foundry/openai/overview

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | Azure API key for Azure |
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
| `api_key` | `pydantic.types.SecretStr \| None` | No | | Authentication credential for the provider |
| `api_base` | `<class 'pydantic.networks.HttpUrl'>` | No | | Azure API base for Azure (e.g., https://your-resource-name.openai.azure.com) |
| `api_version` | `str \| None` | No | | Azure API version for Azure (e.g., 2024-12-01-preview) |
| `api_type` | `str \| None` | No | azure | Azure API type for Azure (e.g., azure) |
@ -24,6 +35,4 @@ api_key: ${env.AZURE_API_KEY:=}
api_base: ${env.AZURE_API_BASE:=}
api_version: ${env.AZURE_API_VERSION:=}
api_type: ${env.AZURE_API_TYPE:=}

```
@ -1,3 +1,9 @@
---
description: "AWS Bedrock inference provider for accessing various AI models through AWS's managed service."
sidebar_label: Remote - Bedrock
title: remote::bedrock
---

# remote::bedrock

## Description
@ -8,6 +14,8 @@ AWS Bedrock inference provider for accessing various AI models through AWS's man

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
| `aws_access_key_id` | `str \| None` | No | | The AWS access key to use. Default use environment variable: AWS_ACCESS_KEY_ID |
| `aws_secret_access_key` | `str \| None` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
| `aws_session_token` | `str \| None` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |
@ -23,6 +31,4 @@ AWS Bedrock inference provider for accessing various AI models through AWS's man

```yaml
{}

```
docs/docs/providers/inference/remote_cerebras.mdx (new file, 27 lines)

@ -0,0 +1,27 @@
---
description: "Cerebras inference provider for running models on Cerebras Cloud platform."
sidebar_label: Remote - Cerebras
title: remote::cerebras
---

# remote::cerebras

## Description

Cerebras inference provider for running models on Cerebras Cloud platform.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
| `api_key` | `pydantic.types.SecretStr \| None` | No | | Authentication credential for the provider |
| `base_url` | `<class 'str'>` | No | https://api.cerebras.ai | Base URL for the Cerebras API |

## Sample Configuration

```yaml
base_url: https://api.cerebras.ai
api_key: ${env.CEREBRAS_API_KEY:=}
```
Some files were not shown because too many files have changed in this diff.