llama-stack-mirror/docs
Ashwin Bharambe bef1b044bd
refactor(passthrough): use AsyncOpenAI instead of AsyncLlamaStackClient (#4085)
We'd like to remove the dependence of `llama-stack` on
`llama-stack-client`. This change is a necessary step in that direction.

A few small cleanups:
- Also enables `embeddings`
- Removes the unused ModelRegistryHelper dependency
- Consolidates to the `auth_credential` field via `RemoteInferenceProviderConfig`
- Implements `list_models()` to fetch models from the downstream `/v1/models` endpoint
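
For context, here is a minimal sketch (not the actual adapter code; the class name, config fields, and model-id prefixing are illustrative) of how a passthrough provider can serve `list_models()` by querying the downstream server's `/v1/models` with AsyncOpenAI:

```python
# Sketch only: illustrates the AsyncOpenAI-based pattern this refactor
# moves to. Class name, config fields, and prefixing are illustrative.
from openai import AsyncOpenAI


class PassthroughInferenceSketch:
    def __init__(self, url: str, auth_credential: str) -> None:
        # The downstream Llama Stack server speaks the OpenAI API, so a
        # plain AsyncOpenAI client pointed at its /v1 prefix is enough.
        self._client = AsyncOpenAI(base_url=f"{url}/v1", api_key=auth_credential)

    async def list_models(self) -> list[str]:
        # Fetch model identifiers from the downstream /v1/models endpoint
        # and namespace them under the passthrough provider.
        page = await self._client.models.list()
        return [f"passthrough/{m.id}" for m in page.data]
```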

## Test Plan

Tested using this script
https://gist.github.com/ashwinb/6356463d10f989c0682ab3bff8589581
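
The gist itself isn't reproduced here, but a minimal sketch of the same kind of end-to-end check (the base URL, API key, and prompts are illustrative, and assume a running Llama Stack server with the passthrough provider configured) looks like:

```python
# Sketch of an end-to-end check against a running Llama Stack server.
# Base URL, api_key, and prompts are illustrative; the model name comes
# from the listing below.
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI(base_url="http://localhost:8321/v1", api_key="none")

    print("Listing models from downstream server...")
    models = await client.models.list()
    print("Available models:", [m.id for m in models.data])

    print("Making inference request...")
    response = await client.chat.completions.create(
        model="passthrough/ollama/llama3.2-vision:11b",
        messages=[{"role": "user", "content": "What is 2 + 2? Answer with a number."}],
    )
    print("Response:", response.choices[0].message.content)

    # Streaming: each chunk arrives as a ChatCompletionChunk.
    stream = await client.chat.completions.create(
        model="passthrough/ollama/llama3.2-vision:11b",
        messages=[{"role": "user", "content": "Count from 1 to 5."}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk)


asyncio.run(main())
```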

Output:
```
Listing models from downstream server...
Available models: ['passthrough/ollama/nomic-embed-text:latest', 'passthrough/ollama/all-minilm:l6-v2',
'passthrough/ollama/llama3.2-vision:11b', 'passthrough/ollama/llama3.2-vision:latest',
'passthrough/ollama/llama-guard3:1b', 'passthrough/ollama/llama3.2:1b', 'passthrough/ollama/all-minilm:latest',
'passthrough/ollama/llama3.2:3b', 'passthrough/ollama/llama3.2:3b-instruct-fp16',
'passthrough/bedrock/meta.llama3-1-8b-instruct-v1:0', 'passthrough/bedrock/meta.llama3-1-70b-instruct-v1:0',
'passthrough/bedrock/meta.llama3-1-405b-instruct-v1:0', 'passthrough/sentence-transformers/nomic-ai/nomic-embed-text-v1.5']

Using LLM model: passthrough/ollama/llama3.2-vision:11b

Making inference request...

Response: 4.

--- Testing streaming ---
Streamed response: ChatCompletionChunk(id='chatcmpl-64', choices=[Choice(delta=ChoiceDelta(content='1', reasoning_content=None, refusal=None, role='assistant', tool_calls=None), finish_reason='', index=0, logprobs=None)], created=1762381674, model='passthrough/ollama/llama3.2-vision:11b', object='chat.completion.chunk', usage=None)
...
5ChatCompletionChunk(id='chatcmpl-64', choices=[Choice(delta=ChoiceDelta(content='', reasoning_content=None, refusal=None, role='assistant', tool_calls=None), finish_reason='stop', index=0, logprobs=None)], created=1762381674, model='passthrough/ollama/llama3.2-vision:11b', object='chat.completion.chunk', usage=None)
```
| Name | Last commit | Date |
|---|---|---|
| docs | refactor(passthrough): use AsyncOpenAI instead of AsyncLlamaStackClient (#4085) | 2025-11-05 18:15:11 -08:00 |
| notebooks | docs: A getting started notebook featuring simple agent examples. (#3955) | 2025-10-29 14:13:34 -04:00 |
| openapi_generator | chore(api)!: remove tool_runtime.rag_tool from the API surface (#4067) | 2025-11-04 14:50:54 -08:00 |
| scripts | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| src | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| static | fix!: BREAKING CHANGE: vector_store: search API response fix (#4080) | 2025-11-05 15:01:48 -08:00 |
| supplementary | docs: adding supplementary markdown content to API specs (#3632) | 2025-10-01 10:15:30 -07:00 |
| zero_to_hero_guide | chore: update doc (#3857) | 2025-10-20 10:33:21 -07:00 |
| docusaurus.config.ts | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| dog.jpg | Support for Llama3.2 models and Swift SDK (#98) | 2024-09-25 10:29:58 -07:00 |
| getting_started.ipynb | chore: update getting_started (#3875) | 2025-10-21 11:09:45 -07:00 |
| getting_started_llama4.ipynb | chore: update doc (#3857) | 2025-10-20 10:33:21 -07:00 |
| getting_started_llama_api.ipynb | chore: update doc (#3857) | 2025-10-20 10:33:21 -07:00 |
| license_header.txt | Initial commit | 2024-07-23 08:32:33 -07:00 |
| original_rfc.md | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| package-lock.json | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| package.json | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| quick_start.ipynb | chore: update quick_start (#3878) | 2025-10-21 11:33:23 -07:00 |
| README.md | feat: Add static file import system for docs (#3882) | 2025-10-24 14:01:33 -04:00 |
| sidebars.ts | fix(docs): remove leftover telemetry sidebar section (#3961) | 2025-10-29 11:20:13 -04:00 |
| tsconfig.json | docs: docusaurus setup (#3541) | 2025-09-24 14:11:30 -07:00 |

# Llama Stack Documentation

Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our GitHub page.

## Render locally

From the llama-stack `docs/` directory, run the following commands to render the docs locally:

```bash
npm install
npm run gen-api-docs all
npm run build
npm run serve
```

You can then open the docs in your browser at http://localhost:3000.

## File Import System

This documentation uses `remark-code-import` to import files directly from the repository, eliminating copy-paste maintenance. Files are automatically embedded at build time.

### Importing Code Files

To import Python code (or any code files) with syntax highlighting, use this syntax in .mdx files:

```python file=./demo_script.py title="demo_script.py"
```

This automatically imports the file content and displays it as a formatted code block with Python syntax highlighting.

**Note:** Paths are relative to the current `.mdx` file location, not the repository root.

### Importing Markdown Files as Content

For importing and rendering markdown files (like CONTRIBUTING.md), use the raw-loader approach:

```jsx
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
import ReactMarkdown from 'react-markdown';

<ReactMarkdown>{Contributing}</ReactMarkdown>
```

Requirements:

- Install dependencies: `npm install --save-dev raw-loader react-markdown`

Path Resolution:

- For `remark-code-import`: paths are relative to the current `.mdx` file location
- For `raw-loader`: paths are also relative to the current `.mdx` file location
- Use `../` to navigate up directories as needed, as in the example below
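
For instance, a hypothetical page at `src/pages/guides/example.mdx` (the path and target file are illustrative) would climb three directories to reach a file at the repository root:

```jsx
// Hypothetical page: src/pages/guides/example.mdx
// Each ../ climbs one directory; three of them reach the repository root.
import Contributing from '!!raw-loader!../../../CONTRIBUTING.md';
```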

## Content

Try out Llama Stack's capabilities through our detailed Jupyter notebooks: