---
title: Using Llama Stack as a Library
description: How to use Llama Stack as a Python library instead of running a server
sidebar_label: Importing as Library
sidebar_position: 5
---
# Using Llama Stack as a Library
## Set Up Llama Stack without a Server
If you plan to use an external service for inference (even Ollama or TGI counts as external), it is often easier to use Llama Stack as a library. This avoids the overhead of setting up and running a server.
```bash
# setup
uv pip install llama-stack
llama stack list-deps starter | xargs -L1 uv pip install
```
```python
import os

from llama_stack.core.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient(
    "starter",
    # provider_data is optional; use it to pass any provider-specific data (e.g. API keys).
    provider_data={"tavily_search_api_key": os.environ["TAVILY_SEARCH_API_KEY"]},
)
```
This parses your config and sets up any inline implementations and remote clients needed by your distribution.
Then, you can access the APIs like `models` and `inference` on the client and call their methods directly:
```python
response = client.models.list()
```
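For example, a chat completion through the OpenAI-compatible interface looks like the sketch below. This assumes your Llama Stack version exposes `client.chat.completions.create`; the model id is illustrative, so substitute one that your providers actually serve (for instance, one returned by `client.models.list()`):
```python
# Sketch only: the model id below is illustrative; pick one served by your providers.
response = client.chat.completions.create(
    model="ollama/llama3.2:3b",
    messages=[{"role": "user", "content": "Write a haiku about coding."}],
)
print(response.choices[0].message.content)
```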
If you've created a [custom distribution](./building_distro), you can also use its run.yaml configuration file directly:
```python
client = LlamaStackAsLibraryClient(config_path)
```
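For instance, `config_path` is simply the filesystem path to that run.yaml; the path below is purely illustrative and should be replaced with wherever your build placed the file:
```python
import os

from llama_stack.core.library_client import LlamaStackAsLibraryClient

# Illustrative path; substitute the run.yaml generated for your custom distribution.
config_path = os.path.expanduser("~/.llama/distributions/my-distro/run.yaml")
client = LlamaStackAsLibraryClient(config_path)
```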