---
title: Using Llama Stack as a Library
description: How to use Llama Stack as a Python library instead of running a server
sidebar_label: Importing as Library
sidebar_position: 5
---

# Using Llama Stack as a Library

## Setup Llama Stack without a Server

If you are planning to use an external service for Inference (even Ollama or TGI counts as external), it is often easier to use Llama Stack as a library. This avoids the overhead of setting up a server.

```bash
# setup
uv pip install llama-stack
llama stack build --distro starter --image-type venv
```

```python
import os

from llama_stack.core.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient(
    "starter",
    # provider_data is optional, but if you need to pass in any provider-specific data, you can do so here.
    provider_data={"tavily_search_api_key": os.environ["TAVILY_SEARCH_API_KEY"]},
)
```

This will parse your config and set up any inline implementations and remote clients needed for your distribution.

You can then access APIs such as `models` and `inference` on the client and call their methods directly:

```python
response = client.models.list()
```
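
For example, a chat completion through the `inference` API might look like the sketch below. The method name and response shape follow the `llama-stack-client` SDK's `chat_completion`, and the model id is a placeholder; substitute one of the ids returned by `client.models.list()`.

```python
# Sketch of an inference call; the model id is a placeholder, replace it with
# one of the ids returned by client.models.list().
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about coding."}],
)
print(response.completion_message.content)
```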

If you've created a [custom distribution](./building_distro), you can also use the run.yaml configuration file directly:

```python
client = LlamaStackAsLibraryClient(config_path)
```
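
For instance, assuming your build wrote its run.yaml to a location like the one below (the path is illustrative; use wherever `llama stack build` placed the file for your distribution):

```python
import os

from llama_stack.core.library_client import LlamaStackAsLibraryClient

# Illustrative path; point this at the run.yaml produced by your custom distribution build.
config_path = os.path.expanduser("~/.llama/distributions/my-distro/my-distro-run.yaml")

client = LlamaStackAsLibraryClient(config_path)
```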