address feedback

Dinesh Yeduguru 2024-11-20 15:51:55 -08:00
parent 15200e23ba
commit efdc67e5b7
7 changed files with 322 additions and 30 deletions


@@ -1,4 +1,4 @@
# Llama Stack Distributions
```{toctree}
@@ -9,6 +9,11 @@ self_hosted_distro/index
remote_hosted_distro/index
ondevice_distro/index
```
## Introduction
Llama Stack Distributions are pre-built Docker containers/Conda environments that assemble APIs and Providers into a consistent whole for the end application developer.
These distributions let you mix and match providers: some can be backed by local code and some can be remote. This flexibility lets you choose the optimal setup for your use case, such as serving a small model locally while using a cloud provider for larger models, all behind a consistent API interface for your application.
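For a concrete sense of what "consistent API interface" means, here is a minimal sketch using the `llama-stack-client` Python SDK (introduced in the Getting Started guide); the port and model name are illustrative and depend on the distribution you run:
```python
from llama_stack_client import LlamaStackClient

# The application code below is identical whether the distribution behind
# this URL serves the model locally (e.g. ollama) or through a remote
# provider (e.g. Fireworks) - only the server-side configuration changes.
client = LlamaStackClient(base_url="http://localhost:5001")

response = client.inference.chat_completion(
    model="Llama3.2-3B-Instruct",  # illustrative; use a model your distribution serves
    messages=[{"role": "user", "content": "hello, what model are you?"}],
)
print(response.completion_message.content)
```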
## Decide Your Build Type
There are two ways to start a Llama Stack:
@@ -53,7 +58,8 @@ Please see our pages in detail for the types of distributions we offer:
$ git clone git@github.com:meta-llama/llama-stack.git
```
### System Requirements
### Starting the Distribution
::::{tab-set}
@@ -99,7 +105,6 @@ Access to Single-Node CPU with Fireworks hosted endpoint via API_KEY from [firew
::::
### Starting the Distribution
::::{tab-set}
:::{tab-item} meta-reference-gpu


@@ -17,12 +17,12 @@ bedrock
We offer deployable distributions where you can host your own Llama Stack server using local inference.
| **Distribution** | **Llama Stack Docker** | Start This Distribution |
|:----------------: |:------------------------------------------: |:-----------------------: |
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) |
| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) |
| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) |
| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) |
| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/together.html) |
| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/fireworks.html) |
| Bedrock | [llamastack/distribution-bedrock](https://hub.docker.com/repository/docker/llamastack/distribution-bedrock/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/bedrock.html) |


@@ -1,14 +1,12 @@
# Getting Started with Llama Stack
```{toctree}
:maxdepth: 2
:hidden:
```
In this guide, we'll walk through using ollama as the inference provider and build a simple Python application that uses the Llama Stack Client SDK.
Llama Stack consists of a distribution server and an accompanying client SDK. The distribution server can be configured to use different providers for inference, memory, agents, evals, etc. This configuration is defined in a yaml file called `run.yaml`.
Running inference on the underlying Llama model is one of the most critical requirements. Depending on what hardware you have available, you have various options. Note that each option has different prerequisites. We will use ollama as the inference provider, as it is the easiest to get started with.
### Step 1. Start the inference server
```bash
export LLAMA_STACK_PORT=5001
@@ -33,12 +31,11 @@ docker run \
```
### Step 3. Use the Llama Stack client SDK
```bash
pip install llama-stack-client
```
#### Check the connectivity to the server
We will use the `llama-stack-client` CLI to check the connectivity to the server. This should be installed in your environment if you installed the SDK.
```bash
llama-stack-client --endpoint http://localhost:5001 models list
@@ -49,7 +46,12 @@ llama-stack-client --endpoint http://localhost:5001 models list
└──────────────────────────────────┴─────────────┴───────────────────────────┴──────────┘
```
Chat completion using the CLI
```bash
llama-stack-client --endpoint http://localhost:5001 inference chat_completion --message "hello, what model are you?"
```
A simple Python example using the client SDK:
```python
from llama_stack_client import LlamaStackClient
@@ -70,13 +72,136 @@ response = client.inference.chat_completion(
print(response.completion_message.content)
```
### Step 4. Your first RAG agent
```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import asyncio

import fire
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types import Attachment
from llama_stack_client.types.agent_create_params import AgentConfig


async def run_main(host: str, port: int, disable_safety: bool = False):
    urls = [
        "memory_optimizations.rst",
        "chat.rst",
        "llama3.rst",
        "datasets.rst",
        "qat_finetune.rst",
        "lora_finetune.rst",
    ]

    attachments = [
        Attachment(
            content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
            mime_type="text/plain",
        )
        for i, url in enumerate(urls)
    ]

    client = LlamaStackClient(
        base_url=f"http://{host}:{port}",
    )

    available_shields = [shield.identifier for shield in client.shields.list()]
    if not available_shields:
        print("No available shields. Disable safety.")
    else:
        print(f"Available shields found: {available_shields}")

    available_models = [model.identifier for model in client.models.list()]
    if not available_models:
        raise ValueError("No available models")
    else:
        selected_model = available_models[0]
        print(f"Using model: {selected_model}")

    agent_config = AgentConfig(
        model=selected_model,
        instructions="You are a helpful assistant",
        sampling_params={
            "strategy": "greedy",
            "temperature": 1.0,
            "top_p": 0.9,
        },
        tools=[
            {
                "type": "memory",
                "memory_bank_configs": [],
                "query_generator_config": {"type": "default", "sep": " "},
                "max_tokens_in_context": 4096,
                "max_chunks": 10,
            },
        ],
        tool_choice="auto",
        tool_prompt_format="json",
        input_shields=available_shields if available_shields else [],
        output_shields=available_shields if available_shields else [],
        enable_session_persistence=False,
    )

    agent = Agent(client, agent_config)
    session_id = agent.create_session("test-session")
    print(f"Created session_id={session_id} for Agent({agent.agent_id})")

    user_prompts = [
        (
            "I am attaching some documentation for Torchtune. Help me answer questions I will ask next.",
            attachments,
        ),
        (
            "What are the top 5 topics that were explained? Only list succinct bullet points.",
            None,
        ),
        (
            "Was anything related to 'Llama3' discussed, if so what?",
            None,
        ),
        (
            "Tell me how to use LoRA",
            None,
        ),
        (
            "What about Quantization?",
            None,
        ),
    ]

    for prompt in user_prompts:
        response = agent.create_turn(
            messages=[
                {
                    "role": "user",
                    "content": prompt[0],
                }
            ],
            attachments=prompt[1],
            session_id=session_id,
        )
        async for log in EventLogger().log(response):
            log.print()


def main(host: str, port: int):
    asyncio.run(run_main(host, port))


if __name__ == "__main__":
    fire.Fire(main)
```
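Since the script exposes `main(host, port)` through `fire.Fire`, you can save it under any filename (say, `rag_agent.py`) and run it against the server started earlier with `python rag_agent.py localhost 5001`, after installing the `fire` and `llama-stack-client` packages.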
## Next Steps
For more advanced topics, check out:
- You can mix and match different providers for inference, memory, agents, evals etc. See [Building custom distributions](../distributions/index.md)
- [Developer Cookbook](developer_cookbook.md)


@@ -7,8 +7,7 @@ The Stack APIs are rapidly improving but still a work-in-progress. We invite fee
```{image} ../_static/llama-stack.png
:alt: Llama Stack
:width: 400px
:align: center
```
## APIs
@@ -87,8 +86,9 @@ You can find more example scripts with client SDKs to talk with the Llama Stack
getting_started/index
distributions/index
llama_cli_reference/index
llama_cli_reference/download_models
llama_stack_client_cli_reference/index
api_providers/index
distribution_dev/index
```


@@ -1,4 +1,4 @@
# llama CLI Reference
The `llama` CLI tool helps you set up and use the Llama Stack. It should be available on your path after installing the `llama-stack` package.
@@ -119,7 +119,7 @@ You should see a table like this:
To download models, you can use the `llama download` command.
### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL, which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/).
@@ -137,7 +137,7 @@ llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
```
### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
Essentially, the same commands above work, just replace `--source meta` with `--source huggingface`.
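For example, the Llama Guard download above becomes `llama download --source huggingface --model-id Llama-Guard-3-1B`; note that Hugging Face may additionally require an access token for gated repositories.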


@@ -0,0 +1,162 @@
# llama-stack-client CLI Reference
You may use the `llama-stack-client` CLI to query information about the distribution.
## Basic Commands
### `llama-stack-client`
```bash
$ llama-stack-client -h
usage: llama-stack-client [-h] {models,memory_banks,shields} ...
Welcome to the LlamaStackClient CLI
options:
-h, --help show this help message and exit
subcommands:
{models,memory_banks,shields}
```
### `llama-stack-client configure`
```bash
$ llama-stack-client configure
> Enter the host name of the Llama Stack distribution server: localhost
> Enter the port number of the Llama Stack distribution server: 5000
Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5000
```
## Provider Commands
### `llama-stack-client providers list`
```bash
$ llama-stack-client providers list
```
```
+-----------+----------------+-----------------+
| API | Provider ID | Provider Type |
+===========+================+=================+
| scoring | meta0 | meta-reference |
+-----------+----------------+-----------------+
| datasetio | meta0 | meta-reference |
+-----------+----------------+-----------------+
| inference | tgi0 | remote::tgi |
+-----------+----------------+-----------------+
| memory | meta-reference | meta-reference |
+-----------+----------------+-----------------+
| agents | meta-reference | meta-reference |
+-----------+----------------+-----------------+
| telemetry | meta-reference | meta-reference |
+-----------+----------------+-----------------+
| safety | meta-reference | meta-reference |
+-----------+----------------+-----------------+
```
## Model Management
### `llama-stack-client models list`
```bash
$ llama-stack-client models list
```
```
+----------------------+----------------------+---------------+----------------------------------------------------------+
| identifier | llama_model | provider_id | metadata |
+======================+======================+===============+==========================================================+
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | tgi0 | {'huggingface_repo': 'meta-llama/Llama-3.1-8B-Instruct'} |
+----------------------+----------------------+---------------+----------------------------------------------------------+
```
### `llama-stack-client models get`
```bash
$ llama-stack-client models get Llama3.1-8B-Instruct
```
```
+----------------------+----------------------+----------------------------------------------------------+---------------+
| identifier | llama_model | metadata | provider_id |
+======================+======================+==========================================================+===============+
| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | {'huggingface_repo': 'meta-llama/Llama-3.1-8B-Instruct'} | tgi0 |
+----------------------+----------------------+----------------------------------------------------------+---------------+
```
```bash
$ llama-stack-client models get Random-Model
Model Random-Model is not found at distribution endpoint host:port. Please ensure endpoint is serving specified model.
```
### `llama-stack-client models register`
```bash
$ llama-stack-client models register <model_id> [--provider-id <provider_id>] [--provider-model-id <provider_model_id>] [--metadata <metadata>]
```
### `llama-stack-client models update`
```bash
$ llama-stack-client models update <model_id> [--provider-id <provider_id>] [--provider-model-id <provider_model_id>] [--metadata <metadata>]
```
### `llama-stack-client models delete`
```bash
$ llama-stack-client models delete <model_id>
```
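These model operations are also available from Python. Here is a minimal sketch, assuming the `llama-stack-client` SDK mirrors the CLI flags above; the endpoint, model, and provider values are illustrative:
```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Register a model with the distribution; the keyword arguments correspond
# to the CLI flags shown above (provider_id and metadata are optional)
client.models.register(
    model_id="Llama3.1-8B-Instruct",
    provider_id="tgi0",
)
```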
## Memory Bank Management
### `llama-stack-client memory_banks list`
```bash
$ llama-stack-client memory_banks list
```
```
+--------------+----------------+--------+-------------------+------------------------+--------------------------+
| identifier | provider_id | type | embedding_model | chunk_size_in_tokens | overlap_size_in_tokens |
+==============+================+========+===================+========================+==========================+
| test_bank | meta-reference | vector | all-MiniLM-L6-v2 | 512 | 64 |
+--------------+----------------+--------+-------------------+------------------------+--------------------------+
```
## Shield Management
### `llama-stack-client shields list`
```bash
$ llama-stack-client shields list
```
```
+--------------+----------+----------------+-------------+
| identifier | params | provider_id | type |
+==============+==========+================+=============+
| llama_guard | {} | meta-reference | llama_guard |
+--------------+----------+----------------+-------------+
```
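The same resources can be listed programmatically. A minimal sketch follows: `models.list()` and `shields.list()` appear in the Getting Started RAG example, and `memory_banks.list()` is assumed to follow the same pattern:
```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Each CLI listing above has a client-side counterpart
print([m.identifier for m in client.models.list()])
print([b.identifier for b in client.memory_banks.list()])
print([s.identifier for s in client.shields.list()])
```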
## Evaluation Tasks
### `llama-stack-client eval run_benchmark`
```bash
$ llama-stack-client eval run_benchmark <task_id1> <task_id2> --num-examples 10 --output-dir ./ --eval-task-config ~/eval_task_config.json
```
where `eval_task_config.json` is the path to the eval task config file in JSON format. An example `eval_task_config.json`:
```
$ cat ~/eval_task_config.json
{
"type": "benchmark",
"eval_candidate": {
"type": "model",
"model": "Llama3.1-405B-Instruct",
"sampling_params": {
"strategy": "greedy",
"temperature": 0,
"top_p": 0.95,
"top_k": 0,
"max_tokens": 0,
"repetition_penalty": 1.0
}
}
}
```