Update Quick Start significantly

Ashwin Bharambe 2024-11-21 13:20:37 -08:00
parent 654722da7d
commit 55c55b9f51
3 changed files with 68 additions and 121 deletions


@@ -6,7 +6,7 @@
[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-stack)](https://pypi.org/project/llama-stack/)
[![Discord](https://img.shields.io/discord/1257833999603335178)](https://discord.gg/llama-stack)

[**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html)

This repository contains the Llama Stack API specifications as well as API Providers and Llama Stack Distributions.
@@ -60,14 +60,15 @@ A Distribution is where APIs and Providers are assembled together to provide a c
### Distributions

| **Distribution** | **Llama Stack Docker** | Start This Distribution |
|:----------------: |:------------------------------------------: |:-----------------------: |
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) |
| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) |
| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) |
| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) |
| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) |
| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) |
## Installation

You have two ways to install this repository:


@@ -1,25 +1,32 @@
# Quick Start

In this guide, we'll walk through how you can use the Llama Stack client SDK to build a simple RAG agent.

The most critical requirement for running the agent is running inference on the underlying Llama model. Depending on what hardware (GPUs) you have available, you have various options. We will use `Ollama` for this purpose as it is the easiest to get started with and yet robust.

First, let's set up some environment variables that we will use in the rest of the guide. Note that if you open up a new terminal, you will need to set these again.
```bash
export LLAMA_STACK_PORT=5001
export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
# ollama names this model differently, and we must use the ollama name when loading the model
export OLLAMA_INFERENCE_MODEL="llama3.2:3b-instruct-fp16"
```
### 1. Start Ollama
```bash
ollama run $OLLAMA_INFERENCE_MODEL --keepalive 60m
```
By default, Ollama keeps the model loaded in memory for 5 minutes, which can be too short. We set the `--keepalive` flag to 60 minutes to ensure the model remains loaded for some time.
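Before moving on, you can optionally confirm that Ollama is up and has the model available. The sketch below queries Ollama's local HTTP API, which listens on port 11434 by default; this is only a sanity check, not a required step of the guide.

```python
# Optional sanity check: ask the local Ollama server which models it has pulled.
# Assumes Ollama is running on its default port (11434).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

print([model["name"] for model in tags.get("models", [])])
```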
### 2. Start the Llama Stack server
Llama Stack is based on a client-server architecture. It consists of a server which can be configured very flexibly so you can mix-and-match various providers for its individual API components -- beyond Inference, these include Memory, Agents, Telemetry, Evals and so forth.
```bash
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
@@ -28,42 +35,50 @@ docker run \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://host.docker.internal:11434
```
Configuration for this is available at `distributions/ollama/run.yaml`.
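Since the server's behaviour is driven by this configuration, it can be handy to peek at which providers a given `run.yaml` wires up. The snippet below is a minimal sketch; it assumes PyYAML is installed and that the file groups providers by API under a top-level `providers` key, and you may need to adjust the path to wherever your copy of the file lives.

```python
# Inspect which providers a Llama Stack run.yaml configures for each API.
# Assumptions: PyYAML is installed; providers are grouped by API under a
# top-level "providers" key (adjust if your config differs).
import yaml

with open("distributions/ollama/run.yaml") as f:
    config = yaml.safe_load(f)

for api, providers in config.get("providers", {}).items():
    provider_types = [p.get("provider_type", "?") for p in providers]
    print(f"{api}: {provider_types}")
```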
### 3. Use the Llama Stack client SDK
You can interact with the Llama Stack server using the `llama-stack-client` CLI or via the Python SDK.
```bash
pip install llama-stack-client
```
Let's use the `llama-stack-client` CLI to check the connectivity to the server.
```bash
llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT models list

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓
┃ identifier                       ┃ provider_id ┃ provider_resource_id      ┃ metadata ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩
│ meta-llama/Llama-3.2-3B-Instruct │ ollama      │ llama3.2:3b-instruct-fp16 │ {}       │
└──────────────────────────────────┴─────────────┴───────────────────────────┴──────────┘
```
You can test basic Llama inference completion using the CLI too.
```bash
llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT \
   inference chat_completion \
   --message "hello, what model are you?"
```
Here is a simple example to perform chat completions using Python instead of the CLI.
```python
import os

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")

# List available models
models = client.models.list()
print(models)

# Simple chat completion
response = client.inference.chat_completion(
    model_id=os.environ["INFERENCE_MODEL"],
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about coding"},
@@ -72,17 +87,13 @@ response = client.inference.chat_completion(
print(response.completion_message.content)
```
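If you prefer not to hard-code the model name, you can also discover it from the server at runtime. The sketch below mirrors the pattern an earlier version of this example used: list the models the server advertises and pick the first one.

```python
# Discover a model from the server instead of using the INFERENCE_MODEL env var.
available_models = [model.identifier for model in client.models.list()]
if not available_models:
    raise ValueError("No available models")

selected_model = available_models[0]
print(f"Using model: {selected_model}")
```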
### 4. Your first RAG agent
Here is an example of a simple RAG agent that uses the Llama Stack client SDK.
```python
import asyncio
import os

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
@@ -91,16 +102,8 @@ from llama_stack_client.types import Attachment
from llama_stack_client.types.agent_create_params import AgentConfig
async def run_main():
    urls = ["chat.rst", "llama3.rst", "datasets.rst", "lora_finetune.rst"]
    attachments = [
        Attachment(
            content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
@@ -109,95 +112,39 @@ async def run_main(host: str, port: int, disable_safety: bool = False):
        for i, url in enumerate(urls)
    ]
    client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")
    agent_config = AgentConfig(
        model=os.environ["INFERENCE_MODEL"],
        instructions="You are a helpful assistant",
        tools=[{"type": "memory"}],  # enable Memory aka RAG
    )
    agent = Agent(client, agent_config)
    session_id = agent.create_session("test-session")
    print(f"Created session_id={session_id} for Agent({agent.agent_id})")
    user_prompts = [
        (
            "I am attaching documentation for Torchtune. Help me answer questions I will ask next.",
            attachments,
        ),
        (
            "What are the top 5 topics that were explained? Only list succinct bullet points.",
            None,
        ),
    ]
    for prompt, attachments in user_prompts:
        response = agent.create_turn(
            messages=[{"role": "user", "content": prompt}],
            attachments=attachments,
            session_id=session_id,
        )
        async for log in EventLogger().log(response):
            log.print()
if __name__ == "__main__":
    asyncio.run(run_main())
```
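An earlier version of this example also wired safety shields into the agent. If your distribution runs a safety provider, you can re-enable that behaviour along the following lines; this is an optional sketch based on the removed code, not something the guide above requires.

```python
# Optional: attach safety shields to the agent when the server exposes any.
# This mirrors the pattern used by the previous version of this example.
available_shields = [shield.identifier for shield in client.shields.list()]
if not available_shields:
    print("No available shields. Disabling safety.")

agent_config = AgentConfig(
    model=os.environ["INFERENCE_MODEL"],
    instructions="You are a helpful assistant",
    tools=[{"type": "memory"}],
    input_shields=available_shields,
    output_shields=available_shields,
)
```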
## Next Steps


@@ -56,15 +56,14 @@ A Distribution is where APIs and Providers are assembled together to provide a c
| PyTorch ExecuTorch | On-device iOS | Y | Y | | |

### Distributions

| **Distribution** | **Llama Stack Docker** | Start This Distribution |
|:----------------: |:------------------------------------------: |:-----------------------: |
| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) |
| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) |
| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) |
| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) |
| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) |
| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) |
## Llama Stack Client SDK