mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-08-05 18:22:41 +00:00)

fixes

This commit is contained in:
parent 03e61f1bb4
commit e92d82122d

2 changed files with 88 additions and 58 deletions
@@ -10,7 +10,6 @@ A Llama Stack API is described as a collection of REST endpoints. We currently s

- **Inference**: run inference with an LLM
- **Safety**: apply safety policies to the output at a system (not only model) level
- **Agents**: run multi-step agentic workflows with LLMs, with tool usage, memory (RAG), etc.
- **Memory**: store and retrieve data for RAG, chat history, etc.
- **DatasetIO**: interface with datasets and data loaders
- **Scoring**: evaluate outputs of the system
- **Eval**: generate outputs (via Inference or Agents) and perform scoring
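
To make this concrete, here is a minimal sketch of how two of these APIs are exercised through the Python client SDK used later in this guide. It reuses only calls that appear elsewhere in this commit; the port and model identifier are placeholders you would adjust for your setup.

```python
from llama_stack_client import LlamaStackClient

# Point the client at a running Llama Stack server (port is a placeholder).
client = LlamaStackClient(base_url="http://localhost:8321")

# Models API: list the models registered with the stack.
for model in client.models.list():
    print(model.identifier)

# Inference API: run a chat completion against one of those models.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.completion_message.content)
```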
@@ -39,7 +38,6 @@ Some of these APIs are associated with a set of **Resources**. Here is the mappi

- **Inference**, **Eval** and **Post Training** are associated with `Model` resources.
- **Safety** is associated with `Shield` resources.
- **Memory** is associated with `Memory Bank` resources.
- **DatasetIO** is associated with `Dataset` resources.
- **Scoring** is associated with `ScoringFunction` resources.
- **Eval** is associated with `Model` and `EvalTask` resources.
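
As a small illustration of the resource idea, the sketch below registers one such resource. It reuses only the vector-DB registration call that appears in the quick start further down (the memory/RAG toolgroup there then operates on the registered ID); the names and the `LLAMA_STACK_PORT` variable are the same placeholders used in that example.

```python
import os

from llama_stack_client import LlamaStackClient

# Connect to a running Llama Stack server (LLAMA_STACK_PORT as set in the quick start).
client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")

# Register a vector DB resource; the memory/RAG toolgroup in the quick start
# below is then pointed at this ID via its `vector_db_ids` argument.
client.vector_dbs.register(
    vector_db_id="test-vector-db",
    embedding_model="all-MiniLM-L6-v2",
    embedding_dimension=384,
)
```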
@@ -63,12 +61,9 @@ While there is a lot of flexibility to mix-and-match providers, often users will

**On-device Distro**: Finally, you may want to run Llama Stack directly on an edge device (mobile phone or tablet). We provide Distros for iOS and Android (coming soon).

## More Concepts

- [Evaluation Concepts](evaluation_concepts.md)

```{toctree}
:maxdepth: 1
:hidden:

evaluation_concepts
distributions/index
```

@@ -1,26 +1,24 @@
# Quick Start

In this guide, we'll walk through how you can use the Llama Stack (server and client SDK) to test a simple RAG agent.

A Llama Stack agent is a simple autonomous system that can perform tasks by combining a Llama model for reasoning with tools (e.g., RAG, web search, code execution, etc.) for taking actions.

At minimum, an agent requires a Llama model for inference and at least one tool that it can use.

In Llama Stack, we provide a server exposing multiple APIs. These APIs are backed by implementations from different providers. For this guide, we will use [Ollama](https://ollama.com/) as the inference provider.
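
Conceptually, "a model plus at least one tool" boils down to a single agent configuration. The snippet below is just a distilled view of the `AgentConfig` used in the full RAG example further down this page; the toolgroup name and vector DB ID come from that example, and it assumes the environment variables set below.

```python
import os

from llama_stack_client.types.agent_create_params import AgentConfig

# A model for reasoning plus at least one tool: here, the built-in memory (RAG)
# toolgroup pointed at a vector DB that the full example registers later.
agent_config = AgentConfig(
    model=os.environ["INFERENCE_MODEL"],
    instructions="You are a helpful assistant",
    enable_session_persistence=False,
    toolgroups=[
        {
            "name": "builtin::memory",
            "args": {"vector_db_ids": ["test-vector-db"]},
        }
    ],
)
```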

### 1. Start Ollama

```bash
ollama run llama3.2:3b-instruct-fp16 --keepalive 60m
```

By default, Ollama keeps the model loaded in memory for 5 minutes, which can be too short. We set the `--keepalive` flag to 60 minutes to ensure the model remains loaded for some time.
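
If you want to confirm that the model is loaded and see how long it will stay resident, recent versions of Ollama provide a `ps` subcommand (availability depends on your installed Ollama version):

```bash
ollama ps
```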

NOTE: If you do not have Ollama installed, you can install it from [here](https://ollama.ai/docs/installation).


### 2. Start the Llama Stack server

@@ -28,6 +26,13 @@ Llama Stack is based on a client-server architecture. It consists of a server wh

To get started quickly, we provide various Docker images for the server component that work with different inference providers out of the box. For this guide, we will use `llamastack/distribution-ollama` as the Docker image.

Let's set up some environment variables that we will use in the rest of the guide.

```bash
INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
LLAMA_STACK_PORT=8321
```

You can start the server using the following command:

```bash
docker run -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \

@@ -45,6 +50,9 @@ Configuration for this is available at `distributions/ollama/run.yaml`.

You can interact with the Llama Stack server using various client SDKs. We will use the Python SDK, which you can install using the following command. Note that you must be using Python 3.10 or newer:

```bash
yes | conda create -n stack-client python=3.10
conda activate stack-client

pip install llama-stack-client
```
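
To sanity-check the installation before moving on, you can inspect the installed package with pip; this is plain pip usage, not a Llama Stack-specific command:

```bash
pip show llama-stack-client
```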

@@ -76,7 +84,10 @@ client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_P

# List available models
models = client.models.list()

print("--- Available models: ---")
for m in models:
    print(f"- {m.identifier}")
print()

response = client.inference.chat_completion(
    model_id=os.environ["INFERENCE_MODEL"],

@@ -93,59 +104,83 @@ print(response.completion_message.content)

Here is an example of a simple RAG agent that uses the Llama Stack client SDK.

```python
import os

from termcolor import cprint

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types.agent_create_params import AgentConfig
from llama_stack_client.types.tool_runtime import DocumentParam as Document

# Define the client and point it to the server URL
client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")

# Define the documents to be used for RAG
urls = ["chat.rst", "llama3.rst", "datasets.rst", "lora_finetune.rst"]
documents = [
    Document(
        document_id=f"num-{i}",
        content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
        mime_type="text/plain",
        metadata={},
    )
    for i, url in enumerate(urls)
]

# Register a vector database
vector_db_id = "test-vector-db"
client.vector_dbs.register(
    vector_db_id=vector_db_id,
    embedding_model="all-MiniLM-L6-v2",
    embedding_dimension=384,
)

# Insert the documents into the vector database
client.tool_runtime.rag_tool.insert(
    documents=documents,
    vector_db_id=vector_db_id,
    chunk_size_in_tokens=512,
)

# Create an agent
agent_config = AgentConfig(
    # Define the inference model to use
    model=os.environ["INFERENCE_MODEL"],
    # Define instructions for the agent (aka system prompt)
    instructions="You are a helpful assistant",
    # Session persistence (disabled for this simple example)
    enable_session_persistence=False,
    # Define tools available to the agent
    toolgroups=[
        {
            "name": "builtin::memory",
            "args": {
                "vector_db_ids": [vector_db_id],
            },
        }
    ],
)

# Create an agent session
rag_agent = Agent(client, agent_config)
session_id = rag_agent.create_session("test-session")

# Define user prompts
user_prompts = [
    "What are the top 5 topics that were explained? Only list succinct bullet points.",
]

# Run the agent loop by calling the `create_turn` method
for prompt in user_prompts:
    cprint(f"User> {prompt}", "green")
    response = rag_agent.create_turn(
        messages=[{"role": "user", "content": prompt}],
        session_id=session_id,
    )
    for log in EventLogger().log(response):
        log.print()
```
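
Assuming you save the example above as a file such as `rag_agent.py` (the filename is just an illustration), you can run it against the running server with the same environment variables used earlier:

```bash
INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" LLAMA_STACK_PORT=8321 python rag_agent.py
```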
## Next Steps