From 55c55b9f5157ea6cba0eebad27896308c0e2f786 Mon Sep 17 00:00:00 2001 From: Ashwin Bharambe Date: Thu, 21 Nov 2024 13:20:37 -0800 Subject: [PATCH] Update Quick Start significantly --- README.md | 19 ++-- docs/source/getting_started/index.md | 153 +++++++++------------------ docs/source/index.md | 17 ++- 3 files changed, 68 insertions(+), 121 deletions(-) diff --git a/README.md b/README.md index bd2364f6f..0f5776eb8 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ [![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-stack)](https://pypi.org/project/llama-stack/) [![Discord](https://img.shields.io/discord/1257833999603335178)](https://discord.gg/llama-stack) -[**Get Started**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html) +[**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html) This repository contains the Llama Stack API specifications as well as API Providers and Llama Stack Distributions. @@ -60,14 +60,15 @@ A Distribution is where APIs and Providers are assembled together to provide a c ### Distributions -| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** | -|:----------------: |:------------------------------------------: |:-----------------------: |:------------------: |:------------------: |:------------------: |:------------------: |:------------------: | -| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference | -| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference | -| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference | -| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference | -| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference | -| Fireworks | 
[llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
+| **Distribution** | **Llama Stack Docker** | Start This Distribution |
+|:----------------: |:------------------------------------------: |:-----------------------: |
+| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) |
+| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) |
+| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) |
+| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) |
+| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) |
+| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) |
+

## Installation

You have two ways to install this repository:
diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index df91bc493..5875f2776 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -1,25 +1,32 @@
-# Getting Started with Llama Stack
+# Quick Start

+In this guide, we'll walk through how you can use the Llama Stack client SDK to build a simple RAG agent.

-In this guide, we'll walk through using ollama as the inference provider and build a simple python application that uses the Llama Stack Client SDK
+The most critical requirement for running the agent is running inference on the underlying Llama model. Depending on what hardware (GPUs) you have available, you have various options. We will use `Ollama` for this purpose as it is the easiest to get started with while still being robust.

-Llama stack consists of a distribution server and an accompanying client SDK. The distribution server can be configured for different providers for inference, memory, agents, evals etc. This configuration is defined in a yaml file called `run.yaml`.
+First, let's set up some environment variables that we will use in the rest of the guide. Note that if you open up a new terminal, you will need to set these again.

-Running inference on the underlying Llama model is one of the most critical requirements. 
Depending on what hardware you have available, you have various options. Note that each option have different necessary prerequisites. We will use ollama as the inference provider as it is the easiest to get started with.
-
-### Step 1. Start the inference server
 ```bash
-export LLAMA_STACK_PORT=5001
 export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
 # ollama names this model differently, and we must use the ollama name when loading the model
 export OLLAMA_INFERENCE_MODEL="llama3.2:3b-instruct-fp16"
+export LLAMA_STACK_PORT=5001
+```
+
+### 1. Start Ollama
+
+```bash
 ollama run $OLLAMA_INFERENCE_MODEL --keepalive 60m
 ```

-### Step 2. Start the Llama Stack server
+By default, Ollama keeps the model loaded in memory for 5 minutes, which can be too short. We set the `--keepalive` flag to 60 minutes to ensure the model remains loaded for some time.
+
+
+### 2. Start the Llama Stack server
+
+Llama Stack is based on a client-server architecture. It consists of a server that can be configured very flexibly, so you can mix and match various providers for its individual API components -- beyond Inference, these include Memory, Agents, Telemetry, Evals and so forth.

 ```bash
-export LLAMA_STACK_PORT=5001
 docker run \
   -it \
   -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
@@ -28,42 +35,50 @@ docker run \
   --port $LLAMA_STACK_PORT \
   --env INFERENCE_MODEL=$INFERENCE_MODEL \
   --env OLLAMA_URL=http://host.docker.internal:11434
-
 ```

-### Step 3. Use the Llama Stack client SDK
+Configuration for this is available at `distributions/ollama/run.yaml`.
+
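If you are curious what that configuration roughly looks like, here is an illustrative sketch of the shape of such a `run.yaml`. Treat every field name and value here as an assumption for illustration and consult the actual file in the repository; only the `remote::ollama` provider type, the API names, and `OLLAMA_URL` come from this guide.

```yaml
# Hypothetical sketch only -- see distributions/ollama/run.yaml for the real file.
apis:
- inference
- memory
- agents
- telemetry
providers:
  inference:
  - provider_id: ollama
    provider_type: remote::ollama
    config:
      url: ${env.OLLAMA_URL}   # assumed env-substitution syntax
```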
+
+### 3. Use the Llama Stack client SDK
+
+You can interact with the Llama Stack server using the `llama-stack-client` CLI or via the Python SDK.
+
 ```bash
 pip install llama-stack-client
 ```

-We will use the `llama-stack-client` CLI to check the connectivity to the server. This should be installed in your environment if you installed the SDK.
+Let's use the `llama-stack-client` CLI to check the connectivity to the server.
+
 ```bash
-llama-stack-client --endpoint http://localhost:5001 models list
+llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT models list
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓
┃ identifier ┃ provider_id ┃ provider_resource_id ┃ metadata ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩
-│ meta-llama/Llama-3.2-3B-Instruct │ ollama │ llama3.2:3b-instruct-fp16 │ {} │
+│ meta-llama/Llama-3.2-3B-Instruct │ ollama │ llama3.2:3b-instruct-fp16 │ │
└──────────────────────────────────┴─────────────┴───────────────────────────┴──────────┘
```

-Chat completion using the CLI
+You can test basic Llama inference completion using the CLI too.
 ```bash
-llama-stack-client --endpoint http://localhost:5001 inference chat_completion --message "hello, what model are you?"
+llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT \
+   inference chat_completion \
+   --message "hello, what model are you?"
 ```

-Simple python example using the client SDK
+Here is a simple example that performs a chat completion using Python instead of the CLI.
 ```python
+import os
 from llama_stack_client import LlamaStackClient

-client = LlamaStackClient(base_url="http://localhost:5001")
+client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")

 # List available models
 models = client.models.list()
 print(models)

-# Simple chat completion
 response = client.inference.chat_completion(
-    model_id="meta-llama/Llama-3.2-3B-Instruct",
+    model_id=os.environ["INFERENCE_MODEL"],
     messages=[
         {"role": "system", "content": "You are a helpful assistant."},
         {"role": "user", "content": "Write a haiku about coding"}
@@ -72,17 +87,13 @@ response = client.inference.chat_completion(
 print(response.completion_message.content)
 ```

-### Step 4. Your first RAG agent
+### 4. Your first RAG agent
+
+Here is an example of a simple RAG agent that uses the Llama Stack client SDK.
+
 ```python
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the terms described in the LICENSE file in
-# the root directory of this source tree.
-
 import asyncio
-
-import fire

 from llama_stack_client import LlamaStackClient
 from llama_stack_client.lib.agents.agent import Agent
@@ -91,16 +102,8 @@ from llama_stack_client.types import Attachment
 from llama_stack_client.types.agent_create_params import AgentConfig


-async def run_main(host: str, port: int, disable_safety: bool = False):
-    urls = [
-        "memory_optimizations.rst",
-        "chat.rst",
-        "llama3.rst",
-        "datasets.rst",
-        "qat_finetune.rst",
-        "lora_finetune.rst",
-    ]
-
+async def run_main():
+    urls = ["chat.rst", "llama3.rst", "datasets.rst", "lora_finetune.rst"]
     attachments = [
         Attachment(
             content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
@@ -109,95 +112,39 @@ async def run_main(host: str, port: int, disable_safety: bool = False):
         for i, url in enumerate(urls)
     ]

-    client = LlamaStackClient(
-        base_url=f"http://{host}:{port}",
-    )
-
-    available_shields = [shield.identifier for shield in client.shields.list()]
-    if not available_shields:
-        print("No available shields. Disable safety.")
-    else:
-        print(f"Available shields found: {available_shields}")
-    available_models = [model.identifier for model in client.models.list()]
-    if not available_models:
-        raise ValueError("No available models")
-    else:
-        selected_model = available_models[0]
-        print(f"Using model: {selected_model}")
+    client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}")

     agent_config = AgentConfig(
-        model=selected_model,
+        model=os.environ["INFERENCE_MODEL"],
         instructions="You are a helpful assistant",
-        sampling_params={
-            "strategy": "greedy",
-            "temperature": 1.0,
-            "top_p": 0.9,
-        },
-        tools=[
-            {
-                "type": "memory",
-                "memory_bank_configs": [],
-                "query_generator_config": {"type": "default", "sep": " "},
-                "max_tokens_in_context": 4096,
-                "max_chunks": 10,
-            },
-        ],
-        tool_choice="auto",
-        tool_prompt_format="json",
-        input_shields=available_shields if available_shields else [],
-        output_shields=available_shields if available_shields else [],
-        enable_session_persistence=False,
+        tools=[{"type": "memory"}],  # enable Memory aka RAG
     )

     agent = Agent(client, agent_config)
     session_id = agent.create_session("test-session")
     print(f"Created session_id={session_id} for Agent({agent.agent_id})")
-
     user_prompts = [
         (
-            "I am attaching some documentation for Torchtune. Help me answer questions I will ask next.",
+            "I am attaching documentation for Torchtune. Help me answer questions I will ask next.",
            attachments,
        ),
        (
            "What are the top 5 topics that were explained? Only list succinct bullet points.",
            None,
        ),
-        (
-            "Was anything related to 'Llama3' discussed, if so what?",
-            None,
-        ),
-        (
-            "Tell me how to use LoRA",
-            None,
-        ),
-        (
-            "What about Quantization?",
-            None,
-        ),
    ]
-
-    for prompt in user_prompts:
+    for prompt, docs in user_prompts:
        response = agent.create_turn(
-            messages=[
-                {
-                    "role": "user",
-                    "content": prompt[0],
-                }
-            ],
-            attachments=prompt[1],
+            messages=[{"role": "user", "content": prompt}],
+            attachments=docs,  # per-turn attachments (None for follow-up questions)
            session_id=session_id,
        )
-
        async for log in EventLogger().log(response):
            log.print()


-def main(host: str, port: int):
-    asyncio.run(run_main(host, port))
-
-
 if __name__ == "__main__":
-    fire.Fire(main)
+    asyncio.run(run_main())
 ```
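To try the agent end-to-end, keep the server from step 2 running and save the script above to a file. The filename below is only a placeholder we picked; the environment variables are the ones exported earlier in this guide.

```bash
# Assumes INFERENCE_MODEL and LLAMA_STACK_PORT are still exported in this shell,
# and that the example above was saved as rag_agent.py (any name works).
python rag_agent.py
```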
## Next Steps
diff --git a/docs/source/index.md b/docs/source/index.md
index f73020623..213025ebc 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -56,15 +56,14 @@ A Distribution is where APIs and Providers are assembled together to provide a c
 | PyTorch ExecuTorch | On-device iOS | Y | Y | | |

 ### Distributions
-
-| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
-|:----------------: |:------------------------------------------: |:-----------------------: |:------------------: |:------------------: |:------------------: |:------------------: |:------------------: |
-| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
-| Fireworks | 
[llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference | +| **Distribution** | **Llama Stack Docker** | Start This Distribution | +|:----------------: |:------------------------------------------: |:-----------------------: | +| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) | +| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) | +| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) | +| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) | +| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/together.html) | +| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/remote_hosted_distro/fireworks.html) | ## Llama Stack Client SDK
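As a minimal sketch of the Python client SDK in action (mirroring the Quick Start above; it assumes a distribution server is already listening on the guide's port 5001):

```python
from llama_stack_client import LlamaStackClient

# Point the client at a running Llama Stack distribution server.
client = LlamaStackClient(base_url="http://localhost:5001")

# List registered models, equivalent to:
#   llama-stack-client --endpoint http://localhost:5001 models list
for model in client.models.list():
    print(model.identifier)
```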