diff --git a/README.md b/README.md index 61a0f33fe..b1878d7e4 100644 --- a/README.md +++ b/README.md @@ -4,9 +4,11 @@ [![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-stack)](https://pypi.org/project/llama-stack/) [![Discord](https://img.shields.io/discord/1257833999603335178)](https://discord.gg/llama-stack) -[**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html) | [**Zero-to-Hero Guide**](https://github.com/meta-llama/llama-stack/tree/main/docs/zero_to_hero_guide) +[**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html) | [**Colab Notebook**](./docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb) -Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations. +Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments. + +We focus on making it easy to build production applications with the Llama model family - from the latest Llama 3.3 to specialized models like Llama Guard for safety.
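+To give a flavor of the unified API, here is a minimal sketch of talking to a running Llama Stack server from the Python client. It assumes a server is already listening on port 8321 (as set up in the Quick Start) with the Llama 3.2 3B model registered; adjust the port and model identifier to match your deployment.
+
+```python
+from llama_stack_client import LlamaStackClient
+
+# Connect to a running Llama Stack server (port 8321 matches the Quick Start; adjust for your setup)
+client = LlamaStackClient(base_url="http://localhost:8321")
+
+# Discover which models the server exposes
+for model in client.models.list():
+    print(model.identifier)
+
+# Run a chat completion through the unified Inference API
+response = client.inference.chat_completion(
+    model_id="meta-llama/Llama-3.2-3B-Instruct",
+    messages=[{"role": "user", "content": "Write a haiku about coding"}],
+)
+print(response.completion_message.content)
+```
+
+The same snippet works unchanged whether the server is backed by a local provider like Ollama or by a hosted provider.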
-Our goal is to provide pre-packaged implementations which can be operated in a variety of deployment environments: developers start iterating with Desktops or their mobile devices and can seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience is available. +## Key Features -> ⚠️ **Note** -> The Stack APIs are rapidly improving, but still very much work in progress and we invite feedback as well as direct contributions. +- **Unified API Layer** for: + - Inference: Run LLM models efficiently + - Safety: Apply content filtering and safety policies + - DatasetIO: Store and retrieve knowledge for RAG + - Agents: Build multi-step agentic workflows + - Evaluation: Test and improve model and agent quality + - Telemetry: Collect and analyze usage data and complex agentic traces + - Post Training (Coming Soon): Fine-tune models for specific use cases +- **Rich Provider Ecosystem** + - Local Development: Meta's Reference, Ollama, vLLM, TGI + - Self-hosted: Chroma, pgvector, NVIDIA NIM + - Cloud: Fireworks, Together, NVIDIA, AWS Bedrock, Groq, Cerebras + - On-device: iOS and Android support -## APIs - -We have working implementations of the following APIs today: -- Inference -- Safety -- Memory -- Agents -- Eval -- Telemetry - -Alongside these APIs, we also related APIs for operating with associated resources (see [Concepts](https://llama-stack.readthedocs.io/en/latest/concepts/index.html#resources)): - -- Models -- Shields -- Memory Banks -- Eval Tasks -- Datasets -- Scoring Functions - -We are also working on the following APIs which will be released soon: - -- Post Training -- Synthetic Data Generation -- Reward Scoring - -Each of the APIs themselves is a collection of REST endpoints. - -## Philosophy - -### Service-oriented design - -Unlike other frameworks, Llama Stack is built with a service-oriented, REST API-first approach. Such a design not only allows for seamless transitions from a local to remote deployments, but also forces the design to be more declarative. We believe this restriction can result in a much simpler, robust developer experience. This will necessarily trade-off against expressivity however if we get the APIs right, it can lead to a very powerful platform. - -### Composability - -We expect the set of APIs we design to be composable. An Agent abstractly depends on { Inference, Memory, Safety } APIs but does not care about the actual implementation details. Safety itself may require model inference and hence can depend on the Inference API. - -### Turnkey one-stop solutions - -We expect to provide turnkey solutions for popular deployment scenarios. It should be easy to deploy a Llama Stack server on AWS or on a private data center. Either of these should allow a developer to get started with powerful agentic apps, model evaluations or fine-tuning services in a matter of minutes. They should all result in the same uniform observability and developer experience. - -### Focus on Llama models - -As a Meta initiated project, we have started by explicitly focusing on Meta's Llama series of models. Supporting the broad set of open models is no easy task and we want to start with models we understand best. - -### Supporting the Ecosystem - -There is a vibrant ecosystem of Providers which provide efficient inference or scalable vector stores or powerful observability solutions. We want to make sure it is easy for developers to pick and choose the best implementations for their use cases. 
We also want to make sure it is easy for new Providers to onboard and participate in the ecosystem. - -Additionally, we have designed every element of the Stack such that APIs as well as Resources (like Models) can be federated. +- **Built for Production** + - Pre-packaged distributions for common deployment scenarios + - Comprehensive evaluation capabilities + - Full observability and monitoring + - Provider federation and fallback ## Supported Llama Stack Implementations @@ -87,14 +55,16 @@ Additionally, we have designed every element of the Stack such that APIs as well | Groq | Hosted | | :heavy_check_mark: | | | | | Ollama | Single Node | | :heavy_check_mark: | | | | | TGI | Hosted and Single Node | | :heavy_check_mark: | | | | -| [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) | Hosted and Single Node | | :heavy_check_mark: | | | | +| NVIDIA NIM | Hosted and Single Node | | :heavy_check_mark: | | | | | Chroma | Single Node | | | :heavy_check_mark: | | | | PG Vector | Single Node | | | :heavy_check_mark: | | | | PyTorch ExecuTorch | On-device iOS | :heavy_check_mark: | :heavy_check_mark: | | | | -| [vLLM](https://github.com/vllm-project/vllm) | Hosted and Single Node | | :heavy_check_mark: | | | | +| vLLM | Hosted and Single Node | | :heavy_check_mark: | | | | ### Distributions +A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario - you can begin with a local development setup (e.g., Ollama) and seamlessly transition to production (e.g., Fireworks) without changing your application code. Here are some of the distributions we support: + | **Distribution** | **Llama Stack Docker** | Start This Distribution | |:---------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------:| | Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/meta-reference-gpu.html) | @@ -104,7 +74,7 @@ Additionally, we have designed every element of the Stack such that APIs as well | TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/tgi.html) | | Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/together.html) | | Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/fireworks.html) | -| [vLLM](https://github.com/vllm-project/vllm) | [llamastack/distribution-remote-vllm](https://hub.docker.com/repository/docker/llamastack/distribution-remote-vllm/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/remote-vllm.html) | +| vLLM | 
[llamastack/distribution-remote-vllm](https://hub.docker.com/repository/docker/llamastack/distribution-remote-vllm/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/distributions/self_hosted_distro/remote-vllm.html) | ## Installation diff --git a/docs/source/concepts/index.md b/docs/source/concepts/index.md index 32caa66a5..02e54d839 100644 --- a/docs/source/concepts/index.md +++ b/docs/source/concepts/index.md @@ -10,7 +10,6 @@ A Llama Stack API is described as a collection of REST endpoints. We currently s - **Inference**: run inference with a LLM - **Safety**: apply safety policies to the output at a Systems (not only model) level - **Agents**: run multi-step agentic workflows with LLMs with tool usage, memory (RAG), etc. -- **Memory**: store and retrieve data for RAG, chat history, etc. - **DatasetIO**: interface with datasets and data loaders - **Scoring**: evaluate outputs of the system - **Eval**: generate outputs (via Inference or Agents) and perform scoring @@ -39,7 +38,6 @@ Some of these APIs are associated with a set of **Resources**. Here is the mappi - **Inference**, **Eval** and **Post Training** are associated with `Model` resources. - **Safety** is associated with `Shield` resources. -- **Memory** is associated with `Memory Bank` resources. - **DatasetIO** is associated with `Dataset` resources. - **Scoring** is associated with `ScoringFunction` resources. - **Eval** is associated with `Model` and `EvalTask` resources. @@ -63,12 +61,9 @@ While there is a lot of flexibility to mix-and-match providers, often users will **On-device Distro**: Finally, you may want to run Llama Stack directly on an edge device (mobile phone or a tablet.) We provide Distros for iOS and Android (coming soon.) -## More Concepts -- [Evaluation Concepts](evaluation_concepts.md) - ```{toctree} :maxdepth: 1 :hidden: -evaluation_concepts +distributions/index ``` diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md index 602b5a635..aba3de54e 100644 --- a/docs/source/getting_started/index.md +++ b/docs/source/getting_started/index.md @@ -1,26 +1,24 @@ # Quick Start -In this guide, we'll walk through how you can use the Llama Stack client SDK to build a simple RAG agent. +In this guide, we'll walk through how you can use the Llama Stack (server and client SDK) to test a simple RAG agent. -The most critical requirement for running the agent is running inference on the underlying Llama model. Depending on what hardware (GPUs) you have available, you have various options. We will use `Ollama` for this purpose as it is the easiest to get started with and yet robust. +A Llama Stack agent is a simple autonomous system that can perform tasks by combining a Llama model for reasoning with tools (e.g., RAG, web search, code execution, etc.) for taking actions. -First, let's set up some environment variables that we will use in the rest of the guide. Note that if you open up a new terminal, you will need to set these again. +At minimum, an agent requires a Llama model for inference and at least one tool that it can use. + +In Llama Stack, we provide a server exposing multiple APIs. These APIs are backed by implementations from different providers. For this guide, we will use [Ollama](https://ollama.com/) as the inference provider. 
-```bash -export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" -# ollama names this model differently, and we must use the ollama name when loading the model -export OLLAMA_INFERENCE_MODEL="llama3.2:3b-instruct-fp16" -export LLAMA_STACK_PORT=5001 -``` ### 1. Start Ollama ```bash -ollama run $OLLAMA_INFERENCE_MODEL --keepalive 60m +ollama run llama3.2:3b-instruct-fp16 --keepalive 60m ``` By default, Ollama keeps the model loaded in memory for 5 minutes which can be too short. We set the `--keepalive` flag to 60 minutes to ensure the model remains loaded for sometime. +NOTE: If you do not have Ollama, you can install it from [here](https://ollama.ai/docs/installation). + ### 2. Start the Llama Stack server @@ -28,6 +26,13 @@ Llama Stack is based on a client-server architecture. It consists of a server wh To get started quickly, we provide various Docker images for the server component that work with different inference providers out of the box. For this guide, we will use `llamastack/distribution-ollama` as the Docker image. +Let's set up some environment variables that we will use in the rest of the guide. +```bash +export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" +export LLAMA_STACK_PORT=8321 +``` + +You can start the server using the following command: ```bash docker run -it \ -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \ @@ -45,6 +50,9 @@ Configuration for this is available at `distributions/ollama/run.yaml`. You can interact with the Llama Stack server using various client SDKs. We will use the Python SDK which you can install using the following command. Note that you must be using Python 3.10 or newer: ```bash +yes | conda create -n stack-client python=3.10 +conda activate stack-client + pip install llama-stack-client ``` @@ -76,7 +84,10 @@ client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_P # List available models models = client.models.list() -print(models) +print("--- Available models: ---") +for m in models: + print(f"- {m.identifier}") +print() response = client.inference.chat_completion( model_id=os.environ["INFERENCE_MODEL"], @@ -93,59 +104,83 @@ print(response.completion_message.content) Here is an example of a simple RAG agent that uses the Llama Stack client SDK. 
```python -import asyncio import os +from termcolor import cprint -from llama_stack_client import LlamaStackClient from llama_stack_client.lib.agents.agent import Agent from llama_stack_client.lib.agents.event_logger import EventLogger -from llama_stack_client.types import Attachment from llama_stack_client.types.agent_create_params import AgentConfig +from llama_stack_client.types.tool_runtime import DocumentParam as Document +from llama_stack_client import LlamaStackClient -async def run_main(): - urls = ["chat.rst", "llama3.rst", "datasets.rst", "lora_finetune.rst"] - attachments = [ - Attachment( - content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}", - mime_type="text/plain", - ) - for i, url in enumerate(urls) - ] +# Define the client and point it to the server URL +client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}") - client = LlamaStackClient(base_url=f"http://localhost:{os.environ['LLAMA_STACK_PORT']}") - - agent_config = AgentConfig( - model=os.environ["INFERENCE_MODEL"], - instructions="You are a helpful assistant", - tools=[{"type": "memory"}], # enable Memory aka RAG - enable_session_persistence=True, +# Define the documents to be used for RAG +urls = ["chat.rst", "llama3.rst", "datasets.rst", "lora_finetune.rst"] +documents = [ + Document( + document_id=f"num-{i}", + content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}", + mime_type="text/plain", + metadata={}, ) + for i, url in enumerate(urls) +] - agent = Agent(client, agent_config) - session_id = agent.create_session("test-session") - user_prompts = [ - ( - "I am attaching documentation for Torchtune. Help me answer questions I will ask next.", - attachments, - ), - ( - "What are the top 5 topics that were explained? Only list succinct bullet points.", - None, - ), - ] - for prompt, attachments in user_prompts: - response = agent.create_turn( - messages=[{"role": "user", "content": prompt}], - attachments=attachments, - session_id=session_id, - ) - for log in EventLogger().log(response): - log.print() +# Register a vector database +vector_db_id = "test-vector-db" +client.vector_dbs.register( + vector_db_id=vector_db_id, + embedding_model="all-MiniLM-L6-v2", + embedding_dimension=384, +) +# Insert the documents into the vector database +client.tool_runtime.rag_tool.insert( + documents=documents, + vector_db_id=vector_db_id, + chunk_size_in_tokens=512, +) -if __name__ == "__main__": - asyncio.run(run_main()) +# Create an agent +agent_config = AgentConfig( + # Define the inference model to use + model=os.environ["INFERENCE_MODEL"], + # Define instructions for the agent (aka system prompt) + instructions="You are a helpful assistant", + # Disable session persistence for this example + enable_session_persistence=False, + # Define tools available to the agent + toolgroups=[ + { + "name": "builtin::memory", + "args": { + "vector_db_ids": [vector_db_id], + } + } + ], +) + +# Create an agent and a session +rag_agent = Agent(client, agent_config) +session_id = rag_agent.create_session("test-session") + +# Define the user prompts +user_prompts = [ + "What are the top 5 topics that were explained? 
Only list succinct bullet points.", +] + +# Run the agent loop by calling the `create_turn` method +for prompt in user_prompts: + cprint(f'User> {prompt}', 'green') + response = rag_agent.create_turn( + messages=[{"role": "user", "content": prompt}], + session_id=session_id, + ) + for log in EventLogger().log(response): + log.print() ``` ## Next Steps diff --git a/docs/source/index.md b/docs/source/index.md index cf7c0b236..7e7977738 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -1,17 +1,15 @@ # Llama Stack -Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations. +Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments. + +We focus on making it easy to build production applications with the Llama model family - from the latest Llama 3.3 to specialized models like Llama Guard for safety. ```{image} ../_static/llama-stack.png :alt: Llama Stack :width: 400px ``` -Our goal is to provide pre-packaged implementations which can be operated in a variety of deployment environments: developers start iterating with Desktops or their mobile devices and can seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience is available. - -```{note} -The Stack APIs are rapidly improving but still a work-in-progress. We invite feedback as well as direct contributions. -``` +Our goal is to provide pre-packaged implementations (aka "distributions") which can be run in a variety of deployment environments. Llama Stack can assist you throughout your entire app development lifecycle - start iterating locally, on mobile, or on desktop, and seamlessly transition to on-prem or public cloud deployments. At every point in this transition, the same set of APIs and the same developer experience is available. ## Quick Links @@ -44,7 +42,7 @@ A number of "adapters" are available for some popular Inference and Memory (Vect | Together | Hosted | Y | Y | | Y | | | Ollama | Single Node | | Y | | | | TGI | Hosted and Single Node | | Y | | | -| [NVIDIA NIM](https://build.nvidia.com/nim?filters=nimType%3Anim_type_run_anywhere&q=llama) | Hosted and Single Node | | Y | | | +| NVIDIA NIM | Hosted and Single Node | | Y | | | | Chroma | Single Node | | | Y | | | | Postgres | Single Node | | | Y | | | | PyTorch ExecuTorch | On-device iOS | Y | Y | | | @@ -54,6 +52,7 @@ A number of "adapters" are available for some popular Inference and Memory (Vect :hidden: :maxdepth: 3 +self introduction/index getting_started/index concepts/index diff --git a/docs/source/introduction/index.md b/docs/source/introduction/index.md index 9c2a70341..beae53158 100644 --- a/docs/source/introduction/index.md +++ b/docs/source/introduction/index.md @@ -19,77 +19,41 @@ Building production AI applications today requires solving multiple challenges: - Changing providers requires significant code changes. 
-### The Vision: A Universal Stack - +### Our Solution: A Universal Stack ```{image} ../../_static/llama-stack.png :alt: Llama Stack :width: 400px ``` -Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. These building blocks are presented as interoperable APIs with a broad set of Service Providers providing their implementations. +Llama Stack addresses these challenges through a service-oriented, API-first approach: -#### Service-oriented Design -Unlike other frameworks, Llama Stack is built with a service-oriented, REST API-first approach. Such a design not only allows for seamless transitions from local to remote deployments but also forces the design to be more declarative. This restriction can result in a much simpler, robust developer experience. The same code works across different environments: +**Develop Anywhere, Deploy Everywhere** +- Start locally with CPU-only setups +- Move to GPU acceleration when needed +- Deploy to cloud or edge without code changes +- Same APIs and developer experience everywhere -- Local development with CPU-only setups -- Self-hosted with GPU acceleration -- Cloud-hosted on providers like AWS, Fireworks, Together -- On-device for iOS and Android - - -#### Composability -The APIs we design are composable. An Agent abstractly depends on { Inference, Memory, Safety } APIs but does not care about the actual implementation details. Safety itself may require model inference and hence can depend on the Inference API. - -#### Turnkey Solutions - -We provide turnkey solutions for popular deployment scenarios. It should be easy to deploy a Llama Stack server on AWS or in a private data center. Either of these should allow a developer to get started with powerful agentic apps, model evaluations, or fine-tuning services in minutes. - -We have built-in support for critical needs: - -- Safety guardrails and content filtering -- Comprehensive evaluation capabilities +**Production-Ready Building Blocks** +- Pre-built safety guardrails and content filtering +- Built-in RAG and agent capabilities +- Comprehensive evaluation toolkit - Full observability and monitoring -- Provider federation and fallback -#### Focus on Llama Models -As a Meta-initiated project, we explicitly focus on Meta's Llama series of models. Supporting the broad set of open models is no easy task and we want to start with models we understand best. - -#### Supporting the Ecosystem -There is a vibrant ecosystem of Providers which provide efficient inference or scalable vector stores or powerful observability solutions. We want to make sure it is easy for developers to pick and choose the best implementations for their use cases. We also want to make sure it is easy for new Providers to onboard and participate in the ecosystem. - -Additionally, we have designed every element of the Stack such that APIs as well as Resources (like Models) can be federated. 
- -#### Rich Provider Ecosystem - -```{list-table} -:header-rows: 1 - -* - Provider - - Local - - Self-hosted - - Cloud -* - Inference - - Ollama - - vLLM, TGI - - Fireworks, Together, AWS -* - Memory - - FAISS - - Chroma, pgvector - - Weaviate -* - Safety - - Llama Guard - - - - - AWS Bedrock -``` +**True Provider Independence** +- Swap providers without application changes +- Mix and match best-in-class implementations +- Federation and fallback support +- No vendor lock-in -### Unified API Layer +### Our Philosophy -Llama Stack provides a consistent interface for: +- **Service-Oriented**: REST APIs enforce clean interfaces and enable seamless transitions across different environments. +- **Composability**: Every component is independent but works together seamlessly. +- **Production Ready**: Built for real-world applications, not just demos. +- **Turnkey Solutions**: Easy-to-deploy, built-in solutions for popular deployment scenarios. +- **Llama First**: Explicit focus on Meta's Llama models and the partner ecosystem. -- **Inference**: Run LLM models efficiently -- **Safety**: Apply content filtering and safety policies -- **Memory**: Store and retrieve knowledge for RAG -- **Agents**: Build multi-step workflows -- **Evaluation**: Test and improve application quality + +With Llama Stack, you can focus on building your application while we handle the infrastructure complexity, essential capabilities, and provider integrations.
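+To make the provider-independence point concrete, here is a rough sketch (not a complete application) of how the same client code can target different deployments simply by pointing at a different Llama Stack server. The `LLAMA_STACK_ENDPOINT` variable name and the default URL below are illustrative placeholders rather than standard configuration; use whatever endpoint and model your distribution exposes.
+
+```python
+import os
+
+from llama_stack_client import LlamaStackClient
+
+# The only thing that changes between environments is which Llama Stack server
+# the client points at, e.g. a local Ollama-backed distribution during
+# development versus a hosted distribution (Fireworks, Together, ...) in production.
+# LLAMA_STACK_ENDPOINT is an illustrative variable name, not a standard one.
+base_url = os.environ.get("LLAMA_STACK_ENDPOINT", "http://localhost:8321")
+client = LlamaStackClient(base_url=base_url)
+
+# Application code stays the same regardless of which providers back the APIs.
+response = client.inference.chat_completion(
+    model_id=os.environ.get("INFERENCE_MODEL", "meta-llama/Llama-3.2-3B-Instruct"),
+    messages=[{"role": "user", "content": "Why does provider portability matter?"}],
+)
+print(response.completion_message.content)
+```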