diff --git a/docs/source/distributions/index.md b/docs/source/distributions/index.md
index 157747a49..753555d5b 100644
--- a/docs/source/distributions/index.md
+++ b/docs/source/distributions/index.md
@@ -1,4 +1,4 @@
-# Building Distributions
+# Llama Stack Distributions

 ```{toctree}
@@ -9,6 +9,11 @@ self_hosted_distro/index
 remote_hosted_distro/index
 ondevice_distro/index
 ```
+## Introduction
+
+Llama Stack Distributions are pre-built Docker containers/Conda environments that assemble APIs and Providers into a consistent whole for the end application developer.
+These distributions let you mix and match providers: some can be backed by local code and some can be remote. This flexibility lets you choose the optimal setup for your use case, such as serving a small model locally while using a cloud provider for larger models, all behind a consistent API interface for your application.
+
 ## Decide Your Build Type
 There are two ways to start a Llama Stack:
@@ -53,7 +58,8 @@ Please see our pages in detail for the types of distributions we offer:
 $ git clone git@github.com:meta-llama/llama-stack.git
 ```

-### System Requirements
+
+### Starting the Distribution

 ::::{tab-set}
@@ -99,7 +105,6 @@ Access to Single-Node CPU with Fireworks hosted endpoint via API_KEY from [firew

 ::::

-### Starting the Distribution

 ::::{tab-set}
 :::{tab-item} meta-reference-gpu
diff --git a/docs/source/distributions/self_hosted_distro/index.md b/docs/source/distributions/self_hosted_distro/index.md
index b89d24bc1..fb775fb52 100644
--- a/docs/source/distributions/self_hosted_distro/index.md
+++ b/docs/source/distributions/self_hosted_distro/index.md
@@ -17,12 +17,12 @@ bedrock

 We offer deployable distributions where you can host your own Llama Stack server using local inference.
-| **Distribution** | **Llama Stack Docker** | Start This Distribution | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
-|:----------------:|:------------------------------------------:|:-----------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|
-| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
-| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/together.html) | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
-| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/fireworks.html) | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
-| Bedrock | [llamastack/distribution-bedrock](https://hub.docker.com/repository/docker/llamastack/distribution-bedrock/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/bedrock.html) | remote::bedrock | meta-reference | remote::weaviate | meta-reference | meta-reference |
+| **Distribution** | **Llama Stack Docker** | Start This Distribution |
+|:----------------:|:------------------------------------------:|:-----------------------:|
+| Meta Reference | [llamastack/distribution-meta-reference-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html) |
+| Meta Reference Quantized | [llamastack/distribution-meta-reference-quantized-gpu](https://hub.docker.com/repository/docker/llamastack/distribution-meta-reference-quantized-gpu/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-quantized-gpu.html) |
+| Ollama | [llamastack/distribution-ollama](https://hub.docker.com/repository/docker/llamastack/distribution-ollama/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/ollama.html) |
+| TGI | [llamastack/distribution-tgi](https://hub.docker.com/repository/docker/llamastack/distribution-tgi/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/tgi.html) |
+| Together | [llamastack/distribution-together](https://hub.docker.com/repository/docker/llamastack/distribution-together/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/together.html) |
+| Fireworks | [llamastack/distribution-fireworks](https://hub.docker.com/repository/docker/llamastack/distribution-fireworks/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/fireworks.html) |
+| Bedrock | [llamastack/distribution-bedrock](https://hub.docker.com/repository/docker/llamastack/distribution-bedrock/general) | [Guide](https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/bedrock.html) |
diff --git a/docs/source/getting_started/index.md b/docs/source/getting_started/index.md
index d1284e514..df91bc493 100644
--- a/docs/source/getting_started/index.md
+++ b/docs/source/getting_started/index.md
@@ -1,14 +1,12 @@
 # Getting Started with Llama Stack

-```{toctree}
-:maxdepth: 2
-:hidden:
-```

 In this guide, we'll walk through using ollama as the inference provider and build a simple python application that uses the Llama Stack Client SDK

 Llama stack consists of a distribution server and an accompanying client SDK. The distribution server can be configured for different providers for inference, memory, agents, evals etc. This configuration is defined in a yaml file called `run.yaml`.

+Running inference on the underlying Llama model is one of the most critical requirements. Depending on what hardware you have available, you have various options. Note that each option has different prerequisites. We will use ollama as the inference provider, as it is the easiest to get started with.
+
 ### Step 1. Start the inference server
 ```bash
 export LLAMA_STACK_PORT=5001
@@ -33,12 +31,11 @@ docker run \
 ```

-### Step 3. Install the client
+### Step 3. Use the Llama Stack client SDK
 ```bash
 pip install llama-stack-client
 ```

-#### Check the connectivity to the server
 We will use the `llama-stack-client` CLI to check the connectivity to the server. This should be installed in your environment if you installed the SDK.
 ```bash
 llama-stack-client --endpoint http://localhost:5001 models list
@@ -49,7 +46,12 @@ llama-stack-client --endpoint http://localhost:5001 models list
 └──────────────────────────────────┴─────────────┴───────────────────────────┴──────────┘
 ```

-### Step 4. Use the SDK
+Chat completion using the CLI:
+```bash
+llama-stack-client --endpoint http://localhost:5001 inference chat_completion --message "hello, what model are you?"
+```
+
+A simple Python example using the client SDK:
 ```python
 from llama_stack_client import LlamaStackClient
@@ -70,13 +72,136 @@ response = client.inference.chat_completion(
 print(response.completion_message.content)
 ```

-### Step 5. Your first RAG agent
-Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/rag_with_memory_bank.py) on an example of how to build a RAG agent with memory.
+### Step 4. Your first RAG agent
+```python
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the terms described in the LICENSE file in
+# the root directory of this source tree.
+
+import asyncio
+
+import fire
+
+from llama_stack_client import LlamaStackClient
+from llama_stack_client.lib.agents.agent import Agent
+from llama_stack_client.lib.agents.event_logger import EventLogger
+from llama_stack_client.types import Attachment
+from llama_stack_client.types.agent_create_params import AgentConfig
+
+
+async def run_main(host: str, port: int, disable_safety: bool = False):
+    # Torchtune tutorial pages used as attachments for the memory tool
+    urls = [
+        "memory_optimizations.rst",
+        "chat.rst",
+        "llama3.rst",
+        "datasets.rst",
+        "qat_finetune.rst",
+        "lora_finetune.rst",
+    ]
+
+    attachments = [
+        Attachment(
+            content=f"https://raw.githubusercontent.com/pytorch/torchtune/main/docs/source/tutorials/{url}",
+            mime_type="text/plain",
+        )
+        for url in urls
+    ]
+
+    client = LlamaStackClient(
+        base_url=f"http://{host}:{port}",
+    )
+
+    available_shields = [shield.identifier for shield in client.shields.list()]
+    if not available_shields:
+        print("No available shields. Disabling safety.")
+    else:
+        print(f"Available shields found: {available_shields}")
+    available_models = [model.identifier for model in client.models.list()]
+    if not available_models:
+        raise ValueError("No available models")
+    else:
+        selected_model = available_models[0]
+        print(f"Using model: {selected_model}")
+
+    agent_config = AgentConfig(
+        model=selected_model,
+        instructions="You are a helpful assistant",
+        sampling_params={
+            "strategy": "greedy",
+            "temperature": 1.0,
+            "top_p": 0.9,
+        },
+        tools=[
+            {
+                "type": "memory",
+                "memory_bank_configs": [],
+                "query_generator_config": {"type": "default", "sep": " "},
+                "max_tokens_in_context": 4096,
+                "max_chunks": 10,
+            },
+        ],
+        tool_choice="auto",
+        tool_prompt_format="json",
+        input_shields=available_shields if available_shields else [],
+        output_shields=available_shields if available_shields else [],
+        enable_session_persistence=False,
+    )
+
+    agent = Agent(client, agent_config)
+    session_id = agent.create_session("test-session")
+    print(f"Created session_id={session_id} for Agent({agent.agent_id})")
+
+    user_prompts = [
+        (
+            "I am attaching some documentation for Torchtune. Help me answer questions I will ask next.",
+            attachments,
+        ),
+        (
+            "What are the top 5 topics that were explained? Only list succinct bullet points.",
+            None,
+        ),
+        (
+            "Was anything related to 'Llama3' discussed, if so what?",
+            None,
+        ),
+        (
+            "Tell me how to use LoRA",
+            None,
+        ),
+        (
+            "What about Quantization?",
+            None,
+        ),
+    ]
+
+    for prompt in user_prompts:
+        response = agent.create_turn(
+            messages=[
+                {
+                    "role": "user",
+                    "content": prompt[0],
+                }
+            ],
+            attachments=prompt[1],
+            session_id=session_id,
+        )
+
+        async for log in EventLogger().log(response):
+            log.print()
+
+
+def main(host: str, port: int):
+    asyncio.run(run_main(host, port))
+
+
+if __name__ == "__main__":
+    fire.Fire(main)
+```

 ## Next Steps
-For more advanced topics, check out:
-
 - You can mix and match different providers for inference, memory, agents, evals etc. See [Building custom distributions](../distributions/index.md)
 - [Developer Cookbook](developer_cookbook.md)
diff --git a/docs/source/index.md b/docs/source/index.md
index e6c950b3e..f73020623 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -7,8 +7,7 @@ The Stack APIs are rapidly improving but still a work-in-progress. We invite fee

 ```{image} ../_static/llama-stack.png
 :alt: Llama Stack
-:width: 600px
-:align: center
+:width: 400px
 ```

 ## APIs
@@ -87,8 +86,9 @@ You can find more example scripts with client SDKs to talk with the Llama Stack
 getting_started/index
 distributions/index
-cli_reference/index
-cli_reference/download_models
+llama_cli_reference/index
+llama_cli_reference/download_models
+llama_stack_client_cli_reference/index
 api_providers/index
 distribution_dev/index
 ```
diff --git a/docs/source/cli_reference/download_models.md b/docs/source/llama_cli_reference/download_models.md
similarity index 100%
rename from docs/source/cli_reference/download_models.md
rename to docs/source/llama_cli_reference/download_models.md
diff --git a/docs/source/cli_reference/index.md b/docs/source/llama_cli_reference/index.md
similarity index 98%
rename from docs/source/cli_reference/index.md
rename to docs/source/llama_cli_reference/index.md
index 39c566e59..aa2ecebf7 100644
--- a/docs/source/cli_reference/index.md
+++ b/docs/source/llama_cli_reference/index.md
@@ -1,4 +1,4 @@
-# CLI Reference
+# llama CLI Reference

 The `llama` CLI tool helps you setup and use the Llama Stack. It should be available on your path after installing the `llama-stack` package.
@@ -119,7 +119,7 @@ You should see a table like this:

 To download models, you can use the llama download command.

-#### Downloading from [Meta](https://llama.meta.com/llama-downloads/)
+### Downloading from [Meta](https://llama.meta.com/llama-downloads/)

 Here is an example download command to get the 3B-Instruct/11B-Vision-Instruct model. You will need META_URL which can be obtained from [here](https://llama.meta.com/docs/getting_the_models/meta/)

@@ -137,7 +137,7 @@ llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
 llama download --source meta --model-id Llama-Guard-3-1B --meta-url META_URL
 ```

-#### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
+### Downloading from [Hugging Face](https://huggingface.co/meta-llama)

 Essentially, the same commands above work, just replace `--source meta` with `--source huggingface`.
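For example, a minimal sketch of the Hugging Face variant, reusing the `Llama-Guard-3-1B` model ID from the commands above; it assumes the `--meta-url` flag is not needed for this source and that any required Hugging Face authentication (such as an access token for gated models) is already configured:

```bash
# Same download as the Meta example above, with only the source swapped.
# Assumption: Hugging Face authentication (e.g. an access token), if required,
# is already set up; the --meta-url flag from the Meta example is not used here.
llama download --source huggingface --model-id Llama-Guard-3-1B
```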
diff --git a/docs/source/llama_stack_client_cli_reference/index.md b/docs/source/llama_stack_client_cli_reference/index.md
new file mode 100644
index 000000000..62a639acd
--- /dev/null
+++ b/docs/source/llama_stack_client_cli_reference/index.md
@@ -0,0 +1,162 @@
+# llama-stack-client CLI Reference
+
+You can use the `llama-stack-client` CLI to query information about the distribution.
+
+## Basic Commands
+
+### `llama-stack-client`
+```bash
+$ llama-stack-client -h
+
+usage: llama-stack-client [-h] {models,memory_banks,shields} ...
+
+Welcome to the LlamaStackClient CLI
+
+options:
+  -h, --help            show this help message and exit
+
+subcommands:
+  {models,memory_banks,shields}
+```
+
+### `llama-stack-client configure`
+```bash
+$ llama-stack-client configure
+> Enter the host name of the Llama Stack distribution server: localhost
+> Enter the port number of the Llama Stack distribution server: 5000
+Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5000
+```
+
+## Provider Commands
+
+### `llama-stack-client providers list`
+```bash
+$ llama-stack-client providers list
+```
+```
++-----------+----------------+-----------------+
+| API       | Provider ID    | Provider Type   |
++===========+================+=================+
+| scoring   | meta0          | meta-reference  |
++-----------+----------------+-----------------+
+| datasetio | meta0          | meta-reference  |
++-----------+----------------+-----------------+
+| inference | tgi0           | remote::tgi     |
++-----------+----------------+-----------------+
+| memory    | meta-reference | meta-reference  |
++-----------+----------------+-----------------+
+| agents    | meta-reference | meta-reference  |
++-----------+----------------+-----------------+
+| telemetry | meta-reference | meta-reference  |
++-----------+----------------+-----------------+
+| safety    | meta-reference | meta-reference  |
++-----------+----------------+-----------------+
+```
+
+## Model Management
+
+### `llama-stack-client models list`
+```bash
+$ llama-stack-client models list
+```
+```
++----------------------+----------------------+---------------+----------------------------------------------------------+
+| identifier           | llama_model          | provider_id   | metadata                                                 |
++======================+======================+===============+==========================================================+
+| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | tgi0          | {'huggingface_repo': 'meta-llama/Llama-3.1-8B-Instruct'} |
++----------------------+----------------------+---------------+----------------------------------------------------------+
+```
+
+### `llama-stack-client models get`
+```bash
+$ llama-stack-client models get Llama3.1-8B-Instruct
+```
+
+```
++----------------------+----------------------+----------------------------------------------------------+---------------+
+| identifier           | llama_model          | metadata                                                 | provider_id   |
++======================+======================+==========================================================+===============+
+| Llama3.1-8B-Instruct | Llama3.1-8B-Instruct | {'huggingface_repo': 'meta-llama/Llama-3.1-8B-Instruct'} | tgi0          |
++----------------------+----------------------+----------------------------------------------------------+---------------+
+```
+
+
+```bash
+$ llama-stack-client models get Random-Model
+
+Model RandomModel is not found at distribution endpoint host:port. Please ensure endpoint is serving specified model.
+```
+
+### `llama-stack-client models register`
+
+```bash
+$ llama-stack-client models register <model_id> [--provider-id <provider_id>] [--provider-model-id <provider_model_id>] [--metadata <metadata>]
+```
+
+### `llama-stack-client models update`
+
+```bash
+$ llama-stack-client models update <model_id> [--provider-id <provider_id>] [--provider-model-id <provider_model_id>] [--metadata <metadata>]
+```
+
+### `llama-stack-client models delete`
+
+```bash
+$ llama-stack-client models delete <model_id>
+```
+
+## Memory Bank Management
+
+### `llama-stack-client memory_banks list`
+```bash
+$ llama-stack-client memory_banks list
+```
+```
++--------------+----------------+--------+-------------------+------------------------+--------------------------+
+| identifier   | provider_id    | type   | embedding_model   | chunk_size_in_tokens   | overlap_size_in_tokens   |
++==============+================+========+===================+========================+==========================+
+| test_bank    | meta-reference | vector | all-MiniLM-L6-v2  | 512                    | 64                       |
++--------------+----------------+--------+-------------------+------------------------+--------------------------+
+```
+
+## Shield Management
+
+### `llama-stack-client shields list`
+```bash
+$ llama-stack-client shields list
+```
+
+```
++--------------+----------+----------------+-------------+
+| identifier   | params   | provider_id    | type        |
++==============+==========+================+=============+
+| llama_guard  | {}       | meta-reference | llama_guard |
++--------------+----------+----------------+-------------+
+```
+
+## Evaluation Tasks
+
+### `llama-stack-client eval run_benchmark`
+```bash
+$ llama-stack-client eval run_benchmark --num-examples 10 --output-dir ./ --eval-task-config ~/eval_task_config.json
+```
+
+where `eval_task_config.json` is the path to the eval task config file in JSON format. An example `eval_task_config.json`:
+```
+$ cat ~/eval_task_config.json
+{
+    "type": "benchmark",
+    "eval_candidate": {
+        "type": "model",
+        "model": "Llama3.1-405B-Instruct",
+        "sampling_params": {
+            "strategy": "greedy",
+            "temperature": 0,
+            "top_p": 0.95,
+            "top_k": 0,
+            "max_tokens": 0,
+            "repetition_penalty": 1.0
+        }
+    }
+}
+```
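As a concrete illustration of the `models register` synopsis shown earlier, an invocation might look like the following sketch; the model identifier and provider ID are hypothetical values borrowed from the earlier `models list` and `providers list` outputs, so substitute the ones from your own deployment.

```bash
# Hypothetical example: register the model identifier shown in the earlier
# `models list` output against the `tgi0` inference provider from `providers list`.
llama-stack-client models register Llama3.1-8B-Instruct --provider-id tgi0
```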