diff --git a/distributions/meta-reference-gpu/README.md b/distributions/meta-reference-gpu/README.md
index 951120da5..7f209c4a9 100644
--- a/distributions/meta-reference-gpu/README.md
+++ b/distributions/meta-reference-gpu/README.md
@@ -11,13 +11,8 @@ The `llamastack/distribution-meta-reference-gpu` distribution consists of the fo
 ### Start the Distribution (Single Node GPU)
 
 > [!NOTE]
-> This assumes you have access to GPU to start a TGI server with access to your GPU.
+> This assumes you have access to a GPU to start a local server that can use it.
 
-> [!NOTE]
-> For GPU inference, you need to set these environment variables for specifying local directory containing your model checkpoints, and enable GPU inference to start running docker container.
-```
-export LLAMA_CHECKPOINT_DIR=~/.llama
-```
 > [!NOTE]
 > `~/.llama` should be the path containing downloaded weights of Llama models.
 
@@ -26,8 +21,8 @@ export LLAMA_CHECKPOINT_DIR=~/.llama
 To download and start running a pre-built docker container, you may use the following commands:
 
 ```
-docker run -it -p 5000:5000 -v ~/.llama:/root/.llama --gpus=all llamastack/llamastack-local-gpu
+docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./run.yaml:/root/my-run.yaml --gpus=all distribution-meta-reference-gpu --yaml_config /root/my-run.yaml
 ```
 
 ### Alternative (Build and start distribution locally via conda)
-- You may checkout the [Getting Started](../../docs/getting_started.md) for more details on starting up a meta-reference distribution.
+- You may check out the [Getting Started](../../docs/getting_started.md) for more details on building locally via conda and starting up a meta-reference distribution.
diff --git a/distributions/meta-reference-gpu/build.yaml b/distributions/meta-reference-gpu/build.yaml
index ca786c51c..e76197330 100644
--- a/distributions/meta-reference-gpu/build.yaml
+++ b/distributions/meta-reference-gpu/build.yaml
@@ -1,4 +1,4 @@
-name: distribution-meta-reference-gpu
+name: meta-reference-gpu
 distribution_spec:
   description: Use code from `llama_stack` itself to serve all llama stack APIs
   providers:
diff --git a/distributions/ollama/README.md b/distributions/ollama/README.md
index 43c764cbe..d59c3f9e1 100644
--- a/distributions/ollama/README.md
+++ b/distributions/ollama/README.md
@@ -71,10 +71,10 @@ ollama run
 
 **Via Docker**
 ```
-docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./ollama-run.yaml:/root/llamastack-run-ollama.yaml --gpus=all llamastack-local-cpu --yaml_config /root/llamastack-run-ollama.yaml
+docker run --network host -it -p 5000:5000 -v ~/.llama:/root/.llama -v ./gpu/run.yaml:/root/llamastack-run-ollama.yaml --gpus=all distribution-ollama --yaml_config /root/llamastack-run-ollama.yaml
 ```
 
-Make sure in you `ollama-run.yaml` file, you inference provider is pointing to the correct Ollama endpoint. E.g.
+Make sure in your `run.yaml` file, your inference provider is pointing to the correct Ollama endpoint. E.g.
 ```
 inference:
   - provider_id: ollama0
diff --git a/distributions/ollama/build.yaml b/distributions/ollama/build.yaml
index d14091814..c27f40929 100644
--- a/distributions/ollama/build.yaml
+++ b/distributions/ollama/build.yaml
@@ -1,4 +1,4 @@
-name: distribution-ollama
+name: ollama
 distribution_spec:
   description: Use ollama for running LLM inference
   providers:
@@ -10,4 +10,4 @@ distribution_spec:
   safety: meta-reference
   agents: meta-reference
   telemetry: meta-reference
-image_type: conda
+image_type: docker
diff --git a/distributions/tgi/build.yaml b/distributions/tgi/build.yaml
index c3950e900..2c0ca1d33 100644
--- a/distributions/tgi/build.yaml
+++ b/distributions/tgi/build.yaml
@@ -1,4 +1,4 @@
-name: distribution-tgi
+name: tgi
 distribution_spec:
   description: Use TGI for running LLM inference
   providers:
@@ -10,4 +10,4 @@ distribution_spec:
   safety: meta-reference
   agents: meta-reference
   telemetry: meta-reference
-image_type: conda
+image_type: docker
diff --git a/distributions/tgi/cpu/compose.yaml b/distributions/tgi/cpu/compose.yaml
index df7c74489..2ec10b86c 100644
--- a/distributions/tgi/cpu/compose.yaml
+++ b/distributions/tgi/cpu/compose.yaml
@@ -6,28 +6,7 @@ services:
       - $HOME/.cache/huggingface:/data
     ports:
       - "5009:5009"
-    devices:
-      - nvidia.com/gpu=all
-    environment:
-      - CUDA_VISIBLE_DEVICES=0
-      - HF_HOME=/data
-      - HF_DATASETS_CACHE=/data
-      - HF_MODULES_CACHE=/data
-      - HF_HUB_CACHE=/data
     command: ["--dtype", "bfloat16", "--usage-stats", "on", "--sharded", "false", "--model-id", "meta-llama/Llama-3.1-8B-Instruct", "--port", "5009", "--cuda-memory-fraction", "0.3"]
-    deploy:
-      resources:
-        reservations:
-          devices:
-            - driver: nvidia
-              # that's the closest analogue to --gpus; provide
-              # an integer amount of devices or 'all'
-              count: 1
-              # Devices are reserved using a list of capabilities, making
-              # capabilities the only required field. A device MUST
-              # satisfy all the requested capabilities for a successful
-              # reservation.
-              capabilities: [gpu]
     runtime: nvidia
     healthcheck:
       test: ["CMD", "curl", "-f", "http://text-generation-inference:5009/health"]
diff --git a/distributions/together/README.md b/distributions/together/README.md
new file mode 100644
index 000000000..481525be2
--- /dev/null
+++ b/distributions/together/README.md
@@ -0,0 +1,104 @@
+# Together Distribution
+
+### Connect to a Llama Stack Together Endpoint
+- You may connect to a hosted endpoint `https://llama-stack.together.ai`, serving a Llama Stack distribution.
+
+### Start a Together distribution
+```
+
+```
+
+# TGI Distribution
+
+The `llamastack/distribution-tgi` distribution consists of the following provider configurations.
+
+
+| **API**         | **Inference** | **Agents**     | **Memory**                                       | **Safety**     | **Telemetry**  |
+|-----------------|---------------|----------------|--------------------------------------------------|----------------|----------------|
+| **Provider(s)** | remote::tgi   | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
+
+
+### Start the Distribution (Single Node GPU)
+
+> [!NOTE]
+> This assumes you have access to a GPU to start a TGI server that can use it.
+
+
+```
+$ cd llama_stack/distribution/docker/tgi
+$ ls
+compose.yaml  tgi-run.yaml
+$ docker compose up
+```
+
+The script will first start up the TGI server, then start up the Llama Stack distribution server, hooking up to the remote TGI provider for inference.
+You should be able to see the following outputs:
+```
+[text-generation-inference] | 2024-10-15T18:56:33.810397Z  INFO text_generation_router::server: router/src/server.rs:1813: Using config Some(Llama)
+[text-generation-inference] | 2024-10-15T18:56:33.810448Z  WARN text_generation_router::server: router/src/server.rs:1960: Invalid hostname, defaulting to 0.0.0.0
+[text-generation-inference] | 2024-10-15T18:56:33.864143Z  INFO text_generation_router::server: router/src/server.rs:2353: Connected
+INFO:     Started server process [1]
+INFO:     Waiting for application startup.
+INFO:     Application startup complete.
+INFO:     Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
+```
+
+To kill the server:
+```
+docker compose down
+```
+
+### Start the Distribution (Single Node CPU)
+
+> [!NOTE]
+> This assumes you have a hosted endpoint compatible with the TGI server.
+
+```
+$ cd llama-stack/distribution/tgi/cpu
+$ ls
+compose.yaml  run.yaml
+$ docker compose up
+```
+
+Replace the `url` in the `run.yaml` file with your TGI endpoint.
+```
+inference:
+  - provider_id: tgi0
+    provider_type: remote::tgi
+    config:
+      url:
+```
+
+### (Alternative) TGI server + llama stack run (Single Node GPU)
+
+If you wish to separately spin up a TGI server and connect it to Llama Stack, you may use the following commands.
+
+#### (optional) Start TGI server locally
+- Please check the [TGI Getting Started Guide](https://github.com/huggingface/text-generation-inference?tab=readme-ov-file#get-started) to get a TGI endpoint.
+ +``` +docker run --rm -it -v $HOME/.cache/huggingface:/data -p 5009:5009 --gpus all ghcr.io/huggingface/text-generation-inference:latest --dtype bfloat16 --usage-stats on --sharded false --model-id meta-llama/Llama-3.1-8B-Instruct --port 5009 +``` + + +#### Start Llama Stack server pointing to TGI server + +``` +docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack-local-cpu --yaml_config /root/my-run.yaml +``` + +Make sure in you `run.yaml` file, you inference provider is pointing to the correct Together URL server endpoint. E.g. +``` +inference: + - provider_id: together + provider_type: remote::together + config: + url: https://api.together.xyz/v1 +``` + +**Via Conda** + +```bash +llama stack build --config ./build.yaml +# -- modify run.yaml to a valid Together server endpoint +llama stack run ./run.yaml +``` diff --git a/distributions/together/compose.yaml b/distributions/together/compose.yaml new file mode 100644 index 000000000..e69de29bb diff --git a/distributions/together/run.yaml b/distributions/together/run.yaml new file mode 100644 index 000000000..e69de29bb diff --git a/llama_stack/distribution/build_container.sh b/llama_stack/distribution/build_container.sh index 056a7c06c..19f3df1e3 100755 --- a/llama_stack/distribution/build_container.sh +++ b/llama_stack/distribution/build_container.sh @@ -15,7 +15,7 @@ special_pip_deps="$6" set -euo pipefail build_name="$1" -image_name="llamastack-$build_name" +image_name="distribution-$build_name" docker_base=$2 build_file_path=$3 host_build_dir=$4 diff --git a/llama_stack/providers/registry/inference.py b/llama_stack/providers/registry/inference.py index c54cf5939..5a09b6af5 100644 --- a/llama_stack/providers/registry/inference.py +++ b/llama_stack/providers/registry/inference.py @@ -55,7 +55,7 @@ def available_providers() -> List[ProviderSpec]: api=Api.inference, adapter=AdapterSpec( adapter_type="ollama", - pip_packages=["ollama"], + pip_packages=["ollama", 
"aiohttp"], config_class="llama_stack.providers.adapters.inference.ollama.OllamaImplConfig", module="llama_stack.providers.adapters.inference.ollama", ),
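
The registry change above adds `aiohttp` to the Ollama adapter's `pip_packages`. As a rough illustration of why per-adapter package lists matter, here is a minimal sketch of how such lists could be merged into a single deduplicated dependency set at build time; the `AdapterSpec` dataclass and `collect_pip_packages` helper below are simplified stand-ins, not the actual `llama_stack` build code:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AdapterSpec:
    """Simplified stand-in for llama_stack's AdapterSpec: an adapter
    name plus the pip packages it needs installed in the image."""
    adapter_type: str
    pip_packages: List[str] = field(default_factory=list)


def collect_pip_packages(adapters: List[AdapterSpec]) -> List[str]:
    """Merge pip_packages from all adapters, deduplicated and
    order-preserving, as a build script might before `pip install`."""
    merged: List[str] = []
    for adapter in adapters:
        for pkg in adapter.pip_packages:
            if pkg not in merged:
                merged.append(pkg)
    return merged


adapters = [
    AdapterSpec("ollama", ["ollama", "aiohttp"]),
    AdapterSpec("tgi", ["huggingface_hub", "aiohttp"]),  # hypothetical second adapter
]
# Shared dependencies such as aiohttp appear only once in the merged list.
print(collect_pip_packages(adapters))
```

This mirrors the idea that declaring `aiohttp` explicitly in the adapter spec (rather than relying on it arriving transitively) keeps the built image's dependency set correct.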