refactor: remove Conda support from Llama Stack (#2969)

# What does this PR do?
This PR removes Conda support from Llama Stack.

Closes #2539

## Test Plan
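The original description leaves this section empty. A minimal sketch of commands that exercise the venv-only path touched by this change (assumes a local checkout and the `starter` template):

```bash
# Build and start a distribution with the now-default venv image type
uv run --with llama-stack llama stack build --template starter --image-type venv --run

# The conda option should no longer appear in the CLI help
llama stack build -h
llama stack run -h
```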
Author: IAN MILLER · 2025-08-02 23:52:59 +01:00 (committed by GitHub)
Parent: f2eee4e417 · Commit: a749d5f4a4
44 changed files with 159 additions and 311 deletions

@@ -451,7 +451,7 @@ GenAI application developers need more than just an LLM - they need to integrate
Llama Stack was created to provide developers with a comprehensive and coherent interface that simplifies AI application development and codifies best practices across the Llama ecosystem. Since our launch in September 2024, we have seen a huge uptick in interest in Llama Stack APIs by both AI developers and from partners building AI services with Llama models. Partners like Nvidia, Fireworks, and Ollama have collaborated with us to develop implementations across various APIs, including inference, memory, and safety.
-With Llama Stack, you can easily build a RAG agent which can also search the web, do complex math, and custom tool calling. You can use telemetry to inspect those traces, and convert telemetry into evals datasets. And with Llama Stack's plugin architecture and prepackaged distributions, you choose to run your agent anywhere - in the cloud with our partners, deploy your own environment using virtualenv, conda, or Docker, operate locally with Ollama, or even run on mobile devices with our SDKs. Llama Stack offers unprecedented flexibility while also simplifying the developer experience.
+With Llama Stack, you can easily build a RAG agent which can also search the web, do complex math, and custom tool calling. You can use telemetry to inspect those traces, and convert telemetry into evals datasets. And with Llama Stack's plugin architecture and prepackaged distributions, you choose to run your agent anywhere - in the cloud with our partners, deploy your own environment using virtualenv or Docker, operate locally with Ollama, or even run on mobile devices with our SDKs. Llama Stack offers unprecedented flexibility while also simplifying the developer experience.
## Release
After iterating on the APIs for the last 3 months, today we're launching a stable release (V1) of the Llama Stack APIs and the corresponding llama-stack server and client packages (v0.1.0). We now have automated tests for providers. These tests make sure that all provider implementations are verified. Developers can now easily and reliably select distributions or providers based on their specific requirements.

@@ -164,7 +164,7 @@ Some tips about common tasks you work on while contributing to Llama Stack:
### Using `llama stack build`
-Building a stack image (conda / docker) will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.
+Building a stack image will use the production version of the `llama-stack` and `llama-stack-client` packages. If you are developing with a llama-stack repository checked out and need your code to be reflected in the stack image, set `LLAMA_STACK_DIR` and `LLAMA_STACK_CLIENT_DIR` to the appropriate checked out directories when running any of the `llama` CLI commands.
Example:
```bash

@@ -97,7 +97,7 @@ To start the Llama Stack Playground, run the following commands:
1. Start up the Llama Stack API server
```bash
-llama stack build --template together --image-type conda
+llama stack build --template together --image-type venv
llama stack run together
```
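Not part of the original snippet: if you also want the UI served alongside the API, the `--enable-ui` flag documented later in this diff may be one way to start it (a sketch, not the Playground's documented flow):

```bash
llama stack run together --enable-ui
```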

@@ -47,13 +47,13 @@ pip install -e .
```
Use the CLI to build your distribution.
The main points to consider are:
-1. **Image Type** - Do you want a Conda / venv environment or a Container (eg. Docker)
+1. **Image Type** - Do you want a venv environment or a Container (eg. Docker)
2. **Template** - Do you want to use a template to build your distribution? or start from scratch ?
3. **Config** - Do you want to use a pre-existing config file to build your distribution?
```
llama stack build -h
-usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--list-templates] [--image-type {conda,container,venv}] [--image-name IMAGE_NAME] [--print-deps-only] [--run]
+usage: llama stack build [-h] [--config CONFIG] [--template TEMPLATE] [--list-templates] [--image-type {container,venv}] [--image-name IMAGE_NAME] [--print-deps-only] [--run]
Build a Llama stack container
@@ -63,10 +63,10 @@ options:
be prompted to enter information interactively (default: None)
--template TEMPLATE Name of the example template config to use for build. You may use `llama stack build --list-templates` to check out the available templates (default: None)
--list-templates Show the available templates for building a Llama Stack distribution (default: False)
---image-type {conda,container,venv}
+--image-type {container,venv}
Image Type to use for the build. If not specified, will use the image type from the template config. (default: None)
--image-name IMAGE_NAME
-[for image-type=conda|container|venv] Name of the conda or virtual environment to use for the build. If not specified, currently active environment will be used if
+[for image-type=container|venv] Name of the virtual environment to use for the build. If not specified, currently active environment will be used if
found. (default: None)
--print-deps-only Print the dependencies for the stack only, without building the stack (default: False)
--run Run the stack after building using the same image type, name, and other applicable arguments (default: False)
@@ -159,7 +159,7 @@ It would be best to start with a template and understand the structure of the co
llama stack build
> Enter a name for your Llama Stack (e.g. my-local-stack): my-stack
-> Enter the image type you want your Llama Stack to be built as (container or conda or venv): conda
+> Enter the image type you want your Llama Stack to be built as (container or venv): venv
Llama Stack is composed of several APIs working together. Let's select
the provider types (implementations) you want to use for these APIs.
@@ -312,7 +312,7 @@ Now, let's start the Llama Stack Distribution Server. You will need the YAML con
```
llama stack run -h
usage: llama stack run [-h] [--port PORT] [--image-name IMAGE_NAME] [--env KEY=VALUE]
-[--image-type {conda,venv}] [--enable-ui]
+[--image-type {venv}] [--enable-ui]
[config | template]
Start the server for a Llama Stack Distribution. You should have already built (or downloaded) and configured the distribution.
@@ -326,8 +326,8 @@ options:
--image-name IMAGE_NAME
Name of the image to run. Defaults to the current environment (default: None)
--env KEY=VALUE Environment variables to pass to the server in KEY=VALUE format. Can be specified multiple times. (default: None)
---image-type {conda,venv}
-Image Type used during the build. This can be either conda or venv. (default: None)
+--image-type {venv}
+Image Type used during the build. This should be venv. (default: None)
--enable-ui Start the UI server (default: False)
```
@@ -342,9 +342,6 @@ llama stack run ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-
# Start using a venv
llama stack run --image-type venv ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
-# Start using a conda environment
-llama stack run --image-type conda ~/.llama/distributions/llamastack-my-local-stack/my-local-stack-run.yaml
```
```
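To illustrate the flags above in one line (a sketch; `--run` builds and then starts the stack with the same image type and name):

```bash
# Build from the starter template into the active venv, then start it
llama stack build --template starter --image-type venv --run
```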

@@ -10,7 +10,6 @@ The default `run.yaml` files generated by templates are starting points for your
```yaml
version: 2
-conda_env: ollama
apis:
- agents
- inference

@@ -56,10 +56,10 @@ Breaking down the demo app, this section will show the core pieces that are used
### Setup Remote Inferencing
Start a Llama Stack server on localhost. Here is an example of how you can do this using the firework.ai distribution:
```
-conda create -n stack-fireworks python=3.10
-conda activate stack-fireworks
+python -m venv stack-fireworks
+source stack-fireworks/bin/activate  # On Windows: stack-fireworks\Scripts\activate
pip install --no-cache llama-stack==0.2.2
-llama stack build --template fireworks --image-type conda
+llama stack build --template fireworks --image-type venv
export FIREWORKS_API_KEY=<SOME_KEY>
llama stack run fireworks --port 5050
```
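One way to sanity-check the server started above is the client CLI shown later in this guide, pointed at the same port (a sketch; assumes `llama-stack-client` is installed):

```bash
llama-stack-client configure --endpoint http://localhost:5050
llama-stack-client models list
```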

@@ -57,7 +57,7 @@ Make sure you have access to a watsonx API Key. You can get one by referring [wa
## Running Llama Stack with watsonx
-You can do this via Conda (build code), venv or Docker which has a pre-built image.
+You can do this via venv or Docker which has a pre-built image.
### Via Docker
@@ -76,13 +76,3 @@ docker run \
--env WATSONX_PROJECT_ID=$WATSONX_PROJECT_ID \
--env WATSONX_BASE_URL=$WATSONX_BASE_URL
```
-### Via Conda
-```bash
-llama stack build --template watsonx --image-type conda
-llama stack run ./run.yaml \
-  --port $LLAMA_STACK_PORT \
-  --env WATSONX_API_KEY=$WATSONX_API_KEY \
-  --env WATSONX_PROJECT_ID=$WATSONX_PROJECT_ID
-```

@@ -114,7 +114,7 @@ podman run --rm -it \
## Running Llama Stack
-Now you are ready to run Llama Stack with TGI as the inference provider. You can do this via Conda (build code) or Docker which has a pre-built image.
+Now you are ready to run Llama Stack with TGI as the inference provider. You can do this via venv or Docker which has a pre-built image.
### Via Docker
@@ -164,12 +164,12 @@ docker run \
--env CHROMA_URL=$CHROMA_URL
```
-### Via Conda
+### Via venv
Make sure you have done `pip install llama-stack` and have the Llama Stack CLI available.
```bash
-llama stack build --template dell --image-type conda
+llama stack build --template dell --image-type venv
llama stack run dell
--port $LLAMA_STACK_PORT \
--env INFERENCE_MODEL=$INFERENCE_MODEL \

@@ -70,7 +70,7 @@ $ llama model list --downloaded
## Running the Distribution
-You can do this via Conda (build code) or Docker which has a pre-built image.
+You can do this via venv or Docker which has a pre-built image.
### Via Docker
@@ -104,12 +104,12 @@ docker run \
--env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```
-### Via Conda
+### Via venv
Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.
```bash
-llama stack build --template meta-reference-gpu --image-type conda
+llama stack build --template meta-reference-gpu --image-type venv
llama stack run distributions/meta-reference-gpu/run.yaml \
--port 8321 \
--env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct

@@ -133,7 +133,7 @@ curl -X DELETE "$NEMO_URL/v1/deployment/model-deployments/meta/llama-3.1-8b-inst
## Running Llama Stack with NVIDIA
-You can do this via Conda or venv (build code), or Docker which has a pre-built image.
+You can do this via venv (build code), or Docker which has a pre-built image.
### Via Docker
@@ -152,17 +152,6 @@ docker run \
--env NVIDIA_API_KEY=$NVIDIA_API_KEY
```
-### Via Conda
-```bash
-INFERENCE_MODEL=meta-llama/Llama-3.1-8b-Instruct
-llama stack build --template nvidia --image-type conda
-llama stack run ./run.yaml \
-  --port 8321 \
-  --env NVIDIA_API_KEY=$NVIDIA_API_KEY \
-  --env INFERENCE_MODEL=$INFERENCE_MODEL
-```
### Via venv
If you've set up your local development environment, you can also build the image using your local virtual environment.

@@ -145,7 +145,7 @@ This distribution comes with a default "llama-guard" shield that can be enabled
## Running the Distribution
-You can run the starter distribution via Docker, Conda, or venv.
+You can run the starter distribution via Docker or venv.
### Via Docker
@@ -164,12 +164,12 @@ docker run \
--port $LLAMA_STACK_PORT
```
-### Via Conda or venv
+### Via venv
Ensure you have configured the starter distribution using the environment variables explained above.
```bash
-uv run --with llama-stack llama stack build --template starter --image-type <conda|venv> --run
+uv run --with llama-stack llama stack build --template starter --image-type venv --run
```
## Example Usage

@@ -11,12 +11,6 @@ This is the simplest way to get started. Using Llama Stack as a library means yo
Another simple way to start interacting with Llama Stack is to just spin up a container (via Docker or Podman) which is pre-built with all the providers you need. We provide a number of pre-built images so you can start a Llama Stack server instantly. You can also build your own custom container. Which distribution to choose depends on the hardware you have. See [Selection of a Distribution](selection) for more details.
-## Conda:
-If you have a custom or an advanced setup or you are developing on Llama Stack you can also build a custom Llama Stack server. Using `llama stack build` and `llama stack run` you can build/run a custom Llama Stack server containing the exact combination of providers you wish. We have also provided various templates to make getting started easier. See [Building a Custom Distribution](building_distro) for more details.
## Kubernetes:
If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally. See [Kubernetes Deployment Guide](kubernetes_deployment) for more details.

@@ -62,7 +62,7 @@ We use `starter` as template. By default all providers are disabled, this requir
llama stack build --template starter --image-type venv --run
```
:::
-:::{tab-item} Using `conda`
+:::{tab-item} Using `venv`
You can use Python to build and run the Llama Stack server, which is useful for testing and development.
Llama Stack uses a [YAML configuration file](../distributions/configuration.md) to specify the stack setup,
@@ -70,7 +70,7 @@ which defines the providers and their settings.
Now let's build and run the Llama Stack config for Ollama.
```bash
-llama stack build --template starter --image-type conda --run
+llama stack build --template starter --image-type venv --run
```
:::
:::{tab-item} Using a Container
@@ -150,10 +150,10 @@ pip install llama-stack-client
```
:::
-:::{tab-item} Install with `conda`
+:::{tab-item} Install with `venv`
```bash
-yes | conda create -n stack-client python=3.12
-conda activate stack-client
+python -m venv stack-client
+source stack-client/bin/activate  # On Windows: stack-client\Scripts\activate
pip install llama-stack-client
```
:::

@@ -19,11 +19,11 @@ You have two ways to install Llama Stack:
cd ~/local
git clone git@github.com:meta-llama/llama-stack.git
-conda create -n myenv python=3.10
-conda activate myenv
+python -m venv myenv
+source myenv/bin/activate  # On Windows: myenv\Scripts\activate
cd llama-stack
-$CONDA_PREFIX/bin/pip install -e .
+pip install -e .
## Downloading models via CLI

@@ -19,11 +19,11 @@ You have two ways to install Llama Stack:
cd ~/local
git clone git@github.com:meta-llama/llama-stack.git
-conda create -n myenv python=3.10
-conda activate myenv
+python -m venv myenv
+source myenv/bin/activate  # On Windows: myenv\Scripts\activate
cd llama-stack
-$CONDA_PREFIX/bin/pip install -e .
+pip install -e .
## `llama` subcommands

@@ -47,20 +47,20 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
## Install Dependencies and Set Up Environment
-1. **Create a Conda Environment**:
-   Create a new Conda environment with Python 3.12:
-   ```bash
-   conda create -n ollama python=3.12
-   ```
-   Activate the environment:
-   ```bash
-   conda activate ollama
-   ```
+1. **Install uv**:
+   Install [uv](https://docs.astral.sh/uv/) for managing dependencies:
+   ```bash
+   # macOS and Linux
+   curl -LsSf https://astral.sh/uv/install.sh | sh
+
+   # Windows
+   powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
+   ```
2. **Install ChromaDB**:
-   Install `chromadb` using `pip`:
+   Install `chromadb` using `uv`:
   ```bash
-   pip install chromadb
+   uv pip install chromadb
   ```
3. **Run ChromaDB**:
@@ -69,28 +69,21 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
chroma run --host localhost --port 8000 --path ./my_chroma_data
```
-4. **Install Llama Stack**:
-   Open a new terminal and install `llama-stack`:
-   ```bash
-   conda activate ollama
-   pip install -U llama-stack
-   ```
---
## Build, Configure, and Run Llama Stack
1. **Build the Llama Stack**:
-   Build the Llama Stack using the `ollama` template:
+   Build the Llama Stack using the `starter` template:
   ```bash
-   llama stack build --template starter --image-type conda
+   uv run --with llama-stack llama stack build --template starter --image-type venv
   ```
   **Expected Output:**
   ```bash
   ...
   Build Successful!
-   You can find the newly-built template here: ~/.llama/distributions/ollama/ollama-run.yaml
-   You can run the new Llama Stack Distro via: llama stack run ~/.llama/distributions/ollama/ollama-run.yaml --image-type conda
+   You can find the newly-built template here: ~/.llama/distributions/starter/starter-run.yaml
+   You can run the new Llama Stack Distro via: uv run --with llama-stack llama stack run starter --image-type venv
   ```
3. **Set the ENV variables by exporting them to the terminal**:
@@ -102,12 +95,13 @@ If you're looking for more specific topics, we have a [Zero to Hero Guide](#next
```
3. **Run the Llama Stack**:
-   Run the stack with command shared by the API from earlier:
+   Run the stack using uv:
   ```bash
-   llama stack run ollama
-      --port $LLAMA_STACK_PORT
-      --env INFERENCE_MODEL=$INFERENCE_MODEL
-      --env SAFETY_MODEL=$SAFETY_MODEL
+   uv run --with llama-stack llama stack run starter \
+      --image-type venv \
+      --port $LLAMA_STACK_PORT \
+      --env INFERENCE_MODEL=$INFERENCE_MODEL \
+      --env SAFETY_MODEL=$SAFETY_MODEL \
      --env OLLAMA_URL=$OLLAMA_URL
   ```
Note: Every time you run a new model with `ollama run`, you will need to restart the llama stack. Otherwise it won't see the new model.
@@ -120,7 +114,7 @@ After setting up the server, open a new terminal window and configure the llama-
1. Configure the CLI to point to the llama-stack server.
```bash
-llama-stack-client configure --endpoint http://localhost:8321
+uv run --with llama-stack-client llama-stack-client configure --endpoint http://localhost:8321
```
**Expected Output:**
```bash
@@ -128,7 +122,7 @@ After setting up the server, open a new terminal window and configure the llama-
```
2. Test the CLI by running inference:
```bash
-llama-stack-client inference chat-completion --message "Write me a 2-sentence poem about the moon"
+uv run --with llama-stack-client llama-stack-client inference chat-completion --message "Write me a 2-sentence poem about the moon"
```
**Expected Output:**
```bash
@@ -170,7 +164,7 @@ curl http://localhost:$LLAMA_STACK_PORT/alpha/inference/chat-completion
EOF
```
-You can check the available models with the command `llama-stack-client models list`.
+You can check the available models with the command `uv run --with llama-stack-client llama-stack-client models list`.
**Expected Output:**
```json
@@ -191,18 +185,12 @@ You can check the available models with the command `llama-stack-client models l
You can also interact with the Llama Stack server using a simple Python script. Below is an example:
-### 1. Activate Conda Environment
-```bash
-conda activate ollama
-```
-### 2. Create Python Script (`test_llama_stack.py`)
+### 1. Create Python Script (`test_llama_stack.py`)
```bash
touch test_llama_stack.py
```
-### 3. Create a Chat Completion Request in Python
+### 2. Create a Chat Completion Request in Python
In `test_llama_stack.py`, write the following code:
@@ -233,10 +221,10 @@ response = client.inference.chat_completion(
print(response.completion_message.content)
```
-### 4. Run the Python Script
+### 3. Run the Python Script
```bash
-python test_llama_stack.py
+uv run --with llama-stack-client python test_llama_stack.py
```
**Expected Output:**

@@ -69,9 +69,6 @@ def run_stack_build_command(args: argparse.Namespace) -> None:
    if args.image_type == ImageType.VENV.value:
        current_venv = os.environ.get("VIRTUAL_ENV")
        image_name = args.image_name or current_venv
-    elif args.image_type == ImageType.CONDA.value:
-        current_conda_env = os.environ.get("CONDA_DEFAULT_ENV")
-        image_name = args.image_name or current_conda_env
    else:
        image_name = args.image_name
@@ -132,7 +129,7 @@ def run_stack_build_command(args: argparse.Namespace) -> None:
    )
    if not args.image_type:
        cprint(
-            f"Please specify a image-type (container | conda | venv) for {args.template}",
+            f"Please specify a image-type (container | venv) for {args.template}",
            color="red",
            file=sys.stderr,
        )
@@ -158,21 +155,6 @@ def run_stack_build_command(args: argparse.Namespace) -> None:
        ),
    )
-    if image_type == ImageType.CONDA.value:
-        if not image_name:
-            cprint(
-                f"No current conda environment detected or specified, will create a new conda environment with the name `llamastack-{name}`",
-                color="yellow",
-                file=sys.stderr,
-            )
-            image_name = f"llamastack-{name}"
-        else:
-            cprint(
-                f"Using conda environment {image_name}",
-                color="green",
-                file=sys.stderr,
-            )
-    else:
    image_name = f"llamastack-{name}"
    cprint(
@@ -372,10 +354,7 @@ def _run_stack_build_command_from_build_config(
    else:
        if not image_name:
            raise ValueError("Please specify an image name when building a container image without a template")
-    elif build_config.image_type == LlamaStackImageType.CONDA.value:
-        if not image_name:
-            raise ValueError("Please specify an image name when building a conda image")
-    elif build_config.image_type == LlamaStackImageType.VENV.value:
+    else:
        if not image_name and os.environ.get("UV_SYSTEM_PYTHON"):
            image_name = "__system__"
        if not image_name:
@@ -431,7 +410,6 @@ def _run_stack_build_command_from_build_config(
    return_code = build_image(
        build_config,
-        build_file_path,
        image_name,
        template_or_config=template_name or config_path or str(build_file_path),
        run_config=run_config_file.as_posix() if run_config_file else None,

@@ -56,7 +56,7 @@ class StackBuild(Subcommand):
    "--image-name",
    type=str,
    help=textwrap.dedent(
-        f"""[for image-type={"|".join(e.value for e in ImageType)}] Name of the conda or virtual environment to use for
+        f"""[for image-type={"|".join(e.value for e in ImageType)}] Name of the virtual environment to use for
        the build. If not specified, currently active environment will be used if found.
        """
    ),

@@ -47,7 +47,8 @@ class StackRun(Subcommand):
        self.parser.add_argument(
            "--image-name",
            type=str,
-            help="Name of the image to run.",
+            default=None,
+            help="Name of the image to run. Defaults to the current environment",
        )
        self.parser.add_argument(
            "--env",
@@ -58,7 +59,7 @@ class StackRun(Subcommand):
        self.parser.add_argument(
            "--image-type",
            type=str,
-            help="Image Type used during the build. This can be either conda or container or venv.",
+            help="Image Type used during the build. This can be only venv.",
            choices=[e.value for e in ImageType if e.value != ImageType.CONTAINER.value],
        )
        self.parser.add_argument(
@@ -67,20 +68,38 @@ class StackRun(Subcommand):
            help="Start the UI server",
        )
-    # If neither image type nor image name is provided, but at the same time
-    # the current environment has conda breadcrumbs, then assume what the user
-    # wants to use conda mode and not the usual default mode (using
-    # pre-installed system packages).
-    #
-    # Note: yes, this is hacky. It's implemented this way to keep the existing
-    # conda users unaffected by the switch of the default behavior to using
-    # system packages.
-    def _get_image_type_and_name(self, args: argparse.Namespace) -> tuple[str, str]:
-        conda_env = os.environ.get("CONDA_DEFAULT_ENV")
-        if conda_env and args.image_name == conda_env:
-            logger.warning(f"Conda detected. Using conda environment {conda_env} for the run.")
-            return ImageType.CONDA.value, args.image_name
-        return args.image_type, args.image_name
+    def _resolve_config_and_template(self, args: argparse.Namespace) -> tuple[Path | None, str | None]:
+        """Resolve config file path and template name from args.config"""
+        from llama_stack.distribution.utils.config_dirs import DISTRIBS_BASE_DIR
+
+        if not args.config:
+            return None, None
+
+        config_file = Path(args.config)
+        has_yaml_suffix = args.config.endswith(".yaml")
+        template_name = None
+
+        if not config_file.exists() and not has_yaml_suffix:
+            # check if this is a template
+            config_file = Path(REPO_ROOT) / "llama_stack" / "templates" / args.config / "run.yaml"
+            if config_file.exists():
+                template_name = args.config
+
+        if not config_file.exists() and not has_yaml_suffix:
+            # check if it's a build config saved to ~/.llama dir
+            config_file = Path(DISTRIBS_BASE_DIR / f"llamastack-{args.config}" / f"{args.config}-run.yaml")
+
+        if not config_file.exists():
+            self.parser.error(
+                f"File {str(config_file)} does not exist.\n\nPlease run `llama stack build` to generate (and optionally edit) a run.yaml file"
+            )
+
+        if not config_file.is_file():
+            self.parser.error(
+                f"Config file must be a valid file path, '{config_file}' is not a file: type={type(config_file)}"
+            )
+
+        return config_file, template_name
    def _run_stack_run_cmd(self, args: argparse.Namespace) -> None:
        import yaml
@@ -90,7 +109,7 @@ class StackRun(Subcommand):
        if args.enable_ui:
            self._start_ui_development_server(args.port)
-        image_type, image_name = self._get_image_type_and_name(args)
+        image_type, image_name = args.image_type, args.image_name
        if args.config:
            try:
@@ -103,8 +122,8 @@ class StackRun(Subcommand):
            config_file = None
        # Check if config is required based on image type
-        if (image_type in [ImageType.CONDA.value, ImageType.VENV.value]) and not config_file:
-            self.parser.error("Config file is required for venv and conda environments")
+        if image_type == ImageType.VENV.value and not config_file:
+            self.parser.error("Config file is required for venv environment")
        if config_file:
            logger.info(f"Using run configuration: {config_file}")

@@ -8,7 +8,6 @@ from enum import Enum
class ImageType(Enum):
-    CONDA = "conda"
    CONTAINER = "container"
    VENV = "venv"

@@ -7,7 +7,6 @@
import importlib.resources
import logging
import sys
-from pathlib import Path
from pydantic import BaseModel
from termcolor import cprint
@@ -106,7 +105,6 @@ def print_pip_install_help(config: BuildConfig):
def build_image(
    build_config: BuildConfig,
-    build_file_path: Path,
    image_name: str,
    template_or_config: str,
    run_config: str | None = None,
@@ -138,18 +136,7 @@ def build_image(
        # build arguments
        if run_config is not None:
            args.extend(["--run-config", run_config])
-    elif build_config.image_type == LlamaStackImageType.CONDA.value:
-        script = str(importlib.resources.files("llama_stack") / "core/build_conda_env.sh")
-        args = [
-            script,
-            "--env-name",
-            str(image_name),
-            "--build-file-path",
-            str(build_file_path),
-            "--normal-deps",
-            " ".join(normal_deps),
-        ]
-    elif build_config.image_type == LlamaStackImageType.VENV.value:
+    else:
        script = str(importlib.resources.files("llama_stack") / "core/build_venv.sh")
        args = [
            script,

@@ -6,9 +6,6 @@
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
-# TODO: combine this with build_conda_env.sh since it is almost identical
-# the only difference is that we don't do any conda-specific setup
LLAMA_STACK_DIR=${LLAMA_STACK_DIR:-}
LLAMA_STACK_CLIENT_DIR=${LLAMA_STACK_CLIENT_DIR:-}
TEST_PYPI_VERSION=${TEST_PYPI_VERSION:-}
@@ -95,6 +92,8 @@ if [ -n "$LLAMA_STACK_CLIENT_DIR" ]; then
echo "Using llama-stack-client-dir=$LLAMA_STACK_CLIENT_DIR"
fi
+ENVNAME=""
# pre-run checks to make sure we can proceed with the installation
pre_run_checks() {
  local env_name="$1"

@@ -7,12 +7,10 @@
# the root directory of this source tree.
cleanup() {
-  envname="$1"
-  set +x
-  echo "Cleaning up..."
-  conda deactivate
-  conda env remove --name "$envname" -y
+  # For venv environments, no special cleanup is needed
+  # This function exists to avoid "function not found" errors
+  local env_name="$1"
+  echo "Cleanup called for environment: $env_name"
}
handle_int() {
@@ -31,19 +29,7 @@ handle_exit() {
  fi
}
-setup_cleanup_handlers() {
-  trap handle_int INT
-  trap handle_exit EXIT
-  if is_command_available conda; then
-    __conda_setup="$('conda' 'shell.bash' 'hook' 2>/dev/null)"
-    eval "$__conda_setup"
-    conda deactivate
-  else
-    echo "conda is not available"
-    exit 1
-  fi
-}
# check if a command is present
is_command_available() {

@@ -432,8 +432,8 @@ class BuildConfig(BaseModel):
    distribution_spec: DistributionSpec = Field(description="The distribution spec to build including API providers. ")
    image_type: str = Field(
-        default="conda",
-        description="Type of package to build (conda | container | venv)",
+        default="venv",
+        description="Type of package to build (container | venv)",
    )
    image_name: str | None = Field(
        default=None,

@@ -40,7 +40,6 @@ port="$1"
shift
SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
-source "$SCRIPT_DIR/common.sh"
# Initialize variables
yaml_config=""
@@ -75,9 +74,9 @@ while [[ $# -gt 0 ]]; do
esac
done
-# Check if yaml_config is required based on env_type
-if [[ "$env_type" == "venv" || "$env_type" == "conda" ]] && [ -z "$yaml_config" ]; then
-  echo -e "${RED}Error: --config is required for venv and conda environments${NC}" >&2
+# Check if yaml_config is required
+if [[ "$env_type" == "venv" ]] && [ -z "$yaml_config" ]; then
+  echo -e "${RED}Error: --config is required for venv environment${NC}" >&2
  exit 1
fi
@@ -101,19 +100,14 @@ case "$env_type" in
    source "$env_path_or_name/bin/activate"
  fi
  ;;
-  "conda")
-    if ! is_command_available conda; then
-      echo -e "${RED}Error: conda not found" >&2
-      exit 1
-    fi
-    eval "$(conda shell.bash hook)"
-    conda deactivate && conda activate "$env_path_or_name"
-    PYTHON_BINARY="$CONDA_PREFIX/bin/python"
-    ;;
*)
+  # Handle unsupported env_types here
+  echo -e "${RED}Error: Unsupported environment type '$env_type'. Only 'venv' is supported.${NC}" >&2
+  exit 1
+  ;;
esac
-if [[ "$env_type" == "venv" || "$env_type" == "conda" ]]; then
+if [[ "$env_type" == "venv" ]]; then
  set -x
  if [ -n "$yaml_config" ]; then

@@ -9,7 +9,7 @@
1. Start up Llama Stack API server. More details [here](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html).
```
-llama stack build --template together --image-type conda
+llama stack build --template together --image-type venv
llama stack run together
```

@@ -15,59 +15,10 @@ from termcolor import cprint
log = logging.getLogger(__name__)
import importlib
-import json
-from pathlib import Path
-from llama_stack.core.utils.image_types import LlamaStackImageType
-def formulate_run_args(image_type: str, image_name: str) -> list[str]:
-    env_name = ""
-    if image_type == LlamaStackImageType.CONDA.value:
-        current_conda_env = os.environ.get("CONDA_DEFAULT_ENV")
-        env_name = image_name or current_conda_env
-        if not env_name:
-            cprint(
-                "No current conda environment detected, please specify a conda environment name with --image-name",
-                color="red",
-                file=sys.stderr,
-            )
-            return
-        def get_conda_prefix(env_name):
-            # Conda "base" environment does not end with "base" in the
-            # prefix, so should be handled separately.
-            if env_name == "base":
-                return os.environ.get("CONDA_PREFIX")
-            # Get conda environments info
-            conda_env_info = json.loads(subprocess.check_output(["conda", "info", "--envs", "--json"]).decode())
-            envs = conda_env_info["envs"]
-            for envpath in envs:
-                if os.path.basename(envpath) == env_name:
-                    return envpath
-            return None
-        cprint(f"Using conda environment: {env_name}", color="green", file=sys.stderr)
-        conda_prefix = get_conda_prefix(env_name)
-        if not conda_prefix:
-            cprint(
-                f"Conda environment {env_name} does not exist.",
-                color="red",
-                file=sys.stderr,
-            )
-            return
-        build_file = Path(conda_prefix) / "llamastack-build.yaml"
-        if not build_file.exists():
-            cprint(
-                f"Build file {build_file} does not exist.\n\nPlease run `llama stack build` or specify the correct conda environment name with --image-name",
-                color="red",
-                file=sys.stderr,
-            )
-            return
-    else:
-        # else must be venv since that is the only valid option left.
+def formulate_run_args(image_type: str, image_name: str) -> list:
+    # Only venv is supported now
    current_venv = os.environ.get("VIRTUAL_ENV")
    env_name = image_name or current_venv
    if not env_name:
@@ -76,7 +27,8 @@ def formulate_run_args(image_type: str, image_name: str) -> list[str]:
            color="red",
            file=sys.stderr,
        )
-        return
+        return []
    cprint(f"Using virtual environment: {env_name}", file=sys.stderr)
    script = importlib.resources.files("llama_stack") / "core/start_stack.sh"
@@ -93,7 +45,8 @@ def in_notebook():
    try:
        from IPython import get_ipython
-        if "IPKernelApp" not in get_ipython().config:  # pragma: no cover
+        ipython = get_ipython()
+        if ipython is None or "IPKernelApp" not in ipython.config:  # pragma: no cover
            return False
    except ImportError:
        return False

@@ -9,5 +9,4 @@ import enum
class LlamaStackImageType(enum.Enum):
    CONTAINER = "container"
-    CONDA = "conda"
    VENV = "venv"

@@ -20,7 +20,7 @@ This provider enables dataset management using NVIDIA's NeMo Customizer service.
Build the NVIDIA environment:
```bash
-llama stack build --template nvidia --image-type conda
+llama stack build --template nvidia --image-type venv
```
### Basic Usage using the LlamaStack Python Client

@@ -18,7 +18,7 @@ This provider enables running inference using NVIDIA NIM.
Build the NVIDIA environment:
```bash
-llama stack build --template nvidia --image-type conda
+llama stack build --template nvidia --image-type venv
```
### Basic Usage using the LlamaStack Python Client

@@ -22,7 +22,7 @@ This provider enables fine-tuning of LLMs using NVIDIA's NeMo Customizer service
Build the NVIDIA environment:
```bash
-llama stack build --template nvidia --image-type conda
+llama stack build --template nvidia --image-type venv
```
### Basic Usage using the LlamaStack Python Client

@@ -19,7 +19,7 @@ This provider enables safety checks and guardrails for LLM interactions using NV
Build the NVIDIA environment:
```bash
-llama stack build --template nvidia --image-type conda
+llama stack build --template nvidia --image-type venv
```
### Basic Usage using the LlamaStack Python Client

@@ -47,8 +47,7 @@ distribution_spec:
  - provider_type: remote::tavily-search
  - provider_type: inline::rag-runtime
  - provider_type: remote::model-context-protocol
-image_type: conda
-image_name: ci-tests
+image_type: venv
additional_pip_packages:
- aiosqlite
- asyncpg

@@ -29,8 +29,7 @@ distribution_spec:
  - provider_type: remote::brave-search
  - provider_type: remote::tavily-search
  - provider_type: inline::rag-runtime
-image_type: conda
-image_name: dell
+image_type: venv
additional_pip_packages:
- aiosqlite
- sqlalchemy[asyncio]

@@ -28,8 +28,7 @@ distribution_spec:
  - provider_type: remote::tavily-search
  - provider_type: inline::rag-runtime
  - provider_type: remote::model-context-protocol
-image_type: conda
-image_name: meta-reference-gpu
+image_type: venv
additional_pip_packages:
- aiosqlite
- sqlalchemy[asyncio]

@@ -58,7 +58,7 @@ $ llama model list --downloaded
## Running the Distribution
-You can do this via Conda (build code) or Docker which has a pre-built image.
+You can do this via venv or Docker which has a pre-built image.
### Via Docker
@@ -92,12 +92,12 @@ docker run \
--env SAFETY_MODEL=meta-llama/Llama-Guard-3-1B
```
-### Via Conda
+### Via venv
Make sure you have done `uv pip install llama-stack` and have the Llama Stack CLI available.
```bash
-llama stack build --template {{ name }} --image-type conda
+llama stack build --template {{ name }} --image-type venv
llama stack run distributions/{{ name }}/run.yaml \
--port 8321 \
--env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct

@@ -23,8 +23,7 @@ distribution_spec:
  - provider_type: inline::basic
  tool_runtime:
  - provider_type: inline::rag-runtime
-image_type: conda
-image_name: nvidia
+image_type: venv
additional_pip_packages:
- aiosqlite
- sqlalchemy[asyncio]

@@ -105,7 +105,7 @@ curl -X DELETE "$NEMO_URL/v1/deployment/model-deployments/meta/llama-3.1-8b-inst
## Running Llama Stack with NVIDIA
-You can do this via Conda or venv (build code), or Docker which has a pre-built image.
+You can do this via venv (build code), or Docker which has a pre-built image.
### Via Docker
@@ -124,17 +124,6 @@ docker run \
--env NVIDIA_API_KEY=$NVIDIA_API_KEY
```
-### Via Conda
-```bash
-INFERENCE_MODEL=meta-llama/Llama-3.1-8b-Instruct
-llama stack build --template nvidia --image-type conda
-llama stack run ./run.yaml \
-  --port 8321 \
-  --env NVIDIA_API_KEY=$NVIDIA_API_KEY \
-  --env INFERENCE_MODEL=$INFERENCE_MODEL
-```
### Via venv
If you've set up your local development environment, you can also build the image using your local virtual environment.

@@ -32,8 +32,7 @@ distribution_spec:
  - provider_type: remote::tavily-search
  - provider_type: inline::rag-runtime
  - provider_type: remote::model-context-protocol
-image_type: conda
-image_name: open-benchmark
+image_type: venv
additional_pip_packages:
- aiosqlite
- sqlalchemy[asyncio]

@@ -18,8 +18,7 @@ distribution_spec:
  - provider_type: remote::tavily-search
  - provider_type: inline::rag-runtime
  - provider_type: remote::model-context-protocol
-image_type: conda
-image_name: postgres-demo
+image_type: venv
additional_pip_packages:
- asyncpg
- psycopg2-binary

@@ -47,8 +47,7 @@ distribution_spec:
  - provider_type: remote::tavily-search
  - provider_type: inline::rag-runtime
  - provider_type: remote::model-context-protocol
-image_type: conda
-image_name: starter
+image_type: venv
additional_pip_packages:
- aiosqlite
- asyncpg

@@ -29,6 +29,7 @@ from llama_stack.core.datatypes import (
)
from llama_stack.core.distribution import get_provider_registry
from llama_stack.core.utils.dynamic import instantiate_class_type
+from llama_stack.core.utils.image_types import LlamaStackImageType
from llama_stack.providers.utils.inference.model_registry import ProviderModelEntry
from llama_stack.providers.utils.kvstore.config import SqliteKVStoreConfig
from llama_stack.providers.utils.kvstore.config import get_pip_packages as get_kv_pip_packages
@@ -314,8 +315,7 @@ class DistributionTemplate(BaseModel):
                container_image=self.container_image,
                providers=build_providers,
            ),
-            image_type="conda",
-            image_name=self.name,
+            image_type=LlamaStackImageType.VENV.value,  # default to venv
            additional_pip_packages=sorted(set(additional_pip_packages)),
        )

@@ -35,16 +35,11 @@ distribution_spec:
  - provider_id: braintrust
    provider_type: inline::braintrust
  tool_runtime:
-  - provider_id: brave-search
-    provider_type: remote::brave-search
-  - provider_id: tavily-search
-    provider_type: remote::tavily-search
-  - provider_id: rag-runtime
-    provider_type: inline::rag-runtime
-  - provider_id: model-context-protocol
-    provider_type: remote::model-context-protocol
-image_type: conda
-image_name: watsonx
+  - provider_type: remote::brave-search
+  - provider_type: remote::tavily-search
+  - provider_type: inline::rag-runtime
+  - provider_type: remote::model-context-protocol
+image_type: venv
additional_pip_packages:
- sqlalchemy[asyncio]
- aiosqlite

@@ -16,7 +16,7 @@ from llama_stack.core.utils.image_types import LlamaStackImageType
def test_container_build_passes_path(monkeypatch, tmp_path):
    called_with = {}
-    def spy_build_image(cfg, build_file_path, image_name, template_or_config, run_config=None):
+    def spy_build_image(build_config, image_name, template_or_config, run_config=None):
        called_with["path"] = template_or_config
        called_with["run_config"] = run_config
        return 0