Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-27 18:50:41 +00:00)
# What does this PR do?

Closes #2495

Changes:
- Delay the `COPY run.yaml` step of the Docker image build until after external provider handling
- Split the check for `external_providers_dir` into "non-empty" and "directory exists"

## Test Plan

0. Create and activate a venv.
1. Create a `simple_build.yaml`:
   ```yaml
   version: '2'
   distribution_spec:
     providers:
       inference:
       - remote::openai
   image_type: container
   image_name: openai-stack
   ```
2. Run llama stack build:
   ```bash
   llama stack build --config simple_build.yaml
   ```
3. Run the Docker container:
   ```bash
   docker run \
     -p 8321:8321 \
     -e OPENAI_API_KEY=$OPENAI_API_KEY \
     openai_stack:0.2.12
   ```

This should show the server running:

```
INFO 2025-06-23 19:07:57,832 llama_stack.distribution.distribution:151 core: Loading external providers from /.llama/providers.d
INFO 2025-06-23 19:07:59,324 __main__:572 server: Listening on ['::', '0.0.0.0']:8321
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO 2025-06-23 19:07:59,336 __main__:156 server: Starting up
INFO:     Application startup complete.
INFO:     Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
```

Notice the first line:

```
Loading external providers from /.llama/providers.d
```

This is the expected behaviour.

Co-authored-by: Rohan Awhad <rawhad@redhat.com>
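The "non-empty" vs. "directory exists" split described above can be sketched as a small standalone helper. This is a minimal illustration, not the actual llama-stack code: the function name `resolve_external_providers_dir` and its return convention are hypothetical.

```python
import os
from typing import Optional


def resolve_external_providers_dir(external_providers_dir: Optional[str]) -> Optional[str]:
    """Hypothetical helper illustrating the split check: the setting must be
    non-empty AND the path must be an existing directory before it is used."""
    # Check 1: the config value is set and non-empty
    if not external_providers_dir:
        return None
    path = os.path.expanduser(external_providers_dir)
    # Check 2: the path actually exists on disk as a directory
    if not os.path.isdir(path):
        return None
    return path
```

Keeping the two checks separate lets the build logic distinguish "no external providers configured" from "configured but missing on disk", which matters for deciding when to run the external-provider handling before the `COPY run.yaml` step.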