Commit graph

8 commits

Author SHA1 Message Date
Sébastien Han
9654dd9da1
refactor(env)!: enhanced environment variable substitution
This commit significantly improves the environment variable substitution
functionality in Llama Stack configuration files:
* The version field in configuration files has been changed from string
  to integer type for better type consistency across build and run
  configurations.

* The environment variable substitution system for ${env.FOO:} was fixed
  and now properly returns an error.

* The environment variable substitution system for ${env.FOO+} now
  returns None instead of an empty string, which better matches the type
  annotations of config fields.

* The system includes automatic type conversion for boolean, integer,
  and float values.

* The error messages have been enhanced to provide clearer guidance when
  environment variables are missing, including suggestions for using
  default values or conditional syntax.

* Comprehensive documentation has been added to the configuration guide
  explaining all supported syntax patterns, best practices, and runtime
  override capabilities.

* Multiple provider configurations have been updated to use the new
  conditional syntax for optional API keys, making the system more
  flexible for different deployment scenarios. The telemetry
  configuration has been improved to properly handle optional endpoints
  with appropriate validation, ensuring that required endpoints are
  specified when their corresponding sinks are enabled.

* There were many instances of ${env.NVIDIA_API_KEY:} that should have
  caused the code to fail. However, due to a bug, the distro server was
  still being started, and early validation wasn’t triggered. As a
  result, failures were likely being handled downstream by the
  providers. I’ve maintained similar behavior by using
  ${env.NVIDIA_API_KEY:+}, though I believe this is incorrect for many
  configurations. I’ll leave it to each provider to correct it as
  needed.

* Environment variable substitution now uses the same syntax as Bash
  parameter expansion.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-06-25 15:59:04 +02:00
Sébastien Han
6ee319ae08
fix: convert boolean string to boolean (#2284)
# What does this PR do?

Handles the case where the vllm config `tls_verify` is set to the string
`false` or `true` rather than a boolean.

Closes: https://github.com/meta-llama/llama-stack/issues/2283
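
A sketch of the idea behind the fix (the real change lives in the provider's config parsing; this helper is illustrative only):

```python
def str_to_bool(value):
    """Interpret YAML/env values that may arrive as the strings 'true'/'false'."""
    if isinstance(value, bool):
        return value
    lowered = str(value).strip().lower()
    if lowered == "true":
        return True
    if lowered == "false":
        return False
    raise ValueError(f"cannot interpret {value!r} as a boolean")
```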

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-05-27 13:05:38 -07:00
Sébastien Han
39b33a3b01
chore: allow to pass CA cert to remote vllm (#2266)
# What does this PR do?

The `tls_verify` can now receive a path to a certificate file if the
endpoint requires it.
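
Since `tls_verify` now accepts either a boolean or a CA-bundle path, validation might look like the following sketch (illustrative, not the provider's actual code; HTTP clients such as httpx accept either form for their `verify` argument):

```python
import os

def validate_tls_verify(value):
    """Sketch: tls_verify may be a bool, a 'true'/'false' string, or a CA path."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        lowered = value.strip().lower()
        if lowered in ("true", "false"):
            return lowered == "true"
        if os.path.exists(value):  # treat anything else as a CA certificate path
            return value
        raise ValueError(f"TLS CA certificate file not found: {value}")
    raise TypeError("tls_verify must be a boolean or a certificate path")
```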

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-05-26 20:59:03 +02:00
Ihar Hrachyshka
9e6561a1ec
chore: enable pyupgrade fixes (#1806)
# What does this PR do?

The goal of this PR is code base modernization.

Schema reflection code needed a minor adjustment to handle UnionTypes
and collections.abc.AsyncIterator. (Both are preferred for latest Python
releases.)

Note to reviewers: almost all changes here are automatically generated
by pyupgrade. Some additional unused imports were cleaned up. The only
change worth of note can be found under `docs/openapi_generator` and
`llama_stack/strong_typing/schema.py` where reflection code was updated
to deal with "newer" types.

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-05-01 14:23:50 -07:00
Luis Tomas Bolivar
168cbcbb92
fix: Add the option to not verify SSL at remote-vllm provider (#1585)
# What does this PR do?
Add the option to not verify SSL certificates for the remote-vllm
provider. This allows the llama stack server to talk to remote LLMs
that have self-signed certificates.

Partially addresses #1545
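
Under the hood, skipping verification for a self-signed endpoint amounts to something like this stdlib sketch (names illustrative; the provider itself goes through its HTTP client's `verify` option):

```python
import ssl

def make_ssl_context(tls_verify: bool = True) -> ssl.SSLContext:
    """Build an SSL context; disable verification only for self-signed endpoints."""
    ctx = ssl.create_default_context()
    if not tls_verify:
        ctx.check_hostname = False   # hostname check must be off before CERT_NONE
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```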
2025-03-18 09:33:35 -04:00
Ashwin Bharambe
314ee09ae3
chore: move all Llama Stack types from llama-models to llama-stack (#1098)
llama-models should have extremely minimal cruft. Its sole purpose
should be didactic -- show the simplest implementation of the llama
models and document the prompt formats, etc.

This PR is the complement to
https://github.com/meta-llama/llama-models/pull/279

## Test Plan

Ensure all `llama` CLI `model` sub-commands work:

```bash
llama model list
llama model download --model-id ...
llama model prompt-format -m ...
```

Ran tests:
```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=fireworks pytest -s -v inference/
LLAMA_STACK_CONFIG=fireworks pytest -s -v vector_io/
LLAMA_STACK_CONFIG=fireworks pytest -s -v agents/
```

Create a fresh venv `uv venv && source .venv/bin/activate` and run
`llama stack build --template fireworks --image-type venv` followed by
`llama stack run together --image-type venv` <-- the server runs

Also checked that the OpenAPI generator can run and there is no change
in the generated files as a result.

```bash
cd docs/openapi_generator
sh run_openapi_generator.sh
```
2025-02-14 09:10:59 -08:00
Ashwin Bharambe
2a31163178
Auto-generate distro yamls + docs (#468)
# What does this PR do?

Automatically generates
- build.yaml
- run.yaml
- run-with-safety.yaml
- parts of markdown docs

for the distributions.

## Test Plan

At this point, this only updates the YAMLs and the docs. Some testing
(especially with ollama and vllm) has been performed but needs to be
much more tested.
2024-11-18 14:57:06 -08:00
Ashwin Bharambe
994732e2e0
impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00
Renamed from llama_stack/providers/adapters/inference/vllm/config.py