llama-stack-mirror/tests/unit/providers
Charlie Doern 7a9c32f737 feat!: standardize base_url for inference
Completes #3732 by removing runtime URL transformations and requiring
users to provide full URLs in configuration. All providers now use
'base_url' consistently and respect the exact URL provided without
appending paths like /v1 or /openai/v1 at runtime.

Add a unit test enforcing URL standardization across remote inference providers (verifies that all use the 'base_url' field with type HttpUrl | None).

BREAKING CHANGE: Users must update configs to include full URL paths
(e.g., http://localhost:11434/v1 instead of http://localhost:11434).

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-11-18 09:42:29 -05:00
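The behavior change described above can be sketched in a few lines (hypothetical helper names; the real providers build their clients from config, this only illustrates the before/after URL handling):

```python
# Hypothetical sketch of the change: previously, providers transformed the
# configured URL at runtime (e.g. appending /v1); now the configured
# base_url is respected exactly as provided.

def legacy_endpoint(url: str) -> str:
    """Old behavior (removed): append a path like /v1 at runtime."""
    return url.rstrip("/") + "/v1"

def endpoint(base_url: str) -> str:
    """New behavior: use the exact URL from the config, unmodified."""
    return base_url

# Users must now put the full path in their config:
print(legacy_endpoint("http://localhost:11434"))  # old config style -> http://localhost:11434/v1
print(endpoint("http://localhost:11434/v1"))      # new config style -> http://localhost:11434/v1
```

With this change, a config pointing at `http://localhost:11434` no longer has `/v1` appended for it; the user must write `http://localhost:11434/v1` explicitly.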
agents/meta_reference test: Restore responses unit tests (#4153) 2025-11-14 13:16:03 -08:00
batches fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
files fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
inference feat!: standardize base_url for inference 2025-11-18 09:42:29 -05:00
inline fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
nvidia feat!: standardize base_url for inference 2025-11-18 09:42:29 -05:00
utils fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
vector_io fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
test_bedrock.py fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
test_configs.py feat!: standardize base_url for inference 2025-11-18 09:42:29 -05:00