llama-stack-mirror/tests/unit/providers/inference
Charlie Doern 7a9c32f737 feat!: standardize base_url for inference
Completes #3732 by removing runtime URL transformations and requiring
users to provide full URLs in their configuration. All providers now use
'base_url' consistently and respect the exact URL provided, without
appending paths like /v1 or /openai/v1 at runtime.
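
For illustration, a minimal sketch of a provider config under this scheme, in
pydantic terms; the class name is hypothetical, and only the 'base_url' field
and its HttpUrl | None type come from this change:

    from pydantic import BaseModel, HttpUrl

    class ExampleRemoteInferenceConfig(BaseModel):
        # Full endpoint URL, including any path such as /v1. It is used
        # verbatim; the client never appends /v1 or /openai/v1 at runtime.
        base_url: HttpUrl | None = None

    # The URL must already carry its path:
    cfg = ExampleRemoteInferenceConfig(base_url="http://localhost:11434/v1")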

Add a unit test to enforce URL standardization across remote inference providers (verifies that every provider config uses a 'base_url' field typed HttpUrl | None).
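
A rough sketch of the kind of check such a test can perform; how the real test
collects the provider config classes is not shown here, so the registry list
below is a stand-in:

    from pydantic import BaseModel, HttpUrl

    class ExampleProviderConfig(BaseModel):  # stand-in for a real config class
        base_url: HttpUrl | None = None

    REMOTE_PROVIDER_CONFIGS = [ExampleProviderConfig]  # hypothetical registry

    def test_base_url_standardization():
        for config_cls in REMOTE_PROVIDER_CONFIGS:
            field = config_cls.model_fields.get("base_url")
            assert field is not None, f"{config_cls.__name__} lacks 'base_url'"
            # pydantic v2 keeps the declared annotation on the FieldInfo
            assert field.annotation == (HttpUrl | None)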

BREAKING CHANGE: Users must update configs to include full URL paths
(e.g., http://localhost:11434/v1 instead of http://localhost:11434).
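
As a before/after sketch (the dict layout is illustrative; the URL values are
the ones from this commit message):

    # Before: the client appended a path such as /v1 at runtime.
    old_config = {"base_url": "http://localhost:11434"}

    # After: the full path is part of the configured URL and is used as-is.
    new_config = {"base_url": "http://localhost:11434/v1"}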

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-11-18 09:42:29 -05:00
Name                               Last commit                                                   Date
bedrock                            fix: use lambda pattern for bedrock config env vars (#3307)   2025-09-05 10:45:11 +02:00
test_bedrock_adapter.py            fix: rename llama_stack_api dir (#4155)                       2025-11-13 15:04:36 -08:00
test_bedrock_config.py             feat: add OpenAI-compatible Bedrock provider (#3748)          2025-11-06 17:18:18 -08:00
test_inference_client_caching.py   feat!: standardize base_url for inference                     2025-11-18 09:42:29 -05:00
test_litellm_openai_mixin.py       fix(tests): reduce some test noise (#3825)                    2025-10-16 09:52:16 -07:00
test_openai_base_url_config.py     chore: turn OpenAIMixin into a pydantic.BaseModel (#3671)     2025-10-06 11:33:19 -04:00
test_remote_vllm.py                feat!: standardize base_url for inference                     2025-11-18 09:42:29 -05:00