llama-stack-mirror/src/llama_stack/providers
Charlie Doern 7a9c32f737 feat!: standardize base_url for inference
Completes #3732 by removing runtime URL transformations and requiring
users to provide full URLs in configuration. All providers now use
'base_url' consistently and respect the exact URL provided without
appending paths like /v1 or /openai/v1 at runtime.

Add unit test to enforce URL standardization across remote inference providers (verifies that all use a 'base_url' field typed HttpUrl | None)
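The check described above can be sketched as follows. This is an illustrative sketch, not the actual test from the PR: `OllamaImplConfig` is a hypothetical stand-in for a provider config class, and `HttpUrl` is aliased to `str` here so the snippet runs without pydantic (the real configs use pydantic's `HttpUrl`).

```python
# Sketch of the URL-standardization check: every remote inference provider
# config should declare 'base_url' annotated as HttpUrl | None.
from typing import Optional, get_type_hints

HttpUrl = str  # stand-in for pydantic.HttpUrl so this sketch is self-contained


class OllamaImplConfig:  # hypothetical provider config, for illustration only
    base_url: Optional[HttpUrl] = None


def uses_standard_base_url(config_cls: type) -> bool:
    """Return True if the class annotates 'base_url' as HttpUrl | None."""
    hints = get_type_hints(config_cls)
    return hints.get("base_url") == Optional[HttpUrl]


assert uses_standard_base_url(OllamaImplConfig)
```

A real version would iterate over every registered remote inference provider's config class and assert the same condition for each.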

BREAKING CHANGE: Users must update configs to include full URL paths
(e.g., http://localhost:11434/v1 instead of http://localhost:11434).
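A minimal sketch of the required config change, using the Ollama example above (the surrounding `run.yaml` field names are illustrative, not taken from this PR):

```yaml
providers:
  inference:
    - provider_id: ollama
      provider_type: remote::ollama
      config:
        # before: base_url: http://localhost:11434  (/v1 was appended at runtime)
        # after: the full path must be written out; the URL is used exactly as given
        base_url: http://localhost:11434/v1
```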

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-11-18 09:42:29 -05:00
inline fix: Propagate the runtime error message to user (#4150) 2025-11-14 13:14:49 -08:00
registry fix: rename llama_stack_api dir (#4155) 2025-11-13 15:04:36 -08:00
remote feat!: standardize base_url for inference 2025-11-18 09:42:29 -05:00
utils fix: MCP authorization parameter implementation (#4052) 2025-11-14 08:54:42 -08:00
__init__.py chore(package): migrate to src/ layout (#3920) 2025-10-27 12:02:21 -07:00