llama-stack-mirror/tests/unit/providers/inference
Akram Ben Aissi 5e74bc7fcf Add dynamic authentication token forwarding support for vLLM provider
This enables per-request authentication tokens for the vLLM provider, supporting use cases like RAG operations where different requests may need different authentication tokens. The implementation follows the same pattern as other providers such as Together AI, Fireworks, and Passthrough.

- Add LiteLLMOpenAIMixin integration so that vllm_api_token is managed properly (see the sketch below)

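As a rough sketch of the token-resolution order this enables, the per-request token, when present, takes precedence over the static one, matching the pattern in the other providers. The snippet below is illustrative only, not the actual adapter code; everything other than vllm_api_token, VLLM_API_TOKEN, config.api_token, and the X-LlamaStack-Provider-Data header is assumed:

```python
# Illustrative sketch only, not the adapter implementation: resolve the
# vLLM API token, preferring a per-request token from the provider-data
# header over the static config/env value.
import json
import os
from typing import Optional


def resolve_vllm_api_token(
    config_api_token: Optional[str],
    provider_data_header: Optional[str],
) -> Optional[str]:
    # Dynamic: X-LlamaStack-Provider-Data header carrying vllm_api_token
    if provider_data_header:
        data = json.loads(provider_data_header)
        token = data.get("vllm_api_token")
        if token:
            return token
    # Static: config.api_token, typically populated from VLLM_API_TOKEN
    return config_api_token or os.environ.get("VLLM_API_TOKEN")
```
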
Usage:

- Static: VLLM_API_TOKEN env var or config.api_token
- Dynamic: X-LlamaStack-Provider-Data header carrying vllm_api_token (see the request example below)

All existing functionality is preserved while adding the new dynamic capability.
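
For the dynamic case, a request might attach the header as shown below. The header name and the vllm_api_token key come from the change itself; the endpoint path, port, and model id are assumptions for illustration:

```python
# Hedged example: forward a per-request vLLM token via the
# X-LlamaStack-Provider-Data header. Endpoint and model id are assumed.
import json

import requests

headers = {
    "Content-Type": "application/json",
    "X-LlamaStack-Provider-Data": json.dumps(
        {"vllm_api_token": "sk-my-per-request-token"}  # hypothetical token
    ),
}
resp = requests.post(
    "http://localhost:8321/v1/inference/chat-completion",  # assumed route
    headers=headers,
    json={
        "model_id": "meta-llama/Llama-3.1-8B-Instruct",  # assumed model
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```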

Signed-off-by: Akram Ben Aissi <akram.benaissi@gmail.com>
2025-09-15 13:01:12 +01:00
bedrock                          | fix: use lambda pattern for bedrock config env vars (#3307) | 2025-09-05 10:45:11 +02:00
test_inference_client_caching.py | chore: update the groq inference impl to use openai-python for openai-compat functions (#3348) | 2025-09-06 15:36:27 -07:00
test_litellm_openai_mixin.py     | feat: Add clear error message when API key is missing (#2992) | 2025-07-31 16:33:16 -04:00
test_openai_base_url_config.py   | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00
test_remote_vllm.py              | Add dynamic authentication token forwarding support for vLLM provider | 2025-09-15 13:01:12 +01:00