llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Akram Ben Aissi 5e74bc7fcf Add dynamic authentication token forwarding support for vLLM provider
This adds per-request authentication tokens for the vLLM provider, supporting use cases such as RAG operations where different requests need different credentials. The implementation follows the same pattern as the Together AI, Fireworks, and Passthrough providers.

- Adopt LiteLLMOpenAIMixin so the vllm_api_token is resolved per request, from either the request's provider data or the static config value (see the sketch below)
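
A minimal sketch of the resolution pattern described above, assuming a provider-data dict already parsed from the X-LlamaStack-Provider-Data header; the class and method names here are illustrative, not the exact llama_stack API:

```python
# Hypothetical sketch of per-request token resolution; VLLMTokenResolver and
# its methods are assumptions for illustration, not the actual adapter code.

class VLLMTokenResolver:
    """Resolve the bearer token for an outgoing vLLM request.

    Preference order (assumed):
      1. per-request token from the X-LlamaStack-Provider-Data header
      2. static token from provider config (VLLM_API_TOKEN / config.api_token)
    """

    def __init__(self, config_api_token: str | None):
        self.config_api_token = config_api_token

    def resolve(self, provider_data: dict | None) -> str | None:
        # Dynamic path: the caller supplied {"vllm_api_token": "..."} in the
        # provider-data header for this request.
        if provider_data and provider_data.get("vllm_api_token"):
            return provider_data["vllm_api_token"]
        # Static path: fall back to the token configured at stack startup.
        return self.config_api_token
```

Falling back from the per-request token to the static one keeps existing deployments working unchanged.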

Usage:

- Static: set the VLLM_API_TOKEN environment variable or config.api_token
- Dynamic: send a per-request X-LlamaStack-Provider-Data header carrying a vllm_api_token field (example below)

All existing functionality is preserved; the dynamic token path is purely additive.
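
For the dynamic path, the client JSON-encodes the provider data into the header. A hedged example using plain requests; the endpoint path, port, and model ID are illustrative and depend on your deployment:

```python
import json

import requests

# Per-request vLLM token, forwarded via the provider-data header.
provider_data = {"vllm_api_token": "my-request-token"}

resp = requests.post(
    "http://localhost:8321/v1/inference/chat-completion",  # adjust to your stack
    headers={
        "Content-Type": "application/json",
        "X-LlamaStack-Provider-Data": json.dumps(provider_data),
    },
    json={
        "model_id": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```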

Signed-off-by: Akram Ben Aissi <akram.benaissi@gmail.com>
2025-09-15 13:01:12 +01:00
__init__.py Add dynamic authentication token forwarding support for vLLM provider 2025-09-15 13:01:12 +01:00
config.py feat(registry): make the Stack query providers for model listing (#2862) 2025-07-24 10:39:53 -07:00
vllm.py Add dynamic authentication token forwarding support for vLLM provider 2025-09-15 13:01:12 +01:00