llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Matthew Farrellee  f754e1b65b  chore: remove deprecated inference.chat_completion implementations  (2025-10-02 10:39:30 -04:00)

vllm:
 - requires max_tokens to be set; fall back to the config value when unset
 - set tool_choice to "none" if no tools are provided
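The two bullets describe request-path behavior in the vLLM adapter. A minimal sketch of that logic, assuming an OpenAI-style params dict and a configured default; the helper name and signature are illustrative, not the provider's actual code:

    def prepare_vllm_params(params: dict, configured_max_tokens: int) -> dict:
        """Hypothetical helper mirroring the behavior the commit describes."""
        # vLLM requires max_tokens to be set; when the caller omits it,
        # fall back to the value from the provider config.
        if params.get("max_tokens") is None:
            params["max_tokens"] = configured_max_tokens

        # A tool_choice without any tools is invalid, so disable tool
        # selection explicitly when no tools were provided.
        if not params.get("tools"):
            params["tool_choice"] = "none"
        return params

For example, prepare_vllm_params({"messages": []}, configured_max_tokens=4096) would yield a request with max_tokens=4096 and tool_choice="none".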
__init__.py   feat: Add dynamic authentication token forwarding support for vLLM (#3388)   2025-09-18 11:13:55 +02:00
config.py     feat(registry): make the Stack query providers for model listing (#2862)     2025-07-24 10:39:53 -07:00
vllm.py       chore: remove deprecated inference.chat_completion implementations           2025-10-02 10:39:30 -04:00
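Per the __init__.py entry, #3388 added dynamic authentication token forwarding. A sketch of the idea, assuming the provider prefers a per-request bearer token over the statically configured one; the factory name and the httpx usage are assumptions, not the PR's actual code:

    import httpx


    def build_vllm_client(base_url: str,
                          config_token: str | None,
                          request_token: str | None) -> httpx.Client:
        """Hypothetical client factory: forward a caller-supplied token
        when present, otherwise fall back to the static config token."""
        token = request_token or config_token
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        return httpx.Client(base_url=base_url, headers=headers)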
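The config.py entry references #2862, which made the Stack query providers for their model list rather than relying solely on static registration. Since vLLM serves the OpenAI-compatible /v1/models endpoint, a discovery call might look like this sketch; the function name is illustrative:

    import httpx


    def list_vllm_models(base_url: str, token: str | None = None) -> list[str]:
        """Query vLLM's OpenAI-compatible /v1/models endpoint and return
        the available model IDs."""
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        resp = httpx.get(f"{base_url}/v1/models", headers=headers)
        resp.raise_for_status()
        return [model["id"] for model in resp.json()["data"]]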