llama-stack-mirror/llama_stack/providers/inline/inference/vllm

Latest commit: 24cfa1ef1a "Mark inline vllm as OpenAI unsupported inference"
Author: Ben Browning <bbrownin@redhat.com>
Date: 2025-04-09 15:47:02 -04:00
File             Last commit                                                                  Date
__init__.py      chore: fix typing hints for get_provider_impl deps arguments (#1544)         2025-03-11 10:07:28 -07:00
config.py        test: add unit test to ensure all config types are instantiable (#1601)      2025-03-12 22:29:58 -07:00
openai_utils.py  refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
vllm.py          Mark inline vllm as OpenAI unsupported inference                             2025-04-09 15:47:02 -04:00
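The latest commit marks the inline vLLM provider as not supporting the OpenAI-compatible inference surface. A minimal sketch of that pattern, assuming a provider opts out by raising `NotImplementedError` from the OpenAI-style entry points (all class and method names here are illustrative, not the actual llama-stack API):

```python
# Hypothetical sketch: a mixin whose OpenAI-compatible methods all refuse
# to run, so a provider that inherits it is "OpenAI unsupported" while its
# native inference methods keep working.


class OpenAIUnsupportedMixin:
    """Illustrative mixin; every OpenAI-style entry point raises."""

    def openai_completion(self, *args, **kwargs):
        raise NotImplementedError(
            "OpenAI-compatible completions are not supported by this provider"
        )

    def openai_chat_completion(self, *args, **kwargs):
        raise NotImplementedError(
            "OpenAI-compatible chat completions are not supported by this provider"
        )


class InlineVLLMProvider(OpenAIUnsupportedMixin):
    """Toy stand-in for the inline vLLM provider in vllm.py."""

    def completion(self, prompt: str) -> str:
        # Native (non-OpenAI) inference path still works.
        return f"echo: {prompt}"
```

Callers probing the OpenAI-compatible endpoints get a clear failure instead of silently wrong behavior, while the provider's native path is unaffected.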