llama-stack/llama_stack/providers/impls/vllm

Latest commit: 80ada04f76
    Remove request arg from chat completion response processing (#240)
    Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
    2024-10-15 13:03:17 -07:00
__init__.py    Fix incorrect completion() signature for Databricks provider (#236)    2024-10-11 08:47:57 -07:00
config.py      Inline vLLM inference provider (#181)                                  2024-10-05 23:34:16 -07:00
vllm.py        Remove request arg from chat completion response processing (#240)     2024-10-15 13:03:17 -07:00