llama-stack-mirror/llama_stack/providers/impls/vllm

Latest commit fe9029f169 by Yuan Tang: Remove request arg from chat completion response processing
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2024-10-11 16:18:40 -04:00
__init__.py    Fix incorrect completion() signature for Databricks provider (#236)    2024-10-11 08:47:57 -07:00
config.py      Inline vLLM inference provider (#181)                                  2024-10-05 23:34:16 -07:00
vllm.py        Remove request arg from chat completion response processing            2024-10-11 16:18:40 -04:00