llama-stack-mirror/llama_stack/providers/remote/inference/vllm
Daniel Alvarez 538d601472 Do not send an empty 'tools' param to remote vllm
Fixes: #1955

Since 0.2.0, vLLM receives an empty list (vs ``None`` in 0.1.9 and
before) when no tools are configured, which causes the issue described
in #1955. This patch omits the 'tools' param from the request to vLLM
altogether instead of sending an empty list.

It also adds a small unit test to avoid regressions.

Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
2025-04-15 18:02:21 +02:00
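
A minimal sketch of the pattern the commit describes, not the actual vllm.py code: the request params only include a "tools" key when tools are actually configured. The names `build_request_params` and `test_tools_param_omitted_when_empty` are hypothetical, chosen for illustration only.

```python
from typing import Any


def build_request_params(
    model: str,
    messages: list[dict],
    tools: list[dict] | None,
) -> dict[str, Any]:
    """Build params for an OpenAI-compatible chat completion request (illustrative)."""
    params: dict[str, Any] = {
        "model": model,
        "messages": messages,
    }
    # Omit "tools" entirely when there are none: sending tools=[] triggers
    # the behavior described in #1955, whereas leaving the key out matches
    # the pre-0.2.0 tools=None behavior.
    if tools:
        params["tools"] = tools
    return params


def test_tools_param_omitted_when_empty() -> None:
    # Regression check in the spirit of the unit test mentioned in the
    # commit message: neither None nor an empty list should produce a
    # "tools" key in the outgoing params.
    assert "tools" not in build_request_params("my-model", [], None)
    assert "tools" not in build_request_params("my-model", [], [])
    assert "tools" in build_request_params("my-model", [], [{"type": "function"}])
```
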
__init__.py Fix precommit check after moving to ruff (#927) 2025-02-02 06:46:45 -08:00
config.py fix: Add the option to not verify SSL at remote-vllm provider (#1585) 2025-03-18 09:33:35 -04:00
vllm.py Do not send an empty 'tools' param to remote vllm 2025-04-15 18:02:21 +02:00