llama-stack/llama_stack/providers
Derek Higgins c8797f1125
fix: Including tool call in chat (#1931)
Include the tool call details in the chat when doing RAG with a remote vLLM provider.

Fixes: #1929

With this PR the tool call is included in the chat returned to vLLM, and the
model (meta-llama/Llama-3.1-8B-Instruct) then returns the answer as expected.

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-04-24 16:59:10 -07:00
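For context, the gist of the fix is message shape: when a RAG tool call happens, the assistant message carrying the tool call has to stay in the chat history sent to the remote vLLM server; otherwise the tool result arrives with no matching call and the model cannot ground its answer. Below is a minimal sketch of that message shape, assuming an OpenAI-compatible vLLM endpoint. The URL, tool name, and tool result are illustrative only, not the actual llama-stack code.

```python
# Hypothetical sketch, not llama-stack code: shows the assistant tool-call message
# kept in the history alongside its tool result, as sent to a vLLM OpenAI endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # example endpoint

messages = [
    {"role": "user", "content": "What does the knowledge base say about vLLM?"},
    # The assistant message that issued the tool call. If this message is dropped,
    # the "tool" result below has no matching call and the model answers poorly.
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "knowledge_search",          # illustrative tool name
                    "arguments": '{"query": "vLLM"}',
                },
            }
        ],
    },
    # The RAG result returned for the tool call above (illustrative content).
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "vLLM is a high-throughput inference and serving engine for LLMs.",
    },
]

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=messages,
)
print(response.choices[0].message.content)
```

Whether the server accepts the tool-role messages depends on the served model's chat template; the point here is only that the tool call and its result travel together in the history.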
Name          Last commit date            Last commit message
inline        2025-04-24 13:03:35 -07:00  fix: meta ref inference (#2022)
registry      2025-04-10 13:39:20 -07:00  fix: use torchao 0.8.0 for inference (#1925)
remote        2025-04-23 15:33:19 +02:00  fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969)
tests         2025-04-07 15:03:58 -07:00  refactor: move all llama code to models/llama out of meta reference (#1887)
utils         2025-04-24 16:59:10 -07:00  fix: Including tool call in chat (#1931)
__init__.py   2024-09-17 19:51:35 -07:00  API Updates (#73)
datatypes.py  2025-04-14 11:59:36 +02:00  feat: add health to all providers through providers endpoint (#1418)