llama-stack-mirror/tests/unit/providers/utils
Derek Higgins c8797f1125
fix: Including tool call in chat (#1931)
Include the tool call details with the chat when doing RAG with remote
vLLM.

Fixes: #1929

With this PR the tool call is included in the chat returned to vLLM,
and the model (meta-llama/Llama-3.1-8B-Instruct) then returns the
answer as expected.
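
The fix described above can be illustrated with a minimal sketch (hypothetical code, not the actual llama-stack implementation; the tool name `knowledge_search` and the helper below are assumptions): when replaying a RAG conversation to an OpenAI-compatible endpoint such as remote vLLM, the assistant turn that triggered the tool must carry its `tool_calls` entry, otherwise the following `tool` message has no call to refer to and the model cannot use the retrieved context.

```python
import json


def build_chat_with_tool_call(user_query: str, tool_output: str) -> list[dict]:
    """Assemble an OpenAI-style chat history that includes the
    assistant's tool call alongside the tool result (sketch only)."""
    tool_call = {
        "id": "call_0",  # illustrative id
        "type": "function",
        "function": {
            "name": "knowledge_search",  # hypothetical RAG tool name
            "arguments": json.dumps({"query": user_query}),
        },
    }
    return [
        {"role": "user", "content": user_query},
        # Without this assistant message, the server cannot associate
        # the tool result below with any call, which is the failure
        # mode the PR addresses.
        {"role": "assistant", "content": None, "tool_calls": [tool_call]},
        {"role": "tool", "tool_call_id": "call_0", "content": tool_output},
    ]


messages = build_chat_with_tool_call("What is Llama Stack?", "Llama Stack is ...")
print(messages[1]["tool_calls"][0]["function"]["name"])
```

The key point is the middle message: dropping it (the pre-fix behavior) leaves a dangling `tool` message, so the model does not ground its answer in the retrieved content.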

Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-04-24 16:59:10 -07:00
inference fix: Including tool call in chat (#1931) 2025-04-24 16:59:10 -07:00
test_scheduler.py feat: Implement async job execution for torchtune training (#1437) 2025-04-14 08:59:11 -07:00