# What does this PR do?

Closes #2111.

Fixes an error causing Llama Stack to return a bare `<tool_call>` and complete the turn without actually executing the tool. See the issue description for more detail.

## Test Plan

1. Ran the existing unit tests.
2. Added a dedicated test verifying correct behavior in this edge case (a sketch of the scenario is included below).
3. Ran the code snapshot from #2111.
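The following is a minimal, self-contained sketch of the edge case the new test covers: model output consisting of a bare `<tool_call>` tag must not be passed back to the user as assistant text, while a well-formed tag with a JSON payload should be parsed into a tool invocation. The names used here (`parse_tool_call`, `ParsedToolCall`) are hypothetical and do not mirror the actual provider code in `vllm.py`.

```python
import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class ParsedToolCall:
    """Hypothetical container for a parsed tool invocation."""
    tool_name: str
    arguments: dict


def parse_tool_call(content: str) -> Optional[ParsedToolCall]:
    """Parse a `<tool_call>` block out of raw model output.

    Returns None when the content carries no usable payload, so the
    caller can decide how to finish the turn instead of echoing the
    bare tag back to the user.
    """
    stripped = content.strip()
    if not stripped.startswith("<tool_call>"):
        return None

    payload = stripped.removeprefix("<tool_call>").removesuffix("</tool_call>").strip()
    if not payload:
        # The edge case from #2111: the model emitted the tag with no body.
        return None

    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return None

    return ParsedToolCall(tool_name=data.get("name", ""), arguments=data.get("arguments", {}))


def test_bare_tool_call_tag_is_not_treated_as_text():
    # A bare tag must not be surfaced as assistant text that ends the turn.
    assert parse_tool_call("<tool_call>") is None


def test_well_formed_tool_call_is_parsed():
    raw = '<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'
    call = parse_tool_call(raw)
    assert call is not None
    assert call.tool_name == "get_weather"
    assert call.arguments == {"city": "Paris"}
```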