llama-stack/llama_stack/providers
ehhuang 6cf79437b3
feat: support ClientTool output metadata (#1426)
# Summary:
Companion client-side change:
https://github.com/meta-llama/llama-stack-client-python/pull/180

Changes the resume_turn API to accept `ToolResponse` instead of
`ToolResponseMessage`:
1. `ToolResponse` carries `metadata`, which `ToolResponseMessage` does not.
2. `ToolResponseMessage` is a concept for model inputs; here we are only
submitting the outputs of tool execution.
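The distinction can be sketched with minimal stand-in types (field names here are illustrative assumptions, not the actual llama-stack definitions):

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical sketch only -- not the real llama-stack classes.

@dataclass
class ToolResponseMessage:
    # Model-input concept: just the text the model will see.
    call_id: str
    content: str

@dataclass
class ToolResponse:
    # Tool-execution output: carries metadata alongside the content,
    # so the caller can pass extra context back when resuming a turn.
    call_id: str
    content: str
    metadata: dict[str, Any] = field(default_factory=dict)

# With this change, a resume_turn-style API can receive the metadata
# produced by a client tool instead of losing it:
resp = ToolResponse(
    call_id="call_1",
    content="42",
    metadata={"source": "client_tool"},
)
```

Under this sketch, the metadata survives the round trip to the server, which is the point of accepting `ToolResponse` directly.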

# Test Plan:
Ran the integration tests, including a newly added test that uses a client
tool with metadata:

```shell
LLAMA_STACK_CONFIG=fireworks pytest -s -v \
  tests/integration/agents/test_agents.py \
  --safety-shield meta-llama/Llama-Guard-3-8B \
  --record-responses
```
2025-03-05 14:30:27 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| inline | feat: support ClientTool output metadata (#1426) | 2025-03-05 14:30:27 -08:00 |
| registry | fix: groq now depends on litellm | 2025-02-27 14:07:12 -08:00 |
| remote | fix: Gracefully handle no choices in remote vLLM response (#1424) | 2025-03-05 15:07:54 -05:00 |
| tests | chore: Make README code blocks more easily copy pastable (#1420) | 2025-03-05 09:11:01 -08:00 |
| utils | refactor(test): unify vector_io tests and make them configurable (#1398) | 2025-03-04 13:37:45 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |