llama-stack-mirror/llama_stack/providers
ehhuang 07a992ef90
feat: deterministic tools ordering (#1380)
Summary:

1. The `tools` parameter we construct to pass to the inference API is
non-deterministic. As a result, our recordable mocks are flaky, because the
ordering sometimes changes. This PR makes the `tools` ordering
deterministic and aligned with the order the user specified.
2. In recordable mock key generation, the client tool's parameter type was
'str' and is now 'string'. I didn't dig into exactly why; I just
regenerated the fixtures.
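The fix for point 1 can be illustrated with a minimal sketch (hypothetical function and field names, not the actual Llama Stack code): instead of collecting tools in a set, which has no stable iteration order, build them in a dict keyed by tool name. Python dicts preserve insertion order, so the resulting list is deterministic and follows the order the user specified.

```python
def build_tools_param(toolgroup_tools, client_tools):
    """Sketch: merge tool definitions while preserving user-specified order.

    Using a dict (insertion-ordered in Python 3.7+) instead of a set makes
    the output deterministic across runs, which keeps recorded mock keys
    stable. Names here are illustrative, not the real implementation.
    """
    tools = {}
    for name in toolgroup_tools:
        tools[name] = {"name": name, "source": "toolgroup"}
    for name in client_tools:
        # setdefault keeps the first (earlier-specified) definition
        tools.setdefault(name, {"name": name, "source": "client"})
    return list(tools.values())
```

Because mock recordings are keyed on the serialized request, any nondeterminism in this list changes the key and causes replay misses — ordering by user specification removes that flakiness.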

Test Plan:
Regenerate mocks:
```
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --record-responses
```

Rerun tests without `--record-responses`:
```
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B
```
2025-03-03 20:38:07 -08:00
| Name | Last commit | Date |
|---|---|---|
| .. | | |
| inline | feat: deterministic tools ordering (#1380) | 2025-03-03 20:38:07 -08:00 |
| registry | fix: groq now depends on litellm | 2025-02-27 14:07:12 -08:00 |
| remote | feat: add a configurable category-based logger (#1352) | 2025-03-02 18:51:14 -08:00 |
| tests | refactor: move more tests, delete some providers tests (#1382) | 2025-03-03 20:28:34 -08:00 |
| utils | feat: better using get_default_tool_prompt_format (#1360) | 2025-03-03 14:50:06 -08:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |