llama-stack-mirror/llama_stack/providers/inline/agents/meta_reference
ehhuang 07a992ef90
feat: deterministic tools ordering (#1380)
Summary:

1. The `tools` parameter we construct to pass to the inference API is
non-deterministic. As a result, our recordable mocks are flaky because the
ordering sometimes changes. This PR makes the `tools` ordering
deterministic and aligned with the order the user specified (see the sketch
after this list).
2. In recordable mock key generation, the client tool's parameter type was
'str' and is now 'string'. I didn't dig into exactly why; I just
regenerated the fixtures.
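
For illustration only, a minimal sketch of what "deterministic and aligned with the user-specified order" means: build the `tools` list by iterating over tool definitions in the order the user supplied them and deduplicating by name, rather than iterating over an unordered set or dict. `ToolDef` and `build_tools_param` below are hypothetical stand-ins, not the actual code in `agent_instance.py`.

```
from dataclasses import dataclass


@dataclass
class ToolDef:
    tool_name: str


def build_tools_param(
    toolgroup_tools: list[ToolDef], client_tools: list[ToolDef]
) -> list[ToolDef]:
    """Hypothetical helper: preserve user-specified ordering and drop duplicates."""
    tools: list[ToolDef] = []
    seen: set[str] = set()
    # Walk the lists in the order the user supplied them (never a set/dict),
    # so the resulting `tools` list is identical across runs.
    for tool in [*toolgroup_tools, *client_tools]:
        if tool.tool_name not in seen:
            seen.add(tool.tool_name)
            tools.append(tool)
    return tools


# A stable ordering keeps recordable-mock keys derived from `tools` stable too.
ordered = build_tools_param(
    [ToolDef("web_search"), ToolDef("code_interpreter")],
    [ToolDef("get_boiling_point")],
)
assert [t.tool_name for t in ordered] == [
    "web_search",
    "code_interpreter",
    "get_boiling_point",
]
```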

Test Plan:
Regenerate mocks:
```
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --record-responses
```

Rerun the tests without `--record-responses`:
```
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B
```
2025-03-03 20:38:07 -08:00
| File | Last commit | Date |
|---|---|---|
| tests | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| __init__.py | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| agent_instance.py | feat: deterministic tools ordering (#1380) | 2025-03-03 20:38:07 -08:00 |
| agents.py | feat: ability to retrieve agents session, turn, step by ids (#1286) | 2025-02-27 09:45:14 -08:00 |
| config.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| persistence.py | feat: unify max_infer_iters in client/server agent loop (#1309) | 2025-03-03 10:08:36 -08:00 |
| safety.py | build: configure ruff from pyproject.toml (#1100) | 2025-02-14 09:01:57 -08:00 |