llama-stack-mirror/llama_stack/providers/inline
ehhuang · 07a992ef90 · 2025-03-03 20:38:07 -08:00
feat: deterministic tools ordering (#1380)
Summary:

1. The `tools` parameter we construct to pass to the inference API is
non-deterministic. As a result, our recordable mocks are flaky because the
ordering sometimes changes. This PR makes the `tools` ordering deterministic
and aligned with the order the user specified (see the first sketch after
this list).
2. In recordable mock key generation, the client tool's parameter type used to
be 'str' and is now 'string'. I didn't dig into exactly why; I just
regenerated the fixtures (the second sketch below shows why such a change
invalidates recorded keys).
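
A minimal sketch of the idea behind point 1 (not the PR's actual code; the
function and parameter names here are hypothetical): build the `tools` list by
iterating the user-specified tool names in order, rather than iterating an
unordered collection of tool definitions.

```python
from typing import Any


def build_tools_param(
    requested_tool_names: list[str],  # order as the user specified it
    tool_defs: dict[str, dict[str, Any]],  # tool name -> tool definition
) -> list[dict[str, Any]]:
    # Iterating the user's list (not tool_defs.keys()) makes the result
    # deterministic and aligned with the requested order.
    return [tool_defs[name] for name in requested_tool_names if name in tool_defs]


defs = {
    "web_search": {"name": "web_search"},
    "get_boiling_point": {"name": "get_boiling_point"},
}
# Output order follows requested_tool_names, regardless of how defs is ordered.
print(build_tools_param(["get_boiling_point", "web_search"], defs))
```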

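For point 2, a hypothetical illustration (assuming the recordable mock keys
hash the canonically serialized request) of why a parameter type name changing
from 'str' to 'string' forces regenerating fixtures:

```python
import hashlib
import json


def mock_key(request: dict) -> str:
    # Key is a hash of the canonically serialized request, so any change in a
    # serialized field value produces a different key.
    return hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()


old = mock_key({"tool_param_type": "str"})
new = mock_key({"tool_param_type": "string"})
print(old == new)  # False: previously recorded responses no longer match
```
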
Test Plan:
Regenerate mocks:
```
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --record-responses
```

Rerun tests without --record-responses:
```
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/client-sdk/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B
```
| Name | Latest commit | Date |
|---|---|---|
| agents | feat: deterministic tools ordering (#1380) | 2025-03-03 20:38:07 -08:00 |
| datasetio | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| eval | fix: replace eval with json decoding (#1327) | 2025-02-28 11:10:45 -08:00 |
| inference | refactor: move generation.py to llama3 | 2025-03-03 13:50:19 -08:00 |
| ios/inference | chore: removed executorch submodule (#1265) | 2025-02-25 21:57:21 -08:00 |
| post_training | fix: replace eval with json decoding for format_adapter (#1328) | 2025-02-28 11:25:23 -08:00 |
| safety | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| scoring | chore(lint): update Ruff ignores for project conventions and maintainability (#1184) | 2025-02-28 09:36:49 -08:00 |
| telemetry | chore: better raise (#1335) | 2025-02-28 16:41:20 -08:00 |
| tool_runtime | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00 |
| vector_io | feat: allow conditionally enabling providers in run.yaml (#1321) | 2025-03-01 11:19:14 -08:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |