The test depends on Llama's tool-calling ability. In CI we run with a small Ollama model, which is flaky about whether it emits a tool call at all, and this test isn't really meant to pin down that behavior. A possible fix is to accept either a plain message or a `function_call` in the response; a sketch of that relaxed check follows.
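A minimal sketch of such an assertion, assuming a pytest-style test. The `output_message`, `content`, and `tool_calls` attribute names are illustrative placeholders for the real response schema, not the actual llama-stack API:

```python
def assert_message_or_tool_call(response):
    """Pass if the model produced either plain text or a tool call.

    Small models in CI are unreliable about emitting tool calls, and
    the test is not meant to verify that behavior, so either outcome
    is acceptable. Attribute names here are assumptions about the
    response object, not the real schema.
    """
    message = response.output_message
    has_text = bool(getattr(message, "content", None))
    has_tool_call = bool(getattr(message, "tool_calls", None))
    assert has_text or has_tool_call, (
        "expected the model to return either a text message or a tool call"
    )
```

This keeps the test focused on "the agent produced a usable response" rather than on which of the two forms a flaky small model happened to choose.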
Test files in this directory:

- `__init__.py`
- `test_agents.py`
- `test_openai_responses.py`
- `test_persistence.py`