The test depends on the Llama model's tool-calling ability. In CI, we run with a small Ollama model. The fix might be to check for either `message` or `function_call`, because the model is flaky and we aren't really testing that behavior?
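To make the intent concrete, here is a minimal sketch of such a relaxed assertion. The helper name and the `content`/`tool_calls` attribute names are assumptions for illustration, not the repo's actual test code:

```python
# A minimal sketch (assumed names, not the repo's actual test helper) of
# relaxing the assertion: accept either a plain assistant message or a
# tool call, since a small Ollama model in CI may not reliably emit
# tool calls, and this test isn't meant to verify that behavior.
def assert_message_or_tool_call(completion_message):
    has_text = bool(getattr(completion_message, "content", None))
    has_tool_call = bool(getattr(completion_message, "tool_calls", None))
    # Pass if the model produced either kind of output.
    assert has_text or has_tool_call, (
        "expected the model to return a message or a tool call, got neither"
    )
```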
Files in this directory:

- `__init__.py`
- `test_agents.py`
- `test_openai_responses.py`
- `test_persistence.py`