The test depends on Llama's tool-calling ability. In CI, we run against a small Ollama model, which is flaky about choosing to emit a tool call. Since tool-calling behavior is not what this test is really exercising, the fix might be to accept either a plain message or a function_call in the response.
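A minimal sketch of what that relaxed assertion could look like, assuming a pytest-style helper and a hypothetical response shape with a `completion_message` carrying optional `content` and `tool_calls` fields (the names here are illustrative, not the exact llama-stack API):

```python
# Hypothetical relaxed check: pass if the model returned either plain
# text content or a tool/function call. The small CI model is flaky
# about picking tool calling, and that choice is not what this test
# is meant to verify.
def assert_message_or_tool_call(response):
    message = response.completion_message  # assumed response attribute
    has_text = bool(getattr(message, "content", None))
    has_tool_call = bool(getattr(message, "tool_calls", None))
    assert has_text or has_tool_call, (
        "Expected either text content or a tool call in the response, "
        f"got neither: {message!r}"
    )
```

A test would then call this helper instead of asserting on `tool_calls` alone, so a run where the model answers in plain text still passes.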
Files in this directory:

- __init__.py
- dog.png
- test_batch_inference.py
- test_embedding.py
- test_openai_completion.py
- test_text_inference.py
- test_vision_inference.py
- vision_test_1.jpg
- vision_test_2.jpg
- vision_test_3.jpg