llama-stack-mirror/tests/integration/inference
Ashwin Bharambe 66f09f24ed
fix: disable test_responses_store (#2244)
The test depends on Llama's tool-calling ability, and in CI we run with a
small Ollama model.

A possible fix is to check for either a message or a function_call in the
response, since the small model is flaky and tool calling is not really the
behavior under test (as sketched below).
2025-05-24 08:18:06 -07:00
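
If that route is taken, the check could look roughly like the following sketch (a hypothetical helper, assuming an OpenAI-style chat completion object; the name assert_replied_or_called_tool is not from this repo):

    # Hypothetical sketch, not the repo's actual test code: pass if the model
    # either answered in plain text or emitted a tool/function call, so a
    # flaky small model is accepted either way.
    def assert_replied_or_called_tool(chat_completion):
        message = chat_completion.choices[0].message
        replied = bool(message.content)
        called_tool = bool(
            getattr(message, "tool_calls", None)
            or getattr(message, "function_call", None)
        )
        assert replied or called_tool, "expected assistant text or a tool call"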
Name    Last commit    Date
__init__.py fix: remove ruff N999 (#1388) 2025-03-07 11:14:04 -08:00
dog.png refactor: tests/unittests -> tests/unit; tests/api -> tests/integration 2025-03-04 09:57:00 -08:00
test_batch_inference.py feat: add batch inference API to llama stack inference (#1945) 2025-04-12 11:41:12 -07:00
test_embedding.py refactor: tests/unittests -> tests/unit; tests/api -> tests/integration 2025-03-04 09:57:00 -08:00
test_openai_completion.py fix: disable test_responses_store (#2244) 2025-05-24 08:18:06 -07:00
test_text_inference.py fix: llama4 tool use prompt fix (#2103) 2025-05-06 22:18:31 -07:00
test_vision_inference.py test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
vision_test_1.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_2.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_3.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00