llama-stack-mirror/tests/integration/inference
Ben Browning ef684ff178 Fix openai_completion tests for ollama
When called via the OpenAI-compatible API, ollama responds with briefer
completions than when called via its native API. This adjusts the
prompting used in the OpenAI-path tests to ask it to be more verbose.
2025-04-09 15:47:02 -04:00
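A minimal sketch of the kind of prompt adjustment the commit describes, not the actual contents of test_openai_completion.py: it calls the OpenAI-compatible completions endpoint and explicitly asks for a multi-sentence answer so a terse ollama reply does not trip the assertion. The base URL, port, and model id below are placeholder assumptions.

```python
# Hypothetical sketch, not the real test; endpoint URL and model id are assumptions.
from openai import OpenAI

# Assumes the stack exposes an OpenAI-compatible endpoint locally.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# Ollama tends to answer tersely over the OpenAI-compatible path, so the
# prompt explicitly requests a longer answer before asserting on the output.
prompt = "Respond in two or three full sentences: why is the sky blue?"

response = client.completions.create(
    model="llama3.2:3b",  # placeholder model id served by ollama
    prompt=prompt,
    max_tokens=128,
)

text = response.choices[0].text
assert len(text.strip()) > 0, "expected a non-empty completion"
```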
__init__.py fix: remove ruff N999 (#1388) 2025-03-07 11:14:04 -08:00
dog.png refactor: tests/unittests -> tests/unit; tests/api -> tests/integration 2025-03-04 09:57:00 -08:00
test_embedding.py refactor: tests/unittests -> tests/unit; tests/api -> tests/integration 2025-03-04 09:57:00 -08:00
test_openai_completion.py Fix openai_completion tests for ollama 2025-04-09 15:47:02 -04:00
test_text_inference.py test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
test_vision_inference.py test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
vision_test_1.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_2.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_3.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00