llama-stack-mirror/tests/integration/inference
Ben Browning da2d39a836 Handle chunks with null text in test_openai_completion.py
This updates test_openai_completion.py to allow chunks with null text
in streaming responses, since a null-text chunk is valid; I originally
wrote the test without accounting for this.

This is required to get this test passing with gpt-4o models and the
openai provider.
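The fix described above boils down to tolerating `None` text when accumulating streamed completion chunks. A minimal sketch of that pattern, using hypothetical stand-in chunk objects rather than the real test's client types:

```python
# Sketch: when joining streamed completion chunks, a chunk's text field
# may be None (a valid chunk), so skip it instead of failing.
# The Chunk/Choice classes here are illustrative stand-ins, not the
# actual OpenAI client types used by test_openai_completion.py.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Choice:
    text: Optional[str]


@dataclass
class Chunk:
    choices: List[Choice]


def accumulate_text(chunks: List[Chunk]) -> str:
    """Join streamed chunk text, tolerating chunks whose text is None."""
    parts = []
    for chunk in chunks:
        for choice in chunk.choices:
            if choice.text is not None:  # null-text chunks are valid; skip
                parts.append(choice.text)
    return "".join(parts)


stream = [Chunk([Choice("Hello")]), Chunk([Choice(None)]), Chunk([Choice(" world")])]
print(accumulate_text(stream))  # → "Hello world"
```

The key point is that the assertion in the test should run against the joined text, not against each individual chunk's text field.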

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-13 13:39:56 -04:00
File                        Last commit message                                                        Date
__init__.py                 fix: remove ruff N999 (#1388)                                              2025-03-07 11:14:04 -08:00
dog.png                     refactor: tests/unittests -> tests/unit; tests/api -> tests/integration    2025-03-04 09:57:00 -08:00
test_batch_inference.py     feat: add batch inference API to llama stack inference (#1945)             2025-04-12 11:41:12 -07:00
test_embedding.py           refactor: tests/unittests -> tests/unit; tests/api -> tests/integration    2025-03-04 09:57:00 -08:00
test_openai_completion.py   Handle chunks with null text in test_openai_completion.py                  2025-04-13 13:39:56 -04:00
test_text_inference.py      fix: misc fixes for tests kill horrible warnings                           2025-04-12 17:12:11 -07:00
test_vision_inference.py    test: verification on provider's OAI endpoints (#1893)                     2025-04-07 23:06:28 -07:00
vision_test_1.jpg           feat: introduce llama4 support (#1877)                                     2025-04-05 11:53:35 -07:00
vision_test_2.jpg           feat: introduce llama4 support (#1877)                                     2025-04-05 11:53:35 -07:00
vision_test_3.jpg           feat: introduce llama4 support (#1877)                                     2025-04-05 11:53:35 -07:00