Fixes provider tests.

```
pytest -v -s -k "together or fireworks or ollama" --inference-model="meta-llama/Llama-3.1-8B-Instruct" ./llama_stack/providers/tests/inference/test_text_inference.py
```

```
...
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_streaming[-together] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling[-together] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling_streaming[-together] PASSED
================ 21 passed, 6 skipped, 81 deselected, 5 warnings in 32.11s =================
```

Co-authored-by: Hardik Shah <hjshah@fb.com>
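For context on the command above: pytest's `-k` expression matches substrings of test ids, including parametrization suffixes such as the `[-together]` seen in the output, which is how a single expression selects the together, fireworks, and ollama providers at once. Below is a minimal, self-contained sketch of how provider-parametrized tests produce such ids; it is illustrative only and not the actual llama_stack fixture code (the fixture name `provider` and the test body are assumptions).

```python
# minimal_provider_params.py -- illustrative sketch, not llama_stack's real fixtures.
import pytest

# Parametrizing a fixture embeds each param in the test id,
# e.g. test_chat_completion[together], which an expression like
# `-k "together or fireworks or ollama"` then matches.
@pytest.fixture(params=["together", "fireworks", "ollama"])
def provider(request):
    return request.param

def test_chat_completion(provider):
    # A real test would exercise the provider's inference endpoint here.
    assert provider in {"together", "fireworks", "ollama"}
```

Running `pytest -v -k "together or ollama" minimal_provider_params.py` would execute only the matching parametrizations and report the rest as deselected, mirroring the `81 deselected` count in the output above.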