llama-stack-mirror/llama_stack/providers/tests/inference
Latest commit: b2ac29b9da by Hardik Shah, "fix provider model list test (#800)"

Fixes the provider tests. Run with:

```
pytest -v -s -k "together or fireworks or ollama" --inference-model="meta-llama/Llama-3.1-8B-Instruct" ./llama_stack/providers/tests/inference/test_text_inference.py 
```
```
...
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_streaming[-together] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling[-together] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling_streaming[-together] PASSED

================ 21 passed, 6 skipped, 81 deselected, 5 warnings in 32.11s =================
```

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-01-16 19:27:29 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| groq | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| __init__.py | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| conftest.py | [test automation] support run tests on config file (#730) | 2025-01-16 12:05:49 -08:00 |
| fixtures.py | [test automation] support run tests on config file (#730) | 2025-01-16 12:05:49 -08:00 |
| pasta.jpeg | Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) | 2024-11-05 16:22:33 -08:00 |
| test_embeddings.py | bug fixes on inference tests (#774) | 2025-01-15 15:39:05 -08:00 |
| test_model_registration.py | [4/n][torchtune integration] support lazy load model during inference (#620) | 2024-12-18 16:30:53 -08:00 |
| test_prompt_adapter.py | [remove import *] clean up import *'s (#689) | 2024-12-27 15:45:44 -08:00 |
| test_text_inference.py | fix provider model list test (#800) | 2025-01-16 19:27:29 -08:00 |
| test_vision_inference.py | bug fixes on inference tests (#774) | 2025-01-15 15:39:05 -08:00 |
| utils.py | Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376) | 2024-11-05 16:22:33 -08:00 |