llama-stack/llama_stack
Hardik Shah b2ac29b9da
fix provider model list test (#800)
Fixes the provider model list tests for the together, fireworks, and ollama providers.

```
pytest -v -s -k "together or fireworks or ollama" --inference-model="meta-llama/Llama-3.1-8B-Instruct" ./llama_stack/providers/tests/inference/test_text_inference.py 
```
```
...
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_streaming[-together] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling[-together] PASSED
llama_stack/providers/tests/inference/test_text_inference.py::TestInference::test_chat_completion_with_tool_calling_streaming[-together] PASSED

================ 21 passed, 6 skipped, 81 deselected, 5 warnings in 32.11s =================
```

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-01-16 19:27:29 -08:00
apis REST API fixes (#789) 2025-01-16 13:47:08 -08:00
cli Update default port from 5000 -> 8321 2025-01-16 15:26:48 -08:00
distribution meta reference inference fixes (#797) 2025-01-16 18:17:46 -08:00
providers fix provider model list test (#800) 2025-01-16 19:27:29 -08:00
scripts Fix to conda env build script 2024-12-17 12:19:34 -08:00
templates Remove llama-guard in Cerebras template & improve agent test (#798) 2025-01-16 18:11:35 -08:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00