llama-stack-mirror/llama_stack/providers
Matthew Farrellee 3a9be58523
fix: use ollama list to find models (#1854)
# What does this PR do?

closes #1853 

## Test Plan
```
# Build the Ollama distribution (conda image)
uv run llama stack build --image-type conda --image-name ollama --config llama_stack/templates/ollama/build.yaml

# Pull the model used by the integration tests
ollama pull llama3.2:3b

# Run the text inference integration tests against the running stack
LLAMA_STACK_CONFIG=http://localhost:8321 uv run pytest tests/integration/inference/test_text_inference.py -v --text-model=llama3.2:3b
```
2025-04-09 10:34:26 +02:00
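For context, the fix is about discovering models through Ollama's `list` API rather than assuming a model is already loaded. Below is a minimal, hypothetical sketch of that idea, not the PR's actual diff: it assumes a recent `ollama` Python client where `AsyncClient.list()` returns a response whose `.models` entries expose the tag as `.model` (older clients return a plain dict instead), and the helper name `ensure_model_available` is illustrative only.

```python
# Sketch: verify a model is known to Ollama by consulting `list`.
# Assumes a recent `ollama` Python client; names here are illustrative,
# not the code actually changed by the PR.
import asyncio

from ollama import AsyncClient


async def ensure_model_available(client: AsyncClient, model_id: str) -> None:
    """Raise if `model_id` is not among the models reported by `ollama list`."""
    response = await client.list()
    available = {m.model for m in response.models}
    if model_id not in available:
        raise ValueError(
            f"Model '{model_id}' not found by Ollama; try `ollama pull {model_id}`"
        )


if __name__ == "__main__":
    # Example: check the model used in the test plan above.
    asyncio.run(ensure_model_available(AsyncClient(), "llama3.2:3b"))
```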
| Name | Last commit | Date |
|---|---|---|
| inline | fix: meta reference + llama4 tokenizer fix | 2025-04-09 00:46:32 -07:00 |
| registry | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| remote | fix: use ollama list to find models (#1854) | 2025-04-09 10:34:26 +02:00 |
| tests | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| utils | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |