llama-stack-mirror/llama_stack/providers/remote
Matthew Farrellee 3a9be58523
fix: use ollama list to find models (#1854)
# What does this PR do?

closes #1853 

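The fix makes the provider discover models from the running Ollama server instead of relying on a static mapping. Below is a minimal sketch of that discovery step, assuming a local Ollama server on its default port (11434); it queries the `GET /api/tags` endpoint, which returns the same data `ollama list` prints. The helper names here are illustrative only and are not the provider's actual code.

```python
# Hypothetical sketch: enumerate the models known to a local Ollama server,
# i.e. the same list that `ollama list` shows. Not the provider's real code.
import requests

OLLAMA_URL = "http://localhost:11434"  # assumption: default Ollama port


def list_ollama_models() -> list[str]:
    """Return model tags reported by the Ollama server, e.g. 'llama3.2:3b'."""
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]


def find_ollama_model(requested: str) -> str | None:
    """Match a requested model id against what the server actually has pulled."""
    available = list_ollama_models()
    if requested in available:
        return requested
    # Ollama treats a bare model name as the ':latest' tag.
    if f"{requested}:latest" in available:
        return f"{requested}:latest"
    return None
```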
## Test Plan
```
uv run llama stack build --image-type conda --image-name ollama --config llama_stack/templates/ollama/build.yaml

ollama pull llama3.2:3b

LLAMA_STACK_CONFIG=http://localhost:8321 uv run pytest tests/integration/inference/test_text_inference.py -v --text-model=llama3.2:3b
```
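As a quick sanity check before running the integration tests, the pulled model can also be confirmed through the stack server itself. This is a hedged sketch using the `llama-stack-client` Python SDK against the same base URL as `LLAMA_STACK_CONFIG` above; the exact attribute names on the returned model objects may differ across SDK versions.

```python
# Sketch: confirm the pulled Ollama model is visible through the running stack
# at http://localhost:8321 before invoking the pytest suite above.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# models.list() returns the models registered with the stack; the 'identifier'
# attribute name is an assumption taken from the SDK and may vary by version.
registered = [m.identifier for m in client.models.list()]
assert any("llama3.2:3b" in ident for ident in registered), registered
```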
2025-04-09 10:34:26 +02:00
| Name | Last commit | Date |
| --- | --- | --- |
| agents | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| datasetio | refactor: extract pagination logic into shared helper function (#1770) | 2025-03-31 13:08:29 -07:00 |
| inference | fix: use ollama list to find models (#1854) | 2025-04-09 10:34:26 +02:00 |
| post_training | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| safety | feat: added nvidia as safety provider (#1248) | 2025-03-17 14:39:23 -07:00 |
| tool_runtime | fix(api): don't return list for runtime tools (#1686) | 2025-04-01 09:53:11 +02:00 |
| vector_io | chore: Updating Milvus Client calls to be non-blocking (#1830) | 2025-03-28 22:14:07 -04:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |