llama-stack-mirror/llama_stack/providers/remote/inference/ollama
Nathan Weinberg ff8942bc71 refactor: standardize InferenceRouter model handling
* introduces a ModelTypeError custom exception class
* introduces a _get_model private method in the InferenceRouter class (see the sketch after this commit entry)
* standardizes inconsistent variable naming for models in the InferenceRouter class
* removes the now-unneeded model type check in the ollama provider

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
2025-08-12 04:49:43 -04:00
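The bullets above describe centralizing model lookup and type validation in the router so that providers such as the ollama adapter no longer repeat the check. The sketch below is only an illustration of that shape, not the actual llama_stack code: the `Model` dataclass, its `model_type` field, the routing-table dict, and the `chat_completion` signature are assumptions for this example, and the real router is async and considerably larger.

```python
# Hypothetical sketch of the pattern described in the commit bullets;
# the real llama_stack classes and signatures may differ.
from dataclasses import dataclass
from enum import Enum


class ModelType(str, Enum):
    llm = "llm"
    embedding = "embedding"


@dataclass
class Model:
    identifier: str
    model_type: ModelType


class ModelTypeError(ValueError):
    """Raised when a model exists but is not of the expected type."""

    def __init__(self, model_id: str, actual: ModelType, expected: ModelType) -> None:
        super().__init__(
            f"Model '{model_id}' is of type '{actual.value}', expected '{expected.value}'"
        )


class InferenceRouter:
    def __init__(self, models: dict[str, Model]) -> None:
        # Assumed routing table keyed by model identifier.
        self._models = models

    def _get_model(self, model_id: str, expected_type: ModelType) -> Model:
        """Single lookup-and-validate helper shared by the inference entry points."""
        model = self._models.get(model_id)
        if model is None:
            raise ValueError(f"Model '{model_id}' not found")
        if model.model_type != expected_type:
            raise ModelTypeError(model_id, model.model_type, expected_type)
        return model

    def chat_completion(self, model_id: str, prompt: str) -> str:
        # The type check now lives in the router, so downstream providers
        # (e.g. the ollama adapter) no longer need to repeat it.
        model = self._get_model(model_id, ModelType.llm)
        return f"[{model.identifier}] would answer: {prompt!r}"
```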
File          Last commit                                                                Date
__init__.py   fix: Ollama should be optional in starter distro (#2482)                  2025-06-25 15:54:00 +02:00
config.py     feat(registry): make the Stack query providers for model listing (#2862)  2025-07-24 10:39:53 -07:00
models.py     fix: Safety in starter (#2731)                                             2025-07-14 15:07:40 -07:00
ollama.py     refactor: standardize InferenceRouter model handling                       2025-08-12 04:49:43 -04:00