llama-stack-mirror/llama_stack/core/routers
Nathan Weinberg ff8942bc71 refactor: standardize InferenceRouter model handling
* introduces ModelTypeError custom exception class
* introduces _get_model private method in InferenceRouter class
* standardizes inconsistent variable name usage for models in InferenceRouter class
* removes unneeded model type check in ollama provider

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
2025-08-12 04:49:43 -04:00
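The bullet points in the commit message describe the shape of the change without showing code. Below is a minimal sketch of the pattern: the names ModelTypeError, InferenceRouter, and _get_model come from the commit message, while everything else (the ModelType enum values, the constructor arguments, the routing_table lookup) is assumed for illustration and is not the actual contents of inference.py.

```python
# Illustrative sketch only: ModelTypeError, InferenceRouter, and _get_model are the
# names from the commit message; the rest is assumed for the example, not the real code.
from enum import Enum


class ModelType(str, Enum):
    llm = "llm"
    embedding = "embedding"


class ModelNotFoundError(ValueError):
    def __init__(self, model_id: str) -> None:
        super().__init__(f"Model '{model_id}' not found")


class ModelTypeError(ValueError):
    """Raised when a registered model has a different type than the caller expects."""

    def __init__(self, model_id: str, actual_type: str, expected_type: str) -> None:
        super().__init__(
            f"Model '{model_id}' is of type '{actual_type}', expected '{expected_type}'"
        )


class InferenceRouter:
    def __init__(self, routing_table: dict) -> None:
        # Stand-in for the real routing table: maps model_id -> model record.
        self.routing_table = routing_table

    async def _get_model(self, model_id: str, expected_type: ModelType) -> dict:
        """Look up a model once and validate its type, instead of repeating the
        lookup and check in every inference endpoint."""
        model = self.routing_table.get(model_id)
        if model is None:
            raise ModelNotFoundError(model_id)
        if model["model_type"] != expected_type:
            raise ModelTypeError(model_id, model["model_type"], expected_type)
        return model
```

With a helper like this, each endpoint would call something like `await self._get_model(model_id, ModelType.llm)` (or `ModelType.embedding`) rather than repeating the lookup and type check inline, which is presumably also why a duplicate model type check in the ollama provider could be removed.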
__init__.py       chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
datasets.py       chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
eval_scoring.py   chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
inference.py      refactor: standardize InferenceRouter model handling                        2025-08-12 04:49:43 -04:00
safety.py         feat: Add moderations create api (#3020)                                    2025-08-06 13:51:23 -07:00
tool_runtime.py   chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00
vector_io.py      chore(rename): move llama_stack.distribution to llama_stack.core (#2975)   2025-07-30 23:30:53 -07:00