llama-stack-mirror/llama_stack/distribution/routers
Charlie Doern 71caa271ad feat: associated models API with post_training
There are likely scenarios where admins of a stack want to allow clients to fine-tune only certain models, register only certain models for fine-tuning, etc.
Introduce the post_training router, with post_training_models as its associated model type. Given the current structure of the router, a different model type is needed for post_training than for inference.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-05-30 13:32:11 -04:00
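The idea of associating a separate model type with the post_training router can be sketched as a routing table keyed by model type, so fine-tunable models are registered and resolved independently of inference models. This is a hypothetical, simplified sketch (the names `RoutingTable`, `register_model`, and `get_provider` are illustrative, not the actual llama-stack API):

```python
# Hypothetical sketch: a routing table that keeps a separate route map
# per model type, so post_training registrations are independent of
# inference registrations. Not the real llama-stack implementation.

from dataclasses import dataclass, field
from enum import Enum


class ModelType(str, Enum):
    inference = "inference"
    post_training = "post_training"


@dataclass
class RoutingTable:
    # model type -> (model identifier -> provider id)
    routes: dict = field(default_factory=lambda: {t: {} for t in ModelType})

    def register_model(self, model_id: str, provider_id: str, model_type: ModelType) -> None:
        # Admins could restrict which models may be registered for fine-tuning
        # by filtering calls made with ModelType.post_training.
        self.routes[model_type][model_id] = provider_id

    def get_provider(self, model_id: str, model_type: ModelType) -> str:
        try:
            return self.routes[model_type][model_id]
        except KeyError:
            raise ValueError(f"{model_id} is not registered for {model_type.value}")


# The same model identifier can route to different providers per type:
table = RoutingTable()
table.register_model("llama-3-8b", "ollama", ModelType.inference)
table.register_model("llama-3-8b", "torchtune", ModelType.post_training)
```

With this shape, the inference router only ever consults the `inference` map and the post_training router only the `post_training` map, which is one way to realize "a different model type for inference vs post_training" within a shared routing structure.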
__init__.py feat: associated models API with post_training 2025-05-30 13:32:11 -04:00
datasets.py chore: split routers into individual files (datasets) (#2249) 2025-05-24 22:11:43 -07:00
eval_scoring.py chore: split routers into individual files (inference, tool, vector_io, eval_scoring) (#2258) 2025-05-24 22:59:07 -07:00
inference.py chore: split routers into individual files (inference, tool, vector_io, eval_scoring) (#2258) 2025-05-24 22:59:07 -07:00
post_training.py feat: associated models API with post_training 2025-05-30 13:32:11 -04:00
safety.py chore: split routers into individual files (safety) 2025-05-24 22:00:32 -07:00
tool_runtime.py fix(tools): do not index tools, only index toolgroups (#2261) 2025-05-25 13:27:52 -07:00
vector_io.py chore: split routers into individual files (inference, tool, vector_io, eval_scoring) (#2258) 2025-05-24 22:59:07 -07:00