llama-stack-mirror/llama_stack/distribution/routing_tables
Charlie Doern 71caa271ad feat: associated models API with post_training
there are likely scenarios where admins of a stack only want to allow clients to fine-tune certain models, register certain models to be fine-tuned, etc.
introduce the post_training router and post_training_models as the associated type. A different model type needs to be used for inference vs post_training due to the current structure of the router.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-05-30 13:32:11 -04:00
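The commit describes a routing table keyed on a post_training-specific model type, so that admins can restrict which models clients may register for fine-tuning. A minimal sketch of that idea is below; the class and method names (`PostTrainingModel`, `PostTrainingModelsRoutingTable`, `register_model`, the `allowed` set) are illustrative assumptions, not the actual llama-stack API.

```python
# Hypothetical sketch of an allow-listed routing table for fine-tunable
# models; names are illustrative, not the real llama-stack classes.
from dataclasses import dataclass, field


@dataclass
class PostTrainingModel:
    """A model registered for fine-tuning; kept distinct from the
    inference Model type, as the commit message notes."""
    identifier: str
    provider_id: str


@dataclass
class PostTrainingModelsRoutingTable:
    """Routing table that only admits models an admin has allow-listed
    for post_training."""
    allowed: set[str] = field(default_factory=set)
    registry: dict[str, PostTrainingModel] = field(default_factory=dict)

    def register_model(self, model: PostTrainingModel) -> None:
        # reject registration of models not approved for fine-tuning
        if model.identifier not in self.allowed:
            raise PermissionError(
                f"{model.identifier} is not allow-listed for fine-tuning"
            )
        self.registry[model.identifier] = model

    def get_provider(self, identifier: str) -> str:
        # route a post_training request to the owning provider
        return self.registry[identifier].provider_id


table = PostTrainingModelsRoutingTable(allowed={"llama-3-8b"})
table.register_model(PostTrainingModel("llama-3-8b", "torchtune"))
```

Registering a model outside the allow-list raises `PermissionError`, which is the admin-facing behavior the commit message motivates.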
__init__.py chore: split routing_tables into individual files (#2259) 2025-05-24 23:15:05 -07:00
benchmarks.py chore: split routing_tables into individual files (#2259) 2025-05-24 23:15:05 -07:00
common.py feat: associated models API with post_training 2025-05-30 13:32:11 -04:00
datasets.py chore: split routing_tables into individual files (#2259) 2025-05-24 23:15:05 -07:00
models.py feat: associated models API with post_training 2025-05-30 13:32:11 -04:00
post_training_models.py feat: associated models API with post_training 2025-05-30 13:32:11 -04:00
scoring_functions.py chore: split routing_tables into individual files (#2259) 2025-05-24 23:15:05 -07:00
shields.py chore: split routing_tables into individual files (#2259) 2025-05-24 23:15:05 -07:00
toolgroups.py fix: index non-MCP toolgroups at registration time (#2272) 2025-05-26 20:33:36 -07:00
vector_dbs.py chore: split routing_tables into individual files (#2259) 2025-05-24 23:15:05 -07:00