llama-stack-mirror/llama_stack
Charlie Doern 71caa271ad feat: associated models API with post_training
There are likely scenarios where admins of a stack only want to allow clients to fine-tune certain models, register certain models to be fine-tuned, etc.
Introduce the post_training router, with post_training_models as the associated type. A different model type is needed for inference vs. post_training due to the current structure of the router.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-05-30 13:32:11 -04:00
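The commit message above describes routing fine-tune requests only to models an admin has explicitly registered for post-training, with a model type separate from the inference one. A minimal sketch of that pattern is below; the names (`PostTrainingModel`, `PostTrainingRouter`, `supervised_fine_tune`) are illustrative assumptions, not the actual llama-stack API.

```python
# Hypothetical sketch of an "associated models" router: fine-tune calls
# are only dispatched for models registered with the post_training type,
# which is kept distinct from the inference model type.
from dataclasses import dataclass, field


@dataclass
class PostTrainingModel:
    """Model registered specifically for fine-tuning (not for inference)."""
    identifier: str
    provider_id: str


@dataclass
class PostTrainingRouter:
    """Routes fine-tune requests only to admin-registered models."""
    _models: dict = field(default_factory=dict)

    def register_model(self, model: PostTrainingModel) -> None:
        self._models[model.identifier] = model

    def supervised_fine_tune(self, model_id: str) -> str:
        if model_id not in self._models:
            raise ValueError(f"model '{model_id}' is not registered for post_training")
        model = self._models[model_id]
        return f"dispatched fine-tune of {model.identifier} to {model.provider_id}"


router = PostTrainingRouter()
router.register_model(PostTrainingModel("llama-3-8b", provider_id="torchtune"))
print(router.supervised_fine_tune("llama-3-8b"))
```

An unregistered model id raises an error instead of being routed, which is the access-control effect the commit is after.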
Name             Latest commit                                                                           Date
apis             feat: associated models API with post_training                                          2025-05-30 13:32:11 -04:00
cli              fix: handle None external_providers_dir in build with run arg (#2269)                   2025-05-27 09:41:12 +02:00
distribution     feat: associated models API with post_training                                          2025-05-30 13:32:11 -04:00
models           chore: make cprint write to stderr (#2250)                                              2025-05-24 23:39:57 -07:00
providers        feat: associated models API with post_training                                          2025-05-30 13:32:11 -04:00
strong_typing    chore: enable pyupgrade fixes (#1806)                                                   2025-05-01 14:23:50 -07:00
templates        chore: remove dependencies.json (#2281)                                                 2025-05-27 10:26:57 -07:00
ui               feat(ui): add views for Responses (#2293)                                               2025-05-28 09:51:22 -07:00
__init__.py      export LibraryClient                                                                    2024-12-13 12:08:00 -08:00
env.py           refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401)   2025-03-04 14:53:47 -08:00
log.py           chore: make cprint write to stderr (#2250)                                              2025-05-24 23:39:57 -07:00
schema_utils.py  chore: enable pyupgrade fixes (#1806)                                                   2025-05-01 14:23:50 -07:00