llama-stack-mirror/llama_stack/providers/inline/post_training/torchtune
Charlie Doern 0ec5151ab5 feat: add post_training RuntimeConfig
Certain APIs require a number of runtime arguments per provider. Currently the best way to pass these arguments in is via the provider config. This is awkward because it requires a provider to be pre-configured with arguments that a client-side user should be able to pass in at runtime.

Especially with the advent of out-of-tree providers, a generic RuntimeConfig class would let providers add and validate their own runtime arguments for operations like supervised_fine_tune.

For example, https://github.com/opendatahub-io/llama-stack-provider-kft puts fields like `input-pvc`, `model-path`, etc. in the Provider Config.
This is not sustainable, nor is adding each and every such field to the post_training API spec. RuntimeConfig has a sub-class called Config which allows extra fields to be specified arbitrarily. It is the provider's job to create its own class based on this one, add valid options, parse them, etc.
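A minimal sketch of the idea, assuming a pydantic base model whose Config sub-class permits extra fields; the subclass name and its fields below are illustrative, not the actual implementation:

```python
from pydantic import BaseModel


class RuntimeConfig(BaseModel):
    """Base class for per-provider runtime arguments.

    Providers derive their own class from this one, declare the options
    they accept, and validate/parse any extra fields themselves.
    """

    class Config:
        extra = "allow"  # keep arbitrary extra fields passed in at runtime


class ExampleProviderRuntimeConfig(RuntimeConfig):
    # Hypothetical provider-specific options, e.g. the kind of fields the
    # opendatahub kft provider currently keeps in its Provider Config.
    input_pvc: str | None = None
    model_path: str | None = None
```

A client could then pass these options per request instead of baking them into the provider config, and each provider decides which extras it recognizes.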

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-04-26 10:47:29 -04:00
common refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
datasets chore: fix mypy violations in post_training modules (#1548) 2025-03-18 14:58:16 -07:00
recipes feat: make training config fields optional (#1861) 2025-04-12 01:13:45 -07:00
__init__.py chore: fix typing hints for get_provider_impl deps arguments (#1544) 2025-03-11 10:07:28 -07:00
config.py test: add unit test to ensure all config types are instantiable (#1601) 2025-03-12 22:29:58 -07:00
post_training.py feat: add post_training RuntimeConfig 2025-04-26 10:47:29 -04:00