llama-stack-mirror/llama_stack/providers
Charlie Doern 0ec5151ab5 feat: add post_training RuntimeConfig
Certain APIs require a number of per-provider runtime arguments. Currently, the best way to pass these arguments in is via the
provider config. This is tricky because it requires a provider to be pre-configured with arguments that a client-side user should be able to pass in at runtime.

Especially with the advent of out-of-tree providers, it would be great to have a generic RuntimeConfig class that allows providers to add and validate their own runtime arguments for operations like supervised_fine_tune.

For example, https://github.com/opendatahub-io/llama-stack-provider-kft has fields like `input-pvc`, `model-path`, etc. in its provider config.
This is not sustainable, nor is adding each and every needed field to the post_training API spec. RuntimeConfig has a sub-class called Config which allows extra fields to be specified arbitrarily. It is the provider's job to subclass it, add its valid options, and parse them.
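The pattern described above can be sketched roughly as follows. This is a hypothetical illustration, assuming a pydantic-based model as used elsewhere in llama-stack; `KFTRuntimeConfig` and its fields are invented here for the example and are not part of the actual change:

```python
from pydantic import BaseModel


class RuntimeConfig(BaseModel):
    """Generic base for per-request provider runtime arguments (sketch)."""

    class Config:
        extra = "allow"  # arbitrary provider-specific fields are accepted


# An out-of-tree provider subclasses RuntimeConfig to declare and
# validate the options it actually understands.
class KFTRuntimeConfig(RuntimeConfig):
    input_pvc: str
    model_path: str


# Declared fields are validated; undeclared ones (here "epochs") pass
# through as extra fields instead of being rejected.
cfg = KFTRuntimeConfig(input_pvc="pvc-1", model_path="/models/llama", epochs=3)
```

A client can thus supply provider-specific options at request time without the provider having been pre-configured with them, and without every such field being added to the post_training API spec.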

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-04-26 10:47:29 -04:00
inline feat: add post_training RuntimeConfig 2025-04-26 10:47:29 -04:00
registry feat: Add watsonx inference adapter (#1895) 2025-04-25 11:29:21 -07:00
remote fix: Correctly parse algorithm_config when launching NVIDIA customization job; fix internal request handler (#2025) 2025-04-25 13:21:50 -07:00
tests refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
utils feat: new system prompt for llama4 (#2031) 2025-04-25 11:29:08 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py feat: add health to all providers through providers endpoint (#1418) 2025-04-14 11:59:36 +02:00