llama-stack-mirror/llama_stack/apis
Charlie Doern 0ec5151ab5 feat: add post_training RuntimeConfig
Certain APIs require a number of runtime arguments per provider. Currently the best way to pass these arguments in is via the
provider config. This is tricky because it requires a provider to be pre-configured with arguments that a client-side user should be able to pass in at runtime.

Especially with the advent of out-of-tree providers, it would be great to have a generic RuntimeConfig class that lets providers add and validate their own runtime arguments for operations like supervised_fine_tune.

For example, https://github.com/opendatahub-io/llama-stack-provider-kft has things like `input-pvc`, `model-path`, etc. in its provider config.
That is not sustainable, and neither is adding each and every required field to the post_training API spec. RuntimeConfig has a sub-class called Config which allows extra fields to be specified arbitrarily. It is the provider's job to derive its own class from this one, add valid options, parse them, and so on.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-04-26 10:47:29 -04:00
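As a rough illustration of the idea described in the commit message (not the actual implementation), here is a minimal sketch assuming pydantic v2: a RuntimeConfig whose nested Config permits arbitrary extra fields, plus a hypothetical provider-side subclass (the KFTRuntimeConfig name and its fields are invented here, loosely mirroring the `input-pvc`/`model-path` options mentioned above) that declares and validates its own runtime arguments.

```python
# Minimal sketch only -- everything beyond the RuntimeConfig/Config naming is
# hypothetical and not taken from the actual commit.
from pydantic import BaseModel


class RuntimeConfig(BaseModel):
    """Generic container for per-request, provider-specific runtime arguments."""

    class Config:
        extra = "allow"  # accept fields the base API spec does not declare


class KFTRuntimeConfig(RuntimeConfig):
    """Hypothetical provider subclass that adds and validates its own options."""

    input_pvc: str
    model_path: str
    num_workers: int = 1


# A client can then supply provider-specific options at request time; unknown
# extras (e.g. "warmup") are still accepted thanks to extra = "allow".
cfg = KFTRuntimeConfig(input_pvc="my-pvc", model_path="/models/llama", warmup=True)
print(cfg.model_dump())
```

In this scheme the API spec only ever needs to know about the base RuntimeConfig, while each provider documents, parses, and validates the extra options it actually understands.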
| Name | Last commit | Date |
|---|---|---|
| agents | feat(agents): add agent naming functionality (#1922) | 2025-04-17 07:02:47 -07:00 |
| batch_inference | feat: add batch inference API to llama stack inference (#1945) | 2025-04-12 11:41:12 -07:00 |
| benchmarks | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| common | refactor: extract pagination logic into shared helper function (#1770) | 2025-03-31 13:08:29 -07:00 |
| datasetio | refactor: extract pagination logic into shared helper function (#1770) | 2025-03-31 13:08:29 -07:00 |
| datasets | chore: Don't set type variables from register_schema() (#1713) | 2025-03-19 20:29:00 -07:00 |
| eval | fix: fix jobs api literal return type (#1757) | 2025-03-21 14:04:21 -07:00 |
| files | feat(api): don't return a payload on file delete (#1640) | 2025-03-25 17:12:36 -07:00 |
| inference | fix: OpenAI spec cleanup for assistant requests (#1963) | 2025-04-17 06:56:10 -07:00 |
| inspect | feat: add health to all providers through providers endpoint (#1418) | 2025-04-14 11:59:36 +02:00 |
| models | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| post_training | feat: add post_training RuntimeConfig | 2025-04-26 10:47:29 -04:00 |
| providers | feat: add health to all providers through providers endpoint (#1418) | 2025-04-14 11:59:36 +02:00 |
| safety | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| scoring | docs: api documentation for agents/eval/scoring/datasets (#1400) | 2025-03-05 09:40:24 -08:00 |
| scoring_functions | chore: Don't set type variables from register_schema() (#1713) | 2025-03-19 20:29:00 -07:00 |
| shields | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| synthetic_data_generation | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| telemetry | chore: Don't set type variables from register_schema() (#1713) | 2025-03-19 20:29:00 -07:00 |
| tools | fix(api): don't return list for runtime tools (#1686) | 2025-04-01 09:53:11 +02:00 |
| vector_dbs | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| vector_io | chore: mypy violations cleanup for inline::{telemetry,tool_runtime,vector_io} (#1711) | 2025-03-20 10:01:10 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | feat(api): don't return a payload on file delete (#1640) | 2025-03-25 17:12:36 -07:00 |
| resource.py | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| version.py | llama-stack version alpha -> v1 | 2025-01-15 05:58:09 -08:00 |