# What does this PR do?

Today, `supervised_fine_tune` itself and the `TrainingConfig` class have a number of required fields that a provider implementation might not need. For example, if a provider wants to handle hyperparameters in its own configuration, as well as any kind of dataset retrieval, optimizer, or LoRA config, a user still has to pass in a virtually empty `DataConfig`, `OptimizerConfig`, and `AlgorithmConfig` in some cases. Many of these fields are intended to work specifically with Llama models, and they are knobs meant for customizing inline training. Adding remote post_training providers will require either loosening these arguments or forcing users to pass in empty objects to satisfy the Pydantic models.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
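As a rough sketch of the loosening described above (the class and field names below are illustrative stand-ins, not the exact llama-stack definitions), marking the sub-configs as `Optional` on `TrainingConfig` would let a caller omit the placeholder objects entirely:

```python
from typing import Optional

from pydantic import BaseModel


# Illustrative stand-ins for the real llama-stack config classes;
# the field names here are assumptions for the sake of the example.
class DataConfig(BaseModel):
    dataset_id: Optional[str] = None
    batch_size: Optional[int] = None


class OptimizerConfig(BaseModel):
    lr: Optional[float] = None


class TrainingConfig(BaseModel):
    n_epochs: int = 1
    # Loosened: the sub-configs become optional, so a provider that handles
    # hyperparameters, dataset retrieval, and optimizer settings in its own
    # configuration no longer forces callers to build empty objects.
    data_config: Optional[DataConfig] = None
    optimizer_config: Optional[OptimizerConfig] = None


# Before loosening, a caller had to pass virtually empty objects:
#   TrainingConfig(n_epochs=1,
#                  data_config=DataConfig(),
#                  optimizer_config=OptimizerConfig())
# After loosening, this is enough:
cfg = TrainingConfig(n_epochs=1)
print(cfg)
```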