llama-stack/docs/_static
Charlie Doern 0751a960a5
feat: make training config fields optional (#1861)
# What does this PR do?

Today, `supervised_fine_tune` itself and the `TrainingConfig` class have a
number of required fields that a provider implementation might not need.

For example, if a provider handles hyperparameters, dataset retrieval, and
optimizer or LoRA settings entirely through its own configuration, a user
must still pass in a virtually empty `DataConfig`, `OptimizerConfig`, and
`AlgorithmConfig` in some cases.

Many of these fields are specific to llama models, or are knobs that only
apply when customizing inline training.

Adding remote post_training providers will require either loosening these
requirements or forcing users to pass in empty objects just to satisfy the
Pydantic models.
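
A minimal sketch of the shape of this change, assuming Pydantic models with
illustrative field names (the real llama-stack definitions carry more
fields than shown here):

```python
from typing import Optional

from pydantic import BaseModel


# Minimal stand-ins for the nested configs; the actual llama-stack
# definitions have many more fields.
class DataConfig(BaseModel):
    dataset_id: str
    batch_size: int


class OptimizerConfig(BaseModel):
    lr: float


class TrainingConfig(BaseModel):
    # Previously the nested configs were required, so a remote provider
    # that manages these settings itself still forced callers to build
    # empty objects. With None defaults they can simply be omitted.
    n_epochs: int
    data_config: Optional[DataConfig] = None
    optimizer_config: Optional[OptimizerConfig] = None


# With the optional fields, this now validates without nested configs;
# before the change it would have raised a ValidationError.
cfg = TrainingConfig(n_epochs=1)
```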

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-04-12 01:13:45 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| css | docs: Updated docs to show minimal RAG example and some other minor changes (#1935) | 2025-04-11 11:50:36 -07:00 |
| js | chore: Detect browser setting for dark/light mode and set default to light mode (#1913) | 2025-04-09 12:40:56 -04:00 |
| providers/vector_io | docs: Document sqlite-vec faiss comparison (#1821) | 2025-03-28 17:41:33 +01:00 |
| llama-stack-logo.png | first version of readthedocs (#278) | 2024-10-22 10:15:58 +05:30 |
| llama-stack-spec.html | feat: make training config fields optional (#1861) | 2025-04-12 01:13:45 -07:00 |
| llama-stack-spec.yaml | feat: make training config fields optional (#1861) | 2025-04-12 01:13:45 -07:00 |
| llama-stack.png | Make a new llama stack image | 2024-11-22 23:49:22 -08:00 |
| remote_or_local.gif | [docs] update documentations (#356) | 2024-11-04 16:52:38 -08:00 |
| safety_system.webp | [Docs] Zero-to-Hero notebooks and quick start documentation (#368) | 2024-11-08 17:16:44 -08:00 |