Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-06-28 02:53:30 +00:00
# What does this PR do?

The `supervised_fine_tune` method in `NvidiaPostTrainingAdapter` had some extra args that aren't part of the `post_training` protocol, and these extra args were causing FastAPI to throw an error when attempting to stand up an endpoint that used this provider.

(Closes #1938)

## Test Plan

Before this change, bringing up a stack with the `nvidia` template failed. Afterwards, it passes. I'm testing this like:

```
INFERENCE_MODEL="meta/llama-3.1-8b-instruct" \
llama stack build --template nvidia --image-type venv --run
```

I also ensured the `nvidia/test_supervised_fine_tuning.py` tests still pass via:

```
python -m pytest \
  tests/unit/providers/nvidia/test_supervised_fine_tuning.py
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
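The kind of mismatch this PR removes can be sketched in isolation: when an adapter's method signature carries parameters beyond what its protocol declares, the extra parameters surface when the method is introspected (as FastAPI does when generating an endpoint from the implementation). The class and parameter names below are illustrative, not the actual llama-stack definitions:

```python
import inspect
from typing import Protocol


# Hypothetical, simplified stand-in for the post_training protocol method.
class PostTraining(Protocol):
    def supervised_fine_tune(self, job_uuid: str, model: str) -> dict: ...


# An adapter whose method adds an argument the protocol does not declare --
# the shape of the bug fixed here (extra_arg is an invented example name).
class AdapterWithExtraArgs:
    def supervised_fine_tune(
        self, job_uuid: str, model: str, extra_arg: str = "x"
    ) -> dict:
        return {"job_uuid": job_uuid}


# Compare the introspected signatures, the same way a framework building
# routes from the implementation would see them.
protocol_params = set(inspect.signature(PostTraining.supervised_fine_tune).parameters)
adapter_params = set(inspect.signature(AdapterWithExtraArgs.supervised_fine_tune).parameters)
extra = adapter_params - protocol_params
print(sorted(extra))  # the parameters present only on the adapter
```

Removing the extra parameters makes the two signatures agree, so the generated endpoint matches the protocol exactly.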