fix: remove extra sft args in NvidiaPostTrainingAdapter (#1939)

# What does this PR do?

The `supervised_fine_tune` method in `NvidiaPostTrainingAdapter` accepted several
extra arguments (`extra_json`, `params`, `headers`, and `**kwargs`) that are not
part of the `post_training` protocol. These extra arguments caused FastAPI to
throw an error when standing up an endpoint that used this provider.

(Closes #1938)
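
One way to guard against this kind of drift is to compare the adapter's
signature against the protocol's. The sketch below is not part of this PR, and
the import paths and the `PostTraining` protocol name are assumptions about the
llama-stack layout rather than verified references:

```
# Sketch only (not part of this PR): check that the adapter's
# supervised_fine_tune signature stays aligned with the protocol,
# since extra parameters confuse FastAPI's route generation.
# Import paths and class names below are assumptions about the
# llama-stack layout, not verified against this revision.
import inspect

from llama_stack.apis.post_training import PostTraining  # assumed path
from llama_stack.providers.remote.post_training.nvidia.post_training import (  # assumed path
    NvidiaPostTrainingAdapter,
)


def test_signature_matches_protocol():
    protocol_params = set(inspect.signature(PostTraining.supervised_fine_tune).parameters)
    adapter_params = set(inspect.signature(NvidiaPostTrainingAdapter.supervised_fine_tune).parameters)
    # The adapter should not accept parameters the protocol does not declare.
    assert adapter_params <= protocol_params, adapter_params - protocol_params
```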

## Test Plan

Before this change, bringing up a stack with the `nvidia` template
failed. Afterwards, it passes. I'm testing this like:

```
INFERENCE_MODEL="meta/llama-3.1-8b-instruct" \
llama stack build --template nvidia --image-type venv --run
```

I also ensured the nvidia/test_supervised_fine_tuning.py tests still
pass via:

```
python -m pytest \
  tests/unit/providers/nvidia/test_supervised_fine_tuning.py
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
Author: Ben Browning
Date: 2025-04-11 13:17:57 -04:00
Committed by: GitHub
Parent: 40f41af2f7
Commit: 2a74f0db39

```
@@ -206,10 +206,6 @@ class NvidiaPostTrainingAdapter(ModelRegistryHelper):
         model: str,
         checkpoint_dir: Optional[str],
         algorithm_config: Optional[AlgorithmConfig] = None,
-        extra_json: Optional[Dict[str, Any]] = None,
-        params: Optional[Dict[str, Any]] = None,
-        headers: Optional[Dict[str, Any]] = None,
-        **kwargs,
     ) -> NvidiaPostTrainingJob:
         """
         Fine-tunes a model on a dataset.
```
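
For reference, here is a rough sketch of the trimmed signature in isolation. It
is not copied from the repository: the protocol-level parameters before `model`
and their types are assumptions based on the `post_training` protocol, and
placeholder classes stand in for the real `AlgorithmConfig`, `TrainingConfig`,
and `NvidiaPostTrainingJob` types so the snippet is self-contained:

```
from typing import Any, Dict, Optional


# Placeholder stand-ins so this sketch is self-contained; the real
# types live in llama-stack and its NVIDIA provider.
class AlgorithmConfig: ...
class TrainingConfig: ...
class NvidiaPostTrainingJob: ...


class NvidiaPostTrainingAdapter:
    async def supervised_fine_tune(
        self,
        job_uuid: str,                             # assumed from the post_training protocol
        training_config: TrainingConfig,           # assumed from the post_training protocol
        hyperparam_search_config: Dict[str, Any],  # assumed from the post_training protocol
        logger_config: Dict[str, Any],             # assumed from the post_training protocol
        model: str,
        checkpoint_dir: Optional[str],
        algorithm_config: Optional[AlgorithmConfig] = None,
        # extra_json, params, headers, and **kwargs are gone: the public
        # signature now carries only protocol parameters, so FastAPI can
        # derive the endpoint cleanly.
    ) -> NvidiaPostTrainingJob:
        ...
```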