NVIDIA Post-Training Provider for LlamaStack

This provider enables fine-tuning of LLMs using NVIDIA's NeMo Customizer service.

Features

  • Supervised fine-tuning of Llama models
  • LoRA fine-tuning support
  • Job management and status tracking

Getting Started

Prerequisites

  • LlamaStack with NVIDIA configuration
  • Access to the hosted NVIDIA NeMo Customizer service
  • A dataset registered with the hosted NeMo Customizer service
  • The base model downloaded and available in the hosted NeMo Customizer service

Setup

Build the NVIDIA environment:

llama stack build --template nvidia --image-type conda

Basic Usage with the LlamaStack Python Client

Create Customization Job

Initialize the client

import os

# The NVIDIA distribution reads its configuration from environment
# variables at initialization time.
os.environ["NVIDIA_API_KEY"] = "your-api-key"
os.environ["NVIDIA_CUSTOMIZER_URL"] = "http://nemo.test"
os.environ["NVIDIA_USER_ID"] = "llama-stack-user"
os.environ["NVIDIA_DATASET_NAMESPACE"] = "default"
os.environ["NVIDIA_PROJECT_ID"] = "test-project"
# Name under which the customized model will be written
os.environ["NVIDIA_OUTPUT_MODEL_DIR"] = "test-example-model@v1"

from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("nvidia")
client.initialize()
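
To confirm the stack came up with the NVIDIA provider wired in, you can list the loaded providers. A quick sanity-check sketch (attribute names assumed from the llama-stack provider info schema):

providers = client.providers.list()
print([p.provider_id for p in providers])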

Configure fine-tuning parameters

from llama_stack_client.types.post_training_supervised_fine_tune_params import (
    TrainingConfig,
    TrainingConfigDataConfig,
    TrainingConfigOptimizerConfig,
)
from llama_stack_client.types.algorithm_config_param import LoraFinetuningConfig

Set up LoRA configuration

algorithm_config = LoraFinetuningConfig(type="LoRA", adapter_dim=16)  # adapter_dim sets the LoRA rank

Configure training data

data_config = TrainingConfigDataConfig(
    dataset_id="your-dataset-id",  # Use client.datasets.list() to see available datasets
    batch_size=16,
)
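
If you are unsure which dataset ID to pass, you can enumerate the registered datasets first, as the comment above suggests. A short sketch (the identifier attribute is an assumption based on the llama-stack dataset schema):

for dataset in client.datasets.list():
    print(dataset.identifier)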

Configure optimizer

optimizer_config = TrainingConfigOptimizerConfig(
    lr=0.0001,
)

Set up training configuration

training_config = TrainingConfig(
    n_epochs=2,
    data_config=data_config,
    optimizer_config=optimizer_config,
)

Start fine-tuning job

training_job = client.post_training.supervised_fine_tune(
    job_uuid="unique-job-id",
    # The base model must be one the NeMo Customizer supports for fine-tuning
    model="meta-llama/Llama-3.1-8B-Instruct",
    checkpoint_dir="",
    algorithm_config=algorithm_config,
    training_config=training_config,
    logger_config={},
    hyperparam_search_config={},
)
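
The call returns a handle for the submitted job. A short usage note (the job_uuid attribute is assumed from the post-training job schema):

print(f"Submitted job: {training_job.job_uuid}")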

List all jobs

jobs = client.post_training.job.list()
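
To see what came back, a brief sketch (assumes each entry exposes a job_uuid field; adjust to the response shape your client version returns):

for job in jobs:
    print(job.job_uuid)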

Check job status

job_status = client.post_training.job.status(job_uuid="your-job-id")
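
Because customization jobs can run for a while, it is common to poll until the job reaches a terminal state. A minimal polling sketch; the status literals are assumptions, so check the values your NeMo Customizer deployment actually reports:

import time

while True:
    job_status = client.post_training.job.status(job_uuid="your-job-id")
    # "completed", "failed", and "cancelled" are assumed terminal states
    if job_status.status in ("completed", "failed", "cancelled"):
        break
    time.sleep(30)
print(f"Job finished with status: {job_status.status}")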

Cancel a job

client.post_training.job.cancel(job_uuid="your-job-id")
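
Register the fine-tuned model (if needed)

Depending on your run configuration, the fine-tuned model may need to be registered with the stack before it can be served. A hedged sketch, assuming the inference provider's ID is "nvidia":

client.models.register(
    model_id="test-example-model@v1",  # matches NVIDIA_OUTPUT_MODEL_DIR above
    model_type="llm",
    provider_id="nvidia",  # assumption: the NVIDIA inference provider's ID
)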

Inference with the fine-tuned model

response = client.inference.completion(
    content="Complete the sentence using one word: Roses are red, violets are ",
    stream=False,
    model_id="test-example-model@v1",
    sampling_params={
        "max_tokens": 50,
    },
)
print(response.content)
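
The same checkpoint can also be exercised through the chat interface. A minimal sketch using the standard llama-stack chat completion API (message shape assumed from that API):

response = client.inference.chat_completion(
    model_id="test-example-model@v1",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.completion_message.content)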