NVIDIA Post-Training Provider for LlamaStack

This provider enables fine-tuning of LLMs using NVIDIA's NeMo Customizer service.

Features

  • Supervised fine-tuning of Llama models
  • LoRA fine-tuning support
  • Job management and status tracking

Getting Started

Prerequisites

  • LlamaStack with the NVIDIA configuration
  • Access to the hosted NVIDIA NeMo Customizer service
  • A dataset registered with the hosted NeMo Customizer service
  • A base model downloaded and available in the hosted NeMo Customizer service

Setup

Build the NVIDIA environment:

llama stack build --template nvidia --image-type conda

Basic Usage with the LlamaStack Python Client

Create Customization Job

Initialize the client

import os

os.environ["NVIDIA_API_KEY"] = "your-api-key"  # API key for the hosted NeMo Customizer service
os.environ["NVIDIA_CUSTOMIZER_URL"] = "http://nemo.test"  # NeMo Customizer endpoint
os.environ["NVIDIA_DATASET_NAMESPACE"] = "default"  # namespace the dataset was registered under
os.environ["NVIDIA_PROJECT_ID"] = "test-project"  # project that owns the customization job
os.environ["NVIDIA_OUTPUT_MODEL_DIR"] = "test-example-model@v1"  # name for the customized output model

from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("nvidia")
client.initialize()
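
With the client initialized, you can sanity-check the prerequisites above, for example by listing the datasets the stack can see. A minimal sketch, assuming each dataset object exposes an identifier field:

# Confirm the dataset from the prerequisites is registered and visible
datasets = client.datasets.list()
print([d.identifier for d in datasets])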

Configure fine-tuning parameters

from llama_stack_client.types.post_training_supervised_fine_tune_params import (
    TrainingConfig,
    TrainingConfigDataConfig,
    TrainingConfigOptimizerConfig,
)
from llama_stack_client.types.algorithm_config_param import LoraFinetuningConfig

Set up LoRA configuration

algorithm_config = LoraFinetuningConfig(type="LoRA", adapter_dim=16)  # adapter_dim sets the LoRA adapter dimension (rank)

Configure training data

data_config = TrainingConfigDataConfig(
    dataset_id="your-dataset-id",  # Use client.datasets.list() to see available datasets
    batch_size=16,
)

Configure optimizer

optimizer_config = TrainingConfigOptimizerConfig(
    lr=0.0001,
)

Set up training configuration

training_config = TrainingConfig(
    n_epochs=2,
    data_config=data_config,
    optimizer_config=optimizer_config,
)

Start fine-tuning job

training_job = client.post_training.supervised_fine_tune(
    job_uuid="unique-job-id",
    model="meta-llama/Llama-3.1-8B-Instruct",
    checkpoint_dir="",
    algorithm_config=algorithm_config,
    training_config=training_config,
    logger_config={},
    hyperparam_search_config={},
)
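
The call returns a job handle whose job_uuid is what the job-management calls below expect. A quick sketch, assuming the returned object exposes a job_uuid field:

# Keep the job UUID around for status checks and cancellation
print(f"Created job: {training_job.job_uuid}")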

List all jobs

jobs = client.post_training.job.list()

Check job status

job_status = client.post_training.job.status(job_uuid="your-job-id")
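
For long-running jobs you will typically poll until the job reaches a terminal state. A minimal sketch, assuming the status response exposes a status field with values like "completed" or "failed" (the exact names may differ in your client version):

import time

# Poll until the job finishes, checking every 30 seconds
while True:
    job_status = client.post_training.job.status(job_uuid="your-job-id")
    if job_status.status in ("completed", "failed", "cancelled"):
        break
    time.sleep(30)
print(f"Final status: {job_status.status}")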

Cancel a job

client.post_training.job.cancel(job_uuid="your-job-id")

Inference with the fine-tuned model

1. Register the model

from llama_stack.apis.models import ModelType

client.models.register(
    model_id="test-example-model@v1",
    provider_id="nvidia",
    provider_model_id="test-example-model@v1",
    model_type=ModelType.llm,
)
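
To confirm the registration took effect, you can list the models known to the stack. A quick check, assuming each model object exposes an identifier field:

print([m.identifier for m in client.models.list()])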

2. Run inference with the fine-tuned model

response = client.inference.completion(
    content="Complete the sentence using one word: Roses are red, violets are ",
    stream=False,
    model_id="test-example-model@v1",
    sampling_params={
        "max_tokens": 50,
    },
)
print(response.content)
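
The registered model can also be used through the chat completion API. A brief sketch using the same model_id; the message shape follows the standard LlamaStack client conventions:

response = client.inference.chat_completion(
    model_id="test-example-model@v1",
    messages=[
        {"role": "user", "content": "Complete the sentence using one word: Roses are red, violets are "}
    ],
)
print(response.completion_message.content)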