# NVIDIA DatasetIO Provider for LlamaStack

This provider enables dataset management using NVIDIA's NeMo Customizer service.

## Features

- Register datasets for fine-tuning LLMs
- Unregister datasets

## Getting Started

### Prerequisites

- LlamaStack with NVIDIA configuration
- Access to a hosted NVIDIA NeMo Microservices deployment
- API key for authentication with the NVIDIA service

### Setup

Build the NVIDIA environment:

```bash
llama stack build --template nvidia --image-type conda
```

### Basic Usage using the LlamaStack Python Client

#### Initialize the client

```python
import os

os.environ["NVIDIA_API_KEY"] = "your-api-key"
os.environ["NVIDIA_CUSTOMIZER_URL"] = "http://nemo.test"
os.environ["NVIDIA_DATASET_NAMESPACE"] = "default"
os.environ["NVIDIA_PROJECT_ID"] = "test-project"

from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("nvidia")
client.initialize()
```
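If you are running a LlamaStack server rather than embedding it as a library, you can connect with the standalone `llama-stack-client` package instead. A minimal sketch, assuming the server is reachable at the default port `8321`:

```python
# Sketch: connect to a running LlamaStack server instead of using the
# in-process library client (assumes a server at localhost:8321).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")
```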

#### Register a dataset

```python
client.datasets.register(
    purpose="post-training/messages",
    dataset_id="my-training-dataset",
    source={"type": "uri", "uri": "hf://datasets/default/sample-dataset"},
    metadata={
        "format": "json",
        "description": "Dataset for LLM fine-tuning",
        "provider": "nvidia",
    },
)
```
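For context, the `post-training/messages` purpose indicates chat-style fine-tuning data. Below is a hypothetical example of a single record (datasets of this purpose are typically stored as JSONL, one JSON object per line); the exact fields your NeMo Customizer deployment expects may differ:

```python
# Hypothetical single training record for a "post-training/messages" dataset.
import json

record = {
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
    ]
}
print(json.dumps(record))  # one line of the JSONL file
```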

#### Get a list of all registered datasets

```python
datasets = client.datasets.list()
for dataset in datasets:
    print(f"Dataset ID: {dataset.identifier}")
    print(f"Description: {dataset.metadata.get('description', '')}")
    print(f"Source: {dataset.source.uri}")
    print("---")
```

#### Unregister a dataset

```python
client.datasets.unregister(dataset_id="my-training-dataset")
```
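The list and unregister calls above can also be combined for cleanup. A minimal sketch that unregisters every dataset currently known to the client:

```python
# Sketch: unregister all datasets registered with this client.
for dataset in client.datasets.list():
    client.datasets.unregister(dataset_id=dataset.identifier)
```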