NVIDIA Safety Provider for LlamaStack

This provider enables safety checks and guardrails for LLM interactions using NVIDIA's NeMo Guardrails service.

Features

  • Run safety checks for messages

Getting Started

Prerequisites

  • LlamaStack with NVIDIA configuration
  • Access to NVIDIA NeMo Guardrails service
  • A deployed NIM for the model used to run safety checks

Setup

Build the NVIDIA environment:

llama stack build --distro nvidia --image-type venv

Basic Usage with the LlamaStack Python Client

Initialize the client

import os

os.environ["NVIDIA_API_KEY"] = "your-api-key"
os.environ["NVIDIA_GUARDRAILS_URL"] = "http://guardrails.test"

from llama_stack.core.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("nvidia")
client.initialize()

Create a safety shield

from llama_stack.apis.safety import Shield
from llama_stack.apis.inference import Message

# Create a safety shield
shield = Shield(
    shield_id="your-shield-id",
    provider_resource_id="safety-model-id",  # The model to use for safety checks
    description="Safety checks for content moderation",
)

# Register the shield
await client.safety.register_shield(shield)
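
To confirm the shield is visible to the stack before running checks, you can list the registered shields. This is a minimal sketch and assumes the client exposes the standard shields listing API (client.shields.list() is an assumption here, not taken from this README):

# List registered shields and verify the new shield is present
# (the shields.list() call and the identifier attribute are assumptions)
registered = await client.shields.list()
for s in registered:
    print(getattr(s, "identifier", s))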

Run safety checks

# Messages to check
messages = [Message(role="user", content="Your message to check")]

# Run safety check
response = await client.safety.run_shield(
    shield_id="your-shield-id",
    messages=messages,
)

# Check for violations
if response.violation:
    print(f"Safety violation detected: {response.violation.user_message}")
    print(f"Violation level: {response.violation.violation_level}")
    print(f"Metadata: {response.violation.metadata}")
else:
    print("No safety violations detected")