llama-stack/llama_stack/providers/remote/safety/nvidia
Jash Gulabrai eab550f7d2
fix: Fix messages format in NVIDIA safety check request body (#2063)
# What does this PR do?
When running a Llama Stack server and invoking the
`/v1/safety/run-shield` endpoint, the NVIDIA Guardrails endpoint can in some
cases fail with a `422: Unprocessable Entity` error due to malformed input.

For example, given a request body like:
```
{
  "model": "test",
  "messages": [
    { "role": "user", "content": "You are stupid." }
  ]
}
```
`convert_pydantic_to_json_value` converts the message to:
```
{ "role": "user", "content": "You are stupid.", "context": null }
```
This causes NVIDIA Guardrails to return `HTTPError: 422 Client
Error: Unprocessable Entity for url:
http://nemo.test/v1/guardrail/checks`, because `context` must not be
included in the request body.
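
The fix is to drop fields the Guardrails API does not accept before building the request body. Here is a minimal sketch of the idea (the helper name and the allowed-key set are illustrative, not the provider's actual code):

```
# Illustrative sketch: keep only the keys the Guardrails checks endpoint accepts.
ALLOWED_KEYS = {"role", "content"}  # assumption: extra keys such as "context" cause the 422


def to_guardrails_messages(messages: list[dict]) -> list[dict]:
    """Drop extras like 'context' that convert_pydantic_to_json_value leaves in."""
    return [{k: v for k, v in m.items() if k in ALLOWED_KEYS} for m in messages]


converted = [{"role": "user", "content": "You are stupid.", "context": None}]
body = {"model": "test", "messages": to_guardrails_messages(converted)}
# body["messages"] is now [{"role": "user", "content": "You are stupid."}],
# matching what the Guardrails endpoint expects.
```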


## Test Plan
I ran the Llama Stack server locally and manually verified that the
endpoint now succeeds.

```
message = {"role": "user", "content": "You are stupid."}
response = client.safety.run_shield(messages=[message], shield_id=shield_id, params={})
```
Server logs:
```
14:29:09.656 [START] /v1/safety/run-shield
INFO:     127.0.0.1:54616 - "POST /v1/safety/run-shield HTTP/1.1" 200 OK
14:29:09.918 [END] /v1/safety/run-shield [StatusCode.OK] (262.26ms)
```


Co-authored-by: Jash Gulabrai <jgulabrai@nvidia.com>
2025-04-30 18:01:28 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | feat: added nvidia as safety provider (#1248) | 2025-03-17 14:39:23 -07:00 |
| config.py | feat: added nvidia as safety provider (#1248) | 2025-03-17 14:39:23 -07:00 |
| nvidia.py | fix: Fix messages format in NVIDIA safety check request body (#2063) | 2025-04-30 18:01:28 +02:00 |
| README.md | docs: Add NVIDIA platform distro docs (#1971) | 2025-04-17 05:54:30 -07:00 |

# NVIDIA Safety Provider for LlamaStack

This provider enables safety checks and guardrails for LLM interactions using NVIDIA's NeMo Guardrails service.

## Features

- Run safety checks for messages

## Getting Started

### Prerequisites

- LlamaStack with NVIDIA configuration
- Access to the NVIDIA NeMo Guardrails service
- A deployed NIM for the model used for safety checks

### Setup

Build the NVIDIA environment:

```
llama stack build --template nvidia --image-type conda
```

## Basic Usage using the LlamaStack Python Client

### Initialize the client

```
import os

os.environ["NVIDIA_API_KEY"] = "your-api-key"
os.environ["NVIDIA_GUARDRAILS_URL"] = "http://guardrails.test"

from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("nvidia")
client.initialize()
```

### Create a safety shield

```
from llama_stack.apis.safety import Shield
from llama_stack.apis.inference import Message

# Create a safety shield
shield = Shield(
    shield_id="your-shield-id",
    provider_resource_id="safety-model-id",  # The model to use for safety checks
    description="Safety checks for content moderation",
)

# Register the shield
await client.safety.register_shield(shield)
```

### Run safety checks

```
# Messages to check
messages = [Message(role="user", content="Your message to check")]

# Run safety check
response = await client.safety.run_shield(
    shield_id="your-shield-id",
    messages=messages,
)

# Check for violations
if response.violation:
    print(f"Safety violation detected: {response.violation.user_message}")
    print(f"Violation level: {response.violation.violation_level}")
    print(f"Metadata: {response.violation.metadata}")
else:
    print("No safety violations detected")
```