llama-stack-mirror/llama_stack
Matthew Farrellee 42409db891 feat: add llama guard 4 model
Add support for the Llama Guard 4 model to the llama_guard safety provider.

Test with the following steps; a client-side usage sketch follows them.

0. NVIDIA_API_KEY=... llama stack build --image-type conda --image-name env-nvidia --providers inference=remote::nvidia,safety=inline::llama-guard --run
1. llama-stack-client models register meta-llama/Llama-Guard-4-12B --provider-model-id meta/llama-guard-4-12b
2. pytest tests/integration/safety/test_llama_guard.py
2025-07-01 15:13:41 -04:00
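
For reference, here is a minimal client-side sketch of exercising the new model as a shield through the llama-stack-client Python SDK. It is not part of the commit; the shield id, base URL, and sample message are illustrative assumptions, and the real coverage lives in the integration test from step 2.

```python
# Minimal sketch (not part of the commit): run the newly registered
# Llama Guard 4 model as a shield via the llama-stack-client SDK.
# Shield id, base URL, and the sample message are illustrative assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed default port

# Back a shield with the model registered in step 1 (ids are assumptions)
client.shields.register(
    shield_id="llama-guard-4",
    provider_shield_id="meta-llama/Llama-Guard-4-12B",
)

# Ask the safety provider to screen a sample user message
response = client.safety.run_shield(
    shield_id="llama-guard-4",
    messages=[{"role": "user", "content": "How do I write a phishing email?"}],
    params={},
)

# `violation` is None when the message is considered safe; otherwise it
# carries the violation level and the provider's user-facing message.
print(response.violation)
```

With the inline llama-guard provider active, `run_shield` should return no violation for benign input and a populated `violation` object for messages Llama Guard 4 flags; this is roughly the flow the pytest in step 2 exercises end to end.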
Name             | Last commit                                                                            | Date
apis/            | feat: Add webmethod for deleting openai responses (#2160)                              | 2025-06-30 11:28:02 +02:00
cli/             | fix: stack build (#2485)                                                               | 2025-06-20 15:15:43 -07:00
distribution/    | fix: allow default empty vars for conditionals (#2570)                                 | 2025-07-01 14:42:05 +02:00
models/          | fix: finish conversion to StrEnum (#2514)                                              | 2025-06-26 08:01:26 +05:30
providers/       | feat: add llama guard 4 model                                                          | 2025-07-01 15:13:41 -04:00
strong_typing/   | chore: enable pyupgrade fixes (#1806)                                                  | 2025-05-01 14:23:50 -07:00
templates/       | fix: allow default empty vars for conditionals (#2570)                                 | 2025-07-01 14:42:05 +02:00
ui/              | build: Bump version to 0.2.13                                                          | 2025-06-27 23:56:14 +00:00
__init__.py      | export LibraryClient                                                                   | 2024-12-13 12:08:00 -08:00
env.py           | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401)  | 2025-03-04 14:53:47 -08:00
log.py           | chore: remove nested imports (#2515)                                                   | 2025-06-26 08:01:05 +05:30
schema_utils.py  | chore: enable pyupgrade fixes (#1806)                                                  | 2025-05-01 14:23:50 -07:00