llama-stack-mirror/llama_stack/core/routers
ehhuang f8eaa40580
chore: better error messages for moderations API (#3887)
# What does this PR do?
Improves the error returned by the moderations API when the requested model has no shield registered: the response now names the offending id and lists the valid shield choices.
## Test Plan
```
curl http://localhost:8321/v1/moderations \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "input": [
        "hello"
    ]
  }'
{"detail":"Invalid value: No shield associated with provider_resource id gpt-4o-mini: choose from ['together/meta-llama/Llama-Guard-4-12B']"}
```
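The error in the test plan suggests a lookup that falls back to listing the registered shields when the id is unknown. A minimal sketch of that pattern, using hypothetical names (`resolve_shield` and its arguments are illustrative, not the actual `safety.py` implementation):

```python
def resolve_shield(shields: dict[str, object], provider_resource_id: str):
    """Return the shield registered under provider_resource_id, or raise a
    descriptive error that lists the valid choices."""
    if provider_resource_id not in shields:
        raise ValueError(
            f"No shield associated with provider_resource id "
            f"{provider_resource_id}: choose from {sorted(shields)}"
        )
    return shields[provider_resource_id]
```

Listing the valid ids in the exception is what turns an opaque 500 into the actionable 400-style message shown above.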
2025-10-22 14:33:13 -07:00
__init__.py chore(cleanup)!: kill vector_db references as far as possible (#3864) 2025-10-20 20:06:16 -07:00
datasets.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
eval_scoring.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
inference.py feat: Add rerank models and rerank API change (#3831) 2025-10-22 12:02:28 -07:00
safety.py chore: better error messages for moderations API (#3887) 2025-10-22 14:33:13 -07:00
tool_runtime.py revert: "chore(cleanup)!: remove tool_runtime.rag_tool" (#3877) 2025-10-21 11:22:06 -07:00
vector_io.py chore(cleanup)!: kill vector_db references as far as possible (#3864) 2025-10-20 20:06:16 -07:00