llama-stack-mirror/llama_stack/providers/inline/safety
Michael Dawson a654467552
feat: add cpu/cuda config for prompt guard (#2194)
# What does this PR do?
Previously, prompt guard was hard-coded to require CUDA, which prevented
it from being used on an instance without CUDA support.

This PR allows prompt guard to be configured to use either CPU or CUDA.
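The core of the change is a device-selection step: use CUDA when it is requested and actually available, otherwise fall back to CPU. A minimal sketch of that logic is below; the function name and signature are illustrative (in practice the availability probe would be `torch.cuda.is_available()`), not the exact code from this PR.

```python
def resolve_device(configured: str, cuda_available: bool) -> str:
    """Pick the execution device for prompt guard.

    configured: the device requested in the provider config ("cpu" or "cuda").
    cuda_available: whether a CUDA device is present; in a real deployment
    this would come from torch.cuda.is_available().
    """
    # Fall back to CPU when CUDA was requested but no CUDA device exists.
    if configured == "cuda" and cuda_available:
        return "cuda"
    return "cpu"
```

With this shape, a host without a GPU can still run prompt guard by configuring (or falling back to) `"cpu"`, which is the behavior the PR description validates in its test plan.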

Closes [#2133](https://github.com/meta-llama/llama-stack/issues/2133)

## Test Plan (edited after incorporating a suggestion)
1) Started a stack configured with prompt guard on a system without a GPU
and validated that prompt guard could be used through the APIs.

2) Validated on a system with a GPU (but without llama stack) that the
Python code selecting between CPU and CUDA support returned the right
value when a CUDA device was available.

3) Ran the unit tests as per
https://github.com/meta-llama/llama-stack/blob/main/tests/unit/README.md
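For illustration, a provider entry enabling prompt guard on a CPU-only host might look like the fragment below. The `device` key name is an assumption inferred from the PR description, not confirmed from the diff, so treat it as a sketch rather than the exact schema.

```yaml
safety:
  - provider_id: prompt-guard
    provider_type: inline::prompt-guard
    config:
      device: cpu  # hypothetical key; "cuda" would select GPU execution
```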


---------

Signed-off-by: Michael Dawson <mdawson@devrus.com>
2025-05-28 12:23:15 -07:00
| Name | Latest commit | Date |
|---|---|---|
| code_scanner | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| llama_guard | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| prompt_guard | feat: add cpu/cuda config for prompt guard (#2194) | 2025-05-28 12:23:15 -07:00 |
| __init__.py | add missing inits | 2024-11-08 17:54:24 -08:00 |