Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-27 23:31:59 +00:00)
Previously, prompt guard was hard-coded to require CUDA, which prevented it from being used on an instance without CUDA support. This PR allows prompt guard to be configured to use either CPU or CUDA.

Signed-off-by: Michael Dawson <mdawson@devrus.com>
Files in this directory:

- __init__.py
- config.py
- prompt_guard.py
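The diff itself is not shown on this page, but as a rough illustration of the kind of change described, here is a minimal sketch assuming a pydantic-style config. The names used (`PromptGuardConfig.device`, `load_prompt_guard`, the Prompt-Guard model id) are illustrative assumptions, not the actual llama-stack code:

```python
# Hedged sketch of the described change: expose the device as a config
# option instead of hard-coding "cuda". All names here are illustrative,
# not the actual llama-stack API.
import torch
from pydantic import BaseModel, field_validator
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class PromptGuardConfig(BaseModel):
    # Hypothetical field: device on which to run the model.
    # Previously this was effectively always "cuda".
    device: str = "cuda"

    @field_validator("device")
    @classmethod
    def validate_device(cls, v: str) -> str:
        if v not in ("cpu", "cuda"):
            raise ValueError(f"device must be 'cpu' or 'cuda', got {v!r}")
        return v


def load_prompt_guard(config: PromptGuardConfig):
    # The model is moved to the configured device rather than an
    # unconditional torch.device("cuda").
    device = torch.device(config.device)
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Prompt-Guard-86M")
    model = AutoModelForSequenceClassification.from_pretrained(
        "meta-llama/Prompt-Guard-86M"
    ).to(device)
    return tokenizer, model
```

Under this sketch, a CPU-only instance would pass `PromptGuardConfig(device="cpu")` and load successfully, where the hard-coded version would fail at model load for lack of a CUDA device.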