llama-stack-mirror/llama_stack/providers
Derek Higgins 6434cdfdab fix: Run prompt_guard model in a separate thread
Running the model on the GPU blocks the CPU. Move the
inference call to its own thread, and wrap it in a lock
so that multiple simultaneous runs cannot exhaust
the GPU (see the sketch after the commit details below).

Closes: #1746
Signed-off-by: Derek Higgins <derekh@redhat.com>
2025-03-28 14:19:30 +00:00
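A minimal sketch of the approach described in the commit message: off-load the blocking GPU inference to a worker thread and serialize it with a lock. This is illustrative only; the class name `PromptGuardScanner`, the injected `model` object with a blocking `classify()` call, and the `scan_text` method are assumptions, not the provider's actual API.

```python
import asyncio
import threading


class PromptGuardScanner:
    """Hypothetical sketch: run a blocking GPU classifier off the event loop
    and allow only one inference at a time."""

    def __init__(self, model):
        self._model = model            # assumed to expose a blocking .classify(text) call
        self._lock = threading.Lock()  # one GPU inference at a time

    def _scan_blocking(self, text: str):
        # Runs in a worker thread; the lock keeps simultaneous requests
        # from exhausting GPU memory.
        with self._lock:
            return self._model.classify(text)

    async def scan_text(self, text: str):
        # asyncio.to_thread keeps the event loop responsive while the
        # GPU call blocks a background thread.
        return await asyncio.to_thread(self._scan_blocking, text)
```

Because `asyncio.to_thread` hands the blocking call to a background thread, the event loop keeps serving other requests while the classifier runs; the `threading.Lock` inside the worker thread is what actually serializes the GPU work.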
inline fix: Run prompt_guard model in a separate thread 2025-03-28 14:19:30 +00:00
registry feat(api): don't return a payload on file delete (#1640) 2025-03-25 17:12:36 -07:00
remote feat: Add nemo customizer (#1448) 2025-03-25 11:01:10 -07:00
tests refactor(test): introduce --stack-config and simplify options (#1404) 2025-03-05 17:02:02 -08:00
utils feat: Support "stop" parameter in remote:vLLM (#1715) 2025-03-24 12:42:55 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py chore: move all Llama Stack types from llama-models to llama-stack (#1098) 2025-02-14 09:10:59 -08:00