llama-stack-mirror/llama_stack/providers
Ashwin Bharambe 9fa69b0337
feat(distro): no huggingface provider for starter (#3258)
The `trl` dependency brings in `accelerate`, which in turn brings in NVIDIA
dependencies for torch. We cannot have that in the starter distro, so there is
no CPU-only post-training via the huggingface provider there.
2025-08-26 14:06:36 -07:00
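To make the dependency chain described in the commit message concrete, below is a minimal sketch (not part of the repo) that prints the requirements each distribution declares, which lets you see `trl` pulling in `accelerate` and, with the default Linux wheels, `torch` pulling in `nvidia-*` CUDA packages. It uses only the standard-library `importlib.metadata` and assumes the packages are installed in the current environment; it is an illustration of the transitive-dependency check, not code from this repository.

```python
# Sketch: inspect declared requirements to trace the chain
# trl -> accelerate -> torch -> (on default Linux wheels) nvidia-* CUDA packages.
# Assumes the listed distributions are installed in the current environment.
from importlib.metadata import PackageNotFoundError, requires


def declared_requirements(dist_name: str) -> list[str]:
    """Return the requirement strings a distribution declares, or [] if it is not installed."""
    try:
        return requires(dist_name) or []
    except PackageNotFoundError:
        return []


if __name__ == "__main__":
    for dist in ("trl", "accelerate", "torch"):
        print(f"{dist}:")
        for req in declared_requirements(dist):
            print(f"  {req}")
```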
inline       | feat: Add optional idempotency support to batches API (#3171)                                             | 2025-08-22 15:50:40 -07:00
registry     | feat(distro): no huggingface provider for starter (#3258)                                                  | 2025-08-26 14:06:36 -07:00
remote       | chore: indicate to mypy that InferenceProvider.batch_completion/batch_chat_completion is concrete (#3239) | 2025-08-22 14:17:30 -07:00
utils        | feat: implement query_metrics (#3074)                                                                      | 2025-08-22 14:19:24 -07:00
__init__.py  | API Updates (#73)                                                                                          | 2024-09-17 19:51:35 -07:00
datatypes.py | feat: create unregister shield API endpoint in Llama Stack (#2853)                                         | 2025-08-05 07:33:46 -07:00