llama-stack-mirror/llama_stack/distributions/starter-gpu (last commit: 2025-09-30 14:48:12 -07:00)
__init__.py      feat(distro): fork off a starter-gpu distribution (#3240)        2025-08-22 15:47:15 -07:00
build.yaml       feat: add Azure OpenAI inference provider support (#3396)        2025-09-11 13:48:38 +02:00
run.yaml         fix(logging): disable console telemetry sink by default          2025-09-30 14:48:12 -07:00
starter_gpu.py   fix: Fix locations of distribution runtime directories (#3336)   2025-09-05 14:09:36 +02:00
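
Of these files, run.yaml is the runnable server configuration for the starter-gpu distribution. The snippet below is a minimal sketch of how one might inspect it: it loads the YAML and prints the APIs and provider entries it wires up. The relative path, the `apis` and `providers` keys, and the `provider_id`/`provider_type` fields are assumptions based on how llama-stack run configs are typically structured, not something guaranteed by this listing; PyYAML is assumed to be installed.

```python
# Minimal sketch (not part of the distribution itself): load the starter-gpu
# run.yaml and list the APIs and provider types it configures.
# Path and key names ("apis", "providers", "provider_id", "provider_type")
# are assumptions; adjust them to match the actual file.
from pathlib import Path

import yaml  # requires PyYAML

run_config_path = Path("llama_stack/distributions/starter-gpu/run.yaml")

with run_config_path.open() as f:
    config = yaml.safe_load(f)

# Show the top-level structure first, so nothing has to be guessed.
print("top-level keys:", sorted(config))

# If the usual keys are present, report the enabled APIs and their providers.
for api in config.get("apis", []):
    print("api:", api)

for api, providers in config.get("providers", {}).items():
    for provider in providers:
        # Provider entries in llama-stack run configs commonly carry an id and a type.
        print(f"{api}: {provider.get('provider_id')} ({provider.get('provider_type')})")
```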