Summary:

Fixes the issue reported in
https://github.com/meta-llama/llama-stack/pull/755/files#r1915696188

Test Plan:

Re-run the config gen:

```
pip install .
python3 llama_stack/scripts/distro_codegen.py
```
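As an optional sanity check after regeneration (a suggested step, not part of the original test plan), the working tree can be inspected to confirm only the expected generated configs changed. The `llama_stack/templates/` path below is an assumption about where the generated template configs live:

```
# Suggested verification (hypothetical paths): list and inspect
# any files the codegen touched; llama_stack/templates/ is assumed.
git status --short llama_stack/templates/
git diff llama_stack/templates/
```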