# What does this PR do?

Catches a bug in the previous codegen that was removing newlines from the generated files.

## Test Plan

```
python llama_stack/scripts/distro_codegen.py
```
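For context, here is a minimal, hypothetical sketch (not the actual llama-stack code) of how this class of newline-stripping bug typically arises and how a round-trip check can catch it: joining the output of `splitlines()` silently drops the file's trailing newline, while keeping line endings attached preserves the content exactly.

```python
def normalize_buggy(text: str) -> str:
    # Bug: splitlines() discards the newline characters, and the re-join
    # never restores the file's trailing newline, so every codegen pass
    # strips it from the output.
    return "\n".join(text.splitlines())


def normalize_fixed(text: str) -> str:
    # Keep the line endings attached to each line, so a re-render of
    # unchanged content is byte-for-byte identical to the input.
    return "".join(text.splitlines(keepends=True))


def check_round_trip(original: str, rendered: str) -> None:
    # Regression check: codegen that merely re-renders existing content
    # must not alter the file, including its newline structure.
    if original != rendered:
        raise ValueError("codegen altered file content (newlines lost?)")


if __name__ == "__main__":
    doc = "line one\n\nline three\n"
    assert normalize_buggy(doc) == "line one\n\nline three"  # trailing \n lost
    check_round_trip(doc, normalize_fixed(doc))  # passes: content preserved
```

Running the codegen script twice and confirming the second run produces no diff is one way to exercise such a check in practice.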
Templates directory listing:

- bedrock
- cerebras
- dell
- experimental-post-training
- fireworks
- hf-endpoint
- hf-serverless
- meta-reference-gpu
- meta-reference-quantized-gpu
- nvidia
- ollama
- remote-vllm
- sambanova
- tgi
- together
- vllm-gpu
- __init__.py
- template.py