llama-stack/distributions
Yuan Tang 300e6e2702
Fix issue when generating distros (#755)
Addressed comment
https://github.com/meta-llama/llama-stack/pull/723#issuecomment-2581902075.

cc @yanxi0830 

I am not 100% sure the diff is correct, but this is the result of
running `python llama_stack/scripts/distro_codegen.py`.

---------

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-01-15 05:34:08 -08:00
Name                         | Last commit message                                                                          | Last commit date
bedrock                      | Update more distribution docs to be simpler and partially codegen'ed                         | 2024-11-20 22:03:44 -08:00
cerebras                     | Cerebras Inference Integration (#265)                                                        | 2024-12-03 21:15:32 -08:00
dell-tgi                     | Auto-generate distro yamls + docs (#468)                                                     | 2024-11-18 14:57:06 -08:00
fireworks                    | Auto-generate distro yamls + docs (#468)                                                     | 2024-11-18 14:57:06 -08:00
meta-reference-gpu           | Auto-generate distro yamls + docs (#468)                                                     | 2024-11-18 14:57:06 -08:00
meta-reference-quantized-gpu | Auto-generate distro yamls + docs (#468)                                                     | 2024-11-18 14:57:06 -08:00
ollama                       | Auto-generate distro yamls + docs (#468)                                                     | 2024-11-18 14:57:06 -08:00
remote-vllm                  | rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744)        | 2025-01-10 11:09:49 -08:00
tgi                          | Auto-generate distro yamls + docs (#468)                                                     | 2024-11-18 14:57:06 -08:00
together                     | Auto-generate distro yamls + docs (#468)                                                     | 2024-11-18 14:57:06 -08:00
vllm-gpu                     | Update more distribution docs to be simpler and partially codegen'ed                         | 2024-11-20 22:03:44 -08:00
dependencies.json            | Fix issue when generating distros (#755)                                                     | 2025-01-15 05:34:08 -08:00