# What does this PR do?

Create a distribution template using Groq as the inference provider.

Link to issue: https://github.com/meta-llama/llama-stack/issues/958

## Test Plan

Run `python llama_stack/scripts/distro_codegen.py` to generate `run.yaml` and `build.yaml` for the new template.

Test the newly created template by running `llama stack build --template <template-name>` followed by `llama stack run <template-name>`, as sketched in the example below.
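A concrete pass through the test plan might look like the following sketch. The template name `groq` and the `GROQ_API_KEY` environment variable are assumptions inferred from the PR's purpose and the naming conventions of the existing templates, not something this description confirms.

```bash
# Regenerate run.yaml and build.yaml for all distribution templates,
# including the new one added by this PR
python llama_stack/scripts/distro_codegen.py

# Build the new distribution ("groq" is the assumed template name)
llama stack build --template groq

# Start the stack; passing the Groq API key via --env is an assumption
# about how this template reads its credentials
llama stack run groq --env GROQ_API_KEY=<your-key>
```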