Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-23 21:04:29 +00:00)
Groq has never supported raw completions anyhow, so this makes it easier to switch it to LiteLLM. Our entire test suite passes. I also updated all the openai-compat providers so they work with API keys passed from request headers (`provider_data`).

## Test Plan

```bash
LLAMA_STACK_CONFIG=groq \
  pytest -s -v tests/client-sdk/inference/test_text_inference.py \
  --inference-model=groq/llama-3.3-70b-versatile --vision-inference-model=""
```

Also tested the (openai, anthropic, gemini) providers. No regressions.
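For context on the header-based key flow, here is a minimal sketch of how a client might supply a Groq key per request through `provider_data`. The `provider_data` constructor argument, the `groq_api_key` field name, and the model id are assumptions to verify against the Groq provider's config and the test plan above.

```python
# Minimal sketch (assumptions: the `provider_data` client argument and the
# `groq_api_key` field name; confirm against the Groq provider's config).
from llama_stack_client import LlamaStackClient

# provider_data is forwarded to the server in a request header, so the
# provider can pick up the key without it being baked into the run config.
client = LlamaStackClient(
    base_url="http://localhost:8321",
    provider_data={"groq_api_key": "gsk-..."},
)

response = client.inference.chat_completion(
    model_id="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.completion_message.content)
```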
Directory contents:

- bedrock
- cerebras
- dell-tgi
- fireworks
- meta-reference-gpu
- meta-reference-quantized-gpu
- ollama
- remote-nvidia
- remote-vllm
- runpod
- sambanova
- tgi
- together
- vllm-gpu
- dependencies.json