Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-06-28 02:53:30 +00:00
feat(providers): Groq now uses LiteLLM openai-compat (#1303)
Groq has never supported raw completions anyhow, so this makes it easier to switch it to LiteLLM. All our test suite passes. I also updated all the openai-compat providers so they work with API keys passed from headers (`provider_data`).

## Test Plan

```bash
LLAMA_STACK_CONFIG=groq \
  pytest -s -v tests/client-sdk/inference/test_text_inference.py \
  --inference-model=groq/llama-3.3-70b-versatile --vision-inference-model=""
```

Also tested the openai, anthropic, and gemini providers. No regressions.
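To illustrate the header-based key passing mentioned above, here is a minimal sketch of a client request that supplies a Groq key per request via `provider_data`. The `X-LlamaStack-Provider-Data` header name, the `groq_api_key` field, and the default port follow Llama Stack's conventions, but treat the exact names as assumptions to verify against your installed version:

```python
# Minimal sketch: supplying a provider API key per request via the
# provider_data header instead of baking it into the run config.
# ASSUMPTIONS: the "X-LlamaStack-Provider-Data" header name, the
# "groq_api_key" field, and port 8321 follow Llama Stack's
# provider_data convention; verify against your installed version.
import json

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(
    base_url="http://localhost:8321",
    default_headers={
        "X-LlamaStack-Provider-Data": json.dumps({"groq_api_key": "gsk-..."})
    },
)

response = client.inference.chat_completion(
    model_id="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.completion_message.content)
```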
parent 564f0e5f93
commit 928a39d17b

23 changed files with 165 additions and 1004 deletions
The documentation diff (one of the 23 changed files) renames the default Groq model IDs to include the `groq/` provider prefix:

```diff
@@ -37,11 +37,11 @@ The following environment variables can be configured:

 The following models are available by default:

-- `meta-llama/Llama-3.1-8B-Instruct (llama3-8b-8192)`
-- `meta-llama/Llama-3.1-8B-Instruct (llama-3.1-8b-instant)`
-- `meta-llama/Llama-3-70B-Instruct (llama3-70b-8192)`
-- `meta-llama/Llama-3.3-70B-Instruct (llama-3.3-70b-versatile)`
-- `meta-llama/Llama-3.2-3B-Instruct (llama-3.2-3b-preview)`
+- `meta-llama/Llama-3.1-8B-Instruct (groq/llama3-8b-8192)`
+- `meta-llama/Llama-3.1-8B-Instruct (groq/llama-3.1-8b-instant)`
+- `meta-llama/Llama-3-70B-Instruct (groq/llama3-70b-8192)`
+- `meta-llama/Llama-3.3-70B-Instruct (groq/llama-3.3-70b-versatile)`
+- `meta-llama/Llama-3.2-3B-Instruct (groq/llama-3.2-3b-preview)`

 ### Prerequisite: API Keys
```
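Restated as plain data, the diff maps each `meta-llama/*` model name to one or more `groq/`-namespaced provider IDs. The mapping below comes straight from the diff; the resolver helper is hypothetical and only illustrates how a caller might pick the namespaced ID:

```python
# The mapping restates the doc diff above: each HF-style model name now
# maps to one or more "groq/"-prefixed provider model IDs.
GROQ_MODEL_ALIASES: dict[str, list[str]] = {
    "meta-llama/Llama-3.1-8B-Instruct": ["groq/llama3-8b-8192", "groq/llama-3.1-8b-instant"],
    "meta-llama/Llama-3-70B-Instruct": ["groq/llama3-70b-8192"],
    "meta-llama/Llama-3.3-70B-Instruct": ["groq/llama-3.3-70b-versatile"],
    "meta-llama/Llama-3.2-3B-Instruct": ["groq/llama-3.2-3b-preview"],
}


def resolve_groq_id(model_name: str) -> str:
    """Hypothetical helper: pick the first Groq provider ID for a model name."""
    try:
        return GROQ_MODEL_ALIASES[model_name][0]
    except KeyError:
        raise ValueError(f"no Groq alias registered for {model_name!r}") from None


assert resolve_groq_id("meta-llama/Llama-3.3-70B-Instruct") == "groq/llama-3.3-70b-versatile"
```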