feat(providers): Groq now uses LiteLLM openai-compat (#1303)

Groq has never supported raw completions anyway, so switching the provider over to LiteLLM's openai-compat layer is straightforward. Our entire test suite passes.

I also updated all the openai-compat providers so they accept API keys passed
via request headers (`provider_data`).
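
For illustration, a minimal sketch of the per-request key mechanism. The `X-LlamaStack-Provider-Data` header and the `groq_api_key` field are assumptions about the provider-data plumbing, not part of this diff:

```python
# Sketch: supplying a provider API key per request via provider_data.
# The header name and the "groq_api_key" field are assumptions here.
import json

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(
    base_url="http://localhost:8321",
    default_headers={
        # The server parses this JSON and hands it to the provider as provider_data
        "X-LlamaStack-Provider-Data": json.dumps({"groq_api_key": "gsk-..."}),
    },
)

response = client.inference.chat_completion(
    model_id="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello!"}],
)
```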

## Test Plan

```bash
LLAMA_STACK_CONFIG=groq \
   pytest -s -v tests/client-sdk/inference/test_text_inference.py \
   --inference-model=groq/llama-3.3-70b-versatile --vision-inference-model=""
```

Also tested the openai, anthropic, and gemini providers; no regressions.
Author: Ashwin Bharambe, 2025-02-27 13:16:50 -08:00 (committed by GitHub)
Parent: 564f0e5f93
Commit: 928a39d17b
23 changed files with 165 additions and 1004 deletions

Representative hunks — the Groq model entries gain the `groq/` prefix and `_MODEL_ENTRIES` becomes `MODEL_ENTRIES`:

```diff
@@ -7,21 +7,21 @@
 from llama_stack.models.llama.sku_list import CoreModelId
 from llama_stack.providers.utils.inference.model_registry import build_model_entry
 
-_MODEL_ENTRIES = [
+MODEL_ENTRIES = [
     build_model_entry(
-        "llama3-8b-8192",
+        "groq/llama3-8b-8192",
         CoreModelId.llama3_1_8b_instruct.value,
     ),
     build_model_entry(
-        "llama-3.1-8b-instant",
+        "groq/llama-3.1-8b-instant",
         CoreModelId.llama3_1_8b_instruct.value,
     ),
     build_model_entry(
-        "llama3-70b-8192",
+        "groq/llama3-70b-8192",
         CoreModelId.llama3_70b_instruct.value,
     ),
     build_model_entry(
-        "llama-3.3-70b-versatile",
+        "groq/llama-3.3-70b-versatile",
         CoreModelId.llama3_3_70b_instruct.value,
     ),
     # Groq only contains a preview version for llama-3.2-3b
@@ -29,7 +29,7 @@ _MODEL_ENTRIES = [
     # to pass the test fixture
     # TODO(aidand): Replace this with a stable model once Groq supports it
     build_model_entry(
-        "llama-3.2-3b-preview",
+        "groq/llama-3.2-3b-preview",
         CoreModelId.llama3_2_3b_instruct.value,
     ),
 ]
```
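
The `groq/` prefix is what LiteLLM dispatches on, which is why the provider model ids gained it. A quick sketch of the equivalent direct LiteLLM call — the model id is taken from the diff above, the rest is standard LiteLLM usage:

```python
# Sketch: LiteLLM routes on the "groq/" model prefix.
import litellm

response = litellm.completion(
    model="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello!"}],
    # api_key may also come from the GROQ_API_KEY environment variable
    api_key="gsk-...",
)
print(response.choices[0].message.content)
```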