llama-stack-mirror/llama_stack/providers/registry
Matthew Farrellee d23607483f
chore: update the groq inference impl to use openai-python for openai-compat functions (#3348)
# What does this PR do?

Update the Groq inference provider to use `OpenAIMixin` for the openai-compat
endpoints.
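
As a rough sketch of the pattern this refactor follows (the hook names `get_api_key` / `get_base_url` are assumptions about the mixin's interface, not a verbatim copy of the repo), the adapter ends up delegating its openai-compat calls to `openai-python`:

```python
# Hypothetical sketch only; method names are illustrative, not the exact repo API.
from openai import AsyncOpenAI


class OpenAIMixin:
    """Provide openai-compat endpoints by delegating to the openai-python client."""

    def get_api_key(self) -> str:  # overridden by the concrete provider
        raise NotImplementedError

    def get_base_url(self) -> str:  # overridden by the concrete provider
        raise NotImplementedError

    @property
    def client(self) -> AsyncOpenAI:
        # Build an OpenAI-compatible client pointed at the provider's endpoint.
        return AsyncOpenAI(api_key=self.get_api_key(), base_url=self.get_base_url())

    async def openai_chat_completion(self, **params):
        # Pass the openai-compat request straight through to the client.
        return await self.client.chat.completions.create(**params)


class GroqInferenceAdapter(OpenAIMixin):
    def __init__(self, config):
        self.config = config

    def get_api_key(self) -> str:
        return self.config.api_key

    def get_base_url(self) -> str:
        # Groq's OpenAI-compatible API endpoint.
        return "https://api.groq.com/openai/v1"
```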

Changes on api.groq.com (illustrated by the sketch after this list):
- `json_schema` is now supported for specific models; see
https://console.groq.com/docs/structured-outputs#supported-models
- `response_format` with streaming is now supported for models that
support `response_format`
- Groq no longer returns a 400 error when tools are provided and
`tool_choice` is not "required"
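
A hedged sketch of exercising this behavior directly against Groq's openai-compat endpoint with `openai-python`; the model name and schema are placeholders, and `json_schema` support must be checked against the supported-models list linked above:

```python
# Illustrative only: model and schema are placeholders, not a tested configuration.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",
)

stream = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # placeholder; must be a structured-outputs model
    messages=[{"role": "user", "content": "Give me today's weather in Paris as JSON."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "weather",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "temp_c": {"type": "number"},
                },
                "required": ["city", "temp_c"],
            },
        },
    },
    stream=True,  # response_format combined with streaming, per the change above
)

# Print the streamed JSON as it arrives.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```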


## Test Plan

```
$ GROQ_API_KEY=... uv run llama stack build --image-type venv --providers inference=remote::groq --run
...
$ LLAMA_STACK_CONFIG=http://localhost:8321 uv run --group test pytest -v -ra --text-model groq/llama-3.3-70b-versatile tests/integration/inference/test_openai_completion.py -k 'not store'
...
SKIPPED [3] tests/integration/inference/test_openai_completion.py:44: Model groq/llama-3.3-70b-versatile hosted by remote::groq doesn't support OpenAI completions.
SKIPPED [3] tests/integration/inference/test_openai_completion.py:94: Model groq/llama-3.3-70b-versatile hosted by remote::groq doesn't support vllm extra_body parameters.
SKIPPED [4] tests/integration/inference/test_openai_completion.py:73: Model groq/llama-3.3-70b-versatile hosted by remote::groq doesn't support n param.
SKIPPED [1] tests/integration/inference/test_openai_completion.py:100: Model groq/llama-3.3-70b-versatile hosted by remote::groq doesn't support chat completion calls with base64 encoded files.
======================= 8 passed, 11 skipped, 8 deselected, 2 warnings in 5.13s ========================
```

---------

Co-authored-by: raghotham <rsm@meta.com>
2025-09-06 15:36:27 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| agents.py | fix: only load mcp when enabled in tool_group (#2621) | 2025-07-04 20:27:05 +05:30 |
| batches.py | feat: add batches API with OpenAI compatibility (with inference replay) (#3162) | 2025-08-15 15:34:15 -07:00 |
| datasetio.py | docs: auto generated documentation for providers (#2543) | 2025-06-30 15:13:20 +02:00 |
| eval.py | docs: auto generated documentation for providers (#2543) | 2025-06-30 15:13:20 +02:00 |
| files.py | feat: Add S3 Files Provider (#3202) | 2025-08-22 10:38:59 -04:00 |
| inference.py | chore: update the groq inference impl to use openai-python for openai-compat functions (#3348) | 2025-09-06 15:36:27 -07:00 |
| post_training.py | feat(distro): no huggingface provider for starter (#3258) | 2025-08-26 14:06:36 -07:00 |
| safety.py | docs: auto generated documentation for providers (#2543) | 2025-06-30 15:13:20 +02:00 |
| scoring.py | docs: auto generated documentation for providers (#2543) | 2025-06-30 15:13:20 +02:00 |
| telemetry.py | docs: auto generated documentation for providers (#2543) | 2025-06-30 15:13:20 +02:00 |
| tool_runtime.py | feat: Updating Rag Tool to use Files API and Vector Stores API (#3344) | 2025-09-06 07:26:34 -06:00 |
| vector_io.py | feat: implement keyword, vector and hybrid search inside vector stores for PGVector provider (#3064) | 2025-08-29 16:30:12 +02:00 |