Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-31 07:03:55 +00:00)
This doesn't get Groq to 100% on the OpenAI API verification tests, but it does get it to 88.2% with Llama Stack in the middle, compared to 61.8% when using an OpenAI client against Groq directly.

The groq provider does not use litellm under the covers in its `openai_chat_completion` endpoint; instead, it uses an `AsyncOpenAI` client directly, with some special handling to improve conformance of responses for `response_format` usage and tool calling.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
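The actual provider code lives in the repository; as a rough illustration only, here is a minimal sketch of the general pattern the commit message describes: pointing an `AsyncOpenAI` client at Groq's OpenAI-compatible endpoint and layering a small conformance shim on top. The base URL, the `chat_completion` wrapper, and the `_ensure_tool_call_ids` helper are assumptions for illustration, not the provider's real implementation.

```python
# Illustrative sketch only: direct AsyncOpenAI usage against an OpenAI-compatible
# Groq endpoint, with a hypothetical normalization pass over tool calls.
import os
import uuid

from openai import AsyncOpenAI

# Assumed OpenAI-compatible base URL for Groq; verify against Groq's docs.
GROQ_BASE_URL = "https://api.groq.com/openai/v1"

client = AsyncOpenAI(base_url=GROQ_BASE_URL, api_key=os.environ["GROQ_API_KEY"])


def _ensure_tool_call_ids(choice):
    """Hypothetical conformance fix: guarantee every tool call has a non-empty id,
    since OpenAI clients expect one when the tool call is replayed in a follow-up
    message."""
    for tool_call in choice.message.tool_calls or []:
        if not tool_call.id:
            tool_call.id = f"call_{uuid.uuid4().hex}"
    return choice


async def chat_completion(model: str, messages: list[dict], **kwargs):
    # response_format and tools are passed through as-is here; a real shim might
    # also downgrade an unsupported structured response_format or add prompt hints.
    response = await client.chat.completions.create(
        model=model, messages=messages, **kwargs
    )
    response.choices = [_ensure_tool_call_ids(c) for c in response.choices]
    return response
```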
Files in this directory:

- bedrock.md
- cerebras.md
- dell-tgi.md
- dell.md
- fireworks.md
- groq.md
- meta-reference-gpu.md
- meta-reference-quantized-gpu.md
- nvidia.md
- ollama.md
- passthrough.md
- remote-vllm.md
- sambanova.md
- tgi.md
- together.md