Mirror of https://github.com/meta-llama/llama-stack.git
This doesn't get Groq to 100% on the OpenAI API verification tests, but it does get it to 88.2% with Llama Stack in the middle, compared to 61.8% when using an OpenAI client against Groq directly. The groq provider doesn't use litellm under the covers in its openai_chat_completion endpoint; instead it uses an AsyncOpenAI client directly, with some special handling to improve conformance of responses for response_format usage and tool calling.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
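A minimal sketch of the approach described above, assuming Groq's public OpenAI-compatible base URL. The class and helper names here (`GroqProvider`, `_prepare_params`) are illustrative placeholders, not the actual llama-stack implementation:

```python
# Illustrative sketch only: an AsyncOpenAI client pointed at Groq's
# OpenAI-compatible endpoint, replacing litellm for chat completions.
from openai import AsyncOpenAI


class GroqProvider:
    def __init__(self, api_key: str) -> None:
        # Groq exposes an OpenAI-compatible API at this base URL.
        self._client = AsyncOpenAI(
            base_url="https://api.groq.com/openai/v1",
            api_key=api_key,
        )

    async def openai_chat_completion(self, model: str, messages: list, **params):
        # Adjust request params before handing them to Groq.
        params = self._prepare_params(params)
        return await self._client.chat.completions.create(
            model=model, messages=messages, **params
        )

    def _prepare_params(self, params: dict) -> dict:
        # Placeholder for the response_format and tool-calling conformance
        # fixes mentioned in the commit message; the real logic lives in
        # the groq provider itself.
        return params
```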
Files:

- cerebras.yaml
- fireworks-llama-stack.yaml
- fireworks.yaml
- groq-llama-stack.yaml
- groq.yaml
- openai-llama-stack.yaml
- openai.yaml
- together-llama-stack.yaml
- together.yaml
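The paired `<provider>.yaml` / `<provider>-llama-stack.yaml` names above mirror the two setups compared in the commit message: the same OpenAI client pointed either straight at the provider or at a Llama Stack server proxying to it. A minimal sketch of that distinction, assuming Groq's base URL and a default local Llama Stack endpoint (the local port and path are assumptions):

```python
# Hypothetical harness showing the two config flavors; URLs and env var
# names are assumptions, not taken from the yaml files themselves.
import os

from openai import OpenAI

# "groq.yaml" flavor: OpenAI client talking straight to Groq.
direct = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

# "groq-llama-stack.yaml" flavor: same client, routed through a local
# Llama Stack server that proxies to Groq.
proxied = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)
```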