This fixes an issue where, with some models (i.e., the Llama 4 models), together.ai sends a final usage chunk for streaming responses even if the user didn't ask to include usage. With this change, the OpenAI API verification tests now pass 100% when using Llama Stack as the API server and together.ai as the backend provider. As part of this, I also cleaned up the streaming/non-streaming return types of the `openai_chat_completion` method to keep type checking happy.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
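The patch itself isn't shown in this message, but the described fix amounts to filtering out a provider-injected usage chunk that the client never asked for. Below is a minimal sketch of that idea; the function name `drop_unrequested_usage_chunks` and the plain-dict chunk shape are illustrative assumptions, not the actual llama-stack code.

```python
from typing import Any, AsyncIterator


async def drop_unrequested_usage_chunks(
    chunks: AsyncIterator[dict[str, Any]],
    include_usage: bool,
) -> AsyncIterator[dict[str, Any]]:
    """Yield streaming chunks, skipping a trailing usage-only chunk the caller never requested."""
    async for chunk in chunks:
        # Hypothetical handling: some providers emit a final chunk that carries
        # only `usage` (with an empty `choices` list) even when the request did
        # not set stream_options.include_usage. Drop it so the stream matches
        # what the client actually asked for.
        if (
            not include_usage
            and chunk.get("usage") is not None
            and not chunk.get("choices")
        ):
            continue
        yield chunk
```

For the return-type cleanup, one common way to keep a type checker happy when a method returns either a full response or a stream is `typing.overload` on the `stream` flag. Again, this is a sketch under assumed names (`InferenceApi` and the stand-in response classes are placeholders), not the actual `openai_chat_completion` signature:

```python
from typing import AsyncIterator, Literal, Union, overload


class OpenAIChatCompletion: ...       # stand-in for the non-streaming response type
class OpenAIChatCompletionChunk: ...  # stand-in for one streaming chunk


class InferenceApi:
    @overload
    async def openai_chat_completion(
        self, *, model: str, messages: list[dict], stream: Literal[True]
    ) -> AsyncIterator[OpenAIChatCompletionChunk]: ...

    @overload
    async def openai_chat_completion(
        self, *, model: str, messages: list[dict], stream: Literal[False] = ...
    ) -> OpenAIChatCompletion: ...

    async def openai_chat_completion(
        self, *, model: str, messages: list[dict], stream: bool = False
    ) -> Union[OpenAIChatCompletion, AsyncIterator[OpenAIChatCompletionChunk]]:
        # The real implementation would dispatch on `stream`; omitted here.
        raise NotImplementedError
```

With the overloads in place, a type checker can narrow the result: `await api.openai_chat_completion(..., stream=True)` is known to be an async iterator of chunks, while the non-streaming call yields a single completion object.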