llama-stack-mirror/llama_stack
Ben Browning c014571258 fix: OpenAI API - together.ai extra usage chunks
This fixes an issue where, with some models (i.e., the Llama 4 models),
together.ai sends a final usage chunk for streaming responses even
when the user did not ask to include usage.
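A minimal sketch of the kind of guard this implies, assuming the openai SDK's chunk types; `filter_extra_usage_chunks` and its parameters are illustrative names, not the actual Llama Stack code:

```python
# Illustrative sketch (not the actual Llama Stack code): drop a trailing
# usage-only chunk unless the client asked for usage via stream_options.
from typing import AsyncIterator

from openai.types.chat import ChatCompletionChunk


async def filter_extra_usage_chunks(
    chunks: AsyncIterator[ChatCompletionChunk],
    include_usage: bool,
) -> AsyncIterator[ChatCompletionChunk]:
    async for chunk in chunks:
        # Some providers emit a final chunk carrying usage stats and no
        # choices even when usage was not requested; skip it so the stream
        # matches what the client actually asked for.
        if not include_usage and chunk.usage is not None and not chunk.choices:
            continue
        yield chunk
```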

With this change, the OpenAI API verification tests now pass 100% when
using Llama Stack as your API server and together.ai as the backend
provider.

As part of this, I also cleaned up the streaming/non-streaming return
types of the `openai_chat_completion` method to keep type checking happy.
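Roughly, that means annotating the method with an explicit union over the
two cases; a sketch under assumed openai SDK types, with a simplified
parameter list rather than the real signature:

```python
# Sketch of the kind of union return annotation the cleanup refers to;
# the parameter list here is simplified and illustrative.
from typing import AsyncIterator, Union

from openai.types.chat import ChatCompletion, ChatCompletionChunk


async def openai_chat_completion(
    model: str,
    messages: list[dict],
    stream: bool = False,
) -> Union[ChatCompletion, AsyncIterator[ChatCompletionChunk]]:
    # Non-streaming calls return a single ChatCompletion; streaming calls
    # return an async iterator of ChatCompletionChunk objects.
    ...
```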

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-13 13:39:56 -04:00
apis fix: OpenAI API - together.ai extra usage chunks 2025-04-13 13:39:56 -04:00
cli fix: misleading help text for 'llama stack build' and 'llama stack run' (#1910) 2025-04-12 01:19:11 -07:00
distribution fix: OpenAI API - together.ai extra usage chunks 2025-04-13 13:39:56 -04:00
models feat: support '-' in tool names (#1807) 2025-04-12 14:23:03 -07:00
providers fix: OpenAI API - together.ai extra usage chunks 2025-04-13 13:39:56 -04:00
strong_typing chore: more mypy checks (ollama, vllm, ...) (#1777) 2025-04-01 17:12:39 +02:00
templates feat: add batch inference API to llama stack inference (#1945) 2025-04-12 11:41:12 -07:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: Remove style tags from log formatter (#1808) 2025-03-27 10:18:21 -04:00
schema_utils.py fix: dont check protocol compliance for experimental methods 2025-04-12 16:26:32 -07:00