llama-stack-mirror/llama_stack/providers/remote
Latest commit: ffae192540 — Bug fixes for together.ai OpenAI endpoints (Ben Browning)

After actually running the test_openai_completion.py tests against
together.ai, it turned out there were a couple of bugs in the initial
implementation. This commit fixes those.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-10 14:19:48 -04:00
agents         test: add unit test to ensure all config types are instantiable (#1601)       2025-03-12 22:29:58 -07:00
datasetio      refactor: extract pagination logic into shared helper function (#1770)        2025-03-31 13:08:29 -07:00
inference      Bug fixes for together.ai OpenAI endpoints                                    2025-04-10 14:19:48 -04:00
post_training  refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
safety         feat: added nvidia as safety provider (#1248)                                 2025-03-17 14:39:23 -07:00
tool_runtime   fix(api): don't return list for runtime tools (#1686)                         2025-04-01 09:53:11 +02:00
vector_io      chore: Updating Milvus Client calls to be non-blocking (#1830)                2025-03-28 22:14:07 -04:00
__init__.py    impls -> inline, adapters -> remote (#381)                                    2024-11-06 14:54:05 -08:00