llama-stack-mirror/llama_stack/providers/remote/inference/openai
Matthew Farrellee f6d1867bf5 chore: remove batch-related APIs
APIs removed:
 - POST /v1/batch-inference/completion
 - POST /v1/batch-inference/chat-completion
 - POST /v1/inference/batch-completion
 - POST /v1/inference/batch-chat-completion

note -
 - batch-completion & batch-chat-completion were only implemented for inference=inline::meta-reference
 - the batch-inference endpoints were never implemented
2025-08-26 19:18:16 -04:00
__init__.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
config.py feat(openai): add configurable base_url support with OPENAI_BASE_URL env var (#2919) 2025-07-28 10:16:02 -07:00
models.py fix: starter template and litellm backward compat conflict for openai (#2885) 2025-07-24 17:28:37 +02:00
openai.py chore: remove batch-related APIs 2025-08-26 19:18:16 -04:00