llama-stack-mirror/llama_stack/core/routers
Matthew Farrellee f6d1867bf5 chore: remove batch-related APIs
APIs removed:
 - POST /v1/batch-inference/completion
 - POST /v1/batch-inference/chat-completion
 - POST /v1/inference/batch-completion
 - POST /v1/inference/batch-chat-completion

note -
 - batch-completion & batch-chat-completion were only implemented for inference=inline::meta-reference
 - the batch-inference endpoints were never implemented
2025-08-26 19:18:16 -04:00
__init__.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
datasets.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
eval_scoring.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
inference.py chore: remove batch-related APIs 2025-08-26 19:18:16 -04:00
safety.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
tool_runtime.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
vector_io.py refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00