llama-stack-mirror/llama_stack/providers/remote/inference
Ashwin Bharambe · f34f22f8c7 · 2025-04-12 11:41:12 -07:00
feat: add batch inference API to llama stack inference (#1945)
# What does this PR do?

This PR adds two methods to the Inference API:
- `batch_completion`
- `batch_chat_completion`
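
As a rough illustration, here is how these might be invoked through the Python client, assuming the generated `llama_stack_client` picks up the new methods; the parameter names (`content_batch`, `messages_batch`) follow this description but are not verified against the final signatures:

```python
# Sketch only -- assumes the generated client exposes the new methods;
# parameter names are taken from this PR description, not verified.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# batch_completion: one call carrying a batch of raw-text prompts
completions = client.inference.batch_completion(
    model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    content_batch=["Prompt one", "Prompt two", "Prompt three"],
)

# batch_chat_completion: one call carrying a batch of independent chats
chats = client.inference.batch_chat_completion(
    model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages_batch=[
        [{"role": "user", "content": "What is 2 + 2?"}],
        [{"role": "user", "content": "Name a prime number."}],
    ],
)
```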

The motivation is evaluations that target a local inference engine (such as meta-reference or vllm), where batch APIs provide a substantial speedup.

Why did I not add this to `Api.batch_inference`, though? That would have meant a _lot_ more book-keeping given the structure of Llama Stack: I would have needed to introduce a notion of a "batch model" resource, set up routing based on it, and so on. That did not seem ideal.

So what's the future of the batch inference API? I am not sure. Perhaps we keep it for true _asynchronous_ execution, where you submit requests and get back a Job instance to check on later.
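
Purely as a thought experiment, that asynchronous surface might look something like the sketch below; none of these names exist today, every one is invented for illustration:

```python
# Hypothetical future API -- all names here are invented.
job = client.batch_inference.submit(
    model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages_batch=[
        [{"role": "user", "content": "What is 2 + 2?"}],
    ],
)
# The call returns immediately with a Job handle instead of results.
print(job.job_id, job.status)  # e.g. "in_progress"

# Results are fetched later, once the job completes.
results = client.batch_inference.get_results(job.job_id)
```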

## Test Plan

Run meta-reference-gpu using:
```bash
export INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct
export INFERENCE_CHECKPOINT_DIR=../checkpoints/Llama-4-Scout-17B-16E-Instruct-20250331210000
export MODEL_PARALLEL_SIZE=4
export MAX_BATCH_SIZE=32
export MAX_SEQ_LEN=6144

LLAMA_MODELS_DEBUG=1 llama stack run meta-reference-gpu
```

Then run the batch inference test case.
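For example (the test path and flags are illustrative; point pytest at wherever the batch inference tests live in your checkout):
```bash
# Illustrative invocation -- adjust the path and flags to your checkout
pytest -s -v tests/integration/inference/test_batch_inference.py \
  --stack-config=http://localhost:8321 \
  --text-model=$INFERENCE_MODEL
```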
| Name | Last commit | Date |
|---|---|---|
| anthropic | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| bedrock | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| cerebras | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| cerebras_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| databricks | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| fireworks | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| fireworks_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| gemini | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| groq | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| groq_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| nvidia | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| ollama | feat: add batch inference API to llama stack inference (#1945) | 2025-04-12 11:41:12 -07:00 |
| openai | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| passthrough | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| runpod | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| sambanova | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| sambanova_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| tgi | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| together | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| together_openai_compat | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| vllm | feat: add batch inference API to llama stack inference (#1945) | 2025-04-12 11:41:12 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |