llama-stack/llama_stack/apis
Ben Browning 2b2db5fbda
feat: OpenAI-Compatible models, completions, chat/completions (#1894)
# What does this PR do?

This stubs in some OpenAI server-side compatibility with three new
endpoints:

* /v1/openai/v1/models
* /v1/openai/v1/completions
* /v1/openai/v1/chat/completions

This gives common inference apps that use OpenAI clients the ability to
talk to Llama Stack via an endpoint like
http://localhost:8321/v1/openai/v1.

The two "v1" instances in there isn't awesome, but the thinking is that
Llama Stack's API is v1 and then our OpenAI compatibility layer is
compatible with OpenAI V1. And, some OpenAI clients implicitly assume
the URL ends with "v1", so this gives maximum compatibility.
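
For example, pointing the official OpenAI Python client at that base URL
should just work (a minimal sketch; it assumes a running distribution, and
the model name must be one your stack has registered):

```python
from openai import OpenAI

# Point the standard OpenAI client at the Llama Stack compatibility layer.
# The api_key is required by the client but unused in this example setup.
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",  # any model registered with your stack
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```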

The OpenAI models endpoint is implemented in the routing layer and simply
returns all the models Llama Stack knows about.
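
For instance, the standard models listing call should enumerate everything
the stack has registered (same assumptions as the sketch above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# /v1/openai/v1/models enumerates every model Llama Stack knows about.
for model in client.models.list():
    print(model.id)
```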

The following providers should be working with the new OpenAI
completions and chat/completions API:
* remote::anthropic (untested)
* remote::cerebras-openai-compat (untested)
* remote::fireworks (tested)
* remote::fireworks-openai-compat (untested)
* remote::gemini (untested)
* remote::groq-openai-compat (untested)
* remote::nvidia (tested)
* remote::ollama (tested)
* remote::openai (untested)
* remote::passthrough (untested)
* remote::sambanova-openai-compat (untested)
* remote::together (tested)
* remote::together-openai-compat (untested)
* remote::vllm (tested)

The goal is to support this for every inference provider: for
OpenAI-compatible providers, we proxy directly to the provider's OpenAI
endpoint. For providers that don't have an OpenAI-compatible API, we'll
add a mixin that translates incoming OpenAI requests into Llama Stack
inference requests and translates the Llama Stack inference responses
back into OpenAI responses.
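
As a rough sketch of the shape that mixin could take (every name below is
hypothetical, not the actual llama-stack implementation):

```python
from dataclasses import dataclass


@dataclass
class LlamaStackMessage:  # hypothetical stand-in for the real message type
    role: str
    content: str


class OpenAICompatMixin:
    """Hypothetical mixin: adapts OpenAI-style chat requests onto a
    provider's existing Llama Stack chat_completion() method."""

    async def openai_chat_completion(self, model: str, messages: list[dict], **params) -> dict:
        # 1. Translate the incoming OpenAI request into Llama Stack types.
        ls_messages = [LlamaStackMessage(m["role"], m["content"]) for m in messages]
        # 2. Call the provider's native inference method (supplied by the
        #    class this is mixed into).
        ls_response = await self.chat_completion(model_id=model, messages=ls_messages)
        # 3. Translate the Llama Stack response back into an OpenAI-shaped body.
        return {
            "object": "chat.completion",
            "model": model,
            "choices": [
                {
                    "index": 0,
                    "message": {"role": "assistant", "content": ls_response.content},
                    "finish_reason": "stop",
                }
            ],
        }
```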

This is related to #1817 but is a bit larger in scope than chat
completions alone, as I have real use cases that also need the older
completions API.
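
That older endpoint takes a raw prompt rather than a message list, e.g.
(same sketch assumptions as above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# The legacy /completions endpoint takes a raw prompt instead of messages.
response = client.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    prompt="The capital of France is",
    max_tokens=32,
)
print(response.choices[0].text)
```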

## Test Plan

### vLLM

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" llama stack build --template remote-vllm --image-type venv --run

LLAMA_STACK_CONFIG=http://localhost:8321 INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" python -m pytest -v tests/integration/inference/test_openai_completion.py --text-model "meta-llama/Llama-3.2-3B-Instruct"
```

### ollama
```
INFERENCE_MODEL="llama3.2:3b-instruct-q8_0" llama stack build --template ollama --image-type venv --run

LLAMA_STACK_CONFIG=http://localhost:8321 INFERENCE_MODEL="llama3.2:3b-instruct-q8_0" python -m pytest -v tests/integration/inference/test_openai_completion.py --text-model "llama3.2:3b-instruct-q8_0"
```



## Documentation

Run a Llama Stack distribution that uses one of the providers listed
above. Then, use your favorite OpenAI client to send completion or chat
completion requests with the base_url set to
http://localhost:8321/v1/openai/v1. Replace "localhost:8321" with the
host and port of your Llama Stack server, if different.
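
For example, streaming should also flow through the same layer, assuming
the underlying provider supports it (a sketch; substitute a model your
distribution actually serves):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

# Streamed chat completion through the OpenAI compatibility layer.
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about llamas."}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
print()
```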

---------

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-04-11 13:14:17 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| agents | feat(telemetry): clean up spans (#1760) | 2025-03-21 20:05:11 -07:00 |
| batch_inference | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| benchmarks | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| common | refactor: extract pagination logic into shared helper function (#1770) | 2025-03-31 13:08:29 -07:00 |
| datasetio | refactor: extract pagination logic into shared helper function (#1770) | 2025-03-31 13:08:29 -07:00 |
| datasets | chore: Don't set type variables from register_schema() (#1713) | 2025-03-19 20:29:00 -07:00 |
| eval | fix: fix jobs api literal return type (#1757) | 2025-03-21 14:04:21 -07:00 |
| files | feat(api): don't return a payload on file delete (#1640) | 2025-03-25 17:12:36 -07:00 |
| inference | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| inspect | chore: deprecate /v1/inspect/providers (#1678) | 2025-03-19 20:27:06 -07:00 |
| models | feat: OpenAI-Compatible models, completions, chat/completions (#1894) | 2025-04-11 13:14:17 -07:00 |
| post_training | fix: Restore discriminator for AlgorithmConfig (#1706) | 2025-03-20 07:33:26 -07:00 |
| providers | fix: OpenAPI with provider get (#1627) | 2025-03-13 19:56:32 -07:00 |
| safety | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| scoring | docs: api documentation for agents/eval/scoring/datasets (#1400) | 2025-03-05 09:40:24 -08:00 |
| scoring_functions | chore: Don't set type variables from register_schema() (#1713) | 2025-03-19 20:29:00 -07:00 |
| shields | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| synthetic_data_generation | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| telemetry | chore: Don't set type variables from register_schema() (#1713) | 2025-03-19 20:29:00 -07:00 |
| tools | fix(api): don't return list for runtime tools (#1686) | 2025-04-01 09:53:11 +02:00 |
| vector_dbs | fix: return 4xx for non-existent resources in GET requests (#1635) | 2025-03-18 14:06:53 -07:00 |
| vector_io | chore: mypy violations cleanup for inline::{telemetry,tool_runtime,vector_io} (#1711) | 2025-03-20 10:01:10 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | feat(api): don't return a payload on file delete (#1640) | 2025-03-25 17:12:36 -07:00 |
| `resource.py` | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| `version.py` | llama-stack version alpha -> v1 | 2025-01-15 05:58:09 -08:00 |