llama-stack-mirror/llama_stack/providers/inline
Ben Browning a193c9fc3f Add OpenAI-Compatible models, completions, chat/completions endpoints
This stubs in some OpenAI server-side compatibility with three new
endpoints:

/v1/openai/v1/models
/v1/openai/v1/completions
/v1/openai/v1/chat/completions

This gives common inference apps using OpenAI clients the ability to
talk to Llama Stack using an endpoint like
http://localhost:8321/v1/openai/v1 .

The two "v1" segments in the path aren't ideal, but the thinking is
that Llama Stack's API is v1 and our OpenAI compatibility layer is
compatible with OpenAI v1. Also, some OpenAI clients implicitly
assume the base URL ends with "v1", so this gives maximum
compatibility.
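As a rough sketch of why the trailing "v1" matters: OpenAI-style
clients typically join relative paths like "models" or
"chat/completions" onto a configured base URL. The helper below is
illustrative only (not Llama Stack code), using the base URL from this
commit:

```python
# Illustrative only: shows how OpenAI-style clients derive endpoint
# URLs from a configured base URL ending in "v1".
LLAMA_STACK_BASE = "http://localhost:8321/v1/openai/v1"

def endpoint(path: str) -> str:
    """Join a relative OpenAI API path onto the compatibility base URL."""
    return f"{LLAMA_STACK_BASE.rstrip('/')}/{path.lstrip('/')}"

print(endpoint("models"))            # http://localhost:8321/v1/openai/v1/models
print(endpoint("completions"))       # http://localhost:8321/v1/openai/v1/completions
print(endpoint("chat/completions"))  # http://localhost:8321/v1/openai/v1/chat/completions
```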

The OpenAI models endpoint is implemented in the routing layer and
simply returns all the models Llama Stack knows about.

The completion and chat completion endpoints are only implemented for
the remote-vllm provider right now, which simply proxies those
requests to the backend vLLM server.

The goal is to support this for every inference provider: proxying
directly to the provider's OpenAI endpoint for OpenAI-compatible
providers. For providers without an OpenAI-compatible API, we'll add
a mixin that translates incoming OpenAI requests into Llama Stack
inference requests and translates the Llama Stack inference responses
back into OpenAI responses.
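A minimal sketch of what such a translation mixin could look like. All
names here (OpenAICompatMixin, chat_completion, the request/response
shapes) are assumptions for illustration, not the actual Llama Stack
API:

```python
# Hypothetical sketch of a translation mixin for providers without a
# native OpenAI-compatible API. Names and shapes are assumptions.
import time
import uuid


class OpenAICompatMixin:
    """Wraps a provider's native chat interface in OpenAI envelopes."""

    def chat_completion(self, model: str, messages: list) -> str:
        # Supplied by the concrete inference provider.
        raise NotImplementedError

    def openai_chat_completion(self, request: dict) -> dict:
        # OpenAI request -> provider-native inference call.
        text = self.chat_completion(request["model"], request["messages"])
        # Provider response -> OpenAI-style chat completion response.
        return {
            "id": f"chatcmpl-{uuid.uuid4().hex}",
            "object": "chat.completion",
            "created": int(time.time()),
            "model": request["model"],
            "choices": [
                {
                    "index": 0,
                    "message": {"role": "assistant", "content": text},
                    "finish_reason": "stop",
                }
            ],
        }
```

A provider would mix this in and implement only its native
chat_completion; the mixin handles the OpenAI request/response
translation on both sides.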
2025-04-09 15:47:01 -04:00
agents chore: remove unused tempdir in agent (#1896) 2025-04-09 09:43:48 +02:00
datasetio refactor: extract pagination logic into shared helper function (#1770) 2025-03-31 13:08:29 -07:00
eval fix: fix jobs api literal return type (#1757) 2025-03-21 14:04:21 -07:00
inference Add OpenAI-Compatible models, completions, chat/completions endpoints 2025-04-09 15:47:01 -04:00
ios/inference chore: removed executorch submodule (#1265) 2025-02-25 21:57:21 -08:00
post_training refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
safety refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
scoring fix: a couple of tests were broken and not yet exercised by our per-PR test workflow 2025-03-21 12:12:14 -07:00
telemetry feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
tool_runtime fix(api): don't return list for runtime tools (#1686) 2025-04-01 09:53:11 +02:00
vector_io chore: Updating sqlite-vec to make non-blocking calls (#1762) 2025-03-23 17:25:44 -07:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00