llama-stack-mirror/llama_stack/providers/remote/inference
Ben Browning a193c9fc3f Add OpenAI-Compatible models, completions, chat/completions endpoints
This stubs in some OpenAI server-side compatibility with three new
endpoints:

/v1/openai/v1/models
/v1/openai/v1/completions
/v1/openai/v1/chat/completions

This gives common inference apps using OpenAI clients the ability to
talk to Llama Stack using an endpoint like
http://localhost:8321/v1/openai/v1.
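As an illustrative sketch (stdlib only, assuming the default port 8321 from above), the three new endpoints all hang off that OpenAI-compatible base URL, which is exactly the prefix an OpenAI client would be configured with:

```python
# Illustrative only: how the three new endpoint paths resolve under the
# base URL that an OpenAI client would be pointed at.
BASE_URL = "http://localhost:8321/v1/openai/v1"

ENDPOINTS = [
    f"{BASE_URL}/{path}"
    for path in ("models", "completions", "chat/completions")
]

for url in ENDPOINTS:
    print(url)
```

An OpenAI client given `base_url=BASE_URL` would construct these same URLs when calling its models, completions, and chat completions methods.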

The two "v1" instances in there aren't ideal, but the thinking is
that Llama Stack's API is v1 and our OpenAI compatibility layer
is compatible with OpenAI V1. Also, some OpenAI clients implicitly
assume the URL ends with "v1", so this gives maximum compatibility.

The OpenAI models endpoint is implemented in the routing layer, and
just returns all the models Llama Stack knows about.

The completions and chat completions endpoints are currently
implemented only for the remote-vllm provider, which just proxies
the requests through to the backend vLLM.

The goal is to support this for every inference provider - proxying
directly to the provider's OpenAI endpoint for OpenAI-compatible
providers. For providers that don't have an OpenAI-compatible API,
we'll add a mixin that translates incoming OpenAI requests into
Llama Stack inference requests and translates the Llama Stack
inference responses back into OpenAI responses.
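A minimal sketch of what such a translation mixin might look like. All names here (`OpenAICompatTranslationMixin`, `llama_stack_chat`, the request/response field names) are hypothetical placeholders, not the actual Llama Stack API:

```python
# Hypothetical sketch of the translation mixin described above.
# `llama_stack_chat` stands in for whatever native inference method a
# concrete provider supplies; field names are illustrative.
from typing import Any


class OpenAICompatTranslationMixin:
    """Adapt OpenAI-style chat requests for providers that lack a
    native OpenAI-compatible API (illustrative only)."""

    def openai_chat_completion(self, request: dict[str, Any]) -> dict[str, Any]:
        # 1. Translate the incoming OpenAI request into a Llama Stack
        #    inference request.
        ls_request = {
            "model_id": request["model"],
            "messages": request["messages"],
            "sampling_params": {
                "temperature": request.get("temperature", 1.0),
                "max_tokens": request.get("max_tokens"),
            },
        }
        # 2. Run native Llama Stack inference (provided by the
        #    concrete provider class this mixin is combined with).
        ls_response = self.llama_stack_chat(ls_request)
        # 3. Translate the Llama Stack response back into the OpenAI
        #    chat completion response shape.
        return {
            "object": "chat.completion",
            "model": request["model"],
            "choices": [
                {
                    "index": 0,
                    "message": {
                        "role": "assistant",
                        "content": ls_response["content"],
                    },
                    "finish_reason": "stop",
                }
            ],
        }
```

The mixin pattern keeps the OpenAI-shape translation in one place while each provider contributes only its native inference call.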
2025-04-09 15:47:01 -04:00
anthropic feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
bedrock refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
cerebras refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
cerebras_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
databricks refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
fireworks test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
fireworks_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
gemini feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
groq test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
groq_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
nvidia refactor: move all llama code to models/llama out of meta reference (#1887) 2025-04-07 15:03:58 -07:00
ollama Add OpenAI-Compatible models, completions, chat/completions endpoints 2025-04-09 15:47:01 -04:00
openai feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
passthrough fix: passthrough impl response.content.text (#1665) 2025-03-17 13:42:08 -07:00
runpod test: add unit test to ensure all config types are instantiable (#1601) 2025-03-12 22:29:58 -07:00
sambanova test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
sambanova_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
tgi chore: more mypy checks (ollama, vllm, ...) (#1777) 2025-04-01 17:12:39 +02:00
together test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
together_openai_compat test: verification on provider's OAI endpoints (#1893) 2025-04-07 23:06:28 -07:00
vllm Add OpenAI-Compatible models, completions, chat/completions endpoints 2025-04-09 15:47:01 -04:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00