Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-30 19:29:59 +00:00
This stubs in some OpenAI server-side compatibility with three new endpoints:

- /v1/openai/v1/models
- /v1/openai/v1/completions
- /v1/openai/v1/chat/completions

This gives common inference apps using OpenAI clients the ability to talk to Llama Stack using an endpoint like http://localhost:8321/v1/openai/v1. The two "v1" instances in there aren't awesome, but the thinking is that Llama Stack's API is v1, and our OpenAI compatibility layer is compatible with OpenAI v1. Also, some OpenAI clients implicitly assume the URL ends with "v1", so this gives maximum compatibility.

The OpenAI models endpoint is implemented in the routing layer and just returns all the models Llama Stack knows about. The completion and chat completion endpoints are only actually implemented for the remote-vllm provider right now, which simply proxies those requests to the backend vLLM.

The goal is to support this for every inference provider: proxying directly to the provider's OpenAI endpoint for OpenAI-compatible providers, and, for providers that don't have an OpenAI-compatible API, adding a mixin that translates incoming OpenAI requests into Llama Stack inference requests and translates the Llama Stack inference responses back into OpenAI responses.
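To make the URL scheme concrete, here is a minimal stdlib-only sketch of what an OpenAI-style chat completion request against the compatibility prefix looks like. The model name is a placeholder, not a real registered model, and the payload shape follows the standard OpenAI chat-completions schema; the port 8321 matches the example above.

```python
import json
from urllib.request import Request

# Llama Stack's OpenAI compatibility prefix: Llama Stack's API is v1,
# and the nested "v1" matches OpenAI's own API versioning.
BASE_URL = "http://localhost:8321/v1/openai/v1"

def build_chat_completion_request(model: str, messages: list[dict]) -> Request:
    """Build (but do not send) an OpenAI-style chat completion request
    targeting the compatibility endpoint."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "example-model" is a placeholder; use a model returned by /v1/openai/v1/models.
req = build_chat_completion_request(
    "example-model",
    [{"role": "user", "content": "Hello"}],
)
print(req.full_url)  # http://localhost:8321/v1/openai/v1/chat/completions
```

In practice an OpenAI SDK client can simply be configured with this prefix as its base URL, which is exactly why the trailing "v1" matters for clients that assume it.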
59 lines
1.2 KiB
Text
# This file was autogenerated by uv via the following command:
# uv export --frozen --no-hashes --no-emit-project --output-file=requirements.txt
annotated-types==0.7.0
anyio==4.8.0
attrs==25.1.0
blobfile==3.0.0
certifi==2025.1.31
charset-normalizer==3.4.1
click==8.1.8
colorama==0.4.6 ; sys_platform == 'win32'
distro==1.9.0
exceptiongroup==1.2.2 ; python_full_version < '3.11'
filelock==3.17.0
fire==0.7.0
fsspec==2024.12.0
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
huggingface-hub==0.29.0
idna==3.10
jinja2==3.1.6
jiter==0.8.2
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
llama-stack-client==0.2.1
lxml==5.3.1
markdown-it-py==3.0.0
markupsafe==3.0.2
mdurl==0.1.2
numpy==2.2.3
openai==1.71.0
packaging==24.2
pandas==2.2.3
pillow==11.1.0
prompt-toolkit==3.0.50
pyaml==25.1.0
pycryptodomex==3.21.0
pydantic==2.10.6
pydantic-core==2.27.2
pygments==2.19.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2025.1
pyyaml==6.0.2
referencing==0.36.2
regex==2024.11.6
requests==2.32.3
rich==13.9.4
rpds-py==0.22.3
setuptools==75.8.0
six==1.17.0
sniffio==1.3.1
termcolor==2.5.0
tiktoken==0.9.0
tqdm==4.67.1
typing-extensions==4.12.2
tzdata==2025.1
urllib3==2.3.0
wcwidth==0.2.13