phoenix-oss / llama-stack-mirror
mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-31 06:10:00 +00:00)
llama-stack-mirror / llama_stack / providers / inline / inference / vllm (at commit 24cfa1ef1a)
Latest commit 24cfa1ef1a by Ben Browning, 2025-04-09 15:47:02 -04:00
Mark inline vllm as OpenAI unsupported inference
Signed-off-by: Ben Browning <bbrownin@redhat.com>
__init__.py       chore: fix typing hints for get_provider_impl deps arguments (#1544)         2025-03-11 10:07:28 -07:00
config.py         test: add unit test to ensure all config types are instantiable (#1601)      2025-03-12 22:29:58 -07:00
openai_utils.py   refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
vllm.py           Mark inline vllm as OpenAI unsupported inference                             2025-04-09 15:47:02 -04:00