phoenix-oss / llama-stack-mirror
Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-29 03:14:19 +00:00)
llama_stack/providers/impls/vllm
Latest commit: 80ada04f76 by Yuan Tang, 2024-10-15 13:03:17 -07:00
Remove request arg from chat completion response processing (#240)
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
__init__.py    Fix incorrect completion() signature for Databricks provider (#236)    2024-10-11 08:47:57 -07:00
config.py      Inline vLLM inference provider (#181)                                  2024-10-05 23:34:16 -07:00
vllm.py        Remove request arg from chat completion response processing (#240)     2024-10-15 13:03:17 -07:00