phoenix-oss/llama-stack (forked from phoenix-oss/llama-stack-mirror)
Commit 6192bf43a4
llama-stack / llama_stack / providers / utils / inference
Latest commit: 3b54ce3499 by Ashwin Bharambe, "remote::vllm now works with vision models", 2024-11-06 16:07:17 -08:00
__init__.py          Use inference APIs for executing Llama Guard (#121)                            2024-09-28 15:40:06 -07:00
model_registry.py    Remove "routing_table" and "routing_key" concepts for the user (#201)          2024-10-10 10:24:13 -07:00
openai_compat.py     Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376)  2024-11-05 16:22:33 -08:00
prompt_adapter.py    remote::vllm now works with vision models                                      2024-11-06 16:07:17 -08:00