phoenix-oss/llama-stack (forked from phoenix-oss/llama-stack-mirror)
Path: llama_stack/providers/adapters/inference (at commit e2a5a2e10d)
Latest commit a27a2cd2af by Yuan Tang: Add vLLM inference provider for OpenAI compatible vLLM server (#178), 2024-10-20 18:43:25 -07:00
This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
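As context for the vllm/ adapter in the listing below: the provider targets vLLM's OpenAI-compatible HTTP API. A minimal sketch of the request body such a server accepts is shown here; the base URL and model name are placeholder assumptions for illustration, not values taken from this repository.

```python
import json

# Placeholder: an assumed local vLLM server exposing the OpenAI-compatible API.
BASE_URL = "http://localhost:8000"


def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# Example payload; the model name is a hypothetical placeholder.
body = build_chat_request("my-llama-model", "Hello!")
print(json.dumps(body, indent=2))
```

The payload could then be POSTed to `BASE_URL + "/v1/chat/completions"` with any HTTP client; the adapter's job is to translate llama-stack inference calls into this request shape.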
bedrock/      Make all methods async def again; add completion() for meta-reference (#270)   2024-10-18 20:50:59 -07:00
databricks/   Make all methods async def again; add completion() for meta-reference (#270)   2024-10-18 20:50:59 -07:00
fireworks/    Make all methods async def again; add completion() for meta-reference (#270)   2024-10-18 20:50:59 -07:00
ollama/       update ollama for llama-guard3                                                 2024-10-19 17:26:18 -07:00
sample/       Remove "routing_table" and "routing_key" concepts for the user (#201)          2024-10-10 10:24:13 -07:00
tgi/          Make all methods async def again; add completion() for meta-reference (#270)   2024-10-18 20:50:59 -07:00
together/     Make all methods async def again; add completion() for meta-reference (#270)   2024-10-18 20:50:59 -07:00
vllm/         Add vLLM inference provider for OpenAI compatible vLLM server (#178)           2024-10-20 18:43:25 -07:00
__init__.py   API Updates (#73)                                                              2024-09-17 19:51:35 -07:00