phoenix-oss / llama-stack
forked from phoenix-oss/llama-stack-mirror
llama-stack / llama_stack / providers / adapters / inference
Latest commit: b10e9f46bb by Ashwin Bharambe, 2024-11-06 14:42:44 -08:00
Enable remote::vllm (#384)
* Enable remote::vllm
* Kill the giant list of hard coded models
bedrock        add bedrock distribution code (#358), 2024-11-06 14:39:11 -08:00
databricks     completion() for tgi (#295), 2024-10-24 16:02:41 -07:00
fireworks      Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376), 2024-11-05 16:22:33 -08:00
ollama         Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376), 2024-11-05 16:22:33 -08:00
sample         Remove "routing_table" and "routing_key" concepts for the user (#201), 2024-10-10 10:24:13 -07:00
tgi            Kill `llama stack configure` (#371), 2024-11-06 13:32:10 -08:00
together       Enable vision models for (Together, Fireworks, Meta-Reference, Ollama) (#376), 2024-11-05 16:22:33 -08:00
vllm           Enable remote::vllm (#384), 2024-11-06 14:42:44 -08:00
__init__.py    API Updates (#73), 2024-09-17 19:51:35 -07:00