phoenix-oss/llama-stack (forked from phoenix-oss/llama-stack-mirror)
llama-stack / llama_stack / templates / vllm-gpu (at commit 530d4bdfe1)
Latest commit: 2f38851751 by ehhuang
chore: Revert "chore(telemetry): remove service_name entirely" (#1785)
Reverts meta-llama/llama-stack#1755; closes #1781
2025-03-25 14:42:05 -07:00
__init__.py   Update more distribution docs to be simpler and partially codegen'ed     2024-11-20 22:03:44 -08:00
build.yaml    chore: move embedding deps to RAG tool where they are needed (#1210)     2025-02-21 11:33:41 -08:00
run.yaml      chore: Revert "chore(telemetry): remove service_name entirely" (#1785)   2025-03-25 14:42:05 -07:00
vllm.py       fix: Default to port 8321 everywhere (#1734)                             2025-03-20 15:50:41 -07:00