phoenix-oss / llama-stack-mirror
Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-02 20:40:36 +00:00
llama-stack-mirror / llama_stack / templates / vllm-gpu (at commit 35a00d004a)
Latest commit: 3d4c53dfec by Dinesh Yeduguru, 2025-01-17 16:40:58 -08:00
add mcp runtime as default to all providers (#816)
"What does this PR do? This is needed to have the notebook work with MCP"
File          Last commit                                                             Date
__init__.py   Update more distribution docs to be simpler and partially codegen'ed   2024-11-20 22:03:44 -08:00
build.yaml    add mcp runtime as default to all providers (#816)                     2025-01-17 16:40:58 -08:00
run.yaml      add mcp runtime as default to all providers (#816)                     2025-01-17 16:40:58 -08:00
vllm.py       add mcp runtime as default to all providers (#816)                     2025-01-17 16:40:58 -08:00
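For context, the #816 change referenced above adds the Model Context Protocol (MCP) tool runtime as a default provider in this template's generated configuration. The fragment below is a minimal sketch of how such a tool_runtime entry could appear in a run.yaml, assuming the inline::vllm inference provider and the remote::model-context-protocol provider type used by llama-stack; the actual file in this directory is generated by vllm.py and contains more sections and different field values.

```yaml
# Illustrative fragment only; not the verbatim vllm-gpu run.yaml.
version: '2'
image_name: vllm-gpu
apis:
- inference
- tool_runtime
providers:
  inference:
  - provider_id: vllm
    provider_type: inline::vllm              # GPU-backed vLLM inference (assumed for this template)
    config:
      model: ${env.INFERENCE_MODEL}           # assumed: model chosen via environment variable
  tool_runtime:
  - provider_id: model-context-protocol
    provider_type: remote::model-context-protocol   # MCP runtime added as a default provider in #816
    config: {}
```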