phoenix-oss / llama-stack-mirror
Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-28 08:30:24 +00:00)
Path: llama-stack-mirror / llama_stack / providers
Latest commit: 8e9217774a "new prompt" by Eric Huang, 2025-04-25 10:54:40 -07:00
Name         | Last commit                                                                                                       | Date
inline       | fix: meta ref inference (#2022)                                                                                   | 2025-04-24 13:03:35 -07:00
registry     | fix: use torchao 0.8.0 for inference (#1925)                                                                      | 2025-04-10 13:39:20 -07:00
remote       | fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969)  | 2025-04-23 15:33:19 +02:00
tests        | refactor: move all llama code to models/llama out of meta reference (#1887)                                       | 2025-04-07 15:03:58 -07:00
utils        | new prompt                                                                                                        | 2025-04-25 10:54:40 -07:00
__init__.py  | API Updates (#73)                                                                                                 | 2024-09-17 19:51:35 -07:00
datatypes.py | feat: add health to all providers through providers endpoint (#1418)                                              | 2025-04-14 11:59:36 +02:00