phoenix-oss/llama-stack-mirror
Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)
llama_stack/providers (at commit a5d6ab16b2)
Latest commit a5d6ab16b2 by Ashwin Bharambe (2025-04-24 11:27:49 -07:00): fix: meta-reference parallel utils bug, use isinstance not equality
Name          Date                        Last commit
inline        2025-04-24 11:27:49 -07:00  fix: meta-reference parallel utils bug, use isinstance not equality
registry      2025-04-10 13:39:20 -07:00  fix: use torchao 0.8.0 for inference (#1925)
remote        2025-04-23 15:33:19 +02:00  fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969)
tests         2025-04-07 15:03:58 -07:00  refactor: move all llama code to models/llama out of meta reference (#1887)
utils         2025-04-17 11:16:04 -07:00  fix: OAI compat endpoint for meta reference inference provider (#1962)
__init__.py   2024-09-17 19:51:35 -07:00  API Updates (#73)
datatypes.py  2025-04-14 11:59:36 +02:00  feat: add health to all providers through providers endpoint (#1418)
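
The latest commit on `inline` replaces an exact type comparison with `isinstance`. A minimal sketch of that class of bug, using hypothetical sentinel types rather than the actual parallel-utils code:

```python
# Hypothetical message types; the real fix lives in the meta-reference
# parallel utils, but the failure mode is the same for any class hierarchy.
class CancelSentinel:
    """Marker message telling a worker to stop."""

class TracedCancelSentinel(CancelSentinel):
    """Subclass of the marker carrying extra debug context."""

def should_stop_buggy(msg: object) -> bool:
    # Bug: exact type equality silently misses subclasses.
    return type(msg) == CancelSentinel

def should_stop_fixed(msg: object) -> bool:
    # Fix: isinstance honors the full inheritance chain.
    return isinstance(msg, CancelSentinel)

assert not should_stop_buggy(TracedCancelSentinel())  # subclass slips through
assert should_stop_fixed(TracedCancelSentinel())      # handled correctly
```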
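
The `remote` commit (#1969) makes the vLLM client lazily initialized. A sketch of the general pattern, assuming an httpx-based client and an illustrative `RemoteVLLMAdapter` class (not the actual llama-stack implementation): building an async client eagerly at startup can bind it to an event loop that is later closed, so the client is instead created on first use.

```python
from typing import Optional

import httpx

class RemoteVLLMAdapter:
    """Illustrative adapter; the real llama-stack code differs."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url
        # Do not build the client here: a client created at startup can
        # outlive its event loop and fail with "event loop is closed".
        self._client: Optional[httpx.AsyncClient] = None

    @property
    def client(self) -> httpx.AsyncClient:
        # First access happens inside a running request, so the client
        # binds to the event loop that is actually alive at call time.
        if self._client is None:
            self._client = httpx.AsyncClient(base_url=self.base_url)
        return self._client

    async def health(self) -> int:
        resp = await self.client.get("/health")
        return resp.status_code
```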