phoenix-oss / llama-stack (forked from phoenix-oss/llama-stack-mirror)
llama-stack / llama_stack / providers / inline / inference / meta_reference

Latest commit: a5d6ab16b2 by Ashwin Bharambe (2025-04-24 11:27:49 -07:00)
fix: meta-reference parallel utils bug, use isinstance not equality
__init__.py        refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
common.py          refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
config.py          feat: add batch inference API to llama stack inference (#1945)                2025-04-12 11:41:12 -07:00
generators.py      feat: add batch inference API to llama stack inference (#1945)                2025-04-12 11:41:12 -07:00
inference.py       fix: OAI compat endpoint for meta reference inference provider (#1962)        2025-04-17 11:16:04 -07:00
model_parallel.py  feat: add batch inference API to llama stack inference (#1945)                2025-04-12 11:41:12 -07:00
parallel_utils.py  fix: meta-reference parallel utils bug, use isinstance not equality           2025-04-24 11:27:49 -07:00
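The parallel_utils commit message notes replacing an equality check with `isinstance`. As a hedged illustration of why that matters (the class names below are hypothetical, not the repository's actual code): a `type(...) ==` comparison fails for subclasses, while `isinstance` accepts them, which is the usual source of this kind of bug when messages are passed between worker processes.

```python
# Hypothetical message types, for illustration only.
class TaskResponse:
    """Base class for messages returned by a worker."""
    pass

class StreamingTaskResponse(TaskResponse):
    """A subclass a worker might actually send."""
    pass

def is_response_by_equality(msg) -> bool:
    # Buggy pattern: exact-type equality misses subclasses.
    return type(msg) == TaskResponse

def is_response_by_isinstance(msg) -> bool:
    # Correct pattern: isinstance matches the base class and any subclass.
    return isinstance(msg, TaskResponse)

msg = StreamingTaskResponse()
print(is_response_by_equality(msg))    # False -- subclass silently rejected
print(is_response_by_isinstance(msg))  # True  -- subclass handled
```

The equality form only ever matches the exact class, so any refinement of the message hierarchy silently breaks dispatch; `isinstance` is the idiomatic check.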