phoenix-oss/llama-stack (forked from phoenix-oss/llama-stack-mirror)
llama_stack/providers
Latest commit 4205376653 by Matthew Farrellee, 2025-04-17 06:50:40 -07:00:
chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985)
see https://build.nvidia.com/meta/llama-3_3-70b-instruct
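For context, the commit above registers meta/llama-3.3-70b-instruct with the remote NVIDIA inference provider. A minimal sketch of calling that model through a running llama-stack server with the llama_stack_client package is shown below; the base URL/port and the exact registered model identifier are assumptions and depend on how the distribution is configured.

```python
# Minimal sketch, not taken from this repository: assumes a llama-stack server
# is already running with the remote NVIDIA inference provider configured, and
# that the model is registered under the id used in the commit message.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed default port

response = client.inference.chat_completion(
    model_id="meta/llama-3.3-70b-instruct",  # identifier may differ per distribution
    messages=[{"role": "user", "content": "Summarize what an inference provider does."}],
)
print(response.completion_message.content)
```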
inline          feat: Implement async job execution for torchtune training (#1437)  2025-04-14 08:59:11 -07:00
registry        fix: use torchao 0.8.0 for inference (#1925)  2025-04-10 13:39:20 -07:00
remote          chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985)  2025-04-17 06:50:40 -07:00
tests           refactor: move all llama code to models/llama out of meta reference (#1887)  2025-04-07 15:03:58 -07:00
utils           feat: Implement async job execution for torchtune training (#1437)  2025-04-14 08:59:11 -07:00
__init__.py     API Updates (#73)  2024-09-17 19:51:35 -07:00
datatypes.py    feat: add health to all providers through providers endpoint (#1418)  2025-04-14 11:59:36 +02:00
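The datatypes.py entry references #1418, which exposes per-provider health through the providers endpoint. A rough sketch of inspecting that from the client side follows, under the assumption that the llama_stack_client providers API and its field names match this period's releases.

```python
# Rough sketch, assumptions: a llama-stack server is running locally, and the
# client's providers API returns entries with provider_id, provider_type and,
# after #1418, a health field; attribute names may vary between releases.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed default port

for provider in client.providers.list():
    health = getattr(provider, "health", None)  # may be absent on older servers
    print(f"{provider.provider_id} ({provider.provider_type}): {health}")
```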