llama-stack / llama_stack / providers / remote / inference / nvidia
Latest commit 4205376653 by Matthew Farrellee (2025-04-17 06:50:40 -07:00):
chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985)
see https://build.nvidia.com/meta/llama-3_3-70b-instruct
__init__.py       add NVIDIA NIM inference adapter (#355)                                                       2024-11-23 15:59:00 -08:00
config.py         chore: move all Llama Stack types from llama-models to llama-stack (#1098)                    2025-02-14 09:10:59 -08:00
models.py         chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985)   2025-04-17 06:50:40 -07:00
NVIDIA.md         docs: Add NVIDIA platform distro docs (#1971)                                                  2025-04-17 05:54:30 -07:00
nvidia.py         fix: 100% OpenAI API verification for together and fireworks (#1946)                          2025-04-14 08:56:29 -07:00
openai_utils.py   refactor: move all llama code to models/llama out of meta reference (#1887)                   2025-04-07 15:03:58 -07:00
utils.py          style: remove prints in codebase (#1146)                                                      2025-02-18 19:41:37 -08:00
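The model added in the latest commit, meta/llama-3.3-70b-instruct, is hosted behind NVIDIA's OpenAI-compatible API (see the build.nvidia.com link above). Below is a minimal sketch of querying it directly with the openai client; the base URL, the NVIDIA_API_KEY environment variable name, and the prompt are assumptions for illustration and are not defined by this directory's code.

```python
# Minimal sketch: call meta/llama-3.3-70b-instruct through NVIDIA's
# OpenAI-compatible endpoint. Assumptions: the endpoint is
# https://integrate.api.nvidia.com/v1 and the API key is exported as
# NVIDIA_API_KEY; adjust both for your environment.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

# Send a single chat turn to the model named in the commit message.
response = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Summarize Llama Stack in one sentence."}],
    temperature=0.2,
    max_tokens=128,
)

print(response.choices[0].message.content)
```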