phoenix-oss/llama-stack-mirror
Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-06 05:59:13 +00:00
llama_stack/providers/remote/inference/groq
Latest commit f9ca441974 by Yuan Tang (2025-02-13 12:14:57 -05:00):
chore: Link to Groq docs in the warning message for preview model (#1060)
This should be `llama-3.2-3b` instead of `llama-3.2-3b-instruct`.
__init__.py     [#432] Add Groq Provider - chat completions (#609)                                          2025-01-03 08:27:49 -08:00
config.py       [#432] Add Groq Provider - chat completions (#609)                                          2025-01-03 08:27:49 -08:00
groq.py         chore: Link to Groq docs in the warning message for preview model (#1060)                   2025-02-13 12:14:57 -05:00
groq_utils.py   feat: Support tool calling for streaming chat completion in remote vLLM provider (#1063)    2025-02-12 06:17:21 -08:00
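The layout above follows the common llama-stack remote-provider pattern: `config.py` holds the provider's configuration and `__init__.py` exposes a factory that builds the adapter from it. A minimal sketch of what such a module might look like; the class name, field names, and default URL here are illustrative assumptions, not taken from the repository:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GroqConfig:
    """Illustrative config for a remote Groq inference provider (names assumed)."""

    api_key: Optional[str] = None       # credential for the Groq API
    url: str = "https://api.groq.com"   # base endpoint of the remote service


async def get_adapter_impl(config: GroqConfig):
    """Factory entry point of the kind typically exported from __init__.py."""
    # The real module would construct, initialize, and return the adapter here.
    raise NotImplementedError
```

The actual provider uses the project's own config base classes; this sketch only shows the shape of the config-plus-factory split implied by the file listing.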