llama-stack-mirror/llama_stack/providers/remote/inference/groq
Last commit: 2024-12-14 16:35:24 +11:00
File            Last commit message                             Date
__init__.py     Add Groq provider - chat completions            2024-12-14 11:25:22 +11:00
config.py       Add Groq provider - chat completions            2024-12-14 11:25:22 +11:00
groq.py         Move model_id above so warning actually works   2024-12-14 16:35:24 +11:00
groq_utils.py   Add Groq provider - chat completions            2024-12-14 11:25:22 +11:00
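
The layout appears to follow the usual split for a remote inference provider: config.py holds the connection settings, groq.py implements the chat-completions adapter, groq_utils.py carries the request/response conversion helpers, and __init__.py exposes the factory that wires them together. Below is a minimal single-file sketch of that split; every name in it (GroqConfig, GroqInferenceAdapter, get_adapter_impl, the default URL) is an assumption inferred from the file listing, not taken from the repository itself.

```python
# Hedged sketch of how the files in this directory appear to divide
# responsibilities; all names are assumptions, not actual llama-stack symbols.

from dataclasses import dataclass
from typing import Optional


@dataclass
class GroqConfig:
    """config.py: connection settings for the remote Groq endpoint (assumed shape)."""
    url: str = "https://api.groq.com"
    api_key: Optional[str] = None


class GroqInferenceAdapter:
    """groq.py: adapter that forwards chat-completion requests to Groq (assumed name)."""

    def __init__(self, config: GroqConfig) -> None:
        self.config = config

    async def chat_completion(self, model_id: str, messages: list[dict]) -> dict:
        # groq_utils.py would hold the request/response conversion helpers.
        # Per the commit message on groq.py, the real adapter checks model_id
        # (and emits its warning) before building the request.
        raise NotImplementedError("sketch only")


async def get_adapter_impl(config: GroqConfig, _deps: dict) -> GroqInferenceAdapter:
    """__init__.py: provider entry point that constructs the adapter (assumed signature)."""
    return GroqInferenceAdapter(config)
```

Keeping the Groq-specific wire-format conversions in groq_utils.py leaves groq.py focused on the adapter lifecycle and validation, which is presumably why the model_id check mentioned in the latest commit lives there.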