llama-stack-mirror/llama_stack/providers/remote/inference/fireworks
Last updated: 2025-03-11 10:42:52 -07:00

File          Latest commit                                                                          Date
__init__.py   Fix precommit check after moving to ruff (#927)                                        2025-02-02 06:46:45 -08:00
config.py     feat: add (openai, anthropic, gemini) providers via litellm (#1267)                    2025-02-25 22:07:33 -08:00
fireworks.py  fix: Use re-entrancy and concurrency safe context managers for provider data (#1498)   2025-03-08 22:56:30 -08:00
models.py     fix: remove Llama-3.2-1B-Instruct for fireworks                                        2025-03-11 10:42:52 -07:00