| Name | Last commit | Date |
|------|-------------|------|
| anthropic | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| bedrock | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| cerebras | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| databricks | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| fireworks | fix: remove Llama-3.2-1B-Instruct for fireworks (#1558) | 2025-03-11 11:19:29 -07:00 |
| gemini | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| groq | fix: register provider model name and HF alias in run.yaml (#1304) | 2025-02-27 16:39:23 -08:00 |
| nvidia | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| ollama | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
| openai | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| passthrough | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| runpod | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| sambanova | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| sample | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| tgi | fix: solve ruff B008 warnings (#1444) | 2025-03-06 16:48:35 -08:00 |
| together | feat: Add open benchmark template codegen (#1579) | 2025-03-12 11:12:08 -07:00 |
| vllm | fix: Swap to AsyncOpenAI client in remote vllm provider (#1459) | 2025-03-07 14:48:00 -05:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |
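Several rows above point at the same cleanup, "fix: solve ruff B008 warnings (#1444)". Ruff's B008 rule flags function calls in argument defaults, which are evaluated once at definition time and then shared by every call. The snippet below is a generic illustration of that pattern and its usual fix, not the actual diff from #1444; the function names are invented for the example.

```python
def default_headers() -> dict[str, str]:
    return {"Content-Type": "application/json"}


# B008 would flag this: the default is built once, at import time,
# and the same dict object is reused across all calls.
# def post(url: str, headers: dict[str, str] = default_headers()) -> None: ...


# Typical fix: default to None and construct the value inside the body.
def post(url: str, headers: dict[str, str] | None = None) -> None:
    headers = headers if headers is not None else default_headers()
    print(f"POST {url} with {headers}")
```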
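The vllm row refers to #1459, which swapped the remote vLLM provider over to the async OpenAI client. As a rough sketch of what calling a vLLM server through its OpenAI-compatible endpoint with `AsyncOpenAI` looks like (the base URL, model name, and API key below are placeholders, not llama-stack configuration):

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    # vLLM exposes an OpenAI-compatible API, typically served under /v1.
    client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    response = await client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": "Hello from the async client!"}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```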