Name | Last commit | Last updated
bedrock | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
cerebras | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
ci-tests | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00
dell | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
dev | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00
experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00
fireworks | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00
groq | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00
hf-endpoint | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
hf-serverless | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
meta-reference-gpu | feat: add batch inference API to llama stack inference (#1945) | 2025-04-12 11:41:12 -07:00
nvidia | feat: NVIDIA allow non-llama model registration (#1859) | 2025-04-24 17:13:33 -07:00
ollama | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
open-benchmark | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
passthrough | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
remote-vllm | docs: Add tips for debugging remote vLLM provider (#1992) | 2025-04-18 14:47:47 +02:00
sambanova | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00
tgi | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
together | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00
verification | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00
vllm-gpu | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00
watsonx | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00
__init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00
dependencies.json | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00
template.py | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00