| Name | Last commit | Date |
|------|-------------|------|
| bedrock | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| cerebras | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| ci-tests | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| dell | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| dev | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00 |
| fireworks | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| groq | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| hf-endpoint | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| hf-serverless | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| llama_api | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |
| meta-reference-gpu | docs: Fix missing --gpu all flag in Docker run commands (#2026) | 2025-04-25 12:17:31 -07:00 |
| nvidia | feat: Add NVIDIA NeMo datastore (#1852) | 2025-04-28 09:41:59 -07:00 |
| ollama | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| open-benchmark | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| passthrough | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| remote-vllm | docs: Add tips for debugging remote vLLM provider (#1992) | 2025-04-18 14:47:47 +02:00 |
| sambanova | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| tgi | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| together | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| verification | fix: 100% OpenAI API verification for together and fireworks (#1946) | 2025-04-14 08:56:29 -07:00 |
| vllm-gpu | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| watsonx | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| dependencies.json | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |
| template.py | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |