llama-stack/llama_stack/providers/utils
Ashwin Bharambe 928a39d17b
feat(providers): Groq now uses LiteLLM openai-compat (#1303)
Groq has never supported raw completions anyhow, so this makes it easier
to switch it to LiteLLM. Our entire test suite passes.

I also updated all the openai-compat providers so they work with API
keys passed from request headers via `provider_data`.
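
For reference, a minimal sketch of supplying a per-request Groq key through `provider_data`. The header name, JSON key, endpoint path, and port below reflect my understanding of the mechanism and are not verified in this PR; check them against your Llama Stack version:

```bash
# Sketch only: header name (X-LlamaStack-Provider-Data), key (groq_api_key),
# endpoint path, and port are assumptions, not confirmed by this PR.
curl http://localhost:8321/v1/inference/chat-completion \
  -H 'Content-Type: application/json' \
  -H 'X-LlamaStack-Provider-Data: {"groq_api_key": "gsk_..."}' \
  -d '{
        "model_id": "groq/llama-3.3-70b-versatile",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```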

## Test Plan

```bash
LLAMA_STACK_CONFIG=groq \
   pytest -s -v tests/client-sdk/inference/test_text_inference.py \
   --inference-model=groq/llama-3.3-70b-versatile --vision-inference-model=""
```

Also tested the openai, anthropic, and gemini providers. No regressions.
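
A sketch of the analogous invocation for one of those providers; the config name and model id below are placeholders, not values confirmed by this PR:

```bash
# Hypothetical: "openai" config name and "openai/gpt-4o" model id are
# illustrative placeholders for whichever provider you are testing.
LLAMA_STACK_CONFIG=openai \
   pytest -s -v tests/client-sdk/inference/test_text_inference.py \
   --inference-model=openai/gpt-4o --vision-inference-model=""
```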
2025-02-27 13:16:50 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| bedrock | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| common | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| datasetio | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| inference | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| kvstore | precommit | 2025-02-19 22:37:41 -08:00 |
| memory | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| scoring | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| telemetry | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |