llama-stack-mirror/llama_stack
Ashwin Bharambe 928a39d17b
feat(providers): Groq now uses LiteLLM openai-compat (#1303)
Groq has never supported raw completions anyhow, so this makes it easier to switch it to LiteLLM. The entire test suite passes.

I also updated all the openai-compat providers so they work with API keys passed
from request headers via `provider_data`.
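
For reference, here is a minimal sketch of how a per-request key can be supplied from the client side. The `groq_api_key` field name, the base URL, and the `provider_data` constructor argument are assumptions for illustration; check the Groq provider's config for the exact key it reads.

```python
# Sketch: supply a provider API key per request via provider_data instead of
# putting it in the run config. The client is assumed to forward provider_data
# as the provider-data request header that the server-side providers read.
import os

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(
    base_url="http://localhost:8321",  # assumed local stack endpoint
    provider_data={"groq_api_key": os.environ["GROQ_API_KEY"]},  # key name is an assumption
)

response = client.inference.chat_completion(
    model_id="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.completion_message.content)
```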

## Test Plan

```bash
LLAMA_STACK_CONFIG=groq \
   pytest -s -v tests/client-sdk/inference/test_text_inference.py \
   --inference-model=groq/llama-3.3-70b-versatile --vision-inference-model=""
```

Also tested the openai, anthropic, and gemini providers. No regressions.
2025-02-27 13:16:50 -08:00
| Path | Last commit | Date |
|------|-------------|------|
| apis | ci: add mypy for static type checking (#1101) | 2025-02-21 13:15:40 -08:00 |
| cli | fix(cli): Missing default for --image-type in stack run command (#1274) | 2025-02-26 12:23:44 -08:00 |
| distribution | fix(test): update client-sdk tests to handle tool format parametrization better (#1287) | 2025-02-26 21:16:00 -08:00 |
| models/llama | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| providers | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| scripts | ci: add mypy for static type checking (#1101) | 2025-02-21 13:15:40 -08:00 |
| strong_typing | Ensure that deprecations for fields follow through to OpenAPI | 2025-02-19 13:54:04 -08:00 |
| templates | feat(providers): Groq now uses LiteLLM openai-compat (#1303) | 2025-02-27 13:16:50 -08:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| schema_utils.py | ci: add mypy for static type checking (#1101) | 2025-02-21 13:15:40 -08:00 |